* Re: [meta-virtualization][PATCH] Adding k3s recipe
       [not found] <20200821205529.29901-1-erik.jansson@axis.com>
@ 2020-09-21  8:38 ` Joakim Roubert
  2020-09-21 11:11   ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-09-21  8:38 UTC (permalink / raw)
  To: meta-virtualization

On 2020-08-21 22:55, Erik Jansson wrote:
> Signed-off-by: Erik Jansson <erik.jansson@axis.com>

Bruce, what is the current status of the k3s activities? (I know you
were busy with some other things last month.)


BR,

/Joakim



* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-21  8:38 ` [meta-virtualization][PATCH] Adding k3s recipe Joakim Roubert
@ 2020-09-21 11:11   ` Bruce Ashfield
  2020-09-21 13:15     ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-09-21 11:11 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

It is close. I had to abandon one approach, since it grew more complex
than if I just let a chunk of functionality be duplicated.

I'm at this again this week, and if I can't sort it out, I'll merge
the core recipes and start factoring in tree (I'm just worried about
people assuming what appears in tree is the final version).

Cheers,

Bruce

On Mon, Sep 21, 2020 at 4:38 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-08-21 22:55, Erik Jansson wrote:
> > Signed-off-by: Erik Jansson <erik.jansson@axis.com>
>
> Bruce, what is the current status of the k3s activities? (I know you
> were busy with some other things last month.)
>
>
> BR,
>
> /Joakim
>
>
> 
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II


* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-21 11:11   ` Bruce Ashfield
@ 2020-09-21 13:15     ` Joakim Roubert
  2020-09-24 14:02       ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-09-21 13:15 UTC (permalink / raw)
  To: meta-virtualization

On 2020-09-21 13:11, Bruce Ashfield wrote:
> It is close. I had to abandon one approach, since it grew more
> complex than if I just let a chunk of functionality be duplicated.
> 
> I'm at this again this week

Excellent news!

> and if I can't sort it out, I'll merge the core recipes and start
> factoring in tree (I'm just worried about people assuming what
> appears in tree is the final version).

But with those words you have just informed everybody on the list about
it, so that should make things clearer for the community!

BR,

/Joakim
-- 
Joakim Roubert
Senior Engineer

Axis Communications AB
Emdalavägen 14, SE-223 69 Lund, Sweden
Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
Fax: +46 46 13 61 30, www.axis.com



* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-21 13:15     ` Joakim Roubert
@ 2020-09-24 14:02       ` Bruce Ashfield
  2020-09-24 14:46         ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-09-24 14:02 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Mon, Sep 21, 2020 at 9:22 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-09-21 13:11, Bruce Ashfield wrote:
> > It is close. I had to abandon one approach, since it grew more
> > complex than if I just let a chunk of functionality be duplicated.
> >
> > I'm at this again this week
>
> Excellent news!
>
> > and if I can't sort it out, I'll merge the core recipes and start
> > factoring in tree (I'm just worried about people assuming what
> > appears in tree is the final version).
>
> But with those words you have just informed everybody on the list about
> it, so that should make things clearer for the community!

That I did!

So I've decided to do this in a couple of phases, since I keep making
the bbclass more complex than it needs to be.

I've gone back through the patches and have some questions/comments.
Would you be willing to tweak things based on those comments? I realize
it's been quite the delay from submission to now, so I can work through
the issues myself if you don't have the cycles.

Bruce

>
> BR,
>
> /Joakim
> --
> Joakim Roubert
> Senior Engineer
>
> Axis Communications AB
> Emdalavägen 14, SE-223 69 Lund, Sweden
> Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> Fax: +46 46 13 61 30, www.axis.com
>
>
> 
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II


* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-24 14:02       ` Bruce Ashfield
@ 2020-09-24 14:46         ` Joakim Roubert
  2020-09-24 15:41           ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-09-24 14:46 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-09-24 16:02, Bruce Ashfield wrote:
> 
> So I've decided to do this in a couple phases, since I keep making the
> bbclass more complex that it needs to be.

Sounds like a good plan! For what we know, concepts as "Big Bang" only
was successful once in the history of Universe, and it was not within
the field of software development. ;-)

> I've gone back through the patches, and have some questions/comments.
> Would you be willing to tweak things basd on those comments ? I
> realize it's been quite the delay from submission to this, so I can
> work through issues if you don't have the cycles.

Yes, I am happy to help in whatever way I can. Erik is a colleague of
mine who is currently on parental leave, so I will try to fill his shoes
here for now. I have plenty of OpenEmbedded k3s build experience with
these recipes too, so let's do this!

BR,

/Joakim
-- 
Joakim Roubert
Senior Engineer

Axis Communications AB
Emdalavägen 14, SE-223 69 Lund, Sweden
Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
Fax: +46 46 13 61 30, www.axis.com



* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-24 14:46         ` Joakim Roubert
@ 2020-09-24 15:41           ` Bruce Ashfield
  2020-09-25  6:20             ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-09-24 15:41 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Thu, Sep 24, 2020 at 10:46 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-09-24 16:02, Bruce Ashfield wrote:
> >
> > So I've decided to do this in a couple phases, since I keep making the
> > bbclass more complex that it needs to be.
>
> Sounds like a good plan! For what we know, concepts as "Big Bang" only
> was successful once in the history of Universe, and it was not within
> the field of software development. ;-)
>
> > I've gone back through the patches, and have some questions/comments.
> > Would you be willing to tweak things basd on those comments ? I
> > realize it's been quite the delay from submission to this, so I can
> > work through issues if you don't have the cycles.
>
> Yes, I am happy to help in whatever way I can. Erik is a colleague of
> mine who is currently on parental leave, so I will try to fill his shoes
> here for now. I have plenty of OpenEmbedded k3s build experience with
> these recipes too, so let's do this!

By phases, I meant some smaller changes to the series while I work a
bit more through the bbclass and DISTRO feature flag efforts.

I can say immediately that we have a problem with the dependencies,
and we need to sort those out.

There are at least two utilities that are shared with the main
kubernetes recipe and not in our existing layer dependencies. We
shouldn't expand our dependencies more, so we have to look at importing
a variant of the recipes to meta-virt, or seeing about getting the
recipes into a more common location. In particular, I'm talking about
ipset and upx, and I'm still looking for more.

As for the service files, the k3s.service is different from the latest
one on the k3s 1.18 branch, and there's a patch that modifies it.
Although it is tempting not to use /usr/local/bin, for maintenance we'd
be better off just installing where things are expected. Was something
broken when /usr/local/bin was used?

As for the other .service files, we have no idea where they come from
just by looking at the series, so they need some sort of comment (by
looking at the repo, you can tell they are variants of the k3s.service
file, but that needs to be clarified).

The patches themselves to the code base (not just the overall git
commit) should have Signed-off-by: and Upstream-Status: tags, so they
can be maintained as I bump through releases.
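Something along these lines (the names here are placeholders, not from
the actual patch) is what we typically expect at the top of each patch
file:

```
From: Jane Doe <jane@example.com>
Subject: [PATCH] Find host-local in /usr/libexec

Upstream-Status: Inappropriate [embedded specific]
Signed-off-by: Jane Doe <jane@example.com>
---
```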

The scripts being created by the recipe need license headers and SPDX
tags. Boring, but it is something we want to make sure is in new
files. Also, at a glance, we need to make sure the dependencies of the
scripts are captured in the DEPENDS of the recipe (i.e. I see 'ip'
being called; make sure that is captured).
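As a rough sketch (assuming 'ip' comes from iproute2; tools invoked at
runtime belong in RDEPENDS rather than the build-time DEPENDS), the
recipe could add:

```
# 'ip' is called by the helper scripts at runtime and is provided by
# iproute2, so pull it into the image alongside this package.
RDEPENDS_${PN} += "iproute2"
```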

As for the networking, what we have in the submission is specific to
one setup; that's the part I'm really trying to unify. We need to
document what is expected in the CNI configuration, and then control
it via a distro/image flag such that it won't clobber/conflict with
other CNI users. I can take care of the distro part, but the
documentation would ideally be in a README that comes along with the
recipe.
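For illustration only, with a made-up 'k3s-default-cni' feature name,
that guard could look something like:

```
# Only install the default CNI configuration when the distro opts in,
# so that other CNI users are not clobbered.
do_install_append() {
    if ${@bb.utils.contains('DISTRO_FEATURES', 'k3s-default-cni', 'true', 'false', d)}; then
        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf" \
            "${D}${sysconfdir}/cni/net.d/10-containerd-net.conf"
    fi
}
```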

As I said, I'm still working through some of the factoring out and
de-duplication, but there are some things in there that are best done
on your end versus mine.

Bruce




>
> BR,
>
> /Joakim
> --
> Joakim Roubert
> Senior Engineer
>
> Axis Communications AB
> Emdalavägen 14, SE-223 69 Lund, Sweden
> Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> Fax: +46 46 13 61 30, www.axis.com
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II


* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-24 15:41           ` Bruce Ashfield
@ 2020-09-25  6:20             ` Joakim Roubert
  2020-09-25 13:12               ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-09-25  6:20 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-09-24 17:41, Bruce Ashfield wrote:
> 
> There are at least two utilities that are shared with the main
> kubernetes recipe and not in our existing layer dependencies. We
> shouldn't expand our dependencies more, so we have to look at importing
> a variant of the recipes to meta-virt, or seeing about getting the
> recipes into a more common location. In particular, I'm talking
> about ipset and upx, and I'm still looking for more.

Ah, yes.

> As for the service files, the k3s.service is different from the latest
> one on the k3s 1.18 branch, and there's a patch that modifies it.
> Although it is tempting not to use /usr/local/bin, for maintenance we'd
> be better off just installing where things are expected. Was something
> broken when /usr/local/bin was used?

The target systems I work on prohibit installations to /usr/local, so
from my perspective something would in fact be broken if /usr/local/bin
were to be used. Nevertheless, configuring the recipe to use
/usr/${VARIABLE}/bin with VARIABLE set to "local" by default would
perhaps be an option that lets everybody have their cake and eat it too?

> As for the other .service files, we have no idea where they come from
> just by looking at the series, so they need some sort of comment
> (by looking at the repo, you can tell they are variants of the
> k3s.service file, but that needs to be clarified).

I will investigate.

> The patches themselves to the code base (not just the overall git
> commit) should have Signed-off-by: and Upstream-Status: tags, so
> they can be maintained as I bump through releases.

I will see to that.

> The scripts being created by the recipe need license headers and
> SPDX tags. Boring, but it is something we want to make sure is in
> new files.

Absolutely!

> Also, at a glance, we need to make sure the dependencies of the
> scripts are captured in the DEPENDS of the recipe (i.e. I see 'ip'
> being called; make sure that is captured).

Ah, yes, I will go through that.

> As for the networking, what we have in the submission is specific to
> one setup; that's the part I'm really trying to unify. We need
> to document what is expected in the CNI configuration, and then
> control it via a distro/image flag such that it won't
> clobber/conflict with other CNI users. I can take care of the distro
> part, but the documentation would ideally be in a README that comes
> along with the recipe.

That also sounds like a good plan. I am not really sure what text to
write in that README file, though, but I guess that is something that
can be created iteratively.

> As I said, I'm still working through some of the factoring out and
> de-duplication, but there are some things in there that are best done
> on your end versus mine.

Excellent!

BR,

/Joakim


* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-25  6:20             ` Joakim Roubert
@ 2020-09-25 13:12               ` Bruce Ashfield
  2020-09-25 13:50                 ` Joakim Roubert
       [not found]                 ` <16380B0CA000AB98.28124@lists.yoctoproject.org>
  0 siblings, 2 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-09-25 13:12 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Fri, Sep 25, 2020 at 2:20 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-09-24 17:41, Bruce Ashfield wrote:
> >
> > There are at least two utilities that are shared with the main
> > kubernetes recipe and not in our existing layer dependencies. We
> > shouldn't expand our dependencies more, so we have to look at importing
> > a variant of the recipes to meta-virt, or seeing about getting the
> > recipes into a more common location. In particular, I'm talking
> > about ipset and upx, and I'm still looking for more.
>
> Ah, yes.
>
> > As for the service files, the k3s.service is different from the latest
> > one on the k3s 1.18 branch, and there's a patch that modifies it.
> > Although it is tempting not to use /usr/local/bin, for maintenance we'd
> > be better off just installing where things are expected. Was something
> > broken when /usr/local/bin was used?
>
> The target systems I work on prohibit installations to /usr/local, so
> from my perspective something would in fact be broken if /usr/local/bin
> were to be used. Nevertheless, configuring the recipe to use
> /usr/${VARIABLE}/bin with VARIABLE set to "local" by default would
> perhaps be an option that lets everybody have their cake and eat it too?

Absolutely. I'm always in favour of making these configurable, so we
can let users customize as they see fit.

Alternatively, rather than substituting just the 'local', the path up to
'bin' could be controlled.  And by that, I mean a variable local to the
recipe, like you have above.

That allows the use of $bindir by default, and also covers the case
automatically where someone has changed $bindir to /usr/local/bin.

The local variable allows yet another override if the global $bindir
hasn't been changed, and you just want to modify it for this recipe
(which is actually what I think will be most common).
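A sketch of what I mean (the variable name here is just an example):

```
# Recipe-local install prefix: defaults to the global ${bindir}
# (normally /usr/bin), automatically follows any global ${bindir}
# change, and can still be overridden for just this recipe,
# e.g. K3S_BINDIR = "/usr/local/bin" in a bbappend.
K3S_BINDIR ?= "${bindir}"

do_install() {
    install -d "${D}${K3S_BINDIR}"
    install -m 0755 "${S}/src/import/dist/artifacts/k3s" "${D}${K3S_BINDIR}/k3s"
}
```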

>
> > As for the other .service files, we have no idea where they come from
> > just by looking at the series, so they need some sort of comment
> > (by looking at the repo, you can tell they are variants of the
> > k3s.service file, but that needs to be clarified).
>
> I will investigate.
>
> > The patches themselves to the code base (not just the overall git
> > commit) should have Signed-off-by: and Upstream-Status: tags, so
> > they can be maintained as I bump through releases.
>
> I will see to that.
>
> > The scripts being created by the recipe need license headers and
> > SPDX tags. Boring, but it is something we want to make sure is in
> > new files.
>
> Absolutely!

I just had to go through the License / SPDX header addition for
hundreds of files, so I know what a pain this can be. Thanks!

>
> > Also, at a glance, we need to make sure the dependencies of the
> > scripts are captured in the DEPENDS of the recipe (i.e. I see 'ip'
> > being called; make sure that is captured).
>
> Ah, yes, I will go through that.
>
> > As for the networking, what we have in the submission is specific to
> > one setup; that's the part I'm really trying to unify. We need
> > to document what is expected in the CNI configuration, and then
> > control it via a distro/image flag such that it won't
> > clobber/conflict with other CNI users. I can take care of the distro
> > part, but the documentation would ideally be in a README that comes
> > along with the recipe.
>
> That also sounds like a good plan. I am not really sure what text to
> write in that README file, though, but I guess that is something that
> can be created iteratively.

Absolutely. Just a stream of thoughts is fine for now, with a quick
overview, and perhaps a guideline on how to build a single-node
system? With a few notes about what is configured out of the box
and what we expect the end user to continue to configure / customize?

That would be more than enough as a start.

Bruce

>
> > As I said, I'm still working through some of the factoring out and
> > de-duplication, but there are some things in there that are best done
> > on your end versus mine.
>
> Excellent!
>
> BR,
>
> /Joakim



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II


* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-25 13:12               ` Bruce Ashfield
@ 2020-09-25 13:50                 ` Joakim Roubert
       [not found]                 ` <16380B0CA000AB98.28124@lists.yoctoproject.org>
  1 sibling, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-09-25 13:50 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-09-25 15:12, Bruce Ashfield wrote:
> 
> Absolutely. I'm always in favour of making these configurable, so we
> can let users customize as they see fit.

Wonderful.

I finalized an (IMO) quite nice solution during the day, where
${exec_prefix} would be the fixed starting point and /bin the fixed
ending, with what is in between configurable. But I like your suggestion
with even more flexibility better and will update according to that.

> I just had to go through the License / SPDX header addition for
> hundreds of files, so I know what a pain this can be. Thanks!

No worries, I find those things very important too.

> Absolutely. Just a stream of thoughts is fine for now, with a quick
> overview, and perhaps a guideline on how to build a single-node
> system? With a few notes about what is configured out of the box
> and what we expect the end user to continue to configure / customize?

Ok!

I am running out of time here today, so even though I have ticked off
several of the tasks on the list, it will be next week before I get back
with an update!

BR,

/Joakim



* Re: [meta-virtualization][PATCH] Adding k3s recipe
       [not found]                 ` <16380B0CA000AB98.28124@lists.yoctoproject.org>
@ 2020-09-28 13:48                   ` Joakim Roubert
  2020-09-29 19:58                     ` Bruce Ashfield
  2020-10-13 12:22                     ` [meta-virtualization][PATCH] " Bruce Ashfield
  0 siblings, 2 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-09-28 13:48 UTC (permalink / raw)
  To: meta-virtualization

Signed-off-by: Joakim Roubert <joakimr@axis.com>
---
  recipes-containers/k3s/README.md              |  26 +++++
  ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
  .../k3s/k3s/cni-containerd-net.conf           |  24 +++++
  recipes-containers/k3s/k3s/k3s-agent          | 100 ++++++++++++++++++
  recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
  recipes-containers/k3s/k3s/k3s-clean          |  25 +++++
  recipes-containers/k3s/k3s/k3s.service        |  27 +++++
  recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
  8 files changed, 330 insertions(+)
  create mode 100644 recipes-containers/k3s/README.md
  create mode 100644 recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
  create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
  create mode 100755 recipes-containers/k3s/k3s/k3s-agent
  create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
  create mode 100755 recipes-containers/k3s/k3s/k3s-clean
  create mode 100644 recipes-containers/k3s/k3s/k3s.service
  create mode 100644 recipes-containers/k3s/k3s_git.bb

diff --git a/recipes-containers/k3s/README.md b/recipes-containers/k3s/README.md
new file mode 100644
index 0000000..8a0a994
--- /dev/null
+++ b/recipes-containers/k3s/README.md
@@ -0,0 +1,26 @@
+# k3s: Lightweight Kubernetes
+
+Rancher's [k3s](https://k3s.io/), available under
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
+lightweight Kubernetes suitable for small/edge devices. There are use cases
+where the
+[installation procedures provided by Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
+are not ideal but a bitbake-built version is what is needed. Only a few
+mods to the [k3s source code](https://github.com/rancher/k3s) are needed to
+accomplish that.
+
+## CNI
+By default, K3s will run with flannel as the CNI, using VXLAN as the default
+backend. It is both possible to change the flannel backend and to change from
+flannel to another CNI.
+
+Please see https://rancher.com/docs/k3s/latest/en/installation/network-options/
+for further k3s networking details.
+
+## Configure and run a k3s agent
+The convenience script `k3s-agent` can be used to set up a k3s agent (service):
+
+    k3s-agent -t <token> -s https://<master>:6443
+
+(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
+k3s master.)
diff --git a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
new file mode 100644
index 0000000..8205d73
--- /dev/null
+++ b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
@@ -0,0 +1,27 @@
+From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
+From: Erik Jansson <erikja@axis.com>
+Date: Wed, 16 Oct 2019 15:07:48 +0200
+Subject: [PATCH] Finding host-local in /usr/libexec
+
+Upstream-Status: Inappropriate [embedded specific]
+Signed-off-by: <erikja@axis.com>
+---
+ pkg/agent/config/config.go | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
+index b4296f360a..6af9dab895 100644
+--- a/pkg/agent/config/config.go
++++ b/pkg/agent/config/config.go
+@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
+                return nil, err
+        }
+
+-      hostLocal, err := exec.LookPath("host-local")
++      hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
+        if err != nil {
+                return nil, errors.Wrapf(err, "failed to find host-local")
+        }
+--
+2.11.0
+
diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf b/recipes-containers/k3s/k3s/cni-containerd-net.conf
new file mode 100644
index 0000000..ca434d6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
@@ -0,0 +1,24 @@
+{
+  "cniVersion": "0.4.0",
+  "name": "containerd-net",
+  "plugins": [
+    {
+      "type": "bridge",
+      "bridge": "cni0",
+      "isGateway": true,
+      "ipMasq": true,
+      "promiscMode": true,
+      "ipam": {
+        "type": "host-local",
+        "subnet": "10.88.0.0/16",
+        "routes": [
+          { "dst": "0.0.0.0/0" }
+        ]
+      }
+    },
+    {
+      "type": "portmap",
+      "capabilities": {"portMappings": true}
+    }
+  ]
+}
diff --git a/recipes-containers/k3s/k3s/k3s-agent b/recipes-containers/k3s/k3s/k3s-agent
new file mode 100755
index 0000000..1bb4c78
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent
@@ -0,0 +1,100 @@
+#!/bin/sh -eu
+# SPDX-License-Identifier: Apache-2.0
+
+ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
+
+usage() {
+       echo "
+USAGE:
+    ${0##*/} [OPTIONS]
+OPTIONS:
+    --token value, -t value             Token to use for authentication [\$K3S_TOKEN]
+    --token-file value                  Token file to use for authentication [\$K3S_TOKEN_FILE]
+    --server value, -s value            Server to connect to [\$K3S_URL]
+    --node-name value                   Node name [\$K3S_NODE_NAME]
+    --resolv-conf value                 Kubelet resolv.conf file [\$K3S_RESOLV_CONF]
+    --cluster-secret value              Shared secret used to bootstrap a cluster [\$K3S_CLUSTER_SECRET]
+    -h                                  print this
+"
+}
+
+[ $# -gt 0 ] || {
+       usage
+       exit
+}
+
+case $1 in
+       -*)
+               ;;
+       *)
+               usage
+               exit 1
+               ;;
+esac
+
+rm -f $ENV_CONF
+mkdir -p ${ENV_CONF%/*}
+echo [Service] > $ENV_CONF
+
+while getopts "t:s:-:h" opt; do
+       case $opt in
+               h)
+                       usage
+                       exit
+                       ;;
+               t)
+                       VAR_NAME=K3S_TOKEN
+                       ;;
+               s)
+                       VAR_NAME=K3S_URL
+                       ;;
+               -)
+                       [ $# -ge $OPTIND ] || {
+                               usage
+                               exit 1
+                       }
+                       opt=$OPTARG
+                       eval OPTARG='$'$OPTIND
+                       OPTIND=$(($OPTIND + 1))
+                       case $opt in
+                               token)
+                                       VAR_NAME=K3S_TOKEN
+                                       ;;
+                               token-file)
+                                       VAR_NAME=K3S_TOKEN_FILE
+                                       ;;
+                               server)
+                                       VAR_NAME=K3S_URL
+                                       ;;
+                               node-name)
+                                       VAR_NAME=K3S_NODE_NAME
+                                       ;;
+                               resolv-conf)
+                                       VAR_NAME=K3S_RESOLV_CONF
+                                       ;;
+                               cluster-secret)
+                                       VAR_NAME=K3S_CLUSTER_SECRET
+                                       ;;
+                               help)
+                                       usage
+                                       exit
+                                       ;;
+                               *)
+                                       usage
+                                       exit 1
+                                       ;;
+                       esac
+                       ;;
+               *)
+                       usage
+                       exit 1
+                       ;;
+       esac
+    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
+done
+
+chmod 0644 $ENV_CONF
+rm -rf /var/lib/rancher/k3s/agent
+systemctl daemon-reload
+systemctl restart k3s-agent
+systemctl enable k3s-agent.service
diff --git a/recipes-containers/k3s/k3s/k3s-agent.service b/recipes-containers/k3s/k3s/k3s-agent.service
new file mode 100644
index 0000000..9f9016d
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent.service
@@ -0,0 +1,26 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes Agent
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=control-group
+Delegate=yes
+LimitNOFILE=infinity
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s agent
+ExecStopPost=/usr/local/bin/k3s-clean
+
diff --git a/recipes-containers/k3s/k3s/k3s-clean b/recipes-containers/k3s/k3s/k3s-clean
new file mode 100755
index 0000000..8eff829
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-clean
@@ -0,0 +1,25 @@
+#!/bin/sh -eu
+# SPDX-License-Identifier: Apache-2.0
+do_unmount() {
+       [ $# -eq 2 ] || return
+       local mounts=
+       while read ignore mount ignore; do
+               case $mount in
+                       $1/*|$2/*)
+                               mounts="$mount $mounts"
+                               ;;
+               esac
+       done </proc/self/mounts
+       [ -z "$mounts" ] || umount $mounts
+}
+
+do_unmount /run/k3s /var/lib/rancher/k3s
+
+ip link show | grep 'master cni0' | while read ignore iface ignore; do
+    iface=${iface%%@*}
+    [ -z "$iface" ] || ip link delete $iface
+done
+
+ip link delete cni0
+ip link delete flannel.1
+rm -rf /var/lib/cni/
diff --git a/recipes-containers/k3s/k3s/k3s.service b/recipes-containers/k3s/k3s/k3s.service
new file mode 100644
index 0000000..34c7a80
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s.service
@@ -0,0 +1,27 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=process
+Delegate=yes
+# Having non-zero Limit*s causes performance problems due to accounting overhead
+# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=1048576
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s server
+
diff --git a/recipes-containers/k3s/k3s_git.bb b/recipes-containers/k3s/k3s_git.bb
new file mode 100644
index 0000000..cfc2c64
--- /dev/null
+++ b/recipes-containers/k3s/k3s_git.bb
@@ -0,0 +1,75 @@
+SUMMARY = "Production-Grade Container Scheduling and Management"
+DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant Kubernetes."
+HOMEPAGE = "https://k3s.io/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
+PV = "v1.18.9+k3s1-dirty"
+
+SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
+           file://k3s.service \
+           file://k3s-agent.service \
+           file://k3s-agent \
+           file://k3s-clean \
+           file://cni-containerd-net.conf \
+           file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
+          "
+SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
+SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
+
+inherit go
+inherit goarch
+inherit systemd
+
+PACKAGECONFIG = ""
+PACKAGECONFIG[upx] = ",,upx-native"
+GO_IMPORT = "import"
+GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
+                    -X github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s', d, 1)[:8]} \
+                    -w -s \
+                   "
+BIN_PREFIX ?= "${exec_prefix}/local"
+
+do_compile() {
+        export GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
+        export CGO_ENABLED="1"
+        export GOFLAGS="-mod=vendor"
+        cd ${S}/src/import
+        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}" -o ./dist/artifacts/k3s ./cmd/server/main.go
+        # Use UPX if it is enabled (and thus exists) to compress binary
+        if command -v upx > /dev/null 2>&1; then
+                upx -9 ./dist/artifacts/k3s
+        fi
+}
+do_install() {
+        install -d "${D}${BIN_PREFIX}/bin"
+        install -m 755 "${S}/src/import/dist/artifacts/k3s" "${D}${BIN_PREFIX}/bin"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
+        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
+        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf" "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
+        if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+                install -D -m 0644 "${WORKDIR}/k3s.service" "${D}${systemd_system_unitdir}/k3s.service"
+                install -D -m 0644 "${WORKDIR}/k3s-agent.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                sed -i "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g" "${D}${systemd_system_unitdir}/k3s.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                install -m 755 "${WORKDIR}/k3s-agent" "${D}${BIN_PREFIX}/bin"
+        fi
+}
+
+PACKAGES =+ "${PN}-server ${PN}-agent"
+
+SYSTEMD_PACKAGES = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server ${PN}-agent','',d)}"
+SYSTEMD_SERVICE_${PN}-server = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
+SYSTEMD_SERVICE_${PN}-agent = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
+SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
+
+FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
+
+RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2 ipset virtual/containerd"
+RDEPENDS_${PN}-server = "${PN}"
+RDEPENDS_${PN}-agent = "${PN}"
+
+RCONFLICTS_${PN} = "kubectl"
+
+INHIBIT_PACKAGE_STRIP = "1"
+INSANE_SKIP_${PN} += "ldflags already-stripped"
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 73+ messages in thread
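The do_install() task in the recipe above points the Exec* lines of the shipped
unit files at ${BIN_PREFIX}/bin with a single sed expression. A standalone
sketch of that substitution, applied to sample unit lines on stdin instead of
editing files in place (the /opt/k3s prefix here is purely illustrative; the
recipe defaults BIN_PREFIX to ${exec_prefix}/local):

```shell
#!/bin/sh
# Stand-in for the recipe's ${BIN_PREFIX}; chosen only for illustration.
BIN_PREFIX=/opt/k3s

rewrite_exec_paths() {
    # Same expression as the recipe's do_install(): any Exec* directive
    # whose command path ends in "k3s" is rewritten to ${BIN_PREFIX}/bin,
    # keeping the directive name, the "k3s" basename, and any arguments.
    sed "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g"
}

printf '%s\n' \
    'ExecStart=/usr/local/bin/k3s server' \
    'ExecStopPost=/usr/local/bin/k3s-clean' | rewrite_exec_paths
# ExecStart=/opt/k3s/bin/k3s server
# ExecStopPost=/opt/k3s/bin/k3s-clean
```

Note the greedy back-reference `\(.*\)\(k3s\)` anchors on the last "k3s" in the
path, so suffixes such as "-clean" and trailing arguments survive the rewrite.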

* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-28 13:48                   ` Joakim Roubert
@ 2020-09-29 19:58                     ` Bruce Ashfield
  2020-09-30  8:12                       ` Joakim Roubert
       [not found]                       ` <1639818C3E50A226.8589@lists.yoctoproject.org>
  2020-10-13 12:22                     ` [meta-virtualization][PATCH] " Bruce Ashfield
  1 sibling, 2 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-09-29 19:58 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

Thanks for the updated series, see some comments inline.

On Mon, Sep 28, 2020 at 9:49 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> Signed-off-by: Joakim Roubert <joakimr@axis.com>
> ---
>   recipes-containers/k3s/README.md              |  26 +++++
>   ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
>   .../k3s/k3s/cni-containerd-net.conf           |  24 +++++
>   recipes-containers/k3s/k3s/k3s-agent          | 100 ++++++++++++++++++
>   recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
>   recipes-containers/k3s/k3s/k3s-clean          |  25 +++++
>   recipes-containers/k3s/k3s/k3s.service        |  27 +++++
>   recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
>   8 files changed, 330 insertions(+)
>   create mode 100644 recipes-containers/k3s/README.md
>   create mode 100644 recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
>   create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
>   create mode 100755 recipes-containers/k3s/k3s/k3s-agent
>   create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
>   create mode 100755 recipes-containers/k3s/k3s/k3s-clean
>   create mode 100644 recipes-containers/k3s/k3s/k3s.service
>   create mode 100644 recipes-containers/k3s/k3s_git.bb
>
> diff --git a/recipes-containers/k3s/README.md b/recipes-containers/k3s/README.md
> new file mode 100644
> index 0000000..8a0a994
> --- /dev/null
> +++ b/recipes-containers/k3s/README.md
> @@ -0,0 +1,26 @@
> +# k3s: Lightweight Kubernetes
> +
> +Rancher's [k3s](https://k3s.io/), available under
> +[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
> +lightweight Kubernetes suitable for small/edge devices. There are use cases
> +where the
> +[installation procedures provided by Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
> +are not ideal but a bitbake-built version is what is needed. And only a few
> +mods to the [k3s source code](https://github.com/rancher/k3s) are needed to
> +accomplish that.
> +
> +## CNI
> +By default, K3s will run with flannel as the CNI, using VXLAN as the default
> +backend. It is both possible to change the flannel backend and to change from
> +flannel to another CNI.
> +
> +Please see https://rancher.com/docs/k3s/latest/en/installation/network-options/
> +for further k3s networking details.
> +
> +## Configure and run a k3s agent
> +The convenience script `k3s-agent` can be used to set up a k3s agent (service):
> +
> +    k3s-agent -t <token> -s https://<master>:6443
> +
> +(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
> +k3s master.)

Thanks for the README. It's a good start and pretty much all we need for now.
I'll be doing some build and runtime testing and can do some tweaks as required.

> diff --git a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> new file mode 100644
> index 0000000..8205d73
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> @@ -0,0 +1,27 @@
> +From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
> +From: Erik Jansson <erikja@axis.com>
> +Date: Wed, 16 Oct 2019 15:07:48 +0200
> +Subject: [PATCH] Finding host-local in /usr/libexec
> +
> +Upstream-status: Inappropriate [embedded specific]
> +Signed-off-by: <erikja@axis.com>
> +---
> + pkg/agent/config/config.go | 2 +-
> + 1 file changed, 1 insertion(+), 1 deletion(-)
> +
> +diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
> +index b4296f360a..6af9dab895 100644
> +--- a/pkg/agent/config/config.go
> ++++ b/pkg/agent/config/config.go
> +@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
> +                return nil, err
> +        }
> +
> +-      hostLocal, err := exec.LookPath("host-local")
> ++      hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
> +        if err != nil {
> +                return nil, errors.Wrapf(err, "failed to find host-local")
> +        }
> +--
> +2.11.0
> +
> diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> new file mode 100644
> index 0000000..ca434d6
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> @@ -0,0 +1,24 @@
> +{
> +  "cniVersion": "0.4.0",
> +  "name": "containerd-net",
> +  "plugins": [
> +    {
> +      "type": "bridge",
> +      "bridge": "cni0",
> +      "isGateway": true,
> +      "ipMasq": true,
> +      "promiscMode": true,
> +      "ipam": {
> +        "type": "host-local",
> +        "subnet": "10.88.0.0/16",
> +        "routes": [
> +          { "dst": "0.0.0.0/0" }
> +        ]
> +      }
> +    },
> +    {
> +      "type": "portmap",
> +      "capabilities": {"portMappings": true}
> +    }
> +  ]
> +}
> diff --git a/recipes-containers/k3s/k3s/k3s-agent b/recipes-containers/k3s/k3s/k3s-agent
> new file mode 100755
> index 0000000..1bb4c78
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent
> @@ -0,0 +1,100 @@
> +#!/bin/sh -eu

For these scripts, we have the license, but not a copyright. That should be
ok, but were the scripts completely written by you (or someone at your
company)? If so, it is a good idea to put a copyright header on the scripts,
so we can know the origin.

> +# SPDX-License-Identifier: Apache-2.0

The SPDX headers look good.

> +
> +ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
> +
> +usage() {
> +       echo "
> +USAGE:
> +    ${0##*/} [OPTIONS]
> +OPTIONS:
> +    --token value, -t value             Token to use for authentication [\$K3S_TOKEN]
> +    --token-file value                  Token file to use for authentication [\$K3S_TOKEN_FILE]
> +    --server value, -s value            Server to connect to [\$K3S_URL]
> +    --node-name value                   Node name [\$K3S_NODE_NAME]
> +    --resolv-conf value                 Kubelet resolv.conf file [\$K3S_RESOLV_CONF]
> +    --cluster-secret value              Shared secret used to bootstrap a cluster [\$K3S_CLUSTER_SECRET]
> +    -h                                  print this
> +"
> +}
> +
> +[ $# -gt 0 ] || {
> +       usage
> +       exit
> +}
> +
> +case $1 in
> +       -*)
> +               ;;
> +       *)
> +               usage
> +               exit 1
> +               ;;
> +esac
> +
> +rm -f $ENV_CONF
> +mkdir -p ${ENV_CONF%/*}
> +echo [Service] > $ENV_CONF
> +
> +while getopts "t:s:-:h" opt; do
> +       case $opt in
> +               h)
> +                       usage
> +                       exit
> +                       ;;
> +               t)
> +                       VAR_NAME=K3S_TOKEN
> +                       ;;
> +               s)
> +                       VAR_NAME=K3S_URL
> +                       ;;
> +               -)
> +                       [ $# -ge $OPTIND ] || {
> +                               usage
> +                               exit 1
> +                       }
> +                       opt=$OPTARG
> +                       eval OPTARG='$'$OPTIND
> +                       OPTIND=$(($OPTIND + 1))
> +                       case $opt in
> +                               token)
> +                                       VAR_NAME=K3S_TOKEN
> +                                       ;;
> +                               token-file)
> +                                       VAR_NAME=K3S_TOKEN_FILE
> +                                       ;;
> +                               server)
> +                                       VAR_NAME=K3S_URL
> +                                       ;;
> +                               node-name)
> +                                       VAR_NAME=K3S_NODE_NAME
> +                                       ;;
> +                               resolv-conf)
> +                                       VAR_NAME=K3S_RESOLV_CONF
> +                                       ;;
> +                               cluster-secret)
> +                                       VAR_NAME=K3S_CLUSTER_SECRET
> +                                       ;;
> +                               help)
> +                                       usage
> +                                       exit
> +                                       ;;
> +                               *)
> +                                       usage
> +                                       exit 1
> +                                       ;;
> +                       esac
> +                       ;;
> +               *)
> +                       usage
> +                       exit 1
> +                       ;;
> +       esac
> +    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
> +done
> +
> +chmod 0644 $ENV_CONF
> +rm -rf /var/lib/rancher/k3s/agent
> +systemctl daemon-reload
> +systemctl restart k3s-agent
> +systemctl enable k3s-agent.service
> diff --git a/recipes-containers/k3s/k3s/k3s-agent.service b/recipes-containers/k3s/k3s/k3s-agent.service
> new file mode 100644
> index 0000000..9f9016d
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent.service
> @@ -0,0 +1,26 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function

Perfect. This is what I was looking for.

> +[Unit]
> +Description=Lightweight Kubernetes Agent
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=control-group
> +Delegate=yes
> +LimitNOFILE=infinity
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s agent
> +ExecStopPost=/usr/local/bin/k3s-clean
> +
> diff --git a/recipes-containers/k3s/k3s/k3s-clean b/recipes-containers/k3s/k3s/k3s-clean
> new file mode 100755
> index 0000000..8eff829
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-clean
> @@ -0,0 +1,25 @@
> +#!/bin/sh -eu
> +# SPDX-License-Identifier: Apache-2.0
> +do_unmount() {
> +       [ $# -eq 2 ] || return
> +       local mounts=
> +       while read ignore mount ignore; do
> +               case $mount in
> +                       $1/*|$2/*)
> +                               mounts="$mount $mounts"
> +                               ;;
> +               esac
> +       done </proc/self/mounts
> +       [ -z "$mounts" ] || umount $mounts
> +}
> +
> +do_unmount /run/k3s /var/lib/rancher/k3s
> +
> +ip link show | grep 'master cni0' | while read ignore iface ignore; do
> +    iface=${iface%%@*}
> +    [ -z "$iface" ] || ip link delete $iface
> +done
> +
> +ip link delete cni0
> +ip link delete flannel.1
> +rm -rf /var/lib/cni/
> diff --git a/recipes-containers/k3s/k3s/k3s.service b/recipes-containers/k3s/k3s/k3s.service
> new file mode 100644
> index 0000000..34c7a80
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s.service
> @@ -0,0 +1,27 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=process
> +Delegate=yes
> +# Having non-zero Limit*s causes performance problems due to accounting overhead
> +# in the kernel. We recommend using cgroups to do container-local accounting.
> +LimitNOFILE=1048576
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s server
> +
> diff --git a/recipes-containers/k3s/k3s_git.bb b/recipes-containers/k3s/k3s_git.bb
> new file mode 100644
> index 0000000..cfc2c64
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s_git.bb
> @@ -0,0 +1,75 @@
> +SUMMARY = "Production-Grade Container Scheduling and Management"
> +DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant Kubernetes."
> +HOMEPAGE = "https://k3s.io/"
> +LICENSE = "Apache-2.0"
> +LIC_FILES_CHKSUM = "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
> +PV = "v1.18.9+k3s1-dirty"
> +
> +SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
> +           file://k3s.service \
> +           file://k3s-agent.service \
> +           file://k3s-agent \
> +           file://k3s-clean \
> +           file://cni-containerd-net.conf \
> +           file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
> +          "
> +SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
> +SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
> +
> +inherit go
> +inherit goarch
> +inherit systemd
> +
> +PACKAGECONFIG = ""
> +PACKAGECONFIG[upx] = ",,upx-native"
> +GO_IMPORT = "import"
> +GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
> +                    -X github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s', d, 1)[:8]} \
> +                    -w -s \
> +                   "
> +BIN_PREFIX ?= "${exec_prefix}/local"
> +
> +do_compile() {
> +        export GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
> +        export CGO_ENABLED="1"
> +        export GOFLAGS="-mod=vendor"
> +        cd ${S}/src/import
> +        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}" -o ./dist/artifacts/k3s ./cmd/server/main.go
> +        # Use UPX if it is enabled (and thus exists) to compress binary
> +        if command -v upx > /dev/null 2>&1; then
> +                upx -9 ./dist/artifacts/k3s
> +        fi
> +}
> +do_install() {
> +        install -d "${D}${BIN_PREFIX}/bin"
> +        install -m 755 "${S}/src/import/dist/artifacts/k3s" "${D}${BIN_PREFIX}/bin"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
> +        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
> +        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf" "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"

I'm going to abstract the networking configuration into a kubernetes-networking
package, so we can share it amongst the various recipes, and have a way to
control whether or not this configuration is installed. That allows an easy
way to both share and override the networking.

> +        if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
> +                install -D -m 0644 "${WORKDIR}/k3s.service" "${D}${systemd_system_unitdir}/k3s.service"
> +                install -D -m 0644 "${WORKDIR}/k3s-agent.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                sed -i "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g" "${D}${systemd_system_unitdir}/k3s.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                install -m 755 "${WORKDIR}/k3s-agent" "${D}${BIN_PREFIX}/bin"
> +        fi
> +}
> +
> +PACKAGES =+ "${PN}-server ${PN}-agent"
> +
> +SYSTEMD_PACKAGES = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server ${PN}-agent','',d)}"
> +SYSTEMD_SERVICE_${PN}-server = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
> +SYSTEMD_SERVICE_${PN}-agent = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
> +SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
> +
> +FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
> +
> +RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2 ipset virtual/containerd"

I'll also take care of getting ipset in a place where we don't have to
add extra layer
dependencies.

Bruce

> +RDEPENDS_${PN}-server = "${PN}"
> +RDEPENDS_${PN}-agent = "${PN}"
> +
> +RCONFLICTS_${PN} = "kubectl"
> +
> +INHIBIT_PACKAGE_STRIP = "1"
> +INSANE_SKIP_${PN} += "ldflags already-stripped"
> --
> 2.20.1
>
>
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread
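The k3s-agent script reviewed above emulates long options on top of POSIX
getopts, which only understands single-letter options: "-" is declared as an
option taking an argument, so "--token" arrives as opt "-" with OPTARG
"token", and the value is then fetched by hand from the next positional
parameter via $OPTIND. A minimal, runnable sketch of that trick (option names
and sample values here are illustrative, reduced to two of the script's
options):

```shell
#!/bin/sh
# Sketch of the long-option handling in k3s-agent, reduced to -t/--token
# and -s/--server.
parse() {
    OPTIND=1
    TOKEN= URL=
    while getopts "t:s:-:" opt; do
        case $opt in
            t) TOKEN=$OPTARG ;;
            s) URL=$OPTARG ;;
            -)
                name=$OPTARG               # e.g. "token" from "--token"
                eval OPTARG='$'$OPTIND     # long option's value is the next argument
                OPTIND=$((OPTIND + 1))     # ...which we consume by hand
                case $name in
                    token)  TOKEN=$OPTARG ;;
                    server) URL=$OPTARG ;;
                esac
                ;;
        esac
    done
    echo "TOKEN=$TOKEN URL=$URL"
}

parse --token abc123 --server https://master:6443
# TOKEN=abc123 URL=https://master:6443
```

The real script additionally validates the option list and writes each value
as an Environment= line into a systemd drop-in instead of printing it.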

* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-29 19:58                     ` Bruce Ashfield
@ 2020-09-30  8:12                       ` Joakim Roubert
       [not found]                       ` <1639818C3E50A226.8589@lists.yoctoproject.org>
  1 sibling, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-09-30  8:12 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-09-29 21:58, Bruce Ashfield wrote:
> Thanks for the updated series, see some comments inline.

Awesome, I am happy that you liked it!

> Thanks for the README. It's a good start and pretty much all we need
> for now.

I realized I had forgotten to run markdownlint for it, so a slightly
updated version is included in the upcoming patch update.

> I'll be doing some build and runtime testing and can do some tweaks
> as required.

Perfect!

> For these scripts, we have the license, but not a copyright. Which 
> should be ok, but are the scripts completely written by you (or
> someone at your company?), if so, it is a good idea to put a
> copyright header on the scripts, so we can know the origin.

I have added that now; I am quite sure that Erik is the author of those. 
By the look of the coding style, it sure seems like it. (I have sent a 
text to him just to verify and if I am wrong in my assumption here, I 
will update accordingly.)

> I'm going to abstract the networking configuration into a 
> kubernetes-networking package, so we can share it amongst the various
> recipes, and have a way to control whether or not this configuration
> is installed. That allows an easy way to both share and override the
> networking.

I like that!

> I'll also take care of getting ipset in a place where we don't have
> to add extra layer dependencies.

Perfect, thank you!

Following this reply, I will send the latest updates in a separate e-mail.

BR,

/Joakim

^ permalink raw reply	[flat|nested] 73+ messages in thread
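The k3s-clean script in this series selects which mounts to tear down by
reading /proc/self/mounts and matching mount points against two directory
prefixes with a shell case pattern. A minimal sketch of that filter, with a
canned sample standing in for /proc/self/mounts and the collected list printed
instead of being handed to umount (the sample mount lines are invented for
illustration):

```shell
#!/bin/sh
# Sketch of the prefix filter in k3s-clean's do_unmount(): each line of
# /proc/self/mounts is "<dev> <mountpoint> <fstype> <opts> ...", and the
# case pattern keeps only mount points below the two given directories.
# Prepending to the list reverses the order, so mounts listed later in
# the file end up first.
collect_mounts() {
    mounts=
    while read ignore mount ignore; do
        case $mount in
            $1/*|$2/*)
                mounts="$mount $mounts"
                ;;
        esac
    done
    # Unquoted on purpose, mirroring the script's "umount $mounts":
    # word splitting collapses the trailing space.
    echo $mounts
}

collect_mounts /run/k3s /var/lib/rancher/k3s <<EOF
tmpfs /run tmpfs rw 0 0
overlay /run/k3s/rootfs overlay rw 0 0
tmpfs /var/lib/rancher/k3s/agent/pods tmpfs rw 0 0
/dev/sda1 /home ext4 rw 0 0
EOF
# /var/lib/rancher/k3s/agent/pods /run/k3s/rootfs
```

The real script passes the resulting list to a single umount invocation after
verifying it is non-empty.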

* Re: [meta-virtualization][PATCH] Adding k3s recipe
       [not found]                       ` <1639818C3E50A226.8589@lists.yoctoproject.org>
@ 2020-09-30  8:14                         ` Joakim Roubert
  2020-10-01 10:32                         ` Joakim Roubert
       [not found]                         ` <1639D7B9311FC65C.18704@lists.yoctoproject.org>
  2 siblings, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-09-30  8:14 UTC (permalink / raw)
  To: meta-virtualization


Change-Id: Id1c52727593bc5ea8d0cd2de192faa44304d7a45
Signed-off-by: Joakim Roubert <joakimr@axis.com>
---
  recipes-containers/k3s/README.md              |  30 +++++
  ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
  .../k3s/k3s/cni-containerd-net.conf           |  24 ++++
  recipes-containers/k3s/k3s/k3s-agent          | 103 ++++++++++++++++++
  recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
  recipes-containers/k3s/k3s/k3s-clean          |  29 +++++
  recipes-containers/k3s/k3s/k3s.service        |  27 +++++
  recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
  8 files changed, 341 insertions(+)
  create mode 100644 recipes-containers/k3s/README.md
  create mode 100644 recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
  create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
  create mode 100755 recipes-containers/k3s/k3s/k3s-agent
  create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
  create mode 100755 recipes-containers/k3s/k3s/k3s-clean
  create mode 100644 recipes-containers/k3s/k3s/k3s.service
  create mode 100644 recipes-containers/k3s/k3s_git.bb

diff --git a/recipes-containers/k3s/README.md b/recipes-containers/k3s/README.md
new file mode 100644
index 0000000..3fe5ccd
--- /dev/null
+++ b/recipes-containers/k3s/README.md
@@ -0,0 +1,30 @@
+# k3s: Lightweight Kubernetes
+
+Rancher's [k3s](https://k3s.io/), available under
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
+lightweight Kubernetes suitable for small/edge devices. There are use cases
+where the
+[installation procedures provided by Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
+are not ideal but a bitbake-built version is what is needed. And only a few
+mods to the [k3s source code](https://github.com/rancher/k3s) are needed to
+accomplish that.
+
+## CNI
+
+By default, K3s will run with flannel as the CNI, using VXLAN as the default
+backend. It is both possible to change the flannel backend and to change from
+flannel to another CNI.
+
+Please see <https://rancher.com/docs/k3s/latest/en/installation/network-options/>
+for further k3s networking details.
+
+## Configure and run a k3s agent
+
+The convenience script `k3s-agent` can be used to set up a k3s agent (service):
+
+```shell
+k3s-agent -t <token> -s https://<master>:6443
+```
+
+(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
+k3s master.)
diff --git a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
new file mode 100644
index 0000000..8205d73
--- /dev/null
+++ b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
@@ -0,0 +1,27 @@
+From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
+From: Erik Jansson <erikja@axis.com>
+Date: Wed, 16 Oct 2019 15:07:48 +0200
+Subject: [PATCH] Finding host-local in /usr/libexec
+
+Upstream-status: Inappropriate [embedded specific]
+Signed-off-by: <erikja@axis.com>
+---
+ pkg/agent/config/config.go | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
+index b4296f360a..6af9dab895 100644
+--- a/pkg/agent/config/config.go
++++ b/pkg/agent/config/config.go
+@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
+                return nil, err
+        }
+
+-      hostLocal, err := exec.LookPath("host-local")
++      hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
+        if err != nil {
+                return nil, errors.Wrapf(err, "failed to find host-local")
+        }
+--
+2.11.0
+
diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf b/recipes-containers/k3s/k3s/cni-containerd-net.conf
new file mode 100644
index 0000000..ca434d6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
@@ -0,0 +1,24 @@
+{
+  "cniVersion": "0.4.0",
+  "name": "containerd-net",
+  "plugins": [
+    {
+      "type": "bridge",
+      "bridge": "cni0",
+      "isGateway": true,
+      "ipMasq": true,
+      "promiscMode": true,
+      "ipam": {
+        "type": "host-local",
+        "subnet": "10.88.0.0/16",
+        "routes": [
+          { "dst": "0.0.0.0/0" }
+        ]
+      }
+    },
+    {
+      "type": "portmap",
+      "capabilities": {"portMappings": true}
+    }
+  ]
+}
diff --git a/recipes-containers/k3s/k3s/k3s-agent b/recipes-containers/k3s/k3s/k3s-agent
new file mode 100755
index 0000000..b6c6cb6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent
@@ -0,0 +1,103 @@
+#!/bin/sh -eu
+#
+# Copyright (C) 2020 Axis Communications AB
+#
+# SPDX-License-Identifier: Apache-2.0
+
+ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
+
+usage() {
+       echo "
+USAGE:
+    ${0##*/} [OPTIONS]
+OPTIONS:
+    --token value, -t value             Token to use for authentication [\$K3S_TOKEN]
+    --token-file value                  Token file to use for authentication [\$K3S_TOKEN_FILE]
+    --server value, -s value            Server to connect to [\$K3S_URL]
+    --node-name value                   Node name [\$K3S_NODE_NAME]
+    --resolv-conf value                 Kubelet resolv.conf file [\$K3S_RESOLV_CONF]
+    --cluster-secret value              Shared secret used to bootstrap a cluster [\$K3S_CLUSTER_SECRET]
+    -h                                  print this
+"
+}
+
+[ $# -gt 0 ] || {
+       usage
+       exit
+}
+
+case $1 in
+       -*)
+               ;;
+       *)
+               usage
+               exit 1
+               ;;
+esac
+
+rm -f $ENV_CONF
+mkdir -p ${ENV_CONF%/*}
+echo [Service] > $ENV_CONF
+
+while getopts "t:s:-:h" opt; do
+       case $opt in
+               h)
+                       usage
+                       exit
+                       ;;
+               t)
+                       VAR_NAME=K3S_TOKEN
+                       ;;
+               s)
+                       VAR_NAME=K3S_URL
+                       ;;
+               -)
+                       [ $# -ge $OPTIND ] || {
+                               usage
+                               exit 1
+                       }
+                       opt=$OPTARG
+                       eval OPTARG='$'$OPTIND
+                       OPTIND=$(($OPTIND + 1))
+                       case $opt in
+                               token)
+                                       VAR_NAME=K3S_TOKEN
+                                       ;;
+                               token-file)
+                                       VAR_NAME=K3S_TOKEN_FILE
+                                       ;;
+                               server)
+                                       VAR_NAME=K3S_URL
+                                       ;;
+                               node-name)
+                                       VAR_NAME=K3S_NODE_NAME
+                                       ;;
+                               resolv-conf)
+                                       VAR_NAME=K3S_RESOLV_CONF
+                                       ;;
+                               cluster-secret)
+                                       VAR_NAME=K3S_CLUSTER_SECRET
+                                       ;;
+                               help)
+                                       usage
+                                       exit
+                                       ;;
+                               *)
+                                       usage
+                                       exit 1
+                                       ;;
+                       esac
+                       ;;
+               *)
+                       usage
+                       exit 1
+                       ;;
+       esac
+    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
+done
+
+chmod 0644 $ENV_CONF
+rm -rf /var/lib/rancher/k3s/agent
+systemctl daemon-reload
+systemctl restart k3s-agent
+systemctl enable k3s-agent.service
diff --git a/recipes-containers/k3s/k3s/k3s-agent.service b/recipes-containers/k3s/k3s/k3s-agent.service
new file mode 100644
index 0000000..9f9016d
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent.service
@@ -0,0 +1,26 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes Agent
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=control-group
+Delegate=yes
+LimitNOFILE=infinity
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s agent
+ExecStopPost=/usr/local/bin/k3s-clean
+
diff --git a/recipes-containers/k3s/k3s/k3s-clean b/recipes-containers/k3s/k3s/k3s-clean
new file mode 100755
index 0000000..30b74a7
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-clean
@@ -0,0 +1,29 @@
+#!/bin/sh -eu
+#
+# Copyright (C) 2020 Axis Communications AB
+#
+# SPDX-License-Identifier: Apache-2.0
+
+do_unmount() {
+       [ $# -eq 2 ] || return
+       local mounts=
+       while read ignore mount ignore; do
+               case $mount in
+                       $1/*|$2/*)
+                               mounts="$mount $mounts"
+                               ;;
+               esac
+       done </proc/self/mounts
+       [ -z "$mounts" ] || umount $mounts
+}
+
+do_unmount /run/k3s /var/lib/rancher/k3s
+
+ip link show | grep 'master cni0' | while read ignore iface ignore; do
+    iface=${iface%%@*}
+    [ -z "$iface" ] || ip link delete $iface
+done
+
+ip link delete cni0
+ip link delete flannel.1
+rm -rf /var/lib/cni/
diff --git a/recipes-containers/k3s/k3s/k3s.service b/recipes-containers/k3s/k3s/k3s.service
new file mode 100644
index 0000000..34c7a80
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s.service
@@ -0,0 +1,27 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=process
+Delegate=yes
+# Having non-zero Limit*s causes performance problems due to accounting overhead
+# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=1048576
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s server
+
diff --git a/recipes-containers/k3s/k3s_git.bb b/recipes-containers/k3s/k3s_git.bb
new file mode 100644
index 0000000..cfc2c64
--- /dev/null
+++ b/recipes-containers/k3s/k3s_git.bb
@@ -0,0 +1,75 @@
+SUMMARY = "Production-Grade Container Scheduling and Management"
+DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant Kubernetes."
+HOMEPAGE = "https://k3s.io/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
+PV = "v1.18.9+k3s1-dirty"
+
+SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
+           file://k3s.service \
+           file://k3s-agent.service \
+           file://k3s-agent \
+           file://k3s-clean \
+           file://cni-containerd-net.conf \
+           file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
+          "
+SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
+SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
+
+inherit go
+inherit goarch
+inherit systemd
+
+PACKAGECONFIG = ""
+PACKAGECONFIG[upx] = ",,upx-native"
+GO_IMPORT = "import"
+GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
+                    -X github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s', d, 1)[:8]} \
+                    -w -s \
+                   "
+BIN_PREFIX ?= "${exec_prefix}/local"
+
+do_compile() {
+        export GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
+        export CGO_ENABLED="1"
+        export GOFLAGS="-mod=vendor"
+        cd ${S}/src/import
+        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}" -o ./dist/artifacts/k3s ./cmd/server/main.go
+        # Use UPX if it is enabled (and thus exists) to compress binary
+        if command -v upx > /dev/null 2>&1; then
+                upx -9 ./dist/artifacts/k3s
+        fi
+}
+do_install() {
+        install -d "${D}${BIN_PREFIX}/bin"
+        install -m 755 "${S}/src/import/dist/artifacts/k3s" "${D}${BIN_PREFIX}/bin"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
+        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
+        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf" "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
+        if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+                install -D -m 0644 "${WORKDIR}/k3s.service" "${D}${systemd_system_unitdir}/k3s.service"
+                install -D -m 0644 "${WORKDIR}/k3s-agent.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                sed -i "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g" "${D}${systemd_system_unitdir}/k3s.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                install -m 755 "${WORKDIR}/k3s-agent" "${D}${BIN_PREFIX}/bin"
+        fi
+}
+
+PACKAGES =+ "${PN}-server ${PN}-agent"
+
+SYSTEMD_PACKAGES = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server ${PN}-agent','',d)}"
+SYSTEMD_SERVICE_${PN}-server = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
+SYSTEMD_SERVICE_${PN}-agent = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
+SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
+
+FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
+
+RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2 ipset virtual/containerd"
+RDEPENDS_${PN}-server = "${PN}"
+RDEPENDS_${PN}-agent = "${PN}"
+
+RCONFLICTS_${PN} = "kubectl"
+
+INHIBIT_PACKAGE_STRIP = "1"
+INSANE_SKIP_${PN} += "ldflags already-stripped"
-- 
2.20.1




* Re: [meta-virtualization][PATCH] Adding k3s recipe
       [not found]                       ` <1639818C3E50A226.8589@lists.yoctoproject.org>
  2020-09-30  8:14                         ` Joakim Roubert
@ 2020-10-01 10:32                         ` Joakim Roubert
       [not found]                         ` <1639D7B9311FC65C.18704@lists.yoctoproject.org>
  2 siblings, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-10-01 10:32 UTC (permalink / raw)
  To: meta-virtualization

On 2020-09-30 10:12, Joakim Roubert wrote:
>
> I have added that now; I am quite sure that Erik is the author of those.
> By the look of the coding style, it sure seems like it. (I have sent a
> text to him just to verify and if I am wrong in my assumption here, I
> will update accordingly.)

Update accordingly: the last couple of lines in k3s-clean come from
install.sh's create_killall() function. I have now added a comment that
clearly states that. Hopefully everything is now in order so that we do
not claim someone else's credit.

The patchset with this update follows this reply in a separate e-mail.

BR,

/Joakim


* Re: [meta-virtualization][PATCH] Adding k3s recipe
       [not found]                         ` <1639D7B9311FC65C.18704@lists.yoctoproject.org>
@ 2020-10-01 10:32                           ` Joakim Roubert
  2020-10-14 16:38                             ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-10-01 10:32 UTC (permalink / raw)
  To: meta-virtualization; +Cc: Bruce Ashfield



Change-Id: Id1c52727593bc5ea8d0cd2de192faa44304d7a45
Signed-off-by: Joakim Roubert <joakimr@axis.com>
---
  recipes-containers/k3s/README.md              |  30 +++++
  ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
  .../k3s/k3s/cni-containerd-net.conf           |  24 ++++
  recipes-containers/k3s/k3s/k3s-agent          | 103 ++++++++++++++++++
  recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
  recipes-containers/k3s/k3s/k3s-clean          |  30 +++++
  recipes-containers/k3s/k3s/k3s.service        |  27 +++++
  recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
  8 files changed, 342 insertions(+)
  create mode 100644 recipes-containers/k3s/README.md
  create mode 100644 recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
  create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
  create mode 100755 recipes-containers/k3s/k3s/k3s-agent
  create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
  create mode 100755 recipes-containers/k3s/k3s/k3s-clean
  create mode 100644 recipes-containers/k3s/k3s/k3s.service
  create mode 100644 recipes-containers/k3s/k3s_git.bb

diff --git a/recipes-containers/k3s/README.md b/recipes-containers/k3s/README.md
new file mode 100644
index 0000000..3fe5ccd
--- /dev/null
+++ b/recipes-containers/k3s/README.md
@@ -0,0 +1,30 @@
+# k3s: Lightweight Kubernetes
+
+Rancher's [k3s](https://k3s.io/), available under the
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
+a lightweight Kubernetes suitable for small and edge devices. There are use
+cases where the
+[installation procedures provided by Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
+are not ideal and a BitBake-built version is needed instead. Only a few
+modifications to the [k3s source code](https://github.com/rancher/k3s) are
+needed to accomplish that.
+
+## CNI
+
+By default, K3s runs with flannel as the CNI, using VXLAN as the default
+backend. It is possible both to change the flannel backend and to replace
+flannel with another CNI.
+
+Please see <https://rancher.com/docs/k3s/latest/en/installation/network-options/>
+for further k3s networking details.
+
+## Configure and run a k3s agent
+
+The convenience script `k3s-agent` can be used to set up a k3s agent (service):
+
+```shell
+k3s-agent -t <token> -s https://<master>:6443
+```
+
+(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` on the
+k3s master.)
diff --git a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
new file mode 100644
index 0000000..8205d73
--- /dev/null
+++ b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
@@ -0,0 +1,27 @@
+From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
+From: Erik Jansson <erikja@axis.com>
+Date: Wed, 16 Oct 2019 15:07:48 +0200
+Subject: [PATCH] Finding host-local in /usr/libexec
+
+Upstream-status: Inappropriate [embedded specific]
+Signed-off-by: <erikja@axis.com>
+---
+ pkg/agent/config/config.go | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
+index b4296f360a..6af9dab895 100644
+--- a/pkg/agent/config/config.go
++++ b/pkg/agent/config/config.go
+@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
+ 		return nil, err
+ 	}
+
+-	hostLocal, err := exec.LookPath("host-local")
++	hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
+ 	if err != nil {
+ 		return nil, errors.Wrapf(err, "failed to find host-local")
+ 	}
+--
+2.11.0
+
diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf b/recipes-containers/k3s/k3s/cni-containerd-net.conf
new file mode 100644
index 0000000..ca434d6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
@@ -0,0 +1,24 @@
+{
+  "cniVersion": "0.4.0",
+  "name": "containerd-net",
+  "plugins": [
+    {
+      "type": "bridge",
+      "bridge": "cni0",
+      "isGateway": true,
+      "ipMasq": true,
+      "promiscMode": true,
+      "ipam": {
+        "type": "host-local",
+        "subnet": "10.88.0.0/16",
+        "routes": [
+          { "dst": "0.0.0.0/0" }
+        ]
+      }
+    },
+    {
+      "type": "portmap",
+      "capabilities": {"portMappings": true}
+    }
+  ]
+}
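As a quick sanity check of the CNI configuration above, the plugin chain can be verified with plain grep. This is an editorial sketch, not part of the patch: it recreates the file content in a temporary location, whereas on a target built with this recipe the file is installed as `/etc/cni/net.d/10-containerd-net.conf`.

```shell
# Recreate the cni-containerd-net.conf content in a temp file and check
# that the expected plugin chain (bridge with host-local IPAM, portmap)
# is present. On a real target the file lives under /etc/cni/net.d/.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": {"portMappings": true} }
  ]
}
EOF
for t in bridge host-local portmap; do
    grep -q "\"type\": \"$t\"" "$conf" || { echo "missing plugin: $t"; exit 1; }
done
echo "CNI config OK"
```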
diff --git a/recipes-containers/k3s/k3s/k3s-agent b/recipes-containers/k3s/k3s/k3s-agent
new file mode 100755
index 0000000..b6c6cb6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent
@@ -0,0 +1,103 @@
+#!/bin/sh -eu
+#
+# Copyright (C) 2020 Axis Communications AB
+#
+# SPDX-License-Identifier: Apache-2.0
+
+ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
+
+usage() {
+	echo "
+USAGE:
+    ${0##*/} [OPTIONS]
+OPTIONS:
+    --token value, -t value             Token to use for authentication 
[\$K3S_TOKEN]
+    --token-file value                  Token file to use for 
authentication [\$K3S_TOKEN_FILE]
+    --server value, -s value            Server to connect to [\$K3S_URL]
+    --node-name value                   Node name [\$K3S_NODE_NAME]
+    --resolv-conf value                 Kubelet resolv.conf file 
[\$K3S_RESOLV_CONF]
+    --cluster-secret value              Shared secret used to bootstrap 
a cluster [\$K3S_CLUSTER_SECRET]
+    -h                                  print this
+"
+}
+
+[ $# -gt 0 ] || {
+	usage
+	exit
+}
+
+case $1 in
+	-*)
+		;;
+	*)
+		usage
+		exit 1
+		;;
+esac
+
+rm -f $ENV_CONF
+mkdir -p ${ENV_CONF%/*}
+echo [Service] > $ENV_CONF
+
+while getopts "t:s:-:h" opt; do
+	case $opt in
+		h)
+			usage
+			exit
+			;;
+		t)
+			VAR_NAME=K3S_TOKEN
+			;;
+		s)
+			VAR_NAME=K3S_URL
+			;;
+		-)
+			[ $# -ge $OPTIND ] || {
+				usage
+				exit 1
+			}
+			opt=$OPTARG
+			eval OPTARG='$'$OPTIND
+			OPTIND=$(($OPTIND + 1))
+			case $opt in
+				token)
+					VAR_NAME=K3S_TOKEN
+					;;
+				token-file)
+					VAR_NAME=K3S_TOKEN_FILE
+					;;
+				server)
+					VAR_NAME=K3S_URL
+					;;
+				node-name)
+					VAR_NAME=K3S_NODE_NAME
+					;;
+				resolv-conf)
+					VAR_NAME=K3S_RESOLV_CONF
+					;;
+				cluster-secret)
+					VAR_NAME=K3S_CLUSTER_SECRET
+					;;
+				help)
+					usage
+					exit
+					;;
+				*)
+					usage
+					exit 1
+					;;
+			esac
+			;;
+		*)
+			usage
+			exit 1
+			;;
+	esac
+    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
+done
+
+chmod 0644 $ENV_CONF
+rm -rf /var/lib/rancher/k3s/agent
+systemctl daemon-reload
+systemctl restart k3s-agent
+systemctl enable k3s-agent.service
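For illustration, the drop-in file that the script above generates can be sketched in isolation. This is not part of the patch: it writes to a temporary file rather than the real `/etc/systemd/system/k3s-agent.service.d/10-env.conf`, and the token and server URL values are placeholders.

```shell
# Mimic k3s-agent's option handling: each recognized option becomes an
# Environment= line in a systemd [Service] drop-in (here a temp file).
ENV_CONF=$(mktemp)
echo "[Service]" > "$ENV_CONF"

add_env() {
    # $1 = K3S_* variable name, $2 = value
    echo "Environment=$1=$2" >> "$ENV_CONF"
}

add_env K3S_TOKEN "abc123"                         # from -t/--token
add_env K3S_URL "https://master.example.com:6443"  # from -s/--server
cat "$ENV_CONF"
```

systemd then picks the variables up after `systemctl daemon-reload`, exactly as the script's final lines do.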
diff --git a/recipes-containers/k3s/k3s/k3s-agent.service b/recipes-containers/k3s/k3s/k3s-agent.service
new file mode 100644
index 0000000..9f9016d
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent.service
@@ -0,0 +1,26 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes Agent
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=control-group
+Delegate=yes
+LimitNOFILE=infinity
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s agent
+ExecStopPost=/usr/local/bin/k3s-clean
+
diff --git a/recipes-containers/k3s/k3s/k3s-clean b/recipes-containers/k3s/k3s/k3s-clean
new file mode 100755
index 0000000..8eca918
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-clean
@@ -0,0 +1,30 @@
+#!/bin/sh -eu
+#
+# Copyright (C) 2020 Axis Communications AB
+#
+# SPDX-License-Identifier: Apache-2.0
+
+do_unmount() {
+	[ $# -eq 2 ] || return
+	local mounts=
+	while read ignore mount ignore; do
+		case $mount in
+			$1/*|$2/*)
+				mounts="$mount $mounts"
+				;;
+		esac
+	done </proc/self/mounts
+	[ -z "$mounts" ] || umount $mounts
+}
+
+do_unmount /run/k3s /var/lib/rancher/k3s
+
+# The lines below come from install.sh's create_killall() function:
+ip link show 2>/dev/null | grep 'master cni0' | while read ignore iface ignore; do
+    iface=${iface%%@*}
+    [ -z "$iface" ] || ip link delete $iface
+done
+
+ip link delete cni0
+ip link delete flannel.1
+rm -rf /var/lib/cni/
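The prefix matching that `do_unmount` performs can be exercised without touching real mounts. The sketch below runs the same case-pattern logic against a fabricated mounts table (the mount entries are made up, and nothing is actually unmounted):

```shell
# Feed do_unmount's matching logic a fake mounts table and only collect
# the mount points it would pass to umount.
MOUNTS=$(mktemp)
cat > "$MOUNTS" <<'EOF'
tmpfs /run/k3s/containerd tmpfs rw 0 0
overlay /var/lib/rancher/k3s/agent/overlay overlay rw 0 0
tmpfs /run/user/1000 tmpfs rw 0 0
EOF

collect_mounts() {
    # Same pattern as do_unmount: keep mounts under either prefix;
    # prepending reverses the /proc/self/mounts order so nested mounts
    # come first in the resulting list.
    mounts=
    while read ignore mount ignore; do
        case $mount in
            $1/*|$2/*) mounts="$mount $mounts" ;;
        esac
    done < "$MOUNTS"
    printf '%s\n' "$mounts"
}

collect_mounts /run/k3s /var/lib/rancher/k3s
```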
diff --git a/recipes-containers/k3s/k3s/k3s.service b/recipes-containers/k3s/k3s/k3s.service
new file mode 100644
index 0000000..34c7a80
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s.service
@@ -0,0 +1,27 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=process
+Delegate=yes
+# Having non-zero Limit*s causes performance problems due to accounting overhead
+# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=1048576
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s server
+
diff --git a/recipes-containers/k3s/k3s_git.bb b/recipes-containers/k3s/k3s_git.bb
new file mode 100644
index 0000000..cfc2c64
--- /dev/null
+++ b/recipes-containers/k3s/k3s_git.bb
@@ -0,0 +1,75 @@
+SUMMARY = "Production-Grade Container Scheduling and Management"
+DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant Kubernetes."
+HOMEPAGE = "https://k3s.io/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
+PV = "v1.18.9+k3s1-dirty"
+
+SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
+           file://k3s.service \
+           file://k3s-agent.service \
+           file://k3s-agent \
+           file://k3s-clean \
+           file://cni-containerd-net.conf \
+           file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
+          "
+SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
+SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
+
+inherit go
+inherit goarch
+inherit systemd
+
+PACKAGECONFIG = ""
+PACKAGECONFIG[upx] = ",,upx-native"
+GO_IMPORT = "import"
+GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
+                    -X github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s', d, 1)[:8]} \
+                    -w -s \
+                   "
+BIN_PREFIX ?= "${exec_prefix}/local"
+
+do_compile() {
+        export GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
+        export CGO_ENABLED="1"
+        export GOFLAGS="-mod=vendor"
+        cd ${S}/src/import
+        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}" -o ./dist/artifacts/k3s ./cmd/server/main.go
+        # Use UPX if it is enabled (and thus exists) to compress binary
+        if command -v upx > /dev/null 2>&1; then
+                upx -9 ./dist/artifacts/k3s
+        fi
+}
+do_install() {
+        install -d "${D}${BIN_PREFIX}/bin"
+        install -m 755 "${S}/src/import/dist/artifacts/k3s" "${D}${BIN_PREFIX}/bin"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
+        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
+        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf" "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
+        if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+                install -D -m 0644 "${WORKDIR}/k3s.service" "${D}${systemd_system_unitdir}/k3s.service"
+                install -D -m 0644 "${WORKDIR}/k3s-agent.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                sed -i "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g" "${D}${systemd_system_unitdir}/k3s.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                install -m 755 "${WORKDIR}/k3s-agent" "${D}${BIN_PREFIX}/bin"
+        fi
+}
+
+PACKAGES =+ "${PN}-server ${PN}-agent"
+
+SYSTEMD_PACKAGES = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server ${PN}-agent','',d)}"
+SYSTEMD_SERVICE_${PN}-server = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
+SYSTEMD_SERVICE_${PN}-agent = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
+SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
+
+FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
+
+RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2 ipset virtual/containerd"
+RDEPENDS_${PN}-server = "${PN}"
+RDEPENDS_${PN}-agent = "${PN}"
+
+RCONFLICTS_${PN} = "kubectl"
+
+INHIBIT_PACKAGE_STRIP = "1"
+INSANE_SKIP_${PN} += "ldflags already-stripped"
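The `sed` expression in `do_install` rewrites the hardcoded `/usr/local/bin/k3s` paths in the service files to match `BIN_PREFIX`. Its effect can be checked standalone; the `/opt/k3s` prefix below is only an example value, not the recipe default:

```shell
# Apply the recipe's Exec-path rewrite to a copy of the relevant unit lines.
BIN_PREFIX=/opt/k3s
unit=$(mktemp)
cat > "$unit" <<'EOF'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStart=/usr/local/bin/k3s agent
ExecStopPost=/usr/local/bin/k3s-clean
EOF
# Only Exec* lines mentioning k3s are rewritten; the modprobe line is left alone.
sed -i "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g" "$unit"
cat "$unit"
```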
-- 
2.20.1




* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-09-28 13:48                   ` Joakim Roubert
  2020-09-29 19:58                     ` Bruce Ashfield
@ 2020-10-13 12:22                     ` Bruce Ashfield
  1 sibling, 0 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-10-13 12:22 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

FYI: This version of the patch hasn't been forgotten.

I'm just having some issues with containerd and alternate runtimes +
the networking consolidation + preparing slides for ELC-e (x3!!) .. so
progress has been slower than I wanted.

That being said, I expect to have it in place before branching the release.

Cheers,

Bruce

On Mon, Sep 28, 2020 at 9:49 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> Signed-off-by: Joakim Roubert <joakimr@axis.com>
> ---
>   recipes-containers/k3s/README.md              |  26 +++++
>   ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
>   .../k3s/k3s/cni-containerd-net.conf           |  24 +++++
>   recipes-containers/k3s/k3s/k3s-agent          | 100 ++++++++++++++++++
>   recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
>   recipes-containers/k3s/k3s/k3s-clean          |  25 +++++
>   recipes-containers/k3s/k3s/k3s.service        |  27 +++++
>   recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
>   8 files changed, 330 insertions(+)
>   create mode 100644 recipes-containers/k3s/README.md
>   create mode 100644
> recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
>   create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
>   create mode 100755 recipes-containers/k3s/k3s/k3s-agent
>   create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
>   create mode 100755 recipes-containers/k3s/k3s/k3s-clean
>   create mode 100644 recipes-containers/k3s/k3s/k3s.service
>   create mode 100644 recipes-containers/k3s/k3s_git.bb
>
> diff --git a/recipes-containers/k3s/README.md
> b/recipes-containers/k3s/README.md
> new file mode 100644
> index 0000000..8a0a994
> --- /dev/null
> +++ b/recipes-containers/k3s/README.md
> @@ -0,0 +1,26 @@
> +# k3s: Lightweight Kubernetes
> +
> +Rancher's [k3s](https://k3s.io/), available under
> +[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
> +lightweight Kubernetes suitable for small/edge devices. There are use cases
> +where the
> +[installation procedures provided by
> Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
> +are not ideal but a bitbake-built version is what is needed. And only a few
> +mods to the [k3s source code](https://github.com/rancher/k3s) is needed to
> +accomplish that.
> +
> +## CNI
> +By default, K3s will run with flannel as the CNI, using VXLAN as the
> default
> +backend. It is both possible to change the flannel backend and to
> change from
> +flannel to another CNI.
> +
> +Please see
> https://rancher.com/docs/k3s/latest/en/installation/network-options/
> +for further k3s networking details.
> +
> +## Configure and run a k3s agent
> +The convenience script `k3s-agent` can be used to set up a k3s agent
> (service):
> +
> +    k3s-agent -t <token> -s https://<master>:6443
> +
> +(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
> +k3s master.)
> diff --git
> a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> new file mode 100644
> index 0000000..8205d73
> --- /dev/null
> +++
> b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> @@ -0,0 +1,27 @@
> +From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
> +From: Erik Jansson <erikja@axis.com>
> +Date: Wed, 16 Oct 2019 15:07:48 +0200
> +Subject: [PATCH] Finding host-local in /usr/libexec
> +
> +Upstream-status: Inappropriate [embedded specific]
> +Signed-off-by: <erikja@axis.com>
> +---
> + pkg/agent/config/config.go | 2 +-
> + 1 file changed, 1 insertion(+), 1 deletion(-)
> +
> +diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
> +index b4296f360a..6af9dab895 100644
> +--- a/pkg/agent/config/config.go
> ++++ b/pkg/agent/config/config.go
> +@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
> +                return nil, err
> +        }
> +
> +-      hostLocal, err := exec.LookPath("host-local")
> ++      hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
> +        if err != nil {
> +                return nil, errors.Wrapf(err, "failed to find host-local")
> +        }
> +--
> +2.11.0
> +
> diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf
> b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> new file mode 100644
> index 0000000..ca434d6
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> @@ -0,0 +1,24 @@
> +{
> +  "cniVersion": "0.4.0",
> +  "name": "containerd-net",
> +  "plugins": [
> +    {
> +      "type": "bridge",
> +      "bridge": "cni0",
> +      "isGateway": true,
> +      "ipMasq": true,
> +      "promiscMode": true,
> +      "ipam": {
> +        "type": "host-local",
> +        "subnet": "10.88.0.0/16",
> +        "routes": [
> +          { "dst": "0.0.0.0/0" }
> +        ]
> +      }
> +    },
> +    {
> +      "type": "portmap",
> +      "capabilities": {"portMappings": true}
> +    }
> +  ]
> +}
> diff --git a/recipes-containers/k3s/k3s/k3s-agent
> b/recipes-containers/k3s/k3s/k3s-agent
> new file mode 100755
> index 0000000..1bb4c78
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent
> @@ -0,0 +1,100 @@
> +#!/bin/sh -eu
> +# SPDX-License-Identifier: Apache-2.0
> +
> +ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
> +
> +usage() {
> +       echo "
> +USAGE:
> +    ${0##*/} [OPTIONS]
> +OPTIONS:
> +    --token value, -t value             Token to use for authentication
> [\$K3S_TOKEN]
> +    --token-file value                  Token file to use for
> authentication [\$K3S_TOKEN_FILE]
> +    --server value, -s value            Server to connect to [\$K3S_URL]
> +    --node-name value                   Node name [\$K3S_NODE_NAME]
> +    --resolv-conf value                 Kubelet resolv.conf file
> [\$K3S_RESOLV_CONF]
> +    --cluster-secret value              Shared secret used to bootstrap
> a cluster [\$K3S_CLUSTER_SECRET]
> +    -h                                  print this
> +"
> +}
> +
> +[ $# -gt 0 ] || {
> +       usage
> +       exit
> +}
> +
> +case $1 in
> +       -*)
> +               ;;
> +       *)
> +               usage
> +               exit 1
> +               ;;
> +esac
> +
> +rm -f $ENV_CONF
> +mkdir -p ${ENV_CONF%/*}
> +echo [Service] > $ENV_CONF
> +
> +while getopts "t:s:-:h" opt; do
> +       case $opt in
> +               h)
> +                       usage
> +                       exit
> +                       ;;
> +               t)
> +                       VAR_NAME=K3S_TOKEN
> +                       ;;
> +               s)
> +                       VAR_NAME=K3S_URL
> +                       ;;
> +               -)
> +                       [ $# -ge $OPTIND ] || {
> +                               usage
> +                               exit 1
> +                       }
> +                       opt=$OPTARG
> +                       eval OPTARG='$'$OPTIND
> +                       OPTIND=$(($OPTIND + 1))
> +                       case $opt in
> +                               token)
> +                                       VAR_NAME=K3S_TOKEN
> +                                       ;;
> +                               token-file)
> +                                       VAR_NAME=K3S_TOKEN_FILE
> +                                       ;;
> +                               server)
> +                                       VAR_NAME=K3S_URL
> +                                       ;;
> +                               node-name)
> +                                       VAR_NAME=K3S_NODE_NAME
> +                                       ;;
> +                               resolv-conf)
> +                                       VAR_NAME=K3S_RESOLV_CONF
> +                                       ;;
> +                               cluster-secret)
> +                                       VAR_NAME=K3S_CLUSTER_SECRET
> +                                       ;;
> +                               help)
> +                                       usage
> +                                       exit
> +                                       ;;
> +                               *)
> +                                       usage
> +                                       exit 1
> +                                       ;;
> +                       esac
> +                       ;;
> +               *)
> +                       usage
> +                       exit 1
> +                       ;;
> +       esac
> +    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
> +done
> +
> +chmod 0644 $ENV_CONF
> +rm -rf /var/lib/rancher/k3s/agent
> +systemctl daemon-reload
> +systemctl restart k3s-agent
> +systemctl enable k3s-agent.service
> diff --git a/recipes-containers/k3s/k3s/k3s-agent.service
> b/recipes-containers/k3s/k3s/k3s-agent.service
> new file mode 100644
> index 0000000..9f9016d
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent.service
> @@ -0,0 +1,26 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes Agent
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=control-group
> +Delegate=yes
> +LimitNOFILE=infinity
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s agent
> +ExecStopPost=/usr/local/bin/k3s-clean
> +
> diff --git a/recipes-containers/k3s/k3s/k3s-clean
> b/recipes-containers/k3s/k3s/k3s-clean
> new file mode 100755
> index 0000000..8eff829
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-clean
> @@ -0,0 +1,25 @@
> +#!/bin/sh -eu
> +# SPDX-License-Identifier: Apache-2.0
> +do_unmount() {
> +       [ $# -eq 2 ] || return
> +       local mounts=
> +       while read ignore mount ignore; do
> +               case $mount in
> +                       $1/*|$2/*)
> +                               mounts="$mount $mounts"
> +                               ;;
> +               esac
> +       done </proc/self/mounts
> +       [ -z "$mounts" ] || umount $mounts
> +}
> +
> +do_unmount /run/k3s /var/lib/rancher/k3s
> +
> +ip link show | grep 'master cni0' | while read ignore iface ignore; do
> +    iface=${iface%%@*}
> +    [ -z "$iface" ] || ip link delete $iface
> +done
> +
> +ip link delete cni0
> +ip link delete flannel.1
> +rm -rf /var/lib/cni/
> diff --git a/recipes-containers/k3s/k3s/k3s.service
> b/recipes-containers/k3s/k3s/k3s.service
> new file mode 100644
> index 0000000..34c7a80
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s.service
> @@ -0,0 +1,27 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=process
> +Delegate=yes
> +# Having non-zero Limit*s causes performance problems due to accounting
> overhead
> +# in the kernel. We recommend using cgroups to do container-local
> accounting.
> +LimitNOFILE=1048576
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s server
> +
> diff --git a/recipes-containers/k3s/k3s_git.bb
> b/recipes-containers/k3s/k3s_git.bb
> new file mode 100644
> index 0000000..cfc2c64
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s_git.bb
> @@ -0,0 +1,75 @@
> +SUMMARY = "Production-Grade Container Scheduling and Management"
> +DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant
> Kubernetes."
> +HOMEPAGE = "https://k3s.io/"
> +LICENSE = "Apache-2.0"
> +LIC_FILES_CHKSUM =
> "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
> +PV = "v1.18.9+k3s1-dirty"
> +
> +SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
> +           file://k3s.service \
> +           file://k3s-agent.service \
> +           file://k3s-agent \
> +           file://k3s-clean \
> +           file://cni-containerd-net.conf \
> +
> file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
> +          "
> +SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
> +SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
> +
> +inherit go
> +inherit goarch
> +inherit systemd
> +
> +PACKAGECONFIG = ""
> +PACKAGECONFIG[upx] = ",,upx-native"
> +GO_IMPORT = "import"
> +GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
> +                    -X
> github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s',
> d, 1)[:8]} \
> +                    -w -s \
> +                   "
> +BIN_PREFIX ?= "${exec_prefix}/local"
> +
> +do_compile() {
> +        export
> GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
> +        export CGO_ENABLED="1"
> +        export GOFLAGS="-mod=vendor"
> +        cd ${S}/src/import
> +        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}"
> -o ./dist/artifacts/k3s ./cmd/server/main.go
> +        # Use UPX if it is enabled (and thus exists) to compress binary
> +        if command -v upx > /dev/null 2>&1; then
> +                upx -9 ./dist/artifacts/k3s
> +        fi
> +}
> +do_install() {
> +        install -d "${D}${BIN_PREFIX}/bin"
> +        install -m 755 "${S}/src/import/dist/artifacts/k3s"
> "${D}${BIN_PREFIX}/bin"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
> +        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
> +        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf"
> "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
> +        if
> ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
> +                install -D -m 0644 "${WORKDIR}/k3s.service"
> "${D}${systemd_system_unitdir}/k3s.service"
> +                install -D -m 0644 "${WORKDIR}/k3s-agent.service"
> "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                sed -i
> "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g"
> "${D}${systemd_system_unitdir}/k3s.service"
> "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                install -m 755 "${WORKDIR}/k3s-agent"
> "${D}${BIN_PREFIX}/bin"
> +        fi
> +}
> +
> +PACKAGES =+ "${PN}-server ${PN}-agent"
> +
> +SYSTEMD_PACKAGES =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server
> ${PN}-agent','',d)}"
> +SYSTEMD_SERVICE_${PN}-server =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
> +SYSTEMD_SERVICE_${PN}-agent =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
> +SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
> +
> +FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
> +
> +RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2
> ipset virtual/containerd"
> +RDEPENDS_${PN}-server = "${PN}"
> +RDEPENDS_${PN}-agent = "${PN}"
> +
> +RCONFLICTS_${PN} = "kubectl"
> +
> +INHIBIT_PACKAGE_STRIP = "1"
> +INSANE_SKIP_${PN} += "ldflags already-stripped"
> --
> 2.20.1
>
>
> 
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread
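The `do_install()` task in the recipe quoted above relocates the unit files' `Exec*` lines with a single sed expression. A standalone sketch of that rewrite, runnable outside BitBake — `rewrite_exec` is an illustrative helper name, and `BIN_PREFIX` is simply set to the recipe's default value:

```shell
#!/bin/sh
# Mirror the sed expression from the recipe's do_install(): rewrite any
# "Exec*=<path>k3s" line so the binary path becomes $BIN_PREFIX/bin/k3s,
# while leaving Exec* lines that do not mention k3s (e.g. modprobe) alone.
BIN_PREFIX=/usr/local

rewrite_exec() {
    sed "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g"
}

# A unit-file fragment with a stale k3s path:
printf '%s\n' \
    'ExecStartPre=-/sbin/modprobe overlay' \
    'ExecStart=/opt/bin/k3s server' | rewrite_exec
# prints the modprobe line unchanged, then:
#   ExecStart=/usr/local/bin/k3s server
```

Because `\3` is greedy up to the last `k3s` on the line, `ExecStopPost=/usr/local/bin/k3s-clean` in k3s-agent.service is also rewritten correctly, with the `-clean` suffix preserved.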

* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-10-01 10:32                           ` Joakim Roubert
@ 2020-10-14 16:38                             ` Bruce Ashfield
  2020-10-15 11:40                               ` Joakim Roubert
  2020-10-15 11:47                               ` [meta-virtualization][PATCH v4] " Joakim Roubert
  0 siblings, 2 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-10-14 16:38 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

Something has happened to this version of the patch in transit.

When I save and apply it using my normal workflow, the patch is corrupted.

Can you re-send it with a "v4" in the subject? That will hopefully
let it escape Gmail's conversation filter.

I'll try and hack the patch by hand, but would like to see a v4 for comparison.

I'm currently importing some recipes for temporary support in
meta-virt and creating a networking recipe while I wait for v4.

Cheers,

Bruce

On Thu, Oct 1, 2020 at 6:32 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
>
>
> Change-Id: Id1c52727593bc5ea8d0cd2de192faa44304d7a45
> Signed-off-by: Joakim Roubert <joakimr@axis.com>
> ---
>   recipes-containers/k3s/README.md              |  30 +++++
>   ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
>   .../k3s/k3s/cni-containerd-net.conf           |  24 ++++
>   recipes-containers/k3s/k3s/k3s-agent          | 103 ++++++++++++++++++
>   recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
>   recipes-containers/k3s/k3s/k3s-clean          |  30 +++++
>   recipes-containers/k3s/k3s/k3s.service        |  27 +++++
>   recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
>   8 files changed, 342 insertions(+)
>   create mode 100644 recipes-containers/k3s/README.md
>   create mode 100644
> recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
>   create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
>   create mode 100755 recipes-containers/k3s/k3s/k3s-agent
>   create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
>   create mode 100755 recipes-containers/k3s/k3s/k3s-clean
>   create mode 100644 recipes-containers/k3s/k3s/k3s.service
>   create mode 100644 recipes-containers/k3s/k3s_git.bb
>
> diff --git a/recipes-containers/k3s/README.md
> b/recipes-containers/k3s/README.md
> new file mode 100644
> index 0000000..3fe5ccd
> --- /dev/null
> +++ b/recipes-containers/k3s/README.md
> @@ -0,0 +1,30 @@
> +# k3s: Lightweight Kubernetes
> +
> +Rancher's [k3s](https://k3s.io/), available under
> +[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
> +lightweight Kubernetes suitable for small/edge devices. There are use cases
> +where the
> +[installation procedures provided by
> Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
> +are not ideal but a BitBake-built version is what is needed, and only a few
> +modifications to the [k3s source code](https://github.com/rancher/k3s) are
> +needed to accomplish that.
> +
> +## CNI
> +
> +By default, K3s will run with flannel as the CNI, using VXLAN as the
> default
> +backend. It is both possible to change the flannel backend and to
> change from
> +flannel to another CNI.
> +
> +Please see
> <https://rancher.com/docs/k3s/latest/en/installation/network-options/>
> +for further k3s networking details.
> +
> +## Configure and run a k3s agent
> +
> +The convenience script `k3s-agent` can be used to set up a k3s agent
> (service):
> +
> +```shell
> +k3s-agent -t <token> -s https://<master>:6443
> +```
> +
> +(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
> +k3s master.)
> diff --git
> a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> new file mode 100644
> index 0000000..8205d73
> --- /dev/null
> +++
> b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> @@ -0,0 +1,27 @@
> +From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
> +From: Erik Jansson <erikja@axis.com>
> +Date: Wed, 16 Oct 2019 15:07:48 +0200
> +Subject: [PATCH] Finding host-local in /usr/libexec
> +
> +Upstream-status: Inappropriate [embedded specific]
> +Signed-off-by: <erikja@axis.com>
> +---
> + pkg/agent/config/config.go | 2 +-
> + 1 file changed, 1 insertion(+), 1 deletion(-)
> +
> +diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
> +index b4296f360a..6af9dab895 100644
> +--- a/pkg/agent/config/config.go
> ++++ b/pkg/agent/config/config.go
> +@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
> +               return nil, err
> +       }
> +
> +-      hostLocal, err := exec.LookPath("host-local")
> ++      hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
> +       if err != nil {
> +               return nil, errors.Wrapf(err, "failed to find host-local")
> +       }
> +--
> +2.11.0
> +
> diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf
> b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> new file mode 100644
> index 0000000..ca434d6
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> @@ -0,0 +1,24 @@
> +{
> +  "cniVersion": "0.4.0",
> +  "name": "containerd-net",
> +  "plugins": [
> +    {
> +      "type": "bridge",
> +      "bridge": "cni0",
> +      "isGateway": true,
> +      "ipMasq": true,
> +      "promiscMode": true,
> +      "ipam": {
> +        "type": "host-local",
> +        "subnet": "10.88.0.0/16",
> +        "routes": [
> +          { "dst": "0.0.0.0/0" }
> +        ]
> +      }
> +    },
> +    {
> +      "type": "portmap",
> +      "capabilities": {"portMappings": true}
> +    }
> +  ]
> +}
> diff --git a/recipes-containers/k3s/k3s/k3s-agent
> b/recipes-containers/k3s/k3s/k3s-agent
> new file mode 100755
> index 0000000..b6c6cb6
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent
> @@ -0,0 +1,103 @@
> +#!/bin/sh -eu
> +#
> +# Copyright (C) 2020 Axis Communications AB
> +#
> +# SPDX-License-Identifier: Apache-2.0
> +
> +ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
> +
> +usage() {
> +       echo "
> +USAGE:
> +    ${0##*/} [OPTIONS]
> +OPTIONS:
> +    --token value, -t value             Token to use for authentication
> [\$K3S_TOKEN]
> +    --token-file value                  Token file to use for
> authentication [\$K3S_TOKEN_FILE]
> +    --server value, -s value            Server to connect to [\$K3S_URL]
> +    --node-name value                   Node name [\$K3S_NODE_NAME]
> +    --resolv-conf value                 Kubelet resolv.conf file
> [\$K3S_RESOLV_CONF]
> +    --cluster-secret value              Shared secret used to bootstrap
> a cluster [\$K3S_CLUSTER_SECRET]
> +    -h                                  print this
> +"
> +}
> +
> +[ $# -gt 0 ] || {
> +       usage
> +       exit
> +}
> +
> +case $1 in
> +       -*)
> +               ;;
> +       *)
> +               usage
> +               exit 1
> +               ;;
> +esac
> +
> +rm -f $ENV_CONF
> +mkdir -p ${ENV_CONF%/*}
> +echo [Service] > $ENV_CONF
> +
> +while getopts "t:s:-:h" opt; do
> +       case $opt in
> +               h)
> +                       usage
> +                       exit
> +                       ;;
> +               t)
> +                       VAR_NAME=K3S_TOKEN
> +                       ;;
> +               s)
> +                       VAR_NAME=K3S_URL
> +                       ;;
> +               -)
> +                       [ $# -ge $OPTIND ] || {
> +                               usage
> +                               exit 1
> +                       }
> +                       opt=$OPTARG
> +                       eval OPTARG='$'$OPTIND
> +                       OPTIND=$(($OPTIND + 1))
> +                       case $opt in
> +                               token)
> +                                       VAR_NAME=K3S_TOKEN
> +                                       ;;
> +                               token-file)
> +                                       VAR_NAME=K3S_TOKEN_FILE
> +                                       ;;
> +                               server)
> +                                       VAR_NAME=K3S_URL
> +                                       ;;
> +                               node-name)
> +                                       VAR_NAME=K3S_NODE_NAME
> +                                       ;;
> +                               resolv-conf)
> +                                       VAR_NAME=K3S_RESOLV_CONF
> +                                       ;;
> +                               cluster-secret)
> +                                       VAR_NAME=K3S_CLUSTER_SECRET
> +                                       ;;
> +                               help)
> +                                       usage
> +                                       exit
> +                                       ;;
> +                               *)
> +                                       usage
> +                                       exit 1
> +                                       ;;
> +                       esac
> +                       ;;
> +               *)
> +                       usage
> +                       exit 1
> +                       ;;
> +       esac
> +    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
> +done
> +
> +chmod 0644 $ENV_CONF
> +rm -rf /var/lib/rancher/k3s/agent
> +systemctl daemon-reload
> +systemctl restart k3s-agent
> +systemctl enable k3s-agent.service
> diff --git a/recipes-containers/k3s/k3s/k3s-agent.service
> b/recipes-containers/k3s/k3s/k3s-agent.service
> new file mode 100644
> index 0000000..9f9016d
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent.service
> @@ -0,0 +1,26 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes Agent
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=control-group
> +Delegate=yes
> +LimitNOFILE=infinity
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s agent
> +ExecStopPost=/usr/local/bin/k3s-clean
> +
> diff --git a/recipes-containers/k3s/k3s/k3s-clean
> b/recipes-containers/k3s/k3s/k3s-clean
> new file mode 100755
> index 0000000..8eca918
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-clean
> @@ -0,0 +1,30 @@
> +#!/bin/sh -eu
> +#
> +# Copyright (C) 2020 Axis Communications AB
> +#
> +# SPDX-License-Identifier: Apache-2.0
> +
> +do_unmount() {
> +       [ $# -eq 2 ] || return
> +       local mounts=
> +       while read ignore mount ignore; do
> +               case $mount in
> +                       $1/*|$2/*)
> +                               mounts="$mount $mounts"
> +                               ;;
> +               esac
> +       done </proc/self/mounts
> +       [ -z "$mounts" ] || umount $mounts
> +}
> +
> +do_unmount /run/k3s /var/lib/rancher/k3s
> +
> +# The lines below come from install.sh's create_killall() function:
> +ip link show 2>/dev/null | grep 'master cni0' | while read ignore iface
> ignore; do
> +    iface=${iface%%@*}
> +    [ -z "$iface" ] || ip link delete $iface
> +done
> +
> +ip link delete cni0
> +ip link delete flannel.1
> +rm -rf /var/lib/cni/
> diff --git a/recipes-containers/k3s/k3s/k3s.service
> b/recipes-containers/k3s/k3s/k3s.service
> new file mode 100644
> index 0000000..34c7a80
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s.service
> @@ -0,0 +1,27 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=process
> +Delegate=yes
> +# Having non-zero Limit*s causes performance problems due to accounting
> overhead
> +# in the kernel. We recommend using cgroups to do container-local
> accounting.
> +LimitNOFILE=1048576
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s server
> +
> diff --git a/recipes-containers/k3s/k3s_git.bb
> b/recipes-containers/k3s/k3s_git.bb
> new file mode 100644
> index 0000000..cfc2c64
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s_git.bb
> @@ -0,0 +1,75 @@
> +SUMMARY = "Production-Grade Container Scheduling and Management"
> +DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant
> Kubernetes."
> +HOMEPAGE = "https://k3s.io/"
> +LICENSE = "Apache-2.0"
> +LIC_FILES_CHKSUM =
> "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
> +PV = "v1.18.9+k3s1-dirty"
> +
> +SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
> +           file://k3s.service \
> +           file://k3s-agent.service \
> +           file://k3s-agent \
> +           file://k3s-clean \
> +           file://cni-containerd-net.conf \
> +
> file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
> +          "
> +SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
> +SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
> +
> +inherit go
> +inherit goarch
> +inherit systemd
> +
> +PACKAGECONFIG = ""
> +PACKAGECONFIG[upx] = ",,upx-native"
> +GO_IMPORT = "import"
> +GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
> +                    -X
> github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s',
> d, 1)[:8]} \
> +                    -w -s \
> +                   "
> +BIN_PREFIX ?= "${exec_prefix}/local"
> +
> +do_compile() {
> +        export
> GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
> +        export CGO_ENABLED="1"
> +        export GOFLAGS="-mod=vendor"
> +        cd ${S}/src/import
> +        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}"
> -o ./dist/artifacts/k3s ./cmd/server/main.go
> +        # Use UPX if it is enabled (and thus exists) to compress binary
> +        if command -v upx > /dev/null 2>&1; then
> +                upx -9 ./dist/artifacts/k3s
> +        fi
> +}
> +do_install() {
> +        install -d "${D}${BIN_PREFIX}/bin"
> +        install -m 755 "${S}/src/import/dist/artifacts/k3s"
> "${D}${BIN_PREFIX}/bin"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
> +        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
> +        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf"
> "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
> +        if
> ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
> +                install -D -m 0644 "${WORKDIR}/k3s.service"
> "${D}${systemd_system_unitdir}/k3s.service"
> +                install -D -m 0644 "${WORKDIR}/k3s-agent.service"
> "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                sed -i
> "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g"
> "${D}${systemd_system_unitdir}/k3s.service"
> "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                install -m 755 "${WORKDIR}/k3s-agent"
> "${D}${BIN_PREFIX}/bin"
> +        fi
> +}
> +
> +PACKAGES =+ "${PN}-server ${PN}-agent"
> +
> +SYSTEMD_PACKAGES =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server
> ${PN}-agent','',d)}"
> +SYSTEMD_SERVICE_${PN}-server =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
> +SYSTEMD_SERVICE_${PN}-agent =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
> +SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
> +
> +FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
> +
> +RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2
> ipset virtual/containerd"
> +RDEPENDS_${PN}-server = "${PN}"
> +RDEPENDS_${PN}-agent = "${PN}"
> +
> +RCONFLICTS_${PN} = "kubectl"
> +
> +INHIBIT_PACKAGE_STRIP = "1"
> +INSANE_SKIP_${PN} += "ldflags already-stripped"
> --
> 2.20.1
>
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II
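The k3s-agent helper in the patch above accepts GNU-style long options with plain POSIX `getopts` by putting `-:` in the option string. A minimal, self-contained sketch of just that trick — the `parse_args` name and the single `--token`/`-t` option are illustrative, not part of the patch:

```shell
#!/bin/sh
# getopts itself only knows single-letter options. With "-:" in the option
# string, "--token" is reported as option "-" with OPTARG set to "token";
# the option's value is then pulled from the next positional parameter by
# hand, exactly as k3s-agent does.
parse_args() {
    OPTIND=1    # reset so the function can be called more than once
    while getopts "t:-:" opt; do
        case $opt in
            t)
                echo "K3S_TOKEN=$OPTARG"
                ;;
            -)
                long=$OPTARG
                eval OPTARG='$'$OPTIND    # the value is the next parameter
                OPTIND=$((OPTIND + 1))
                case $long in
                    token)
                        echo "K3S_TOKEN=$OPTARG"
                        ;;
                    *)
                        echo "unknown option --$long" >&2
                        return 1
                        ;;
                esac
                ;;
            *)
                return 1
                ;;
        esac
    done
}

parse_args --token abc123   # prints K3S_TOKEN=abc123
parse_args -t abc123        # prints the same
```

The `eval`/`OPTIND` bookkeeping is what lets `--server https://host:6443` style options consume a separate argument even though `getopts` never sees it as one.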


* Re: [meta-virtualization][PATCH] Adding k3s recipe
  2020-10-14 16:38                             ` Bruce Ashfield
@ 2020-10-15 11:40                               ` Joakim Roubert
  2020-10-15 11:47                               ` [meta-virtualization][PATCH v4] " Joakim Roubert
  1 sibling, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-10-15 11:40 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-10-14 18:38, Bruce Ashfield wrote:
> 
> Can you re-send it with a "v4" in the subject ? That will hopefully
> let it escape gmail's conversation filter.

Sure! No problem.

BR,

/Joakim
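For reference, the prefix matching that k3s-clean's `do_unmount()` (quoted earlier in the thread) performs on `/proc/self/mounts` can be exercised against an ordinary file; `collect_mounts` and the sample input below are illustrative, not part of the patch:

```shell
#!/bin/sh
# Same loop shape as k3s-clean's do_unmount(): read a mounts file
# (device, mountpoint, rest of line), keep every mountpoint under either
# prefix, and prepend so that mounts made later come first -- nested
# mounts are then unmounted before their parents.
collect_mounts() {
    file=$3
    mounts=
    while read ignore mount ignore; do
        case $mount in
            $1/*|$2/*)
                mounts="$mount $mounts"
                ;;
        esac
    done <"$file"
    printf '%s\n' "$mounts"
}
```

Note that the loop redirects from the file instead of piping into `while`, so `mounts` survives the loop in a POSIX shell.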


* [meta-virtualization][PATCH v4] Adding k3s recipe
  2020-10-14 16:38                             ` Bruce Ashfield
  2020-10-15 11:40                               ` Joakim Roubert
@ 2020-10-15 11:47                               ` Joakim Roubert
  2020-10-15 15:02                                 ` Bruce Ashfield
  1 sibling, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-10-15 11:47 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization



Change-Id: Id1c52727593bc5ea8d0cd2de192faa44304d7a45
Signed-off-by: Joakim Roubert <joakimr@axis.com>
---
  recipes-containers/k3s/README.md              |  30 +++++
  ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
  .../k3s/k3s/cni-containerd-net.conf           |  24 ++++
  recipes-containers/k3s/k3s/k3s-agent          | 103 ++++++++++++++++++
  recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
  recipes-containers/k3s/k3s/k3s-clean          |  30 +++++
  recipes-containers/k3s/k3s/k3s.service        |  27 +++++
  recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
  8 files changed, 342 insertions(+)
  create mode 100644 recipes-containers/k3s/README.md
  create mode 100644 
recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
  create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
  create mode 100755 recipes-containers/k3s/k3s/k3s-agent
  create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
  create mode 100755 recipes-containers/k3s/k3s/k3s-clean
  create mode 100644 recipes-containers/k3s/k3s/k3s.service
  create mode 100644 recipes-containers/k3s/k3s_git.bb

diff --git a/recipes-containers/k3s/README.md 
b/recipes-containers/k3s/README.md
new file mode 100644
index 0000000..3fe5ccd
--- /dev/null
+++ b/recipes-containers/k3s/README.md
@@ -0,0 +1,30 @@
+# k3s: Lightweight Kubernetes
+
+Rancher's [k3s](https://k3s.io/), available under
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
+lightweight Kubernetes suitable for small/edge devices. There are use cases
+where the
+[installation procedures provided by 
Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
+are not ideal but a BitBake-built version is what is needed, and only a few
+modifications to the [k3s source code](https://github.com/rancher/k3s) are
+needed to accomplish that.
+
+## CNI
+
+By default, K3s will run with flannel as the CNI, using VXLAN as the 
default
+backend. It is both possible to change the flannel backend and to 
change from
+flannel to another CNI.
+
+Please see 
<https://rancher.com/docs/k3s/latest/en/installation/network-options/>
+for further k3s networking details.
+
+## Configure and run a k3s agent
+
+The convenience script `k3s-agent` can be used to set up a k3s agent 
(service):
+
+```shell
+k3s-agent -t <token> -s https://<master>:6443
+```
+
+(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
+k3s master.)
diff --git 
a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
new file mode 100644
index 0000000..8205d73
--- /dev/null
+++ 
b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
@@ -0,0 +1,27 @@
+From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
+From: Erik Jansson <erikja@axis.com>
+Date: Wed, 16 Oct 2019 15:07:48 +0200
+Subject: [PATCH] Finding host-local in /usr/libexec
+
+Upstream-status: Inappropriate [embedded specific]
+Signed-off-by: <erikja@axis.com>
+---
+ pkg/agent/config/config.go | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
+index b4296f360a..6af9dab895 100644
+--- a/pkg/agent/config/config.go
++++ b/pkg/agent/config/config.go
+@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
+ 		return nil, err
+ 	}
+
+-	hostLocal, err := exec.LookPath("host-local")
++	hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
+ 	if err != nil {
+ 		return nil, errors.Wrapf(err, "failed to find host-local")
+ 	}
+--
+2.11.0
+
diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf 
b/recipes-containers/k3s/k3s/cni-containerd-net.conf
new file mode 100644
index 0000000..ca434d6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
@@ -0,0 +1,24 @@
+{
+  "cniVersion": "0.4.0",
+  "name": "containerd-net",
+  "plugins": [
+    {
+      "type": "bridge",
+      "bridge": "cni0",
+      "isGateway": true,
+      "ipMasq": true,
+      "promiscMode": true,
+      "ipam": {
+        "type": "host-local",
+        "subnet": "10.88.0.0/16",
+        "routes": [
+          { "dst": "0.0.0.0/0" }
+        ]
+      }
+    },
+    {
+      "type": "portmap",
+      "capabilities": {"portMappings": true}
+    }
+  ]
+}
diff --git a/recipes-containers/k3s/k3s/k3s-agent 
b/recipes-containers/k3s/k3s/k3s-agent
new file mode 100755
index 0000000..b6c6cb6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent
@@ -0,0 +1,103 @@
+#!/bin/sh -eu
+#
+# Copyright (C) 2020 Axis Communications AB
+#
+# SPDX-License-Identifier: Apache-2.0
+
+ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
+
+usage() {
+	echo "
+USAGE:
+    ${0##*/} [OPTIONS]
+OPTIONS:
+    --token value, -t value             Token to use for authentication 
[\$K3S_TOKEN]
+    --token-file value                  Token file to use for 
authentication [\$K3S_TOKEN_FILE]
+    --server value, -s value            Server to connect to [\$K3S_URL]
+    --node-name value                   Node name [\$K3S_NODE_NAME]
+    --resolv-conf value                 Kubelet resolv.conf file 
[\$K3S_RESOLV_CONF]
+    --cluster-secret value              Shared secret used to bootstrap 
a cluster [\$K3S_CLUSTER_SECRET]
+    -h                                  print this
+"
+}
+
+[ $# -gt 0 ] || {
+	usage
+	exit
+}
+
+case $1 in
+	-*)
+		;;
+	*)
+		usage
+		exit 1
+		;;
+esac
+
+rm -f $ENV_CONF
+mkdir -p ${ENV_CONF%/*}
+echo [Service] > $ENV_CONF
+
+while getopts "t:s:-:h" opt; do
+	case $opt in
+		h)
+			usage
+			exit
+			;;
+		t)
+			VAR_NAME=K3S_TOKEN
+			;;
+		s)
+			VAR_NAME=K3S_URL
+			;;
+		-)
+			[ $# -ge $OPTIND ] || {
+				usage
+				exit 1
+			}
+			opt=$OPTARG
+			eval OPTARG='$'$OPTIND
+			OPTIND=$(($OPTIND + 1))
+			case $opt in
+				token)
+					VAR_NAME=K3S_TOKEN
+					;;
+				token-file)
+					VAR_NAME=K3S_TOKEN_FILE
+					;;
+				server)
+					VAR_NAME=K3S_URL
+					;;
+				node-name)
+					VAR_NAME=K3S_NODE_NAME
+					;;
+				resolv-conf)
+					VAR_NAME=K3S_RESOLV_CONF
+					;;
+				cluster-secret)
+					VAR_NAME=K3S_CLUSTER_SECRET
+					;;
+				help)
+					usage
+					exit
+					;;
+				*)
+					usage
+					exit 1
+					;;
+			esac
+			;;
+		*)
+			usage
+			exit 1
+			;;
+	esac
+    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
+done
+
+chmod 0644 $ENV_CONF
+rm -rf /var/lib/rancher/k3s/agent
+systemctl daemon-reload
+systemctl restart k3s-agent
+systemctl enable k3s-agent.service
diff --git a/recipes-containers/k3s/k3s/k3s-agent.service 
b/recipes-containers/k3s/k3s/k3s-agent.service
new file mode 100644
index 0000000..9f9016d
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent.service
@@ -0,0 +1,26 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes Agent
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=control-group
+Delegate=yes
+LimitNOFILE=infinity
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s agent
+ExecStopPost=/usr/local/bin/k3s-clean
+
diff --git a/recipes-containers/k3s/k3s/k3s-clean 
b/recipes-containers/k3s/k3s/k3s-clean
new file mode 100755
index 0000000..8eca918
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-clean
@@ -0,0 +1,30 @@
+#!/bin/sh -eu
+#
+# Copyright (C) 2020 Axis Communications AB
+#
+# SPDX-License-Identifier: Apache-2.0
+
+do_unmount() {
+	[ $# -eq 2 ] || return
+	local mounts=
+	while read ignore mount ignore; do
+		case $mount in
+			$1/*|$2/*)
+				mounts="$mount $mounts"
+				;;
+		esac
+	done </proc/self/mounts
+	[ -z "$mounts" ] || umount $mounts
+}
+
+do_unmount /run/k3s /var/lib/rancher/k3s
+
+# The lines below come from install.sh's create_killall() function:
+ip link show 2>/dev/null | grep 'master cni0' | while read ignore iface 
ignore; do
+    iface=${iface%%@*}
+    [ -z "$iface" ] || ip link delete $iface
+done
+
+ip link delete cni0
+ip link delete flannel.1
+rm -rf /var/lib/cni/
diff --git a/recipes-containers/k3s/k3s/k3s.service 
b/recipes-containers/k3s/k3s/k3s.service
new file mode 100644
index 0000000..34c7a80
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s.service
@@ -0,0 +1,27 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=process
+Delegate=yes
+# Having non-zero Limit*s causes performance problems due to accounting 
overhead
+# in the kernel. We recommend using cgroups to do container-local 
accounting.
+LimitNOFILE=1048576
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s server
+
diff --git a/recipes-containers/k3s/k3s_git.bb 
b/recipes-containers/k3s/k3s_git.bb
new file mode 100644
index 0000000..cfc2c64
--- /dev/null
+++ b/recipes-containers/k3s/k3s_git.bb
@@ -0,0 +1,75 @@
+SUMMARY = "Production-Grade Container Scheduling and Management"
+DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant 
Kubernetes."
+HOMEPAGE = "https://k3s.io/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = 
"file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
+PV = "v1.18.9+k3s1-dirty"
+
+SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
+           file://k3s.service \
+           file://k3s-agent.service \
+           file://k3s-agent \
+           file://k3s-clean \
+           file://cni-containerd-net.conf \
+ 
file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
+          "
+SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
+SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
+
+inherit go
+inherit goarch
+inherit systemd
+
+PACKAGECONFIG = ""
+PACKAGECONFIG[upx] = ",,upx-native"
+GO_IMPORT = "import"
+GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
+                    -X 
github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s', 
d, 1)[:8]} \
+                    -w -s \
+                   "
+BIN_PREFIX ?= "${exec_prefix}/local"
+
+do_compile() {
+        export GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
+        export CGO_ENABLED="1"
+        export GOFLAGS="-mod=vendor"
+        cd ${S}/src/import
+        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}" -o ./dist/artifacts/k3s ./cmd/server/main.go
+        # Use UPX if it is enabled (and thus exists) to compress binary
+        if command -v upx > /dev/null 2>&1; then
+                upx -9 ./dist/artifacts/k3s
+        fi
+}
+do_install() {
+        install -d "${D}${BIN_PREFIX}/bin"
+        install -m 755 "${S}/src/import/dist/artifacts/k3s" "${D}${BIN_PREFIX}/bin"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
+        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
+        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf" "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
+        if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+                install -D -m 0644 "${WORKDIR}/k3s.service" "${D}${systemd_system_unitdir}/k3s.service"
+                install -D -m 0644 "${WORKDIR}/k3s-agent.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                sed -i "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g" "${D}${systemd_system_unitdir}/k3s.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                install -m 755 "${WORKDIR}/k3s-agent" "${D}${BIN_PREFIX}/bin"
+        fi
+}
+
+PACKAGES =+ "${PN}-server ${PN}-agent"
+
+SYSTEMD_PACKAGES = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server ${PN}-agent','',d)}"
+SYSTEMD_SERVICE_${PN}-server = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
+SYSTEMD_SERVICE_${PN}-agent = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
+SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
+
+FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
+
+RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2 ipset virtual/containerd"
+RDEPENDS_${PN}-server = "${PN}"
+RDEPENDS_${PN}-agent = "${PN}"
+
+RCONFLICTS_${PN} = "kubectl"
+
+INHIBIT_PACKAGE_STRIP = "1"
+INSANE_SKIP_${PN} += "ldflags already-stripped"
-- 
2.20.1
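
The recipe above gates its systemd packaging on `DISTRO_FEATURES` via
`bb.utils.contains`. For readers new to that helper, here is a simplified
shell rendering of its logic (the real implementation lives in BitBake's
`bb/utils.py` and operates on the datastore `d`; the function below is a
stand-alone sketch, not BitBake code):

```shell
# contains VAL CHECKVALUES TRUEVALUE FALSEVALUE
# Echo TRUEVALUE if every word in CHECKVALUES appears in the
# whitespace-separated VAL, otherwise echo FALSEVALUE.
contains() {
    val=$1; checks=$2; truev=$3; falsev=$4
    for c in $checks; do
        case " $val " in
            *" $c "*) ;;                 # this word is present, keep going
            *) echo "$falsev"; return ;; # one missing word fails the test
        esac
    done
    echo "$truev"
}

contains "acl ipv6 systemd" "systemd"  "k3s.service" ""   # prints k3s.service
contains "acl ipv6 systemd" "sysvinit" "k3s.service" ""   # prints an empty line
```

This is why `SYSTEMD_SERVICE_${PN}-server` above expands to `k3s.service`
only on systemd distros and to the empty string otherwise.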



^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v4] Adding k3s recipe
  2020-10-15 11:47                               ` [meta-virtualization][PATCH v4] " Joakim Roubert
@ 2020-10-15 15:02                                 ` Bruce Ashfield
  2020-10-20 11:14                                   ` [meta-virtualization][PATCH v5] " Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-10-15 15:02 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

I just merged 3 patches from the mailing list, using my standard
workflow (save mbox, apply, build, push).

But with your patch, saved the same way, I get:

------------

build [/home/bruc...ualization]> git am -s k3.mbox
Applying: Adding k3s recipe
.git/rebase-apply/patch:35: trailing whitespace.
[installation procedures provided by
error: corrupt patch at line 36
Patch failed at 0001 Adding k3s recipe
hint: Use 'git am --show-current-patch' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

----------

I've tried hacking the patch by hand to make it apply, but that is just
making things worse. Is there another way you can send the patch?

Bruce
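
For anyone hitting the same "corrupt patch" failure: the usual culprit is a
mail client re-wrapping long diff lines, and that can be spotted before
running `git am`. A hedged sketch (the function name and invocation are
illustrative, not part of anyone's workflow here):

```shell
# Flag hunk-body lines that do not start with a valid unified-diff prefix
# (" ", "+", "-", "@", or "\"); such lines usually mean the mailer wrapped
# a long line onto a continuation line of its own.
check_wrap() {
    awk '
        /^@@ /         { inhunk = 1; next }  # hunk body follows each @@ header
        /^diff --git / { inhunk = 0 }        # a new file header ends the hunk
        inhunk && $0 !~ /^[ @+\\-]/ {
            printf "line %d looks wrapped: %s\n", NR, $0
            bad = 1
        }
        END { exit bad }
    ' "$@"
}
```

Run over a saved mbox (e.g. `check_wrap k3.mbox`), it prints each suspect
line and exits non-zero if any were found.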


On Thu, Oct 15, 2020 at 7:47 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
>
>
> Change-Id: Id1c52727593bc5ea8d0cd2de192faa44304d7a45
> Signed-off-by: Joakim Roubert <joakimr@axis.com>
> ---
>   recipes-containers/k3s/README.md              |  30 +++++
>   ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
>   .../k3s/k3s/cni-containerd-net.conf           |  24 ++++
>   recipes-containers/k3s/k3s/k3s-agent          | 103 ++++++++++++++++++
>   recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
>   recipes-containers/k3s/k3s/k3s-clean          |  30 +++++
>   recipes-containers/k3s/k3s/k3s.service        |  27 +++++
>   recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
>   8 files changed, 342 insertions(+)
>   create mode 100644 recipes-containers/k3s/README.md
>   create mode 100644
> recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
>   create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
>   create mode 100755 recipes-containers/k3s/k3s/k3s-agent
>   create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
>   create mode 100755 recipes-containers/k3s/k3s/k3s-clean
>   create mode 100644 recipes-containers/k3s/k3s/k3s.service
>   create mode 100644 recipes-containers/k3s/k3s_git.bb
>
> diff --git a/recipes-containers/k3s/README.md
> b/recipes-containers/k3s/README.md
> new file mode 100644
> index 0000000..3fe5ccd
> --- /dev/null
> +++ b/recipes-containers/k3s/README.md
> @@ -0,0 +1,30 @@
> +# k3s: Lightweight Kubernetes
> +
> +Rancher's [k3s](https://k3s.io/), available under
> +[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
> +lightweight Kubernetes suitable for small/edge devices. There are use cases
> +where the
> +[installation procedures provided by
> Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
> +are not ideal but a bitbake-built version is what is needed. And only a few
> +mods to the [k3s source code](https://github.com/rancher/k3s) is needed to
> +accomplish that.
> +
> +## CNI
> +
> +By default, K3s will run with flannel as the CNI, using VXLAN as the
> default
> +backend. It is both possible to change the flannel backend and to
> change from
> +flannel to another CNI.
> +
> +Please see
> <https://rancher.com/docs/k3s/latest/en/installation/network-options/>
> +for further k3s networking details.
> +
> +## Configure and run a k3s agent
> +
> +The convenience script `k3s-agent` can be used to set up a k3s agent
> (service):
> +
> +```shell
> +k3s-agent -t <token> -s https://<master>:6443
> +```
> +
> +(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
> +k3s master.)
> diff --git
> a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> new file mode 100644
> index 0000000..8205d73
> --- /dev/null
> +++
> b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> @@ -0,0 +1,27 @@
> +From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
> +From: Erik Jansson <erikja@axis.com>
> +Date: Wed, 16 Oct 2019 15:07:48 +0200
> +Subject: [PATCH] Finding host-local in /usr/libexec
> +
> +Upstream-status: Inappropriate [embedded specific]
> +Signed-off-by: <erikja@axis.com>
> +---
> + pkg/agent/config/config.go | 2 +-
> + 1 file changed, 1 insertion(+), 1 deletion(-)
> +
> +diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
> +index b4296f360a..6af9dab895 100644
> +--- a/pkg/agent/config/config.go
> ++++ b/pkg/agent/config/config.go
> +@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
> +               return nil, err
> +       }
> +
> +-      hostLocal, err := exec.LookPath("host-local")
> ++      hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
> +       if err != nil {
> +               return nil, errors.Wrapf(err, "failed to find host-local")
> +       }
> +--
> +2.11.0
> +
> diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf
> b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> new file mode 100644
> index 0000000..ca434d6
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> @@ -0,0 +1,24 @@
> +{
> +  "cniVersion": "0.4.0",
> +  "name": "containerd-net",
> +  "plugins": [
> +    {
> +      "type": "bridge",
> +      "bridge": "cni0",
> +      "isGateway": true,
> +      "ipMasq": true,
> +      "promiscMode": true,
> +      "ipam": {
> +        "type": "host-local",
> +        "subnet": "10.88.0.0/16",
> +        "routes": [
> +          { "dst": "0.0.0.0/0" }
> +        ]
> +      }
> +    },
> +    {
> +      "type": "portmap",
> +      "capabilities": {"portMappings": true}
> +    }
> +  ]
> +}
> diff --git a/recipes-containers/k3s/k3s/k3s-agent
> b/recipes-containers/k3s/k3s/k3s-agent
> new file mode 100755
> index 0000000..b6c6cb6
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent
> @@ -0,0 +1,103 @@
> +#!/bin/sh -eu
> +#
> +# Copyright (C) 2020 Axis Communications AB
> +#
> +# SPDX-License-Identifier: Apache-2.0
> +
> +ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
> +
> +usage() {
> +       echo "
> +USAGE:
> +    ${0##*/} [OPTIONS]
> +OPTIONS:
> +    --token value, -t value             Token to use for authentication
> [\$K3S_TOKEN]
> +    --token-file value                  Token file to use for
> authentication [\$K3S_TOKEN_FILE]
> +    --server value, -s value            Server to connect to [\$K3S_URL]
> +    --node-name value                   Node name [\$K3S_NODE_NAME]
> +    --resolv-conf value                 Kubelet resolv.conf file
> [\$K3S_RESOLV_CONF]
> +    --cluster-secret value              Shared secret used to bootstrap
> a cluster [\$K3S_CLUSTER_SECRET]
> +    -h                                  print this
> +"
> +}
> +
> +[ $# -gt 0 ] || {
> +       usage
> +       exit
> +}
> +
> +case $1 in
> +       -*)
> +               ;;
> +       *)
> +               usage
> +               exit 1
> +               ;;
> +esac
> +
> +rm -f $ENV_CONF
> +mkdir -p ${ENV_CONF%/*}
> +echo [Service] > $ENV_CONF
> +
> +while getopts "t:s:-:h" opt; do
> +       case $opt in
> +               h)
> +                       usage
> +                       exit
> +                       ;;
> +               t)
> +                       VAR_NAME=K3S_TOKEN
> +                       ;;
> +               s)
> +                       VAR_NAME=K3S_URL
> +                       ;;
> +               -)
> +                       [ $# -ge $OPTIND ] || {
> +                               usage
> +                               exit 1
> +                       }
> +                       opt=$OPTARG
> +                       eval OPTARG='$'$OPTIND
> +                       OPTIND=$(($OPTIND + 1))
> +                       case $opt in
> +                               token)
> +                                       VAR_NAME=K3S_TOKEN
> +                                       ;;
> +                               token-file)
> +                                       VAR_NAME=K3S_TOKEN_FILE
> +                                       ;;
> +                               server)
> +                                       VAR_NAME=K3S_URL
> +                                       ;;
> +                               node-name)
> +                                       VAR_NAME=K3S_NODE_NAME
> +                                       ;;
> +                               resolv-conf)
> +                                       VAR_NAME=K3S_RESOLV_CONF
> +                                       ;;
> +                               cluster-secret)
> +                                       VAR_NAME=K3S_CLUSTER_SECRET
> +                                       ;;
> +                               help)
> +                                       usage
> +                                       exit
> +                                       ;;
> +                               *)
> +                                       usage
> +                                       exit 1
> +                                       ;;
> +                       esac
> +                       ;;
> +               *)
> +                       usage
> +                       exit 1
> +                       ;;
> +       esac
> +    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
> +done
> +
> +chmod 0644 $ENV_CONF
> +rm -rf /var/lib/rancher/k3s/agent
> +systemctl daemon-reload
> +systemctl restart k3s-agent
> +systemctl enable k3s-agent.service
> diff --git a/recipes-containers/k3s/k3s/k3s-agent.service
> b/recipes-containers/k3s/k3s/k3s-agent.service
> new file mode 100644
> index 0000000..9f9016d
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent.service
> @@ -0,0 +1,26 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes Agent
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=control-group
> +Delegate=yes
> +LimitNOFILE=infinity
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s agent
> +ExecStopPost=/usr/local/bin/k3s-clean
> +
> diff --git a/recipes-containers/k3s/k3s/k3s-clean
> b/recipes-containers/k3s/k3s/k3s-clean
> new file mode 100755
> index 0000000..8eca918
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-clean
> @@ -0,0 +1,30 @@
> +#!/bin/sh -eu
> +#
> +# Copyright (C) 2020 Axis Communications AB
> +#
> +# SPDX-License-Identifier: Apache-2.0
> +
> +do_unmount() {
> +       [ $# -eq 2 ] || return
> +       local mounts=
> +       while read ignore mount ignore; do
> +               case $mount in
> +                       $1/*|$2/*)
> +                               mounts="$mount $mounts"
> +                               ;;
> +               esac
> +       done </proc/self/mounts
> +       [ -z "$mounts" ] || umount $mounts
> +}
> +
> +do_unmount /run/k3s /var/lib/rancher/k3s
> +
> +# The lines below come from install.sh's create_killall() function:
> +ip link show 2>/dev/null | grep 'master cni0' | while read ignore iface
> ignore; do
> +    iface=${iface%%@*}
> +    [ -z "$iface" ] || ip link delete $iface
> +done
> +
> +ip link delete cni0
> +ip link delete flannel.1
> +rm -rf /var/lib/cni/
> diff --git a/recipes-containers/k3s/k3s/k3s.service
> b/recipes-containers/k3s/k3s/k3s.service
> new file mode 100644
> index 0000000..34c7a80
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s.service
> @@ -0,0 +1,27 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=process
> +Delegate=yes
> +# Having non-zero Limit*s causes performance problems due to accounting
> overhead
> +# in the kernel. We recommend using cgroups to do container-local
> accounting.
> +LimitNOFILE=1048576
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s server
> +
> diff --git a/recipes-containers/k3s/k3s_git.bb
> b/recipes-containers/k3s/k3s_git.bb
> new file mode 100644
> index 0000000..cfc2c64
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s_git.bb
> @@ -0,0 +1,75 @@
> +SUMMARY = "Production-Grade Container Scheduling and Management"
> +DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant
> Kubernetes."
> +HOMEPAGE = "https://k3s.io/"
> +LICENSE = "Apache-2.0"
> +LIC_FILES_CHKSUM =
> "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
> +PV = "v1.18.9+k3s1-dirty"
> +
> +SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
> +           file://k3s.service \
> +           file://k3s-agent.service \
> +           file://k3s-agent \
> +           file://k3s-clean \
> +           file://cni-containerd-net.conf \
> +
> file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
> +          "
> +SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
> +SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
> +
> +inherit go
> +inherit goarch
> +inherit systemd
> +
> +PACKAGECONFIG = ""
> +PACKAGECONFIG[upx] = ",,upx-native"
> +GO_IMPORT = "import"
> +GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
> +                    -X
> github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s',
> d, 1)[:8]} \
> +                    -w -s \
> +                   "
> +BIN_PREFIX ?= "${exec_prefix}/local"
> +
> +do_compile() {
> +        export
> GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
> +        export CGO_ENABLED="1"
> +        export GOFLAGS="-mod=vendor"
> +        cd ${S}/src/import
> +        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}"
> -o ./dist/artifacts/k3s ./cmd/server/main.go
> +        # Use UPX if it is enabled (and thus exists) to compress binary
> +        if command -v upx > /dev/null 2>&1; then
> +                upx -9 ./dist/artifacts/k3s
> +        fi
> +}
> +do_install() {
> +        install -d "${D}${BIN_PREFIX}/bin"
> +        install -m 755 "${S}/src/import/dist/artifacts/k3s"
> "${D}${BIN_PREFIX}/bin"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
> +        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
> +        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf"
> "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
> +        if
> ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
> +                install -D -m 0644 "${WORKDIR}/k3s.service"
> "${D}${systemd_system_unitdir}/k3s.service"
> +                install -D -m 0644 "${WORKDIR}/k3s-agent.service"
> "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                sed -i
> "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g"
> "${D}${systemd_system_unitdir}/k3s.service"
> "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                install -m 755 "${WORKDIR}/k3s-agent"
> "${D}${BIN_PREFIX}/bin"
> +        fi
> +}
> +
> +PACKAGES =+ "${PN}-server ${PN}-agent"
> +
> +SYSTEMD_PACKAGES =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server
> ${PN}-agent','',d)}"
> +SYSTEMD_SERVICE_${PN}-server =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
> +SYSTEMD_SERVICE_${PN}-agent =
> "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
> +SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
> +
> +FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
> +
> +RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2
> ipset virtual/containerd"
> +RDEPENDS_${PN}-server = "${PN}"
> +RDEPENDS_${PN}-agent = "${PN}"
> +
> +RCONFLICTS_${PN} = "kubectl"
> +
> +INHIBIT_PACKAGE_STRIP = "1"
> +INSANE_SKIP_${PN} += "ldflags already-stripped"
> --
> 2.20.1
>
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II
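
The `k3s-agent` script in the quoted patch uses a compact trick to accept
long options with plain `getopts`: a `-:` entry in the optstring makes
`getopts` report `-` for `--foo`, with the option name in `OPTARG` and the
value taken from the next positional parameter via `eval`. A stripped-down
sketch of that pattern (option names here are illustrative):

```shell
# Parse "-t VAL" / "-s VAL" and "--token VAL" / "--server VAL" with getopts.
parse() {
    OPTIND=1
    while getopts "t:s:-:" opt; do
        case $opt in
            t) echo "token=$OPTARG" ;;
            s) echo "server=$OPTARG" ;;
            -)  name=$OPTARG                 # e.g. "token" from "--token"
                eval val='$'$OPTIND          # the value is the next argument
                OPTIND=$((OPTIND + 1))
                case $name in
                    token)  echo "token=$val" ;;
                    server) echo "server=$val" ;;
                esac ;;
        esac
    done
}

parse --token abc -s https://master:6443
```

With the arguments above this prints `token=abc` followed by
`server=https://master:6443`, mirroring how the real script turns each
option into an `Environment=` line in its systemd drop-in.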

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-10-15 15:02                                 ` Bruce Ashfield
@ 2020-10-20 11:14                                   ` Joakim Roubert
  2020-10-21  3:10                                     ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-10-20 11:14 UTC (permalink / raw)
  To: meta-virtualization; +Cc: Joakim Roubert

Change-Id: Id1c52727593bc5ea8d0cd2de192faa44304d7a45
Signed-off-by: Joakim Roubert <joakimr@axis.com>
---
 recipes-containers/k3s/README.md              |  30 +++++
 ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
 .../k3s/k3s/cni-containerd-net.conf           |  24 ++++
 recipes-containers/k3s/k3s/k3s-agent          | 103 ++++++++++++++++++
 recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
 recipes-containers/k3s/k3s/k3s-clean          |  30 +++++
 recipes-containers/k3s/k3s/k3s.service        |  27 +++++
 recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
 8 files changed, 342 insertions(+)
 create mode 100644 recipes-containers/k3s/README.md
 create mode 100644 recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
 create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
 create mode 100755 recipes-containers/k3s/k3s/k3s-agent
 create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
 create mode 100755 recipes-containers/k3s/k3s/k3s-clean
 create mode 100644 recipes-containers/k3s/k3s/k3s.service
 create mode 100644 recipes-containers/k3s/k3s_git.bb

diff --git a/recipes-containers/k3s/README.md b/recipes-containers/k3s/README.md
new file mode 100644
index 0000000..3fe5ccd
--- /dev/null
+++ b/recipes-containers/k3s/README.md
@@ -0,0 +1,30 @@
+# k3s: Lightweight Kubernetes
+
+Rancher's [k3s](https://k3s.io/), available under
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
+lightweight Kubernetes suitable for small/edge devices. There are use cases
+where the
+[installation procedures provided by Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
+are not ideal, but a bitbake-built version is what is needed. Only a few
+mods to the [k3s source code](https://github.com/rancher/k3s) are needed to
+accomplish that.
+
+## CNI
+
+By default, K3s will run with flannel as the CNI, using VXLAN as the default
+backend. It is both possible to change the flannel backend and to change from
+flannel to another CNI.
+
+Please see <https://rancher.com/docs/k3s/latest/en/installation/network-options/>
+for further k3s networking details.
+
+## Configure and run a k3s agent
+
+The convenience script `k3s-agent` can be used to set up a k3s agent (service):
+
+```shell
+k3s-agent -t <token> -s https://<master>:6443
+```
+
+(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
+k3s master.)
diff --git a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
new file mode 100644
index 0000000..8205d73
--- /dev/null
+++ b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
@@ -0,0 +1,27 @@
+From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
+From: Erik Jansson <erikja@axis.com>
+Date: Wed, 16 Oct 2019 15:07:48 +0200
+Subject: [PATCH] Finding host-local in /usr/libexec
+
+Upstream-Status: Inappropriate [embedded specific]
+Signed-off-by: <erikja@axis.com>
+---
+ pkg/agent/config/config.go | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
+index b4296f360a..6af9dab895 100644
+--- a/pkg/agent/config/config.go
++++ b/pkg/agent/config/config.go
+@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
+ 		return nil, err
+ 	}
+ 
+-	hostLocal, err := exec.LookPath("host-local")
++	hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
+ 	if err != nil {
+ 		return nil, errors.Wrapf(err, "failed to find host-local")
+ 	}
+-- 
+2.11.0
+
diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf b/recipes-containers/k3s/k3s/cni-containerd-net.conf
new file mode 100644
index 0000000..ca434d6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
@@ -0,0 +1,24 @@
+{
+  "cniVersion": "0.4.0",
+  "name": "containerd-net",
+  "plugins": [
+    {
+      "type": "bridge",
+      "bridge": "cni0",
+      "isGateway": true,
+      "ipMasq": true,
+      "promiscMode": true,
+      "ipam": {
+        "type": "host-local",
+        "subnet": "10.88.0.0/16",
+        "routes": [
+          { "dst": "0.0.0.0/0" }
+        ]
+      }
+    },
+    {
+      "type": "portmap",
+      "capabilities": {"portMappings": true}
+    }
+  ]
+}
diff --git a/recipes-containers/k3s/k3s/k3s-agent b/recipes-containers/k3s/k3s/k3s-agent
new file mode 100755
index 0000000..b6c6cb6
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent
@@ -0,0 +1,103 @@
+#!/bin/sh -eu
+#
+# Copyright (C) 2020 Axis Communications AB
+#
+# SPDX-License-Identifier: Apache-2.0
+
+ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
+
+usage() {
+	echo "
+USAGE:
+    ${0##*/} [OPTIONS]
+OPTIONS:
+    --token value, -t value             Token to use for authentication [\$K3S_TOKEN]
+    --token-file value                  Token file to use for authentication [\$K3S_TOKEN_FILE]
+    --server value, -s value            Server to connect to [\$K3S_URL]
+    --node-name value                   Node name [\$K3S_NODE_NAME]
+    --resolv-conf value                 Kubelet resolv.conf file [\$K3S_RESOLV_CONF]
+    --cluster-secret value              Shared secret used to bootstrap a cluster [\$K3S_CLUSTER_SECRET]
+    -h                                  print this
+"
+}
+
+[ $# -gt 0 ] || {
+	usage
+	exit
+}
+
+case $1 in
+	-*)
+		;;
+	*)
+		usage
+		exit 1
+		;;
+esac
+
+rm -f $ENV_CONF
+mkdir -p ${ENV_CONF%/*}
+echo [Service] > $ENV_CONF
+
+while getopts "t:s:-:h" opt; do
+	case $opt in
+		h)
+			usage
+			exit
+			;;
+		t)
+			VAR_NAME=K3S_TOKEN
+			;;
+		s)
+			VAR_NAME=K3S_URL
+			;;
+		-)
+			[ $# -ge $OPTIND ] || {
+				usage
+				exit 1
+			}
+			opt=$OPTARG
+			eval OPTARG='$'$OPTIND
+			OPTIND=$(($OPTIND + 1))
+			case $opt in
+				token)
+					VAR_NAME=K3S_TOKEN
+					;;
+				token-file)
+					VAR_NAME=K3S_TOKEN_FILE
+					;;
+				server)
+					VAR_NAME=K3S_URL
+					;;
+				node-name)
+					VAR_NAME=K3S_NODE_NAME
+					;;
+				resolv-conf)
+					VAR_NAME=K3S_RESOLV_CONF
+					;;
+				cluster-secret)
+					VAR_NAME=K3S_CLUSTER_SECRET
+					;;
+				help)
+					usage
+					exit
+					;;
+				*)
+					usage
+					exit 1
+					;;
+			esac
+			;;
+		*)
+			usage
+			exit 1
+			;;
+	esac
+    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
+done
+
+chmod 0644 $ENV_CONF
+rm -rf /var/lib/rancher/k3s/agent
+systemctl daemon-reload
+systemctl restart k3s-agent
+systemctl enable k3s-agent.service
diff --git a/recipes-containers/k3s/k3s/k3s-agent.service b/recipes-containers/k3s/k3s/k3s-agent.service
new file mode 100644
index 0000000..9f9016d
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-agent.service
@@ -0,0 +1,26 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes Agent
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=control-group
+Delegate=yes
+LimitNOFILE=infinity
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s agent
+ExecStopPost=/usr/local/bin/k3s-clean
+
diff --git a/recipes-containers/k3s/k3s/k3s-clean b/recipes-containers/k3s/k3s/k3s-clean
new file mode 100755
index 0000000..8eca918
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s-clean
@@ -0,0 +1,30 @@
+#!/bin/sh -eu
+#
+# Copyright (C) 2020 Axis Communications AB
+#
+# SPDX-License-Identifier: Apache-2.0
+
+do_unmount() {
+	[ $# -eq 2 ] || return
+	local mounts=
+	while read ignore mount ignore; do
+		case $mount in
+			$1/*|$2/*)
+				mounts="$mount $mounts"
+				;;
+		esac
+	done </proc/self/mounts
+	[ -z "$mounts" ] || umount $mounts
+}
+
+do_unmount /run/k3s /var/lib/rancher/k3s
+
+# The lines below come from install.sh's create_killall() function:
+ip link show 2>/dev/null | grep 'master cni0' | while read ignore iface ignore; do
+    iface=${iface%%@*}
+    [ -z "$iface" ] || ip link delete $iface
+done
+
+ip link delete cni0
+ip link delete flannel.1
+rm -rf /var/lib/cni/
diff --git a/recipes-containers/k3s/k3s/k3s.service b/recipes-containers/k3s/k3s/k3s.service
new file mode 100644
index 0000000..34c7a80
--- /dev/null
+++ b/recipes-containers/k3s/k3s/k3s.service
@@ -0,0 +1,27 @@
+# Derived from the k3s install.sh's create_systemd_service_file() function
+[Unit]
+Description=Lightweight Kubernetes
+Documentation=https://k3s.io
+Requires=containerd.service
+After=containerd.service
+
+[Install]
+WantedBy=multi-user.target
+
+[Service]
+Type=notify
+KillMode=process
+Delegate=yes
+# Having non-zero Limit*s causes performance problems due to accounting overhead
+# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=1048576
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+TimeoutStartSec=0
+Restart=always
+RestartSec=5s
+ExecStartPre=-/sbin/modprobe br_netfilter
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s server
+
diff --git a/recipes-containers/k3s/k3s_git.bb b/recipes-containers/k3s/k3s_git.bb
new file mode 100644
index 0000000..cfc2c64
--- /dev/null
+++ b/recipes-containers/k3s/k3s_git.bb
@@ -0,0 +1,75 @@
+SUMMARY = "Production-Grade Container Scheduling and Management"
+DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant Kubernetes."
+HOMEPAGE = "https://k3s.io/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
+PV = "v1.18.9+k3s1-dirty"
+
+SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
+           file://k3s.service \
+           file://k3s-agent.service \
+           file://k3s-agent \
+           file://k3s-clean \
+           file://cni-containerd-net.conf \
+           file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
+          "
+SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
+SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
+
+inherit go
+inherit goarch
+inherit systemd
+
+PACKAGECONFIG = ""
+PACKAGECONFIG[upx] = ",,upx-native"
+GO_IMPORT = "import"
+GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
+                    -X github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s', d, 1)[:8]} \
+                    -w -s \
+                   "
+BIN_PREFIX ?= "${exec_prefix}/local"
+
+do_compile() {
+        export GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
+        export CGO_ENABLED="1"
+        export GOFLAGS="-mod=vendor"
+        cd ${S}/src/import
+        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}" -o ./dist/artifacts/k3s ./cmd/server/main.go
+        # Use UPX if it is enabled (and thus exists) to compress binary
+        if command -v upx > /dev/null 2>&1; then
+                upx -9 ./dist/artifacts/k3s
+        fi
+}
+do_install() {
+        install -d "${D}${BIN_PREFIX}/bin"
+        install -m 755 "${S}/src/import/dist/artifacts/k3s" "${D}${BIN_PREFIX}/bin"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
+        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
+        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
+        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf" "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
+        if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+                install -D -m 0644 "${WORKDIR}/k3s.service" "${D}${systemd_system_unitdir}/k3s.service"
+                install -D -m 0644 "${WORKDIR}/k3s-agent.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                sed -i "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g" "${D}${systemd_system_unitdir}/k3s.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
+                install -m 755 "${WORKDIR}/k3s-agent" "${D}${BIN_PREFIX}/bin"
+        fi
+}
+
+PACKAGES =+ "${PN}-server ${PN}-agent"
+
+SYSTEMD_PACKAGES = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server ${PN}-agent','',d)}"
+SYSTEMD_SERVICE_${PN}-server = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
+SYSTEMD_SERVICE_${PN}-agent = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
+SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
+
+FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
+
+RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2 ipset virtual/containerd"
+RDEPENDS_${PN}-server = "${PN}"
+RDEPENDS_${PN}-agent = "${PN}"
+
+RCONFLICTS_${PN} = "kubectl"
+
+INHIBIT_PACKAGE_STRIP = "1"
+INSANE_SKIP_${PN} += "ldflags already-stripped"
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-10-20 11:14                                   ` [meta-virtualization][PATCH v5] " Joakim Roubert
@ 2020-10-21  3:10                                     ` Bruce Ashfield
  2020-10-21  6:00                                       ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-10-21  3:10 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization, Joakim Roubert

Ha!!!!

This applies.

I'm now testing and completing some of my networking factoring,
as well as importing / forking some recipes to avoid extra layer
depends.

Bruce


In message: [meta-virtualization][PATCH v5] Adding k3s recipe
on 20/10/2020 Joakim Roubert wrote:

> Change-Id: Id1c52727593bc5ea8d0cd2de192faa44304d7a45
> Signed-off-by: Joakim Roubert <joakimr@axis.com>
> ---
>  recipes-containers/k3s/README.md              |  30 +++++
>  ...01-Finding-host-local-in-usr-libexec.patch |  27 +++++
>  .../k3s/k3s/cni-containerd-net.conf           |  24 ++++
>  recipes-containers/k3s/k3s/k3s-agent          | 103 ++++++++++++++++++
>  recipes-containers/k3s/k3s/k3s-agent.service  |  26 +++++
>  recipes-containers/k3s/k3s/k3s-clean          |  30 +++++
>  recipes-containers/k3s/k3s/k3s.service        |  27 +++++
>  recipes-containers/k3s/k3s_git.bb             |  75 +++++++++++++
>  8 files changed, 342 insertions(+)
>  create mode 100644 recipes-containers/k3s/README.md
>  create mode 100644 recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
>  create mode 100644 recipes-containers/k3s/k3s/cni-containerd-net.conf
>  create mode 100755 recipes-containers/k3s/k3s/k3s-agent
>  create mode 100644 recipes-containers/k3s/k3s/k3s-agent.service
>  create mode 100755 recipes-containers/k3s/k3s/k3s-clean
>  create mode 100644 recipes-containers/k3s/k3s/k3s.service
>  create mode 100644 recipes-containers/k3s/k3s_git.bb
> 
> diff --git a/recipes-containers/k3s/README.md b/recipes-containers/k3s/README.md
> new file mode 100644
> index 0000000..3fe5ccd
> --- /dev/null
> +++ b/recipes-containers/k3s/README.md
> @@ -0,0 +1,30 @@
> +# k3s: Lightweight Kubernetes
> +
> +Rancher's [k3s](https://k3s.io/), available under
> +[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), provides
> +lightweight Kubernetes suitable for small/edge devices. There are use cases
> +where the
> +[installation procedures provided by Rancher](https://rancher.com/docs/k3s/latest/en/installation/)
> +are not ideal but a bitbake-built version is what is needed. And only a few
> +mods to the [k3s source code](https://github.com/rancher/k3s) are needed to
> +accomplish that.
> +
> +## CNI
> +
> +By default, K3s will run with flannel as the CNI, using VXLAN as the default
> +backend. It is both possible to change the flannel backend and to change from
> +flannel to another CNI.
> +
> +Please see <https://rancher.com/docs/k3s/latest/en/installation/network-options/>
> +for further k3s networking details.
> +
> +## Configure and run a k3s agent
> +
> +The convenience script `k3s-agent` can be used to set up a k3s agent (service):
> +
> +```shell
> +k3s-agent -t <token> -s https://<master>:6443
> +```
> +
> +(Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
> +k3s master.)
> diff --git a/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> new file mode 100644
> index 0000000..8205d73
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/0001-Finding-host-local-in-usr-libexec.patch
> @@ -0,0 +1,27 @@
> +From 4faf68d68c97cfd10947e1152f711acc59f39647 Mon Sep 17 00:00:00 2001
> +From: Erik Jansson <erikja@axis.com>
> +Date: Wed, 16 Oct 2019 15:07:48 +0200
> +Subject: [PATCH] Finding host-local in /usr/libexec
> +
> +Upstream-status: Inappropriate [embedded specific]
> +Signed-off-by: <erikja@axis.com>
> +---
> + pkg/agent/config/config.go | 2 +-
> + 1 file changed, 1 insertion(+), 1 deletion(-)
> +
> +diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
> +index b4296f360a..6af9dab895 100644
> +--- a/pkg/agent/config/config.go
> ++++ b/pkg/agent/config/config.go
> +@@ -308,7 +308,7 @@ func get(envInfo *cmds.Agent) (*config.Node, error) {
> + 		return nil, err
> + 	}
> + 
> +-	hostLocal, err := exec.LookPath("host-local")
> ++	hostLocal, err := exec.LookPath("/usr/libexec/cni/host-local")
> + 	if err != nil {
> + 		return nil, errors.Wrapf(err, "failed to find host-local")
> + 	}
> +-- 
> +2.11.0
> +
> diff --git a/recipes-containers/k3s/k3s/cni-containerd-net.conf b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> new file mode 100644
> index 0000000..ca434d6
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/cni-containerd-net.conf
> @@ -0,0 +1,24 @@
> +{
> +  "cniVersion": "0.4.0",
> +  "name": "containerd-net",
> +  "plugins": [
> +    {
> +      "type": "bridge",
> +      "bridge": "cni0",
> +      "isGateway": true,
> +      "ipMasq": true,
> +      "promiscMode": true,
> +      "ipam": {
> +        "type": "host-local",
> +        "subnet": "10.88.0.0/16",
> +        "routes": [
> +          { "dst": "0.0.0.0/0" }
> +        ]
> +      }
> +    },
> +    {
> +      "type": "portmap",
> +      "capabilities": {"portMappings": true}
> +    }
> +  ]
> +}
> diff --git a/recipes-containers/k3s/k3s/k3s-agent b/recipes-containers/k3s/k3s/k3s-agent
> new file mode 100755
> index 0000000..b6c6cb6
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent
> @@ -0,0 +1,103 @@
> +#!/bin/sh -eu
> +#
> +# Copyright (C) 2020 Axis Communications AB
> +#
> +# SPDX-License-Identifier: Apache-2.0
> +
> +ENV_CONF=/etc/systemd/system/k3s-agent.service.d/10-env.conf
> +
> +usage() {
> +	echo "
> +USAGE:
> +    ${0##*/} [OPTIONS]
> +OPTIONS:
> +    --token value, -t value             Token to use for authentication [\$K3S_TOKEN]
> +    --token-file value                  Token file to use for authentication [\$K3S_TOKEN_FILE]
> +    --server value, -s value            Server to connect to [\$K3S_URL]
> +    --node-name value                   Node name [\$K3S_NODE_NAME]
> +    --resolv-conf value                 Kubelet resolv.conf file [\$K3S_RESOLV_CONF]
> +    --cluster-secret value              Shared secret used to bootstrap a cluster [\$K3S_CLUSTER_SECRET]
> +    -h                                  print this
> +"
> +}
> +
> +[ $# -gt 0 ] || {
> +	usage
> +	exit
> +}
> +
> +case $1 in
> +	-*)
> +		;;
> +	*)
> +		usage
> +		exit 1
> +		;;
> +esac
> +
> +rm -f $ENV_CONF
> +mkdir -p ${ENV_CONF%/*}
> +echo [Service] > $ENV_CONF
> +
> +while getopts "t:s:-:h" opt; do
> +	case $opt in
> +		h)
> +			usage
> +			exit
> +			;;
> +		t)
> +			VAR_NAME=K3S_TOKEN
> +			;;
> +		s)
> +			VAR_NAME=K3S_URL
> +			;;
> +		-)
> +			[ $# -ge $OPTIND ] || {
> +				usage
> +				exit 1
> +			}
> +			opt=$OPTARG
> +			eval OPTARG='$'$OPTIND
> +			OPTIND=$(($OPTIND + 1))
> +			case $opt in
> +				token)
> +					VAR_NAME=K3S_TOKEN
> +					;;
> +				token-file)
> +					VAR_NAME=K3S_TOKEN_FILE
> +					;;
> +				server)
> +					VAR_NAME=K3S_URL
> +					;;
> +				node-name)
> +					VAR_NAME=K3S_NODE_NAME
> +					;;
> +				resolv-conf)
> +					VAR_NAME=K3S_RESOLV_CONF
> +					;;
> +				cluster-secret)
> +					VAR_NAME=K3S_CLUSTER_SECRET
> +					;;
> +				help)
> +					usage
> +					exit
> +					;;
> +				*)
> +					usage
> +					exit 1
> +					;;
> +			esac
> +			;;
> +		*)
> +			usage
> +			exit 1
> +			;;
> +	esac
> +    echo Environment=$VAR_NAME=$OPTARG >> $ENV_CONF
> +done
> +
> +chmod 0644 $ENV_CONF
> +rm -rf /var/lib/rancher/k3s/agent
> +systemctl daemon-reload
> +systemctl restart k3s-agent
> +systemctl enable k3s-agent.service
> diff --git a/recipes-containers/k3s/k3s/k3s-agent.service b/recipes-containers/k3s/k3s/k3s-agent.service
> new file mode 100644
> index 0000000..9f9016d
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-agent.service
> @@ -0,0 +1,26 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes Agent
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=control-group
> +Delegate=yes
> +LimitNOFILE=infinity
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s agent
> +ExecStopPost=/usr/local/bin/k3s-clean
> +
> diff --git a/recipes-containers/k3s/k3s/k3s-clean b/recipes-containers/k3s/k3s/k3s-clean
> new file mode 100755
> index 0000000..8eca918
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s-clean
> @@ -0,0 +1,30 @@
> +#!/bin/sh -eu
> +#
> +# Copyright (C) 2020 Axis Communications AB
> +#
> +# SPDX-License-Identifier: Apache-2.0
> +
> +do_unmount() {
> +	[ $# -eq 2 ] || return
> +	local mounts=
> +	while read ignore mount ignore; do
> +		case $mount in
> +			$1/*|$2/*)
> +				mounts="$mount $mounts"
> +				;;
> +		esac
> +	done </proc/self/mounts
> +	[ -z "$mounts" ] || umount $mounts
> +}
> +
> +do_unmount /run/k3s /var/lib/rancher/k3s
> +
> +# The lines below come from install.sh's create_killall() function:
> +ip link show 2>/dev/null | grep 'master cni0' | while read ignore iface ignore; do
> +    iface=${iface%%@*}
> +    [ -z "$iface" ] || ip link delete $iface
> +done
> +
> +ip link delete cni0
> +ip link delete flannel.1
> +rm -rf /var/lib/cni/
> diff --git a/recipes-containers/k3s/k3s/k3s.service b/recipes-containers/k3s/k3s/k3s.service
> new file mode 100644
> index 0000000..34c7a80
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s/k3s.service
> @@ -0,0 +1,27 @@
> +# Derived from the k3s install.sh's create_systemd_service_file() function
> +[Unit]
> +Description=Lightweight Kubernetes
> +Documentation=https://k3s.io
> +Requires=containerd.service
> +After=containerd.service
> +
> +[Install]
> +WantedBy=multi-user.target
> +
> +[Service]
> +Type=notify
> +KillMode=process
> +Delegate=yes
> +# Having non-zero Limit*s causes performance problems due to accounting overhead
> +# in the kernel. We recommend using cgroups to do container-local accounting.
> +LimitNOFILE=1048576
> +LimitNPROC=infinity
> +LimitCORE=infinity
> +TasksMax=infinity
> +TimeoutStartSec=0
> +Restart=always
> +RestartSec=5s
> +ExecStartPre=-/sbin/modprobe br_netfilter
> +ExecStartPre=-/sbin/modprobe overlay
> +ExecStart=/usr/local/bin/k3s server
> +
> diff --git a/recipes-containers/k3s/k3s_git.bb b/recipes-containers/k3s/k3s_git.bb
> new file mode 100644
> index 0000000..cfc2c64
> --- /dev/null
> +++ b/recipes-containers/k3s/k3s_git.bb
> @@ -0,0 +1,75 @@
> +SUMMARY = "Production-Grade Container Scheduling and Management"
> +DESCRIPTION = "Lightweight Kubernetes, intended to be a fully compliant Kubernetes."
> +HOMEPAGE = "https://k3s.io/"
> +LICENSE = "Apache-2.0"
> +LIC_FILES_CHKSUM = "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
> +PV = "v1.18.9+k3s1-dirty"
> +
> +SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
> +           file://k3s.service \
> +           file://k3s-agent.service \
> +           file://k3s-agent \
> +           file://k3s-clean \
> +           file://cni-containerd-net.conf \
> +           file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
> +          "
> +SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
> +SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
> +
> +inherit go
> +inherit goarch
> +inherit systemd
> +
> +PACKAGECONFIG = ""
> +PACKAGECONFIG[upx] = ",,upx-native"
> +GO_IMPORT = "import"
> +GO_BUILD_LDFLAGS = "-X github.com/rancher/k3s/pkg/version.Version=${PV} \
> +                    -X github.com/rancher/k3s/pkg/version.GitCommit=${@d.getVar('SRCREV_k3s')[:8]} \
> +                    -w -s \
> +                   "
> +BIN_PREFIX ?= "${exec_prefix}/local"
> +
> +do_compile() {
> +        export GOPATH="${S}/src/import/.gopath:${S}/src/import/vendor:${STAGING_DIR_TARGET}/${prefix}/local/go"
> +        export CGO_ENABLED="1"
> +        export GOFLAGS="-mod=vendor"
> +        cd ${S}/src/import
> +        ${GO} build -tags providerless -ldflags "${GO_BUILD_LDFLAGS}" -o ./dist/artifacts/k3s ./cmd/server/main.go
> +        # Use UPX if it is enabled (and thus exists) to compress binary
> +        if command -v upx > /dev/null 2>&1; then
> +                upx -9 ./dist/artifacts/k3s
> +        fi
> +}
> +do_install() {
> +        install -d "${D}${BIN_PREFIX}/bin"
> +        install -m 755 "${S}/src/import/dist/artifacts/k3s" "${D}${BIN_PREFIX}/bin"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/crictl"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/ctr"
> +        ln -sr "${D}/${BIN_PREFIX}/bin/k3s" "${D}${BIN_PREFIX}/bin/kubectl"
> +        install -m 755 "${WORKDIR}/k3s-clean" "${D}${BIN_PREFIX}/bin"
> +        install -D -m 0644 "${WORKDIR}/cni-containerd-net.conf" "${D}/${sysconfdir}/cni/net.d/10-containerd-net.conf"
> +        if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
> +                install -D -m 0644 "${WORKDIR}/k3s.service" "${D}${systemd_system_unitdir}/k3s.service"
> +                install -D -m 0644 "${WORKDIR}/k3s-agent.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                sed -i "s#\(Exec\)\(.*\)=\(.*\)\(k3s\)#\1\2=${BIN_PREFIX}/bin/\4#g" "${D}${systemd_system_unitdir}/k3s.service" "${D}${systemd_system_unitdir}/k3s-agent.service"
> +                install -m 755 "${WORKDIR}/k3s-agent" "${D}${BIN_PREFIX}/bin"
> +        fi
> +}
> +
> +PACKAGES =+ "${PN}-server ${PN}-agent"
> +
> +SYSTEMD_PACKAGES = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${PN}-server ${PN}-agent','',d)}"
> +SYSTEMD_SERVICE_${PN}-server = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s.service','',d)}"
> +SYSTEMD_SERVICE_${PN}-agent = "${@bb.utils.contains('DISTRO_FEATURES','systemd','k3s-agent.service','',d)}"
> +SYSTEMD_AUTO_ENABLE_${PN}-agent = "disable"
> +
> +FILES_${PN}-agent = "${BIN_PREFIX}/bin/k3s-agent"
> +
> +RDEPENDS_${PN} = "cni conntrack-tools coreutils findutils iproute2 ipset virtual/containerd"
> +RDEPENDS_${PN}-server = "${PN}"
> +RDEPENDS_${PN}-agent = "${PN}"
> +
> +RCONFLICTS_${PN} = "kubectl"
> +
> +INHIBIT_PACKAGE_STRIP = "1"
> +INSANE_SKIP_${PN} += "ldflags already-stripped"
> -- 
> 2.20.1
> 



^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-10-21  3:10                                     ` Bruce Ashfield
@ 2020-10-21  6:00                                       ` Joakim Roubert
  2020-10-26 15:46                                         ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-10-21  6:00 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-10-21 05:10, Bruce Ashfield wrote:
> Ha!!!!
> 
> This applies.

Wonderful, thank you! I guess this is what is called "five times lucky"...

> I'm now testing and completing some of my networking factoring,
> as well as importing / forking some recipes to avoid extra layer
> depends.

Excellent!

BR,

/Joakim
-- 
Joakim Roubert
Senior Engineer

Axis Communications AB
Emdalavägen 14, SE-223 69 Lund, Sweden
Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
Fax: +46 46 13 61 30, www.axis.com


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-10-21  6:00                                       ` Joakim Roubert
@ 2020-10-26 15:46                                         ` Bruce Ashfield
  2020-10-28  8:32                                           ` Joakim Roubert
  2020-11-10  6:43                                           ` Lance Yang
  0 siblings, 2 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-10-26 15:46 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-10-21 05:10, Bruce Ashfield wrote:
> > Ha!!!!
> >
> > This applies.
>
> Wonderful, thank you! I guess this is what is called "five times lucky"...
>
> > I'm now testing and completing some of my networking factoring,
> > as well as importing / forking some recipes to avoid extra layer
> > depends.
>
> Excellent!

I've pushed some of my WIP to:
https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip

That includes the split of the networking, the import of some of the
dependencies and some small tweaks I'm working on.

I did have a couple of questions on the k3s packaging itself; I was
getting the following error:

ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
Files/directories were installed but not shipped in any package:
  /usr/local/bin/k3s-clean
  /usr/local/bin/crictl
  /usr/local/bin/kubectl
  /usr/local/bin/k3s

So I added them to the FILES of the k3s package itself (so both
k3s-server and k3s-agent will get them), is that the split you were
looking for ?
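
For reference, the change I have in mind looks roughly like this (a
sketch against the recipe above, using its BIN_PREFIX variable; the
exact package split is what I'm asking about):

```bitbake
# Ship the k3s multi-call binary, its symlinks and the clean script in
# the base k3s package; since k3s-server and k3s-agent both RDEPEND on
# ${PN}, both ends of the split will pull these files in.
FILES_${PN} += " \
    ${BIN_PREFIX}/bin/k3s \
    ${BIN_PREFIX}/bin/crictl \
    ${BIN_PREFIX}/bin/kubectl \
    ${BIN_PREFIX}/bin/k3s-clean \
"
```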

Bruce

>
> BR,
>
> /Joakim
> --
> Joakim Roubert
> Senior Engineer
>
> Axis Communications AB
> Emdalavägen 14, SE-223 69 Lund, Sweden
> Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> Fax: +46 46 13 61 30, www.axis.com
>
--
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-10-26 15:46                                         ` Bruce Ashfield
@ 2020-10-28  8:32                                           ` Joakim Roubert
  2020-11-06 21:20                                             ` Bruce Ashfield
  2020-11-10  6:43                                           ` Lance Yang
  1 sibling, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-10-28  8:32 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-10-26 16:46, Bruce Ashfield wrote:
> 
> ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> Files/directories were installed but not shipped in any package:
>    /usr/local/bin/k3s-clean
>    /usr/local/bin/crictl
>    /usr/local/bin/kubectl
>    /usr/local/bin/k3s
> 
> So I added them to the FILES of the k3s package itself (so both
> k3s-server and k3s-agent will get them), is that the split you were
> looking for ?

Yes, it is! In my build, all of those are found in the k3s RPM, although 
the FILES directive for them is missing. So yes, your fix here is the 
right one, IMO.

BR,

/Joakim

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-10-28  8:32                                           ` Joakim Roubert
@ 2020-11-06 21:20                                             ` Bruce Ashfield
  2020-11-09  7:48                                               ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-06 21:20 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Wed, Oct 28, 2020 at 4:32 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-10-26 16:46, Bruce Ashfield wrote:
> >
> > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > Files/directories were installed but not shipped in any package:
> >    /usr/local/bin/k3s-clean
> >    /usr/local/bin/crictl
> >    /usr/local/bin/kubectl
> >    /usr/local/bin/k3s
> >
> > So I added them to the FILES of the k3s package itself (so both
> > k3s-server and k3s-agent will get them), is that the split you were
> > looking for ?
>
> Yes, it is! In my build, all of those are found in the k3s RPM, although
> the FILES directive for them is missing. So yes, your fix here is the
> right one, IMO.

I now have another 6 or 7 WIP patches on top of this to try and get a
single node "cluster" working with k3s. I'll clean them up and get
them into the k3s WIP branch shortly.

I'm failing on the networking component though.

In your working references, which iptables do you have installed ?
(legacy ? nftables?)

I'm failing to get flannel to start, with a series of errors like this:

-----------
I1106 21:19:00.985656   10641 eviction_manager.go:351] eviction
manager: able to reduce ephemeral-storage pressure without evicting
pods.
E1106 21:19:10.636899   10641 proxier.go:841] Failed to ensure that
filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking
rule: exit status 2: iptables v1.8.6 (legacy): Couldn't load match
`comment':No such y
Try `iptables -h' or 'iptables --help' for more information.
I1106 21:19:10.641647   10641 proxier.go:825] Sync failed; retrying in 30s
------------

Which means as soon as I try and run an agent command, the server dies
and I can't even do a basic test.

Bruce

>
> BR,
>
> /Joakim



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-06 21:20                                             ` Bruce Ashfield
@ 2020-11-09  7:48                                               ` Joakim Roubert
  2020-11-09  9:26                                                 ` Lance.Yang
  2020-11-09 13:44                                                 ` Bruce Ashfield
  0 siblings, 2 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-11-09  7:48 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-11-06 22:20, Bruce Ashfield wrote:
> 
> I now have another 6 or 7 WIP patches on top of this to try and get a
> single node "cluster" working with k3s. I'll clean them up and get
> them into the k3s WIP branch shortly.

Awesome!

> In your working references, which iptables do you have installed ?
> (legacy ? nftables?)

# iptables --version
iptables v1.8.4 (legacy)

> I'm failing to get flannel to start, with a series of errors like this:
> 
> -----------
> I1106 21:19:00.985656   10641 eviction_manager.go:351] eviction
> manager: able to reduce ephemeral-storage pressure without evicting
> pods.
> E1106 21:19:10.636899   10641 proxier.go:841] Failed to ensure that
> filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking
> rule: exit status 2: iptables v1.8.6 (legacy): Couldn't load match
> `comment':No such y
> Try `iptables -h' or 'iptables --help' for more information.
> I1106 21:19:10.641647   10641 proxier.go:825] Sync failed; retrying in 30s
> ------------

This is a bit strange: it seems you are running in legacy mode too, 
although a somewhat newer version than I have. The only thing I know 
is that Rancher recommends 1.6.1 or newer (which yours is).

https://rancher.com/docs/k3s/latest/en/known-issues/

Might there be something missing in the kernel?
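
If it is a kernel config issue, my first guess would be the comment
match itself; assuming the usual xt_comment module, a fragment along
these lines would be needed:

```
# Kernel config fragment (guess): provide the xt_comment match that the
# failing `-m comment' rules require.
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
```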

I have these RPMs installed in my image:

iptables
iptables-module-ip6t-ah
iptables-module-ip6t-dnat
iptables-module-ip6t-dnpt
iptables-module-ip6t-dst
iptables-module-ip6t-eui64
iptables-module-ip6t-frag
iptables-module-ip6t-hbh
iptables-module-ip6t-hl
iptables-module-ip6t-icmp6
iptables-module-ip6t-ipv6header
iptables-module-ip6t-log
iptables-module-ip6t-masquerade
iptables-module-ip6t-mh
iptables-module-ip6t-netmap
iptables-module-ip6t-redirect
iptables-module-ip6t-reject
iptables-module-ip6t-rt
iptables-module-ip6t-snat
iptables-module-ip6t-snpt
iptables-module-ip6t-srh
iptables-module-ipt-ah
iptables-module-ipt-clusterip
iptables-module-ipt-dnat
iptables-module-ipt-ecn
iptables-module-ipt-icmp
iptables-module-ipt-log
iptables-module-ipt-masquerade
iptables-module-ipt-netmap
iptables-module-ipt-realm
iptables-module-ipt-redirect
iptables-module-ipt-reject
iptables-module-ipt-snat
iptables-module-ipt-ttl
iptables-module-ipt-ulog
iptables-modules
iptables-module-xt-addrtype
iptables-module-xt-audit
iptables-module-xt-bpf
iptables-module-xt-cgroup
iptables-module-xt-checksum
iptables-module-xt-classify
iptables-module-xt-cluster
iptables-module-xt-comment
iptables-module-xt-connbytes
iptables-module-xt-connlimit
iptables-module-xt-connmark
iptables-module-xt-connsecmark
iptables-module-xt-conntrack
iptables-module-xt-cpu
iptables-module-xt-ct
iptables-module-xt-dccp
iptables-module-xt-devgroup
iptables-module-xt-dscp
iptables-module-xt-ecn
iptables-module-xt-esp
iptables-module-xt-hashlimit
iptables-module-xt-helper
iptables-module-xt-hmark
iptables-module-xt-idletimer
iptables-module-xt-ipcomp
iptables-module-xt-iprange
iptables-module-xt-ipvs
iptables-module-xt-led
iptables-module-xt-length
iptables-module-xt-limit
iptables-module-xt-mac
iptables-module-xt-mark
iptables-module-xt-multiport
iptables-module-xt-nfacct
iptables-module-xt-nflog
iptables-module-xt-nfqueue
iptables-module-xt-osf
iptables-module-xt-owner
iptables-module-xt-physdev
iptables-module-xt-pkttype
iptables-module-xt-policy
iptables-module-xt-quota
iptables-module-xt-rateest
iptables-module-xt-recent
iptables-module-xt-rpfilter
iptables-module-xt-sctp
iptables-module-xt-secmark
iptables-module-xt-set
iptables-module-xt-socket
iptables-module-xt-standard
iptables-module-xt-statistic
iptables-module-xt-string
iptables-module-xt-synproxy
iptables-module-xt-tcp
iptables-module-xt-tcpmss
iptables-module-xt-tcpoptstrip
iptables-module-xt-tee
iptables-module-xt-time
iptables-module-xt-tos
iptables-module-xt-tproxy
iptables-module-xt-trace
iptables-module-xt-u32
iptables-module-xt-udp

and in my kernel config, I have (apart from what is needed for running 
containers with containerd):

CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NET_UDP_TUNNEL=m
CONFIG_NF_DUP_NETDEV=m
CONFIG_NF_LOG_BRIDGE=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_TABLES_BRIDGE=y
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_IPV4=y
CONFIG_NF_TABLES_IPV6=y
CONFIG_NF_TABLES=m
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_BRIDGE_REJECT=m
CONFIG_NFT_CHAIN_NAT_IPV4=m
CONFIG_NFT_CHAIN_ROUTE_IPV4=m
CONFIG_NFT_CHAIN_ROUTE_IPV6=m
CONFIG_NFT_COMPAT=m
CONFIG_NFT_COUNTER=m
CONFIG_NFT_CT=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_DUP_NETDEV=m
# CONFIG_NFT_EXTHDR is not set
CONFIG_NFT_FIB_INET=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NFT_FIB_NETDEV=m
CONFIG_NFT_FWD_NETDEV=m
CONFIG_NFT_HASH=m
CONFIG_NFT_LIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_MASQ_IPV4=m
CONFIG_NFT_MASQ=m
# CONFIG_NFT_META is not set
CONFIG_NFT_NAT=m
CONFIG_NFT_NUMGEN=m
# CONFIG_NFT_OBJREF is not set
CONFIG_NFT_QUEUE=m
CONFIG_NFT_QUOTA=m
CONFIG_NFT_REDIR_IPV4=m
CONFIG_NFT_REDIR=m
CONFIG_NFT_REJECT=m
# CONFIG_NFT_RT is not set
# CONFIG_NFT_SET_BITMAP is not set
# CONFIG_NFT_SET_HASH is not set
# CONFIG_NFT_SET_RBTREE is not set
CONFIG_OVERLAY_FS=m
CONFIG_STP=m

BR,

/Joakim

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-09  7:48                                               ` Joakim Roubert
@ 2020-11-09  9:26                                                 ` Lance.Yang
  2020-11-09 13:45                                                   ` Bruce Ashfield
  2020-11-09 13:44                                                 ` Bruce Ashfield
  1 sibling, 1 reply; 73+ messages in thread
From: Lance.Yang @ 2020-11-09  9:26 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization, joakim.roubert, nd

Hi Bruce,

For the iptables issue, I tested iptables.

Since the iptables comment match belongs to the iptables extensions, I checked my kernel config and set the parameter: CONFIG_NETFILTER_XT_MATCH_COMMENT=m.

iptables -V
iptables v1.8.5 (legacy)

I used this iptables command to check

iptables -A INPUT -p tcp --dport 22 -m comment --comment "SSH" -j ACCEPT

It works fine from my side.

Best Regards,
Lance

> -----Original Message-----
> From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> On Behalf Of Joakim Roubert via lists.yoctoproject.org
> Sent: Monday, November 9, 2020 3:49 PM
> To: Bruce Ashfield <bruce.ashfield@gmail.com>
> Cc: meta-virtualization@yoctoproject.org
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On 2020-11-06 22:20, Bruce Ashfield wrote:
> >
> > I now have another 6 or 7 WIP patches on top of this to try and get a
> > single node "cluster" working with k3s. I'll clean them up and get
> > them into the k3s WIP branch shortly.
>
> Awesome!
>
> > In your working references, which iptables do you have installed ?
> > (legacy ? nftables?)
>
> # iptables --version
> iptables v1.8.4 (legacy)
>
> > I'm failing to get flannel to start, with a series of errors like this:
> >
> > -----------
> > I1106 21:19:00.985656   10641 eviction_manager.go:351] eviction
> > manager: able to reduce ephemeral-storage pressure without evicting
> > pods.
> > E1106 21:19:10.636899   10641 proxier.go:841] Failed to ensure that
> > filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking
> > rule: exit status 2: iptables v1.8.6 (legacy): Couldn't load match
> > `comment':No such y Try `iptables -h' or 'iptables --help' for more
> > information.
> > I1106 21:19:10.641647   10641 proxier.go:825] Sync failed; retrying in
> > 30s
> > ------------
>
> This is a bit strange, as it seems you are running in legacy mode too, although a somewhat
> newer version than I have, and the only thing I know is that Rancher recommends 1.6.1 or newer
> (which it is).
>
> https://rancher.com/docs/k3s/latest/en/known-issues/
>
> Might there be something missing in the kernel?
>
> I have these RPMs installed in my image:
>
> iptables
> iptables-module-ip6t-ah
> iptables-module-ip6t-dnat
> iptables-module-ip6t-dnpt
> iptables-module-ip6t-dst
> iptables-module-ip6t-eui64
> iptables-module-ip6t-frag
> iptables-module-ip6t-hbh
> iptables-module-ip6t-hl
> iptables-module-ip6t-icmp6
> iptables-module-ip6t-ipv6header
> iptables-module-ip6t-log
> iptables-module-ip6t-masquerade
> iptables-module-ip6t-mh
> iptables-module-ip6t-netmap
> iptables-module-ip6t-redirect
> iptables-module-ip6t-reject
> iptables-module-ip6t-rt
> iptables-module-ip6t-snat
> iptables-module-ip6t-snpt
> iptables-module-ip6t-srh
> iptables-module-ipt-ah
> iptables-module-ipt-clusterip
> iptables-module-ipt-dnat
> iptables-module-ipt-ecn
> iptables-module-ipt-icmp
> iptables-module-ipt-log
> iptables-module-ipt-masquerade
> iptables-module-ipt-netmap
> iptables-module-ipt-realm
> iptables-module-ipt-redirect
> iptables-module-ipt-reject
> iptables-module-ipt-snat
> iptables-module-ipt-ttl
> iptables-module-ipt-ulog
> iptables-modules
> iptables-module-xt-addrtype
> iptables-module-xt-audit
> iptables-module-xt-bpf
> iptables-module-xt-cgroup
> iptables-module-xt-checksum
> iptables-module-xt-classify
> iptables-module-xt-cluster
> iptables-module-xt-comment
> iptables-module-xt-connbytes
> iptables-module-xt-connlimit
> iptables-module-xt-connmark
> iptables-module-xt-connsecmark
> iptables-module-xt-conntrack
> iptables-module-xt-cpu
> iptables-module-xt-ct
> iptables-module-xt-dccp
> iptables-module-xt-devgroup
> iptables-module-xt-dscp
> iptables-module-xt-ecn
> iptables-module-xt-esp
> iptables-module-xt-hashlimit
> iptables-module-xt-helper
> iptables-module-xt-hmark
> iptables-module-xt-idletimer
> iptables-module-xt-ipcomp
> iptables-module-xt-iprange
> iptables-module-xt-ipvs
> iptables-module-xt-led
> iptables-module-xt-length
> iptables-module-xt-limit
> iptables-module-xt-mac
> iptables-module-xt-mark
> iptables-module-xt-multiport
> iptables-module-xt-nfacct
> iptables-module-xt-nflog
> iptables-module-xt-nfqueue
> iptables-module-xt-osf
> iptables-module-xt-owner
> iptables-module-xt-physdev
> iptables-module-xt-pkttype
> iptables-module-xt-policy
> iptables-module-xt-quota
> iptables-module-xt-rateest
> iptables-module-xt-recent
> iptables-module-xt-rpfilter
> iptables-module-xt-sctp
> iptables-module-xt-secmark
> iptables-module-xt-set
> iptables-module-xt-socket
> iptables-module-xt-standard
> iptables-module-xt-statistic
> iptables-module-xt-string
> iptables-module-xt-synproxy
> iptables-module-xt-tcp
> iptables-module-xt-tcpmss
> iptables-module-xt-tcpoptstrip
> iptables-module-xt-tee
> iptables-module-xt-time
> iptables-module-xt-tos
> iptables-module-xt-tproxy
> iptables-module-xt-trace
> iptables-module-xt-u32
> iptables-module-xt-udp
>
> and in my kernel config, I have (apart from what is needed for running containers with
> containerd):
>
> CONFIG_NETFILTER_NETLINK=m
> CONFIG_NETFILTER_XT_MATCH_OWNER=m
> CONFIG_NET_UDP_TUNNEL=m
> CONFIG_NF_DUP_NETDEV=m
> CONFIG_NF_LOG_BRIDGE=m
> CONFIG_NF_TABLES_ARP=y
> CONFIG_NF_TABLES_BRIDGE=y
> CONFIG_NF_TABLES_INET=y
> CONFIG_NF_TABLES_IPV4=y
> CONFIG_NF_TABLES_IPV6=y
> CONFIG_NF_TABLES=m
> CONFIG_NF_TABLES_NETDEV=y
> CONFIG_NFT_BRIDGE_REJECT=m
> CONFIG_NFT_CHAIN_NAT_IPV4=m
> CONFIG_NFT_CHAIN_ROUTE_IPV4=m
> CONFIG_NFT_CHAIN_ROUTE_IPV6=m
> CONFIG_NFT_COMPAT=m
> CONFIG_NFT_COUNTER=m
> CONFIG_NFT_CT=m
> CONFIG_NFT_DUP_IPV4=m
> CONFIG_NFT_DUP_IPV6=m
> CONFIG_NFT_DUP_NETDEV=m
> # CONFIG_NFT_EXTHDR is not set
> CONFIG_NFT_FIB_INET=m
> CONFIG_NFT_FIB_IPV4=m
> CONFIG_NFT_FIB_IPV6=m
> CONFIG_NFT_FIB_NETDEV=m
> CONFIG_NFT_FWD_NETDEV=m
> CONFIG_NFT_HASH=m
> CONFIG_NFT_LIMIT=m
> CONFIG_NFT_LOG=m
> CONFIG_NFT_MASQ_IPV4=m
> CONFIG_NFT_MASQ=m
> # CONFIG_NFT_META is not set
> CONFIG_NFT_NAT=m
> CONFIG_NFT_NUMGEN=m
> # CONFIG_NFT_OBJREF is not set
> CONFIG_NFT_QUEUE=m
> CONFIG_NFT_QUOTA=m
> CONFIG_NFT_REDIR_IPV4=m
> CONFIG_NFT_REDIR=m
> CONFIG_NFT_REJECT=m
> # CONFIG_NFT_RT is not set
> # CONFIG_NFT_SET_BITMAP is not set
> # CONFIG_NFT_SET_HASH is not set
> # CONFIG_NFT_SET_RBTREE is not set
> CONFIG_OVERLAY_FS=m
> CONFIG_STP=m
>
> BR,
>
> /Joakim
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-09  7:48                                               ` Joakim Roubert
  2020-11-09  9:26                                                 ` Lance.Yang
@ 2020-11-09 13:44                                                 ` Bruce Ashfield
  1 sibling, 0 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-09 13:44 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Mon, Nov 9, 2020 at 2:48 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-11-06 22:20, Bruce Ashfield wrote:
> >
> > I now have another 6 or 7 WIP patches on top of this to try and get a
> > single node "cluster" working with k3s. I'll clean them up and get
> > them into the k3s WIP branch shortly.
>
> Awesome!
>
> > In your working references, which iptables do you have installed ?
> > (legacy ? nftables?)
>
> # iptables --version
> iptables v1.8.4 (legacy)
>
> > I'm failing to get flannel to start, with a series of errors like this:
> >
> > -----------
> > I1106 21:19:00.985656   10641 eviction_manager.go:351] eviction
> > manager: able to reduce ephemeral-storage pressure without evicting
> > pods.
> > E1106 21:19:10.636899   10641 proxier.go:841] Failed to ensure that
> > filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking
> > rule: exit status 2: iptables v1.8.6 (legacy): Couldn't load match
> > `comment':No such y
> > Try `iptables -h' or 'iptables --help' for more information.
> > I1106 21:19:10.641647   10641 proxier.go:825] Sync failed; retrying in 30s
> > ------------
>
> This is a bit strange, as it seems you are running in legacy mode too,
> although a somewhat newer version than I have, and the only thing I know
> is that Rancher recommends 1.6.1 or newer (which it is).
>
> https://rancher.com/docs/k3s/latest/en/known-issues/
>
> Might there be something missing in the kernel?
>
> I have these RPMs installed in my image:
>
> iptables
> iptables-module-ip6t-ah
> iptables-module-ip6t-dnat
> iptables-module-ip6t-dnpt
> iptables-module-ip6t-dst
> iptables-module-ip6t-eui64
> iptables-module-ip6t-frag
> iptables-module-ip6t-hbh
> iptables-module-ip6t-hl
> iptables-module-ip6t-icmp6
> iptables-module-ip6t-ipv6header
> iptables-module-ip6t-log
> iptables-module-ip6t-masquerade
> iptables-module-ip6t-mh
> iptables-module-ip6t-netmap
> iptables-module-ip6t-redirect
> iptables-module-ip6t-reject
> iptables-module-ip6t-rt
> iptables-module-ip6t-snat
> iptables-module-ip6t-snpt
> iptables-module-ip6t-srh
> iptables-module-ipt-ah
> iptables-module-ipt-clusterip
> iptables-module-ipt-dnat
> iptables-module-ipt-ecn
> iptables-module-ipt-icmp
> iptables-module-ipt-log
> iptables-module-ipt-masquerade
> iptables-module-ipt-netmap
> iptables-module-ipt-realm
> iptables-module-ipt-redirect
> iptables-module-ipt-reject
> iptables-module-ipt-snat
> iptables-module-ipt-ttl
> iptables-module-ipt-ulog
> iptables-modules
> iptables-module-xt-addrtype
> iptables-module-xt-audit
> iptables-module-xt-bpf
> iptables-module-xt-cgroup
> iptables-module-xt-checksum
> iptables-module-xt-classify
> iptables-module-xt-cluster
> iptables-module-xt-comment
> iptables-module-xt-connbytes
> iptables-module-xt-connlimit
> iptables-module-xt-connmark
> iptables-module-xt-connsecmark
> iptables-module-xt-conntrack
> iptables-module-xt-cpu
> iptables-module-xt-ct
> iptables-module-xt-dccp
> iptables-module-xt-devgroup
> iptables-module-xt-dscp
> iptables-module-xt-ecn
> iptables-module-xt-esp
> iptables-module-xt-hashlimit
> iptables-module-xt-helper
> iptables-module-xt-hmark
> iptables-module-xt-idletimer
> iptables-module-xt-ipcomp
> iptables-module-xt-iprange
> iptables-module-xt-ipvs
> iptables-module-xt-led
> iptables-module-xt-length
> iptables-module-xt-limit
> iptables-module-xt-mac
> iptables-module-xt-mark
> iptables-module-xt-multiport
> iptables-module-xt-nfacct
> iptables-module-xt-nflog
> iptables-module-xt-nfqueue
> iptables-module-xt-osf
> iptables-module-xt-owner
> iptables-module-xt-physdev
> iptables-module-xt-pkttype
> iptables-module-xt-policy
> iptables-module-xt-quota
> iptables-module-xt-rateest
> iptables-module-xt-recent
> iptables-module-xt-rpfilter
> iptables-module-xt-sctp
> iptables-module-xt-secmark
> iptables-module-xt-set
> iptables-module-xt-socket
> iptables-module-xt-standard
> iptables-module-xt-statistic
> iptables-module-xt-string
> iptables-module-xt-synproxy
> iptables-module-xt-tcp
> iptables-module-xt-tcpmss
> iptables-module-xt-tcpoptstrip
> iptables-module-xt-tee
> iptables-module-xt-time
> iptables-module-xt-tos
> iptables-module-xt-tproxy
> iptables-module-xt-trace
> iptables-module-xt-u32
> iptables-module-xt-udp
>
> and in my kernel config, I have (apart from what is needed for running
> containers with containerd):
>
> CONFIG_NETFILTER_NETLINK=m
> CONFIG_NETFILTER_XT_MATCH_OWNER=m
> CONFIG_NET_UDP_TUNNEL=m
> CONFIG_NF_DUP_NETDEV=m
> CONFIG_NF_LOG_BRIDGE=m
> CONFIG_NF_TABLES_ARP=y
> CONFIG_NF_TABLES_BRIDGE=y
> CONFIG_NF_TABLES_INET=y
> CONFIG_NF_TABLES_IPV4=y
> CONFIG_NF_TABLES_IPV6=y
> CONFIG_NF_TABLES=m
> CONFIG_NF_TABLES_NETDEV=y
> CONFIG_NFT_BRIDGE_REJECT=m
> CONFIG_NFT_CHAIN_NAT_IPV4=m
> CONFIG_NFT_CHAIN_ROUTE_IPV4=m
> CONFIG_NFT_CHAIN_ROUTE_IPV6=m
> CONFIG_NFT_COMPAT=m
> CONFIG_NFT_COUNTER=m
> CONFIG_NFT_CT=m
> CONFIG_NFT_DUP_IPV4=m
> CONFIG_NFT_DUP_IPV6=m
> CONFIG_NFT_DUP_NETDEV=m
> # CONFIG_NFT_EXTHDR is not set
> CONFIG_NFT_FIB_INET=m
> CONFIG_NFT_FIB_IPV4=m
> CONFIG_NFT_FIB_IPV6=m
> CONFIG_NFT_FIB_NETDEV=m
> CONFIG_NFT_FWD_NETDEV=m
> CONFIG_NFT_HASH=m
> CONFIG_NFT_LIMIT=m
> CONFIG_NFT_LOG=m
> CONFIG_NFT_MASQ_IPV4=m
> CONFIG_NFT_MASQ=m
> # CONFIG_NFT_META is not set
> CONFIG_NFT_NAT=m
> CONFIG_NFT_NUMGEN=m
> # CONFIG_NFT_OBJREF is not set
> CONFIG_NFT_QUEUE=m
> CONFIG_NFT_QUOTA=m
> CONFIG_NFT_REDIR_IPV4=m
> CONFIG_NFT_REDIR=m
> CONFIG_NFT_REJECT=m
> # CONFIG_NFT_RT is not set
> # CONFIG_NFT_SET_BITMAP is not set
> # CONFIG_NFT_SET_HASH is not set
> # CONFIG_NFT_SET_RBTREE is not set
> CONFIG_OVERLAY_FS=m
> CONFIG_STP=m

Cool, the above helps.

I have some new fragments, etc, developed for both k8s and k3s, so
I'll double check that I didn't fat finger something.

Bruce
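For anyone cross-checking, here is a quick sketch for confirming the comment-match option on a running target. It assumes the target exposes /proc/config.gz (CONFIG_IKCONFIG_PROC) or ships a config file under /boot, which may not hold for every image:

```shell
# Sketch: look for the comment-match option in whatever kernel config the
# target exposes; prints a note if neither source is available.
opt=CONFIG_NETFILTER_XT_MATCH_COMMENT
zcat /proc/config.gz 2>/dev/null | grep "$opt" \
    || grep "$opt" "/boot/config-$(uname -r)" 2>/dev/null \
    || echo "$opt: not found in available configs"
```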

>
> BR,
>
> /Joakim



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-09  9:26                                                 ` Lance.Yang
@ 2020-11-09 13:45                                                   ` Bruce Ashfield
  2020-11-10  8:45                                                     ` Lance Yang
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-09 13:45 UTC (permalink / raw)
  To: Lance Yang; +Cc: meta-virtualization, joakim.roubert, nd

On Mon, Nov 9, 2020 at 4:26 AM Lance Yang <Lance.Yang@arm.com> wrote:
>
> Hi Bruce,
>
> For the iptables issue, I tested iptables.
>
> As the iptables comment module belongs to the iptables extensions, I checked my kernel config and set the parameter CONFIG_NETFILTER_XT_MATCH_COMMENT=m.
>
> iptables -V
> iptables v1.8.5 (legacy)
>
> I used this iptables command to check
>
> iptables -A INPUT -p tcp --dport 22 -m comment --comment "SSH" -j ACCEPT
>
> It works fine from my side.

Yah, that's what I assumed it was as well, and yet when I added it in
... I didn't see a change.

That being said, this is helpful, so I started a clean build to see if
I had picked up something stale that was masking my fix.

Bruce
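As an aside for triage: the match name in that proxier error maps one-to-one onto a CONFIG_NETFILTER_XT_MATCH_* kernel option. A minimal POSIX shell sketch of that mapping, using the log line quoted in this thread:

```shell
# Sketch: pull the missing iptables match name out of the k3s proxier
# error and derive the kernel option that provides it.
log="iptables v1.8.6 (legacy): Couldn't load match \`comment':No such"
match=$(printf '%s\n' "$log" | sed -n "s/.*load match \`\([a-z0-9_]*\)'.*/\1/p")
echo "missing match: $match"    # -> missing match: comment
echo "kernel option: CONFIG_NETFILTER_XT_MATCH_$(printf '%s' "$match" | tr 'a-z' 'A-Z')=m"
```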

>
> Best Regards,
> Lance
>
> > -----Original Message-----
> > From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> > On Behalf Of Joakim Roubert via lists.yoctoproject.org
> > Sent: Monday, November 9, 2020 3:49 PM
> > To: Bruce Ashfield <bruce.ashfield@gmail.com>
> > Cc: meta-virtualization@yoctoproject.org
> > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> >
> > On 2020-11-06 22:20, Bruce Ashfield wrote:
> > >
> > > I now have another 6 or 7 WIP patches on top of this to try and get a
> > > single node "cluster" working with k3s. I'll clean them up and get
> > > them into the k3s WIP branch shortly.
> >
> > Awesome!
> >
> > > In your working references, which iptables do you have installed ?
> > > (legacy ? nftables?)
> >
> > # iptables --version
> > iptables v1.8.4 (legacy)
> >
> > > I'm failing to get flannel to start, with a series of errors like this:
> > >
> > > -----------
> > > I1106 21:19:00.985656   10641 eviction_manager.go:351] eviction
> > > manager: able to reduce ephemeral-storage pressure without evicting
> > > pods.
> > > E1106 21:19:10.636899   10641 proxier.go:841] Failed to ensure that
> > > filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking
> > > rule: exit status 2: iptables v1.8.6 (legacy): Couldn't load match
> > > `comment':No such y Try `iptables -h' or 'iptables --help' for more
> > > information.
> > > I1106 21:19:10.641647   10641 proxier.go:825] Sync failed; retrying in
> > > 30s
> > > ------------
> >
> > This is a bit strange, as it seems you are running in legacy mode too, although a somewhat
> > newer version than I have, and the only thing I know is that Rancher recommends 1.6.1 or newer
> > (which it is).
> >
> > https://rancher.com/docs/k3s/latest/en/known-issues/
> >
> > Might there be something missing in the kernel?
> >
> > I have these RPMs installed in my image:
> >
> > iptables
> > iptables-module-ip6t-ah
> > iptables-module-ip6t-dnat
> > iptables-module-ip6t-dnpt
> > iptables-module-ip6t-dst
> > iptables-module-ip6t-eui64
> > iptables-module-ip6t-frag
> > iptables-module-ip6t-hbh
> > iptables-module-ip6t-hl
> > iptables-module-ip6t-icmp6
> > iptables-module-ip6t-ipv6header
> > iptables-module-ip6t-log
> > iptables-module-ip6t-masquerade
> > iptables-module-ip6t-mh
> > iptables-module-ip6t-netmap
> > iptables-module-ip6t-redirect
> > iptables-module-ip6t-reject
> > iptables-module-ip6t-rt
> > iptables-module-ip6t-snat
> > iptables-module-ip6t-snpt
> > iptables-module-ip6t-srh
> > iptables-module-ipt-ah
> > iptables-module-ipt-clusterip
> > iptables-module-ipt-dnat
> > iptables-module-ipt-ecn
> > iptables-module-ipt-icmp
> > iptables-module-ipt-log
> > iptables-module-ipt-masquerade
> > iptables-module-ipt-netmap
> > iptables-module-ipt-realm
> > iptables-module-ipt-redirect
> > iptables-module-ipt-reject
> > iptables-module-ipt-snat
> > iptables-module-ipt-ttl
> > iptables-module-ipt-ulog
> > iptables-modules
> > iptables-module-xt-addrtype
> > iptables-module-xt-audit
> > iptables-module-xt-bpf
> > iptables-module-xt-cgroup
> > iptables-module-xt-checksum
> > iptables-module-xt-classify
> > iptables-module-xt-cluster
> > iptables-module-xt-comment
> > iptables-module-xt-connbytes
> > iptables-module-xt-connlimit
> > iptables-module-xt-connmark
> > iptables-module-xt-connsecmark
> > iptables-module-xt-conntrack
> > iptables-module-xt-cpu
> > iptables-module-xt-ct
> > iptables-module-xt-dccp
> > iptables-module-xt-devgroup
> > iptables-module-xt-dscp
> > iptables-module-xt-ecn
> > iptables-module-xt-esp
> > iptables-module-xt-hashlimit
> > iptables-module-xt-helper
> > iptables-module-xt-hmark
> > iptables-module-xt-idletimer
> > iptables-module-xt-ipcomp
> > iptables-module-xt-iprange
> > iptables-module-xt-ipvs
> > iptables-module-xt-led
> > iptables-module-xt-length
> > iptables-module-xt-limit
> > iptables-module-xt-mac
> > iptables-module-xt-mark
> > iptables-module-xt-multiport
> > iptables-module-xt-nfacct
> > iptables-module-xt-nflog
> > iptables-module-xt-nfqueue
> > iptables-module-xt-osf
> > iptables-module-xt-owner
> > iptables-module-xt-physdev
> > iptables-module-xt-pkttype
> > iptables-module-xt-policy
> > iptables-module-xt-quota
> > iptables-module-xt-rateest
> > iptables-module-xt-recent
> > iptables-module-xt-rpfilter
> > iptables-module-xt-sctp
> > iptables-module-xt-secmark
> > iptables-module-xt-set
> > iptables-module-xt-socket
> > iptables-module-xt-standard
> > iptables-module-xt-statistic
> > iptables-module-xt-string
> > iptables-module-xt-synproxy
> > iptables-module-xt-tcp
> > iptables-module-xt-tcpmss
> > iptables-module-xt-tcpoptstrip
> > iptables-module-xt-tee
> > iptables-module-xt-time
> > iptables-module-xt-tos
> > iptables-module-xt-tproxy
> > iptables-module-xt-trace
> > iptables-module-xt-u32
> > iptables-module-xt-udp
> >
> > and in my kernel config, I have (apart from what is needed for running containers with
> > containerd):
> >
> > CONFIG_NETFILTER_NETLINK=m
> > CONFIG_NETFILTER_XT_MATCH_OWNER=m
> > CONFIG_NET_UDP_TUNNEL=m
> > CONFIG_NF_DUP_NETDEV=m
> > CONFIG_NF_LOG_BRIDGE=m
> > CONFIG_NF_TABLES_ARP=y
> > CONFIG_NF_TABLES_BRIDGE=y
> > CONFIG_NF_TABLES_INET=y
> > CONFIG_NF_TABLES_IPV4=y
> > CONFIG_NF_TABLES_IPV6=y
> > CONFIG_NF_TABLES=m
> > CONFIG_NF_TABLES_NETDEV=y
> > CONFIG_NFT_BRIDGE_REJECT=m
> > CONFIG_NFT_CHAIN_NAT_IPV4=m
> > CONFIG_NFT_CHAIN_ROUTE_IPV4=m
> > CONFIG_NFT_CHAIN_ROUTE_IPV6=m
> > CONFIG_NFT_COMPAT=m
> > CONFIG_NFT_COUNTER=m
> > CONFIG_NFT_CT=m
> > CONFIG_NFT_DUP_IPV4=m
> > CONFIG_NFT_DUP_IPV6=m
> > CONFIG_NFT_DUP_NETDEV=m
> > # CONFIG_NFT_EXTHDR is not set
> > CONFIG_NFT_FIB_INET=m
> > CONFIG_NFT_FIB_IPV4=m
> > CONFIG_NFT_FIB_IPV6=m
> > CONFIG_NFT_FIB_NETDEV=m
> > CONFIG_NFT_FWD_NETDEV=m
> > CONFIG_NFT_HASH=m
> > CONFIG_NFT_LIMIT=m
> > CONFIG_NFT_LOG=m
> > CONFIG_NFT_MASQ_IPV4=m
> > CONFIG_NFT_MASQ=m
> > # CONFIG_NFT_META is not set
> > CONFIG_NFT_NAT=m
> > CONFIG_NFT_NUMGEN=m
> > # CONFIG_NFT_OBJREF is not set
> > CONFIG_NFT_QUEUE=m
> > CONFIG_NFT_QUOTA=m
> > CONFIG_NFT_REDIR_IPV4=m
> > CONFIG_NFT_REDIR=m
> > CONFIG_NFT_REJECT=m
> > # CONFIG_NFT_RT is not set
> > # CONFIG_NFT_SET_BITMAP is not set
> > # CONFIG_NFT_SET_HASH is not set
> > # CONFIG_NFT_SET_RBTREE is not set
> > CONFIG_OVERLAY_FS=m
> > CONFIG_STP=m
> >
> > BR,
> >
> > /Joakim



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-10-26 15:46                                         ` Bruce Ashfield
  2020-10-28  8:32                                           ` Joakim Roubert
@ 2020-11-10  6:43                                           ` Lance Yang
  2020-11-10 12:46                                             ` Bruce Ashfield
       [not found]                                             ` <16462648E2B320A8.24110@lists.yoctoproject.org>
  1 sibling, 2 replies; 73+ messages in thread
From: Lance Yang @ 2020-11-10  6:43 UTC (permalink / raw)
  To: bruce.ashfield, Joakim Roubert
  Cc: meta-virtualization, Michael Zhao, Kaly Xin

Hi Bruce and Joakim,

Thanks for sharing the k3s-wip branch. I have tested it against my Yocto build.

My image: Linux qemuarm64, built by Yocto.

The master node becomes ready after I start the k3s server. However, the pods in kube-system (which are essential components for k3s) cannot reach the ready state on qemuarm64.

After the master node itself reached the ready state, I checked the pods with kubectl:

kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
qemuarm64   Ready    master   11m   v1.18.9-k3s1
root@qemuarm64:~# ls
root@qemuarm64:~# kubectl get pods -n kube-system
NAME                                     READY   STATUS              RESTARTS   AGE
local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m

Then I described the pods and saw these events:

Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-tlrm9 to qemuarm64
  Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-b7nlh config-volume]: timed out waiting for the condition
  Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume coredns-token-b7nlh]: timed out waiting for the condition
  Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH

The error says the "mount" binary cannot be found in $PATH. However, I confirmed that $PATH and the mount binary are fine on my qemuarm64 image:

root@qemuarm64:~# echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
root@qemuarm64:~# which mount
/bin/mount

When I typed the mount command, it worked fine:

/dev/root on / type ext4 (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
...
... (skipped the verbose output)

I would like to know whether you have ever run into this "mount" issue?
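One possibility worth checking: the kubelet resolves "mount" against the PATH of the k3s process itself, which can differ from an interactive shell's PATH. A sketch that emulates that lookup (the restricted PATH value below is purely hypothetical):

```shell
# Sketch: emulate an exec-style PATH lookup, as kubelet's mount helper does.
lookup() (
    IFS=:
    for d in $2; do
        [ -x "$d/$1" ] && { printf '%s/%s\n' "$d" "$1"; exit 0; }
    done
    exit 1
)

# Hypothetical restricted PATH, e.g. from a minimal service unit:
lookup mount "/usr/local/bin:/opt/bin" \
    || echo 'mount: executable file not found in $PATH'

# On the real target, the k3s process environment can be inspected with:
#   tr '\0' '\n' < /proc/$(pgrep -o k3s)/environ | grep '^PATH='
```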

Best Regards,
Lance

> -----Original Message-----
> From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> Sent: Monday, October 26, 2020 11:46 PM
> To: Joakim Roubert <joakim.roubert@axis.com>
> Cc: meta-virtualization@yoctoproject.org
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> >
> > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > Ha!!!!
> > >
> > > This applies.
> >
> > Wonderful, thank you! I guess this is what is called "five times lucky"...
> >
> > > I'm now testing and completing some of my networking factoring, as
> > > well as importing / forking some recipes to avoid extra layer
> > > depends.
> >
> > Excellent!
>
> I've pushed some of my WIP to:
> https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
>
> That includes the split of the networking, the import of some of the dependencies and some
> small tweaks I'm working on.
>
> I did have a couple of questions on the k3s packaging itself, I was getting the following
> error:
>
> ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> Files/directories were installed but not shipped in any package:
>   /usr/local/bin/k3s-clean
>   /usr/local/bin/crictl
>   /usr/local/bin/kubectl
>   /usr/local/bin/k3s
>
> So I added them to the FILES of the k3s package itself (so both k3s-server and k3s-agent will get
> them), is that the split you were looking for ?
>
> Bruce
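A hypothetical recipe fragment illustrating that split (illustrative only, not the actual recipe contents; assumes k3s-server and k3s-agent pull in the base ${PN} package):

```bitbake
# Sketch: ship the QA-flagged binaries in the main k3s package so that
# both k3s-server and k3s-agent get them.
FILES_${PN} += " \
    /usr/local/bin/k3s \
    /usr/local/bin/k3s-clean \
    /usr/local/bin/crictl \
    /usr/local/bin/kubectl \
"
```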
>
> >
> > BR,
> >
> > /Joakim
> > --
> > Joakim Roubert
> > Senior Engineer
> >
> > Axis Communications AB
> > Emdalavägen 14, SE-223 69 Lund, Sweden
> > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > Fax: +46 46 13 61 30, www.axis.com
> >
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-09 13:45                                                   ` Bruce Ashfield
@ 2020-11-10  8:45                                                     ` Lance Yang
  0 siblings, 0 replies; 73+ messages in thread
From: Lance Yang @ 2020-11-10  8:45 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization, joakim.roubert, Michael Zhao, nd

Hi Bruce,

Thanks for your reply. I installed "kernel-modules" in my local.conf:

IMAGE_INSTALL_append = " ...  \ (skip something unrelated)
                         iptables kernel-modules k3s"

So, since every module gets installed, I am not able to figure out which specific ones are missing. But hopefully my investigation can help a little bit.

I referred to the file iptables_1.8.5.bb and found something like:

RDEPENDS_${PN} = "${PN}-module-xt-standard"
RRECOMMENDS_${PN} = " \
    ${PN}-modules \
    kernel-module-x-tables \
    kernel-module-ip-tables \
    kernel-module-iptable-filter \
    kernel-module-iptable-nat \
    kernel-module-nf-defrag-ipv4 \
    kernel-module-nf-conntrack \
    kernel-module-nf-conntrack-ipv4 \
    kernel-module-nf-nat \
    kernel-module-ipt-masquerade \
    ${@bb.utils.contains('PACKAGECONFIG', 'ipv6', '\
        kernel-module-ip6table-filter \
        kernel-module-ip6-tables \
    ', '', d)} \

I am not sure whether we need to install kernel-module-x-tables manually or something like that. It might be worth installing "kernel-modules" directly first, to confirm whether this issue is related to missing kernel modules, and then figuring out which specific netfilter/iptables/xtables modules are needed. What do you think?
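To make that concrete, here is a small sketch for diffing a candidate module list against the lsmod output on the target (the candidate list is illustrative, not exhaustive):

```shell
# Sketch: report which candidate modules are absent from an lsmod listing.
# $1: loaded module names (one per line), $2: space-separated candidates.
check_loaded() {
    for m in $2; do
        printf '%s\n' "$1" | grep -qx "$m" || echo "not loaded: $m"
    done
}

# On the target (assumes lsmod is available):
check_loaded "$(lsmod | awk 'NR>1 {print $1}')" \
    "x_tables ip_tables iptable_filter iptable_nat nf_conntrack xt_comment"
```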

Best Regards,
Lance Yang


> -----Original Message-----
> From: Bruce Ashfield <bruce.ashfield@gmail.com>
> Sent: Monday, November 9, 2020 9:46 PM
> To: Lance Yang <Lance.Yang@arm.com>
> Cc: meta-virtualization@yoctoproject.org; joakim.roubert@axis.com; nd <nd@arm.com>
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On Mon, Nov 9, 2020 at 4:26 AM Lance Yang <Lance.Yang@arm.com> wrote:
> >
> > Hi Bruce,
> >
> > For the iptables issue, I tested iptables.
> >
> > As the iptables comment module belongs to the iptables extensions, I checked my kernel
> > config and set the parameter CONFIG_NETFILTER_XT_MATCH_COMMENT=m.
> >
> > iptables -V
> > iptables v1.8.5 (legacy)
> >
> > I used this iptables command to check
> >
> > iptables -A INPUT -p tcp --dport 22 -m comment --comment "SSH" -j
> > ACCEPT
> >
> > It works fine from my side.
>
> Yah, that's what I assumed it was as well, and yet when I added it in ... I didn't see a change.
>
> That being said, this is helpful, so I started a clean build to see if I had picked up something
> stale that was masking my fix.
>
> Bruce
>
> >
> > Best Regards,
> > Lance
> >
> > > -----Original Message-----
> > > From: meta-virtualization@lists.yoctoproject.org
> > > <meta-virtualization@lists.yoctoproject.org>
> > > On Behalf Of Joakim Roubert via lists.yoctoproject.org
> > > Sent: Monday, November 9, 2020 3:49 PM
> > > To: Bruce Ashfield <bruce.ashfield@gmail.com>
> > > Cc: meta-virtualization@yoctoproject.org
> > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > >
> > > On 2020-11-06 22:20, Bruce Ashfield wrote:
> > > >
> > > > I now have another 6 or 7 WIP patches on top of this to try and
> > > > get a single node "cluster" working with k3s. I'll clean them up
> > > > and get them into the k3s WIP branch shortly.
> > >
> > > Awesome!
> > >
> > > > In your working references, which iptables do you have installed ?
> > > > (legacy ? nftables?)
> > >
> > > # iptables --version
> > > iptables v1.8.4 (legacy)
> > >
> > > > I'm failing to get flannel to start, with a series of errors like this:
> > > >
> > > > -----------
> > > > I1106 21:19:00.985656   10641 eviction_manager.go:351] eviction
> > > > manager: able to reduce ephemeral-storage pressure without
> > > > evicting pods.
> > > > E1106 21:19:10.636899   10641 proxier.go:841] Failed to ensure that
> > > > filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking
> > > > rule: exit status 2: iptables v1.8.6 (legacy): Couldn't load match
> > > > `comment':No such y Try `iptables -h' or 'iptables --help' for
> > > > more information.
> > > > I1106 21:19:10.641647   10641 proxier.go:825] Sync failed; retrying in
> > > > 30s
> > > > ------------
> > >
> > > This is a bit strange, as it seems you are running in legacy mode
> > > too, although a somewhat newer version than I have, and the only
> > > thing I know is that Rancher recommends 1.6.1 or newer (which it is).
> > >
> > > https://rancher.com/docs/k3s/latest/en/known-issues/
> > >
> > > Might there be something missing in the kernel?
> > >
> > > I have these RPMs installed in my image:
> > >
> > > iptables
> > > iptables-module-ip6t-ah
> > > iptables-module-ip6t-dnat
> > > iptables-module-ip6t-dnpt
> > > iptables-module-ip6t-dst
> > > iptables-module-ip6t-eui64
> > > iptables-module-ip6t-frag
> > > iptables-module-ip6t-hbh
> > > iptables-module-ip6t-hl
> > > iptables-module-ip6t-icmp6
> > > iptables-module-ip6t-ipv6header
> > > iptables-module-ip6t-log
> > > iptables-module-ip6t-masquerade
> > > iptables-module-ip6t-mh
> > > iptables-module-ip6t-netmap
> > > iptables-module-ip6t-redirect
> > > iptables-module-ip6t-reject
> > > iptables-module-ip6t-rt
> > > iptables-module-ip6t-snat
> > > iptables-module-ip6t-snpt
> > > iptables-module-ip6t-srh
> > > iptables-module-ipt-ah
> > > iptables-module-ipt-clusterip
> > > iptables-module-ipt-dnat
> > > iptables-module-ipt-ecn
> > > iptables-module-ipt-icmp
> > > iptables-module-ipt-log
> > > iptables-module-ipt-masquerade
> > > iptables-module-ipt-netmap
> > > iptables-module-ipt-realm
> > > iptables-module-ipt-redirect
> > > iptables-module-ipt-reject
> > > iptables-module-ipt-snat
> > > iptables-module-ipt-ttl
> > > iptables-module-ipt-ulog
> > > iptables-modules
> > > iptables-module-xt-addrtype
> > > iptables-module-xt-audit
> > > iptables-module-xt-bpf
> > > iptables-module-xt-cgroup
> > > iptables-module-xt-checksum
> > > iptables-module-xt-classify
> > > iptables-module-xt-cluster
> > > iptables-module-xt-comment
> > > iptables-module-xt-connbytes
> > > iptables-module-xt-connlimit
> > > iptables-module-xt-connmark
> > > iptables-module-xt-connsecmark
> > > iptables-module-xt-conntrack
> > > iptables-module-xt-cpu
> > > iptables-module-xt-ct
> > > iptables-module-xt-dccp
> > > iptables-module-xt-devgroup
> > > iptables-module-xt-dscp
> > > iptables-module-xt-ecn
> > > iptables-module-xt-esp
> > > iptables-module-xt-hashlimit
> > > iptables-module-xt-helper
> > > iptables-module-xt-hmark
> > > iptables-module-xt-idletimer
> > > iptables-module-xt-ipcomp
> > > iptables-module-xt-iprange
> > > iptables-module-xt-ipvs
> > > iptables-module-xt-led
> > > iptables-module-xt-length
> > > iptables-module-xt-limit
> > > iptables-module-xt-mac
> > > iptables-module-xt-mark
> > > iptables-module-xt-multiport
> > > iptables-module-xt-nfacct
> > > iptables-module-xt-nflog
> > > iptables-module-xt-nfqueue
> > > iptables-module-xt-osf
> > > iptables-module-xt-owner
> > > iptables-module-xt-physdev
> > > iptables-module-xt-pkttype
> > > iptables-module-xt-policy
> > > iptables-module-xt-quota
> > > iptables-module-xt-rateest
> > > iptables-module-xt-recent
> > > iptables-module-xt-rpfilter
> > > iptables-module-xt-sctp
> > > iptables-module-xt-secmark
> > > iptables-module-xt-set
> > > iptables-module-xt-socket
> > > iptables-module-xt-standard
> > > iptables-module-xt-statistic
> > > iptables-module-xt-string
> > > iptables-module-xt-synproxy
> > > iptables-module-xt-tcp
> > > iptables-module-xt-tcpmss
> > > iptables-module-xt-tcpoptstrip
> > > iptables-module-xt-tee
> > > iptables-module-xt-time
> > > iptables-module-xt-tos
> > > iptables-module-xt-tproxy
> > > iptables-module-xt-trace
> > > iptables-module-xt-u32
> > > iptables-module-xt-udp
> > >
> > > and in my kernel config, I have (apart from what is needed for
> > > running containers with
> > > containerd):
> > >
> > > CONFIG_NETFILTER_NETLINK=m
> > > CONFIG_NETFILTER_XT_MATCH_OWNER=m
> > > CONFIG_NET_UDP_TUNNEL=m
> > > CONFIG_NF_DUP_NETDEV=m
> > > CONFIG_NF_LOG_BRIDGE=m
> > > CONFIG_NF_TABLES_ARP=y
> > > CONFIG_NF_TABLES_BRIDGE=y
> > > CONFIG_NF_TABLES_INET=y
> > > CONFIG_NF_TABLES_IPV4=y
> > > CONFIG_NF_TABLES_IPV6=y
> > > CONFIG_NF_TABLES=m
> > > CONFIG_NF_TABLES_NETDEV=y
> > > CONFIG_NFT_BRIDGE_REJECT=m
> > > CONFIG_NFT_CHAIN_NAT_IPV4=m
> > > CONFIG_NFT_CHAIN_ROUTE_IPV4=m
> > > CONFIG_NFT_CHAIN_ROUTE_IPV6=m
> > > CONFIG_NFT_COMPAT=m
> > > CONFIG_NFT_COUNTER=m
> > > CONFIG_NFT_CT=m
> > > CONFIG_NFT_DUP_IPV4=m
> > > CONFIG_NFT_DUP_IPV6=m
> > > CONFIG_NFT_DUP_NETDEV=m
> > > # CONFIG_NFT_EXTHDR is not set
> > > CONFIG_NFT_FIB_INET=m
> > > CONFIG_NFT_FIB_IPV4=m
> > > CONFIG_NFT_FIB_IPV6=m
> > > CONFIG_NFT_FIB_NETDEV=m
> > > CONFIG_NFT_FWD_NETDEV=m
> > > CONFIG_NFT_HASH=m
> > > CONFIG_NFT_LIMIT=m
> > > CONFIG_NFT_LOG=m
> > > CONFIG_NFT_MASQ_IPV4=m
> > > CONFIG_NFT_MASQ=m
> > > # CONFIG_NFT_META is not set
> > > CONFIG_NFT_NAT=m
> > > CONFIG_NFT_NUMGEN=m
> > > # CONFIG_NFT_OBJREF is not set
> > > CONFIG_NFT_QUEUE=m
> > > CONFIG_NFT_QUOTA=m
> > > CONFIG_NFT_REDIR_IPV4=m
> > > CONFIG_NFT_REDIR=m
> > > CONFIG_NFT_REJECT=m
> > > # CONFIG_NFT_RT is not set
> > > # CONFIG_NFT_SET_BITMAP is not set
> > > # CONFIG_NFT_SET_HASH is not set
> > > # CONFIG_NFT_SET_RBTREE is not set
> > > CONFIG_OVERLAY_FS=m
> > > CONFIG_STP=m
> > >
> > > BR,
> > >
> > > /Joakim
> > IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may
> also be privileged. If you are not the intended recipient, please notify the sender immediately
> and do not disclose the contents to any other person, use it for any purpose, or store or copy
> the information in any medium. Thank you.
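
[Editorial aside: in a Yocto build, kernel options like those Joakim lists are normally carried as a linux-yocto configuration fragment rather than set by hand — the k3s-wip branch discussed later in this thread ships one as recipes-kernel/linux/linux-yocto/kubernetes.cfg. A minimal, hypothetical sketch; the fragment name and paths are illustrative, and the override syntax is the pre-Honister form in use in 2020:]

```
# linux-yocto_%.bbappend (sketch)
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI_append = " file://k3s-netfilter.cfg"

# files/k3s-netfilter.cfg -- hypothetical fragment carrying a subset of
# the options quoted above; linux-yocto merges .cfg fragments listed in
# SRC_URI into the final .config during kernel configuration.
CONFIG_NF_TABLES=m
CONFIG_NFT_NAT=m
CONFIG_OVERLAY_FS=m
```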
>
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-10  6:43                                           ` Lance Yang
@ 2020-11-10 12:46                                             ` Bruce Ashfield
       [not found]                                             ` <16462648E2B320A8.24110@lists.yoctoproject.org>
  1 sibling, 0 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-10 12:46 UTC (permalink / raw)
  To: Lance Yang; +Cc: Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin

On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
>
> Hi Bruce and Joakim,
>
> Thanks for sharing this branch: k3s-wip. I have tested against my yocto build.

The branch will be more functional shortly; I have quite a few changes
to factor things for k8s and to make it generally more usable :D

        modified:   classes/cni_networking.bbclass
        modified:   conf/layer.conf
        modified:   recipes-containers/containerd/containerd-docker_git.bb
        modified:   recipes-containers/containerd/containerd-opencontainers_git.bb
        modified:   recipes-containers/k3s/README.md
        modified:   recipes-containers/k3s/k3s_git.bb
        modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
        modified:   recipes-networking/cni/cni_git.bb
        container-deploy.txt
        recipes-core/packagegroups/

>
> My Image: Linux qemuarm64 by yocto.
>
> The master node can be ready after I started the k3s server. However, the pods in kube-system (which are essential components for k3s) cannot turn to ready state on qemuarm64.
>

That's interesting, since in my configuration, the master never comes ready:

root@qemux86-64:~# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
qemux86-64   NotReady   master   15h   v1.18.9-k3s1

I've sorted out more of the dependencies, and have packagegroups to
make them easier now.

Hopefully, I can figure out what is now missing and keeping my master
from moving into ready today.

Bruce

> After the master node itself turned to ready state, I check the pods with kubectl:
>
> kubectl get nodes
> NAME        STATUS   ROLES    AGE   VERSION
> qemuarm64   Ready    master   11m   v1.18.9-k3s1
> root@qemuarm64:~# ls
> root@qemuarm64:~# kubectl get pods -n kube-system
> NAME                                     READY   STATUS              RESTARTS   AGE
> local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
>
> Then I describe the pods with:
>
> Events:
>   Type     Reason       Age                  From               Message
>   ----     ------       ----                 ----               -------
>   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-tlrm9 to qemuarm64
>   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-b7nlh config-volume]: timed out waiting for the condition
>   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume coredns-token-b7nlh]: timed out waiting for the condition
>   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH
>
> I found the "mount" binary is not found in $PATH. However, I confirmed the $PATH and mount binary on my qemuarm64 image:
>
> root@qemuarm64:~# echo $PATH
> /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> root@qemuarm64:~# which mount
> /bin/mount
>
> When I type mount command, it worked fine:
>
> /dev/root on / type ext4 (rw,relatime)
> devtmpfs on /dev type devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> proc on /proc type proc (rw,relatime)
> sysfs on /sys type sysfs (rw,relatime)
> debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> ...
> ... (skipped the verbose output)
>
> I would like to know whether you have met this "mount" issue ever?
>
> Best Regards,
> Lance
>
> > -----Original Message-----
> > From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > Sent: Monday, October 26, 2020 11:46 PM
> > To: Joakim Roubert <joakim.roubert@axis.com>
> > Cc: meta-virtualization@yoctoproject.org
> > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> >
> > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > >
> > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > Ha!!!!
> > > >
> > > > This applies.
> > >
> > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > >
> > > > I'm now testing and completing some of my networking factoring, as
> > > > well as importing / forking some recipes to avoid extra layer
> > > > depends.
> > >
> > > Excellent!
> >
> > I've pushed some of my WIP to:
> > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
> >
> > That includes the split of the networking, the import of some of the dependencies and some
> > small tweaks I'm working on.
> >
> > I did have a couple of questions on the k3s packaging itself, I was getting the following
> > error:
> >
> > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > Files/directories were installed but not shipped in any package:
> >   /usr/local/bin/k3s-clean
> >   /usr/local/bin/crictl
> >   /usr/local/bin/kubectl
> >   /usr/local/bin/k3s
> >
> > So I added them to the FILES of the k3s package itself (so both k3s-server and k3s-agent will get
> > them), is that the split you were looking for ?
> >
> > Bruce
> >
> > >
> > > BR,
> > >
> > > /Joakim
> > > --
> > > Joakim Roubert
> > > Senior Engineer
> > >
> > > Axis Communications AB
> > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > Fax: +46 46 13 61 30, www.axis.com
> > >
> > --
> > - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> > - "Use the force Harry" - Gandalf, Star Trek II



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread
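[Editorial aside on the FILES packaging question quoted in the message above: the binaries install under /usr/local/bin, so shipping them explicitly in the main package silences the installed-vs-shipped do_package QA error. A hypothetical sketch of the recipe fragment — the final k3s_git.bb may split things differently:]

```
# k3s_git.bb fragment (sketch) -- ship the installed binaries so the
# do_package QA check passes; as Bruce describes, k3s-server and
# k3s-agent then both pull them in via their dependency on ${PN}.
FILES_${PN} += " \
    /usr/local/bin/k3s \
    /usr/local/bin/k3s-clean \
    /usr/local/bin/crictl \
    /usr/local/bin/kubectl \
"
```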

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
       [not found]                                             ` <16462648E2B320A8.24110@lists.yoctoproject.org>
@ 2020-11-10 13:17                                               ` Bruce Ashfield
  2020-11-12  7:30                                                 ` Lance Yang
  2020-11-12 13:40                                                 ` Joakim Roubert
       [not found]                                               ` <164627F27D18DB55.10479@lists.yoctoproject.org>
  1 sibling, 2 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-10 13:17 UTC (permalink / raw)
  To: Bruce Ashfield
  Cc: Lance Yang, Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin

On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via
lists.yoctoproject.org
<bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
>
> On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> >
> > Hi Bruce and Joakim,
> >
> > Thanks for sharing this branch: k3s-wip. I have tested against my yocto build.
>
> The branch will be more functional shortly, I have quite a few changes
> to factor things for
> k8s and generally more usable :D
>
>         modified:   classes/cni_networking.bbclass
>         modified:   conf/layer.conf
>         modified:   recipes-containers/containerd/containerd-docker_git.bb
>         modified:
> recipes-containers/containerd/containerd-opencontainers_git.bb
>         modified:   recipes-containers/k3s/README.md
>         modified:   recipes-containers/k3s/k3s_git.bb
>         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
>         modified:   recipes-networking/cni/cni_git.bb
>         container-deploy.txt
>         recipes-core/packagegroups/
>
> >
> > My Image: Linux qemuarm64 by yocto.
> >
> > The master node can be ready after I started the k3s server. However, the pods in kube-system (which are essential components for k3s) cannot turn to ready state on qemuarm64.
> >
>
> That's interesting, since in my configuration, the master never comes ready:
>
> root@qemux86-64:~# kubectl get nodes
> NAME         STATUS     ROLES    AGE   VERSION
> qemux86-64   NotReady   master   15h   v1.18.9-k3s1
>

Hah.

I finally got the node to show up as ready:

root@qemux86-64:~# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
qemux86-64   Ready    master   112s   v1.18.9-k3s1

I'm attempting to build an all-in-one node, and that is likely causing
me some issues.

I'm revisiting those potential conflicts now.

But if anyone else does have an all-in-one node working and has some tips,
feel free to share :D

Bruce

> I've sorted out more of the dependencies, and have packagegroups to
> make them easier
> now.
>
> Hopefully, I can figure out what is now missing and keeping my master
> from moving into
> ready today.
>
> Bruce
>
> > After the master node itself turned to ready state, I check the pods with kubectl:
> >
> > kubectl get nodes
> > NAME        STATUS   ROLES    AGE   VERSION
> > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > root@qemuarm64:~# ls
> > root@qemuarm64:~# kubectl get pods -n kube-system
> > NAME                                     READY   STATUS              RESTARTS   AGE
> > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> >
> > Then I describe the pods with:
> >
> > Events:
> >   Type     Reason       Age                  From               Message
> >   ----     ------       ----                 ----               -------
> >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-tlrm9 to qemuarm64
> >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-b7nlh config-volume]: timed out waiting for the condition
> >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume coredns-token-b7nlh]: timed out waiting for the condition
> >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH
> >
> > I found the "mount" binary is not found in $PATH. However, I confirmed the $PATH and mount binary on my qemuarm64 image:
> >
> > root@qemuarm64:~# echo $PATH
> > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > root@qemuarm64:~# which mount
> > /bin/mount
> >
> > When I type mount command, it worked fine:
> >
> > /dev/root on / type ext4 (rw,relatime)
> > devtmpfs on /dev type devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > proc on /proc type proc (rw,relatime)
> > sysfs on /sys type sysfs (rw,relatime)
> > debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> > tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> > ...
> > ... (skipped the verbose output)
> >
> > I would like to know whether you have met this "mount" issue ever?
> >
> > Best Regards,
> > Lance
> >
> > > -----Original Message-----
> > > From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > Sent: Monday, October 26, 2020 11:46 PM
> > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > Cc: meta-virtualization@yoctoproject.org
> > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > >
> > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > > >
> > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > Ha!!!!
> > > > >
> > > > > This applies.
> > > >
> > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > >
> > > > > I'm now testing and completing some of my networking factoring, as
> > > > > well as importing / forking some recipes to avoid extra layer
> > > > > depends.
> > > >
> > > > Excellent!
> > >
> > > I've pushed some of my WIP to:
> > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
> > >
> > > That includes the split of the networking, the import of some of the dependencies and some
> > > small tweaks I'm working on.
> > >
> > > I did have a couple of questions on the k3s packaging itself, I was getting the following
> > > error:
> > >
> > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > Files/directories were installed but not shipped in any package:
> > >   /usr/local/bin/k3s-clean
> > >   /usr/local/bin/crictl
> > >   /usr/local/bin/kubectl
> > >   /usr/local/bin/k3s
> > >
> > > So I added them to the FILES of the k3s package itself (so both k3s-server and k3s-agent will get
> > > them), is that the split you were looking for ?
> > >
> > > Bruce
> > >
> > > >
> > > > BR,
> > > >
> > > > /Joakim
> > > > --
> > > > Joakim Roubert
> > > > Senior Engineer
> > > >
> > > > Axis Communications AB
> > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > Fax: +46 46 13 61 30, www.axis.com
> > > >
> > > --
> > > - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> > > - "Use the force Harry" - Gandalf, Star Trek II
>
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await
> thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II
>
> 
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
       [not found]                                               ` <164627F27D18DB55.10479@lists.yoctoproject.org>
@ 2020-11-10 13:34                                                 ` Bruce Ashfield
  2020-11-11 10:06                                                   ` Lance Yang
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-10 13:34 UTC (permalink / raw)
  To: Bruce Ashfield
  Cc: Lance Yang, Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin

On Tue, Nov 10, 2020 at 8:17 AM Bruce Ashfield via
lists.yoctoproject.org
<bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
>
> On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via
> lists.yoctoproject.org
> <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> >
> > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > >
> > > Hi Bruce and Joakim,
> > >
> > > Thanks for sharing this branch: k3s-wip. I have tested against my yocto build.
> >
> > The branch will be more functional shortly, I have quite a few changes
> > to factor things for
> > k8s and generally more usable :D
> >
> >         modified:   classes/cni_networking.bbclass
> >         modified:   conf/layer.conf
> >         modified:   recipes-containers/containerd/containerd-docker_git.bb
> >         modified:
> > recipes-containers/containerd/containerd-opencontainers_git.bb
> >         modified:   recipes-containers/k3s/README.md
> >         modified:   recipes-containers/k3s/k3s_git.bb
> >         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
> >         modified:   recipes-networking/cni/cni_git.bb
> >         container-deploy.txt
> >         recipes-core/packagegroups/
> >
> > >
> > > My Image: Linux qemuarm64 by yocto.
> > >
> > > The master node can be ready after I started the k3s server. However, the pods in kube-system (which are essential components for k3s) cannot turn to ready state on qemuarm64.
> > >
> >
> > That's interesting, since in my configuration, the master never comes ready:
> >
> > root@qemux86-64:~# kubectl get nodes
> > NAME         STATUS     ROLES    AGE   VERSION
> > qemux86-64   NotReady   master   15h   v1.18.9-k3s1
> >
>
> Hah.
>
> I finally got the node to show up as ready:
>
> root@qemux86-64:~# kubectl get nodes
> NAME         STATUS   ROLES    AGE    VERSION
> qemux86-64   Ready    master   112s   v1.18.9-k3s1
>

Lance,

What image type were you building? I'm pulling in dependencies to
packagegroups and the recipes themselves.

I'm not seeing the mount issue on my master/server node:

root@qemux86-64:~# kubectl get pods -n kube-system
NAME                                     READY   STATUS      RESTARTS   AGE
local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          3m32s
metrics-server-7566d596c8-mwntr          1/1     Running     0          3m32s
helm-install-traefik-229v7               0/1     Completed   0          3m32s
coredns-7944c66d8d-9rfj7                 1/1     Running     0          3m32s
svclb-traefik-pb5j4                      2/2     Running     0          2m29s
traefik-758cd5fc85-lxpr8                 1/1     Running     0          2m29s

I'm going back to all-in-one node debugging, but can look into the
mount issue more later.

Bruce

> I'm attempting to build an all-in-one node, and that is likely causing
> me some issues.
>
> I'm revisiting those potential conflicts now.
>
> But if anyone else does have an all in one working and has some tips,
> feel free to share :D
>
> Bruce
>
> > I've sorted out more of the dependencies, and have packagegroups to
> > make them easier
> > now.
> >
> > Hopefully, I can figure out what is now missing and keeping my master
> > from moving into
> > ready today.
> >
> > Bruce
> >
> > > After the master node itself turned to ready state, I check the pods with kubectl:
> > >
> > > kubectl get nodes
> > > NAME        STATUS   ROLES    AGE   VERSION
> > > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > > root@qemuarm64:~# ls
> > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > >
> > > Then I describe the pods with:
> > >
> > > Events:
> > >   Type     Reason       Age                  From               Message
> > >   ----     ------       ----                 ----               -------
> > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-b7nlh config-volume]: timed out waiting for the condition
> > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume coredns-token-b7nlh]: timed out waiting for the condition
> > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH
> > >
> > > I found the "mount" binary is not found in $PATH. However, I confirmed the $PATH and mount binary on my qemuarm64 image:
> > >
> > > root@qemuarm64:~# echo $PATH
> > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > root@qemuarm64:~# which mount
> > > /bin/mount
> > >
> > > When I type mount command, it worked fine:
> > >
> > > /dev/root on / type ext4 (rw,relatime)
> > > devtmpfs on /dev type devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > proc on /proc type proc (rw,relatime)
> > > sysfs on /sys type sysfs (rw,relatime)
> > > debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> > > tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> > > ...
> > > ... (skipped the verbose output)
> > >
> > > I would like to know whether you have met this "mount" issue ever?
> > >
> > > Best Regards,
> > > Lance
> > >
> > > > -----Original Message-----
> > > > From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > > Cc: meta-virtualization@yoctoproject.org
> > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > > >
> > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > > > >
> > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > Ha!!!!
> > > > > >
> > > > > > This applies.
> > > > >
> > > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > > >
> > > > > > I'm now testing and completing some of my networking factoring, as
> > > > > > well as importing / forking some recipes to avoid extra layer
> > > > > > depends.
> > > > >
> > > > > Excellent!
> > > >
> > > > I've pushed some of my WIP to:
> > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
> > > >
> > > > That includes the split of the networking, the import of some of the dependencies and some
> > > > small tweaks I'm working on.
> > > >
> > > > I did have a couple of questions on the k3s packaging itself, I was getting the following
> > > > error:
> > > >
> > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > Files/directories were installed but not shipped in any package:
> > > >   /usr/local/bin/k3s-clean
> > > >   /usr/local/bin/crictl
> > > >   /usr/local/bin/kubectl
> > > >   /usr/local/bin/k3s
> > > >
> > > > So I added them to the FILES of the k3s package itself (so both k3s-server and k3s-agent will get
> > > > them), is that the split you were looking for ?
> > > >
> > > > Bruce
> > > >
> > > > >
> > > > > BR,
> > > > >
> > > > > /Joakim
> > > > > --
> > > > > Joakim Roubert
> > > > > Senior Engineer
> > > > >
> > > > > Axis Communications AB
> > > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > >
> > > > --
> > > > - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> > > > - "Use the force Harry" - Gandalf, Star Trek II
> >
> >
> >
> > --
> > - Thou shalt not follow the NULL pointer, for chaos and madness await
> > thee at its end
> > - "Use the force Harry" - Gandalf, Star Trek II
> >
> >
> >
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await
> thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II
>
> 
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-10 13:34                                                 ` Bruce Ashfield
@ 2020-11-11 10:06                                                   ` Lance Yang
  2020-11-11 13:40                                                     ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Lance Yang @ 2020-11-11 10:06 UTC (permalink / raw)
  To: bruce.ashfield
  Cc: Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin

Hi Bruce,

Thanks for your reply. It took me a long time to address the $PATH issue.

I found the cause after I cleaned my environment: I had set my own $PATH before k3s startup, and it did not include "/bin"; some components in k3s use os.Getenv("PATH") to locate the related tools.

Since some OSes may not have systemd, and the k3s-clean script does not seem to delete all interfaces created by CNI or unmount all mount points related to k3s, I wrote a script for this situation; I will send it in a separate email, based on your previous scripts.
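
[Editorial aside: the effect Lance describes can be reproduced without k3s at all. The "executable file not found in $PATH" text in the kubelet event is Go's PATH-lookup error: only the directories in the inherited PATH are searched, so a binary present on disk is invisible to a service started with a restricted PATH. A minimal shell sketch, where an empty directory stands in for the bad PATH:]

```shell
#!/bin/sh
# An existing binary becomes "not found" once PATH no longer covers its
# directory: PATH-based lookup, not the filesystem, decides visibility.
restricted=$(mktemp -d)   # empty directory standing in for a PATH without /bin
if PATH="$restricted" command -v mount >/dev/null 2>&1; then
    echo "mount found"
else
    echo "mount not found in restricted PATH"
fi
rmdir "$restricted"
```

[The fix is the one Lance arrived at: make sure the environment that launches the k3s server includes /bin and /usr/bin in PATH.]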

Best Regards,
Lance
> -----Original Message-----
> From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> Sent: Tuesday, November 10, 2020 9:35 PM
> To: Bruce Ashfield <bruce.ashfield@gmail.com>
> Cc: Lance Yang <Lance.Yang@arm.com>; Joakim Roubert <joakim.roubert@axis.com>; meta-
> virtualization@yoctoproject.org; Michael Zhao <Michael.Zhao@arm.com>; Kaly Xin
> <Kaly.Xin@arm.com>
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On Tue, Nov 10, 2020 at 8:17 AM Bruce Ashfield via lists.yoctoproject.org
> <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> >
> > On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via
> > lists.yoctoproject.org
> > <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > >
> > > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > > >
> > > > Hi Bruce and Joakim,
> > > >
> > > > Thanks for sharing this branch: k3s-wip. I have tested against my yocto build.
> > >
> > > The branch will be more functional shortly, I have quite a few
> > > changes to factor things for k8s and generally more usable :D
> > >
> > >         modified:   classes/cni_networking.bbclass
> > >         modified:   conf/layer.conf
> > >         modified:   recipes-containers/containerd/containerd-docker_git.bb
> > >         modified:
> > > recipes-containers/containerd/containerd-opencontainers_git.bb
> > >         modified:   recipes-containers/k3s/README.md
> > >         modified:   recipes-containers/k3s/k3s_git.bb
> > >         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
> > >         modified:   recipes-networking/cni/cni_git.bb
> > >         container-deploy.txt
> > >         recipes-core/packagegroups/
> > >
> > > >
> > > > My Image: Linux qemuarm64 by yocto.
> > > >
> > > > The master node can be ready after I started the k3s server. However, the pods in kube-
> system (which are essential components for k3s) cannot turn to ready state on qemuarm64.
> > > >
> > >
> > > That's interesting, since in my configuration, the master never comes ready:
> > >
> > > root@qemux86-64:~# kubectl get nodes
> > > NAME         STATUS     ROLES    AGE   VERSION
> > > qemux86-64   NotReady   master   15h   v1.18.9-k3s1
> > >
> >
> > Hah.
> >
> > I finally got the node to show up as ready:
> >
> > root@qemux86-64:~# kubectl get nodes
> > NAME         STATUS   ROLES    AGE    VERSION
> > qemux86-64   Ready    master   112s   v1.18.9-k3s1
> >
>
> Lance,
>
> What image type were you building ? I'm pulling in dependencies to packagegroups and the
> recipes themselves.
>
> I'm not seeing the mount issue on my master/server node:
>
> root@qemux86-64:~# kubectl get pods -n kube-system
> NAME                                     READY   STATUS      RESTARTS   AGE
> local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          3m32s
> metrics-server-7566d596c8-mwntr          1/1     Running     0          3m32s
> helm-install-traefik-229v7               0/1     Completed   0          3m32s
> coredns-7944c66d8d-9rfj7                 1/1     Running     0          3m32s
> svclb-traefik-pb5j4                      2/2     Running     0          2m29s
> traefik-758cd5fc85-lxpr8                 1/1     Running     0          2m29s
>
> I'm going back to all-in-one node debugging, but can look into the mount issue more later.
>
> Bruce
>
> > I'm attempting to build an all-in-one node, and that is likely causing
> > me some issues.
> >
> > I'm revisiting those potential conflicts now.
> >
> > But if anyone else does have an all in one working and has some tips,
> > feel free to share :D
> >
> > Bruce
> >
> > > I've sorted out more of the dependencies, and have packagegroups to
> > > make them easier now.
> > >
> > > Hopefully, I can figure out what is now missing and keeping my
> > > master from moving into ready today.
> > >
> > > Bruce
> > >
> > > > After the master node itself turned to ready state, I check the pods with kubectl:
> > > >
> > > > kubectl get nodes
> > > > NAME        STATUS   ROLES    AGE   VERSION
> > > > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > > > root@qemuarm64:~# ls
> > > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > > >
> > > > Then I described the pods and saw these events:
> > > >
> > > > Events:
> > > >   Type     Reason       Age                  From               Message
> > > >   ----     ------       ----                 ----               -------
> > > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-
> system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount
> volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-
> b7nlh config-volume]: timed out waiting for the condition
> > > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount
> volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume
> coredns-token-b7nlh]: timed out waiting for the condition
> > > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed
> for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in
> $PATH
> > > >
> > > > I found the "mount" binary is not found in $PATH. However, I confirmed the $PATH and
> mount binary on my qemuarm64 image:
> > > >
> > > > root@qemuarm64:~# echo $PATH
> > > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > root@qemuarm64:~# which mount
> > > > /bin/mount
> > > >
> > > > When I typed the mount command, it worked fine:
> > > >
> > > > /dev/root on / type ext4 (rw,relatime) devtmpfs on /dev type
> > > > devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > > proc on /proc type proc (rw,relatime) sysfs on /sys type sysfs
> > > > (rw,relatime) debugfs on /sys/kernel/debug type debugfs
> > > > (rw,relatime) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> > > > ...
> > > > ... (skipped the verbose output)
> > > >
> > > > I would like to know whether you have ever encountered this "mount" issue.
> > > >
> > > > Best Regards,
> > > > Lance
> > > >
> > > > > -----Original Message-----
> > > > > From: meta-virtualization@lists.yoctoproject.org
> > > > > <meta-virtualization@lists.yoctoproject.org>
> > > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > > > Cc: meta-virtualization@yoctoproject.org
> > > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > > > >
> > > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > > > > >
> > > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > > Ha!!!!
> > > > > > >
> > > > > > > This applies.
> > > > > >
> > > > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > > > >
> > > > > > > I'm now testing and completing some of my networking
> > > > > > > factoring, as well as importing / forking some recipes to
> > > > > > > avoid extra layer depends.
> > > > > >
> > > > > > Excellent!
> > > > >
> > > > > I've pushed some of my WIP to:
> > > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/l
> > > > > og/?h=k3s-wip
> > > > >
> > > > > That includes the split of the networking, the import of some of
> > > > > the dependencies and some small tweaks I'm working on.
> > > > >
> > > > > I did have a couple of questions on the k3s packaging itself; I
> > > > > was getting the following
> > > > > error:
> > > > >
> > > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > > Files/directories were installed but not shipped in any package:
> > > > >   /usr/local/bin/k3s-clean
> > > > >   /usr/local/bin/crictl
> > > > >   /usr/local/bin/kubectl
> > > > >   /usr/local/bin/k3s
> > > > >
> > > > > So I added them to the FILES of the k3s package itself (so both
> > > > > k3s-server and k3s-agent will get them), is that the split you were looking for?
> > > > >
> > > > > Bruce
> > > > >
> > > > > >
> > > > > > BR,
> > > > > >
> > > > > > /Joakim
> > > > > > --
> > > > > > Joakim Roubert
> > > > > > Senior Engineer
> > > > > >
> > > > > > Axis Communications AB
> > > > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > > >
> > > > > --
> > > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > > await thee at its end
> > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > IMPORTANT NOTICE: The contents of this email and any attachments are confidential and
> may also be privileged. If you are not the intended recipient, please notify the sender
> immediately and do not disclose the contents to any other person, use it for any purpose, or
> store or copy the information in any medium. Thank you.
> > >
> > >
> > >
> > > --
> > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > await thee at its end
> > > - "Use the force Harry" - Gandalf, Star Trek II
> > >
> > >
> > >
> >
> >
> > --
> > - Thou shalt not follow the NULL pointer, for chaos and madness await
> > thee at its end
> > - "Use the force Harry" - Gandalf, Star Trek II
> >
> >
> >
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-11 10:06                                                   ` Lance Yang
@ 2020-11-11 13:40                                                     ` Bruce Ashfield
  2020-11-12  7:04                                                       ` Lance Yang
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-11 13:40 UTC (permalink / raw)
  To: Lance Yang; +Cc: Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin

On Wed, Nov 11, 2020 at 5:07 AM Lance Yang <Lance.Yang@arm.com> wrote:
>
> Hi Bruce,
>
> Thanks for your reply. It took me a long time to address the $PATH issue.
>
> I found the cause after I cleaned my environment: I had set my own $PATH before k3s startup, which did not include "/bin", and some components in k3s use os.Getenv("PATH") to find the tools they need.
>
> Since some OSes may not have systemd, and the k3s-clean script does not seem to delete all interfaces created by CNI or unmount all mount points related to k3s, I wrote a script for this situation. I will send it in a separate email, based on your previous scripts.
>

Sounds good.

You'll notice that I pushed an update to the k3s-wip branch; things
are a bit closer to working there now.

I am interested to hear if flannel crashes if you (or anyone else)
install (and use) the k3s-agent on the same node as the k3s-server.

As soon as you run any k3s-agent command (with the proper token and
local host as the server), the node moves into NotReady with what
looks like a CNI error.

I fetched and ran the binaries directly from rancher, and they showed
the same behaviour as the meta-virt ones, so it isn't something
fundamental with the integration.

Bruce

> Best Regards,
> Lance
> > -----Original Message-----
> > From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > Sent: Tuesday, November 10, 2020 9:35 PM
> > To: Bruce Ashfield <bruce.ashfield@gmail.com>
> > Cc: Lance Yang <Lance.Yang@arm.com>; Joakim Roubert <joakim.roubert@axis.com>; meta-
> > virtualization@yoctoproject.org; Michael Zhao <Michael.Zhao@arm.com>; Kaly Xin
> > <Kaly.Xin@arm.com>
> > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> >
> > On Tue, Nov 10, 2020 at 8:17 AM Bruce Ashfield via lists.yoctoproject.org
> > <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > >
> > > On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via
> > > lists.yoctoproject.org
> > > <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > > >
> > > > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > > > >
> > > > > Hi Bruce and Joakim,
> > > > >
> > > > > > Thanks for sharing this branch: k3s-wip. I have tested it against my Yocto build.
> > > >
> > > > > The branch will be more functional shortly; I have quite a few
> > > > > changes that factor things for k8s and make it generally more usable :D
> > > >
> > > >         modified:   classes/cni_networking.bbclass
> > > >         modified:   conf/layer.conf
> > > >         modified:   recipes-containers/containerd/containerd-docker_git.bb
> > > >         modified:
> > > > recipes-containers/containerd/containerd-opencontainers_git.bb
> > > >         modified:   recipes-containers/k3s/README.md
> > > >         modified:   recipes-containers/k3s/k3s_git.bb
> > > >         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
> > > >         modified:   recipes-networking/cni/cni_git.bb
> > > >         container-deploy.txt
> > > >         recipes-core/packagegroups/
> > > >
> > > > >
> > > > > My image: Linux qemuarm64, built with Yocto.
> > > > >
> > > > > The master node becomes ready after I start the k3s server. However, the pods in kube-
> > system (which are essential components for k3s) cannot reach the ready state on qemuarm64.
> > > > >
> > > >
> > > > That's interesting, since in my configuration, the master never comes ready:
> > > >
> > > > root@qemux86-64:~# kubectl get nodes
> > > > NAME         STATUS     ROLES    AGE   VERSION
> > > > qemux86-64   NotReady   master   15h   v1.18.9-k3s1
> > > >
> > >
> > > Hah.
> > >
> > > I finally got the node to show up as ready:
> > >
> > > root@qemux86-64:~# kubectl get nodes
> > > NAME         STATUS   ROLES    AGE    VERSION
> > > qemux86-64   Ready    master   112s   v1.18.9-k3s1
> > >
> >
> > Lance,
> >
> > What image type were you building? I'm pulling in dependencies to packagegroups and the
> > recipes themselves.
> >
> > I'm not seeing the mount issue on my master/server node:
> >
> > root@qemux86-64:~# kubectl get pods -n kube-system
> > NAME                                     READY   STATUS      RESTARTS   AGE
> > local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          3m32s
> > metrics-server-7566d596c8-mwntr          1/1     Running     0          3m32s
> > helm-install-traefik-229v7               0/1     Completed   0          3m32s
> > coredns-7944c66d8d-9rfj7                 1/1     Running     0          3m32s
> > svclb-traefik-pb5j4                      2/2     Running     0          2m29s
> > traefik-758cd5fc85-lxpr8                 1/1     Running     0          2m29s
> >
> > I'm going back to all-in-one node debugging, but can look into the mount issue more later.
> >
> > Bruce
> >
> > > I'm attempting to build an all-in-one node, and that is likely causing
> > > me some issues.
> > >
> > > I'm revisiting those potential conflicts now.
> > >
> > > > But if anyone else does have an all-in-one working and has some
> > > feel free to share :D
> > >
> > > Bruce
> > >
> > > > I've sorted out more of the dependencies, and have packagegroups to
> > > > make them easier now.
> > > >
> > > > Hopefully, I can figure out what is now missing and keeping my
> > > > master from moving into ready today.
> > > >
> > > > Bruce
> > > >
> > > > > After the master node itself turned to the ready state, I checked the pods with kubectl:
> > > > >
> > > > > kubectl get nodes
> > > > > NAME        STATUS   ROLES    AGE   VERSION
> > > > > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > > > > root@qemuarm64:~# ls
> > > > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > > > >
> > > > > Then I described the pods and saw these events:
> > > > >
> > > > > Events:
> > > > >   Type     Reason       Age                  From               Message
> > > > >   ----     ------       ----                 ----               -------
> > > > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-
> > system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > > > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount
> > volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-
> > b7nlh config-volume]: timed out waiting for the condition
> > > > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount
> > volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume
> > coredns-token-b7nlh]: timed out waiting for the condition
> > > > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed
> > for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in
> > $PATH
> > > > >
> > > > > I found the "mount" binary is not found in $PATH. However, I confirmed the $PATH and
> > mount binary on my qemuarm64 image:
> > > > >
> > > > > root@qemuarm64:~# echo $PATH
> > > > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > root@qemuarm64:~# which mount
> > > > > /bin/mount
> > > > >
> > > > > When I typed the mount command, it worked fine:
> > > > >
> > > > > /dev/root on / type ext4 (rw,relatime) devtmpfs on /dev type
> > > > > devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > > > proc on /proc type proc (rw,relatime) sysfs on /sys type sysfs
> > > > > (rw,relatime) debugfs on /sys/kernel/debug type debugfs
> > > > > (rw,relatime) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> > > > > ...
> > > > > ... (skipped the verbose output)
> > > > >
> > > > > I would like to know whether you have ever encountered this "mount" issue.
> > > > >
> > > > > Best Regards,
> > > > > Lance
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: meta-virtualization@lists.yoctoproject.org
> > > > > > <meta-virtualization@lists.yoctoproject.org>
> > > > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > > > > Cc: meta-virtualization@yoctoproject.org
> > > > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > > > > >
> > > > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > > > > > >
> > > > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > > > Ha!!!!
> > > > > > > >
> > > > > > > > This applies.
> > > > > > >
> > > > > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > > > > >
> > > > > > > > I'm now testing and completing some of my networking
> > > > > > > > factoring, as well as importing / forking some recipes to
> > > > > > > > avoid extra layer depends.
> > > > > > >
> > > > > > > Excellent!
> > > > > >
> > > > > > I've pushed some of my WIP to:
> > > > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/l
> > > > > > og/?h=k3s-wip
> > > > > >
> > > > > > That includes the split of the networking, the import of some of
> > > > > > the dependencies and some small tweaks I'm working on.
> > > > > >
> > > > > > I did have a couple of questions on the k3s packaging itself; I
> > > > > > was getting the following
> > > > > > error:
> > > > > >
> > > > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > > > Files/directories were installed but not shipped in any package:
> > > > > >   /usr/local/bin/k3s-clean
> > > > > >   /usr/local/bin/crictl
> > > > > >   /usr/local/bin/kubectl
> > > > > >   /usr/local/bin/k3s
> > > > > >
> > > > > > So I added them to the FILES of the k3s package itself (so both
> > > > > > k3s-server and k3s-agent will get them), is that the split you were looking for ?
> > > > > >
> > > > > > Bruce
> > > > > >
> > > > > > >
> > > > > > > BR,
> > > > > > >
> > > > > > > /Joakim
> > > > > > > --
> > > > > > > Joakim Roubert
> > > > > > > Senior Engineer
> > > > > > >
> > > > > > > Axis Communications AB
> > > > > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > > > >
> > > > > > --
> > > > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > > > await thee at its end
> > > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > >
> > > >
> > > >
> > > > --
> > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > await thee at its end
> > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > - Thou shalt not follow the NULL pointer, for chaos and madness await
> > > thee at its end
> > > - "Use the force Harry" - Gandalf, Star Trek II
> > >
> > >
> > >
> >
> >
> > --
> > - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> > - "Use the force Harry" - Gandalf, Star Trek II



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-11 13:40                                                     ` Bruce Ashfield
@ 2020-11-12  7:04                                                       ` Lance Yang
  2020-11-12 13:40                                                         ` Bruce Ashfield
  2020-11-17 14:13                                                         ` Joakim Roubert
  0 siblings, 2 replies; 73+ messages in thread
From: Lance Yang @ 2020-11-12  7:04 UTC (permalink / raw)
  To: Bruce Ashfield
  Cc: Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin, nd

Hi Bruce,

On my side, flannel did not crash.

I started the k3s server without an agent and confirmed the node was ready. Then I copied the node_token, ran the k3s agent with that token, and specified the server URL. After that, I saw the node register successfully.

The traefik and coredns pods went not-ready for a short while after I started the k3s agent:
kube-system   traefik-758cd5fc85-8tqhf                 0/1     Running   0          24m
kube-system   coredns-7944c66d8d-584jt                 0/1     Running   0          21m

but they soon returned to the ready state:
kube-system   coredns-7944c66d8d-584jt                 1/1     Running   0          21m
kube-system   traefik-758cd5fc85-8tqhf                 1/1     Running   0          24m

I did not see any errors related to a flannel crash. Hope this helps a little.

Best Regards,
Lance
> -----Original Message-----
> From: Bruce Ashfield <bruce.ashfield@gmail.com>
> Sent: Wednesday, November 11, 2020 9:40 PM
> To: Lance Yang <Lance.Yang@arm.com>
> Cc: Joakim Roubert <joakim.roubert@axis.com>; meta-virtualization@yoctoproject.org; Michael
> Zhao <Michael.Zhao@arm.com>; Kaly Xin <Kaly.Xin@arm.com>
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On Wed, Nov 11, 2020 at 5:07 AM Lance Yang <Lance.Yang@arm.com> wrote:
> >
> > Hi Bruce,
> >
> > Thanks for your reply. It took me a long time to address the $PATH issue.
> >
> > I found the cause after I cleaned my environment: I had set my own $PATH before k3s
> startup, which did not include "/bin", and some components in k3s use os.Getenv("PATH")
> to find the tools they need.
> >
> > Since some OSes may not have systemd, and the k3s-clean script does not seem to delete all
> interfaces created by CNI or unmount all mount points related to k3s, I wrote a script for this
> situation. I will send it in a separate email, based on your previous scripts.
> >
>
> Sounds good.
>
> You'll notice that I pushed an update to the k3s-wip branch; things are a bit closer to working
> there now.
>
> I am interested to hear if flannel crashes if you (or anyone else) install (and use) the k3s-agent
> on the same node as the k3s-server.
>
> As soon as you run any k3s-agent command (with the proper token and local host as the server),
> the node moves into NotReady with what looks like a CNI error.
>
> I fetched and ran the binaries directly from rancher, and they showed the same behaviour as the
> meta-virt ones, so it isn't something fundamental with the integration.
>
> Bruce
>
> > Best Regards,
> > Lance
> > > -----Original Message-----
> > > From: meta-virtualization@lists.yoctoproject.org
> > > <meta-virtualization@lists.yoctoproject.org>
> > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > Sent: Tuesday, November 10, 2020 9:35 PM
> > > To: Bruce Ashfield <bruce.ashfield@gmail.com>
> > > Cc: Lance Yang <Lance.Yang@arm.com>; Joakim Roubert
> > > <joakim.roubert@axis.com>; meta- virtualization@yoctoproject.org;
> > > Michael Zhao <Michael.Zhao@arm.com>; Kaly Xin <Kaly.Xin@arm.com>
> > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > >
> > > On Tue, Nov 10, 2020 at 8:17 AM Bruce Ashfield via
> > > lists.yoctoproject.org <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > > >
> > > > On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via
> > > > lists.yoctoproject.org
> > > > <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > > > >
> > > > > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > > > > >
> > > > > > Hi Bruce and Joakim,
> > > > > >
> > > > > > Thanks for sharing this branch: k3s-wip. I have tested it against my Yocto build.
> > > > >
> > > > > The branch will be more functional shortly; I have quite a few
> > > > > changes that factor things for k8s and make it generally more usable :D
> > > > >
> > > > >         modified:   classes/cni_networking.bbclass
> > > > >         modified:   conf/layer.conf
> > > > >         modified:   recipes-containers/containerd/containerd-docker_git.bb
> > > > >         modified:
> > > > > recipes-containers/containerd/containerd-opencontainers_git.bb
> > > > >         modified:   recipes-containers/k3s/README.md
> > > > >         modified:   recipes-containers/k3s/k3s_git.bb
> > > > >         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
> > > > >         modified:   recipes-networking/cni/cni_git.bb
> > > > >         container-deploy.txt
> > > > >         recipes-core/packagegroups/
> > > > >
> > > > > >
> > > > > > My image: Linux qemuarm64, built with Yocto.
> > > > > >
> > > > > > The master node becomes ready after I start the k3s server.
> > > > > > However, the pods in kube-
> > > system (which are essential components for k3s) cannot reach the ready state on qemuarm64.
> > > > > >
> > > > >
> > > > > That's interesting, since in my configuration, the master never comes ready:
> > > > >
> > > > > root@qemux86-64:~# kubectl get nodes
> > > > > NAME         STATUS     ROLES    AGE   VERSION
> > > > > qemux86-64   NotReady   master   15h   v1.18.9-k3s1
> > > > >
> > > >
> > > > Hah.
> > > >
> > > > I finally got the node to show up as ready:
> > > >
> > > > root@qemux86-64:~# kubectl get nodes
> > > > NAME         STATUS   ROLES    AGE    VERSION
> > > > qemux86-64   Ready    master   112s   v1.18.9-k3s1
> > > >
> > >
> > > Lance,
> > >
> > > What image type were you building? I'm pulling in dependencies to
> > > packagegroups and the recipes themselves.
> > >
> > > I'm not seeing the mount issue on my master/server node:
> > >
> > > root@qemux86-64:~# kubectl get pods -n kube-system
> > > NAME                                     READY   STATUS      RESTARTS   AGE
> > > local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          3m32s
> > > metrics-server-7566d596c8-mwntr          1/1     Running     0          3m32s
> > > helm-install-traefik-229v7               0/1     Completed   0          3m32s
> > > coredns-7944c66d8d-9rfj7                 1/1     Running     0          3m32s
> > > svclb-traefik-pb5j4                      2/2     Running     0          2m29s
> > > traefik-758cd5fc85-lxpr8                 1/1     Running     0          2m29s
> > >
> > > I'm going back to all-in-one node debugging, but can look into the mount issue more later.
> > >
> > > Bruce
> > >
> > > > I'm attempting to build an all-in-one node, and that is likely
> > > > causing me some issues.
> > > >
> > > > I'm revisiting those potential conflicts now.
> > > >
> > > > But if anyone else does have an all-in-one working and has some
> > > > tips, feel free to share :D
> > > >
> > > > Bruce
> > > >
> > > > > I've sorted out more of the dependencies, and have packagegroups
> > > > > to make them easier now.
> > > > >
> > > > > Hopefully, I can figure out what is now missing and keeping my
> > > > > master from moving into ready today.
> > > > >
> > > > > Bruce
> > > > >
> > > > > > After the master node itself turned to the ready state, I checked the pods with kubectl:
> > > > > >
> > > > > > kubectl get nodes
> > > > > > NAME        STATUS   ROLES    AGE   VERSION
> > > > > > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > > > > > root@qemuarm64:~# ls
> > > > > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > > > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > > > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > > > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > > > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > > > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > > > > >
> > > > > > Then I described the pods and saw these events:
> > > > > >
> > > > > > Events:
> > > > > >   Type     Reason       Age                  From               Message
> > > > > >   ----     ------       ----                 ----               -------
> > > > > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-
> > > system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > > > > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or
> mount
> > > volumes: unmounted volumes=[coredns-token-b7nlh], unattached
> > > volumes=[coredns-token- b7nlh config-volume]: timed out waiting for
> > > the condition
> > > > > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount
> > > volumes: unmounted volumes=[coredns-token-b7nlh], unattached
> > > volumes=[config-volume
> > > coredns-token-b7nlh]: timed out waiting for the condition
> > > > > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp
> failed
> > > for volume "coredns-token-b7nlh" : mount failed: exec: "mount":
> > > executable file not found in $PATH
> > > > > >
> > > > > > I found the "mount" binary is not found in $PATH. However, I
> > > > > > confirmed the $PATH and
> > > mount binary on my qemuarm64 image:
> > > > > >
> > > > > > root@qemuarm64:~# echo $PATH
> > > > > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > root@qemuarm64:~# which mount
> > > > > > /bin/mount
> > > > > >
> > > > > > When I typed the mount command, it worked fine:
> > > > > >
> > > > > > /dev/root on / type ext4 (rw,relatime) devtmpfs on /dev type
> > > > > > devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > > > > proc on /proc type proc (rw,relatime) sysfs on /sys type sysfs
> > > > > > (rw,relatime) debugfs on /sys/kernel/debug type debugfs
> > > > > > (rw,relatime) tmpfs on /run type tmpfs
> > > > > > (rw,nosuid,nodev,mode=755) ...
> > > > > > ... (skipped the verbose output)
> > > > > >
> > > > > > I would like to know whether you have ever encountered this "mount" issue.
> > > > > >
> > > > > > Best Regards,
> > > > > > Lance
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: meta-virtualization@lists.yoctoproject.org
> > > > > > > <meta-virtualization@lists.yoctoproject.org>
> > > > > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > > > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > > > > > Cc: meta-virtualization@yoctoproject.org
> > > > > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s
> > > > > > > recipe
> > > > > > >
> > > > > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > > > > > > >
> > > > > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > > > > Ha!!!!
> > > > > > > > >
> > > > > > > > > This applies.
> > > > > > > >
> > > > > > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > > > > > >
> > > > > > > > > I'm now testing and completing some of my networking
> > > > > > > > > factoring, as well as importing / forking some recipes
> > > > > > > > > to avoid extra layer depends.
> > > > > > > >
> > > > > > > > Excellent!
> > > > > > >
> > > > > > > I've pushed some of my WIP to:
> > > > > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualizati
> > > > > > > on/l
> > > > > > > og/?h=k3s-wip
> > > > > > >
> > > > > > > That includes the split of the networking, the import of
> > > > > > > some of the dependencies and some small tweaks I'm working on.
> > > > > > >
> > > > > > > I did have a couple of questions on the k3s packaging
> > > > > > > itself; I was getting the following
> > > > > > > error:
> > > > > > >
> > > > > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > > > > Files/directories were installed but not shipped in any package:
> > > > > > >   /usr/local/bin/k3s-clean
> > > > > > >   /usr/local/bin/crictl
> > > > > > >   /usr/local/bin/kubectl
> > > > > > >   /usr/local/bin/k3s
> > > > > > >
> > > > > > > So I added them to the FILES of the k3s package itself (so
> > > > > > > both k3s-server and k3s-agent will get them), is that the split you were looking for?
> > > > > > >
> > > > > > > Bruce
> > > > > > >
> > > > > > > >
> > > > > > > > BR,
> > > > > > > >
> > > > > > > > /Joakim
> > > > > > > > --
> > > > > > > > Joakim Roubert
> > > > > > > > Senior Engineer
> > > > > > > >
> > > > > > > > Axis Communications AB
> > > > > > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > > > > >
> > > > > > > --
> > > > > > > - Thou shalt not follow the NULL pointer, for chaos and
> > > > > > > madness await thee at its end
> > > > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > > > IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > > await thee at its end
> > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > await thee at its end
> > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > await thee at its end
> > > - "Use the force Harry" - Gandalf, Star Trek II
>
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-10 13:17                                               ` Bruce Ashfield
@ 2020-11-12  7:30                                                 ` Lance Yang
  2020-11-12 13:38                                                   ` Bruce Ashfield
  2020-11-12 13:43                                                   ` [meta-virtualization][PATCH v5] Adding k3s recipe Joakim Roubert
  2020-11-12 13:40                                                 ` Joakim Roubert
  1 sibling, 2 replies; 73+ messages in thread
From: Lance Yang @ 2020-11-12  7:30 UTC (permalink / raw)
  To: bruce.ashfield
  Cc: Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin

Hi Bruce,

I pulled your latest k3s-wip branch. In your recipes-networking, the ipset recipe reads:

cat ipset_6.38.bb
# Copyright (C) 2017 Aaron Brice <aaron.brice@datasoft.com>
# Released under the MIT license (see COPYING.MIT for the terms)

...
... (skip verbose output)

DEPENDS = "libtool libmnl"
RDEPENDS_${PN} = "kernel-module-ip-set"
...

I used bitbake to build k3s and ipset, but I got the errors below:

Problem 1: conflicting requests
  - nothing provides kernel-module-ip-set needed by ipset-6.38-r0

I think RDEPENDS_${PN} is a slightly over-strict condition in some situations. For example:

In my build/conf/local.conf, I added "kernel-modules" to IMAGE_INSTALL_append. kernel-module-ip-set is already included in kernel-modules, which I verified in the qemuarm64 image with "lsmod | grep ip_set". But if RDEPENDS is used, bitbake will not continue to build the package unless "kernel-module-ip-set" is added explicitly.

So I suggest using RRECOMMENDS_${PN} instead.
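The change I have in mind would look like this (a sketch against the ipset_6.38.bb quoted above; RRECOMMENDS still pulls the module into the image when it is available, but does not fail the build when the module arrives via the kernel-modules package instead):

```bitbake
# ipset_6.38.bb (sketch): soften the hard runtime dependency on the
# ip_set kernel module, so images that ship it via the kernel-modules
# package still resolve.
DEPENDS = "libtool libmnl"

# was: RDEPENDS_${PN} = "kernel-module-ip-set"
RRECOMMENDS_${PN} = "kernel-module-ip-set"
```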

Best Regards,
Lance
> -----Original Message-----
> From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> Sent: Tuesday, November 10, 2020 9:17 PM
> To: Bruce Ashfield <bruce.ashfield@gmail.com>
> Cc: Lance Yang <Lance.Yang@arm.com>; Joakim Roubert <joakim.roubert@axis.com>; meta-
> virtualization@yoctoproject.org; Michael Zhao <Michael.Zhao@arm.com>; Kaly Xin
> <Kaly.Xin@arm.com>
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via lists.yoctoproject.org
> <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> >
> > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > >
> > > Hi Bruce and Joakim,
> > >
> > > Thanks for sharing this branch: k3s-wip. I have tested against my yocto build.
> >
> > The branch will be more functional shortly, I have quite a few changes
> > to factor things for k8s and generally more usable :D
> >
> >         modified:   classes/cni_networking.bbclass
> >         modified:   conf/layer.conf
> >         modified:   recipes-containers/containerd/containerd-docker_git.bb
> >         modified:
> > recipes-containers/containerd/containerd-opencontainers_git.bb
> >         modified:   recipes-containers/k3s/README.md
> >         modified:   recipes-containers/k3s/k3s_git.bb
> >         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
> >         modified:   recipes-networking/cni/cni_git.bb
> >         container-deploy.txt
> >         recipes-core/packagegroups/
> >
> > >
> > > My Image: Linux qemuarm64 by yocto.
> > >
> > > The master node can be ready after I started the k3s server. However, the pods in kube-
> system (which are essential components for k3s) cannot turn to ready state on qemuarm64.
> > >
> >
> > That's interesting, since in my configuration, the master never comes ready:
> >
> > root@qemux86-64:~# kubectl get nodes
> > NAME         STATUS     ROLES    AGE   VERSION
> > qemux86-64   NotReady   master   15h   v1.18.9-k3s1
> >
>
> Hah.
>
> I finally got the node to show up as ready:
>
> root@qemux86-64:~# kubectl get nodes
> NAME         STATUS   ROLES    AGE    VERSION
> qemux86-64   Ready    master   112s   v1.18.9-k3s1
>
> I'm attempting to build an all-in-one node, and that is likely causing me some issues.
>
> I'm revisiting those potential conflicts now.
>
> But if anyone else does have an all in one working and has some tips, feel free to share :D
>
> Bruce
>
> > I've sorted out more of the dependencies, and have packagegroups to
> > make them easier now.
> >
> > Hopefully, I can figure out what is now missing and keeping my master
> > from moving into ready today.
> >
> > Bruce
> >
> > > After the master node itself turned to ready state, I check the pods with kubectl:
> > >
> > > kubectl get nodes
> > > NAME        STATUS   ROLES    AGE   VERSION
> > > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > > root@qemuarm64:~# ls
> > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > >
> > > Then I describe the pods with:
> > >
> > > Events:
> > >   Type     Reason       Age                  From               Message
> > >   ----     ------       ----                 ----               -------
> > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-
> system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount
> volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-
> b7nlh config-volume]: timed out waiting for the condition
> > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount
> volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume
> coredns-token-b7nlh]: timed out waiting for the condition
> > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for
> volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH
> > >
> > > I found the "mount" binary is not found in $PATH. However, I confirmed the $PATH and
> mount binary on my qemuarm64 image:
> > >
> > > root@qemuarm64:~# echo $PATH
> > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > root@qemuarm64:~# which mount
> > > /bin/mount
> > >
> > > When I type mount command, it worked fine:
> > >
> > > /dev/root on / type ext4 (rw,relatime) devtmpfs on /dev type
> > > devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > proc on /proc type proc (rw,relatime) sysfs on /sys type sysfs
> > > (rw,relatime) debugfs on /sys/kernel/debug type debugfs
> > > (rw,relatime) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> > > ...
> > > ... (skipped the verbose output)
> > >
> > > I would like to know whether you have ever run into this "mount" issue.
> > >
> > > Best Regards,
> > > Lance
> > >
> > > > -----Original Message-----
> > > > From: meta-virtualization@lists.yoctoproject.org
> > > > <meta-virtualization@lists.yoctoproject.org>
> > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > > Cc: meta-virtualization@yoctoproject.org
> > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > > >
> > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > > > >
> > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > Ha!!!!
> > > > > >
> > > > > > This applies.
> > > > >
> > > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > > >
> > > > > > I'm now testing and completing some of my networking
> > > > > > factoring, as well as importing / forking some recipes to
> > > > > > avoid extra layer depends.
> > > > >
> > > > > Excellent!
> > > >
> > > > I've pushed some of my WIP to:
> > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
> > > >
> > > > That includes the split of the networking, the import of some of
> > > > the dependencies and some small tweaks I'm working on.
> > > >
> > > > I did have a couple of questions on the k3s packaging itself, I
> > > > was getting the following
> > > > error:
> > > >
> > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > Files/directories were installed but not shipped in any package:
> > > >   /usr/local/bin/k3s-clean
> > > >   /usr/local/bin/crictl
> > > >   /usr/local/bin/kubectl
> > > >   /usr/local/bin/k3s
> > > >
> > > > So I added them to the FILES of the k3s package itself (so both
> > > > k3s-server and k3s-agent will get them), is that the split you were looking for ?
> > > >
> > > > Bruce
> > > >
> > > > >
> > > > > BR,
> > > > >
> > > > > /Joakim
> > > > > --
> > > > > Joakim Roubert
> > > > > Senior Engineer
> > > > >
> > > > > Axis Communications AB
> > > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > >
> > > > --
> > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > await thee at its end
> > > > - "Use the force Harry" - Gandalf, Star Trek II
> >
> >
> >
> > --
> > - Thou shalt not follow the NULL pointer, for chaos and madness await
> > thee at its end
> > - "Use the force Harry" - Gandalf, Star Trek II
> >
> >
> >
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-12  7:30                                                 ` Lance Yang
@ 2020-11-12 13:38                                                   ` Bruce Ashfield
  2020-11-12 14:26                                                     ` [meta-virtualization][PATCH] k3s: Update README.md Joakim Roubert
                                                                       ` (2 more replies)
  2020-11-12 13:43                                                   ` [meta-virtualization][PATCH v5] Adding k3s recipe Joakim Roubert
  1 sibling, 3 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-12 13:38 UTC (permalink / raw)
  To: Lance Yang; +Cc: Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin

On Thu, Nov 12, 2020 at 2:30 AM Lance Yang <Lance.Yang@arm.com> wrote:
>
> Hi Bruce,
>
> I pulled your latest k3s-wip branch. In your recipes-networking, the ipset:
>
> cat ipset_6.38.bb
> # Copyright (C) 2017 Aaron Brice <aaron.brice@datasoft.com>
> # Released under the MIT license (see COPYING.MIT for the terms)
>
> ...
> ... (skip verbose output)
>
> DEPENDS = "libtool libmnl"
> RDEPENDS_${PN} = "kernel-module-ip-set"
> ...
>
> I used bitbake to build k3s and ipset, but I got the errors below:
>
> Problem 1: conflicting requests
>   - nothing provides kernel-module-ip-set needed by ipset-6.38-r0
>
> I think RDEPENDS_${PN} is a slightly over-strict condition in some situations. For example:
>
> In my build/conf/local.conf, I added "kernel-modules" to IMAGE_INSTALL_append. kernel-module-ip-set is already included in kernel-modules, which I verified in the qemuarm64 image with "lsmod | grep ip_set". But if RDEPENDS is used, bitbake will not continue to build the package unless "kernel-module-ip-set" is added explicitly.
>
> So I suggest using RRECOMMENDS_${PN} instead.

That's already the case in the forked recipe (RDEPENDS), and there are
significant issues with getting the right kernel configuration, so
I'm going to leave it as an RDEPENDS: if you do not have ip_set as a
module, it means you aren't using the linux-yocto reference configs
(or the reference configs are broken), and there may be other
issues in your configuration.
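For reference, the kernel side of this would be a fragment along the following lines (a sketch; the actual kubernetes.cfg in the tree is authoritative):

```
# kernel config fragment (sketch): build ip_set as a module so that
# the kernel-module-ip-set package exists for ipset's RDEPENDS.
CONFIG_IP_SET=m
```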

Bruce

>
> Best Regards,
> Lance
> > -----Original Message-----
> > From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org>
> > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > Sent: Tuesday, November 10, 2020 9:17 PM
> > To: Bruce Ashfield <bruce.ashfield@gmail.com>
> > Cc: Lance Yang <Lance.Yang@arm.com>; Joakim Roubert <joakim.roubert@axis.com>; meta-
> > virtualization@yoctoproject.org; Michael Zhao <Michael.Zhao@arm.com>; Kaly Xin
> > <Kaly.Xin@arm.com>
> > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> >
> > On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via lists.yoctoproject.org
> > <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > >
> > > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > > >
> > > > Hi Bruce and Joakim,
> > > >
> > > > Thanks for sharing this branch: k3s-wip. I have tested against my yocto build.
> > >
> > > The branch will be more functional shortly, I have quite a few changes
> > > to factor things for k8s and generally more usable :D
> > >
> > >         modified:   classes/cni_networking.bbclass
> > >         modified:   conf/layer.conf
> > >         modified:   recipes-containers/containerd/containerd-docker_git.bb
> > >         modified:
> > > recipes-containers/containerd/containerd-opencontainers_git.bb
> > >         modified:   recipes-containers/k3s/README.md
> > >         modified:   recipes-containers/k3s/k3s_git.bb
> > >         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
> > >         modified:   recipes-networking/cni/cni_git.bb
> > >         container-deploy.txt
> > >         recipes-core/packagegroups/
> > >
> > > >
> > > > My Image: Linux qemuarm64 by yocto.
> > > >
> > > > The master node can be ready after I started the k3s server. However, the pods in kube-
> > system (which are essential components for k3s) cannot turn to ready state on qemuarm64.
> > > >
> > >
> > > That's interesting, since in my configuration, the master never comes ready:
> > >
> > > root@qemux86-64:~# kubectl get nodes
> > > NAME         STATUS     ROLES    AGE   VERSION
> > > qemux86-64   NotReady   master   15h   v1.18.9-k3s1
> > >
> >
> > Hah.
> >
> > I finally got the node to show up as ready:
> >
> > root@qemux86-64:~# kubectl get nodes
> > NAME         STATUS   ROLES    AGE    VERSION
> > qemux86-64   Ready    master   112s   v1.18.9-k3s1
> >
> > I'm attempting to build an all-in-one node, and that is likely causing me some issues.
> >
> > I'm revisiting those potential conflicts now.
> >
> > But if anyone else does have an all in one working and has some tips, feel free to share :D
> >
> > Bruce
> >
> > > I've sorted out more of the dependencies, and have packagegroups to
> > > make them easier now.
> > >
> > > Hopefully, I can figure out what is now missing and keeping my master
> > > from moving into ready today.
> > >
> > > Bruce
> > >
> > > > After the master node itself turned to ready state, I check the pods with kubectl:
> > > >
> > > > kubectl get nodes
> > > > NAME        STATUS   ROLES    AGE   VERSION
> > > > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > > > root@qemuarm64:~# ls
> > > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > > >
> > > > Then I describe the pods with:
> > > >
> > > > Events:
> > > >   Type     Reason       Age                  From               Message
> > > >   ----     ------       ----                 ----               -------
> > > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-
> > system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount
> > volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-
> > b7nlh config-volume]: timed out waiting for the condition
> > > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount
> > volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume
> > coredns-token-b7nlh]: timed out waiting for the condition
> > > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for
> > volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH
> > > >
> > > > I found the "mount" binary is not found in $PATH. However, I confirmed the $PATH and
> > mount binary on my qemuarm64 image:
> > > >
> > > > root@qemuarm64:~# echo $PATH
> > > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > root@qemuarm64:~# which mount
> > > > /bin/mount
> > > >
> > > > When I type mount command, it worked fine:
> > > >
> > > > /dev/root on / type ext4 (rw,relatime) devtmpfs on /dev type
> > > > devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > > proc on /proc type proc (rw,relatime) sysfs on /sys type sysfs
> > > > (rw,relatime) debugfs on /sys/kernel/debug type debugfs
> > > > (rw,relatime) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> > > > ...
> > > > ... (skipped the verbose output)
> > > >
> > > > I would like to know whether you have ever run into this "mount" issue.
> > > >
> > > > Best Regards,
> > > > Lance
> > > >
> > > > > -----Original Message-----
> > > > > From: meta-virtualization@lists.yoctoproject.org
> > > > > <meta-virtualization@lists.yoctoproject.org>
> > > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > > > Cc: meta-virtualization@yoctoproject.org
> > > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > > > >
> > > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > > > > >
> > > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > > Ha!!!!
> > > > > > >
> > > > > > > This applies.
> > > > > >
> > > > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > > > >
> > > > > > > I'm now testing and completing some of my networking
> > > > > > > factoring, as well as importing / forking some recipes to
> > > > > > > avoid extra layer depends.
> > > > > >
> > > > > > Excellent!
> > > > >
> > > > > I've pushed some of my WIP to:
> > > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
> > > > >
> > > > > That includes the split of the networking, the import of some of
> > > > > the dependencies and some small tweaks I'm working on.
> > > > >
> > > > > I did have a couple of questions on the k3s packaging itself, I
> > > > > was getting the following
> > > > > error:
> > > > >
> > > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > > Files/directories were installed but not shipped in any package:
> > > > >   /usr/local/bin/k3s-clean
> > > > >   /usr/local/bin/crictl
> > > > >   /usr/local/bin/kubectl
> > > > >   /usr/local/bin/k3s
> > > > >
> > > > > So I added them to the FILES of the k3s package itself (so both
> > > > > k3s-server and k3s-agent will get them), is that the split you were looking for ?
> > > > >
> > > > > Bruce
> > > > >
> > > > > >
> > > > > > BR,
> > > > > >
> > > > > > /Joakim
> > > > > > --
> > > > > > Joakim Roubert
> > > > > > Senior Engineer
> > > > > >
> > > > > > Axis Communications AB
> > > > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > > >
> > > > > --
> > > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > > await thee at its end
> > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > >
> > >
> > >
> > > --
> > > - Thou shalt not follow the NULL pointer, for chaos and madness await
> > > thee at its end
> > > - "Use the force Harry" - Gandalf, Star Trek II
> > >
> > >
> > >
> >
> >
> > --
> > - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> > - "Use the force Harry" - Gandalf, Star Trek II



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-10 13:17                                               ` Bruce Ashfield
  2020-11-12  7:30                                                 ` Lance Yang
@ 2020-11-12 13:40                                                 ` Joakim Roubert
  1 sibling, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-11-12 13:40 UTC (permalink / raw)
  To: meta-virtualization

On 2020-11-10 14:17, Bruce Ashfield wrote:
>
> But if anyone else does have an all in one working and has some tips,
> feel free to share :D

When it comes to the mount issues that were mentioned here earlier, I
have seen those when I have had /var/lib/rancher/k3s mounted on
overlayfs file systems (which does not work when the containers are
unpacking their layers). Mounting another file system (e.g. ext4 or
something) on /var/lib/rancher/k3s has mitigated such errors.
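A minimal way to do that (a sketch; the block device name is hypothetical and depends on your setup) is to give the k3s state directory its own ext4 file system, e.g. via /etc/fstab:

```
# /etc/fstab (sketch): keep k3s container state off overlayfs, since
# layer unpacking fails there; /dev/vdb1 is a hypothetical ext4 partition.
/dev/vdb1  /var/lib/rancher/k3s  ext4  defaults  0  2
```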

BR,

/Joakim

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-12  7:04                                                       ` Lance Yang
@ 2020-11-12 13:40                                                         ` Bruce Ashfield
  2020-11-12 14:07                                                           ` Lance Yang
  2020-11-17 14:13                                                         ` Joakim Roubert
  1 sibling, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-12 13:40 UTC (permalink / raw)
  To: Lance Yang
  Cc: Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin, nd

On Thu, Nov 12, 2020 at 2:04 AM Lance Yang <Lance.Yang@arm.com> wrote:
>
> Hi Bruce,
>
> On my side, the flannel did not crash.
>
> I started the k3s server without an agent and confirmed the node was ready. Then I copied the node_token and ran k3s agent with that token, specifying the server URL. Then I could see the node registered successfully.

When you say 'copied', do you mean copied to a different machine ?

>
> The traefik and coredns pods were not ready for a short while after I started the k3s agent:
> kube-system   traefik-758cd5fc85-8tqhf                 0/1     Running   0          24m
> kube-system   coredns-7944c66d8d-584jt                 0/1     Running   0          21m
>
> but they soon were back to ready state:
> kube-system   coredns-7944c66d8d-584jt                 1/1     Running   0          21m
> kube-system   traefik-758cd5fc85-8tqhf                 1/1     Running   0          24m
>
> I did not see any errors related to a flannel crash. Hope this helps a little.
>

unfortunately .. no.

I need a single node configuration to work, before I can actually
merge the changes, since without it, I have no way to ensure that it
continues working.

Cheers,

Bruce

> Best Regards,
> Lance
> > -----Original Message-----
> > From: Bruce Ashfield <bruce.ashfield@gmail.com>
> > Sent: Wednesday, November 11, 2020 9:40 PM
> > To: Lance Yang <Lance.Yang@arm.com>
> > Cc: Joakim Roubert <joakim.roubert@axis.com>; meta-virtualization@yoctoproject.org; Michael
> > Zhao <Michael.Zhao@arm.com>; Kaly Xin <Kaly.Xin@arm.com>
> > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> >
> > On Wed, Nov 11, 2020 at 5:07 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > >
> > > Hi Bruce,
> > >
> > > Thanks for your reply. It took me a long time to address the $PATH issue.
> > >
> > > I found the cause after I cleaned my environment. it is because I set my own $PATH before k3s
> > startup which did not include "/bin" and some components in k3s will use "os.Getenv("PATH")
> > to search the related tool.
> > >
> > > Since some OS may not have systemd and the k3s-clean script seems not  to delete all
> > interfaces created by cni and umount all mount points related to k3s. I wrote a script for this
> > situation and I will send it through a separate email based on your previous scripts.
> > >
> >
> > Sounds good.
> >
> > You'll notice that I pushed an update to the k3s-wip branch, things are a bit closer to working
> > there now.
> >
> > I am interested to hear if flannel crashes if you (or anyone else) install (and use) the k3s-agent
> > on the same node as the k3s-server.
> >
> > As soon as you run any k3s-agent command (with the proper token and local host as the server),
> > the node moves into NotReady with what looks like a CNI error.
> >
> > I fetched and ran the binaries directly from rancher, and they showed the same behaviour as the
> > meta-virt ones, so it isn't something fundamental with the integration.
> >
> > Bruce
> >
> > > Best Regards,
> > > Lance
> > > > -----Original Message-----
> > > > From: meta-virtualization@lists.yoctoproject.org
> > > > <meta-virtualization@lists.yoctoproject.org>
> > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > Sent: Tuesday, November 10, 2020 9:35 PM
> > > > To: Bruce Ashfield <bruce.ashfield@gmail.com>
> > > > Cc: Lance Yang <Lance.Yang@arm.com>; Joakim Roubert
> > > > <joakim.roubert@axis.com>; meta- virtualization@yoctoproject.org;
> > > > Michael Zhao <Michael.Zhao@arm.com>; Kaly Xin <Kaly.Xin@arm.com>
> > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > > >
> > > > On Tue, Nov 10, 2020 at 8:17 AM Bruce Ashfield via
> > > > lists.yoctoproject.org <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > > > >
> > > > > On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via
> > > > > lists.yoctoproject.org
> > > > > <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > > > > >
> > > > > > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > > > > > >
> > > > > > > Hi Bruce and Joakim,
> > > > > > >
> > > > > > > Thanks for sharing this branch: k3s-wip. I have tested against my yocto build.
> > > > > >
> > > > > > The branch will be more functional shortly, I have quite a few
> > > > > > changes to factor things for k8s and generally more usable :D
> > > > > >
> > > > > >         modified:   classes/cni_networking.bbclass
> > > > > >         modified:   conf/layer.conf
> > > > > >         modified:   recipes-containers/containerd/containerd-docker_git.bb
> > > > > >         modified:
> > > > > > recipes-containers/containerd/containerd-opencontainers_git.bb
> > > > > >         modified:   recipes-containers/k3s/README.md
> > > > > >         modified:   recipes-containers/k3s/k3s_git.bb
> > > > > >         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
> > > > > >         modified:   recipes-networking/cni/cni_git.bb
> > > > > >         container-deploy.txt
> > > > > >         recipes-core/packagegroups/
> > > > > >
> > > > > > >
> > > > > > > My Image: Linux qemuarm64 by yocto.
> > > > > > >
> > > > > > > The master node can be ready after I started the k3s server.
> > > > > > > However, the pods in kube-
> > > > system (which are essential components for k3s) cannot turn to ready state on qemuarm64.
> > > > > > >
> > > > > >
> > > > > > That's interesting, since in my configuration, the master never comes ready:
> > > > > >
> > > > > > root@qemux86-64:~# kubectl get nodes
> > > > > > NAME         STATUS     ROLES    AGE   VERSION
> > > > > > qemux86-64   NotReady   master   15h   v1.18.9-k3s1
> > > > > >
> > > > >
> > > > > Hah.
> > > > >
> > > > > I finally got the node to show up as ready:
> > > > >
> > > > > root@qemux86-64:~# kubectl get nodes
> > > > > NAME         STATUS   ROLES    AGE    VERSION
> > > > > qemux86-64   Ready    master   112s   v1.18.9-k3s1
> > > > >
> > > >
> > > > Lance,
> > > >
> > > > What image type were you building ? I'm pulling in dependencies to
> > > > packagegroups and the recipes themselves.
> > > >
> > > > I'm not seeing the mount issue on my master/server node:
> > > >
> > > > root@qemux86-64:~# kubectl get pods -n kube-system
> > > > NAME                                     READY   STATUS      RESTARTS   AGE
> > > > local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          3m32s
> > > > metrics-server-7566d596c8-mwntr          1/1     Running     0          3m32s
> > > > helm-install-traefik-229v7               0/1     Completed   0          3m32s
> > > > coredns-7944c66d8d-9rfj7                 1/1     Running     0          3m32s
> > > > svclb-traefik-pb5j4                      2/2     Running     0          2m29s
> > > > traefik-758cd5fc85-lxpr8                 1/1     Running     0          2m29s
> > > >
> > > > I'm going back to all-in-one node debugging, but can look into the mount issue more later.
> > > >
> > > > Bruce
> > > >
> > > > > I'm attempting to build an all-in-one node, and that is likely
> > > > > causing me some issues.
> > > > >
> > > > > I'm revisiting those potential conflicts now.
> > > > >
> > > > > But if anyone else does have an all in one working and has some
> > > > > tips, feel free to share :D
> > > > >
> > > > > Bruce
> > > > >
> > > > > > I've sorted out more of the dependencies, and have packagegroups
> > > > > > to make them easier now.
> > > > > >
> > > > > > Hopefully, I can figure out what is now missing and keeping my
> > > > > > master from moving into ready today.
> > > > > >
> > > > > > Bruce
> > > > > >
> > > > > > > After the master node itself turned to ready state, I checked the pods with kubectl:
> > > > > > >
> > > > > > > kubectl get nodes
> > > > > > > NAME        STATUS   ROLES    AGE   VERSION
> > > > > > > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > > > > > > root@qemuarm64:~# ls
> > > > > > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > > > > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > > > > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > > > > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > > > > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > > > > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > > > > > >
> > > > > > > Then I describe the pods with:
> > > > > > >
> > > > > > > Events:
> > > > > > >   Type     Reason       Age                  From               Message
> > > > > > >   ----     ------       ----                 ----               -------
> > > > > > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > > > > > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-b7nlh config-volume]: timed out waiting for the condition
> > > > > > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume coredns-token-b7nlh]: timed out waiting for the condition
> > > > > > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH
> > > > > > >
> > > > > > > I found that the "mount" binary is not found in $PATH. However, I
> > > > > > > confirmed the $PATH and the mount binary on my qemuarm64 image:
> > > > > > >
> > > > > > > root@qemuarm64:~# echo $PATH
> > > > > > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > > root@qemuarm64:~# which mount
> > > > > > > /bin/mount
> > > > > > >
> > > > > > > When I type mount command, it worked fine:
> > > > > > >
> > > > > > > /dev/root on / type ext4 (rw,relatime) devtmpfs on /dev type
> > > > > > > devtmpfs (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > > > > > proc on /proc type proc (rw,relatime) sysfs on /sys type sysfs
> > > > > > > (rw,relatime) debugfs on /sys/kernel/debug type debugfs
> > > > > > > (rw,relatime) tmpfs on /run type tmpfs
> > > > > > > (rw,nosuid,nodev,mode=755) ...
> > > > > > > ... (skipped the verbose output)
> > > > > > >
> > > > > > > I would like to know whether you have ever encountered this "mount" issue.
> > > > > > >
> > > > > > > Best Regards,
> > > > > > > Lance
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: meta-virtualization@lists.yoctoproject.org
> > > > > > > > <meta-virtualization@lists.yoctoproject.org>
> > > > > > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > > > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > > > > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > > > > > > Cc: meta-virtualization@yoctoproject.org
> > > > > > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s
> > > > > > > > recipe
> > > > > > > >
> > > > > > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> > > > > > > > >
> > > > > > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > > > > > Ha!!!!
> > > > > > > > > >
> > > > > > > > > > This applies.
> > > > > > > > >
> > > > > > > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > > > > > > >
> > > > > > > > > > I'm now testing and completing some of my networking
> > > > > > > > > > factoring, as well as importing / forking some recipes
> > > > > > > > > > to avoid extra layer depends.
> > > > > > > > >
> > > > > > > > > Excellent!
> > > > > > > >
> > > > > > > > I've pushed some of my WIP to:
> > > > > > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
> > > > > > > >
> > > > > > > > That includes the split of the networking, the import of
> > > > > > > > some of the dependencies and some small tweaks I'm working on.
> > > > > > > >
> > > > > > > > I did have a couple of questions on the k3s packaging
> > > > > > > > itself, I was getting the following
> > > > > > > > error:
> > > > > > > >
> > > > > > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > > > > > Files/directories were installed but not shipped in any package:
> > > > > > > >   /usr/local/bin/k3s-clean
> > > > > > > >   /usr/local/bin/crictl
> > > > > > > >   /usr/local/bin/kubectl
> > > > > > > >   /usr/local/bin/k3s
> > > > > > > >
> > > > > > > > So I added them to the FILES of the k3s package itself (so
> > > > > > > > both k3s-server and k3s-agent will get them), is that the split you were looking for ?
> > > > > > > >
> > > > > > > > Bruce
> > > > > > > >
> > > > > > > > >
> > > > > > > > > BR,
> > > > > > > > >
> > > > > > > > > /Joakim
> > > > > > > > > --
> > > > > > > > > Joakim Roubert
> > > > > > > > > Senior Engineer
> > > > > > > > >
> > > > > > > > > Axis Communications AB
> > > > > > > > > Emdalavägen 14, SE-223 69 Lund, Sweden
> > > > > > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > > > > > >
> > > > > > > > --
> > > > > > > > - Thou shalt not follow the NULL pointer, for chaos and
> > > > > > > > madness await thee at its end
> > > > > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > > > > IMPORTANT NOTICE: The contents of this email and any
> > > > > > > attachments are confidential and
> > > > may also be privileged. If you are not the intended recipient,
> > > > please notify the sender immediately and do not disclose the
> > > > contents to any other person, use it for any purpose, or store or copy the information in any
> > medium. Thank you.
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > > > await thee at its end
> > > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > > await thee at its end
> > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > await thee at its end
> > > > - "Use the force Harry" - Gandalf, Star Trek II
> >
> >
> >
> > --
> > - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> > - "Use the force Harry" - Gandalf, Star Trek II



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-12  7:30                                                 ` Lance Yang
  2020-11-12 13:38                                                   ` Bruce Ashfield
@ 2020-11-12 13:43                                                   ` Joakim Roubert
  2020-11-13  5:48                                                     ` Lance Yang
  1 sibling, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-11-12 13:43 UTC (permalink / raw)
  To: Lance Yang; +Cc: meta-virtualization

On 2020-11-12 08:30, Lance Yang wrote:
> 
> Problem 1: conflicting requests
>    - nothing provides kernel-module-ip-set needed by ipset-6.38-r0

Do you have

CONFIG_IP_SET=m

in your kernel config?

BR,

/Joakim
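
For readers of the archive: the missing kernel-module-ip-set usually means
the kernel was configured without ipset support. A minimal sketch of wiring
the CONFIG_IP_SET option in via a linux-yocto kernel configuration fragment
(the fragment file name and bbappend layout here are illustrative, not taken
from this thread; the k3s-wip branch carries its settings in kubernetes.cfg):

```shell
# ipset.cfg -- kernel configuration fragment (illustrative name)
CONFIG_IP_SET=m

# linux-yocto_%.bbappend -- hook the fragment into the kernel build
# (2020-era underscore override syntax)
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://ipset.cfg"
```

With the module built, the ipset package's kernel-module-ip-set dependency
can then be satisfied at rootfs assembly time.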

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-12 13:40                                                         ` Bruce Ashfield
@ 2020-11-12 14:07                                                           ` Lance Yang
  0 siblings, 0 replies; 73+ messages in thread
From: Lance Yang @ 2020-11-12 14:07 UTC (permalink / raw)
  To: Bruce Ashfield
  Cc: Joakim Roubert, meta-virtualization, Michael Zhao, Kaly Xin, nd

Hi Bruce,

No, it is not. I just used the command:

k3s agent --token (copied token) --server https://localhost:6443

Best Regards,
Lance
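
For anyone reproducing this from the archive: the agent invocation above can
be wrapped in a small POSIX-shell helper that reads the token from the
server's default location (the helper name and TOKEN_FILE/SERVER_URL
defaults are illustrative, not part of the k3s recipe):

```shell
#!/bin/sh
# Build the `k3s agent` join command from the server's node-token file.
# Assumes the default k3s token path; override via TOKEN_FILE/SERVER_URL.
TOKEN_FILE="${TOKEN_FILE:-/var/lib/rancher/k3s/server/node-token}"
SERVER_URL="${SERVER_URL:-https://localhost:6443}"

k3s_join_cmd() {
    # $1: token file, $2: server URL; prints the agent command to stdout
    printf 'k3s agent --token %s --server %s\n' "$(cat "$1")" "$2"
}
```

On an agent node, `eval "$(k3s_join_cmd "$TOKEN_FILE" "$SERVER_URL")"` would
then perform the registration described above.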

> -----Original Message-----
> From: Bruce Ashfield <bruce.ashfield@gmail.com>
> Sent: Thursday, November 12, 2020 9:40 PM
> To: Lance Yang <Lance.Yang@arm.com>
> Cc: Joakim Roubert <joakim.roubert@axis.com>; meta-virtualization@yoctoproject.org;
> Michael Zhao <Michael.Zhao@arm.com>; Kaly Xin <Kaly.Xin@arm.com>; nd <nd@arm.com>
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On Thu, Nov 12, 2020 at 2:04 AM Lance Yang <Lance.Yang@arm.com> wrote:
> >
> > Hi Bruce,
> >
> > On my side, the flannel did not crash.
> >
> > I started k3s server without agent and confirmed the node is ready. Then I copied the
> node_token and then ran k3s agent with that token and specified the server url. Then I
> can see node registered successfully.
>
> When you say 'copied', do you mean copied to a different machine ?
>
> >
> > The pods traefik and coredns will be not ready after I started k3s agent in a short while:
> > kube-system   traefik-758cd5fc85-8tqhf                 0/1     Running   0          24m
> > kube-system   coredns-7944c66d8d-584jt                 0/1     Running   0          21m
> >
> > but they soon were back to ready state:
> > kube-system   coredns-7944c66d8d-584jt                 1/1     Running   0          21m
> > kube-system   traefik-758cd5fc85-8tqhf                 1/1     Running   0          24m
> >
> > I did not see any errors related flannel crash.  Hope this helped a little bit.
> >
>
> unfortunately .. no.
>
> I need a single node configuration to work, before I can actually merge the changes,
> since without it, I have no way to ensure that it continues working.
>
> Cheers,
>
> Bruce
>
> > Best Regards,
> > Lance
> > > -----Original Message-----
> > > From: Bruce Ashfield <bruce.ashfield@gmail.com>
> > > Sent: Wednesday, November 11, 2020 9:40 PM
> > > To: Lance Yang <Lance.Yang@arm.com>
> > > Cc: Joakim Roubert <joakim.roubert@axis.com>;
> > > meta-virtualization@yoctoproject.org; Michael Zhao
> > > <Michael.Zhao@arm.com>; Kaly Xin <Kaly.Xin@arm.com>
> > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > >
> > > On Wed, Nov 11, 2020 at 5:07 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > > >
> > > > Hi Bruce,
> > > >
> > > > Thanks for your reply. It took me a long time to address the $PATH issue.
> > > >
> > > > I found the cause after I cleaned my environment. It is because I
> > > > set my own $PATH before k3s startup, which did not include "/bin",
> > > > and some components in k3s use os.Getenv("PATH") to locate the
> > > > related tools.
> > > >
> > > > Since some OSes may not have systemd, and the k3s-clean script seems
> > > > not to delete all interfaces created by CNI or umount all mount
> > > > points related to k3s, I wrote a script for this situation and will
> > > > send it through a separate email based on your previous scripts.
> > > >
> > >
> > > Sounds good.
> > >
> > > You'll notice that I pushed an update to the k3s-wip branch, things
> > > are a bit closer to working there now.
> > >
> > > I am interested to hear if flannel crashes if you (or anyone else)
> > > install (and use) the k3s-agent on the same node as the k3s-server.
> > >
> > > As soon as you run any k3s-agent command (with the proper token and
> > > local host as the server), the node moves into NotReady with what looks like a CNI
> error.
> > >
> > > I fetched and ran the binaries directly from rancher, and they
> > > showed the same behaviour as the meta-virt ones, so it isn't
> > > something fundamental with the integration.
> > >
> > > Bruce
> > >
> > > > Best Regards,
> > > > Lance
> > > > > -----Original Message-----
> > > > > From: meta-virtualization@lists.yoctoproject.org
> > > > > <meta-virtualization@lists.yoctoproject.org>
> > > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > > Sent: Tuesday, November 10, 2020 9:35 PM
> > > > > To: Bruce Ashfield <bruce.ashfield@gmail.com>
> > > > > Cc: Lance Yang <Lance.Yang@arm.com>; Joakim Roubert
> > > > > <joakim.roubert@axis.com>; meta-
> > > > > virtualization@yoctoproject.org; Michael Zhao
> > > > > <Michael.Zhao@arm.com>; Kaly Xin <Kaly.Xin@arm.com>
> > > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> > > > >
> > > > > On Tue, Nov 10, 2020 at 8:17 AM Bruce Ashfield via
> > > > > lists.yoctoproject.org <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > > > > >
> > > > > > On Tue, Nov 10, 2020 at 7:46 AM Bruce Ashfield via
> > > > > > lists.yoctoproject.org
> > > > > > <bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
> > > > > > >
> > > > > > > On Tue, Nov 10, 2020 at 1:43 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > > > > > > >
> > > > > > > > Hi Bruce and Joakim,
> > > > > > > >
> > > > > > > > Thanks for sharing this branch: k3s-wip. I have tested it
> > > > > > > > against my yocto build.
> > > > > > >
> > > > > > > The branch will be more functional shortly, I have quite a
> > > > > > > few changes to factor things for k8s and generally more
> > > > > > > usable :D
> > > > > > >
> > > > > > >         modified:   classes/cni_networking.bbclass
> > > > > > >         modified:   conf/layer.conf
> > > > > > >         modified:   recipes-containers/containerd/containerd-docker_git.bb
> > > > > > >         modified:
> > > > > > > recipes-containers/containerd/containerd-opencontainers_git.bb
> > > > > > >         modified:   recipes-containers/k3s/README.md
> > > > > > >         modified:   recipes-containers/k3s/k3s_git.bb
> > > > > > >         modified:   recipes-kernel/linux/linux-yocto/kubernetes.cfg
> > > > > > >         modified:   recipes-networking/cni/cni_git.bb
> > > > > > >         container-deploy.txt
> > > > > > >         recipes-core/packagegroups/
> > > > > > >
> > > > > > > >
> > > > > > > > My Image: Linux qemuarm64 by yocto.
> > > > > > > >
> > > > > > > > The master node can be ready after I started the k3s server.
> > > > > > > > However, the pods in kube-system (which are essential
> > > > > > > > components for k3s) cannot reach the ready state on qemuarm64.
> > > > > > > >
> > > > > > >
> > > > > > > That's interesting, since in my configuration, the master never comes ready:
> > > > > > >
> > > > > > > root@qemux86-64:~# kubectl get nodes
> > > > > > > NAME         STATUS     ROLES    AGE   VERSION
> > > > > > > qemux86-64   NotReady   master   15h   v1.18.9-k3s1
> > > > > > >
> > > > > >
> > > > > > Hah.
> > > > > >
> > > > > > I finally got the node to show up as ready:
> > > > > >
> > > > > > root@qemux86-64:~# kubectl get nodes
> > > > > > NAME         STATUS   ROLES    AGE    VERSION
> > > > > > qemux86-64   Ready    master   112s   v1.18.9-k3s1
> > > > > >
> > > > >
> > > > > Lance,
> > > > >
> > > > > What image type were you building ? I'm pulling in dependencies
> > > > > to packagegroups and the recipes themselves.
> > > > >
> > > > > I'm not seeing the mount issue on my master/server node:
> > > > >
> > > > > root@qemux86-64:~# kubectl get pods -n kube-system
> > > > > NAME                                     READY   STATUS      RESTARTS   AGE
> > > > > local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          3m32s
> > > > > metrics-server-7566d596c8-mwntr          1/1     Running     0          3m32s
> > > > > helm-install-traefik-229v7               0/1     Completed   0          3m32s
> > > > > coredns-7944c66d8d-9rfj7                 1/1     Running     0          3m32s
> > > > > svclb-traefik-pb5j4                      2/2     Running     0          2m29s
> > > > > traefik-758cd5fc85-lxpr8                 1/1     Running     0          2m29s
> > > > >
> > > > > I'm going back to all-in-one node debugging, but can look into
> > > > > the mount issue more later.
> > > > >
> > > > > Bruce
> > > > >
> > > > > > I'm attempting to build an all-in-one node, and that is likely
> > > > > > causing me some issues.
> > > > > >
> > > > > > I'm revisiting those potential conflicts now.
> > > > > >
> > > > > > But if anyone else does have an all in one working and has
> > > > > > some tips, feel free to share :D
> > > > > >
> > > > > > Bruce
> > > > > >
> > > > > > > I've sorted out more of the dependencies, and have
> > > > > > > packagegroups to make them easier now.
> > > > > > >
> > > > > > > Hopefully, I can figure out what is now missing and keeping
> > > > > > > my master from moving into ready today.
> > > > > > >
> > > > > > > Bruce
> > > > > > >
> > > > > > > > After the master node itself turned to ready state, I
> > > > > > > > checked the pods with kubectl:
> > > > > > > >
> > > > > > > > kubectl get nodes
> > > > > > > > NAME        STATUS   ROLES    AGE   VERSION
> > > > > > > > qemuarm64   Ready    master   11m   v1.18.9-k3s1
> > > > > > > > root@qemuarm64:~# ls
> > > > > > > > root@qemuarm64:~# kubectl get pods -n kube-system
> > > > > > > > NAME                                     READY   STATUS              RESTARTS   AGE
> > > > > > > > local-path-provisioner-6d59f47c7-xxvbl   0/1     ContainerCreating   0          12m
> > > > > > > > coredns-7944c66d8d-tlrm9                 0/1     ContainerCreating   0          12m
> > > > > > > > metrics-server-7566d596c8-svkff          0/1     ContainerCreating   0          12m
> > > > > > > > helm-install-traefik-s8p5g               0/1     ContainerCreating   0          12m
> > > > > > > >
> > > > > > > > Then I describe the pods with:
> > > > > > > >
> > > > > > > > Events:
> > > > > > > >   Type     Reason       Age                  From               Message
> > > > > > > >   ----     ------       ----                 ----               -------
> > > > > > > >   Normal   Scheduled    16m                  default-scheduler  Successfully assigned kube-system/coredns-7944c66d8d-tlrm9 to qemuarm64
> > > > > > > >   Warning  FailedMount  5m23s (x3 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[coredns-token-b7nlh config-volume]: timed out waiting for the condition
> > > > > > > >   Warning  FailedMount  50s (x4 over 12m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[coredns-token-b7nlh], unattached volumes=[config-volume coredns-token-b7nlh]: timed out waiting for the condition
> > > > > > > >   Warning  FailedMount  11s (x16 over 16m)   kubelet            MountVolume.SetUp failed for volume "coredns-token-b7nlh" : mount failed: exec: "mount": executable file not found in $PATH
> > > > > > > >
> > > > > > > > I found that the "mount" binary is not found in $PATH. However,
> > > > > > > > I confirmed the $PATH and the mount binary on my qemuarm64 image:
> > > > > > > >
> > > > > > > > root@qemuarm64:~# echo $PATH
> > > > > > > > /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > > > root@qemuarm64:~# which mount
> > > > > > > > /bin/mount
> > > > > > > >
> > > > > > > > When I type mount command, it worked fine:
> > > > > > > >
> > > > > > > > /dev/root on / type ext4 (rw,relatime) devtmpfs on /dev
> > > > > > > > type devtmpfs
> > > > > > > > (rw,relatime,size=2016212k,nr_inodes=504053,mode=755)
> > > > > > > > proc on /proc type proc (rw,relatime) sysfs on /sys type
> > > > > > > > sysfs
> > > > > > > > (rw,relatime) debugfs on /sys/kernel/debug type debugfs
> > > > > > > > (rw,relatime) tmpfs on /run type tmpfs
> > > > > > > > (rw,nosuid,nodev,mode=755) ...
> > > > > > > > ... (skipped the verbose output)
> > > > > > > >
> > > > > > > > I would like to know whether you have ever encountered this "mount" issue.
> > > > > > > >
> > > > > > > > Best Regards,
> > > > > > > > Lance
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: meta-virtualization@lists.yoctoproject.org
> > > > > > > > > <meta-virtualization@lists.yoctoproject.org>
> > > > > > > > > On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> > > > > > > > > Sent: Monday, October 26, 2020 11:46 PM
> > > > > > > > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > > > > > > > Cc: meta-virtualization@yoctoproject.org
> > > > > > > > > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s
> > > > > > > > > recipe
> > > > > > > > >
> > > > > > > > > On Wed, Oct 21, 2020 at 2:00 AM Joakim Roubert
> <joakim.roubert@axis.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On 2020-10-21 05:10, Bruce Ashfield wrote:
> > > > > > > > > > > Ha!!!!
> > > > > > > > > > >
> > > > > > > > > > > This applies.
> > > > > > > > > >
> > > > > > > > > > Wonderful, thank you! I guess this is what is called "five times lucky"...
> > > > > > > > > >
> > > > > > > > > > > I'm now testing and completing some of my networking
> > > > > > > > > > > factoring, as well as importing / forking some
> > > > > > > > > > > recipes to avoid extra layer depends.
> > > > > > > > > >
> > > > > > > > > > Excellent!
> > > > > > > > >
> > > > > > > > > I've pushed some of my WIP to:
> > > > > > > > > https://git.yoctoproject.org/cgit/cgit.cgi/meta-virtualization/log/?h=k3s-wip
> > > > > > > > >
> > > > > > > > > That includes the split of the networking, the import of
> > > > > > > > > some of the dependencies and some small tweaks I'm working on.
> > > > > > > > >
> > > > > > > > > I did have a couple of questions on the k3s packaging
> > > > > > > > > itself, I was getting the following
> > > > > > > > > error:
> > > > > > > > >
> > > > > > > > > ERROR: k3s-v1.18.9+k3s1-dirty-r0 do_package: QA Issue: k3s:
> > > > > > > > > Files/directories were installed but not shipped in any package:
> > > > > > > > >   /usr/local/bin/k3s-clean
> > > > > > > > >   /usr/local/bin/crictl
> > > > > > > > >   /usr/local/bin/kubectl
> > > > > > > > >   /usr/local/bin/k3s
> > > > > > > > >
> > > > > > > > > So I added them to the FILES of the k3s package itself
> > > > > > > > > (so both k3s-server and k3s-agent will get them), is that the split you were
> looking for ?
> > > > > > > > >
> > > > > > > > > Bruce
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > BR,
> > > > > > > > > >
> > > > > > > > > > /Joakim
> > > > > > > > > > --
> > > > > > > > > > Joakim Roubert
> > > > > > > > > > Senior Engineer
> > > > > > > > > >
> > > > > > > > > > Axis Communications AB Emdalavägen 14, SE-223 69 Lund,
> > > > > > > > > > Sweden
> > > > > > > > > > Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> > > > > > > > > > Fax: +46 46 13 61 30, www.axis.com
> > > > > > > > > >
> > > > > > > > > --
> > > > > > > > > - Thou shalt not follow the NULL pointer, for chaos and
> > > > > > > > > madness await thee at its end
> > > > > > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > - Thou shalt not follow the NULL pointer, for chaos and
> > > > > > > madness await thee at its end
> > > > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > - Thou shalt not follow the NULL pointer, for chaos and
> > > > > > madness await thee at its end
> > > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > > > await thee at its end
> > > > > - "Use the force Harry" - Gandalf, Star Trek II
> > >
> > >
> > >
> > > --
> > > - Thou shalt not follow the NULL pointer, for chaos and madness
> > > await thee at its end
> > > - "Use the force Harry" - Gandalf, Star Trek II
>
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [meta-virtualization][PATCH] k3s: Update README.md
  2020-11-12 13:38                                                   ` Bruce Ashfield
@ 2020-11-12 14:26                                                     ` Joakim Roubert
  2020-11-17 12:39                                                     ` [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3 Joakim Roubert
       [not found]                                                     ` <16484BFA14ED0B17.5807@lists.yoctoproject.org>
  2 siblings, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-11-12 14:26 UTC (permalink / raw)
  To: meta-virtualization; +Cc: Joakim Roubert

Fix the issues reported by markdownlint, i.e. by

docker run -it --rm -v $(pwd):/x -w /x \
peterdavehello/markdownlint:0.22.0 markdownlint .

plus some additional formatting.

Change-Id: Ic8f9ccd40607258d934a9510113ae44344253f65
Signed-off-by: Joakim Roubert <joakimr@axis.com>
---
 recipes-containers/k3s/README.md | 63 +++++++++++++++++---------------
 1 file changed, 34 insertions(+), 29 deletions(-)

diff --git a/recipes-containers/k3s/README.md b/recipes-containers/k3s/README.md
index e4cb3e3..70f92a6 100644
--- a/recipes-containers/k3s/README.md
+++ b/recipes-containers/k3s/README.md
@@ -29,35 +29,37 @@ k3s-agent -t <token> -s https://<master>:6443
 (Here `<token>` is found in `/var/lib/rancher/k3s/server/node-token` at the
 k3s master.)
 
-Example:
+### Example
+
 ```shell
 k3s-agent -t /var/lib/rancher/k3s/server/node-token -s https://localhost:6443
 ```
 
-## Notes:
+## Notes
 
-if running under qemu, the default of 256M of memory is not enough, k3s will
-OOM and exit.
+**If running under qemu**, the default of 256 MB of memory is not enough, k3s
+will OOM and exit.
 
-Boot with qemuparams="-m 2048" to boot with 2G of memory (or choose the
+Boot with `qemuparams="-m 2048"` to boot with 2 GB of memory (or choose the
 appropriate amount for your configuration)
 
-Disk: if using qemu and core-image* you'll need to add extra space in your disks
-to ensure containers can start. The following in your image recipe, or local.conf
-would add 2G of extra space to the rootfs:
+Disk: if using qemu and core-image* you will need to add extra space in your
+disks to ensure containers can start. The following in your image recipe,
+or `local.conf` would add 2 GB of extra space to the rootfs:
 
 ```shell
 IMAGE_ROOTFS_EXTRA_SPACE = "2097152"
 ```
 
-## Example output from qemux86-64 running k3s server:
+## Example output from qemux86-64 running k3s server
 
 ```shell
 root@qemux86-64:~# kubectl get nodes
 NAME         STATUS   ROLES    AGE   VERSION
 qemux86-64   Ready    master   46s   v1.18.9-k3s1
+```
 
-
+```shell
 root@qemux86-64:~# kubectl get pods -n kube-system
 NAME                                     READY   STATUS      RESTARTS   AGE
 local-path-provisioner-6d59f47c7-h7lxk   1/1     Running     0          2m32s
@@ -66,7 +68,9 @@ helm-install-traefik-229v7               0/1     Completed   0          2m32s
 coredns-7944c66d8d-9rfj7                 1/1     Running     0          2m32s
 svclb-traefik-pb5j4                      2/2     Running     0          89s
 traefik-758cd5fc85-lxpr8                 1/1     Running     0          89s
+```
 
+```shell
 root@qemux86-64:~# kubectl describe pods -n kube-system
 
 root@qemux86-64:~# ip a s
@@ -74,25 +78,25 @@ root@qemux86-64:~# ip a s
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
-    inet6 ::1/128 scope host 
+    inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
     link/ether 52:54:00:12:35:02 brd ff:ff:ff:ff:ff:ff
     inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
        valid_lft forever preferred_lft forever
-    inet6 fec0::5054:ff:fe12:3502/64 scope site dynamic mngtmpaddr 
+    inet6 fec0::5054:ff:fe12:3502/64 scope site dynamic mngtmpaddr
        valid_lft 86239sec preferred_lft 14239sec
-    inet6 fe80::5054:ff:fe12:3502/64 scope link 
+    inet6 fe80::5054:ff:fe12:3502/64 scope link
        valid_lft forever preferred_lft forever
 3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
     link/sit 0.0.0.0 brd 0.0.0.0
-4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
+4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
     link/ether e2:aa:04:89:e6:0a brd ff:ff:ff:ff:ff:ff
     inet 10.42.0.0/32 brd 10.42.0.0 scope global flannel.1
        valid_lft forever preferred_lft forever
-    inet6 fe80::e0aa:4ff:fe89:e60a/64 scope link 
+    inet6 fe80::e0aa:4ff:fe89:e60a/64 scope link
        valid_lft forever preferred_lft forever
-5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
+5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
     link/ether 02:42:be:3e:25:e7 brd ff:ff:ff:ff:ff:ff
     inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
        valid_lft forever preferred_lft forever
@@ -100,30 +104,31 @@ root@qemux86-64:~# ip a s
     link/ether 82:8e:b4:f8:06:e7 brd ff:ff:ff:ff:ff:ff
     inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
        valid_lft forever preferred_lft forever
-    inet6 fe80::808e:b4ff:fef8:6e7/64 scope link 
+    inet6 fe80::808e:b4ff:fef8:6e7/64 scope link
        valid_lft forever preferred_lft forever
-7: veth82ac482e@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
+7: veth82ac482e@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
     link/ether ea:9d:14:c1:00:70 brd ff:ff:ff:ff:ff:ff link-netns cni-c52e6e09-f6e0-a47b-aea3-d6c47d3e2d01
-    inet6 fe80::e89d:14ff:fec1:70/64 scope link 
+    inet6 fe80::e89d:14ff:fec1:70/64 scope link
        valid_lft forever preferred_lft forever
-8: vethb94745ed@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
+8: vethb94745ed@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
     link/ether 1e:7f:7e:d3:ca:e8 brd ff:ff:ff:ff:ff:ff link-netns cni-86958efe-2462-016f-292d-81dbccc16a83
-    inet6 fe80::8046:3cff:fe23:ced1/64 scope link 
+    inet6 fe80::8046:3cff:fe23:ced1/64 scope link
        valid_lft forever preferred_lft forever
-9: veth81ffb276@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
+9: veth81ffb276@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
     link/ether 2a:1d:48:54:76:50 brd ff:ff:ff:ff:ff:ff link-netns cni-5d77238e-6452-4fa3-40d2-91d48386080b
-    inet6 fe80::acf4:7fff:fe11:b6f2/64 scope link 
+    inet6 fe80::acf4:7fff:fe11:b6f2/64 scope link
        valid_lft forever preferred_lft forever
-10: vethce261f6a@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
+10: vethce261f6a@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
     link/ether 72:a3:90:4a:c5:12 brd ff:ff:ff:ff:ff:ff link-netns cni-55675948-77f2-a952-31ce-615f2bdb0093
-    inet6 fe80::4d5:1bff:fe5d:db3a/64 scope link 
+    inet6 fe80::4d5:1bff:fe5d:db3a/64 scope link
        valid_lft forever preferred_lft forever
-11: vethee199cf4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
+11: vethee199cf4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
     link/ether e6:90:a4:a3:bc:a1 brd ff:ff:ff:ff:ff:ff link-netns cni-4aeccd16-2976-8a78-b2c4-e028da3bb1ea
-    inet6 fe80::c85a:8bff:fe0b:aea0/64 scope link 
+    inet6 fe80::c85a:8bff:fe0b:aea0/64 scope link
        valid_lft forever preferred_lft forever
+```
 
-
+```shell
 root@qemux86-64:~# kubectl describe nodes
 
 Name:               qemux86-64
@@ -235,4 +240,4 @@ Events:
   Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet     Node qemux86-64 status is now: NodeHasSufficientPID
   Normal   NodeReady                10m                kubelet     Node qemux86-64 status is now: NodeReady
 
-```shell
+```
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-12 13:43                                                   ` [meta-virtualization][PATCH v5] Adding k3s recipe Joakim Roubert
@ 2020-11-13  5:48                                                     ` Lance Yang
  2020-11-13  6:20                                                       ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Lance Yang @ 2020-11-13  5:48 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization, Bruce Ashfield

Hi Joakim,

Please see the comments inline.
> -----Original Message-----
> From: Joakim Roubert <joakim.roubert@axis.com>
> Sent: Thursday, November 12, 2020 9:43 PM
> To: Lance Yang <Lance.Yang@arm.com>
> Cc: meta-virtualization@yoctoproject.org
> Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
>
> On 2020-11-12 08:30, Lance Yang wrote:
> >
> > Problem 1: conflicting requests
> >    - nothing provides kernel-module-ip-set needed by ipset-6.38-r0
>
> Do you have
>
> CONFIG_IP_SET=m

[ Uh! That works! I thought I had set it, but when I double-checked the menuconfig, I found I had forgotten to set it.

As far as I know, ipset is a tool that simplifies iptables configurations (e.g. iptables -t nat -A PREROUTING -m set --match-set podnet dst -j xxx), which decreases the number of iptables rules.

Because of my limited knowledge of k3s/k8s, I have a question about ipset.

Is it a required component for k3s/k8s?
]
>
> in your kernel config?
>
> BR,
>
> /Joakim
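
For reference, that option is typically provided in a Yocto build as a
kernel configuration fragment pulled in by a bbappend. A minimal sketch
(the file names and the extra CONFIG_NETFILTER_XT_SET line are
illustrative assumptions, not taken from this thread):

```
# linux-yocto_%.bbappend (illustrative)
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://k3s.cfg"

# files/k3s.cfg -- the fragment itself
CONFIG_IP_SET=m
CONFIG_NETFILTER_XT_SET=m
```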
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-13  5:48                                                     ` Lance Yang
@ 2020-11-13  6:20                                                       ` Bruce Ashfield
  0 siblings, 0 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-13  6:20 UTC (permalink / raw)
  To: Lance Yang; +Cc: Joakim Roubert, meta-virtualization

On Fri, Nov 13, 2020 at 12:48 AM Lance Yang <Lance.Yang@arm.com> wrote:
>
> Hi Joakim,
>
> Please see the comments inline.
> > -----Original Message-----
> > From: Joakim Roubert <joakim.roubert@axis.com>
> > Sent: Thursday, November 12, 2020 9:43 PM
> > To: Lance Yang <Lance.Yang@arm.com>
> > Cc: meta-virtualization@yoctoproject.org
> > Subject: Re: [meta-virtualization][PATCH v5] Adding k3s recipe
> >
> > On 2020-11-12 08:30, Lance Yang wrote:
> > >
> > > Problem 1: conflicting requests
> > >    - nothing provides kernel-module-ip-set needed by ipset-6.38-r0
> >
> > Do you have
> >
> > CONFIG_IP_SET=m
>
> [ Uh! That works! I thought I had set it, but when I double-checked the menuconfig, I found I had forgotten to set it.
>
> As far as I know, ipset is a tool that simplifies iptables configurations (e.g. iptables -t nat -A PREROUTING -m set --match-set podnet dst -j xxx), which decreases the number of iptables rules.
>
> Because of my limited knowledge of k3s/k8s, I have a question about ipset.
>
> Is it a required component for k3s/k8s?

Check my reviews on the mailing list; you'll see where it is discussed.

Bruce

> ]
> >
> > in your kernel config?
> >
> > BR,
> >
> > /Joakim



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-12 13:38                                                   ` Bruce Ashfield
  2020-11-12 14:26                                                     ` [meta-virtualization][PATCH] k3s: Update README.md Joakim Roubert
@ 2020-11-17 12:39                                                     ` Joakim Roubert
  2020-11-17 13:27                                                       ` Bruce Ashfield
       [not found]                                                     ` <16484BFA14ED0B17.5807@lists.yoctoproject.org>
  2 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-11-17 12:39 UTC (permalink / raw)
  To: meta-virtualization; +Cc: Joakim Roubert

The current stable release installed when running the regular k3s
install script is now v1.19.3+k3s3. Using that version in our recipe
also brings in the fixes for

https://github.com/rancher/kine/issues/61

Change-Id: I4c53d777d4d837628a6c7f8a9234da9d71eb45ae
Signed-off-by: Joakim Roubert <joakimr@axis.com>
---
 recipes-containers/k3s/k3s_git.bb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/recipes-containers/k3s/k3s_git.bb b/recipes-containers/k3s/k3s_git.bb
index ea5ed75..28ce66b 100644
--- a/recipes-containers/k3s/k3s_git.bb
+++ b/recipes-containers/k3s/k3s_git.bb
@@ -4,7 +4,7 @@ HOMEPAGE = "https://k3s.io/"
 LICENSE = "Apache-2.0"
 LIC_FILES_CHKSUM = "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
 
-SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
+SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.19;name=k3s \
            file://k3s.service \
            file://k3s-agent.service \
            file://k3s-agent \
@@ -13,9 +13,9 @@ SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
            file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
           "
 SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
-SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
+SRCREV_k3s = "0e4fbfefe1dd8734756dfa4f9ab4fc89665cece4"
 
-PV = "v1.18.9+git${SRCPV}"
+PV = "v1.19.3+git${SRCPV}"
 
 CNI_NETWORKING_FILES ?= "${WORKDIR}/cni-containerd-net.conf"
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
       [not found]                                                     ` <16484BFA14ED0B17.5807@lists.yoctoproject.org>
@ 2020-11-17 13:05                                                       ` Joakim Roubert
  0 siblings, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-11-17 13:05 UTC (permalink / raw)
  To: meta-virtualization

On 2020-11-17 13:39, Joakim Roubert wrote:
> The current stable release installed when running the regular k3s
> install script is now v1.19.3+k3s3.

I launched a multi-arch cluster with this v1.19.3+k3s3, deployed Linkerd 
2.9.0 on it and then meshed a small test application sending traffic 
back and forth between a client and a server, and it all seems to work 
quite nicely.

BR,

/Joakim
-- 
Joakim Roubert
Senior Engineer

Axis Communications AB
Emdalavägen 14, SE-223 69 Lund, Sweden
Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
Fax: +46 46 13 61 30, www.axis.com


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-17 12:39                                                     ` [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3 Joakim Roubert
@ 2020-11-17 13:27                                                       ` Bruce Ashfield
  2020-11-17 13:31                                                         ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-17 13:27 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization, Joakim Roubert

[-- Attachment #1: Type: text/plain, Size: 2055 bytes --]

On Tue, Nov 17, 2020 at 7:40 AM Joakim Roubert <joakim.roubert@axis.com>
wrote:

> The current stable release installed when running the regular k3s
> install script is now v1.19.3+k3s3. Using that version in our recipe
> also brings in the fixes for
>

I already have this queued in my WIP branch.

But without being able to assemble a working single-node "cluster", this
is still stuck in my staging branch.

Bruce



>
> https://github.com/rancher/kine/issues/61
>
> Change-Id: I4c53d777d4d837628a6c7f8a9234da9d71eb45ae
> Signed-off-by: Joakim Roubert <joakimr@axis.com>
> ---
>  recipes-containers/k3s/k3s_git.bb | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/recipes-containers/k3s/k3s_git.bb b/recipes-containers/k3s/
> k3s_git.bb
> index ea5ed75..28ce66b 100644
> --- a/recipes-containers/k3s/k3s_git.bb
> +++ b/recipes-containers/k3s/k3s_git.bb
> @@ -4,7 +4,7 @@ HOMEPAGE = "https://k3s.io/"
>  LICENSE = "Apache-2.0"
>  LIC_FILES_CHKSUM =
> "file://${S}/src/import/LICENSE;md5=2ee41112a44fe7014dce33e26468ba93"
>
> -SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.18;name=k3s
> \
> +SRC_URI = "git://github.com/rancher/k3s.git;branch=release-1.19;name=k3s
> \
>             file://k3s.service \
>             file://k3s-agent.service \
>             file://k3s-agent \
> @@ -13,9 +13,9 @@ SRC_URI = "git://
> github.com/rancher/k3s.git;branch=release-1.18;name=k3s \
>
> file://0001-Finding-host-local-in-usr-libexec.patch;patchdir=src/import \
>            "
>  SRC_URI[k3s.md5sum] = "363d3a08dc0b72ba6e6577964f6e94a5"
> -SRCREV_k3s = "630bebf94b9dce6b8cd3d402644ed023b3af8f90"
> +SRCREV_k3s = "0e4fbfefe1dd8734756dfa4f9ab4fc89665cece4"
>
> -PV = "v1.18.9+git${SRCPV}"
> +PV = "v1.19.3+git${SRCPV}"
>
>  CNI_NETWORKING_FILES ?= "${WORKDIR}/cni-containerd-net.conf"
>
> --
> 2.20.1
>
>
> 
>
>

-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end
- "Use the force Harry" - Gandalf, Star Trek II

[-- Attachment #2: Type: text/html, Size: 4069 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-17 13:27                                                       ` Bruce Ashfield
@ 2020-11-17 13:31                                                         ` Joakim Roubert
  2020-11-17 13:40                                                           ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-11-17 13:31 UTC (permalink / raw)
  To: meta-virtualization

On 2020-11-17 14:27, Bruce Ashfield wrote:
> 
> I already have this queued in my WIP branch.

Ah, good.

> But without being able to assemble a single node working "cluster", this 
> is still stuck in my staging branch.

I am not sure what this single node cluster is. Is it a k3s master 
running on a single machine, where you can deploy things? What are the 
current issues that you face there?

BR,

/Joakim
-- 
Joakim Roubert
Senior Engineer

Axis Communications AB
Emdalavägen 14, SE-223 69 Lund, Sweden
Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
Fax: +46 46 13 61 30, www.axis.com


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-17 13:31                                                         ` Joakim Roubert
@ 2020-11-17 13:40                                                           ` Bruce Ashfield
  2020-11-17 13:50                                                             ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-17 13:40 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

[-- Attachment #1: Type: text/plain, Size: 1604 bytes --]

On Tue, Nov 17, 2020 at 8:32 AM Joakim Roubert <joakim.roubert@axis.com>
wrote:

> On 2020-11-17 14:27, Bruce Ashfield wrote:
> >
> > I already have this queued in my WIP branch.
>
> Ah, good.
>
> > But without being able to assemble a single node working "cluster", this
> > is still stuck in my staging branch.
>
> I am not sure what this single node cluster is. Is it a k3s master
> running on a single machine, where you can deploy things? What are the
>

Yes. That's the typical description we've been using for a single-node
"cluster" (I put cluster in quotes because, technically, it's not a
cluster :D)



> current issues that you face there?
>

The issues that we were describing at length in the other, longer (more
muddled) thread. It was very easy to get lost in that thread, so I'll
start something new here.

The agent is unable to register with the master due to one of several
errors, either certifications or some sort of authentication issue. I've
worked through the flannel and other config issues, so they are taken care
of at this point.

I'm rebuilding on a rebased master now, and will follow up with better logs
when I relaunch my test.

Bruce



>
> BR,
>
> /Joakim
> --
> Joakim Roubert
> Senior Engineer
>
> Axis Communications AB
> Emdalavägen 14, SE-223 69 Lund, Sweden
> Tel: +46 46 272 18 00, Tel (direct): +46 46 272 27 48
> Fax: +46 46 13 61 30, www.axis.com
>
>
> 
>
>

-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end
- "Use the force Harry" - Gandalf, Star Trek II

[-- Attachment #2: Type: text/html, Size: 3028 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-17 13:40                                                           ` Bruce Ashfield
@ 2020-11-17 13:50                                                             ` Joakim Roubert
  2020-11-17 14:15                                                               ` Bruce Ashfield
       [not found]                                                               ` <16485135E3A12798.28066@lists.yoctoproject.org>
  0 siblings, 2 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-11-17 13:50 UTC (permalink / raw)
  To: meta-virtualization

On 2020-11-17 14:40, Bruce Ashfield wrote:
> 
> The agent is unable to register with the master due to one of several
>  errors

But if you have an agent, that would be 2 nodes, right? One master and
one worker node?

> I'm rebuilding on a rebased master now, and will follow up with
> better logs when I relaunch my test.

Excellent!

BR,

/Joakim

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-12  7:04                                                       ` Lance Yang
  2020-11-12 13:40                                                         ` Bruce Ashfield
@ 2020-11-17 14:13                                                         ` Joakim Roubert
  2021-03-13 19:30                                                           ` Bruce Ashfield
  1 sibling, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-11-17 14:13 UTC (permalink / raw)
  To: meta-virtualization

On 2020-11-12 08:04, Lance Yang wrote:
> 
> I started k3s server without agent and confirmed the node is ready. Then 
> I copied the node_token and then ran k3s agent with that token and 
> specified the server url.

Did you do this on the same device?
I always run the k3s master on one device and the k3s agents on other 
machines (and thought that was how it is supposed to work).

BR,

/Joakim


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-17 13:50                                                             ` Joakim Roubert
@ 2020-11-17 14:15                                                               ` Bruce Ashfield
       [not found]                                                               ` <16485135E3A12798.28066@lists.yoctoproject.org>
  1 sibling, 0 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-17 14:15 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

[-- Attachment #1: Type: text/plain, Size: 1003 bytes --]

On Tue, Nov 17, 2020 at 8:50 AM Joakim Roubert <joakim.roubert@axis.com>
wrote:

> On 2020-11-17 14:40, Bruce Ashfield wrote:
> >
> > The agent is unable to register with the master due to one of several
> >  errors
>
> But if you have an agent, that would be 2 nodes, right? One master and
> one worker node?
>
In Kubernetes terms, perhaps. But physical boxes, no. This needs to be
testable under a single instance (qemu in my case). I can use docker /
container tricks from within the instance if required (the guides are
mixed on that), but it needs to be a single box, as I have no way to
easily coordinate deployment on multiple boxes for my CI testing.
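
For reference, the single-box case can be sketched with stock k3s
invocations; `k3s server` already runs an embedded agent, and the
separate-agent form below uses the default k3s node-token path (these
commands are a sketch under those assumptions, not from this setup):

```shell
# One command: server plus embedded agent, a schedulable single-node "cluster"
k3s server &

# Or run server and agent as separate processes on the same box:
k3s server --disable-agent &
k3s agent --server https://127.0.0.1:6443 \
          --token "$(cat /var/lib/rancher/k3s/server/node-token)" &
```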

Bruce



>
> > I'm rebuilding on a rebased master now, and will follow up with
> > better logs when I relaunch my test.
>
> Excellent!
>
> BR,
>
> /Joakim
>
> 
>
>

-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end
- "Use the force Harry" - Gandalf, Star Trek II

[-- Attachment #2: Type: text/html, Size: 1970 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
       [not found]                                                               ` <16485135E3A12798.28066@lists.yoctoproject.org>
@ 2020-11-17 14:19                                                                 ` Bruce Ashfield
  2020-11-17 14:27                                                                   ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-17 14:19 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: Joakim Roubert, meta-virtualization

[-- Attachment #1: Type: text/plain, Size: 1507 bytes --]

On Tue, Nov 17, 2020 at 9:16 AM Bruce Ashfield via lists.yoctoproject.org
<bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:

>
>
> On Tue, Nov 17, 2020 at 8:50 AM Joakim Roubert <joakim.roubert@axis.com>
> wrote:
>
>> On 2020-11-17 14:40, Bruce Ashfield wrote:
>> >
>> > The agent is unable to register with the master due to one of several
>> >  errors
>>
>> But if you have an agent, that would be 2 nodes, right? One master and
>> one worker node?
>>
> In kubernetes terms, perhaps. But physical boxes .. no. This needs to be
> testable under a single instance (qemu in my case). I can use docker /
> container tricks from within the instance if required (the guides are mixed
> on that), but it needs to be a single box as I have no way to easily
> coordinate deployment on multiple boxes for my CI testing.
>
>
I should also say that this is how I've been testing k8s (but that also
needs work again), so it has been up and running in the past.

Bruce



> Bruce
>
>
>
>>
>> > I'm rebuilding on a rebased master now, and will follow up with
>> > better logs when I relaunch my test.
>>
>> Excellent!
>>
>> BR,
>>
>> /Joakim
>>
>>
>>
>>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee
> at its end
> - "Use the force Harry" - Gandalf, Star Trek II
>
>
> 
>
>

-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end
- "Use the force Harry" - Gandalf, Star Trek II

[-- Attachment #2: Type: text/html, Size: 3273 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-17 14:19                                                                 ` Bruce Ashfield
@ 2020-11-17 14:27                                                                   ` Joakim Roubert
  2020-11-17 14:41                                                                     ` Bruce Ashfield
       [not found]                                                                     ` <1648529A6FD37D30.5807@lists.yoctoproject.org>
  0 siblings, 2 replies; 73+ messages in thread
From: Joakim Roubert @ 2020-11-17 14:27 UTC (permalink / raw)
  To: meta-virtualization

On 2020-11-17 15:19, Bruce Ashfield wrote:
> 
>         But if you have an agent, that would be 2 nodes, right? One
>         master and
>         one worker node?
> 
>     In kubernetes terms, perhaps. But physical boxes .. no. This needs
>     to be testable under a single instance (qemu in my case).

I see. Well, I have never tried that, but would in that case use one 
qemu instance for the master and a separate qemu instance for each 
worker node. (I should try to run both master and a worker on one 
machine and see what happens on my setup.)

BR,

/Joakim


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-17 14:27                                                                   ` Joakim Roubert
@ 2020-11-17 14:41                                                                     ` Bruce Ashfield
       [not found]                                                                     ` <1648529A6FD37D30.5807@lists.yoctoproject.org>
  1 sibling, 0 replies; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-17 14:41 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

[-- Attachment #1: Type: text/plain, Size: 1247 bytes --]

On Tue, Nov 17, 2020 at 9:27 AM Joakim Roubert <joakim.roubert@axis.com>
wrote:

> On 2020-11-17 15:19, Bruce Ashfield wrote:
> >
> >         But if you have an agent, that would be 2 nodes, right? One
> >         master and
> >         one worker node?
> >
> >     In kubernetes terms, perhaps. But physical boxes .. no. This needs
> >     to be testable under a single instance (qemu in my case).
>
> I see. Well, I have never tried that, but would in that case use one
> qemu instance for the master and a separate qemu instance for each
> worker node. (I should try to run both master and a worker on one
> machine and see what happens on my setup.)
>

I'm experimenting with two qemu sessions as well (I had to do this with
meta-openstack), but then host networking becomes an issue and if that's
also the box you are doing builds on (like I am), it can cause issues.

Either way, I'm fairly close now and hopefully will have this sorted out
soon.

My build is still running, but I'll post specific logs later today.

Bruce



>
> BR,
>
> /Joakim
>
>
> 
>
>

-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end
- "Use the force Harry" - Gandalf, Star Trek II

[-- Attachment #2: Type: text/html, Size: 2372 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
       [not found]                                                                     ` <1648529A6FD37D30.5807@lists.yoctoproject.org>
@ 2020-11-17 19:39                                                                       ` Bruce Ashfield
  2020-11-18 18:27                                                                         ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-17 19:39 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: Joakim Roubert, meta-virtualization

[-- Attachment #1: Type: text/plain, Size: 4981 bytes --]

Here's problem #1: using qemux86-64 and the latest on my k3s-wip branch,
the master shows itself as ready, but the pods are crashing. I need to
sort that out before I can get into the single node config.

root@qemux86-64:~# uname -a

Linux qemux86-64 5.8.13-yocto-standard #1 SMP PREEMPT Tue Oct 6 12:23:29
UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

root@qemux86-64:~# k3s --version

k3s version v1.19.3+gitAUTOINC+970fbc66d3 (970fbc66)

root@qemux86-64:~# kubectl get nodes

NAME         STATUS   ROLES    AGE   VERSION

qemux86-64   Ready    master   17m   v1.19.3-k3s1

And flannel is up:

5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue
state UNKNOWN group default

    link/ether be:5e:55:3e:52:e6 brd ff:ff:ff:ff:ff:ff

    inet 10.42.0.0/32 scope global flannel.1

       valid_lft forever preferred_lft forever

    inet6 fe80::bc5e:55ff:fe3e:52e6/64 scope link

       valid_lft forever preferred_lft forever

6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
group default qlen 1000

    link/ether e2:38:35:11:31:6e brd ff:ff:ff:ff:ff:ff

    inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0

       valid_lft forever preferred_lft forever

    inet6 fe80::e038:35ff:fe11:316e/64 scope link

       valid_lft forever preferred_lft forever

1081: veth89c8ca0d@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
noqueue master cni0 state UP group default

    link/ether b6:76:4f:77:e7:a6 brd ff:ff:ff:ff:ff:ff link-netns
cni-255cf837-ca3c-f08d-730f-bbb359cb6ae1

    inet6 fe80::8cf:bff:febb:85d7/64 scope link

       valid_lft forever preferred_lft forever

1082: veth001513a2@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
noqueue master cni0 state UP group default

    link/ether 52:5c:64:41:c6:e1 brd ff:ff:ff:ff:ff:ff link-netns
cni-71490108-5f0a-a266-de1f-4bed565dcf88

    inet6 fe80::605b:23ff:fe50:c22f/64 scope link

       valid_lft forever preferred_lft forever

1083: veth4ae47008@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
noqueue master cni0 state UP group default

    link/ether be:0e:af:76:4d:66 brd ff:ff:ff:ff:ff:ff link-netns
cni-bb1a1e05-6fa1-e9dd-3de8-b3a4be47197a

    inet6 fe80::8467:31ff:fec3:6b20/64 scope link

       valid_lft forever preferred_lft forever

1084: veth07cd05c1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
noqueue master cni0 state UP group default

    link/ether 5e:75:7b:bc:ee:1c brd ff:ff:ff:ff:ff:ff link-netns
cni-07e997d6-7918-0f38-b1e5-1b08e9ac0581

    inet6 fe80::28e0:ceff:fe42:364d/64 scope link tentative
       valid_lft forever preferred_lft forever

root@qemux86-64:~# kubectl get pods -n kube-system

NAME                                     READY   STATUS             RESTARTS   AGE

local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m

coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m

helm-install-traefik-mvt5k               0/1     CrashLoopBackOff   8          22m

metrics-server-7b4f8b595-ck5kz           0/1     CrashLoopBackOff   8          22m

root@qemux86-64:~#

Those failures are a bit different from what I used to see before going to 1.19

For whatever reason, my logs are empty, so I'm trying to see if my k3s host
image is missing something.

Bruce

On Tue, Nov 17, 2020 at 9:41 AM Bruce Ashfield via lists.yoctoproject.org
<bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:

>
>
> On Tue, Nov 17, 2020 at 9:27 AM Joakim Roubert <joakim.roubert@axis.com>
> wrote:
>
>> On 2020-11-17 15:19, Bruce Ashfield wrote:
>> >
>> >         But if you have an agent, that would be 2 nodes, right? One
>> >         master and
>> >         one worker node?
>> >
>> >     In kubernetes terms, perhaps. But physical boxes .. no. This needs
>> >     to be testable under a single instance (qemu in my case).
>>
>> I see. Well, I have never tried that, but would in that case use one
>> qemu instance for the master and a separate qemu instance for each
>> worker node. (I should try to run both master and a worker on one
>> machine and see what happens on my setup.)
>>
>
> I'm experimenting with two qemu sessions as well (I had to do this with
> meta-openstack), but then host networking becomes an issue and if that's
> also the box you are doing builds on (like I am), it can cause issues.
>
> Either way, I'm fairly close now and hopefully will have this sorted out
> soon.
>
> My build is still running, but I'll post specific logs later today.
>
> Bruce
>
>
>
>>
>> BR,
>>
>> /Joakim
>>
>>
>>
>>
>>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee
> at its end
> - "Use the force Harry" - Gandalf, Star Trek II
>
>
> 
>
>

-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end
- "Use the force Harry" - Gandalf, Star Trek II

[-- Attachment #2: Type: text/html, Size: 24140 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-17 19:39                                                                       ` Bruce Ashfield
@ 2020-11-18 18:27                                                                         ` Joakim Roubert
  2020-11-18 20:38                                                                           ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-11-18 18:27 UTC (permalink / raw)
  To: meta-virtualization

On 2020-11-17 20:39, Bruce Ashfield wrote:
 >
> root@qemux86-64:~# kubectl get pods -n kube-system
> NAME                                     READY   STATUS             RESTARTS   AGE
> local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m
> coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m
> helm-install-traefik-mvt5k               0/1     CrashLoopBackOff   8          22m
> metrics-server-7b4f8b595-ck5kz           0/1     CrashLoopBackOff   8          22m
> root@qemux86-64:~#

From my own experience, CrashLoopBackOff situations can have several 
different causes. Can you trace down what is causing the 
CrashLoopBackOff here?
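
For reference, the usual triage steps look something like the sketch below. 
The cluster-side commands need the live node, so they are shown as comments; 
the only directly runnable part is the output-parsing step at the end, which 
uses a sample saved from the `kubectl get pods` output quoted above (the 
file name is just an illustration):

```shell
# Cluster-side triage (needs the live node; shown as comments here):
#   kubectl -n kube-system describe pod <pod>      # events and exit codes
#   kubectl -n kube-system logs --previous <pod>   # log of the last crashed run
#   crictl ps -a && crictl logs <container-id>     # bypass kubectl entirely
#
# Runnable sketch: pick the crashing pods out of saved `kubectl get pods`
# output (sample taken from the session quoted above):
cat > /tmp/pods.txt <<'EOF'
NAME                                     READY   STATUS             RESTARTS   AGE
local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m
coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m
EOF
awk 'NR > 1 && $3 == "CrashLoopBackOff" { print $1 }' /tmp/pods.txt
```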

BR,

/Joakim

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-18 18:27                                                                         ` Joakim Roubert
@ 2020-11-18 20:38                                                                           ` Bruce Ashfield
  2020-12-11  6:31                                                                             ` Lance Yang
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-11-18 20:38 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

[-- Attachment #1: Type: text/plain, Size: 1493 bytes --]

On Wed, Nov 18, 2020 at 1:27 PM Joakim Roubert <joakim.roubert@axis.com>
wrote:

> On 2020-11-17 20:39, Bruce Ashfield wrote:
>  >
> > root@qemux86-64:~# kubectl get pods -n kube-system
> > NAME                                     READY   STATUS             RESTARTS   AGE
> > local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m
> > coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m
> > helm-install-traefik-mvt5k               0/1     CrashLoopBackOff   8          22m
> > metrics-server-7b4f8b595-ck5kz           0/1     CrashLoopBackOff   8          22m
> > root@qemux86-64:~#
>
>  From my own experience with CrashLoopBackOff situations is that there
> may be serveral different things causing that. Can you trace down what
> is causing the CrashLoopBackOff here?
>

Not really. The build seems to have some sort of misconfiguration and no
logs are created (and hence can't be accessed via kubectl).

Running the server on the command line just gets us reams of messages and a
lot of false leads (too many nameservers, iptables --random-fully not
supported, etc.).
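
The --random-fully lead is at least quick to rule in or out on the target; 
a minimal sketch (it only parses the iptables help text, so it needs no root 
and touches no rules, and it degrades gracefully if iptables is absent):

```shell
# Check whether this iptables build knows the --random-fully option
# (one of the false leads mentioned above):
if ! command -v iptables >/dev/null 2>&1; then
    echo "iptables: not installed on this host"
elif iptables --help 2>&1 | grep -q 'random-fully'; then
    echo "random-fully: supported"
else
    echo "random-fully: not supported"
fi
```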

I've tried a few variants today, but can't get any really simple logs out.
I'll do more tomorrow.

I've been able to confirm that my packagegroup/image type runs docker,
containerd, and runc launched containers just fine, so this is just some
quirk/idiocy of k*s that is causing the problem.

Bruce


>
> BR,
>
> /Joakim
>
> 
>
>

-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await thee
at its end
- "Use the force Harry" - Gandalf, Star Trek II

[-- Attachment #2: Type: text/html, Size: 2709 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-11-18 20:38                                                                           ` Bruce Ashfield
@ 2020-12-11  6:31                                                                             ` Lance Yang
  2020-12-11 13:43                                                                               ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Lance Yang @ 2020-12-11  6:31 UTC (permalink / raw)
  To: bruce.ashfield, Joakim Roubert; +Cc: meta-virtualization, nd, Michael Zhao

[-- Attachment #1: Type: text/plain, Size: 2627 bytes --]

Hi Bruce and Joakim,

I recently saw that a new branch, master-next, has been created in meta-virtualization. It is mainly related to k3s, and I saw that k3s has been bumped to v1.19. I would like to know if the k3s patch will be merged into the master branch soon. I remember you mentioned that k3s v1.18 had an agent registration issue and some crash loop issues.

What is the current status of the k3s patch? Are there any further issues?

Best Regards,
Lance
From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org> On Behalf Of Bruce Ashfield via lists.yoctoproject.org
Sent: Thursday, November 19, 2020 4:39 AM
To: Joakim Roubert <joakim.roubert@axis.com>
Cc: meta-virtualization <meta-virtualization@yoctoproject.org>
Subject: Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3



On Wed, Nov 18, 2020 at 1:27 PM Joakim Roubert <joakim.roubert@axis.com<mailto:joakim.roubert@axis.com>> wrote:
On 2020-11-17 20:39, Bruce Ashfield wrote:
 >
> root@qemux86-64:~# kubectl get pods -n kube-system
> NAME                                     READY   STATUS             RESTARTS   AGE
> local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m
> coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m
> helm-install-traefik-mvt5k               0/1     CrashLoopBackOff   8          22m
> metrics-server-7b4f8b595-ck5kz           0/1     CrashLoopBackOff   8          22m
> root@qemux86-64:~#

 From my own experience with CrashLoopBackOff situations is that there
may be serveral different things causing that. Can you trace down what
is causing the CrashLoopBackOff here?

Not really. The build seems to have some sort of misconfiguration and no logs are created (and hence can't be accessed via kubectl).

Running the server on the command line just gets us reams of messages and a lot of false leads (too many nameservers, iptables --random-fully not supported, etc ).

I've tried a few variants today, but can't get any really simple logs out. I'll do more tomorrow.

I've been able to confirm that my packagegroup/image type runs docker, containerd, and runc launched containers just fine, so this is just some quirk/idiocy of k*s that is causing the problem.

Bruce


BR,

/Joakim




--
- Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
- "Use the force Harry" - Gandalf, Star Trek II
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

[-- Attachment #2: Type: text/html, Size: 7018 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-12-11  6:31                                                                             ` Lance Yang
@ 2020-12-11 13:43                                                                               ` Bruce Ashfield
  2020-12-15  9:56                                                                                 ` Lance Yang
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-12-11 13:43 UTC (permalink / raw)
  To: Lance Yang; +Cc: Joakim Roubert, meta-virtualization, nd, Michael Zhao

On Fri, Dec 11, 2020 at 1:31 AM Lance Yang <Lance.Yang@arm.com> wrote:
>
> Hi Bruce and Joakim,
>
>
>
> I recently saw a new branch master-next has been created in meta-virtualization. It mainly related to k3s and I saw k3s had been bumped to v1.19. I would like to know if the k3s patch will be merged into master branch soon.  I remembered you mentioned k3s v1.18 has some agent registration issue and some had crash loop issues.
>
>
>
> What is the current status of the k3s patch? Is there any further issues?

It is just in master-next to show that some cleanups have been done,
along with the supporting component version bumps that I've done, but no,
it still doesn't work under qemux86-64, so it can't actually merge to
master.

The node becomes ready, but all the core containers go into a
CrashLoopBackOff, kubectl logs doesn't work, and we still can't
register a client with it (likely because of symptom #1).

I'm also chasing down an iptables issue (that I had previously fixed),
but should have that sorted shortly.

Bruce

>
>
>
> Best Regards,
>
> Lance
>
> From: meta-virtualization@lists.yoctoproject.org <meta-virtualization@lists.yoctoproject.org> On Behalf Of Bruce Ashfield via lists.yoctoproject.org
> Sent: Thursday, November 19, 2020 4:39 AM
> To: Joakim Roubert <joakim.roubert@axis.com>
> Cc: meta-virtualization <meta-virtualization@yoctoproject.org>
> Subject: Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
>
>
>
>
>
>
>
> On Wed, Nov 18, 2020 at 1:27 PM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-11-17 20:39, Bruce Ashfield wrote:
>  >
> > root@qemux86-64:~# kubectl get pods -n kube-system
> > NAME                                     READY   STATUS             RESTARTS   AGE
> > local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m
> > coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m
> > helm-install-traefik-mvt5k               0/1     CrashLoopBackOff   8          22m
> > metrics-server-7b4f8b595-ck5kz           0/1     CrashLoopBackOff   8          22m
> > root@qemux86-64:~#
>
>  From my own experience with CrashLoopBackOff situations is that there
> may be serveral different things causing that. Can you trace down what
> is causing the CrashLoopBackOff here?
>
>
>
> Not really. The build seems to have some sort of misconfiguration and no logs are created (and hence can't be accessed via kubectl).
>
>
>
> Running the server on the command line just gets us reams of messages and a lot of false leads (too many nameservers, iptables --random-fully not supported, etc ).
>
>
>
> I've tried a few variants today, but can't get any really simple logs out. I'll do more tomorrow.
>
>
>
> I've been able to confirm that my packagegroup/image type runs docker, containerd, and runc launched containers just fine, so this is just some quirk/idiocy of k*s that is causing the problem.
>
>
>
> Bruce
>
>
>
>
> BR,
>
> /Joakim
>
>
>
>
>
> --
>
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II
>



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-12-11 13:43                                                                               ` Bruce Ashfield
@ 2020-12-15  9:56                                                                                 ` Lance Yang
  2020-12-15 18:58                                                                                   ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Lance Yang @ 2020-12-15  9:56 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: Joakim Roubert, Michael Zhao, meta-virtualization, nd

Hi Bruce,

Thanks for your quick reply.

Yesterday, I pulled the meta-virtualization master-next branch to my local server and tried to build k3s with the qemux86-64 image. In the master-next branch, I found you had created a DISTRO_FEATURES check for k3s and some package groups for k3s, e.g. packagegroup-k3s-node and packagegroup-k3s-host.

I configured my local.conf to install k3s with the following configuration:

DISTRO_FEATURES_append = " k3s virtualization"
IMAGE_INSTALL_append = " packagegroup-k3s-node packagegroup-k3s-host cgroup-lite kernel-modules"

However, when I used:

bitbake core-image-minimal

to compile a minimal image to test k3s, I encountered the following error message:

gcc: error: unrecognized command line option ‘-fmacro-prefix-map=/path/to/build/tmp/work/core2-64-poky-linux/libvirt/6.3.0-r0=/usr/src/debug/libvirt/6.3.0-r0’
| error: command '/home/lance/repos/build/tmp/hosttools/gcc' failed with exit code 1
| WARNING: exit code 1 from a shell command.
| ERROR: Execution of '/path/to/build/tmp/work/core2-64-poky-linux/libvirt/6.3.0-r0/temp/run.do_compile.162048' failed with exit code 1:

It seems the error is related to libvirt. I have two questions here. If you need more log details, please let me know.

1. Did I misconfigure anything to cause the error above? Could you please share the configuration you use when you compile the image with k3s?
2. Do we need libvirt when we install k3s? In my experience, libvirt is for virtualization; when I ran k3s on bare metal, I don't think I installed any libvirt component, and I have previously run k3s successfully without libvirt on aarch64 Yocto Linux.
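
On question 2, BitBake itself can show what drags libvirt into the image. 
A sketch: the `bitbake -g` step needs a configured build environment, so the 
grep below runs against a hand-written sample of the dependency graph, and 
the packagegroup edge in it is purely a hypothetical illustration, not taken 
from a real build:

```shell
# With a configured build environment one would do:
#   bitbake -g core-image-minimal     # writes task-depends.dot
#   grep libvirt task-depends.dot     # see which recipe pulls it in
#
# Illustration against a hand-written sample graph (hypothetical edges):
cat > /tmp/task-depends.dot <<'EOF'
"packagegroup-k3s-host.do_build" -> "libvirt.do_package_write_rpm"
"core-image-minimal.do_rootfs" -> "packagegroup-k3s-host.do_package_write_rpm"
EOF
grep '"libvirt' /tmp/task-depends.dot
```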

Once I have successfully built the qemux86-64 image, I am willing to spend some effort reproducing the bugs you mentioned in the previous email. Would you mind sharing your config with me?

Best Regards,
Lance

> -----Original Message-----
> From: Bruce Ashfield <bruce.ashfield@gmail.com>
> Sent: Friday, December 11, 2020 9:44 PM
> To: Lance Yang <Lance.Yang@arm.com>
> Cc: Joakim Roubert <joakim.roubert@axis.com>; meta-virtualization <meta-
> virtualization@yoctoproject.org>; nd <nd@arm.com>; Michael Zhao
> <Michael.Zhao@arm.com>
> Subject: Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
>
> On Fri, Dec 11, 2020 at 1:31 AM Lance Yang <Lance.Yang@arm.com> wrote:
> >
> > Hi Bruce and Joakim,
> >
> >
> >
> > I recently saw a new branch master-next has been created in meta-
> virtualization. It mainly related to k3s and I saw k3s had been bumped to
> v1.19. I would like to know if the k3s patch will be merged into master branch
> soon.  I remembered you mentioned k3s v1.18 has some agent registration
> issue and some had crash loop issues.
> >
> >
> >
> > What is the current status of the k3s patch? Is there any further issues?
>
> It is just in master-next, to show that some cleanups have been done, and
> the supporting component version bumps that I've done, but no, it still
> doesn't work under qemux86-64, so it can't actually merge to master.
>
> The node becomes ready, but all the core containers go into a crashbackoff
> loop, kubectl logs doesn't work, and we still can't register a client with it
> (likely because of symptom #1).
>
> I'm also chasing down an iptables issue (that I had previously fixed), but
> should have that sorted shortly.
>
> Bruce
>
> >
> >
> >
> > Best Regards,
> >
> > Lance
> >
> > From: meta-virtualization@lists.yoctoproject.org
> > <meta-virtualization@lists.yoctoproject.org> On Behalf Of Bruce
> > Ashfield via lists.yoctoproject.org
> > Sent: Thursday, November 19, 2020 4:39 AM
> > To: Joakim Roubert <joakim.roubert@axis.com>
> > Cc: meta-virtualization <meta-virtualization@yoctoproject.org>
> > Subject: Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
> >
> >
> >
> >
> >
> >
> >
> > On Wed, Nov 18, 2020 at 1:27 PM Joakim Roubert
> <joakim.roubert@axis.com> wrote:
> >
> > On 2020-11-17 20:39, Bruce Ashfield wrote:
> >  >
> > > root@qemux86-64:~# kubectl get pods -n kube-system
> > > NAME                                     READY   STATUS             RESTARTS   AGE
> > > local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m
> > > coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m
> > > helm-install-traefik-mvt5k               0/1     CrashLoopBackOff   8          22m
> > > metrics-server-7b4f8b595-ck5kz           0/1     CrashLoopBackOff   8          22m
> > > root@qemux86-64:~#
> >
> >  From my own experience with CrashLoopBackOff situations is that there
> > may be serveral different things causing that. Can you trace down what
> > is causing the CrashLoopBackOff here?
> >
> >
> >
> > Not really. The build seems to have some sort of misconfiguration and no
> logs are created (and hence can't be accessed via kubectl).
> >
> >
> >
> > Running the server on the command line just gets us reams of messages
> and a lot of false leads (too many nameservers, iptables --random-fully not
> supported, etc ).
> >
> >
> >
> > I've tried a few variants today, but can't get any really simple logs out. I'll do
> more tomorrow.
> >
> >
> >
> > I've been able to confirm that my packagegroup/image type runs docker,
> containerd, and runc launched containers just fine, so this is just some
> quirk/idiocy of k*s that is causing the problem.
> >
> >
> >
> > Bruce
> >
> >
> >
> >
> > BR,
> >
> > /Joakim
> >
> >
> >
> >
> >
> > --
> >
> > - Thou shalt not follow the NULL pointer, for chaos and madness await
> > thee at its end
> > - "Use the force Harry" - Gandalf, Star Trek II
> >
>
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee
> at its end
> - "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-12-15  9:56                                                                                 ` Lance Yang
@ 2020-12-15 18:58                                                                                   ` Bruce Ashfield
  2020-12-18 14:23                                                                                     ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-12-15 18:58 UTC (permalink / raw)
  To: Lance Yang; +Cc: Joakim Roubert, Michael Zhao, meta-virtualization, nd

On Tue, Dec 15, 2020 at 4:57 AM Lance Yang <Lance.Yang@arm.com> wrote:
>
> Hi Bruce,
>
> Thanks for your quick reply.
>
> Yesterday, I pulled meta-virtualization master-next branch to my local server and trying to build k3s with qemux86-64 image.  In master-next branch, I found you had created DISTRO_FEATURE check for k3s and some package groups for k3s. E.g. packagegroup-k3s-node, packagegroup-k3s-host.
>
> I configured my local.conf to install k3s with following configurations:
>
> DISTRO_FEATURES_append = " k3s virtualization"
> IMAGE_INSTALL_append = " packagegroup-k3s-node packagegroup-k3s-host cgroup-lite kernel-modules"
>
> However, when I use:
>
> bitbake core-image-minimal
>
> to compile a minimal image to test k3s.  I encountered the following error message:
>
> gcc: error: unrecognized command line option ‘-fmacro-prefix-map=/path/to/build/tmp/work/core2-64-poky-linux/libvirt/6.3.0-r0=/usr/src/debug/libvirt/6.3.0-r0’
> | error: command '/home/lance/repos/build/tmp/hosttools/gcc' failed with exit code 1
> | WARNING: exit code 1 from a shell command.
> | ERROR: Execution of '/path/to/build/tmp/work/core2-64-poky-linux/libvirt/6.3.0-r0/temp/run.do_compile.162048' failed with exit code 1:
>
> It seems the error was related to libvirt. I two three questions here. If you need more log details, please let me know.
>
> 1. Do I mis-config anything so that the error above appeared? Could you please share your configuration when you compile the image with k3s?

The packagegroups are still a work in progress, and I actually have
some changes to them that aren't on that branch yet. In particular
docker has been removed as a dependency and libvirt has been shuffled
to an optional packagegroup.

That being said, I just built libvirt on two different machines on an
up-to-date master branch for all layers, and didn't get that same
error.
You don't need the packagegroups for testing, but once I have a
working configuration, it will be split and captured into package
lists for re-use across many of the different configurations.

So I'd suggest dropping it and ignoring that failure for now, since I
can't reproduce it.

> 2. Do we need libvirt when we install k3s? From my own experience, libvirt is for virtualization. When I run k3s on bare-metal, I don't think I install a libvirt component. And before, I ran a k3s successfully without libvirt on aarch64 Yocto Linux.
>
> Once I successfully built the qemux86-64 image, I am willing to spare some effort to reproduce the bugs you mentioned in the previous email. Would you mind sharing your config to me?
>

I've actually tracked this down to something in our build of the k3s
binary. I can bring up a node by substituting just the k3s all-in-one
executable. There have been issues with the go compiler reported in
master, so I'm doing a full rebuild now, in case the linker is causing
the problem. I've looked all through the build instructions from the
k3s repository, and nothing obvious is different between the two
(except possibly the compiler) .. I'm building the exact same git hash
of the one that works.

For now, I'd suggest waiting a couple more days, since I may have it
solved shortly.

Bruce

> Best Regards,
> Lance
>
> > -----Original Message-----
> > From: Bruce Ashfield <bruce.ashfield@gmail.com>
> > Sent: Friday, December 11, 2020 9:44 PM
> > To: Lance Yang <Lance.Yang@arm.com>
> > Cc: Joakim Roubert <joakim.roubert@axis.com>; meta-virtualization <meta-
> > virtualization@yoctoproject.org>; nd <nd@arm.com>; Michael Zhao
> > <Michael.Zhao@arm.com>
> > Subject: Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
> >
> > On Fri, Dec 11, 2020 at 1:31 AM Lance Yang <Lance.Yang@arm.com> wrote:
> > >
> > > Hi Bruce and Joakim,
> > >
> > >
> > >
> > > I recently saw a new branch master-next has been created in meta-
> > virtualization. It mainly related to k3s and I saw k3s had been bumped to
> > v1.19. I would like to know if the k3s patch will be merged into master branch
> > soon.  I remembered you mentioned k3s v1.18 has some agent registration
> > issue and some had crash loop issues.
> > >
> > >
> > >
> > > What is the current status of the k3s patch? Is there any further issues?
> >
> > It is just in master-next, to show that some cleanups have been done, and
> > the supporting component version bumps that I've done, but no, it still
> > doesn't work under qemux86-64, so it can't actually merge to master.
> >
> > The node becomes ready, but all the core containers go into a crashbackoff
> > loop, kubectl logs doesn't work, and we still can't register a client with it
> > (likely because of symptom #1).
> >
> > I'm also chasing down an iptables issue (that I had previously fixed), but
> > should have that sorted shortly.
> >
> > Bruce
> >
> > >
> > >
> > >
> > > Best Regards,
> > >
> > > Lance
> > >
> > > From: meta-virtualization@lists.yoctoproject.org
> > > <meta-virtualization@lists.yoctoproject.org> On Behalf Of Bruce
> > > Ashfield via lists.yoctoproject.org
> > > Sent: Thursday, November 19, 2020 4:39 AM
> > > To: Joakim Roubert <joakim.roubert@axis.com>
> > > Cc: meta-virtualization <meta-virtualization@yoctoproject.org>
> > > Subject: Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Wed, Nov 18, 2020 at 1:27 PM Joakim Roubert
> > <joakim.roubert@axis.com> wrote:
> > >
> > > On 2020-11-17 20:39, Bruce Ashfield wrote:
> > >  >
> > > > root@qemux86-64:~# kubectl get pods -n kube-system
> > > > NAME                                     READY   STATUS             RESTARTS   AGE
> > > > local-path-provisioner-7ff9579c6-bd947   0/1     CrashLoopBackOff   8          22m
> > > > coredns-66c464876b-76bdl                 0/1     CrashLoopBackOff   8          22m
> > > > helm-install-traefik-mvt5k               0/1     CrashLoopBackOff   8          22m
> > > > metrics-server-7b4f8b595-ck5kz           0/1     CrashLoopBackOff   8          22m
> > > > root@qemux86-64:~#
> > >
> > >  From my own experience with CrashLoopBackOff situations is that there
> > > may be serveral different things causing that. Can you trace down what
> > > is causing the CrashLoopBackOff here?
> > >
> > >
> > >
> > > Not really. The build seems to have some sort of misconfiguration and no
> > logs are created (and hence can't be accessed via kubectl).
> > >
> > >
> > >
> > > Running the server on the command line just gets us reams of messages
> > and a lot of false leads (too many nameservers, iptables --random-fully not
> > supported, etc ).
> > >
> > >
> > >
> > > I've tried a few variants today, but can't get any really simple logs out. I'll do
> > more tomorrow.
> > >
> > >
> > >
> > > I've been able to confirm that my packagegroup/image type runs docker,
> > containerd, and runc launched containers just fine, so this is just some
> > quirk/idiocy of k*s that is causing the problem.
> > >
> > >
> > >
> > > Bruce
> > >
> > >
> > >
> > >
> > > BR,
> > >
> > > /Joakim
> > >
> > >
> > >
> > >
> > >
> > > --
> > >
> > > - Thou shalt not follow the NULL pointer, for chaos and madness await
> > > thee at its end
> > > - "Use the force Harry" - Gandalf, Star Trek II
> > >
> >
> >
> >
> > --
> > - Thou shalt not follow the NULL pointer, for chaos and madness await thee
> > at its end
> > - "Use the force Harry" - Gandalf, Star Trek II



--
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-12-15 18:58                                                                                   ` Bruce Ashfield
@ 2020-12-18 14:23                                                                                     ` Joakim Roubert
  2020-12-22 16:15                                                                                       ` Bruce Ashfield
  0 siblings, 1 reply; 73+ messages in thread
From: Joakim Roubert @ 2020-12-18 14:23 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-12-15 19:58, Bruce Ashfield wrote:
> 
> For now, I'd suggest waiting a couple more days, since I may have it
> solved shortly.

If these strange problems continue, perhaps the Rancher guys 
would be interested in helping out? They are both skilled and friendly, 
and given Rancher's focused agenda on being _the_ Kubernetes provider 
for edge with k3s, I guess having k3s in meta-virtualization would be 
attractive to them.

BR,

/Joakim


^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-12-18 14:23                                                                                     ` Joakim Roubert
@ 2020-12-22 16:15                                                                                       ` Bruce Ashfield
  2021-01-04  7:12                                                                                         ` Joakim Roubert
  0 siblings, 1 reply; 73+ messages in thread
From: Bruce Ashfield @ 2020-12-22 16:15 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Fri, Dec 18, 2020 at 9:24 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-12-15 19:58, Bruce Ashfield wrote:
> >
> > For now, I'd suggest waiting a couple more days, since I may have it
> > solved shortly.
>
> If there continues to be strange problems, perhaps the Rancher guys
> would be interested in helping out? They are both skilled and friendly,
> and given Rancher's focused agenda on being _the_ Kubernetes provider
> for edge with k3s, I guess having k3s in meta-virtualization would be
> attractive to them.

I have reached out to them, but had to put my debugging of this down for
a few days as I prepared 5.10 as the new reference kernel for oe-core.

I'm almost through that, and will cycle back to this shortly.

The bottom line is that only the Rancher-built k3s binary works with
our constructed rootfs (in my tests).

If I use our built binary, or one that I build outside of Yocto
(natively, on my build server), we end up in crash loops on all the
core containers. The binaries are built from the identical hash (for
Yocto) and using the official build container images / process on my
workstation .. same result. So that rules out something in the kernel
(and its config), the go version or something in our rootfs being
missing. But as soon as you fetch the Rancher binary and restart the
services .. the containers come up just fine.

And that unfortunately means that it is difficult to get help from the
project, since they don't have the right understanding of the runtime
environment (but I'm trying!).

I'm diving deeper to try and sort it out.

Bruce

>
> BR,
>
> /Joakim
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2020-12-22 16:15                                                                                       ` Bruce Ashfield
@ 2021-01-04  7:12                                                                                         ` Joakim Roubert
  2021-01-04 13:40                                                                                           ` Bruce Ashfield
       [not found]                                                                                           ` <16570B29E8680DE8.14857@lists.yoctoproject.org>
  0 siblings, 2 replies; 73+ messages in thread
From: Joakim Roubert @ 2021-01-04  7:12 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2020-12-22 17:15, Bruce Ashfield wrote:
> So that rules out something in the kernel (and its config), the go 
> version or something in our rootfs being missing.

That surely narrows things down.

> But as soon as you fetch the rancher binary, restart the services ..
> the containers come up just fine.

Is this the same thing that happened for the k3s v1.18.9 build too?

> I'm diving deeper to try and sort it out.

👍

BR,

/Joakim

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
  2021-01-04  7:12                                                                                         ` Joakim Roubert
@ 2021-01-04 13:40                                                                                           ` Bruce Ashfield
       [not found]                                                                                           ` <16570B29E8680DE8.14857@lists.yoctoproject.org>
  1 sibling, 0 replies; 73+ messages in thread
From: Bruce Ashfield @ 2021-01-04 13:40 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Mon, Jan 4, 2021 at 2:12 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-12-22 17:15, Bruce Ashfield wrote:
> > So that rules out something in the kernel (and its config), the go
> > version or something in our rootfs being missing.
>
> That surely narrows things down.
>
> > But as soon as you fetch the rancher binary, restart the services ..
> > the containers come up just fine.
>
> Is this the same thing that happened for the k3s v1.18.9 build too?
>

Yep. It is consistent with all versions that I've built, including the
pre-release master branch as well. I was hoping for some "uprev magic"
to fix it, but that wasn't the case :)

I'm back from the holidays now, and will be back into this shortly.

Bruce

> > I'm diving deeper to try and sort it out.
>
>
>
> BR,
>
> /Joakim



-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3
       [not found]                                                                                           ` <16570B29E8680DE8.14857@lists.yoctoproject.org>
@ 2021-01-05 13:58                                                                                             ` Bruce Ashfield
  0 siblings, 0 replies; 73+ messages in thread
From: Bruce Ashfield @ 2021-01-05 13:58 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: Joakim Roubert, meta-virtualization

On Mon, Jan 4, 2021 at 8:41 AM Bruce Ashfield via
lists.yoctoproject.org
<bruce.ashfield=gmail.com@lists.yoctoproject.org> wrote:
>
> On Mon, Jan 4, 2021 at 2:12 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
> >
> > On 2020-12-22 17:15, Bruce Ashfield wrote:
> > > So that rules out something in the kernel (and its config), the go
> > > version or something in our rootfs being missing.
> >
> > That surely narrows things down.
> >
> > > But as soon as you fetch the rancher binary, restart the services ..
> > > the containers come up just fine.
> >
> > Is this the same thing that happened for the k3s v1.18.9 build too?
> >
>
> Yep. It is consistent with all versions that I've built, including the
> pre-release master branch as well. I was hoping for some "uprev magic"
> to fix it, but that wasn't the case :)
>
> I'm back from the holidays now, and will be back into this shortly.
>

I do have one more thing I'd like to try (I didn't get much debugging
done yesterday): do you happen to have a k3s binary that works in your
environment?

I'd like to toss it into my image and see if it behaves differently
from my builds and the rancher builds of k3s.

Bruce

> Bruce
>
> > > I'm diving deeper to try and sort it out.
> >
> >
> >
> > BR,
> >
> > /Joakim
>
>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await
> thee at its end
> - "Use the force Harry" - Gandalf, Star Trek II
>
> 
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2020-11-17 14:13                                                         ` Joakim Roubert
@ 2021-03-13 19:30                                                           ` Bruce Ashfield
  2021-03-14  4:32                                                             ` Yocto
  2021-03-15  9:46                                                             ` Joakim Roubert
  0 siblings, 2 replies; 73+ messages in thread
From: Bruce Ashfield @ 2021-03-13 19:30 UTC (permalink / raw)
  To: Joakim Roubert; +Cc: meta-virtualization

On Tue, Nov 17, 2020 at 9:14 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>
> On 2020-11-12 08:04, Lance Yang wrote:
> >
> > I started k3s server without agent and confirmed the node is ready. Then
> > I copied the node_token and then ran k3s agent with that token and
> > specified the server url.
>
> Did you do this on the same device?
> I always run the k3s master on one device and the k3s agents on other
> machines (and thought that was how it is supposed to work).
>
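(For readers following along: the server/agent bring-up Lance describes above looks roughly like the sketch below. The flag names and the node-token path are the k3s defaults for the versions discussed in this thread; `<server-ip>` and `<node-token>` are placeholders, not values from this thread.)

```shell
# On the server machine: start k3s in server mode without a local agent
# (the --disable-agent flag was available in the k3s versions discussed here).
k3s server --disable-agent &

# Once the node reports Ready, the join token is written to the default location:
cat /var/lib/rancher/k3s/server/node-token

# On each agent machine: join the cluster using that token and the server URL.
k3s agent --server https://<server-ip>:6443 --token <node-token>
```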

FYI: after literally 5 months of on-and-off (mostly off) debugging and
integration, I have finally gotten to the bottom of this, and can now
bring up my all-in-one node out of the box!

I can now pull k3s into lots of little bits and am confident that I
can support it, given my recent findings and debug.

I'm cleaning up changes, and will push a fully updated k3s to
meta-virt master in the coming week.

Cheers,

Bruce


> BR,
>
> /Joakim
>
>
> 
>


-- 
- Thou shalt not follow the NULL pointer, for chaos and madness await
thee at its end
- "Use the force Harry" - Gandalf, Star Trek II

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2021-03-13 19:30                                                           ` Bruce Ashfield
@ 2021-03-14  4:32                                                             ` Yocto
  2021-03-15  9:46                                                             ` Joakim Roubert
  1 sibling, 0 replies; 73+ messages in thread
From: Yocto @ 2021-03-14  4:32 UTC (permalink / raw)
  To: meta-virtualization

[-- Attachment #1: Type: text/plain, Size: 1272 bytes --]


On 3/14/21 2:30 AM, Bruce Ashfield wrote:
> On Tue, Nov 17, 2020 at 9:14 AM Joakim Roubert <joakim.roubert@axis.com> wrote:
>> On 2020-11-12 08:04, Lance Yang wrote:
>>> I started k3s server without agent and confirmed the node is ready. Then
>>> I copied the node_token and then ran k3s agent with that token and
>>> specified the server url.
>> Did you do this on the same device?
>> I always run the k3s master on one device and the k3s agents on other
>> machines (and thought that was how it is supposed to work).
>>
> FYI: after literally 5 months of on-and-off (mostly off) debugging and
> integration, I have finally gotten to the bottom of this, and can now
> bring up my all-in-one node out of the box!
>
Bravo! kube-edge next, or StarlingX simplex AIO -- meta-stx,

which combines k8s and OpenStack into a single platform,

which I thought would be the ultimate goal for OverC :)

Just thinking out loud...



> I can now pull k3s into lots of little bits and am confident that I
> can support it, given my recent findings and debug.
>
> I'm cleaning up changes, and will push a fully updated k3s to
> meta-virt master in the coming week.
>
> Cheers,
>
> Bruce
>
>
>> BR,
>>
>> /Joakim
>>
>>
>>
>>
>
>
> 
>

[-- Attachment #2: Type: text/html, Size: 2538 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [meta-virtualization][PATCH v5] Adding k3s recipe
  2021-03-13 19:30                                                           ` Bruce Ashfield
  2021-03-14  4:32                                                             ` Yocto
@ 2021-03-15  9:46                                                             ` Joakim Roubert
  1 sibling, 0 replies; 73+ messages in thread
From: Joakim Roubert @ 2021-03-15  9:46 UTC (permalink / raw)
  To: Bruce Ashfield; +Cc: meta-virtualization

On 2021-03-13 20:30, Bruce Ashfield wrote:
> 
> FYI: after literally 5 months of on-and-off (mostly off) debugging and
> integration, I have finally gotten to the bottom of this, and can now
> bring up my all-in-one node out of the box!
> 
> I can now pull k3s into lots of little bits and am confident that I
> can support it, given my recent findings and debug.

This is fantastic news, Bruce!

> I'm cleaning up changes, and will push a fully updated k3s to
> meta-virt master in the coming week.

This officially makes you the king.

BR,

/Joakim

^ permalink raw reply	[flat|nested] 73+ messages in thread

end of thread, other threads:[~2021-03-15  9:47 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20200821205529.29901-1-erik.jansson@axis.com>
2020-09-21  8:38 ` [meta-virtualization][PATCH] Adding k3s recipe Joakim Roubert
2020-09-21 11:11   ` Bruce Ashfield
2020-09-21 13:15     ` Joakim Roubert
2020-09-24 14:02       ` Bruce Ashfield
2020-09-24 14:46         ` Joakim Roubert
2020-09-24 15:41           ` Bruce Ashfield
2020-09-25  6:20             ` Joakim Roubert
2020-09-25 13:12               ` Bruce Ashfield
2020-09-25 13:50                 ` Joakim Roubert
     [not found]                 ` <16380B0CA000AB98.28124@lists.yoctoproject.org>
2020-09-28 13:48                   ` Joakim Roubert
2020-09-29 19:58                     ` Bruce Ashfield
2020-09-30  8:12                       ` Joakim Roubert
     [not found]                       ` <1639818C3E50A226.8589@lists.yoctoproject.org>
2020-09-30  8:14                         ` Joakim Roubert
2020-10-01 10:32                         ` Joakim Roubert
     [not found]                         ` <1639D7B9311FC65C.18704@lists.yoctoproject.org>
2020-10-01 10:32                           ` Joakim Roubert
2020-10-14 16:38                             ` Bruce Ashfield
2020-10-15 11:40                               ` Joakim Roubert
2020-10-15 11:47                               ` [meta-virtualization][PATCH v4] " Joakim Roubert
2020-10-15 15:02                                 ` Bruce Ashfield
2020-10-20 11:14                                   ` [meta-virtualization][PATCH v5] " Joakim Roubert
2020-10-21  3:10                                     ` Bruce Ashfield
2020-10-21  6:00                                       ` Joakim Roubert
2020-10-26 15:46                                         ` Bruce Ashfield
2020-10-28  8:32                                           ` Joakim Roubert
2020-11-06 21:20                                             ` Bruce Ashfield
2020-11-09  7:48                                               ` Joakim Roubert
2020-11-09  9:26                                                 ` Lance.Yang
2020-11-09 13:45                                                   ` Bruce Ashfield
2020-11-10  8:45                                                     ` Lance Yang
2020-11-09 13:44                                                 ` Bruce Ashfield
2020-11-10  6:43                                           ` Lance Yang
2020-11-10 12:46                                             ` Bruce Ashfield
     [not found]                                             ` <16462648E2B320A8.24110@lists.yoctoproject.org>
2020-11-10 13:17                                               ` Bruce Ashfield
2020-11-12  7:30                                                 ` Lance Yang
2020-11-12 13:38                                                   ` Bruce Ashfield
2020-11-12 14:26                                                     ` [meta-virtualization][PATCH] k3s: Update README.md Joakim Roubert
2020-11-17 12:39                                                     ` [meta-virtualization][PATCH] k3s: Bump to v1.19.3+k3s3 Joakim Roubert
2020-11-17 13:27                                                       ` Bruce Ashfield
2020-11-17 13:31                                                         ` Joakim Roubert
2020-11-17 13:40                                                           ` Bruce Ashfield
2020-11-17 13:50                                                             ` Joakim Roubert
2020-11-17 14:15                                                               ` Bruce Ashfield
     [not found]                                                               ` <16485135E3A12798.28066@lists.yoctoproject.org>
2020-11-17 14:19                                                                 ` Bruce Ashfield
2020-11-17 14:27                                                                   ` Joakim Roubert
2020-11-17 14:41                                                                     ` Bruce Ashfield
     [not found]                                                                     ` <1648529A6FD37D30.5807@lists.yoctoproject.org>
2020-11-17 19:39                                                                       ` Bruce Ashfield
2020-11-18 18:27                                                                         ` Joakim Roubert
2020-11-18 20:38                                                                           ` Bruce Ashfield
2020-12-11  6:31                                                                             ` Lance Yang
2020-12-11 13:43                                                                               ` Bruce Ashfield
2020-12-15  9:56                                                                                 ` Lance Yang
2020-12-15 18:58                                                                                   ` Bruce Ashfield
2020-12-18 14:23                                                                                     ` Joakim Roubert
2020-12-22 16:15                                                                                       ` Bruce Ashfield
2021-01-04  7:12                                                                                         ` Joakim Roubert
2021-01-04 13:40                                                                                           ` Bruce Ashfield
     [not found]                                                                                           ` <16570B29E8680DE8.14857@lists.yoctoproject.org>
2021-01-05 13:58                                                                                             ` Bruce Ashfield
     [not found]                                                     ` <16484BFA14ED0B17.5807@lists.yoctoproject.org>
2020-11-17 13:05                                                       ` Joakim Roubert
2020-11-12 13:43                                                   ` [meta-virtualization][PATCH v5] Adding k3s recipe Joakim Roubert
2020-11-13  5:48                                                     ` Lance Yang
2020-11-13  6:20                                                       ` Bruce Ashfield
2020-11-12 13:40                                                 ` Joakim Roubert
     [not found]                                               ` <164627F27D18DB55.10479@lists.yoctoproject.org>
2020-11-10 13:34                                                 ` Bruce Ashfield
2020-11-11 10:06                                                   ` Lance Yang
2020-11-11 13:40                                                     ` Bruce Ashfield
2020-11-12  7:04                                                       ` Lance Yang
2020-11-12 13:40                                                         ` Bruce Ashfield
2020-11-12 14:07                                                           ` Lance Yang
2020-11-17 14:13                                                         ` Joakim Roubert
2021-03-13 19:30                                                           ` Bruce Ashfield
2021-03-14  4:32                                                             ` Yocto
2021-03-15  9:46                                                             ` Joakim Roubert
2020-10-13 12:22                     ` [meta-virtualization][PATCH] " Bruce Ashfield
