* [PATCH 1/4] hashserv: Add support for equivalent hash reporting
@ 2019-12-04 11:52 Richard Purdie
  2019-12-04 11:53 ` [PATCH 2/4] runqueue/siggen: Allow handling of equivalent hashes Richard Purdie
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Richard Purdie @ 2019-12-04 11:52 UTC (permalink / raw)
  To: bitbake-devel

The reason for this should be recorded in the commit logs. Imagine
you have a target recipe (e.g. meta-extsdk-toolchain) which depends on
gdb-cross. sstate in OE-Core allows gdb-cross to have the same hash
regardless of whether it's built on x86 or arm. The outhash will be
different.

We need hashequiv to be able to adapt to the presence of sstate artefacts
for meta-extsdk-toolchain and allow the hashes to re-intersect, rather than
trying to force a rebuild of meta-extsdk-toolchain. By this point in the build,
it would have already been installed from sstate so the build needs to adapt.

Equivalent hashes should be reported to the server as a taskhash that
needs to map to a specific unihash. This patch adds API to the hashserv
client/server to allow this.
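
As a rough illustration (not part of the patch), the new request is just another message with a 'report-equiv' key; a hypothetical helper showing its shape, with illustrative values:

```python
# Hypothetical helper mirroring the shape of the 'report-equiv' message
# that report_unihash_equiv() sends; names and values are illustrative.
def build_report_equiv(taskhash, method, unihash, extra=None):
    m = dict(extra or {})  # copy so the caller's dict is not mutated
    m['taskhash'] = taskhash
    m['method'] = method
    m['unihash'] = unihash
    return {'report-equiv': m}

msg = build_report_equiv('aa11', 'example.method', 'bb22')
# msg == {'report-equiv': {'taskhash': 'aa11', 'method': 'example.method',
#                          'unihash': 'bb22'}}
```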

[Thanks to Joshua Watt for help with this patch]

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
---
 lib/hashserv/client.py |  8 ++++++++
 lib/hashserv/server.py | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/lib/hashserv/client.py b/lib/hashserv/client.py
index f65956617b..ae0cce9df4 100644
--- a/lib/hashserv/client.py
+++ b/lib/hashserv/client.py
@@ -148,6 +148,14 @@ class Client(object):
         m['unihash'] = unihash
         return self.send_message({'report': m})
 
+    def report_unihash_equiv(self, taskhash, method, unihash, extra={}):
+        self._set_mode(self.MODE_NORMAL)
+        m = extra.copy()
+        m['taskhash'] = taskhash
+        m['method'] = method
+        m['unihash'] = unihash
+        return self.send_message({'report-equiv': m})
+
     def get_stats(self):
         self._set_mode(self.MODE_NORMAL)
         return self.send_message({'get-stats': None})
diff --git a/lib/hashserv/server.py b/lib/hashserv/server.py
index 0aff77688e..cc7e48233b 100644
--- a/lib/hashserv/server.py
+++ b/lib/hashserv/server.py
@@ -143,6 +143,7 @@ class ServerClient(object):
             handlers = {
                 'get': self.handle_get,
                 'report': self.handle_report,
+                'report-equiv': self.handle_equivreport,
                 'get-stream': self.handle_get_stream,
                 'get-stats': self.handle_get_stats,
                 'reset-stats': self.handle_reset_stats,
@@ -303,6 +304,41 @@ class ServerClient(object):
 
         self.write_message(d)
 
+    async def handle_equivreport(self, data):
+        with closing(self.db.cursor()) as cursor:
+            insert_data = {
+                'method': data['method'],
+                'outhash': "",
+                'taskhash': data['taskhash'],
+                'unihash': data['unihash'],
+                'created': datetime.now()
+            }
+
+            for k in ('owner', 'PN', 'PV', 'PR', 'task', 'outhash_siginfo'):
+                if k in data:
+                    insert_data[k] = data[k]
+
+            cursor.execute('''INSERT OR IGNORE INTO tasks_v2 (%s) VALUES (%s)''' % (
+                ', '.join(sorted(insert_data.keys())),
+                ', '.join(':' + k for k in sorted(insert_data.keys()))),
+                insert_data)
+
+            self.db.commit()
+
+            # Fetch the unihash that will be reported for the taskhash. If the
+            # unihash matches, it means this row was inserted (or the mapping
+            # was already valid)
+            row = self.query_equivalent(data['method'], data['taskhash'])
+
+            if row['unihash'] == data['unihash']:
+                logger.info('Adding taskhash equivalence for %s with unihash %s',
+                                data['taskhash'], row['unihash'])
+
+            d = {k: row[k] for k in ('taskhash', 'method', 'unihash')}
+
+        self.write_message(d)
+
+
     async def handle_get_stats(self, request):
         d = {
             'requests': self.request_stats.todict(),
-- 
2.20.1
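For readers unfamiliar with the server side, the core of handle_equivreport() above can be sketched against a simplified, assumed schema: INSERT OR IGNORE means an already-established taskhash mapping wins, and the follow-up query tells the client which unihash was actually recorded:

```python
# Minimal sketch of the handle_equivreport() logic using an assumed,
# simplified schema (the real tasks_v2 table has more columns).
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE tasks_v2 (method TEXT, taskhash TEXT, unihash TEXT, '
           'UNIQUE(method, taskhash))')

def report_equiv(db, method, taskhash, unihash):
    # INSERT OR IGNORE: an existing taskhash->unihash mapping is kept
    db.execute('INSERT OR IGNORE INTO tasks_v2 (method, taskhash, unihash) '
               'VALUES (?, ?, ?)', (method, taskhash, unihash))
    db.commit()
    # Fetch the unihash that will be reported for this taskhash
    row = db.execute('SELECT unihash FROM tasks_v2 WHERE method=? AND taskhash=?',
                     (method, taskhash)).fetchone()
    return row[0]

# The first report wins; a conflicting later report is ignored
assert report_equiv(db, 'm', 'task1', 'uniA') == 'uniA'
assert report_equiv(db, 'm', 'task1', 'uniB') == 'uniA'
```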




* [PATCH 2/4] runqueue/siggen: Allow handling of equivalent hashes
  2019-12-04 11:52 [PATCH 1/4] hashserv: Add support for equivalent hash reporting Richard Purdie
@ 2019-12-04 11:53 ` Richard Purdie
  2019-12-04 11:53 ` [PATCH 3/4] runqueue: Add extra debugging when locked sigs mismatches occur Richard Purdie
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Richard Purdie @ 2019-12-04 11:53 UTC (permalink / raw)
  To: bitbake-devel

Based on the hashserv's new ability to accept hash mappings, update runqueue
to use this through a helper function in siggen.

This addresses problems with meta-extsdk-toolchain and its dependency on
gdb-cross which caused errors when building eSDK. See the previous commit
for more details.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
---
 lib/bb/runqueue.py | 31 +++++++++++++++++++------------
 lib/bb/siggen.py   | 26 ++++++++++++++++++++++++++
 2 files changed, 45 insertions(+), 12 deletions(-)

diff --git a/lib/bb/runqueue.py b/lib/bb/runqueue.py
index bd7f03f981..a869ba527a 100644
--- a/lib/bb/runqueue.py
+++ b/lib/bb/runqueue.py
@@ -2283,12 +2283,26 @@ class RunQueueExecute:
                         for dep in self.rqdata.runtaskentries[tid].depends:
                             procdep.append(dep)
                         orighash = self.rqdata.runtaskentries[tid].hash
-                        self.rqdata.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, procdep, self.rqdata.dataCaches[mc_from_tid(tid)])
+                        newhash = bb.parse.siggen.get_taskhash(tid, procdep, self.rqdata.dataCaches[mc_from_tid(tid)])
                         origuni = self.rqdata.runtaskentries[tid].unihash
-                        self.rqdata.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(tid)
-                        logger.debug(1, "Task %s hash changes: %s->%s %s->%s" % (tid, orighash, self.rqdata.runtaskentries[tid].hash, origuni, self.rqdata.runtaskentries[tid].unihash))
+                        newuni = bb.parse.siggen.get_unihash(tid)
+                        # FIXME, need to check it can come from sstate at all for determinism?
+                        remapped = False
+                        if newuni == origuni:
+                            # Nothing to do, we match, skip code below
+                            remapped = True
+                        elif tid in self.scenequeue_covered or tid in self.sq_live:
+                        # Already ran this setscene task or it is running. Report the new taskhash
+                            remapped = bb.parse.siggen.report_unihash_equiv(tid, newhash, origuni, newuni, self.rqdata.dataCaches)
+                            logger.info("Already covered setscene for %s so ignoring rehash (remap)" % (tid))
+
+                        if not remapped:
+                            logger.debug(1, "Task %s hash changes: %s->%s %s->%s" % (tid, orighash, newhash, origuni, newuni))
+                            self.rqdata.runtaskentries[tid].hash = newhash
+                            self.rqdata.runtaskentries[tid].unihash = newuni
+                            changed.add(tid)
+
                         next |= self.rqdata.runtaskentries[tid].revdeps
-                        changed.add(tid)
                         total.remove(tid)
                         next.intersection_update(total)
 
@@ -2307,18 +2321,11 @@ class RunQueueExecute:
                 self.pending_migrations.add(tid)
 
         for tid in self.pending_migrations.copy():
-            if tid in self.runq_running:
+            if tid in self.runq_running or tid in self.sq_live:
                 # Too late, task already running, not much we can do now
                 self.pending_migrations.remove(tid)
                 continue
 
-            if tid in self.scenequeue_covered or tid in self.sq_live:
-                # Already ran this setscene task or it running
-                # Potentially risky, should we report this hash as a match?
-                logger.info("Already covered setscene for %s so ignoring rehash" % (tid))
-                self.pending_migrations.remove(tid)
-                continue
-
             valid = True
             # Check no tasks this covers are running
             for dep in self.sqdata.sq_covered_tasks[tid]:
diff --git a/lib/bb/siggen.py b/lib/bb/siggen.py
index e19812b17c..edf10105f9 100644
--- a/lib/bb/siggen.py
+++ b/lib/bb/siggen.py
@@ -525,6 +525,32 @@ class SignatureGeneratorUniHashMixIn(object):
                 except OSError:
                     pass
 
+    def report_unihash_equiv(self, tid, taskhash, wanted_unihash, current_unihash, datacaches):
+        try:
+            extra_data = {}
+            data = self.client().report_unihash_equiv(taskhash, self.method, wanted_unihash, extra_data)
+            bb.note('Reported task %s as unihash %s to %s (%s)' % (tid, wanted_unihash, self.server, str(data)))
+
+            if data is None:
+                bb.warn("Server unable to handle unihash report")
+                return False
+
+            finalunihash = data['unihash']
+
+            if finalunihash == current_unihash:
+                bb.note('Task %s unihash %s unchanged by server' % (tid, finalunihash))
+            elif finalunihash == wanted_unihash:
+                bb.note('Task %s unihash changed %s -> %s as wanted' % (tid, current_unihash, finalunihash))
+                self.set_unihash(tid, finalunihash)
+                return True
+            else:
+                # TODO: What to do here?
+                bb.note('Task %s unihash reported as unwanted hash %s' % (tid, finalunihash))
+
+        except hashserv.client.HashConnectionError as e:
+            bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
+
+        return False
 
 #
 # Dummy class used for bitbake-selftest
-- 
2.20.1
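The three outcomes the new siggen helper distinguishes (the server keeps the current unihash, accepts the wanted one, or returns something else) can be sketched as follows (illustrative names, not bitbake API):

```python
# Sketch of the decision report_unihash_equiv() makes from the unihash
# the server sends back. Only the 'remapped' case updates local state
# (set_unihash) and returns True in the real code.
def resolve_equiv(server_unihash, wanted_unihash, current_unihash):
    if server_unihash == current_unihash:
        return 'unchanged'   # server kept the existing mapping
    if server_unihash == wanted_unihash:
        return 'remapped'    # server accepted the equivalence
    return 'conflict'        # server returned a third, unwanted hash

assert resolve_equiv('u1', 'u2', 'u1') == 'unchanged'
assert resolve_equiv('u2', 'u2', 'u1') == 'remapped'
assert resolve_equiv('u3', 'u2', 'u1') == 'conflict'
```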




* [PATCH 3/4] runqueue: Add extra debugging when locked sigs mismatches occur
  2019-12-04 11:52 [PATCH 1/4] hashserv: Add support for equivalent hash reporting Richard Purdie
  2019-12-04 11:53 ` [PATCH 2/4] runqueue/siggen: Allow handling of equivalent hashes Richard Purdie
@ 2019-12-04 11:53 ` Richard Purdie
  2019-12-04 11:53 ` [PATCH 4/4] knotty/uihelper: Switch from pids to tids for Task event management Richard Purdie
  2019-12-04 18:20 ` [PATCH 1/4] hashserv: Add support for equivalent hash reporting akuster808
  3 siblings, 0 replies; 6+ messages in thread
From: Richard Purdie @ 2019-12-04 11:53 UTC (permalink / raw)
  To: bitbake-devel

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
---
 lib/bb/runqueue.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/bb/runqueue.py b/lib/bb/runqueue.py
index a869ba527a..246a9cdb64 100644
--- a/lib/bb/runqueue.py
+++ b/lib/bb/runqueue.py
@@ -2524,6 +2524,8 @@ class RunQueueExecute:
                 msg = 'Task %s.%s attempted to execute unexpectedly and should have been setscened' % (pn, taskname)
             else:
                 msg = 'Task %s.%s attempted to execute unexpectedly' % (pn, taskname)
+            for t in self.scenequeue_notcovered:
+                msg = msg + "\nTask %s, unihash %s, taskhash %s" % (t, self.rqdata.runtaskentries[t].unihash, self.rqdata.runtaskentries[t].hash)
             logger.error(msg + '\nThis is usually due to missing setscene tasks. Those missing in this build were: %s' % pprint.pformat(self.scenequeue_notcovered))
             return True
         return False
-- 
2.20.1




* [PATCH 4/4] knotty/uihelper: Switch from pids to tids for Task event management
  2019-12-04 11:52 [PATCH 1/4] hashserv: Add support for equivalent hash reporting Richard Purdie
  2019-12-04 11:53 ` [PATCH 2/4] runqueue/siggen: Allow handling of equivalent hashes Richard Purdie
  2019-12-04 11:53 ` [PATCH 3/4] runqueue: Add extra debugging when locked sigs mismatches occur Richard Purdie
@ 2019-12-04 11:53 ` Richard Purdie
  2019-12-04 18:20 ` [PATCH 1/4] hashserv: Add support for equivalent hash reporting akuster808
  3 siblings, 0 replies; 6+ messages in thread
From: Richard Purdie @ 2019-12-04 11:53 UTC (permalink / raw)
  To: bitbake-devel

We've seen cases where a task can execute with a given pid, complete
and a new task can start using the same pid before the UI handler has
had time to adapt.

This means using pids to match up events on the UI side is a bad
idea. Change the code to use task ids instead. There is a small
amount of fuzzy matching for the progress information since there
is no task information there and we don't want the overhead of a task
ID in every event; however, since pid reuse is unlikely, we can live
with a progress bar not quite working properly in a corner case like
this.

[YOCTO #13667]

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
---
 lib/bb/build.py       | 25 +++++++++++++------------
 lib/bb/ui/knotty.py   |  8 ++++----
 lib/bb/ui/uihelper.py | 39 ++++++++++++++++++++++++---------------
 3 files changed, 41 insertions(+), 31 deletions(-)

diff --git a/lib/bb/build.py b/lib/bb/build.py
index 30a2ba236f..3d9cc10c8c 100644
--- a/lib/bb/build.py
+++ b/lib/bb/build.py
@@ -57,8 +57,9 @@ builtins['os'] = os
 class TaskBase(event.Event):
     """Base class for task events"""
 
-    def __init__(self, t, logfile, d):
+    def __init__(self, t, fn, logfile, d):
         self._task = t
+        self._fn = fn
         self._package = d.getVar("PF")
         self._mc = d.getVar("BB_CURRENT_MC")
         self.taskfile = d.getVar("FILE")
@@ -81,8 +82,8 @@ class TaskBase(event.Event):
 
 class TaskStarted(TaskBase):
     """Task execution started"""
-    def __init__(self, t, logfile, taskflags, d):
-        super(TaskStarted, self).__init__(t, logfile, d)
+    def __init__(self, t, fn, logfile, taskflags, d):
+        super(TaskStarted, self).__init__(t, fn, logfile, d)
         self.taskflags = taskflags
 
 class TaskSucceeded(TaskBase):
@@ -91,9 +92,9 @@ class TaskSucceeded(TaskBase):
 class TaskFailed(TaskBase):
     """Task execution failed"""
 
-    def __init__(self, task, logfile, metadata, errprinted = False):
+    def __init__(self, task, fn, logfile, metadata, errprinted = False):
         self.errprinted = errprinted
-        super(TaskFailed, self).__init__(task, logfile, metadata)
+        super(TaskFailed, self).__init__(task, fn, logfile, metadata)
 
 class TaskFailedSilent(TaskBase):
     """Task execution failed (silently)"""
@@ -103,8 +104,8 @@ class TaskFailedSilent(TaskBase):
 
 class TaskInvalid(TaskBase):
 
-    def __init__(self, task, metadata):
-        super(TaskInvalid, self).__init__(task, None, metadata)
+    def __init__(self, task, fn, metadata):
+        super(TaskInvalid, self).__init__(task, fn, None, metadata)
         self._message = "No such task '%s'" % task
 
 class TaskProgress(event.Event):
@@ -572,7 +573,7 @@ def _exec_task(fn, task, d, quieterr):
 
     try:
         try:
-            event.fire(TaskStarted(task, logfn, flags, localdata), localdata)
+            event.fire(TaskStarted(task, fn, logfn, flags, localdata), localdata)
         except (bb.BBHandledException, SystemExit):
             return 1
 
@@ -583,15 +584,15 @@ def _exec_task(fn, task, d, quieterr):
             for func in (postfuncs or '').split():
                 exec_func(func, localdata)
         except bb.BBHandledException:
-            event.fire(TaskFailed(task, logfn, localdata, True), localdata)
+            event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
             return 1
         except Exception as exc:
             if quieterr:
-                event.fire(TaskFailedSilent(task, logfn, localdata), localdata)
+                event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
             else:
                 errprinted = errchk.triggered
                 logger.error(str(exc))
-                event.fire(TaskFailed(task, logfn, localdata, errprinted), localdata)
+                event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
             return 1
     finally:
         sys.stdout.flush()
@@ -614,7 +615,7 @@ def _exec_task(fn, task, d, quieterr):
             logger.debug(2, "Zero size logfn %s, removing", logfn)
             bb.utils.remove(logfn)
             bb.utils.remove(loglink)
-    event.fire(TaskSucceeded(task, logfn, localdata), localdata)
+    event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)
 
     if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
         make_stamp(task, localdata)
diff --git a/lib/bb/ui/knotty.py b/lib/bb/ui/knotty.py
index 35736ade03..3bbebfe722 100644
--- a/lib/bb/ui/knotty.py
+++ b/lib/bb/ui/knotty.py
@@ -255,19 +255,19 @@ class TerminalFilter(object):
                 start_time = activetasks[t].get("starttime", None)
                 if not pbar or pbar.bouncing != (progress < 0):
                     if progress < 0:
-                        pbar = BBProgress("0: %s (pid %s) " % (activetasks[t]["title"], t), 100, widgets=[progressbar.BouncingSlider(), ''], extrapos=2, resize_handler=self.sigwinch_handle)
+                        pbar = BBProgress("0: %s (pid %s) " % (activetasks[t]["title"], activetasks[t]["pid"]), 100, widgets=[progressbar.BouncingSlider(), ''], extrapos=2, resize_handler=self.sigwinch_handle)
                         pbar.bouncing = True
                     else:
-                        pbar = BBProgress("0: %s (pid %s) " % (activetasks[t]["title"], t), 100, widgets=[progressbar.Percentage(), ' ', progressbar.Bar(), ''], extrapos=4, resize_handler=self.sigwinch_handle)
+                        pbar = BBProgress("0: %s (pid %s) " % (activetasks[t]["title"], activetasks[t]["pid"]), 100, widgets=[progressbar.Percentage(), ' ', progressbar.Bar(), ''], extrapos=4, resize_handler=self.sigwinch_handle)
                         pbar.bouncing = False
                     activetasks[t]["progressbar"] = pbar
                 tasks.append((pbar, progress, rate, start_time))
             else:
                 start_time = activetasks[t].get("starttime", None)
                 if start_time:
-                    tasks.append("%s - %s (pid %s)" % (activetasks[t]["title"], self.elapsed(currenttime - start_time), t))
+                    tasks.append("%s - %s (pid %s)" % (activetasks[t]["title"], self.elapsed(currenttime - start_time), activetasks[t]["pid"]))
                 else:
-                    tasks.append("%s (pid %s)" % (activetasks[t]["title"], t))
+                    tasks.append("%s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]))
 
         if self.main.shutdown:
             content = "Waiting for %s running tasks to finish:" % len(activetasks)
diff --git a/lib/bb/ui/uihelper.py b/lib/bb/ui/uihelper.py
index c8dd7df087..74e9e1958d 100644
--- a/lib/bb/ui/uihelper.py
+++ b/lib/bb/ui/uihelper.py
@@ -15,39 +15,48 @@ class BBUIHelper:
         # Running PIDs preserves the order tasks were executed in
         self.running_pids = []
         self.failed_tasks = []
+        self.pidmap = {}
         self.tasknumber_current = 0
         self.tasknumber_total = 0
 
     def eventHandler(self, event):
+        # PIDs are a bad idea as they can be reused before we process all UI events.
+        # We maintain a 'fuzzy' match for TaskProgress since there is no other way to match
+        def removetid(pid, tid):
+            self.running_pids.remove(tid)
+            del self.running_tasks[tid]
+            if self.pidmap[pid] == tid:
+                del self.pidmap[pid]
+            self.needUpdate = True
+
         if isinstance(event, bb.build.TaskStarted):
+            tid = event._fn + ":" + event._task
             if event._mc != "default":
-                self.running_tasks[event.pid] = { 'title' : "mc:%s:%s %s" % (event._mc, event._package, event._task), 'starttime' : time.time() }
+                self.running_tasks[tid] = { 'title' : "mc:%s:%s %s" % (event._mc, event._package, event._task), 'starttime' : time.time(), 'pid' : event.pid }
             else:
-                self.running_tasks[event.pid] = { 'title' : "%s %s" % (event._package, event._task), 'starttime' : time.time() }
-            self.running_pids.append(event.pid)
+                self.running_tasks[tid] = { 'title' : "%s %s" % (event._package, event._task), 'starttime' : time.time(), 'pid' : event.pid }
+            self.running_pids.append(tid)
+            self.pidmap[event.pid] = tid
             self.needUpdate = True
         elif isinstance(event, bb.build.TaskSucceeded):
-            del self.running_tasks[event.pid]
-            self.running_pids.remove(event.pid)
-            self.needUpdate = True
+            tid = event._fn + ":" + event._task
+            removetid(event.pid, tid)
         elif isinstance(event, bb.build.TaskFailedSilent):
-            del self.running_tasks[event.pid]
-            self.running_pids.remove(event.pid)
+            tid = event._fn + ":" + event._task
+            removetid(event.pid, tid)
             # Don't add to the failed tasks list since this is e.g. a setscene task failure
-            self.needUpdate = True
         elif isinstance(event, bb.build.TaskFailed):
-            del self.running_tasks[event.pid]
-            self.running_pids.remove(event.pid)
+            tid = event._fn + ":" + event._task
+            removetid(event.pid, tid)
             self.failed_tasks.append( { 'title' : "%s %s" % (event._package, event._task)})
-            self.needUpdate = True
         elif isinstance(event, bb.runqueue.runQueueTaskStarted):
             self.tasknumber_current = event.stats.completed + event.stats.active + event.stats.failed + 1
             self.tasknumber_total = event.stats.total
             self.needUpdate = True
         elif isinstance(event, bb.build.TaskProgress):
-            if event.pid > 0:
-                self.running_tasks[event.pid]['progress'] = event.progress
-                self.running_tasks[event.pid]['rate'] = event.rate
+            if event.pid > 0 and event.pid in self.pidmap:
+                self.running_tasks[self.pidmap[event.pid]]['progress'] = event.progress
+                self.running_tasks[self.pidmap[event.pid]]['rate'] = event.rate
                 self.needUpdate = True
         else:
             return False
-- 
2.20.1
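The bookkeeping this patch introduces in uihelper can be sketched independently of bitbake (simplified, illustrative names): tasks are keyed by tid, with a pid-to-tid map providing the fuzzy match for progress events, so pid reuse no longer corrupts the task table:

```python
# Simplified sketch of the uihelper bookkeeping after this patch:
# tasks keyed by tid (fn + ":" + task), plus a pid->tid map used only
# for the fuzzy TaskProgress matching.
running_tasks = {}
pidmap = {}

def task_started(pid, fn, task):
    tid = fn + ":" + task
    running_tasks[tid] = {'pid': pid}
    pidmap[pid] = tid

def task_progress(pid, progress):
    # Progress events carry only a pid; unknown pids are simply dropped
    if pid > 0 and pid in pidmap:
        running_tasks[pidmap[pid]]['progress'] = progress

def task_done(pid, fn, task):
    tid = fn + ":" + task
    del running_tasks[tid]
    # Only drop the pid mapping if it still points at this task
    if pidmap.get(pid) == tid:
        del pidmap[pid]

task_started(100, 'a.bb', 'do_compile')
task_done(100, 'a.bb', 'do_compile')
task_started(100, 'b.bb', 'do_fetch')   # pid 100 reused by a new task
task_progress(100, 50)                  # lands on the newest task
assert running_tasks['b.bb:do_fetch']['progress'] == 50
```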




* Re: [PATCH 1/4] hashserv: Add support for equivalent hash reporting
  2019-12-04 11:52 [PATCH 1/4] hashserv: Add support for equivalent hash reporting Richard Purdie
                   ` (2 preceding siblings ...)
  2019-12-04 11:53 ` [PATCH 4/4] knotty/uihelper: Switch from pids to tids for Task event management Richard Purdie
@ 2019-12-04 18:20 ` akuster808
  2019-12-04 18:22   ` Richard Purdie
  3 siblings, 1 reply; 6+ messages in thread
From: akuster808 @ 2019-12-04 18:20 UTC (permalink / raw)
  To: Richard Purdie, bitbake-devel



On 12/4/19 3:52 AM, Richard Purdie wrote:
> The reason for this should be recorded in the commit logs. Imagine
> you have a target recipe (e.g. meta-extsdk-toolchain) which depends on
> gdb-cross. sstate in OE-Core allows gdb-cross to have the same hash
> regardless of whether it's built on x86 or arm. The outhash will be
> different.
>
> We need hashequiv to be able to adapt to the presence of sstate artefacts
> for meta-extsdk-toolchain and allow the hashes to re-intersect, rather than
> trying to force a rebuild of meta-extsdk-toolchain. By this point in the build,
> it would have already been installed from sstate so the build needs to adapt.
>
> Equivalent hashes should be reported to the server as a taskhash that
> needs to map to a specific unihash. This patch adds API to the hashserv
> client/server to allow this.
>
> [Thanks to Joshua Watt for help with this patch]

This sounds like 1.44 backport worthy?

- armin

* Re: [PATCH 1/4] hashserv: Add support for equivalent hash reporting
  2019-12-04 18:20 ` [PATCH 1/4] hashserv: Add support for equivalent hash reporting akuster808
@ 2019-12-04 18:22   ` Richard Purdie
  0 siblings, 0 replies; 6+ messages in thread
From: Richard Purdie @ 2019-12-04 18:22 UTC (permalink / raw)
  To: akuster808, bitbake-devel

On Wed, 2019-12-04 at 10:20 -0800, akuster808 wrote:
> 
> On 12/4/19 3:52 AM, Richard Purdie wrote:
> > The reason for this should be recorded in the commit logs. Imagine
> > you have a target recipe (e.g. meta-extsdk-toolchain) which depends
> > on
> > gdb-cross. sstate in OE-Core allows gdb-cross to have the same hash
> > regardless of whether its built on x86 or arm. The outhash will be
> > different.
> > 
> > We need hashequiv to be able to adapt to the prescence of sstate
> > artefacts
> > for meta-extsdk-toolchain and allow the hashes to re-intersect,
> > rather than
> > trying to force a rebuild of meta-extsdk-toolchain. By this point
> > in the build,
> > it would have already been installed from sstate so the build needs
> > to adapt.
> > 
> > Equivalent hashes should be reported to the server as a taskhash
> > that
> > needs to map to an specific unihash. This patch adds API to the
> > hashserv
> > client/server to allow this.
> > 
> > [Thanks to Joshua Watt for help with this patch]
> 
> This sounds like 1.44 backport worthy?

Yes. Needs a little settling time in master first though.

Cheers,

Richard



