Coccinelle archive on
From: Markus Elfring <>
To: Coccinelle <>
Subject: [Cocci] Data exchange through message queue interfaces by SmPL scripts
Date: Thu, 25 Apr 2019 20:12:34 +0200
Message-ID: <> (raw)
In-Reply-To: <>

[-- Attachment #1: Type: text/plain, Size: 1460 bytes --]

> Is there still a need to perform parallelisation for the mentioned software
> components by other approaches?

The multi-processing support of the Coccinelle software brings some
development challenges with it.
If data are to be shared between the spawned (background) processes,
an external system has to be selected for the desired storage service.
This led me to try sending such data through POSIX message queue interfaces
as well; the attached script for the semantic patch language demonstrates
this alternative data processing approach.
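For illustration only, here is a minimal sketch of the same exchange pattern (workers serialise matched statement pairs as JSON, a collector drains the queue and counts duplicates) using the standard library's multiprocessing.Queue instead of a POSIX message queue; the function and statement names are hypothetical.

```python
# Hypothetical sketch of the producer/consumer pattern from the attached
# script, using only the Python standard library.
import json
import multiprocessing as mp
from collections import Counter


def worker(queue, fun, pairs):
    """Serialise matched statement pairs as one JSON message on the queue."""
    records = [{"name": fun, "s1": s1, "s2": s2} for (s1, s2) in pairs]
    queue.put(json.dumps(records))


def collect(queue, expected):
    """Receive 'expected' messages and count duplicate statement pairs."""
    counts = Counter()
    for _ in range(expected):
        for record in json.loads(queue.get()):
            counts[(record["s1"], record["s2"], record["name"])] += 1
    return counts


if __name__ == "__main__":
    q = mp.Queue()
    pairs = [("dprintk(...);", "return -1;")] * 3
    p = mp.Process(target=worker, args=(q, "stv0297_readreg", pairs))
    p.start()
    p.join()
    print(collect(q, 1))
```

With posix_ipc the queue would additionally survive across unrelated processes, which is what the attached script relies on.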

elfring@Sonne:~/Projekte/Linux/next-patched> spatch ~/Projekte/Coccinelle/janitor/list_duplicate_statement_pairs_from_if_branches10.cocci drivers/media/dvb-frontends/stv0297.c
statement1|statement2|"function name"|"source file"|incidence
dprintk ( "%s: readreg error (reg == 0x%02x, ret == %i)\n" , __func__ , reg , ret ) ;|return - 1 ;|stv0297_readreg|drivers/media/dvb-frontends/stv0297.c|3
dprintk ( "%s: readreg error (reg == 0x%02x, ret == %i)\n" , __func__ , reg1 , ret ) ;|return - 1 ;|stv0297_readregs|drivers/media/dvb-frontends/stv0297.c|3
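The pipe-delimited report above can be read back programmatically, for example with Python's csv module (a sketch; the sample row below is abbreviated from the output above):

```python
# Sketch: parsing the pipe-delimited report emitted by the SmPL script.
import csv
import io

sample = ('statement1|statement2|"function name"|"source file"|incidence\n'
          'dprintk(...);|return -1;|stv0297_readreg|'
          'drivers/media/dvb-frontends/stv0297.c|3\n')


def parse_report(text):
    """Split each report line on the '|' delimiter; quoted fields are unquoted."""
    return list(csv.reader(io.StringIO(text), delimiter="|"))


rows = parse_report(sample)
```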

Such a simple test case works because it stays within the known default
system limits.
If more questionable source code combinations are to be analysed,
it can become necessary to increase the kernel parameter “msg_max” considerably.
What do you think about trying further fine-tuning in the affected areas?
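On Linux the relevant limits can be inspected under /proc/sys/fs/mqueue (see mq_overview(7)); raising them would then be done with e.g. `sysctl fs.mqueue.msg_max=…` as root. A hypothetical helper for reading the current values:

```python
# Hypothetical helper: read the POSIX message queue limits on Linux.
# Returns None where /proc/sys/fs/mqueue is unavailable (e.g. non-Linux).
from pathlib import Path


def mqueue_limit(name):
    """Return the value of /proc/sys/fs/mqueue/<name>, or None if absent."""
    path = Path("/proc/sys/fs/mqueue") / name
    try:
        return int(path.read_text())
    except (OSError, ValueError):
        return None


if __name__ == "__main__":
    for name in ("msg_max", "msgsize_max", "queues_max"):
        print(name, mqueue_limit(name))
```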


[-- Attachment #2: list_duplicate_statement_pairs_from_if_branches10.cocci --]
[-- Type: text/plain, Size: 3386 bytes --]

@initialize:python@
@@
import io, posix_ipc, json, sys
sys.stderr.write("Creation of a message queue\n"
                 "QUEUE_MESSAGES_MAX_DEFAULT: %d\n"
                 % (posix_ipc.QUEUE_MESSAGES_MAX_DEFAULT))
# See also:
# * man mq_overview
# *
# Passing None as the name lets posix_ipc choose a random queue name.
mq = posix_ipc.MessageQueue(None, posix_ipc.O_CREX)
sys.stderr.write("A message queue was created.\n")

def store_statements(fun, source, s1, s2):
    """Send data for the service."""
    records = []

    # Record layout reconstructed from the keys used by the finalize part.
    for place in source:
        records.append('{"name": %s, "file": %s, "line": %s,'
                       ' "column": %s, "s1": %s, "s2": %s}'
                       % (json.dumps(fun),
                          json.dumps(place.file),
                          json.dumps(place.line),
                          json.dumps(int(place.column) + 1),
                          json.dumps(s1),
                          json.dumps(s2)))

    result = "[\n"
    result += ",\n".join(records)
    result += "\n]"
    mq.send(result.encode("utf-8"), 0)

@searching exists@
identifier work;
statement s1, s2;
position pos;
type T;
@@
 T work(...)
 {
 ... when any
 if (...)
 {
 ... when any
 s1@pos
 s2
 ... when any
 }
 ... when any
 }

@script:python collection@
fun << searching.work;
s1 << searching.s1;
s2 << searching.s2;
place << searching.pos;
@@
store_statements(fun, place, s1, s2)

@finalize:python@
@@
if mq.current_messages > 0:
    mapping = {}

    def insert(x):
        """Add data to an internal table."""
        key = x["name"], x["file"], x["line"], x["column"]
        if key in mapping:
            sys.stderr.write("""A duplicate key was passed.
function: %s
file: %s
line: %s
column: %d
""" % key)
            raise RuntimeError
        mapping[key] = x["s1"], x["s2"]

    def data_import():
        """Drain the queue; a zero timeout raises BusyError when it is empty."""
        while True:
            try:
                for v in json.loads(mq.receive(0)[0].decode("utf-8")):
                    insert(v)
            except posix_ipc.BusyError:
                break

    data_import()

    from collections import Counter
    counts = Counter()

    for k, v in mapping.items():
        counts[(v[0], v[1], k[0], k[1])] += 1

    delimiter = "|"
    duplicates = {}

    for k, v in counts.items():
        if v > 1:
            duplicates[k] = v

    header = delimiter.join(["statement1",
                             "statement2",
                             '"function name"',
                             '"source file"',
                             "incidence"]) + "\n"

    if len(duplicates.keys()) > 0:
        sys.stdout.write(header)

        # k = (statement1, statement2, function name, source file)
        for k, v in duplicates.items():
            sys.stdout.write(delimiter.join([k[0], k[1], k[2], k[3],
                                             str(v)]) + "\n")
    else:
        sys.stderr.write("Duplicate statements were not determined from "
                         + str(len(mapping)) + " records.\n")
        sys.stdout.write(header)

        for k, v in counts.items():
            sys.stdout.write(delimiter.join([k[0], k[1], k[2], k[3],
                                             str(v)]) + "\n")
else:
    sys.stderr.write("No result for this analysis!\n")

mq.close()
mq.unlink()

[-- Attachment #3: Type: text/plain, Size: 136 bytes --]

Cocci mailing list

Thread overview: 11+ messages
2019-04-20 18:50 [Cocci] Checking import of code search results into a table by parallel SmPL data processing Markus Elfring
     [not found] ` <alpine.DEB.2.21.1904202112150.2499@hadrien>
2019-04-20 19:31   ` Markus Elfring
2019-04-23  9:48   ` Markus Elfring
2019-04-24  6:25 ` [Cocci] Rejecting parallel execution of SmPL scripts Markus Elfring
2019-04-25  8:06 ` [Cocci] Data exchange over network interfaces by " Markus Elfring
     [not found]   ` <alpine.DEB.2.21.1904251039000.2550@hadrien>
2019-04-25 10:32     ` Markus Elfring
2019-04-27 17:20       ` Markus Elfring
2019-04-27 17:24       ` Markus Elfring
2019-04-30  8:55         ` Markus Elfring
2019-06-01 11:13         ` Markus Elfring
2019-04-25 18:12 ` Markus Elfring [this message]
