cocci.inria.fr archive mirror
* [Cocci] Checking import of code search results into a table by parallel SmPL data processing
@ 2019-04-20 18:50 Markus Elfring
       [not found] ` <alpine.DEB.2.21.1904202112150.2499@hadrien>
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-20 18:50 UTC (permalink / raw)
  To: Coccinelle

Hello,

I have noticed another questionable software behaviour while applying
the semantic patch language.

elfring@Sonne:~/Projekte/Linux/next-patched> time spatch --timeout 34 -j 2 --chunksize 1 -D database_URL=postgresql+psycopg2:///parallel_DVB_duplicates --dir drivers/media/dvb-frontends --sp-file ~/Projekte/Coccinelle/janitor/list_duplicate_statement_pairs_from_if_branches4.cocci > ~/Projekte/Bau/Linux/scripts/Coccinelle/duplicates1/next/20190418/pair-DVB-results.txt 2> ~/Projekte/Bau/Linux/scripts/Coccinelle/duplicates1/next/20190418/pair-DVB-errors.txt

real	5m56,708s
user	11m4,775s
sys	0m0,688s


I know from my previous update suggestion “[media] Use common error handling code
in 19 functions” that possibilities for changes can be found there.
https://lkml.org/lkml/2018/3/9/823
https://lore.kernel.org/lkml/57ef3a56-2578-1d5f-1268-348b49b0c573@users.sourceforge.net/


But the generated log file contains only the message “No result for this analysis!”.
I wonder why the desired data were not stored in the corresponding database table
by such a SmPL script variant.

Is there still a need to parallelise the analysis of the mentioned software
components by other approaches?

Regards,
Markus
_______________________________________________
Cocci mailing list
Cocci@systeme.lip6.fr
https://systeme.lip6.fr/mailman/listinfo/cocci

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [Cocci] Checking import of code search results into a table by parallel SmPL data processing
       [not found] ` <alpine.DEB.2.21.1904202112150.2499@hadrien>
@ 2019-04-20 19:31   ` Markus Elfring
  2019-04-23  9:48   ` Markus Elfring
  1 sibling, 0 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-20 19:31 UTC (permalink / raw)
  To: Julia Lawall; +Cc: Coccinelle

>> elfring@Sonne:~/Projekte/Linux/next-patched> time spatch --timeout 34 -j 2 --chunksize 1 -D database_URL=postgresql+psycopg2:///parallel_DVB_duplicates --dir drivers/media/dvb-frontends --sp-file ~/Projekte/Coccinelle/janitor/list_duplicate_statement_pairs_from_if_branches4.cocci > ~/Projekte/Bau/Linux/scripts/Coccinelle/duplicates1/next/20190418/pair-DVB-results.txt 2> ~/Projekte/Bau/Linux/scripts/Coccinelle/duplicates1/next/20190418/pair-DVB-errors.txt
>
> Since you haven't included the semantic patch,

I intentionally omitted this implementation detail at the beginning
of another discussion.


> it seems that there is no way anyone can help you.

I imagine that clarifying this system behaviour will depend on the willingness
to check parallel SmPL data processing (together with a class library like
“SQLAlchemy 1.3.2”) once more.
I am curious how development interest will evolve in such software areas.

Regards,
Markus


* Re: [Cocci] Checking import of code search results into a table by parallel SmPL data processing
       [not found] ` <alpine.DEB.2.21.1904202112150.2499@hadrien>
  2019-04-20 19:31   ` Markus Elfring
@ 2019-04-23  9:48   ` Markus Elfring
  1 sibling, 0 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-23  9:48 UTC (permalink / raw)
  To: Julia Lawall; +Cc: Coccinelle

> Since you haven't included the semantic patch,

This information may become useful later.


> it seems that there is no way anyone can help you.

Other developers can also provide helpful advice.

Example:
Mike Bayer
Topic: Checking approaches around parallel data import for records
https://groups.google.com/d/msg/sqlalchemy/5-6O-Pwzh4A/5xSnxE_pDAAJ

See also:
https://docs.sqlalchemy.org/en/13/core/connections.html#engine-disposal


I am curious whether further extensions will evolve in the affected software areas.

Regards,
Markus


* [Cocci] Rejecting parallel execution of SmPL scripts
  2019-04-20 18:50 [Cocci] Checking import of code search results into a table by parallel SmPL data processing Markus Elfring
       [not found] ` <alpine.DEB.2.21.1904202112150.2499@hadrien>
@ 2019-04-24  6:25 ` Markus Elfring
  2019-04-25  8:06 ` [Cocci] Data exchange over network interfaces by " Markus Elfring
  2019-04-25 18:12 ` [Cocci] Data exchange through message queue " Markus Elfring
  3 siblings, 0 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-24  6:25 UTC (permalink / raw)
  To: Coccinelle

Hello,

The Coccinelle software has supported multi-processing (parameter “--jobs”)
for a while. It would occasionally be useful to share data between the
started (background) processes for specific analysis tasks.
Such parallel data collection would require a software design that is safe
with respect to data synchronisation.

The SQLAlchemy documentation provides the information “Database connections
generally do not travel across process boundaries.”
https://docs.sqlalchemy.org/en/13/core/connections.html#engine-disposal

It was confirmed once more that transaction management through this
programming interface currently works only within a single process.
https://groups.google.com/d/msg/sqlalchemy/5-6O-Pwzh4A/5xSnxE_pDAAJ
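As a minimal illustration of this constraint (using the standard library's
sqlite3 instead of SQLAlchemy, purely as a sketch; the table name "results"
is a hypothetical example), each worker process opens its own connection
instead of inheriting one across the fork:

```python
# Sketch: connections do not travel across process boundaries,
# so every worker opens its own one inside the child process.
import multiprocessing
import sqlite3

def worker(db_path, value):
    con = sqlite3.connect(db_path)  # connection created in the child
    with con:
        con.execute("INSERT INTO results(v) VALUES (?)", (value,))
    con.close()

def run_parallel(db_path, values):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS results(v INTEGER)")
    con.commit()
    con.close()
    procs = [multiprocessing.Process(target=worker, args=(db_path, v))
             for v in values]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    con = sqlite3.connect(db_path)
    rows = sorted(r[0] for r in con.execute("SELECT v FROM results"))
    con.close()
    return rows
```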


I got information like the following (on 2018-05-12).

@initialize:ocaml@
@@
let _ =
  if not (!Flag.parmap_cores = None) then failwith "bad"


It seems that I have not become familiar enough with the programming
language “OCaml” to integrate another check (in a convenient way) for
the special case that the parameter “-j 1” is passed.
(Related system constraints can also be interesting.)
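A sketch of how the quoted check might be relaxed to tolerate “-j 1”
(assuming, as the snippet above suggests, that Flag.parmap_cores is an
int option ref; untested):

```ocaml
@initialize:ocaml@
@@
(* Hypothetical sketch: accept serial execution requested as "-j 1",
   reject any real parallelism.
   Assumes Flag.parmap_cores : int option ref. *)
let _ =
  match !Flag.parmap_cores with
  | None | Some 1 -> ()
  | Some _ -> failwith "This SmPL script must be executed serially."
```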

Would you like to help a bit more with restricting selected SmPL scripts
to serial execution?

Regards,
Markus


* [Cocci] Data exchange over network interfaces by SmPL scripts
  2019-04-20 18:50 [Cocci] Checking import of code search results into a table by parallel SmPL data processing Markus Elfring
       [not found] ` <alpine.DEB.2.21.1904202112150.2499@hadrien>
  2019-04-24  6:25 ` [Cocci] Rejecting parallel execution of SmPL scripts Markus Elfring
@ 2019-04-25  8:06 ` Markus Elfring
       [not found]   ` <alpine.DEB.2.21.1904251039000.2550@hadrien>
  2019-04-25 18:12 ` [Cocci] Data exchange through message queue " Markus Elfring
  3 siblings, 1 reply; 11+ messages in thread
From: Markus Elfring @ 2019-04-25  8:06 UTC (permalink / raw)
  To: Coccinelle

[-- Attachment #1: Type: text/plain, Size: 1241 bytes --]

> Is there still a need to perform parallelisation for the mentioned software
> components by other approaches?

The multi-processing support in the Coccinelle software triggers some
development challenges.
If data should be shared between the started (background) processes,
an external system needs to be selected for the desired storage service.
Thus I would like to send these data over network interfaces with the
attached script for the evolving semantic patch language.

I stumbled on the following error message.

elfring@Sonne:~/Projekte/Coccinelle/janitor> /usr/local/bin/spatch -D server_id=localhost -D server_port=1234 list_duplicate_statement_pairs_from_if_branches-client2.cocci ~/Projekte/Linux/next-patched/drivers/media/dvb-frontends/stv0297.c
…
Using Python version:
2.7.15 (default, May 21 2018, 17:53:03) [GCC]
…
Traceback (most recent call last):
  File "<string>", line 4, in <module>
  File "<string>", line 34, in store_statements
AttributeError: __exit__
Error in Python script, line 55, file …


I would appreciate it if the shown data processing approach could work
with a range of versions of the involved software components.
So I am looking for additional solution ideas.
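For what it is worth, the traceback suggests that Python 2.7's socket
objects do not implement the context manager protocol (hence
“AttributeError: __exit__”). A version-portable sketch of the sending
helper, using contextlib.closing (host and port are placeholders here),
could look like this:

```python
# Sketch: contextlib.closing supplies __enter__/__exit__ even where the
# socket object itself is not a context manager (as on Python 2.7).
import contextlib
import socket
import struct

def send_message(host, port, payload):
    """Send one length-prefixed text message; works on Python 2.7 and 3."""
    with contextlib.closing(socket.socket(socket.AF_INET,
                                          socket.SOCK_STREAM)) as so:
        so.connect((host, int(port)))  # the port must be an integer
        b = payload.encode("utf-8")
        so.sendall(struct.pack(">I", len(b)) + b)
```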

Regards,
Markus

[-- Attachment #2: list_duplicate_statement_pairs_from_if_branches-client2.cocci --]
[-- Type: text/plain, Size: 1406 bytes --]

@initialize:python@
server_id << virtual.server_id;
server_port << virtual.server_port;
@@
import json, socket, struct, sys
sys.stderr.write("Using Python version:\n%s\n" % (sys.version))

if server_id == False:
   server_id = "localhost"

if server_port == False:
   server_port = 1234

def store_statements(fun, source, s1, s2):
    """Send data for the service."""
    records = []

    for place in source:
       records.append('{"name":%s,"file":%s,"line":%s,"column":%s,"s1":%s,"s2":%s}'
                      % (json.dumps(fun),
                         json.dumps(place.file),
                         json.dumps(place.line),
                         json.dumps(int(place.column) + 1),
                         json.dumps(s1),
                         json.dumps(s2)))

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as so:
         so.connect((server_id, server_port))
         result = "[\n"
         result += ",\n".join(records)
         result += "]"
         b = bytes(result)
         p = struct.pack(">I", len(b))
         p += b
         so.sendall(p)

@searching exists@
identifier work;
statement s1, s2;
position pos;
type T;
@@
 T work(...)
 {
 ... when any
 if (...)
 {
 ... when any
 s1@pos
 s2
 }
 ... when any
 }

@script:python collection@
fun << searching.work;
s1 << searching.s1;
s2 << searching.s2;
place << searching.pos;
@@
store_statements(fun, place, s1, s2)



* Re: [Cocci] Data exchange over network interfaces by SmPL scripts
       [not found]   ` <alpine.DEB.2.21.1904251039000.2550@hadrien>
@ 2019-04-25 10:32     ` Markus Elfring
  2019-04-27 17:20       ` Markus Elfring
  2019-04-27 17:24       ` Markus Elfring
  0 siblings, 2 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-25 10:32 UTC (permalink / raw)
  To: Julia Lawall; +Cc: Coccinelle

>>   File "<string>", line 34, in store_statements
>> AttributeError: __exit__
>> Error in Python script, line 55, file …
>
> I have no idea.  It looks like a python problem.

Partly, yes (of course).


> If you want help, you will have to construct a script
> that exhibits the error with print statements only.

This suggestion will probably not work, because additional test output
alone would not trigger the shown error response.

After adding another data type conversion for the network service port
variable, I see solution challenges like the following.

elfring@Sonne:~/Projekte/Coccinelle/janitor> /usr/local/bin/spatch --python /usr/bin/python3 -D server_id=127.0.0.1 -D server_port=1234 list_duplicate_statement_pairs_from_if_branches-client2.cocci ~/Projekte/Linux/next-patched/drivers/media/dvb-frontends/stv0297.c
…
Using Python version:
3.7.2 (default, Dec 30 2018, 16:18:15) [GCC]
…
connecting
Traceback (most recent call last):
  File "<string>", line 4, in <module>
  File "<string>", line 35, in store_statements
socket.gaierror: [Errno -2] Name or service not known
Error in Python script, line 56, file …


How should the system configuration details be improved for a simple test connection?
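One way to narrow such a failure down is to test the name resolution step
on its own before attempting a connection; a minimal sketch (the helper
name is illustrative):

```python
# Sketch: check whether the target resolves for a TCP connection,
# to separate "socket.gaierror" from other connection problems.
import socket

def can_resolve(host, port):
    """Report whether (host, port) resolves for an IPv4 TCP connection."""
    try:
        socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM)
        return True
    except socket.gaierror:
        return False
```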

Regards,
Markus


* [Cocci] Data exchange through message queue interfaces by SmPL scripts
  2019-04-20 18:50 [Cocci] Checking import of code search results into a table by parallel SmPL data processing Markus Elfring
                   ` (2 preceding siblings ...)
  2019-04-25  8:06 ` [Cocci] Data exchange over network interfaces by " Markus Elfring
@ 2019-04-25 18:12 ` Markus Elfring
  3 siblings, 0 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-25 18:12 UTC (permalink / raw)
  To: Coccinelle

[-- Attachment #1: Type: text/plain, Size: 1460 bytes --]

> Is there still a need to perform parallelisation for the mentioned software
> components by other approaches?

The multi-processing support in the Coccinelle software triggers some
development challenges.
If data should be shared between the started (background) processes,
an external system needs to be selected for the desired storage service.
I was therefore tempted to send these data through message queue interfaces
as well; the attached script for the semantic patch language demonstrates
this alternative data processing approach.


elfring@Sonne:~/Projekte/Linux/next-patched> spatch ~/Projekte/Coccinelle/janitor/list_duplicate_statement_pairs_from_if_branches10.cocci drivers/media/dvb-frontends/stv0297.c
…
statement1|statement2|"function name"|"source file"|incidence
dprintk ( "%s: readreg error (reg == 0x%02x, ret == %i)\n" , __func__ , reg , ret ) ;|return - 1 ;|stv0297_readreg|drivers/media/dvb-frontends/stv0297.c|3
dprintk ( "%s: readreg error (reg == 0x%02x, ret == %i)\n" , __func__ , reg1 , ret ) ;|return - 1 ;|stv0297_readregs|drivers/media/dvb-frontends/stv0297.c|3


Such a simple test case works because it stays within the known default
system limits.
If more questionable source code combinations are to be analysed, it may be
necessary to increase the configuration parameter “msg_max” considerably.
What do you think about trying further fine-tuning in the affected areas?
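On Linux, the limits in question can be inspected under /proc before
raising “msg_max”; a small sketch (the paths are Linux-specific, and the
helper name is illustrative):

```python
# Sketch: read the Linux POSIX message queue limits, where available.
# See also: man mq_overview.
def mqueue_limits():
    """Return the mqueue limits, or None for values not exposed."""
    limits = {}
    for name in ("msg_max", "msgsize_max", "queues_max"):
        try:
            with open("/proc/sys/fs/mqueue/" + name) as f:
                limits[name] = int(f.read())
        except (OSError, IOError):
            limits[name] = None  # not exposed on this system
    return limits
```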

Regards,
Markus

[-- Attachment #2: list_duplicate_statement_pairs_from_if_branches10.cocci --]
[-- Type: text/plain, Size: 3386 bytes --]

@initialize:python@
@@
import io, posix_ipc, json, sys
sys.stderr.write("Creation of a message queue\n"
                 "QUEUE_MESSAGES_MAX_DEFAULT: %d\n"
                 % (posix_ipc.QUEUE_MESSAGES_MAX_DEFAULT))
# See also:
# * man mq_overview
# * https://stackoverflow.com/questions/32757046/is-it-possible-to-open-message-queue-in-linux-with-huge-number-of-elements
mq = posix_ipc.MessageQueue(None, posix_ipc.O_CREX)
sys.stderr.write("A message queue was created.\n")

def store_statements(fun, source, s1, s2):
    """Send data for the service."""
    records = []

    for place in source:
       records.append('{"name":%s,"file":%s,"line":%s,"column":%s,"s1":%s,"s2":%s}'
                      % (json.dumps(fun),
                         json.dumps(place.file),
                         json.dumps(place.line),
                         json.dumps(int(place.column) + 1),
                         json.dumps(s1),
                         json.dumps(s2)))

    result = "[\n"
    result += ",\n".join(records)
    result += "\n]"
    mq.send(bytes(result), 0)

@searching exists@
identifier work;
statement s1, s2;
position pos;
type T;
@@
 T work(...)
 {
 ... when any
 if (...)
 {
 ... when any
 s1@pos
 s2
 }
 ... when any
 }

@script:python collection@
fun << searching.work;
s1 << searching.s1;
s2 << searching.s2;
place << searching.pos;
@@
store_statements(fun, place, s1, s2)

@finalize:python@
@@
if mq.current_messages > 0:
   mapping = {}

   def insert(x):
       """Add data to an internal table."""
       key = x["name"], x["file"], x["line"], x["column"]
       if key in mapping:
          sys.stderr.write("""A duplicate key was passed.
function: %s
file: %s
line: %s
column: %d
""" % key)
          raise RuntimeError
       else:
          mapping[key] = x["s1"], x["s2"]

   def data_import():
      while True:
         try:
            for v in json.loads(mq.receive(0)[0]):
               insert(v)
         except posix_ipc.BusyError:
            break

   data_import()
   from collections import Counter
   counts = Counter()

   for k, v in mapping.items():
      counts[(v[0], v[1], k[0], k[1])] += 1

   delimiter = "|"
   duplicates = {}

   for k, v in counts.items():
      if v > 1:
         duplicates[k] = v

   if len(duplicates.keys()) > 0:
      sys.stdout.write(delimiter.join(["statement1",
                                       "statement2",
                                       '"function name"',
                                       '"source file"',
                                       "incidence"]))
      sys.stdout.write("\r\n")

      for k, v in duplicates.items():
         sys.stdout.write(delimiter.join([k[0], k[1], k[2], k[3], str(v)]))
         sys.stdout.write("\r\n")
   else:
      sys.stderr.write("Duplicate statements were not determined from "
                       + str(len(mapping)) + " records.\n")
      sys.stderr.write(delimiter.join(["statement1",
                                       "statement2",
                                       '"function name"',
                                       '"source file"']))
      sys.stderr.write("\r\n")

      for k, v in counts.items():
         sys.stderr.write(delimiter.join([k[0], k[1], k[2], k[3]]))
         sys.stderr.write("\r\n")
else:
   sys.stderr.write("No result for this analysis!\n")



* Re: [Cocci] Data exchange over network interfaces by SmPL scripts
  2019-04-25 10:32     ` Markus Elfring
@ 2019-04-27 17:20       ` Markus Elfring
  2019-04-27 17:24       ` Markus Elfring
  1 sibling, 0 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-27 17:20 UTC (permalink / raw)
  To: Coccinelle

>   File "<string>", line 35, in store_statements
> socket.gaierror: [Errno -2] Name or service not known
> Error in Python script, line 56, file …

It seems that the attached adjusted data processing approach can produce
a usable analysis result.

elfring@Sonne:~/Projekte/Coccinelle/janitor> time /usr/bin/python3 list_duplicate_statement_pairs_from_if_branches-server4.py
statement1|statement2|"function name"|"source file"|incidence
dprintk ( "%s: readreg error (reg == 0x%02x, ret == %i)\n" , __func__ , reg , ret ) ;|return - 1 ;|stv0297_readreg|/home/elfring/Projekte/Linux/next-patched/drivers/media/dvb-frontends/stv0297.c|3
dprintk ( "%s: readreg error (reg == 0x%02x, ret == %i)\n" , __func__ , reg1 , ret ) ;|return - 1 ;|stv0297_readregs|/home/elfring/Projekte/Linux/next-patched/drivers/media/dvb-frontends/stv0297.c|3

real	0m1,044s
user	0m0,389s
sys	0m0,055s


Unfortunately, I observed during a few runs on my test system
that the displayed record sets can vary. Thus I guess that this approach
(which works together with Python multi-threading functionality) will need
further software adjustments.
Would you like to offer any advice here?

Regards,
Markus


* Re: [Cocci] Data exchange over network interfaces by SmPL scripts
  2019-04-25 10:32     ` Markus Elfring
  2019-04-27 17:20       ` Markus Elfring
@ 2019-04-27 17:24       ` Markus Elfring
  2019-04-30  8:55         ` Markus Elfring
  2019-06-01 11:13         ` Markus Elfring
  1 sibling, 2 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-27 17:24 UTC (permalink / raw)
  Cc: Coccinelle

[-- Attachment #1: Type: text/plain, Size: 1221 bytes --]

> connecting
>   File "<string>", line 35, in store_statements
> socket.gaierror: [Errno -2] Name or service not known
> Error in Python script, line 56, file …

It seems that the attached adjusted data processing approach can produce
a usable analysis result.

elfring@Sonne:~/Projekte/Coccinelle/janitor> time /usr/bin/python3 list_duplicate_statement_pairs_from_if_branches-server4.py
statement1|statement2|"function name"|"source file"|incidence
dprintk ( "%s: readreg error (reg == 0x%02x, ret == %i)\n" , __func__ , reg , ret ) ;|return - 1 ;|stv0297_readreg|/home/elfring/Projekte/Linux/next-patched/drivers/media/dvb-frontends/stv0297.c|3
dprintk ( "%s: readreg error (reg == 0x%02x, ret == %i)\n" , __func__ , reg1 , ret ) ;|return - 1 ;|stv0297_readregs|/home/elfring/Projekte/Linux/next-patched/drivers/media/dvb-frontends/stv0297.c|3

real	0m1,044s
user	0m0,389s
sys	0m0,055s


Unfortunately, I observed during a few runs on my test system
that the displayed record sets can vary. Thus I guess that this approach
(which works together with Python multi-threading functionality) will need
further software adjustments.
Would you like to offer any advice here?
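One hedged guess about the varying record sets: the plain `inputs` list may
be read before all handler threads have finished. A sketch that collects
through a thread-safe queue and waits for the expected number of messages
before shutting the server down (all names here are illustrative, not the
attached script's):

```python
# Sketch: hand results between handler threads through queue.Queue and
# only shut down once every expected message has arrived.
import queue
import socket
import socketserver
import threading
import time

results = queue.Queue()  # thread-safe hand-off between handler threads

class CollectingHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(4096)
        if data:
            results.put(data)

def collect(messages, timeout=5.0):
    """Serve until all expected messages arrived (or the timeout passes)."""
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0),
                                             CollectingHandler)
    port = server.server_address[1]
    t = threading.Thread(target=server.serve_forever)
    t.start()
    for m in messages:
        s = socket.create_connection(("127.0.0.1", port))
        s.sendall(m)
        s.close()
    deadline = time.time() + timeout
    while results.qsize() < len(messages) and time.time() < deadline:
        time.sleep(0.05)  # let the handler threads drain
    server.shutdown()
    server.server_close()
    t.join()
    return sorted(results.get() for _ in range(results.qsize()))
```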

Regards,
Markus

[-- Attachment #2: list_duplicate_statement_pairs_from_if_branches-server4.py --]
[-- Type: text/x-python, Size: 4454 bytes --]

import threading, socket, socketserver, struct, subprocess
inputs = []

def receive_data(s, n):
    d = b''

    while len(d) < n:
        p = s.recv(n - len(d))
        if not p:
           return None

        d += p

    return d

def receive_message(s):
    expect = receive_data(s, 4)
    if not expect:
       return None

    return receive_data(s, struct.unpack(">I", expect)[0])

class threaded_TCP_request_handler(socketserver.BaseRequestHandler):
    def handle(self):
        data = receive_message(self.request)
        if data:
           inputs.append(data.decode())

class threaded_TCP_server(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

if __name__ == "__main__":
    server = threaded_TCP_server(("localhost", 1234), threaded_TCP_request_handler)
    with server:
        ip, port = server.server_address
        server_thread = threading.Thread(target = server.serve_forever)
        server_thread.daemon = True
        server_thread.start()
        cp = subprocess.run(["/usr/local/bin/spatch",
                             "--timeout",
                             "9",
                             "--python",
                             "/usr/bin/python3",
                             "-D",
                             "server_id=" + str(ip),
                             "-D",
                             "server_port=" + str(port),
                             "/home/elfring/Projekte/Coccinelle/janitor/list_duplicate_statement_pairs_from_if_branches-client3.cocci",
                             "/home/elfring/Projekte/Linux/next-patched/drivers/media/dvb-frontends/stv0297.c"],
                            capture_output = True, text = True)
        server.shutdown()
        import sys

        if cp.returncode:
           sys.stderr.write("%s\n===\nexit code: %d" % (cp.stderr, cp.returncode))
        else:
           if len(inputs) > 0:
              def report():
                 mapping = {}

                 def insert(x):
                    """Add data to an internal table."""
                    key = x["name"], x["file"], x["line"], x["column"]
                    if key in mapping:
                       sys.stderr.write("""A duplicate key was passed.
function: %s
file: %s
line: %s
column: %d
""" % key)
                       raise RuntimeError
                    else:
                       mapping[key] = x["s1"], x["s2"]

                 def data_import():
                    import json
                    for k in inputs:
                       for v in json.loads(k):
                          insert(v)

                 data_import()
                 from collections import Counter
                 counts = Counter()

                 for k, v in mapping.items():
                    counts[(v[0], v[1], k[0], k[1])] += 1

                 delimiter = "|"
                 duplicates = {}

                 for k, v in counts.items():
                    if v > 1:
                       duplicates[k] = v

                 if len(duplicates.keys()) > 0:
                    sys.stdout.write(delimiter.join(["statement1",
                                                     "statement2",
                                                     '"function name"',
                                                     '"source file"',
                                                     "incidence"])
                                     + "\r\n")

                    for k, v in duplicates.items():
                       sys.stdout.write(delimiter.join([k[0], k[1], k[2], k[3], str(v)])
                                        + "\r\n")
                 else:
                    sys.stderr.write("Duplicate statements were not determined"
                                     " from the following records.\n"
                                     + delimiter.join(["statement1",
                                                       "statement2",
                                                       '"function name"',
                                                       '"source file"'])
                                     + "\r\n")

                    for k, v in counts.items():
                       if v < 2:
                          sys.stderr.write(delimiter.join([k[0], k[1], k[2], k[3]])
                                           + "\r\n")

              report()
           else:
              sys.stderr.write("No result for this analysis!\n")

[-- Attachment #3: list_duplicate_statement_pairs_from_if_branches-client3.cocci --]
[-- Type: text/plain, Size: 1483 bytes --]

@initialize:python@
s_id << virtual.server_id;
s_port << virtual.server_port;
@@
import json, socket, struct, sys

if s_id == False:
   s_id = "localhost"

target = s_id, int(s_port) if s_port else 1234
sys.stderr.write("Using Python version:\n%s\n" % (sys.version))
sys.stderr.write('Connections will be tried with server “%s” on port “%d”.\n'
                 % target)

def store_statements(fun, source, s1, s2):
    """Send data for the service."""
    records = []

    for place in source:
       records.append('{"name":%s,"file":%s,"line":%s,"column":%s,"s1":%s,"s2":%s}'
                      % (json.dumps(fun),
                         json.dumps(place.file),
                         json.dumps(place.line),
                         json.dumps(int(place.column) + 1),
                         json.dumps(s1),
                         json.dumps(s2)))

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as so:
        so.connect(target)
        result = "[\n"
        result += ",\n".join(records)
        result += "\n]"
        b = bytes(result, "utf8")
        p = struct.pack(">I", len(b))
        p += b
        so.sendall(p)

@searching exists@
identifier work;
statement s1, s2;
position pos;
type T;
@@
 T work(...)
 {
 ... when any
 if (...)
 {
 ... when any
 s1@pos
 s2
 }
 ... when any
 }

@script:python collection@
fun << searching.work;
s1 << searching.s1;
s2 << searching.s2;
place << searching.pos;
@@
store_statements(fun, place, s1, s2)



* Re: [Cocci] Data exchange over network interfaces by SmPL scripts
  2019-04-27 17:24       ` Markus Elfring
@ 2019-04-30  8:55         ` Markus Elfring
  2019-06-01 11:13         ` Markus Elfring
  1 sibling, 0 replies; 11+ messages in thread
From: Markus Elfring @ 2019-04-30  8:55 UTC (permalink / raw)
  To: Coccinelle

> Unfortunately, I observed during a few runs on my test system
> that the displayed record sets can vary. Thus I guess that this approach
> (which works together with Python multi-threading functionality) will need
> further software adjustments.

I am curious how the clarification of this software behaviour will evolve,
also with the help of additional information around the topic
“Checking network input processing by Python for a multi-threaded server”.
https://mail.python.org/pipermail/python-list/2019-April/740645.html

Regards,
Markus


* Re: [Cocci] Data exchange over network interfaces by SmPL scripts
  2019-04-27 17:24       ` Markus Elfring
  2019-04-30  8:55         ` Markus Elfring
@ 2019-06-01 11:13         ` Markus Elfring
  1 sibling, 0 replies; 11+ messages in thread
From: Markus Elfring @ 2019-06-01 11:13 UTC (permalink / raw)
  To: Coccinelle

> Unfortunately, I observed during a few runs on my test system
> that the displayed record sets can vary. Thus I guess that this approach
> (which works together with Python multi-threading functionality) will need
> further software adjustments.

I stumbled on general software development challenges around inter-process
communication over TCP connections.
This programming interface supports reliable data transmission.
But the POSIX API does not yet directly support determining how much of
the sent data is still on the way to the receiving process.
* Operating systems can provide additional functions for this purpose.
  I find that the Linux APIs could still be improved for more efficient
  analysis of network connections.

* Network protocols also influence the corresponding data processing approaches.

  + Customised network communication is not needed if you can depend on
    the system functionality provided by databases.

  + If you would occasionally like to experiment with related services,
    applying the technology “Common Object Request Broker Architecture”
    can be another interesting design option.
    Example:
    http://omniorb.sourceforge.net/
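As one concrete instance of such an operating-system-specific function,
Linux offers the TIOCOUTQ/SIOCOUTQ ioctl, which reports how many bytes are
still sitting in a socket's send buffer; a hedged, Linux-only sketch:

```python
# Sketch (Linux-specific): TIOCOUTQ reports the byte count still queued
# in a socket's send buffer (unsent or not yet acknowledged).
import array
import fcntl
import socket
import termios

def unsent_bytes(sock):
    """Return the number of bytes still queued in the send buffer."""
    buf = array.array("i", [0])
    fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ, buf)
    return buf[0]
```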

Regards,
Markus


end of thread, other threads:[~2019-06-01 11:13 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-20 18:50 [Cocci] Checking import of code search results into a table by parallel SmPL data processing Markus Elfring
     [not found] ` <alpine.DEB.2.21.1904202112150.2499@hadrien>
2019-04-20 19:31   ` Markus Elfring
2019-04-23  9:48   ` Markus Elfring
2019-04-24  6:25 ` [Cocci] Rejecting parallel execution of SmPL scripts Markus Elfring
2019-04-25  8:06 ` [Cocci] Data exchange over network interfaces by " Markus Elfring
     [not found]   ` <alpine.DEB.2.21.1904251039000.2550@hadrien>
2019-04-25 10:32     ` Markus Elfring
2019-04-27 17:20       ` Markus Elfring
2019-04-27 17:24       ` Markus Elfring
2019-04-30  8:55         ` Markus Elfring
2019-06-01 11:13         ` Markus Elfring
2019-04-25 18:12 ` [Cocci] Data exchange through message queue " Markus Elfring
