To: David Marchand
Cc: Ray Kinsella, Thomas Monjalon, techboard@dpdk.org, Bruce Richardson, dev, Kevin Traynor
From: "Burakov, Anatoly"
Date: Mon, 8 Apr 2019 16:50:46 +0100
Subject: Re: [dpdk-dev] [dpdk-techboard] DPDK ABI/API Stability

On 08-Apr-19 3:38 PM, David Marchand wrote:
> On Mon, Apr 8, 2019 at 4:03 PM Burakov, Anatoly wrote:
>
> > On 08-Apr-19 2:58 PM, David Marchand wrote:
> > > On Mon, Apr 8, 2019 at 3:39 PM Burakov, Anatoly wrote:
> > >
> > > > As a concrete proposal, my number one dream would be to see
> > > > multiprocess gone. I also recall a desire for "DPDK to be more
> > > > lightweight", and I maintain that DPDK *cannot* be lightweight if
> > > > we are to support multiprocess - we can have one or the other, but
> > > > not both. However, realistically, I don't think dropping
> > > > multiprocess is ever going to happen - not only is it too
> > > > entrenched in DPDK use cases, it is actually quite useful despite
> > > > its flaws.
> > >
> > > Well, honestly, I'd like to hear about this.
> > > What are the real use cases for multi-process support?
> > > Do we have even a single open source project that uses it?
> >
> > I'm aware of a few closed-source usages of multiprocess.
> > I also think current versions of collectd rely on a secondary process
> > (there's been a Telemetry API added to avoid that, but AFAIK the
> > support for Telemetry is not upstream in collectd yet), and so
> > do/would any dump-style applications - in fact, we ourselves include
> > such applications in our codebase (pdump, proc-info, etc.).
>
> Sorry, I don't want to hijack this thread, I can start a separate
> thread if people feel like it.
> If we go with stabilisation, we must be careful about which features we
> want to support.
>
> So about multiprocess, again, in those closed-source projects you know
> of, what are the use cases?
>
> As for what we provide in DPDK (pdump, proc-info), referring to
> ourselves is not that convincing to me, as I don't use those tools.
>
> I don't see why we could not achieve the same with a control thread
> running in the DPDK process and handling commands.
> It would be open to the outside via a more standard channel, like a
> UNIX socket or something like this.
> If we need to declare a dynamic channel, it can be constructed as an
> extension of the existing standard channel: we can open something like
> a POSIX shm and push things into it.
> Was this explored?

There are certainly things we can do that would make some aspects of
multiprocess redundant. For example, for any kind of collectd-like
scenario, the Telemetry API (or Keith's DFS, or...) could conceivably
provide a better and more maintainable way of doing things.

Our multiprocess also makes it easier to write pipeline/load-balancing
type applications. To see an example, look at our multiprocess
client-server example: it demonstrates how, instead of writing one big
monolithic application, one can write a number of smaller applications,
each doing its own thing. It is of course possible to do the same
without multiprocess, as evidenced by sample applications such as
load-balancer, distributor, ip-pipeline, etc., but it is arguably easier
to implement *real* applications that way, due to separation of concerns
and a more focused codebase.

However, there are two use cases I can think of that are either hard or
outright impossible without our multiprocess APIs.

The first one is dumping functionality. For example, dpdk_proc_info can
display info from a currently-running or defunct process - list its
memzones/mempools/etc. - basically, everything there is to know about
the shared memory can be learned that way. While this isn't a "real" use
case, it is useful for debugging.

More importantly, our multiprocess model provides resilience. In the
event of a crash, the entire application is not brought down - instead,
only the crashed process goes down. It's not /perfect/ resilience, of
course, and there are caveats (memory leaks, stale locks, etc.), but you
do get /some/ resilience that way - your process goes down, you spin up
another secondary, and you're back up and running again.

The above-described scenario is how most people (that I know of) appear
to be using multiprocess - some kind of "crash-resilient"
load-balancing/pipelining app.

>
> --
> David Marchand

--
Thanks,
Anatoly
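
To make the dump-style use case a bit more concrete, below is a minimal
sketch (not the actual proc-info code - the file name and output format
are made up for illustration) of a secondary process that attaches to a
running primary and walks its memzones. Assuming it's built as mz-dump,
you'd run it alongside the primary with something like
"./mz-dump --proc-type=secondary" (plus a matching --file-prefix if the
primary uses one), without changing the primary at all.

/* mz-dump.c - hypothetical example, not the real app/proc-info sources */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_memzone.h>

/* called once per memzone found in the shared memory configuration */
static void
print_memzone(const struct rte_memzone *mz, void *arg)
{
        unsigned int *count = arg;

        printf("memzone %u: %s, len %zu, socket %d\n",
                        (*count)++, mz->name, mz->len, (int)mz->socket_id);
}

int
main(int argc, char **argv)
{
        unsigned int count = 0;

        /* as a secondary process, this attaches to the primary's shared
         * config and hugepage memory instead of creating its own */
        if (rte_eal_init(argc, argv) < 0)
                return -1;

        if (rte_eal_process_type() != RTE_PROC_SECONDARY)
                printf("warning: not running as a secondary process\n");

        /* walk the memzone list that lives in shared memory */
        rte_memzone_walk(print_memzone, &count);
        printf("%u memzone(s) total\n", count);

        rte_eal_cleanup();
        return 0;
}

The same mechanism is what enables the "crash-resilient" scenario above:
the sketch is read-only, but a worker secondary would attach to the same
shared rings/mempools and could be restarted independently of the
primary.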