From: "Khetan, Sharad" <sharad.khetan@intel.com>
To: Jeremy Kerr <jk@ozlabs.org>
Cc: Deepak Kodihalli <dkodihal@linux.vnet.ibm.com>,
	Andrew Jeffery <andrew@aj.id.au>,
	Vijay Khemka <vijaykhemka@fb.com>, rgrs <rgrs@protonmail.com>,
	"openbmc@lists.ozlabs.org" <openbmc@lists.ozlabs.org>,
	 "Winiarska, Iwona" <iwona.winiarska@intel.com>,
	"Bhat, Sumanth" <sumanth.bhat@intel.com>
Subject: Re: MCTP over PCI on AST2500
Date: Tue, 14 Jan 2020 06:39:41 +0000	[thread overview]
Message-ID: <C232EE9B-92CA-4E7E-BBC7-083D0EBC547B@intel.com> (raw)
In-Reply-To: <22A3B800-F833-4615-B980-EE933E1F83A9@ozlabs.org>


Thanks for the pointer, Jeremy. We will look into the demux daemon.
Thanks,
-Sharad

On Jan 13, 2020, at 10:21 PM, Jeremy Kerr <jk@ozlabs.org> wrote:

Hi Khetan,

Just a suggestion: you probably don't want to be passing MCTP messages over D-Bus; this is something we learnt from the IPMI implementation.

The current design of the mctp-demux-daemon (included in the libmctp codebase) is intended to provide an interface that will be easy to migrate to a future kernel implementation (i.e., using sockets to pass MCTP messages), and that allows multiple applications to listen for MCTP messages of different types.

Regards,


Jeremy

On 14 January 2020 1:54:49 pm AWST, "Khetan, Sharad" <sharad.khetan@intel.com> wrote:

Hi Deepak,

On 13/01/20 10:23 PM, Khetan, Sharad wrote:
Hi Andrew,

On Thu, 9 Jan 2020, at 12:27, Andrew Jeffery wrote:


 On Sat, 21 Dec 2019, at 10:45, Khetan, Sharad wrote:
 Hi Andrew,
 Sorry for late response.
 The plan is to have MCTP in user space.


 How are you handling this then? mmap()'ing the BAR from sysfs?

Sorry, let me put my brain back in, I was thinking of the wrong side of the BMC/Host MCTP channel. How much were you planning to do in userspace on the BMC? As in, are you planning to drive the BMC's PCIe MCTP controller from userspace (presumably via /dev/mem)?


For implementation on the BMC, we agree that it's better to do it in the kernel (and, as you mentioned, use a consistent interface to upper layers and provide another transport). However, given the time needed to implement things in the kernel (and the review after), we are starting with a short-term solution. We will be implementing the MCTP protocol elements in user space, along with a low-level MCTP PCIe driver just to push bits on PCIe. Iwona is working on this and should be able to describe the exact primitives.

Do you plan to do the user-space work as an extension to/reusing components from openbmc/libmctp?

Thanks,
Deepak

Yes, we plan to reuse and extend libmctp, supporting PCIe as well as SMBus bindings. We plan to add D-Bus extensions to the existing libmctp. That said, we will know the exact extent of reuse/modifications when we really start implementing.

We are implementing this for the AST2600 (we will not support any workarounds for the AST2500 bug).

@Andrew, Thanks for your response.

Thanks,
Sharad




Thread overview: 27+ messages
2019-11-20  5:26 MCTP over PCI on AST2500 rgrs
2019-11-20  6:54 ` Vijay Khemka
2019-11-20  6:59   ` Khetan, Sharad
2019-11-22  0:38     ` Andrew Jeffery
2019-12-21  0:15       ` Khetan, Sharad
2020-01-09  1:57         ` Andrew Jeffery
2020-01-09 18:17           ` Vijay Khemka
2020-01-09 20:45             ` Richard Hanley
2020-01-10  1:29               ` Andrew Jeffery
2020-01-10  0:30           ` Andrew Jeffery
2020-01-13 16:53             ` Khetan, Sharad
2020-01-13 18:54               ` Deepak Kodihalli
2020-01-14  5:54                 ` Khetan, Sharad
2020-01-14  6:20                   ` Jeremy Kerr
2020-01-14  6:39                     ` Khetan, Sharad [this message]
2020-01-14  8:10                       ` Deepak Kodihalli
2020-01-14 15:54                       ` Thomaiyar, Richard Marian
2020-01-14 17:45                     ` Patrick Williams
2020-01-15 13:51                       ` Jeremy Kerr
2020-01-15 14:16                         ` Patrick Williams
2020-01-14  8:54                   ` rgrs
2020-01-13 23:22               ` Andrew Jeffery
2020-01-10  3:40           ` Michael Richardson
2020-01-10  5:05             ` Andrew Jeffery
2020-01-10 15:38               ` Michael Richardson
2020-01-12 23:38                 ` Andrew Jeffery
2020-01-13 17:09                   ` Michael Richardson
