From: "Guo, Jia"
Subject: Re: [PATCH v2 1/2] eal: add uevent api for hot plug
Date: Wed, 5 Jul 2017 17:04:25 +0800
To: Thomas Monjalon
Cc: dev@dpdk.org, helin.zhang@intel.com, jingjing.wu@intel.com
In-Reply-To: <3931985.8o2Ck0PCcH@xps>
References: <1495986280-26207-1-git-send-email-jia.guo@intel.com> <1643564.1KfnGSoeuV@xps> <3931985.8o2Ck0PCcH@xps>

On 7/5/2017 3:32 PM, Thomas Monjalon wrote:
> 05/07/2017 05:02, Guo, Jia:
>> Hi Thomas,
>>
>> On 7/5/2017 7:45 AM, Thomas Monjalon wrote:
>>> Hi,
>>>
>>> This is an interesting step for hotplug in DPDK.
>>>
>>> 28/06/2017 13:07, Jeff Guo:
>>>> +	netlink_fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
>>> It is monitoring the whole system...
>>>
>>>> +int
>>>> +rte_uevent_get(int fd, struct rte_uevent *uevent)
>>>> +{
>>>> +	int ret;
>>>> +	char buf[RTE_UEVENT_MSG_LEN];
>>>> +
>>>> +	memset(uevent, 0, sizeof(struct rte_uevent));
>>>> +	memset(buf, 0, RTE_UEVENT_MSG_LEN);
>>>> +
>>>> +	ret = recv(fd, buf, RTE_UEVENT_MSG_LEN - 1, MSG_DONTWAIT);
>>> ... and it is read from this function called by one driver.
>>> It cannot work without a global dispatch.
>> rte_uevent_connect() is called from pci_uio_alloc_resource(), so a
>> separate socket is created for each UIO device. So I think each driver
>> can use it in isolation without affecting the others.
> Ah OK, I missed it.
>
>>> It must be a global mechanism, probably a service core.
>>> The question is also to know whether it should be a mandatory
>>> service in DPDK or an optional helper?
>> A global mechanism would be good, but so far all drivers, including the
>> mlx driver, handle the hotplug event inside the driver through a
>> callback registered by the application. A better global mechanism could
>> be tried in the future, but for now this works for all PCI UIO devices.
> mlx drivers have a special connection to the kernel through the associated
> mlx kernel drivers. That's why the PMDs handle the events in a specific way.
>
> You are adding event handling for UIO.
> Now we also need VFIO.
>
> I am wondering how it could be better integrated in the bus layer.

Absolutely. Hotplug for VFIO will be requested even more for live
migration, and we plan to add it in the next stage, once the whole UIO
hotplug feature integration is done. So, could I expect an ack if there
are no other concerns about the UIO uevent here? Thanks.

>> And furthermore, if a PCI UIO device is to use hotplug, I think it
>> should be mandatory.
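
For reference, the uevent monitoring pattern under discussion boils down
to the following minimal, self-contained sketch. It assumes only the
standard Linux netlink APIs; the local constant UEVENT_MSG_LEN stands in
for the patch's RTE_UEVENT_MSG_LEN, and names like main() and buf are
illustrative rather than taken from the patch.

/*
 * Minimal sketch: open a netlink socket subscribed to kernel uevents,
 * then do a non-blocking read, mirroring the quoted rte_uevent_get().
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define UEVENT_MSG_LEN 4096	/* stand-in for RTE_UEVENT_MSG_LEN */

int main(void)
{
	struct sockaddr_nl addr;
	char buf[UEVENT_MSG_LEN];
	ssize_t len;
	int fd;

	/* Open a netlink socket for kernel object uevents. */
	fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Join the kernel uevent multicast group to receive events. */
	memset(&addr, 0, sizeof(addr));
	addr.nl_family = AF_NETLINK;
	addr.nl_groups = 1;	/* kernel uevent multicast group */
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		close(fd);
		return 1;
	}

	/* Non-blocking read, as in the quoted patch code. */
	memset(buf, 0, sizeof(buf));
	len = recv(fd, buf, sizeof(buf) - 1, MSG_DONTWAIT);
	if (len > 0)
		printf("uevent: %s\n", buf);	/* e.g. "remove@/devices/..." */

	close(fd);
	return 0;
}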
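
The "global dispatch" Thomas suggests could take many shapes; the
fragment below is purely illustrative of the idea of one monitor routing
events to per-device callbacks. None of these names (struct dev_callback,
dispatch_uevent, hotplug_cb) exist in DPDK; they are hypothetical.

#include <string.h>

typedef void (*hotplug_cb)(const char *devpath, void *user);

struct dev_callback {
	const char *devpath;	/* sysfs path the driver registered for */
	hotplug_cb cb;
	void *user;
};

static struct dev_callback callbacks[64];
static int nb_callbacks;

/* Called with each raw uevent string, e.g. "remove@/devices/pci..." */
static void dispatch_uevent(const char *msg)
{
	const char *at = strchr(msg, '@');
	int i;

	if (at == NULL)
		return;
	/* Route the event to whichever driver registered this path. */
	for (i = 0; i < nb_callbacks; i++)
		if (strncmp(at + 1, callbacks[i].devpath,
				strlen(callbacks[i].devpath)) == 0)
			callbacks[i].cb(at + 1, callbacks[i].user);
}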