From: Sonal Santan <sonals@xilinx.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	Cyril Chemparathy <cyrilc@xilinx.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Lizhi Hou <lizhih@xilinx.com>, Michal Simek <michals@xilinx.com>,
	"airlied@redhat.com" <airlied@redhat.com>
Subject: RE: [RFC PATCH Xilinx Alveo 0/6] Xilinx PCIe accelerator driver
Date: Thu, 28 Mar 2019 00:13:55 +0000
Message-ID: <BYAPR02MB5160162639C2EAC6B19901B7BB590@BYAPR02MB5160.namprd02.prod.outlook.com>
In-Reply-To: <20190327141137.GK2665@phenom.ffwll.local>



> -----Original Message-----
> From: Daniel Vetter [mailto:daniel.vetter@ffwll.ch] On Behalf Of Daniel Vetter
> Sent: Wednesday, March 27, 2019 7:12 AM
> To: Sonal Santan <sonals@xilinx.com>
> Cc: Daniel Vetter <daniel@ffwll.ch>; dri-devel@lists.freedesktop.org;
> gregkh@linuxfoundation.org; Cyril Chemparathy <cyrilc@xilinx.com>; linux-
> kernel@vger.kernel.org; Lizhi Hou <lizhih@xilinx.com>; Michal Simek
> <michals@xilinx.com>; airlied@redhat.com
> Subject: Re: [RFC PATCH Xilinx Alveo 0/6] Xilinx PCIe accelerator driver
> 
> On Wed, Mar 27, 2019 at 12:50:14PM +0000, Sonal Santan wrote:
> >
> >
> > > -----Original Message-----
> > > From: Daniel Vetter [mailto:daniel@ffwll.ch]
> > > Sent: Wednesday, March 27, 2019 1:23 AM
> > > To: Sonal Santan <sonals@xilinx.com>
> > > Cc: dri-devel@lists.freedesktop.org; gregkh@linuxfoundation.org;
> > > Cyril Chemparathy <cyrilc@xilinx.com>; linux-kernel@vger.kernel.org;
> > > Lizhi Hou <lizhih@xilinx.com>; Michal Simek <michals@xilinx.com>;
> > > airlied@redhat.com
> > > Subject: Re: [RFC PATCH Xilinx Alveo 0/6] Xilinx PCIe accelerator
> > > driver
> > >
> > > On Wed, Mar 27, 2019 at 12:30 AM Sonal Santan <sonals@xilinx.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Daniel Vetter [mailto:daniel.vetter@ffwll.ch] On Behalf Of
> > > > > Daniel Vetter
> > > > > Sent: Monday, March 25, 2019 1:28 PM
> > > > > To: Sonal Santan <sonals@xilinx.com>
> > > > > Cc: dri-devel@lists.freedesktop.org; gregkh@linuxfoundation.org;
> > > > > Cyril Chemparathy <cyrilc@xilinx.com>;
> > > > > linux-kernel@vger.kernel.org; Lizhi Hou <lizhih@xilinx.com>;
> > > > > Michal Simek <michals@xilinx.com>; airlied@redhat.com
> > > > > Subject: Re: [RFC PATCH Xilinx Alveo 0/6] Xilinx PCIe
> > > > > accelerator driver
> > > > >
> > > > > On Tue, Mar 19, 2019 at 02:53:55PM -0700, sonal.santan@xilinx.com wrote:
> > > > > > From: Sonal Santan <sonal.santan@xilinx.com>
> > > > > >
> > > > > > Hello,
> > > > > >
> > > > > > This patch series adds drivers for Xilinx Alveo PCIe accelerator cards.
> > > > > > These drivers are part of Xilinx Runtime (XRT) open source
> > > > > > stack and have been deployed by leading FaaS vendors and many
> > > > > > enterprise
> > > > > customers.
> > > > >
> > > > > Cool, first fpga driver submitted to drm! And from a high level
> > > > > I think this makes a lot of sense.
> > > > >
> > > > > > PLATFORM ARCHITECTURE
> > > > > >
> > > > > > Alveo PCIe platforms have a static shell and a reconfigurable
> > > > > > (dynamic) region. The shell is automatically loaded from PROM
> > > > > > when host is booted and PCIe is enumerated by BIOS. Shell
> > > > > > cannot be changed till next cold reboot. The shell exposes two
> > > > > > physical functions: management physical function and user
> > > > > > physical function.
> > > > > >
> > > > > > Users compile their high level design in C/C++/OpenCL or RTL
> > > > > > into FPGA image using SDx compiler. The FPGA image packaged as
> > > > > > xclbin file can be loaded onto reconfigurable region. The
> > > > > > image may contain one or more compute unit. Users can
> > > > > > dynamically swap the full image running on the reconfigurable
> > > > > > region in order to switch between different workloads.
> > > > > >
> > > > > > XRT DRIVERS
> > > > > >
> > > > > > XRT Linux kernel driver xmgmt binds to mgmt pf. The driver is
> > > > > > modular and organized into several platform drivers which
> > > > > > primarily handle the following functionality:
> > > > > > 1.  ICAP programming (FPGA bitstream download with FPGA Mgr
> > > > > >     integration)
> > > > > > 2.  Clock scaling
> > > > > > 3.  Loading firmware container also called dsabin (embedded
> > > > > >     Microblaze firmware for ERT and XMC, optional clearing bitstream)
> > > > > > 4.  In-band sensors: temp, voltage, power, etc.
> > > > > > 5.  AXI Firewall management
> > > > > > 6.  Device reset and rescan
> > > > > > 7.  Hardware mailbox for communication between two physical functions
> > > > > >
> > > > > > XRT Linux kernel driver xocl binds to user pf. Like its peer,
> > > > > > this driver is also modular and organized into several
> > > > > > platform drivers which handle the following functionality:
> > > > > > 1.  Device memory topology discovery and memory management
> > > > > > 2.  Buffer object abstraction and management for client process
> > > > > > 3.  XDMA MM PCIe DMA engine programming
> > > > > > 4.  Multi-process aware context management
> > > > > > 5.  Compute unit execution management (optionally with help of ERT)
> > > > > >     for client processes
> > > > > > 6.  Hardware mailbox for communication between two physical functions
> > > > > >
> > > > > > The drivers export ioctls and sysfs nodes for various services.
> > > > > > xocl driver makes heavy use of DRM GEM features for device
> > > > > > memory management, reference counting, mmap support and export/import.
> > > > > > xocl also includes a simple scheduler called KDS which
> > > > > > schedules compute units and interacts with hardware scheduler
> > > > > > running ERT firmware. The scheduler understands custom opcodes
> > > > > > packaged into command objects and provides an asynchronous command
> > > > > > done notification via POSIX poll.
> > > > > >
> > > > > > More details on architecture, software APIs, ioctl
> > > > > > definitions, execution model, etc. is available as Sphinx
> > > > > > documentation--
> > > > > >
> > > > > > https://xilinx.github.io/XRT/2018.3/html/index.html
> > > > > >
> > > > > > The complete runtime software stack (XRT) which includes out
> > > > > > of tree kernel drivers, user space libraries, board utilities
> > > > > > and firmware for the hardware scheduler is open source and
> > > > > > available at https://github.com/Xilinx/XRT
> > > > >
> > > > > Before digging into the implementation side more I looked into
> > > > > the userspace here. I admit I got lost a bit, since there's lots
> > > > > of indirections and abstractions going on, but it seems like
> > > > > this is just a fancy ioctl wrapper/driver backend abstractions.
> > > > > Not really
> > > something applications would use.
> > > > >
> > > >
> > > > Appreciate your feedback.
> > > >
> > > > The userspace libraries define a common abstraction but have
> > > > different implementations for Zynq Ultrascale+ embedded platform,
> > > > PCIe based Alveo (and FaaS) and emulation flows. The latter lets you
> > > > run your application without physical hardware.
> > > >
> > > > >
> > > > > From the pretty picture on github it looks like there's some
> > > > > opencl/ml/other fancy stuff sitting on top that applications
> > > > > would use. Is
> > > that also available?
> > > >
> > > > The full OpenCL runtime is available in the same repository.
> > > > Xilinx ML Suite is also based on XRT and its source can be found
> > > > at
> > > https://github.com/Xilinx/ml-suite.
> > >
> > > Hm, I did a few git grep for the usual opencl entry points, but
> > > didn't find anything. Do I need to run some build scripts first
> > > (which downloads additional sourcecode)? Or is there some symbol
> > > mangling going on and that's why I don't find anything? Pointers very
> much appreciated.
> >
> > The bulk of the OCL runtime code can be found inside
> > https://github.com/Xilinx/XRT/tree/master/src/runtime_src/xocl.
> > The OCL runtime also includes
> > https://github.com/Xilinx/XRT/tree/master/src/runtime_src/xrt.
> > The OCL runtime library, libxilinxopencl.so, in turn uses XRT APIs to talk
> > to the drivers. For PCIe, these XRT APIs are implemented in the library
> > libxrt_core.so, the source for which is
> > https://github.com/Xilinx/XRT/tree/master/src/runtime_src/driver/xclng/xrt.
> >
> > You can build a fully functioning runtime stack by following very
> > simple build instructions--
> > https://xilinx.github.io/XRT/master/html/build.html
> >
> > We do have a few dependencies on standard Linux packages including a
> > few OpenCL packages bundled by Linux distros: ocl-icd, ocl-icd-devel
> > and opencl-headers
> 
> Thanks a lot for pointers. No idea why I didn't find this stuff, I guess I was
> blind.
> 
> The thing I'm really interested in is the compiler, since at least the experience
> from gpus says that very much is part of the overall uapi, and definitely
> needed to be able to make any changes to the implementation.
> Looking at clCreateProgramWithSource there's only a lookup of cached
> compiles (it looks for xclbin), and src/runtime_src/xclbin doesn't look like that
> provides a compiler either. It seems like apps need to precompile everything
> first. Am I again missing something, or is this how it's supposed to work?
> 
XRT works with precompiled binaries produced by the Xilinx SDx compiler,
xocc. The resulting binary (xclbin) is loaded with clCreateProgramWithBinary().
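
For illustration, here is a minimal host-side sketch of that flow using only
standard OpenCL 1.2 entry points. The file name "vadd.xclbin" and kernel name
"vadd" are made-up placeholders, not taken from the patches, and error handling
is omitted:

/*
 * Hypothetical sketch, not part of XRT or this patch series: load a
 * precompiled xclbin through the standard OpenCL 1.2 API. The binary
 * itself is produced offline by the vendor compiler.
 */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

    /* Read the precompiled image from disk. */
    FILE *fp = fopen("vadd.xclbin", "rb");
    fseek(fp, 0, SEEK_END);
    size_t size = (size_t)ftell(fp);
    rewind(fp);
    unsigned char *binary = malloc(size);
    fread(binary, 1, size, fp);
    fclose(fp);

    /* No online compile: hand the prebuilt binary to the runtime. */
    cl_int status;
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &size,
                            (const unsigned char **)&binary, &status, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);

    cl_kernel krnl = clCreateKernel(prog, "vadd", NULL);
    /* ... create buffers, set kernel args, enqueue, read results ... */

    clReleaseKernel(krnl);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    free(binary);
    return 0;
}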

> Note: There's no expectation for the fully optimizing compiler, and we're
> totally ok if there's an optimizing proprietary compiler and a basic open one
> (amd, and bunch of other companies all have such dual stacks running on top
> of drm kernel drivers). But a basic compiler that can convert basic kernels into
> machine code is expected.
> 
Although the compiler is not open source, the compilation flow lets users examine
the output of each stage. For example, if you write your kernel in OpenCL/C/C++
you can view the RTL (Verilog/VHDL) produced by the first stage of compilation.
Note that the compiler is really generating a custom circuit from a high-level
input, which in the last phase gets synthesized into a bitstream. Expert hardware
designers can handcraft a circuit in RTL and feed it to the compiler. Our FPGA
tools let you view the generated hardware design, the register map, etc. You can
get more information about a compiled design by running an XRT tool such as
xclbinutil on the generated file.
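
To make that concrete, here is a made-up example of the kind of kernel source
such a flow consumes: a trivial single work-item vector add in OpenCL C. The
names and the absence of any tool-specific pragmas are assumptions for
illustration only, not taken from the patches:

/* Hypothetical kernel, not taken from the patches: xocc would turn this
 * C-level description into RTL and ultimately into a partial bitstream
 * packaged inside an xclbin. */
__kernel void vadd(__global const int *a,
                   __global const int *b,
                   __global int *out,
                   const int count)
{
    for (int i = 0; i < count; ++i)
        out[i] = a[i] + b[i];
}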

In essence, compiling for FPGAs is quite different from compiling for a
GPU/CPU/DSP. Interestingly, an FPGA compile can take anywhere from 30 minutes to
a few hours for a single testcase.

Thanks,
-Sonal

> Thanks, Daniel
> 
> >
> > Thanks,
> > -Sonal
> >
> > >
> > > > Typically end users use OpenCL APIs which are part of XRT stack.
> > > > One can write an application to directly call XRT APIs defined at
> > > > https://xilinx.github.io/XRT/2018.3/html/xclhal2.main.html
> > >
> > > I have no clue about DNN/ML unfortunately, I think I'll try to look
> > > into the ocl side a bit more first.
> > >
> > > Thanks, Daniel
> > >
> > > >
> > > > Thanks,
> > > > -Sonal
> > > > >
> > > > > Thanks, Daniel
> > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > -Sonal
> > > > > >
> > > > > > Sonal Santan (6):
> > > > > >   Add skeleton code: ioctl definitions and build hooks
> > > > > >   Global data structures shared between xocl and xmgmt drivers
> > > > > >   Add platform drivers for various IPs and frameworks
> > > > > >   Add core of XDMA driver
> > > > > >   Add management driver
> > > > > >   Add user physical function driver
> > > > > >
> > > > > >  drivers/gpu/drm/Kconfig                    |    2 +
> > > > > >  drivers/gpu/drm/Makefile                   |    1 +
> > > > > >  drivers/gpu/drm/xocl/Kconfig               |   22 +
> > > > > >  drivers/gpu/drm/xocl/Makefile              |    3 +
> > > > > >  drivers/gpu/drm/xocl/devices.h             |  954 +++++
> > > > > >  drivers/gpu/drm/xocl/ert.h                 |  385 ++
> > > > > >  drivers/gpu/drm/xocl/lib/Makefile.in       |   16 +
> > > > > >  drivers/gpu/drm/xocl/lib/cdev_sgdma.h      |   63 +
> > > > > >  drivers/gpu/drm/xocl/lib/libxdma.c         | 4368 ++++++++++++++++++++
> > > > > >  drivers/gpu/drm/xocl/lib/libxdma.h         |  596 +++
> > > > > >  drivers/gpu/drm/xocl/lib/libxdma_api.h     |  127 +
> > > > > >  drivers/gpu/drm/xocl/mgmtpf/Makefile       |   29 +
> > > > > >  drivers/gpu/drm/xocl/mgmtpf/mgmt-core.c    |  960 +++++
> > > > > >  drivers/gpu/drm/xocl/mgmtpf/mgmt-core.h    |  147 +
> > > > > >  drivers/gpu/drm/xocl/mgmtpf/mgmt-cw.c      |   30 +
> > > > > >  drivers/gpu/drm/xocl/mgmtpf/mgmt-ioctl.c   |  148 +
> > > > > >  drivers/gpu/drm/xocl/mgmtpf/mgmt-reg.h     |  244 ++
> > > > > >  drivers/gpu/drm/xocl/mgmtpf/mgmt-sysfs.c   |  318 ++
> > > > > >  drivers/gpu/drm/xocl/mgmtpf/mgmt-utils.c   |  399 ++
> > > > > >  drivers/gpu/drm/xocl/subdev/dna.c          |  356 ++
> > > > > >  drivers/gpu/drm/xocl/subdev/feature_rom.c  |  412 ++
> > > > > >  drivers/gpu/drm/xocl/subdev/firewall.c     |  389 ++
> > > > > >  drivers/gpu/drm/xocl/subdev/fmgr.c         |  198 +
> > > > > >  drivers/gpu/drm/xocl/subdev/icap.c         | 2859 +++++++++++++
> > > > > >  drivers/gpu/drm/xocl/subdev/mailbox.c      | 1868 +++++++++
> > > > > >  drivers/gpu/drm/xocl/subdev/mb_scheduler.c | 3059 ++++++++++++++
> > > > > >  drivers/gpu/drm/xocl/subdev/microblaze.c   |  722 ++++
> > > > > >  drivers/gpu/drm/xocl/subdev/mig.c          |  256 ++
> > > > > >  drivers/gpu/drm/xocl/subdev/sysmon.c       |  385 ++
> > > > > >  drivers/gpu/drm/xocl/subdev/xdma.c         |  510 +++
> > > > > >  drivers/gpu/drm/xocl/subdev/xmc.c          | 1480 +++++++
> > > > > >  drivers/gpu/drm/xocl/subdev/xvc.c          |  461 +++
> > > > > >  drivers/gpu/drm/xocl/userpf/Makefile       |   27 +
> > > > > >  drivers/gpu/drm/xocl/userpf/common.h       |  157 +
> > > > > >  drivers/gpu/drm/xocl/userpf/xocl_bo.c      | 1255 ++++++
> > > > > >  drivers/gpu/drm/xocl/userpf/xocl_bo.h      |  119 +
> > > > > >  drivers/gpu/drm/xocl/userpf/xocl_drm.c     |  640 +++
> > > > > >  drivers/gpu/drm/xocl/userpf/xocl_drv.c     |  743 ++++
> > > > > >  drivers/gpu/drm/xocl/userpf/xocl_ioctl.c   |  396 ++
> > > > > >  drivers/gpu/drm/xocl/userpf/xocl_sysfs.c   |  344 ++
> > > > > >  drivers/gpu/drm/xocl/version.h             |   22 +
> > > > > >  drivers/gpu/drm/xocl/xclbin.h              |  314 ++
> > > > > >  drivers/gpu/drm/xocl/xclfeatures.h         |  107 +
> > > > > >  drivers/gpu/drm/xocl/xocl_ctx.c            |  196 +
> > > > > >  drivers/gpu/drm/xocl/xocl_drm.h            |   91 +
> > > > > >  drivers/gpu/drm/xocl/xocl_drv.h            |  783 ++++
> > > > > >  drivers/gpu/drm/xocl/xocl_subdev.c         |  540 +++
> > > > > >  drivers/gpu/drm/xocl/xocl_thread.c         |   64 +
> > > > > >  include/uapi/drm/xmgmt_drm.h               |  204 +
> > > > > >  include/uapi/drm/xocl_drm.h                |  483 +++
> > > > > >  50 files changed, 28252 insertions(+)
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/Kconfig
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/Makefile
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/devices.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/ert.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/lib/Makefile.in
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/lib/cdev_sgdma.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/lib/libxdma.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/lib/libxdma.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/lib/libxdma_api.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/mgmtpf/Makefile
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/mgmtpf/mgmt-core.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/mgmtpf/mgmt-core.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/mgmtpf/mgmt-cw.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/mgmtpf/mgmt-ioctl.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/mgmtpf/mgmt-reg.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/mgmtpf/mgmt-sysfs.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/mgmtpf/mgmt-utils.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/dna.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/feature_rom.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/firewall.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/fmgr.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/icap.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/mailbox.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/mb_scheduler.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/microblaze.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/mig.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/sysmon.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/xdma.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/xmc.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/subdev/xvc.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/userpf/Makefile
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/userpf/common.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/userpf/xocl_bo.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/userpf/xocl_bo.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/userpf/xocl_drm.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/userpf/xocl_drv.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/userpf/xocl_ioctl.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/userpf/xocl_sysfs.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/version.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/xclbin.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/xclfeatures.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/xocl_ctx.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/xocl_drm.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/xocl_drv.h
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/xocl_subdev.c
> > > > > >  create mode 100644 drivers/gpu/drm/xocl/xocl_thread.c
> > > > > >  create mode 100644 include/uapi/drm/xmgmt_drm.h
> > > > > >  create mode 100644 include/uapi/drm/xocl_drm.h
> > > > > >
> > > > > > --
> > > > > > 2.17.0
> > > > > > _______________________________________________
> > > > > > dri-devel mailing list
> > > > > > dri-devel@lists.freedesktop.org
> > > > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> > > > >
> > > > > --
> > > > > Daniel Vetter
> > > > > Software Engineer, Intel Corporation http://blog.ffwll.ch
> > >
> > >
> > >
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> 
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
