* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
[not found] <20210406232321.12104-1-decui@microsoft.com>
@ 2021-04-07 1:07 ` Andrew Lunn
2021-04-07 8:02 ` Dexuan Cui
2021-04-07 1:30 ` kernel test robot
` (4 subsequent siblings)
5 siblings, 1 reply; 25+ messages in thread
From: Andrew Lunn @ 2021-04-07 1:07 UTC (permalink / raw)
To: Dexuan Cui
Cc: davem, kuba, kys, haiyangz, sthemmin, wei.liu, liuwe, netdev,
linux-kernel, linux-hyperv
> +static int gdma_query_max_resources(struct pci_dev *pdev)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + struct gdma_general_req req = { 0 };
> + struct gdma_query_max_resources_resp resp = { 0 };
> + int err;
Network drivers need to use reverse christmas tree. I spotted a number
of functions getting this wrong. Please review the whole driver.
> +
> + gdma_init_req_hdr(&req.hdr, GDMA_QUERY_MAX_RESOURCES,
> + sizeof(req), sizeof(resp));
> +
> + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> + if (err || resp.hdr.status) {
> + pr_err("%s, line %d: err=%d, err=0x%x\n", __func__, __LINE__,
> + err, resp.hdr.status);
I expect checkpatch complained about this. Don't use pr_err(), use
dev_err(&pdev->dev, ... or netdev_err(ndev, ... You should always have
access to dev or ndev, so please change all pr_ calls.
> +static unsigned int num_queues = ANA_DEFAULT_NUM_QUEUE;
> +module_param(num_queues, uint, 0444);
No module parameters please.
Andrew
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
[not found] <20210406232321.12104-1-decui@microsoft.com>
2021-04-07 1:07 ` [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA) Andrew Lunn
@ 2021-04-07 1:30 ` kernel test robot
2021-04-07 8:08 ` Dexuan Cui
2021-04-07 1:52 ` kernel test robot
` (3 subsequent siblings)
5 siblings, 1 reply; 25+ messages in thread
From: kernel test robot @ 2021-04-07 1:30 UTC (permalink / raw)
To: Dexuan Cui, davem, kuba, kys, haiyangz, sthemmin, wei.liu, liuwe, netdev
Cc: kbuild-all, linux-kernel, linux-hyperv
[-- Attachment #1: Type: text/plain, Size: 20581 bytes --]
Hi Dexuan,
I love your patch! Perhaps something to improve:
[auto build test WARNING on net-next/master]
url: https://github.com/0day-ci/linux/commits/Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git cc0626c2aaed8e475efdd85fa374b497a7192e35
config: i386-randconfig-m021-20210406 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
# https://github.com/0day-ci/linux/commit/f086d8bc693c2686de24a81398e49496ab3747a9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
git checkout f086d8bc693c2686de24a81398e49496ab3747a9
# save the attached .config to linux build tree
make W=1 ARCH=i386
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
drivers/pci/controller/pci-hyperv.c: In function 'hv_irq_unmask':
drivers/pci/controller/pci-hyperv.c:1220:2: error: implicit declaration of function 'hv_set_msi_entry_from_desc' [-Werror=implicit-function-declaration]
1220 | hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/pci/controller/pci-hyperv.c:1252:13: error: implicit declaration of function 'cpumask_to_vpset'; did you mean 'cpumask_subset'? [-Werror=implicit-function-declaration]
1252 | nr_bank = cpumask_to_vpset(&params->int_target.vp_set, tmp);
| ^~~~~~~~~~~~~~~~
| cpumask_subset
drivers/pci/controller/pci-hyperv.c:1269:14: error: implicit declaration of function 'hv_cpu_number_to_vp_number' [-Werror=implicit-function-declaration]
1269 | (1ULL << hv_cpu_number_to_vp_number(cpu));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/pci/controller/pci-hyperv.c: In function 'prepopulate_bars':
>> drivers/pci/controller/pci-hyperv.c:1756:26: warning: right shift count >= width of type [-Wshift-count-overflow]
1756 | 4, (u32)(high_base >> 32));
| ^~
drivers/pci/controller/pci-hyperv.c: In function 'hv_pci_onchannelcallback':
>> drivers/pci/controller/pci-hyperv.c:2438:18: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
2438 | comp_packet = (struct pci_packet *)req_id;
| ^
In file included from include/linux/device.h:15,
from include/linux/pci.h:37,
from drivers/pci/controller/pci-hyperv.c:42:
drivers/pci/controller/pci-hyperv.c: In function 'hv_pci_allocate_bridge_windows':
>> drivers/pci/controller/pci-hyperv.c:2675:5: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'resource_size_t' {aka 'unsigned int'} [-Wformat=]
2675 | "Need %#llx of low MMIO space. Consider reconfiguring the VM.\n",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/dev_printk.h:19:22: note: in definition of macro 'dev_fmt'
19 | #define dev_fmt(fmt) fmt
| ^~~
drivers/pci/controller/pci-hyperv.c:2674:4: note: in expansion of macro 'dev_err'
2674 | dev_err(&hbus->hdev->device,
| ^~~~~~~
drivers/pci/controller/pci-hyperv.c:2675:15: note: format string is defined here
2675 | "Need %#llx of low MMIO space. Consider reconfiguring the VM.\n",
| ~~~~^
| |
| long long unsigned int
| %#x
>> drivers/pci/controller/pci-hyperv.c:2690:8: warning: unsigned conversion from 'long long int' to 'resource_size_t' {aka 'unsigned int'} changes value from '4294967296' to '0' [-Woverflow]
2690 | 0x100000000, -1,
| ^~~~~~~~~~~
In file included from include/linux/device.h:15,
from include/linux/pci.h:37,
from drivers/pci/controller/pci-hyperv.c:42:
drivers/pci/controller/pci-hyperv.c:2695:5: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'resource_size_t' {aka 'unsigned int'} [-Wformat=]
2695 | "Need %#llx of high MMIO space. Consider reconfiguring the VM.\n",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/dev_printk.h:19:22: note: in definition of macro 'dev_fmt'
19 | #define dev_fmt(fmt) fmt
| ^~~
drivers/pci/controller/pci-hyperv.c:2694:4: note: in expansion of macro 'dev_err'
2694 | dev_err(&hbus->hdev->device,
| ^~~~~~~
drivers/pci/controller/pci-hyperv.c:2695:15: note: format string is defined here
2695 | "Need %#llx of high MMIO space. Consider reconfiguring the VM.\n",
| ~~~~^
| |
| long long unsigned int
| %#x
cc1: some warnings being treated as errors
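The shift-count-overflow at line 1756 and the `%llx` format mismatches both stem from `resource_size_t` being only 32 bits on i386. The usual remedy is the kernel's `upper_32_bits()`/`lower_32_bits()` helpers plus an explicit cast of the printed value; a minimal userspace sketch of what those helpers do (names match the kernel's, bodies reconstructed here):

```c
#include <stdint.h>

/* Userspace sketch of the kernel's upper_32_bits()/lower_32_bits()
 * helpers. Because the parameter is 64 bits wide, the shift happens
 * on a u64 and avoids the -Wshift-count-overflow warning seen when
 * the caller's type (resource_size_t on i386) is only 32 bits. */
static inline uint32_t upper_32_bits(uint64_t n)
{
        return (uint32_t)(n >> 32);
}

static inline uint32_t lower_32_bits(uint64_t n)
{
        return (uint32_t)(n & 0xffffffffULL);
}
```

With these, `(u32)(high_base >> 32)` becomes `upper_32_bits(high_base)`, which is well defined regardless of the width of `resource_size_t`, and the `%#llx` arguments would be cast to `unsigned long long` to satisfy `-Wformat`.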
--
drivers/net/ethernet/microsoft/mana/gdma_main.c: In function 'gdma_r64':
drivers/net/ethernet/microsoft/mana/gdma_main.c:18:9: error: implicit declaration of function 'readq'; did you mean 'readl'? [-Werror=implicit-function-declaration]
18 | return readq(g->bar0_va + offset);
| ^~~~~
| readl
drivers/net/ethernet/microsoft/mana/gdma_main.c: In function 'gdma_ring_doorbell':
drivers/net/ethernet/microsoft/mana/gdma_main.c:259:2: error: implicit declaration of function 'writeq'; did you mean 'writel'? [-Werror=implicit-function-declaration]
259 | writeq(e.as_uint64, addr);
| ^~~~~~
| writel
drivers/net/ethernet/microsoft/mana/gdma_main.c: In function 'gdma_write_sgl':
>> drivers/net/ethernet/microsoft/mana/gdma_main.c:1090:14: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
1090 | address = (u8 *)wqe_req->sgl[i].address;
| ^
cc1: some warnings being treated as errors
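On 32-bit targets `readq`/`writeq` are not declared unless the driver opts in via `<linux/io-64-nonatomic-lo-hi.h>` (or the `hi-lo` variant), which synthesizes them from two 32-bit accesses. A userspace sketch of the lo-hi composition, assuming a little-endian host and ordinary memory rather than real MMIO:

```c
#include <stdint.h>

/* Sketch of what linux/io-64-nonatomic-lo-hi.h provides when the
 * architecture lacks native 64-bit MMIO: a readq built from two
 * 32-bit reads, low word first. The two halves are not read
 * atomically, hence the "nonatomic" in the header name. */
static uint32_t read32(const volatile void *addr)
{
        return *(const volatile uint32_t *)addr;
}

static uint64_t readq_lo_hi(const volatile void *addr)
{
        uint64_t lo = read32(addr);
        uint64_t hi = read32((const volatile uint8_t *)addr + 4);

        return lo | (hi << 32);
}
```

`writeq` is synthesized the same way from two 32-bit writes; a driver whose hardware tolerates split accesses just includes the header and keeps calling `readq`/`writeq` unchanged.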
--
drivers/net/ethernet/microsoft/mana/hw_channel.c: In function 'hwc_post_rx_wqe':
>> drivers/net/ethernet/microsoft/mana/hw_channel.c:88:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
88 | sge->address = (u64)req->buf_sge_addr;
| ^
drivers/net/ethernet/microsoft/mana/hw_channel.c: In function 'hwc_post_tx_wqe':
drivers/net/ethernet/microsoft/mana/hw_channel.c:542:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
542 | sge->address = (u64)req->buf_sge_addr;
| ^
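The cast warnings above appear because `sizeof(void *)` is 4 on these targets while the hardware descriptor field is 64 bits. Going through `uintptr_t` makes the conversion width-correct on any architecture; for true DMA addresses, carrying a `dma_addr_t` from the DMA mapping API is better still. A small sketch (function names invented for illustration):

```c
#include <stdint.h>

/* Sketch: convert a CPU pointer to a 64-bit hardware address field
 * without -Wpointer-to-int-cast on 32-bit builds. The cast first goes
 * through uintptr_t, which matches the pointer width exactly, and is
 * then widened to 64 bits. (For addresses handed to a device, the
 * kernel's dma_addr_t from the DMA mapping API is the right type.) */
static uint64_t ptr_to_hw_addr(const void *p)
{
        return (uint64_t)(uintptr_t)p;
}

static void *hw_addr_to_ptr(uint64_t addr)
{
        return (void *)(uintptr_t)addr;
}
```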
Kconfig warnings: (for reference only)
WARNING: unmet direct dependencies detected for PCI_HYPERV
Depends on PCI && X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
Selected by
- MICROSOFT_MANA && NETDEVICES && ETHERNET && NET_VENDOR_MICROSOFT && PCI_MSI
vim +1756 drivers/pci/controller/pci-hyperv.c
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1671
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1672 /**
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1673 * prepopulate_bars() - Fill in BARs with defaults
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1674 * @hbus: Root PCI bus, as understood by this driver
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1675 *
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1676 * The core PCI driver code seems much, much happier if the BARs
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1677 * for a device have values upon first scan. So fill them in.
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1678 * The algorithm below works down from large sizes to small,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1679 * attempting to pack the assignments optimally. The assumption,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1680 * enforced in other parts of the code, is that the beginning of
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1681 * the memory-mapped I/O space will be aligned on the largest
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1682 * BAR size.
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1683 */
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1684 static void prepopulate_bars(struct hv_pcibus_device *hbus)
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1685 {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1686 resource_size_t high_size = 0;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1687 resource_size_t low_size = 0;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1688 resource_size_t high_base = 0;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1689 resource_size_t low_base = 0;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1690 resource_size_t bar_size;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1691 struct hv_pci_dev *hpdev;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1692 unsigned long flags;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1693 u64 bar_val;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1694 u32 command;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1695 bool high;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1696 int i;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1697
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1698 if (hbus->low_mmio_space) {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1699 low_size = 1ULL << (63 - __builtin_clzll(hbus->low_mmio_space));
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1700 low_base = hbus->low_mmio_res->start;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1701 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1702
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1703 if (hbus->high_mmio_space) {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1704 high_size = 1ULL <<
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1705 (63 - __builtin_clzll(hbus->high_mmio_space));
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1706 high_base = hbus->high_mmio_res->start;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1707 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1708
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1709 spin_lock_irqsave(&hbus->device_list_lock, flags);
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1710
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1711 /*
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1712 * Clear the memory enable bit, in case it's already set. This occurs
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1713 * in the suspend path of hibernation, where the device is suspended,
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1714 * resumed and suspended again: see hibernation_snapshot() and
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1715 * hibernation_platform_enter().
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1716 *
c77bfb54174308 drivers/pci/controller/pci-hyperv.c Bjorn Helgaas 2021-01-26 1717 * If the memory enable bit is already set, Hyper-V silently ignores
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1718 * the below BAR updates, and the related PCI device driver can not
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1719 * work, because reading from the device register(s) always returns
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1720 * 0xFFFFFFFF.
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1721 */
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1722 list_for_each_entry(hpdev, &hbus->children, list_entry) {
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1723 _hv_pcifront_read_config(hpdev, PCI_COMMAND, 2, &command);
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1724 command &= ~PCI_COMMAND_MEMORY;
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1725 _hv_pcifront_write_config(hpdev, PCI_COMMAND, 2, command);
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1726 }
ac82fc83270884 drivers/pci/controller/pci-hyperv.c Dexuan Cui 2019-11-24 1727
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1728 /* Pick addresses for the BARs. */
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1729 do {
5b8db8f66e08fa drivers/pci/host/pci-hyperv.c Stephen Hemminger 2018-05-23 1730 list_for_each_entry(hpdev, &hbus->children, list_entry) {
c9c13ba428ef90 drivers/pci/controller/pci-hyperv.c Denis Efremov 2019-09-28 1731 for (i = 0; i < PCI_STD_NUM_BARS; i++) {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1732 bar_val = hpdev->probed_bar[i];
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1733 if (bar_val == 0)
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1734 continue;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1735 high = bar_val & PCI_BASE_ADDRESS_MEM_TYPE_64;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1736 if (high) {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1737 bar_val |=
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1738 ((u64)hpdev->probed_bar[i + 1]
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1739 << 32);
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1740 } else {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1741 bar_val |= 0xffffffffULL << 32;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1742 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1743 bar_size = get_bar_size(bar_val);
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1744 if (high) {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1745 if (high_size != bar_size) {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1746 i++;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1747 continue;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1748 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1749 _hv_pcifront_write_config(hpdev,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1750 PCI_BASE_ADDRESS_0 + (4 * i),
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1751 4,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1752 (u32)(high_base & 0xffffff00));
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1753 i++;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1754 _hv_pcifront_write_config(hpdev,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1755 PCI_BASE_ADDRESS_0 + (4 * i),
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 @1756 4, (u32)(high_base >> 32));
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1757 high_base += bar_size;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1758 } else {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1759 if (low_size != bar_size)
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1760 continue;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1761 _hv_pcifront_write_config(hpdev,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1762 PCI_BASE_ADDRESS_0 + (4 * i),
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1763 4,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1764 (u32)(low_base & 0xffffff00));
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1765 low_base += bar_size;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1766 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1767 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1768 if (high_size <= 1 && low_size <= 1) {
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1769 /* Set the memory enable bit. */
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1770 _hv_pcifront_read_config(hpdev, PCI_COMMAND, 2,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1771 &command);
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1772 command |= PCI_COMMAND_MEMORY;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1773 _hv_pcifront_write_config(hpdev, PCI_COMMAND, 2,
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1774 command);
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1775 break;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1776 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1777 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1778
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1779 high_size >>= 1;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1780 low_size >>= 1;
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1781 } while (high_size || low_size);
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1782
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1783 spin_unlock_irqrestore(&hbus->device_list_lock, flags);
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1784 }
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 1785
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 40247 bytes --]
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
[not found] <20210406232321.12104-1-decui@microsoft.com>
2021-04-07 1:07 ` [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA) Andrew Lunn
2021-04-07 1:30 ` kernel test robot
@ 2021-04-07 1:52 ` kernel test robot
2021-04-07 1:53 ` kernel test robot
` (2 subsequent siblings)
5 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2021-04-07 1:52 UTC (permalink / raw)
To: Dexuan Cui, davem, kuba, kys, haiyangz, sthemmin, wei.liu, liuwe, netdev
Cc: kbuild-all, clang-built-linux, linux-kernel, linux-hyperv
[-- Attachment #1: Type: text/plain, Size: 4268 bytes --]
Hi Dexuan,
I love your patch! Yet something to improve:
[auto build test ERROR on net-next/master]
url: https://github.com/0day-ci/linux/commits/Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git cc0626c2aaed8e475efdd85fa374b497a7192e35
config: powerpc-randconfig-r024-20210406 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project c060945b23a1c54d4b2a053ff4b093a2277b303d)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install powerpc cross compiling tool for clang build
# apt-get install binutils-powerpc-linux-gnu
# https://github.com/0day-ci/linux/commit/f086d8bc693c2686de24a81398e49496ab3747a9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
git checkout f086d8bc693c2686de24a81398e49496ab3747a9
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All error/warnings (new ones prefixed by >>):
>> drivers/pci/controller/pci-hyperv.c:46:10: fatal error: 'asm/irqdomain.h' file not found
#include <asm/irqdomain.h>
^~~~~~~~~~~~~~~~~
1 error generated.
--
>> drivers/net/ethernet/microsoft/mana/gdma_main.c:18:9: error: implicit declaration of function 'readq' [-Werror,-Wimplicit-function-declaration]
return readq(g->bar0_va + offset);
^
>> drivers/net/ethernet/microsoft/mana/gdma_main.c:259:2: error: implicit declaration of function 'writeq' [-Werror,-Wimplicit-function-declaration]
writeq(e.as_uint64, addr);
^
2 errors generated.
--
>> drivers/net/ethernet/microsoft/mana/hw_channel.c:60:6: warning: variable 'ctx' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (!test_bit(resp_msg->response.hwc_msg_id,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/ethernet/microsoft/mana/hw_channel.c:77:2: note: uninitialized use occurs here
ctx->error = err;
^~~
drivers/net/ethernet/microsoft/mana/hw_channel.c:60:2: note: remove the 'if' if its condition is always false
if (!test_bit(resp_msg->response.hwc_msg_id,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/ethernet/microsoft/mana/hw_channel.c:57:28: note: initialize the variable 'ctx' to silence this warning
struct hwc_caller_ctx *ctx;
^
= NULL
1 warning generated.
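This clang warning points at a real bug rather than style: one path reaches `ctx->error` before `ctx` is assigned. Rather than initializing the pointer to NULL as the compiler note suggests, the cleaner fix is to make the early-exit path return before `ctx` is ever used. A self-contained sketch of that control-flow shape, with every name invented for illustration:

```c
#include <stddef.h>

struct caller_ctx {
        int error;
};

/* Sketch of the -Wsometimes-uninitialized fix: bail out before 'ctx'
 * is ever read when the message id is not pending, so no code path
 * can reach ctx->error with ctx uninitialized. */
static int handle_response(int msg_id_pending, struct caller_ctx *table,
                           size_t idx, int status)
{
        struct caller_ctx *ctx;

        if (!msg_id_pending)
                return -1;      /* early exit: ctx never touched */

        ctx = &table[idx];
        ctx->error = status;    /* only reachable after the assignment */
        return 0;
}
```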
Kconfig warnings: (for reference only)
WARNING: unmet direct dependencies detected for PCI_HYPERV
Depends on PCI && X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
Selected by
- MICROSOFT_MANA && NETDEVICES && ETHERNET && NET_VENDOR_MICROSOFT && PCI_MSI
vim +46 drivers/pci/controller/pci-hyperv.c
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 @46 #include <asm/irqdomain.h>
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 47 #include <asm/apic.h>
447ae316670230 drivers/pci/controller/pci-hyperv.c Nicolai Stange 2018-07-29 48 #include <linux/irq.h>
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 49 #include <linux/msi.h>
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 50 #include <linux/hyperv.h>
24196f0c7d4bba drivers/pci/host/pci-hyperv.c Elena Reshetova 2017-04-18 51 #include <linux/refcount.h>
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 52 #include <asm/mshyperv.h>
4daace0d8ce851 drivers/pci/host/pci-hyperv.c Jake Oshins 2016-02-16 53
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 39592 bytes --]
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
[not found] <20210406232321.12104-1-decui@microsoft.com>
` (2 preceding siblings ...)
2021-04-07 1:52 ` kernel test robot
@ 2021-04-07 1:53 ` kernel test robot
2021-04-07 8:10 ` Leon Romanovsky
2021-04-07 13:17 ` Wei Liu
5 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2021-04-07 1:53 UTC (permalink / raw)
To: Dexuan Cui, davem, kuba, kys, haiyangz, sthemmin, wei.liu, liuwe, netdev
Cc: kbuild-all, linux-kernel, linux-hyperv
[-- Attachment #1: Type: text/plain, Size: 4080 bytes --]
Hi Dexuan,
I love your patch! Perhaps something to improve:
[auto build test WARNING on net-next/master]
url: https://github.com/0day-ci/linux/commits/Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git cc0626c2aaed8e475efdd85fa374b497a7192e35
config: arc-allyesconfig (attached as .config)
compiler: arceb-elf-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/f086d8bc693c2686de24a81398e49496ab3747a9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
git checkout f086d8bc693c2686de24a81398e49496ab3747a9
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
drivers/net/ethernet/microsoft/mana/hw_channel.c: In function 'hwc_post_rx_wqe':
drivers/net/ethernet/microsoft/mana/hw_channel.c:88:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
88 | sge->address = (u64)req->buf_sge_addr;
| ^
drivers/net/ethernet/microsoft/mana/hw_channel.c: In function 'hwc_alloc_dma_buf':
>> drivers/net/ethernet/microsoft/mana/hw_channel.c:426:12: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
426 | base_pa = (u8 *)dma_buf->mem_info.dma_handle;
| ^
drivers/net/ethernet/microsoft/mana/hw_channel.c: In function 'hwc_post_tx_wqe':
drivers/net/ethernet/microsoft/mana/hw_channel.c:542:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
542 | sge->address = (u64)req->buf_sge_addr;
| ^
Kconfig warnings: (for reference only)
WARNING: unmet direct dependencies detected for PCI_HYPERV
Depends on PCI && X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
Selected by
- MICROSOFT_MANA && NETDEVICES && ETHERNET && NET_VENDOR_MICROSOFT && PCI_MSI
vim +426 drivers/net/ethernet/microsoft/mana/hw_channel.c
394
395 static int hwc_alloc_dma_buf(struct hw_channel_context *hwc, u16 q_depth,
396 u32 max_msg_size, struct hwc_dma_buf **dma_buf_p)
397 {
398 struct gdma_context *gc = gdma_dev_to_context(hwc->gdma_dev);
399 struct gdma_mem_info *gmi;
400 struct hwc_work_request *hwc_wr;
401 struct hwc_dma_buf *dma_buf;
402 u32 buf_size;
403 void *virt_addr;
404 u8 *base_pa;
405 int err;
406 u16 i;
407
408 dma_buf = kzalloc(sizeof(*dma_buf) +
409 q_depth * sizeof(struct hwc_work_request),
410 GFP_KERNEL);
411 if (!dma_buf)
412 return -ENOMEM;
413
414 dma_buf->num_reqs = q_depth;
415
416 buf_size = ALIGN(q_depth * max_msg_size, PAGE_SIZE);
417
418 gmi = &dma_buf->mem_info;
419 err = gdma_alloc_memory(gc, buf_size, gmi);
420 if (err) {
421 pr_err("Failed to allocate dma buffer: %d\n", err);
422 goto out;
423 }
424
425 virt_addr = dma_buf->mem_info.virt_addr;
> 426 base_pa = (u8 *)dma_buf->mem_info.dma_handle;
427
428 for (i = 0; i < q_depth; i++) {
429 hwc_wr = &dma_buf->reqs[i];
430
431 hwc_wr->buf_va = virt_addr + i * max_msg_size;
432 hwc_wr->buf_sge_addr = base_pa + i * max_msg_size;
433
434 hwc_wr->buf_len = max_msg_size;
435 }
436
437 *dma_buf_p = dma_buf;
438 return 0;
439 out:
440 kfree(dma_buf);
441 return err;
442 }
443
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 67540 bytes --]
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 1:07 ` [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA) Andrew Lunn
@ 2021-04-07 8:02 ` Dexuan Cui
2021-04-07 8:15 ` Leon Romanovsky
2021-04-07 12:19 ` Andrew Lunn
0 siblings, 2 replies; 25+ messages in thread
From: Dexuan Cui @ 2021-04-07 8:02 UTC (permalink / raw)
To: Andrew Lunn
Cc: davem, kuba, KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
wei.liu, Wei Liu, netdev, linux-kernel, linux-hyperv
> From: Andrew Lunn <andrew@lunn.ch>
> Sent: Tuesday, April 6, 2021 6:08 PM
> To: Dexuan Cui <decui@microsoft.com>
>
> > +static int gdma_query_max_resources(struct pci_dev *pdev)
> > +{
> > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > + struct gdma_general_req req = { 0 };
> > + struct gdma_query_max_resources_resp resp = { 0 };
> > + int err;
>
> Network drivers need to use reverse christmas tree. I spotted a number
> of functions getting this wrong. Please review the whole driver.
Hi Andrew, thanks for the quick comments!
I think in general the patch follows the reverse christmas tree style.
For the function you pointed out, I didn't follow the reverse
christmas tree style because I wanted to keep the variable defines
more natural, i.e. first define 'req' and then 'resp'.
I can swap the 2 lines here, i.e. define 'resp' first, but this looks a little
strange to me, because in some other functions the 'req' should be
defined first, e.g.
int gdma_test_eq(struct gdma_context *gc, struct gdma_queue *eq)
{
struct gdma_generate_test_event_req req = { 0 };
struct gdma_general_resp resp = { 0 };
And, some variable definitions cannot follow the reverse christmas tree
style due to dependencies, e.g.
static void hwc_init_event_handler(void *ctx, struct gdma_queue *q_self,
struct gdma_event *event)
{
struct hw_channel_context *hwc = ctx;
struct gdma_dev *gd = hwc->gdma_dev;
struct gdma_context *gc = gdma_dev_to_context(gd);
I failed to find the reverse christmas tree rule in the Documentation/
folder. I knew the rule and I thought it was documented there,
but it looks like it's not. So my understanding is that in general we
should follow the style, but there can be exceptions due to
dependencies or logical grouping.
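For reference, the convention orders local declarations from the longest line down to the shortest, with declarations that must follow an earlier one (as in hwc_init_event_handler above) being the accepted exception. A compilable sketch of the function in question reordered that way (the struct bodies are stand-ins, not the real driver definitions):

```c
#include <string.h>

/* Stand-in types: fields invented so the sketch compiles; only the
 * declaration ordering below is the point of this example. */
struct gdma_query_max_resources_resp { int max_sq, max_rq; };
struct gdma_general_req { int msg_type; };

/* "Reverse Christmas tree": locals sorted longest line first, which
 * here happens to put 'resp' before 'req' - the opposite of the
 * "natural" request-then-response order the patch used. */
static int gdma_query_max_resources_sketch(void)
{
        struct gdma_query_max_resources_resp resp = { 0 };
        struct gdma_general_req req = { 0 };
        int err = 0;

        (void)req;      /* request would be sent here */
        return err ? err : resp.max_sq;
}
```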
> > + gdma_init_req_hdr(&req.hdr, GDMA_QUERY_MAX_RESOURCES,
> > + sizeof(req), sizeof(resp));
> > +
> > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> > + if (err || resp.hdr.status) {
> > + pr_err("%s, line %d: err=%d, err=0x%x\n", __func__, __LINE__,
> > + err, resp.hdr.status);
>
> I expect checkpatch complained about this. Don't use pr_err(), use
> dev_err(&pdev->dev, ... or netdev_err(ndev, ... You should always have
> access to dev or ndev, so please change all pr_ calls.
I did run scripts/checkpatch.pl and it reported no error/warning. :-)
But I think you're correct. I'll change the pr_* to dev_* or netdev_*.
> > +static unsigned int num_queues = ANA_DEFAULT_NUM_QUEUE;
> > +module_param(num_queues, uint, 0444);
>
> No module parameters please.
>
> Andrew
This parameter was mostly for debugging purposes. I can remove it.
BTW, I do remember some maintainers asking people not to add module
parameters unless absolutely necessary, but I don't remember whether
the rule applies only to the net subsystem or to the whole drivers/
folder. It also doesn't seem to be documented in the Documentation/
folder. Please let me know if such a rule is explicitly defined
somewhere.
Thanks,
-- Dexuan
^ permalink raw reply [flat|nested] 25+ messages in thread
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 1:30 ` kernel test robot
@ 2021-04-07 8:08 ` Dexuan Cui
2021-04-07 13:02 ` Wei Liu
0 siblings, 1 reply; 25+ messages in thread
From: Dexuan Cui @ 2021-04-07 8:08 UTC (permalink / raw)
To: kernel test robot, davem, kuba, KY Srinivasan, Haiyang Zhang,
Stephen Hemminger, wei.liu, Wei Liu, netdev
Cc: kbuild-all, linux-kernel, linux-hyperv
> From: kernel test robot <lkp@intel.com>
> Sent: Tuesday, April 6, 2021 6:31 PM
> ...
> Hi Dexuan,
> I love your patch! Perhaps something to improve:
>
> All warnings (new ones prefixed by >>):
>
> drivers/pci/controller/pci-hyperv.c: In function 'hv_irq_unmask':
> drivers/pci/controller/pci-hyperv.c:1220:2: error: implicit declaration of
> function 'hv_set_msi_entry_from_desc'
> [-Werror=implicit-function-declaration]
> 1220 | hv_set_msi_entry_from_desc(&params->int_entry.msi_entry,
> msi_desc);
This build error looks strange, because the patch doesn't even touch
drivers/pci/controller/pci-hyperv.c.
I'll try to repro this build failure, and the other two failures
reported in two separate emails, and figure out what's happening.
PS, I tested building the patch in a fresh Ubuntu 20.04 VM and it was successful.
Thanks,
-- Dexuan
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
[not found] <20210406232321.12104-1-decui@microsoft.com>
` (3 preceding siblings ...)
2021-04-07 1:53 ` kernel test robot
@ 2021-04-07 8:10 ` Leon Romanovsky
2021-04-07 8:40 ` Dexuan Cui
2021-04-07 13:17 ` Wei Liu
5 siblings, 1 reply; 25+ messages in thread
From: Leon Romanovsky @ 2021-04-07 8:10 UTC (permalink / raw)
To: Dexuan Cui
Cc: davem, kuba, kys, haiyangz, sthemmin, wei.liu, liuwe, netdev,
linux-kernel, linux-hyperv
On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
> Add a VF driver for Microsoft Azure Network Adapter (MANA) that will be
> available in the future.
>
> Co-developed-by: Haiyang Zhang <haiyangz@microsoft.com>
> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
> ---
> MAINTAINERS | 4 +-
> drivers/net/ethernet/Kconfig | 1 +
> drivers/net/ethernet/Makefile | 1 +
> drivers/net/ethernet/microsoft/Kconfig | 29 +
> drivers/net/ethernet/microsoft/Makefile | 5 +
> drivers/net/ethernet/microsoft/mana/Makefile | 6 +
> drivers/net/ethernet/microsoft/mana/gdma.h | 731 +++++++
> .../net/ethernet/microsoft/mana/gdma_main.c | 1500 +++++++++++++
> .../net/ethernet/microsoft/mana/hw_channel.c | 851 ++++++++
> .../net/ethernet/microsoft/mana/hw_channel.h | 181 ++
> drivers/net/ethernet/microsoft/mana/mana.h | 529 +++++
> drivers/net/ethernet/microsoft/mana/mana_en.c | 1861 +++++++++++++++++
> .../ethernet/microsoft/mana/mana_ethtool.c | 276 +++
> .../net/ethernet/microsoft/mana/shm_channel.c | 290 +++
> .../net/ethernet/microsoft/mana/shm_channel.h | 19 +
> 15 files changed, 6283 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/ethernet/microsoft/Kconfig
> create mode 100644 drivers/net/ethernet/microsoft/Makefile
> create mode 100644 drivers/net/ethernet/microsoft/mana/Makefile
> create mode 100644 drivers/net/ethernet/microsoft/mana/gdma.h
> create mode 100644 drivers/net/ethernet/microsoft/mana/gdma_main.c
> create mode 100644 drivers/net/ethernet/microsoft/mana/hw_channel.c
> create mode 100644 drivers/net/ethernet/microsoft/mana/hw_channel.h
> create mode 100644 drivers/net/ethernet/microsoft/mana/mana.h
> create mode 100644 drivers/net/ethernet/microsoft/mana/mana_en.c
> create mode 100644 drivers/net/ethernet/microsoft/mana/mana_ethtool.c
> create mode 100644 drivers/net/ethernet/microsoft/mana/shm_channel.c
> create mode 100644 drivers/net/ethernet/microsoft/mana/shm_channel.h
<...>
> +int gdma_verify_vf_version(struct pci_dev *pdev)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + struct gdma_verify_ver_req req = { 0 };
> + struct gdma_verify_ver_resp resp = { 0 };
> + int err;
> +
> + gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> + sizeof(req), sizeof(resp));
> +
> + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> +
> + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> + if (err || resp.hdr.status) {
> + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> + resp.hdr.status);
> + return -EPROTO;
> + }
> +
> + return 0;
> +}
<...>
> +static int gdma_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> +{
> + struct gdma_context *gc;
> + void __iomem *bar0_va;
> + int bar = 0;
> + int err;
> +
> + err = pci_enable_device(pdev);
> + if (err)
> + return -ENXIO;
> +
> + pci_set_master(pdev);
> +
> + err = pci_request_regions(pdev, "gdma");
> + if (err)
> + goto disable_dev;
> +
> + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> + if (err)
> + goto release_region;
> +
> + err = -ENOMEM;
> + gc = vzalloc(sizeof(*gc));
> + if (!gc)
> + goto release_region;
> +
> + bar0_va = pci_iomap(pdev, bar, 0);
> + if (!bar0_va)
> + goto free_gc;
> +
> + gc->bar0_va = bar0_va;
> + gc->pci_dev = pdev;
> +
> + pci_set_drvdata(pdev, gc);
> +
> + gdma_init_registers(pdev);
> +
> + shm_channel_init(&gc->shm_channel, gc->shm_base);
> +
> + err = gdma_setup_irqs(pdev);
> + if (err)
> + goto unmap_bar;
> +
> + mutex_init(&gc->eq_test_event_mutex);
> +
> + err = hwc_create_channel(gc);
> + if (err)
> + goto remove_irq;
> +
> + err = gdma_verify_vf_version(pdev);
> + if (err)
> + goto remove_irq;
Will this VF driver be used in the guest VM? What will prevent users
from changing it?
I think such a version negotiation scheme is not allowed.
Thanks
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 8:02 ` Dexuan Cui
@ 2021-04-07 8:15 ` Leon Romanovsky
2021-04-07 8:28 ` Dexuan Cui
2021-04-07 12:19 ` Andrew Lunn
1 sibling, 1 reply; 25+ messages in thread
From: Leon Romanovsky @ 2021-04-07 8:15 UTC (permalink / raw)
To: Dexuan Cui
Cc: Andrew Lunn, davem, kuba, KY Srinivasan, Haiyang Zhang,
Stephen Hemminger, wei.liu, Wei Liu, netdev, linux-kernel,
linux-hyperv
On Wed, Apr 07, 2021 at 08:02:17AM +0000, Dexuan Cui wrote:
> > From: Andrew Lunn <andrew@lunn.ch>
> > Sent: Tuesday, April 6, 2021 6:08 PM
> > To: Dexuan Cui <decui@microsoft.com>
> >
> > > +static int gdma_query_max_resources(struct pci_dev *pdev)
> > > +{
> > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > + struct gdma_general_req req = { 0 };
> > > + struct gdma_query_max_resources_resp resp = { 0 };
> > > + int err;
> >
> > Network drivers need to use reverse christmas tree. I spotted a number
> > of functions getting this wrong. Please review the whole driver.
>
> Hi Andrew, thanks for the quick comments!
>
> I think In general the patch follows the reverse christmas tree style.
>
> For the function you pointed out, I didn't follow the reverse
> christmas tree style because I wanted to keep the variable defines
> more natural, i.e. first define 'req' and then 'resp'.
>
> I can swap the 2 lines here, i.e. define 'resp' first, but this looks a little
> strange to me, because in some other functions the 'req' should be
> defined first, e.g.
>
> int gdma_test_eq(struct gdma_context *gc, struct gdma_queue *eq)
> {
> struct gdma_generate_test_event_req req = { 0 };
> struct gdma_general_resp resp = { 0 };
BTW, you don't need to write { 0 }; {} is enough.
Thanks
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 8:15 ` Leon Romanovsky
@ 2021-04-07 8:28 ` Dexuan Cui
2021-04-07 12:44 ` Leon Romanovsky
0 siblings, 1 reply; 25+ messages in thread
From: Dexuan Cui @ 2021-04-07 8:28 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Andrew Lunn, davem, kuba, KY Srinivasan, Haiyang Zhang,
Stephen Hemminger, wei.liu, Wei Liu, netdev, linux-kernel,
linux-hyperv
> From: Leon Romanovsky <leon@kernel.org>
> Sent: Wednesday, April 7, 2021 1:15 AM
> > ...
> > int gdma_test_eq(struct gdma_context *gc, struct gdma_queue *eq)
> > {
> > struct gdma_generate_test_event_req req = { 0 };
> > struct gdma_general_resp resp = { 0 };
>
> BTW, you don't need to write { 0 }, the {} is enough.
Thanks for the suggestion! I'll use {0} in v2.
BTW, it looks like both are widely used in the kernel. Maybe someone
should update scripts/checkpatch.pl to add a warning against { 0 } in
new code, if {0} is preferred. :-)
dexuan@localhost:~/linux$ grep "= { 0 };" kernel/ -nr | wc -l
4
dexuan@localhost:~/linux$ grep "= {0};" kernel/ -nr | wc -l
4
dexuan@localhost:~/linux$ grep "= { 0 };" Documentation/ -nr
Documentation/networking/page_pool.rst:117: struct page_pool_params pp_params = { 0 };
dexuan@localhost:~/linux$ grep "= { 0 };" drivers/ -nr | wc -l
1085
dexuan@localhost:~/linux$ grep "= {0};" drivers/ -nr | wc -l
1321
dexuan@localhost:~/linux$ grep "= { 0 };" drivers/net/ -nr | wc -l
408
dexuan@localhost:~/linux$ grep "= {0};" drivers/net/ -nr | wc -l
708
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 8:10 ` Leon Romanovsky
@ 2021-04-07 8:40 ` Dexuan Cui
2021-04-07 12:51 ` Leon Romanovsky
0 siblings, 1 reply; 25+ messages in thread
From: Dexuan Cui @ 2021-04-07 8:40 UTC (permalink / raw)
To: Leon Romanovsky
Cc: davem, kuba, KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
wei.liu, Wei Liu, netdev, linux-kernel, linux-hyperv
> From: Leon Romanovsky <leon@kernel.org>
> Sent: Wednesday, April 7, 2021 1:10 AM
>
> <...>
>
> > +int gdma_verify_vf_version(struct pci_dev *pdev)
> > +{
> > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > + struct gdma_verify_ver_req req = { 0 };
> > + struct gdma_verify_ver_resp resp = { 0 };
> > + int err;
> > +
> > + gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> > + sizeof(req), sizeof(resp));
> > +
> > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > +
> > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> > + if (err || resp.hdr.status) {
> > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> > + resp.hdr.status);
> > + return -EPROTO;
> > + }
> > +
> > + return 0;
> > +}
>
> <...>
> > + err = gdma_verify_vf_version(pdev);
> > + if (err)
> > + goto remove_irq;
>
> Will this VF driver be used in the guest VM? What will prevent from users to
> change it?
> I think that such version negotiation scheme is not allowed.
Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
Currently gdma_verify_vf_version() just tells the PF driver that the VF driver
is only able to support GDMA_PROTOCOL_V1, and wants to use
GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
enum {
GDMA_PROTOCOL_UNDEFINED = 0,
GDMA_PROTOCOL_V1 = 1,
GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
GDMA_PROTOCOL_VALUE_MAX
};
The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I expect
gdma_verify_vf_version() should succeed here. If a user changes the Linux VF
driver and tries to use a protocol version not supported by the PF driver, then
gdma_verify_vf_version() will fail; later, if the VF driver tries to talk to the PF
driver using an unsupported message format, the PF driver will return a failure.
Thanks,
-- Dexuan
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 8:02 ` Dexuan Cui
2021-04-07 8:15 ` Leon Romanovsky
@ 2021-04-07 12:19 ` Andrew Lunn
1 sibling, 0 replies; 25+ messages in thread
From: Andrew Lunn @ 2021-04-07 12:19 UTC (permalink / raw)
To: Dexuan Cui
Cc: davem, kuba, KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
wei.liu, Wei Liu, netdev, linux-kernel, linux-hyperv
> And, some variable defines can not follow the reverse christmas tree
> style due to dependency, e.g.
> static void hwc_init_event_handler(void *ctx, struct gdma_queue *q_self,
> struct gdma_event *event)
> {
> struct hw_channel_context *hwc = ctx;
> struct gdma_dev *gd = hwc->gdma_dev;
> struct gdma_context *gc = gdma_dev_to_context(gd);
>
> I failed to find the reverse christmas tree rule in the Documentation/
> folder. I knew the rule and I thought it was documented there,
> but it looks like it's not. So my understanding is that in general we
> should follow the style, but there can be exceptions due to
> dependencies or logical grouping.
I expect DaveM will tell you to move gdma_dev_to_context(gd) into the
body of the function. Or if you have this pattern a lot, add a macro
gdma_ctx_to_context().
Reverse Christmas tree is not in the main Coding Style documentation,
but it is expected for netdev.
Andrew
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 8:28 ` Dexuan Cui
@ 2021-04-07 12:44 ` Leon Romanovsky
2021-04-07 21:59 ` Dexuan Cui
0 siblings, 1 reply; 25+ messages in thread
From: Leon Romanovsky @ 2021-04-07 12:44 UTC (permalink / raw)
To: Dexuan Cui
Cc: Andrew Lunn, davem, kuba, KY Srinivasan, Haiyang Zhang,
Stephen Hemminger, wei.liu, Wei Liu, netdev, linux-kernel,
linux-hyperv
On Wed, Apr 07, 2021 at 08:28:45AM +0000, Dexuan Cui wrote:
> > From: Leon Romanovsky <leon@kernel.org>
> > Sent: Wednesday, April 7, 2021 1:15 AM
> > > ...
> > > int gdma_test_eq(struct gdma_context *gc, struct gdma_queue *eq)
> > > {
> > > struct gdma_generate_test_event_req req = { 0 };
> > > struct gdma_general_resp resp = { 0 };
> >
> > BTW, you don't need to write { 0 }, the {} is enough.
>
> Thanks for the suggestion! I'll use {0} in v2.
You missed the point: "{ 0 }" should change to "{}", without the 0.
Thanks
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 8:40 ` Dexuan Cui
@ 2021-04-07 12:51 ` Leon Romanovsky
2021-04-07 14:41 ` Haiyang Zhang
0 siblings, 1 reply; 25+ messages in thread
From: Leon Romanovsky @ 2021-04-07 12:51 UTC (permalink / raw)
To: Dexuan Cui
Cc: davem, kuba, KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
wei.liu, Wei Liu, netdev, linux-kernel, linux-hyperv
On Wed, Apr 07, 2021 at 08:40:13AM +0000, Dexuan Cui wrote:
> > From: Leon Romanovsky <leon@kernel.org>
> > Sent: Wednesday, April 7, 2021 1:10 AM
> >
> > <...>
> >
> > > +int gdma_verify_vf_version(struct pci_dev *pdev)
> > > +{
> > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > + struct gdma_verify_ver_req req = { 0 };
> > > + struct gdma_verify_ver_resp resp = { 0 };
> > > + int err;
> > > +
> > > + gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> > > + sizeof(req), sizeof(resp));
> > > +
> > > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > +
> > > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> > > + if (err || resp.hdr.status) {
> > > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> > > + resp.hdr.status);
> > > + return -EPROTO;
> > > + }
> > > +
> > > + return 0;
> > > +}
> >
> > <...>
> > > + err = gdma_verify_vf_version(pdev);
> > > + if (err)
> > > + goto remove_irq;
> >
> > Will this VF driver be used in the guest VM? What will prevent from users to
> > change it?
> > I think that such version negotiation scheme is not allowed.
>
> Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
>
> Currently gdma_verify_vf_version() just tells the PF driver that the VF driver
> is only able to support GDMA_PROTOCOL_V1, and want to use
> GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
>
> enum {
> GDMA_PROTOCOL_UNDEFINED = 0,
> GDMA_PROTOCOL_V1 = 1,
> GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> GDMA_PROTOCOL_VALUE_MAX
> };
>
> The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I expect
> here gdma_verify_vf_version() should succeed. If a user changes the Linux VF
> driver and try to use a protocol version not supported by the PF driver, then
> gdma_verify_vf_version() will fail; later, if the VF driver tries to talk to the PF
> driver using an unsupported message format, the PF driver will return a failure.
The worry is not about the current code, but about the future code, when you
will support v2, v3, etc. First, your code will look like spaghetti, and
second, users will try to mix vX with "unsupported" commands just for
fun.
Thanks
>
> Thanks,
> -- Dexuan
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 8:08 ` Dexuan Cui
@ 2021-04-07 13:02 ` Wei Liu
0 siblings, 0 replies; 25+ messages in thread
From: Wei Liu @ 2021-04-07 13:02 UTC (permalink / raw)
To: Dexuan Cui
Cc: kernel test robot, davem, kuba, KY Srinivasan, Haiyang Zhang,
Stephen Hemminger, wei.liu, Wei Liu, netdev, kbuild-all,
linux-kernel, linux-hyperv
On Wed, Apr 07, 2021 at 08:08:59AM +0000, Dexuan Cui wrote:
> > From: kernel test robot <lkp@intel.com>
> > Sent: Tuesday, April 6, 2021 6:31 PM
> > ...
> > Hi Dexuan,
> > I love your patch! Perhaps something to improve:
> >
> > All warnings (new ones prefixed by >>):
> >
> > drivers/pci/controller/pci-hyperv.c: In function 'hv_irq_unmask':
> > drivers/pci/controller/pci-hyperv.c:1220:2: error: implicit declaration of
> > function 'hv_set_msi_entry_from_desc'
> > [-Werror=implicit-function-declaration]
> > 1220 | hv_set_msi_entry_from_desc(&params->int_entry.msi_entry,
> > msi_desc);
>
> This build error looks strange, because the patch doesn't even touch the driver
> drivers/pci/controller/pci-hyperv.c.
>
I think this is normal. From my experience, the bot doesn't restrict
itself to the changed code.
Wei.
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
[not found] <20210406232321.12104-1-decui@microsoft.com>
` (4 preceding siblings ...)
2021-04-07 8:10 ` Leon Romanovsky
@ 2021-04-07 13:17 ` Wei Liu
2021-04-07 14:34 ` Haiyang Zhang
5 siblings, 1 reply; 25+ messages in thread
From: Wei Liu @ 2021-04-07 13:17 UTC (permalink / raw)
To: Dexuan Cui
Cc: davem, kuba, kys, haiyangz, sthemmin, wei.liu, liuwe, netdev,
linux-kernel, linux-hyperv
On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
[...]
> +config MICROSOFT_MANA
> + tristate "Microsoft Azure Network Adapter (MANA) support"
> + default m
> + depends on PCI_MSI
> + select PCI_HYPERV
OOI which part of the code requires PCI_HYPERV?
Asking because I can't immediately find code that looks to be Hyper-V
specific (searching for vmbus etc). This device looks like any other
PCI device to me.
Wei.
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 13:17 ` Wei Liu
@ 2021-04-07 14:34 ` Haiyang Zhang
2021-04-07 15:00 ` Wei Liu
0 siblings, 1 reply; 25+ messages in thread
From: Haiyang Zhang @ 2021-04-07 14:34 UTC (permalink / raw)
To: Wei Liu, Dexuan Cui
Cc: davem, kuba, KY Srinivasan, Stephen Hemminger, Wei Liu, netdev,
linux-kernel, linux-hyperv
> -----Original Message-----
> From: Wei Liu <wei.liu@kernel.org>
> Sent: Wednesday, April 7, 2021 9:17 AM
> To: Dexuan Cui <decui@microsoft.com>
> Cc: davem@davemloft.net; kuba@kernel.org; KY Srinivasan
> <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen
> Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> Network Adapter (MANA)
>
> On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
> [...]
> > +config MICROSOFT_MANA
> > + tristate "Microsoft Azure Network Adapter (MANA) support"
> > + default m
> > + depends on PCI_MSI
> > + select PCI_HYPERV
>
> OOI which part of the code requires PCI_HYPERV?
>
> Asking because I can't immediately find code that looks to be Hyper-V
> specific (searching for vmbus etc). This device looks like any other PCI devices
> to me.
It depends on the VF NIC's PCI config space, which is presented by the pci_hyperv driver.
Thanks,
- Haiyang
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 12:51 ` Leon Romanovsky
@ 2021-04-07 14:41 ` Haiyang Zhang
2021-04-07 14:55 ` Leon Romanovsky
0 siblings, 1 reply; 25+ messages in thread
From: Haiyang Zhang @ 2021-04-07 14:41 UTC (permalink / raw)
To: Leon Romanovsky, Dexuan Cui
Cc: davem, kuba, KY Srinivasan, Stephen Hemminger, wei.liu, Wei Liu,
netdev, linux-kernel, linux-hyperv
> -----Original Message-----
> From: Leon Romanovsky <leon@kernel.org>
> Sent: Wednesday, April 7, 2021 8:51 AM
> To: Dexuan Cui <decui@microsoft.com>
> Cc: davem@davemloft.net; kuba@kernel.org; KY Srinivasan
> <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen
> Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> Network Adapter (MANA)
>
> On Wed, Apr 07, 2021 at 08:40:13AM +0000, Dexuan Cui wrote:
> > > From: Leon Romanovsky <leon@kernel.org>
> > > Sent: Wednesday, April 7, 2021 1:10 AM
> > >
> > > <...>
> > >
> > > > +int gdma_verify_vf_version(struct pci_dev *pdev)
> > > > +{
> > > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > > + struct gdma_verify_ver_req req = { 0 };
> > > > + struct gdma_verify_ver_resp resp = { 0 };
> > > > + int err;
> > > > +
> > > > + gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> > > > + sizeof(req), sizeof(resp));
> > > > +
> > > > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > > +
> > > > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> > > > + if (err || resp.hdr.status) {
> > > > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> > > > + resp.hdr.status);
> > > > + return -EPROTO;
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > >
> > > <...>
> > > > + err = gdma_verify_vf_version(pdev);
> > > > + if (err)
> > > > + goto remove_irq;
> > >
> > > Will this VF driver be used in the guest VM? What will prevent from users
> to
> > > change it?
> > > I think that such version negotiation scheme is not allowed.
> >
> > Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
> >
> > Currently gdma_verify_vf_version() just tells the PF driver that the VF
> driver
> > is only able to support GDMA_PROTOCOL_V1, and want to use
> > GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> >
> > enum {
> > GDMA_PROTOCOL_UNDEFINED = 0,
> > GDMA_PROTOCOL_V1 = 1,
> > GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> > GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> > GDMA_PROTOCOL_VALUE_MAX
> > };
> >
> > The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I
> expect
> > here gdma_verify_vf_version() should succeed. If a user changes the Linux
> VF
> > driver and try to use a protocol version not supported by the PF driver,
> then
> > gdma_verify_vf_version() will fail; later, if the VF driver tries to talk to the
> PF
> > driver using an unsupported message format, the PF driver will return a
> failure.
>
> The worry is not for the current code, but for the future one when you will
> support v2, v3 e.t.c. First, your code will look like a spaghetti and
> second, users will try and mix vX with "unsupported" commands just for the
> fun.
In the future, if the protocol version is updated on the host side, guests
will need to support different host versions, because not all hosts are
updated simultaneously. So this negotiation is necessary to learn the
supported version and decide the proper command version to use.
If any user tries "unsupported commands just for fun", the host will deny
them and return an error.
Thanks,
- Haiyang
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 14:41 ` Haiyang Zhang
@ 2021-04-07 14:55 ` Leon Romanovsky
2021-04-07 15:05 ` Haiyang Zhang
0 siblings, 1 reply; 25+ messages in thread
From: Leon Romanovsky @ 2021-04-07 14:55 UTC (permalink / raw)
To: Haiyang Zhang
Cc: Dexuan Cui, davem, kuba, KY Srinivasan, Stephen Hemminger,
wei.liu, Wei Liu, netdev, linux-kernel, linux-hyperv
On Wed, Apr 07, 2021 at 02:41:45PM +0000, Haiyang Zhang wrote:
>
>
> > -----Original Message-----
> > From: Leon Romanovsky <leon@kernel.org>
> > Sent: Wednesday, April 7, 2021 8:51 AM
> > To: Dexuan Cui <decui@microsoft.com>
> > Cc: davem@davemloft.net; kuba@kernel.org; KY Srinivasan
> > <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen
> > Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> > <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> > kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> > Network Adapter (MANA)
> >
> > On Wed, Apr 07, 2021 at 08:40:13AM +0000, Dexuan Cui wrote:
> > > > From: Leon Romanovsky <leon@kernel.org>
> > > > Sent: Wednesday, April 7, 2021 1:10 AM
> > > >
> > > > <...>
> > > >
> > > > > +int gdma_verify_vf_version(struct pci_dev *pdev)
> > > > > +{
> > > > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > > > + struct gdma_verify_ver_req req = { 0 };
> > > > > + struct gdma_verify_ver_resp resp = { 0 };
> > > > > + int err;
> > > > > +
> > > > > + gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> > > > > + sizeof(req), sizeof(resp));
> > > > > +
> > > > > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > > > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > > > +
> > > > > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> > > > > + if (err || resp.hdr.status) {
> > > > > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> > > > > + resp.hdr.status);
> > > > > + return -EPROTO;
> > > > > + }
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > >
> > > > <...>
> > > > > + err = gdma_verify_vf_version(pdev);
> > > > > + if (err)
> > > > > + goto remove_irq;
> > > >
> > > > Will this VF driver be used in the guest VM? What will prevent from users
> > to
> > > > change it?
> > > > I think that such version negotiation scheme is not allowed.
> > >
> > > Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
> > >
> > > Currently gdma_verify_vf_version() just tells the PF driver that the VF
> > driver
> > > is only able to support GDMA_PROTOCOL_V1, and want to use
> > > GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> > >
> > > enum {
> > > GDMA_PROTOCOL_UNDEFINED = 0,
> > > GDMA_PROTOCOL_V1 = 1,
> > > GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> > > GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> > > GDMA_PROTOCOL_VALUE_MAX
> > > };
> > >
> > > The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I
> > expect
> > > here gdma_verify_vf_version() should succeed. If a user changes the Linux
> > VF
> > > driver and try to use a protocol version not supported by the PF driver,
> > then
> > > gdma_verify_vf_version() will fail; later, if the VF driver tries to talk to the
> > PF
> > > driver using an unsupported message format, the PF driver will return a
> > failure.
> >
> > The worry is not for the current code, but for the future one when you will
> > support v2, v3 e.t.c. First, your code will look like a spaghetti and
> > second, users will try and mix vX with "unsupported" commands just for the
> > fun.
>
> In the future, if the protocol version updated on the host side, guests need
> to support different host versions because not all hosts are updated
> (simultaneously). So this negotiation is necessary to know the supported
> version, and decide the proper command version to use.
And how do other paravirtual drivers solve this negotiation scheme?
>
> If any user try "unsupported commands just for the fun", the host will deny
> and return an error.
>
> Thanks,
> - Haiyang
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 14:34 ` Haiyang Zhang
@ 2021-04-07 15:00 ` Wei Liu
2021-04-07 15:16 ` Haiyang Zhang
0 siblings, 1 reply; 25+ messages in thread
From: Wei Liu @ 2021-04-07 15:00 UTC (permalink / raw)
To: Haiyang Zhang
Cc: Wei Liu, Dexuan Cui, davem, kuba, KY Srinivasan,
Stephen Hemminger, Wei Liu, netdev, linux-kernel, linux-hyperv
On Wed, Apr 07, 2021 at 02:34:01PM +0000, Haiyang Zhang wrote:
>
>
> > -----Original Message-----
> > From: Wei Liu <wei.liu@kernel.org>
> > Sent: Wednesday, April 7, 2021 9:17 AM
> > To: Dexuan Cui <decui@microsoft.com>
> > Cc: davem@davemloft.net; kuba@kernel.org; KY Srinivasan
> > <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen
> > Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> > <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> > kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> > Network Adapter (MANA)
> >
> > On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
> > [...]
> > > +config MICROSOFT_MANA
> > > + tristate "Microsoft Azure Network Adapter (MANA) support"
> > > + default m
> > > + depends on PCI_MSI
> > > + select PCI_HYPERV
> >
> > OOI which part of the code requires PCI_HYPERV?
> >
> > Asking because I can't immediately find code that looks to be Hyper-V
> > specific (searching for vmbus etc). This device looks like any other PCI devices
> > to me.
>
> It depends on the VF nic's PCI config space which is presented by the pci_hyperv driver.
I think all that matters is that the PCI bus is able to handle the
configuration space access, right? Assuming there is an emulated PCI
root complex which exposes the config space to the driver, will this
driver still work?
I'm trying to understand how tightly coupled this driver is with
Hyper-V PCI. In an alternate universe, Microsoft may suddenly decide to
sell this hardware, and someone may want to pass through a VF via VFIO. I
don't see why this driver wouldn't work, hence the original question.
There is no need to change the code. I'm just curious about a tiny
detail in the implementation.
Wei.
>
> Thanks,
> - Haiyang
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 14:55 ` Leon Romanovsky
@ 2021-04-07 15:05 ` Haiyang Zhang
2021-04-07 17:07 ` Leon Romanovsky
0 siblings, 1 reply; 25+ messages in thread
From: Haiyang Zhang @ 2021-04-07 15:05 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Dexuan Cui, davem, kuba, KY Srinivasan, Stephen Hemminger,
wei.liu, Wei Liu, netdev, linux-kernel, linux-hyperv
> -----Original Message-----
> From: Leon Romanovsky <leon@kernel.org>
> Sent: Wednesday, April 7, 2021 10:55 AM
> To: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Dexuan Cui <decui@microsoft.com>; davem@davemloft.net;
> kuba@kernel.org; KY Srinivasan <kys@microsoft.com>; Stephen Hemminger
> <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> Network Adapter (MANA)
>
> On Wed, Apr 07, 2021 at 02:41:45PM +0000, Haiyang Zhang wrote:
> >
> >
> > > -----Original Message-----
> > > From: Leon Romanovsky <leon@kernel.org>
> > > Sent: Wednesday, April 7, 2021 8:51 AM
> > > To: Dexuan Cui <decui@microsoft.com>
> > > Cc: davem@davemloft.net; kuba@kernel.org; KY Srinivasan
> > > <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>;
> Stephen
> > > Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> > > <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> > > kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> > > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft
> > > Azure Network Adapter (MANA)
> > >
> > > On Wed, Apr 07, 2021 at 08:40:13AM +0000, Dexuan Cui wrote:
> > > > > From: Leon Romanovsky <leon@kernel.org>
> > > > > Sent: Wednesday, April 7, 2021 1:10 AM
> > > > >
> > > > > <...>
> > > > >
> > > > > > +int gdma_verify_vf_version(struct pci_dev *pdev) {
> > > > > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > > > > + struct gdma_verify_ver_req req = { 0 };
> > > > > > + struct gdma_verify_ver_resp resp = { 0 };
> > > > > > + int err;
> > > > > > +
> > > > > > + gdma_init_req_hdr(&req.hdr,
> GDMA_VERIFY_VF_DRIVER_VERSION,
> > > > > > + sizeof(req), sizeof(resp));
> > > > > > +
> > > > > > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > > > > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > > > > +
> > > > > > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp),
> &resp);
> > > > > > + if (err || resp.hdr.status) {
> > > > > > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n",
> err,
> > > > > > + resp.hdr.status);
> > > > > > + return -EPROTO;
> > > > > > + }
> > > > > > +
> > > > > > + return 0;
> > > > > > +}
> > > > >
> > > > > <...>
> > > > > > + err = gdma_verify_vf_version(pdev);
> > > > > > + if (err)
> > > > > > + goto remove_irq;
> > > > >
> > > > > Will this VF driver be used in the guest VM? What will prevent
> > > > > from users
> > > to
> > > > > change it?
> > > > > I think that such a version negotiation scheme is not allowed.
> > > >
> > > > Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
> > > >
> > > > Currently gdma_verify_vf_version() just tells the PF driver that
> > > > the VF
> > > driver
> > > > is only able to support GDMA_PROTOCOL_V1, and wants to use
> > > > GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> > > >
> > > > enum {
> > > > GDMA_PROTOCOL_UNDEFINED = 0,
> > > > GDMA_PROTOCOL_V1 = 1,
> > > > GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> > > > GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> > > > GDMA_PROTOCOL_VALUE_MAX
> > > > };
> > > >
> > > > The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I
> > > expect
> > > > here gdma_verify_vf_version() should succeed. If a user changes
> > > > the Linux
> > > VF
> > > > driver and tries to use a protocol version not supported by the PF
> > > > driver,
> > > then
> > > > gdma_verify_vf_version() will fail; later, if the VF driver tries
> > > > to talk to the
> > > PF
> > > > driver using an unsupported message format, the PF driver will
> > > > return a
> > > failure.
> > >
> > > The worry is not for the current code, but for the future one when
> > > you will support v2, v3, etc. First, your code will look like
> > > spaghetti, and second, users will try to mix vX with "unsupported"
> > > commands just for fun.
> >
> > In the future, if the protocol version is updated on the host side,
> > guests need to support different host versions because not all hosts
> > are updated (simultaneously). So this negotiation is necessary to know
> > the supported version, and decide the proper command version to use.
>
> And how do other paravirtual drivers solve this negotiation scheme?
I saw some other drivers used version negotiation too, for example:
/**
* ixgbevf_negotiate_api_version_vf - Negotiate supported API version
* @hw: pointer to the HW structure
* @api: integer containing requested API version
**/
static int ixgbevf_negotiate_api_version_vf(struct ixgbe_hw *hw, int api)
{
Thanks,
- Haiyang
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 15:00 ` Wei Liu
@ 2021-04-07 15:16 ` Haiyang Zhang
0 siblings, 0 replies; 25+ messages in thread
From: Haiyang Zhang @ 2021-04-07 15:16 UTC (permalink / raw)
To: Wei Liu
Cc: Dexuan Cui, davem, kuba, KY Srinivasan, Stephen Hemminger,
Wei Liu, netdev, linux-kernel, linux-hyperv
> -----Original Message-----
> From: Wei Liu <wei.liu@kernel.org>
> Sent: Wednesday, April 7, 2021 11:01 AM
> To: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Wei Liu <wei.liu@kernel.org>; Dexuan Cui <decui@microsoft.com>;
> davem@davemloft.net; kuba@kernel.org; KY Srinivasan
> <kys@microsoft.com>; Stephen Hemminger <sthemmin@microsoft.com>;
> Wei Liu <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> Network Adapter (MANA)
>
> On Wed, Apr 07, 2021 at 02:34:01PM +0000, Haiyang Zhang wrote:
> >
> >
> > > -----Original Message-----
> > > From: Wei Liu <wei.liu@kernel.org>
> > > Sent: Wednesday, April 7, 2021 9:17 AM
> > > To: Dexuan Cui <decui@microsoft.com>
> > > Cc: davem@davemloft.net; kuba@kernel.org; KY Srinivasan
> > > <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>;
> Stephen
> > > Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> > > <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> > > kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> > > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft
> > > Azure Network Adapter (MANA)
> > >
> > > On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
> > > [...]
> > > > +config MICROSOFT_MANA
> > > > + tristate "Microsoft Azure Network Adapter (MANA) support"
> > > > + default m
> > > > + depends on PCI_MSI
> > > > + select PCI_HYPERV
> > >
> > > OOI which part of the code requires PCI_HYPERV?
> > >
> > > Asking because I can't immediately find code that looks to be
> > > Hyper-V specific (searching for vmbus etc). This device looks like
> > > any other PCI devices to me.
> >
> > It depends on the VF NIC's PCI config space, which is presented by the
> pci_hyperv driver.
>
> I think all that matters is that the PCI bus is able to handle the configuration space
> access, right? Assuming there is an emulated PCI root complex which
> exposes the config space to the driver, will this driver still work?
>
> I'm trying to understand how tightly coupled with Hyper-V PCI this driver is.
> In an alternative universe, Microsoft may suddenly decide to sell this
> hardware and someone wants to pass through a VF via VFIO. I don't see
> how this driver wouldn't work, hence the original question.
>
> There is no need to change the code. I'm just curious about a tiny detail in
> the implementation.
Currently, the PCI config space only comes from pci_hyperv, so we have this
dependency.
If the PCI config space is presented in other ways in an "alternative universe",
we may need to add other dependencies. And yes, the VF should continue to work :)
Thanks,
- Haiyang
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 15:05 ` Haiyang Zhang
@ 2021-04-07 17:07 ` Leon Romanovsky
0 siblings, 0 replies; 25+ messages in thread
From: Leon Romanovsky @ 2021-04-07 17:07 UTC (permalink / raw)
To: Haiyang Zhang
Cc: Dexuan Cui, davem, kuba, KY Srinivasan, Stephen Hemminger,
wei.liu, Wei Liu, netdev, linux-kernel, linux-hyperv
On Wed, Apr 07, 2021 at 03:05:26PM +0000, Haiyang Zhang wrote:
>
>
> > -----Original Message-----
> > From: Leon Romanovsky <leon@kernel.org>
> > Sent: Wednesday, April 7, 2021 10:55 AM
> > To: Haiyang Zhang <haiyangz@microsoft.com>
> > Cc: Dexuan Cui <decui@microsoft.com>; davem@davemloft.net;
> > kuba@kernel.org; KY Srinivasan <kys@microsoft.com>; Stephen Hemminger
> > <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> > <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> > kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> > Network Adapter (MANA)
> >
> > On Wed, Apr 07, 2021 at 02:41:45PM +0000, Haiyang Zhang wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: Leon Romanovsky <leon@kernel.org>
> > > > Sent: Wednesday, April 7, 2021 8:51 AM
> > > > To: Dexuan Cui <decui@microsoft.com>
> > > > Cc: davem@davemloft.net; kuba@kernel.org; KY Srinivasan
> > > > <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>;
> > Stephen
> > > > Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Wei Liu
> > > > <liuwe@microsoft.com>; netdev@vger.kernel.org; linux-
> > > > kernel@vger.kernel.org; linux-hyperv@vger.kernel.org
> > > > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft
> > > > Azure Network Adapter (MANA)
> > > >
> > > > On Wed, Apr 07, 2021 at 08:40:13AM +0000, Dexuan Cui wrote:
> > > > > > From: Leon Romanovsky <leon@kernel.org>
> > > > > > Sent: Wednesday, April 7, 2021 1:10 AM
> > > > > >
> > > > > > <...>
> > > > > >
> > > > > > > +int gdma_verify_vf_version(struct pci_dev *pdev) {
> > > > > > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > > > > > + struct gdma_verify_ver_req req = { 0 };
> > > > > > > + struct gdma_verify_ver_resp resp = { 0 };
> > > > > > > + int err;
> > > > > > > +
> > > > > > > + gdma_init_req_hdr(&req.hdr,
> > GDMA_VERIFY_VF_DRIVER_VERSION,
> > > > > > > + sizeof(req), sizeof(resp));
> > > > > > > +
> > > > > > > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > > > > > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > > > > > +
> > > > > > > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp),
> > &resp);
> > > > > > > + if (err || resp.hdr.status) {
> > > > > > > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n",
> > err,
> > > > > > > + resp.hdr.status);
> > > > > > > + return -EPROTO;
> > > > > > > + }
> > > > > > > +
> > > > > > > + return 0;
> > > > > > > +}
> > > > > >
> > > > > > <...>
> > > > > > > + err = gdma_verify_vf_version(pdev);
> > > > > > > + if (err)
> > > > > > > + goto remove_irq;
> > > > > >
> > > > > > Will this VF driver be used in the guest VM? What will prevent
> > > > > > from users
> > > > to
> > > > > > change it?
> > > > > > I think that such a version negotiation scheme is not allowed.
> > > > >
> > > > > Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
> > > > >
> > > > > Currently gdma_verify_vf_version() just tells the PF driver that
> > > > > the VF
> > > > driver
> > > > > is only able to support GDMA_PROTOCOL_V1, and wants to use
> > > > > GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> > > > >
> > > > > enum {
> > > > > GDMA_PROTOCOL_UNDEFINED = 0,
> > > > > GDMA_PROTOCOL_V1 = 1,
> > > > > GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> > > > > GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> > > > > GDMA_PROTOCOL_VALUE_MAX
> > > > > };
> > > > >
> > > > > The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I
> > > > expect
> > > > > here gdma_verify_vf_version() should succeed. If a user changes
> > > > > the Linux
> > > > VF
> > > > > driver and tries to use a protocol version not supported by the PF
> > > > > driver,
> > > > then
> > > > > gdma_verify_vf_version() will fail; later, if the VF driver tries
> > > > > to talk to the
> > > > PF
> > > > > driver using an unsupported message format, the PF driver will
> > > > > return a
> > > > failure.
> > > >
> > > > The worry is not for the current code, but for the future one when
> > > > you will support v2, v3, etc. First, your code will look like
> > > > spaghetti, and second, users will try to mix vX with "unsupported"
> > > > commands just for fun.
> > >
> > > In the future, if the protocol version is updated on the host side,
> > > guests need to support different host versions because not all hosts
> > > are updated (simultaneously). So this negotiation is necessary to know
> > > the supported version, and decide the proper command version to use.
> >
> > And how do other paravirtual drivers solve this negotiation scheme?
>
> I saw some other drivers used version negotiation too, for example:
I see, thanks.
>
> /**
> * ixgbevf_negotiate_api_version_vf - Negotiate supported API version
> * @hw: pointer to the HW structure
> * @api: integer containing requested API version
> **/
> static int ixgbevf_negotiate_api_version_vf(struct ixgbe_hw *hw, int api)
> {
>
> Thanks,
> - Haiyang
* RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 12:44 ` Leon Romanovsky
@ 2021-04-07 21:59 ` Dexuan Cui
2021-04-07 22:37 ` Bernd Petrovitsch
2021-04-08 7:30 ` Leon Romanovsky
0 siblings, 2 replies; 25+ messages in thread
From: Dexuan Cui @ 2021-04-07 21:59 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Andrew Lunn, davem, kuba, KY Srinivasan, Haiyang Zhang,
Stephen Hemminger, wei.liu, Wei Liu, netdev, linux-kernel,
linux-hyperv
> From: Leon Romanovsky <leon@kernel.org>
> Sent: Wednesday, April 7, 2021 5:45 AM
> > >
> > > BTW, you don't need to write { 0 }, the {} is enough.
> >
> > Thanks for the suggestion! I'll use {0} in v2.
>
> You missed the point, "{ 0 }" change to be "{}" without 0.
Got it. Will make the suggested change.
FWIW, {0} and { 0 } are still widely used, but it looks like
{} is indeed preferred:
$ grep "= {};" drivers/net/ -nr | wc -l
829
$ grep "= {0};" drivers/net/ -nr | wc -l
708
$ grep "= {};" kernel/ -nr | wc -l
29
$ grep "= {0};" kernel/ -nr | wc -l
4
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 21:59 ` Dexuan Cui
@ 2021-04-07 22:37 ` Bernd Petrovitsch
2021-04-08 7:30 ` Leon Romanovsky
1 sibling, 0 replies; 25+ messages in thread
From: Bernd Petrovitsch @ 2021-04-07 22:37 UTC (permalink / raw)
To: Dexuan Cui, Leon Romanovsky
Cc: Andrew Lunn, davem, kuba, KY Srinivasan, Haiyang Zhang,
Stephen Hemminger, wei.liu, Wei Liu, netdev, linux-kernel,
linux-hyperv
Hi all!
On 07/04/2021 23:59, Dexuan Cui wrote:
[...]
> FWIW, {0} and { 0 } are still widely used, but it looks like
> {} is indeed more preferred:
>
> $ grep "= {};" drivers/net/ -nr | wc -l
> 829
$ egrep -nr "=[[:space:]]*{[[:space:]]*};" drivers/net/ | wc -l
872
> $ grep "= {0};" drivers/net/ -nr | wc -l
> 708
$ egrep -nr "=[[:space:]]*{[[:space:]]*0[[:space:]]*};" drivers/net/ |
wc -l
1078
> $ grep "= {};" kernel/ -nr | wc -l
> 29
$ egrep -nr "=[[:space:]]*{[[:space:]]*};" kernel/ | wc -l
45
> $ grep "= {0};" kernel/ -nr | wc -l
> 4
$ egrep -nr "=[[:space:]]*{[[:space:]]*0[[:space:]]*};" kernel | wc -l
8
MfG,
Bernd
--
Bernd Petrovitsch Email : bernd@petrovitsch.priv.at
There is NO CLOUD, just other people's computers. - FSFE
LUGA : http://www.luga.at
* Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)
2021-04-07 21:59 ` Dexuan Cui
2021-04-07 22:37 ` Bernd Petrovitsch
@ 2021-04-08 7:30 ` Leon Romanovsky
1 sibling, 0 replies; 25+ messages in thread
From: Leon Romanovsky @ 2021-04-08 7:30 UTC (permalink / raw)
To: Dexuan Cui
Cc: Andrew Lunn, davem, kuba, KY Srinivasan, Haiyang Zhang,
Stephen Hemminger, wei.liu, Wei Liu, netdev, linux-kernel,
linux-hyperv
On Wed, Apr 07, 2021 at 09:59:52PM +0000, Dexuan Cui wrote:
> > From: Leon Romanovsky <leon@kernel.org>
> > Sent: Wednesday, April 7, 2021 5:45 AM
> > > >
> > > > BTW, you don't need to write { 0 }, the {} is enough.
> > >
> > > Thanks for the suggestion! I'll use {0} in v2.
> >
> > You missed the point, "{ 0 }" change to be "{}" without 0.
>
> Got it. Will make the suggested change.
The numbers are not important. If you are curious, read this thread; it
talks about {}, {0}, memset(0,..) and padding :)
https://lore.kernel.org/linux-rdma/20200730192026.110246-1-yepeilin.cs@gmail.com/
>
> FWIW, {0} and { 0 } are still widely used, but it looks like
> {} is indeed more preferred:
>
> $ grep "= {};" drivers/net/ -nr | wc -l
> 829
>
> $ grep "= {0};" drivers/net/ -nr | wc -l
> 708
>
> $ grep "= {};" kernel/ -nr | wc -l
> 29
>
> $ grep "= {0};" kernel/ -nr | wc -l
> 4
end of thread, other threads:[~2021-04-08 7:30 UTC | newest]
Thread overview: 25+ messages
[not found] <20210406232321.12104-1-decui@microsoft.com>
2021-04-07 1:07 ` [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA) Andrew Lunn
2021-04-07 8:02 ` Dexuan Cui
2021-04-07 8:15 ` Leon Romanovsky
2021-04-07 8:28 ` Dexuan Cui
2021-04-07 12:44 ` Leon Romanovsky
2021-04-07 21:59 ` Dexuan Cui
2021-04-07 22:37 ` Bernd Petrovitsch
2021-04-08 7:30 ` Leon Romanovsky
2021-04-07 12:19 ` Andrew Lunn
2021-04-07 1:30 ` kernel test robot
2021-04-07 8:08 ` Dexuan Cui
2021-04-07 13:02 ` Wei Liu
2021-04-07 1:52 ` kernel test robot
2021-04-07 1:53 ` kernel test robot
2021-04-07 8:10 ` Leon Romanovsky
2021-04-07 8:40 ` Dexuan Cui
2021-04-07 12:51 ` Leon Romanovsky
2021-04-07 14:41 ` Haiyang Zhang
2021-04-07 14:55 ` Leon Romanovsky
2021-04-07 15:05 ` Haiyang Zhang
2021-04-07 17:07 ` Leon Romanovsky
2021-04-07 13:17 ` Wei Liu
2021-04-07 14:34 ` Haiyang Zhang
2021-04-07 15:00 ` Wei Liu
2021-04-07 15:16 ` Haiyang Zhang