* [PATCH 4.14 000/119] 4.14.151-stable review
@ 2019-10-27 20:59 Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 001/119] scsi: ufs: skip shutdown if hba is not powered Greg Kroah-Hartman
                   ` (123 more replies)
  0 siblings, 124 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, torvalds, akpm, linux, shuah, patches,
	ben.hutchings, lkft-triage, stable

This is the start of the stable review cycle for the 4.14.151 release.
There are 119 patches in this series, all of which will be posted as
responses to this one.  If anyone has any issues with these being
applied, please let me know.

Responses should be made by Tue 29 Oct 2019 08:27:02 PM UTC.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.151-rc1.gz
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
and the diffstat can be found below.
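
For example, the review branch can be fetched with
"git fetch git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y"
and checked out from FETCH_HEAD; any equivalent git workflow works just
as well.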

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Linux 4.14.151-rc1

Greg KH <gregkh@linuxfoundation.org>
    RDMA/cxgb4: Do not dma memory off of the stack

Jim Mattson <jmattson@google.com>
    kvm: vmx: Basic APIC virtualization controls have three settings

Junaid Shahid <junaids@google.com>
    kvm: apic: Flush TLB after APIC mode/address change if VPIDs are in use

Jim Mattson <jmattson@google.com>
    kvm: vmx: Introduce lapic_mode enumeration

Wanpeng Li <wanpeng.li@hotmail.com>
    KVM: X86: introduce invalidate_gpa argument to tlb flush

Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    PCI: PM: Fix pci_power_up()

Juergen Gross <jgross@suse.com>
    xen/netback: fix error path of xenvif_connect_data()

Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    cpufreq: Avoid cpufreq_suspend() deadlock on system shutdown

Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    memstick: jmb38x_ms: Fix an error handling path in 'jmb38x_ms_probe()'

Qu Wenruo <wqu@suse.com>
    btrfs: block-group: Fix a memory leak due to missing btrfs_put_block_group()

Patrick Williams <alpawi@amazon.com>
    pinctrl: armada-37xx: swap polarity on LED group

Patrick Williams <alpawi@amazon.com>
    pinctrl: armada-37xx: fix control of pins 32 and up

Steve Wahl <steve.wahl@hpe.com>
    x86/boot/64: Make level2_kernel_pgt pages invalid outside kernel area

Roberto Bergantinos Corpas <rbergant@redhat.com>
    CIFS: avoid using MID 0xFFFF

Helge Deller <deller@gmx.de>
    parisc: Fix vmap memory leak in ioremap()/iounmap()

Max Filippov <jcmvbkbc@gmail.com>
    xtensa: drop EXPORT_SYMBOL for outs*/ins*

David Hildenbrand <david@redhat.com>
    hugetlbfs: don't access uninitialized memmaps in pfn_range_valid_gigantic()

Qian Cai <cai@lca.pw>
    mm/page_owner: don't access uninitialized memmaps when reading /proc/pagetypeinfo

Qian Cai <cai@lca.pw>
    mm/slub: fix a deadlock in show_slab_objects()

Steffen Maier <maier@linux.ibm.com>
    scsi: zfcp: fix reaction on bit error threshold notification

David Hildenbrand <david@redhat.com>
    fs/proc/page.c: don't access uninitialized memmaps in fs/proc/page.c

David Hildenbrand <david@redhat.com>
    drivers/base/memory.c: don't access uninitialized memmaps in soft_offline_page_store()

Hans de Goede <hdegoede@redhat.com>
    drm/amdgpu: Bail earlier when amdgpu.cik_/si_support is not set to 1

Kai-Heng Feng <kai.heng.feng@canonical.com>
    drm/edid: Add 6 bpc quirk for SDC panel in Lenovo G50

Will Deacon <will@kernel.org>
    mac80211: Reject malformed SSID elements

Will Deacon <will@kernel.org>
    cfg80211: wext: avoid copying malformed SSIDs

Junya Monden <jmonden@jp.adit-jv.com>
    ASoC: rsnd: Reinitialize bit clock inversion flag for every format setting

Evan Green <evgreen@chromium.org>
    Input: synaptics-rmi4 - avoid processing unknown IRQs

Marco Felsch <m.felsch@pengutronix.de>
    Input: da9063 - fix capability and drop KEY_SLEEP

Bart Van Assche <bvanassche@acm.org>
    scsi: ch: Make it possible to open a ch device multiple times again

Yufen Yu <yuyufen@huawei.com>
    scsi: core: try to get module before removing device

Damien Le Moal <damien.lemoal@wdc.com>
    scsi: core: save/restore command resid for error handling

Oliver Neukum <oneukum@suse.com>
    scsi: sd: Ignore a failure to sync cache due to lack of authorization

Colin Ian King <colin.king@canonical.com>
    staging: wlan-ng: fix exit return when sme->key_idx >= NUM_WEPKEYS

Paul Burton <paulburton@kernel.org>
    MIPS: tlbex: Fix build_restore_pagemask KScratch restore

Josh Poimboeuf <jpoimboe@redhat.com>
    arm64/speculation: Support 'mitigations=' cmdline option

Marc Zyngier <marc.zyngier@arm.com>
    arm64: Use firmware to detect CPUs that are not affected by Spectre-v2

Marc Zyngier <marc.zyngier@arm.com>
    arm64: Force SSBS on context switch

Will Deacon <will.deacon@arm.com>
    arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB

Jeremy Linton <jeremy.linton@arm.com>
    arm64: add sysfs vulnerability show for speculative store bypass

Jeremy Linton <jeremy.linton@arm.com>
    arm64: add sysfs vulnerability show for spectre-v2

Jeremy Linton <jeremy.linton@arm.com>
    arm64: Always enable spectre-v2 vulnerability detection

Marc Zyngier <marc.zyngier@arm.com>
    arm64: Advertise mitigation of Spectre-v2, or lack thereof

Jeremy Linton <jeremy.linton@arm.com>
    arm64: Provide a command line to disable spectre_v2 mitigation

Jeremy Linton <jeremy.linton@arm.com>
    arm64: Always enable ssb vulnerability detection

Mian Yousaf Kaukab <ykaukab@suse.de>
    arm64: enable generic CPU vulnerabilites support

Jeremy Linton <jeremy.linton@arm.com>
    arm64: add sysfs vulnerability show for meltdown

Mian Yousaf Kaukab <ykaukab@suse.de>
    arm64: Add sysfs vulnerability show for spectre-v1

Mark Rutland <mark.rutland@arm.com>
    arm64: fix SSBS sanitization

Will Deacon <will.deacon@arm.com>
    KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe

Will Deacon <will.deacon@arm.com>
    arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3

Will Deacon <will.deacon@arm.com>
    arm64: cpufeature: Detect SSBS and advertise to userspace

Marc Zyngier <marc.zyngier@arm.com>
    arm64: Get rid of __smccc_workaround_1_hvc_*

Mark Rutland <mark.rutland@arm.com>
    arm64: don't zero DIT on signal return

Shanker Donthineni <shankerd@codeaurora.org>
    arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Add support for checks based on a list of MIDRs

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: Add helpers for checking CPU MIDR against a range

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Clean up midr range helpers

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Change scope of VHE to Boot CPU feature

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Add support for features enabled early

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Restrict KPTI detection to boot-time CPUs

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Introduce weak features based on local CPU

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Group handling of features and errata workarounds

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Allow features based on local CPU scope

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Split the processing of errata work arounds

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Prepare for grouping features and errata work arounds

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Filter the entries based on a given mask

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Unify the verification

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Add flags to handle the conflicts on late CPU

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Prepare for fine grained capabilities

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Move errata processing code

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: capabilities: Move errata work around check on boot CPU

Dave Martin <dave.martin@arm.com>
    arm64: capabilities: Update prototype for enable call back

Mark Rutland <mark.rutland@arm.com>
    arm64: Introduce sysreg_clear_set()

Mark Rutland <mark.rutland@arm.com>
    arm64: add PSR_AA32_* definitions

Mark Rutland <mark.rutland@arm.com>
    arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: Expose Arm v8.4 features

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: Documentation: cpu-feature-registers: Remove RES0 fields

Dongjiu Geng <gengdongjiu@huawei.com>
    arm64: v8.4: Support for new floating point multiplication instructions

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: Fix the feature type for ID register fields

Suzuki K Poulose <suzuki.poulose@arm.com>
    arm64: Expose support for optional ARMv8-A features

James Morse <james.morse@arm.com>
    arm64: sysreg: Move to use definitions for all the SCTLR bits

Johan Hovold <johan@kernel.org>
    USB: ldusb: fix read info leaks

Johan Hovold <johan@kernel.org>
    USB: usblp: fix use-after-free on disconnect

Johan Hovold <johan@kernel.org>
    USB: ldusb: fix memleak on disconnect

Johan Hovold <johan@kernel.org>
    USB: serial: ti_usb_3410_5052: fix port-close races

Gustavo A. R. Silva <gustavo@embeddedor.com>
    usb: udc: lpc32xx: fix bad bit shift operation

Kailang Yang <kailang@realtek.com>
    ALSA: hda/realtek - Add support for ALC711

Johan Hovold <johan@kernel.org>
    USB: legousbtower: fix memleak on disconnect

Matthew Wilcox (Oracle) <willy@infradead.org>
    memfd: Fix locking when tagging pins

Alessio Balsini <balsini@android.com>
    loop: Add LOOP_SET_DIRECT_IO to compat ioctl

Jiaxun Yang <jiaxun.yang@flygoat.com>
    MIPS: elf_hwcap: Export userspace ASEs

Jiaxun Yang <jiaxun.yang@flygoat.com>
    MIPS: Treat Loongson Extensions as ASEs

Eric Dumazet <edumazet@google.com>
    net: avoid potential infinite loop in tc_ctl_action()

Xin Long <lucien.xin@gmail.com>
    sctp: change sctp_prot .no_autobind with true

Biao Huang <biao.huang@mediatek.com>
    net: stmmac: disable/enable ptp_ref_clk in suspend/resume flow

Thomas Bogendoerfer <tbogendoerfer@suse.de>
    net: i82596: fix dma_alloc_attr for sni_82596

Florian Fainelli <f.fainelli@gmail.com>
    net: bcmgenet: Set phydev->dev_flags only for internal PHYs

Florian Fainelli <f.fainelli@gmail.com>
    net: bcmgenet: Fix RGMII_MODE_EN value for GENET v1/2/3

Stefano Brivio <sbrivio@redhat.com>
    ipv4: Return -ENETUNREACH if we can't create route but saddr is valid

Yi Li <yilikernel@gmail.com>
    ocfs2: fix panic due to ocfs2_wq is null

Alex Deucher <alexander.deucher@amd.com>
    Revert "drm/radeon: Fix EEH during kexec"

Song Liu <songliubraving@fb.com>
    md/raid0: fix warning message for parameter default_layout

Jacob Keller <jacob.e.keller@intel.com>
    namespace: fix namespace.pl script to support relative paths

Kai-Heng Feng <kai.heng.feng@canonical.com>
    r8152: Set macpassthru in reset_resume callback

Yizhuo <yzhai003@ucr.edu>
    net: hisilicon: Fix usage of uninitialized variable in function mdio_sc_cfg_reg_write()

Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    mips: Loongson: Fix the link time qualifier of 'serial_exit()'

Miaoqing Pan <miaoqing@codeaurora.org>
    mac80211: fix txq null pointer dereference

Miaoqing Pan <miaoqing@codeaurora.org>
    nl80211: fix null pointer dereference

Ross Lagerwall <ross.lagerwall@citrix.com>
    xen/efi: Set nonblocking callbacks

Oleksij Rempel <o.rempel@pengutronix.de>
    MIPS: dts: ar9331: fix interrupt-controller size

Michal Vokáč <michal.vokac@ysoft.com>
    net: dsa: qca8k: Use up to 7 ports for all operations

Peter Ujfalusi <peter.ujfalusi@ti.com>
    ARM: dts: am4372: Set memory bandwidth limit for DISPC

Navid Emamdoost <navid.emamdoost@gmail.com>
    ieee802154: ca8210: prevent memory leak

Tony Lindgren <tony@atomide.com>
    ARM: OMAP2+: Fix missing reset done flag for am3 and am43

Quinn Tran <qutran@marvell.com>
    scsi: qla2xxx: Fix unbound sleep in fcport delete path.

Xiang Chen <chenxiang66@hisilicon.com>
    scsi: megaraid: disable device when probe failed after enabled device

Stanley Chu <stanley.chu@mediatek.com>
    scsi: ufs: skip shutdown if hba is not powered


-------------

Diffstat:

 Documentation/admin-guide/kernel-parameters.txt    |  16 +-
 Documentation/arm64/cpu-feature-registers.txt      |  26 +-
 Makefile                                           |   4 +-
 arch/arm/boot/dts/am4372.dtsi                      |   2 +
 .../mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c |   3 +-
 arch/arm/xen/efi.c                                 |   2 +
 arch/arm64/Kconfig                                 |   1 +
 arch/arm64/include/asm/cpucaps.h                   |   6 +-
 arch/arm64/include/asm/cpufeature.h                | 250 +++++++++-
 arch/arm64/include/asm/cputype.h                   |  43 ++
 arch/arm64/include/asm/kvm_asm.h                   |   2 -
 arch/arm64/include/asm/kvm_host.h                  |  11 +
 arch/arm64/include/asm/processor.h                 |  22 +-
 arch/arm64/include/asm/ptrace.h                    |  58 ++-
 arch/arm64/include/asm/sysreg.h                    |  95 +++-
 arch/arm64/include/asm/virt.h                      |   6 -
 arch/arm64/include/uapi/asm/hwcap.h                |  12 +
 arch/arm64/include/uapi/asm/ptrace.h               |   1 +
 arch/arm64/kernel/bpi.S                            |  19 +-
 arch/arm64/kernel/cpu_errata.c                     | 495 ++++++++++++--------
 arch/arm64/kernel/cpufeature.c                     | 517 +++++++++++++++------
 arch/arm64/kernel/cpuinfo.c                        |  12 +
 arch/arm64/kernel/fpsimd.c                         |   1 +
 arch/arm64/kernel/head.S                           |  13 +-
 arch/arm64/kernel/process.c                        |  31 ++
 arch/arm64/kernel/ptrace.c                         |  13 +-
 arch/arm64/kernel/smp.c                            |  44 --
 arch/arm64/kernel/ssbd.c                           |  22 +
 arch/arm64/kernel/traps.c                          |   4 +-
 arch/arm64/kvm/hyp/entry.S                         |  12 -
 arch/arm64/kvm/hyp/switch.c                        |  10 -
 arch/arm64/kvm/hyp/sysreg-sr.c                     |  11 +
 arch/arm64/mm/fault.c                              |   3 +-
 arch/arm64/mm/proc.S                               |  24 +-
 arch/mips/boot/dts/qca/ar9331.dtsi                 |   2 +-
 arch/mips/include/asm/cpu-features.h               |  16 +
 arch/mips/include/asm/cpu.h                        |   4 +
 arch/mips/include/uapi/asm/hwcap.h                 |  11 +
 arch/mips/kernel/cpu-probe.c                       |  37 ++
 arch/mips/kernel/proc.c                            |   4 +
 arch/mips/loongson64/common/serial.c               |   2 +-
 arch/mips/mm/tlbex.c                               |  23 +-
 arch/parisc/mm/ioremap.c                           |  12 +-
 arch/x86/include/asm/kvm_host.h                    |   4 +-
 arch/x86/kernel/head64.c                           |  22 +-
 arch/x86/kvm/lapic.c                               |  12 +-
 arch/x86/kvm/lapic.h                               |  14 +
 arch/x86/kvm/svm.c                                 |  18 +-
 arch/x86/kvm/vmx.c                                 |  79 ++--
 arch/x86/kvm/x86.c                                 |  32 +-
 arch/x86/xen/efi.c                                 |   2 +
 arch/xtensa/kernel/xtensa_ksyms.c                  |   7 -
 drivers/base/core.c                                |   3 +
 drivers/base/memory.c                              |   3 +
 drivers/block/loop.c                               |   1 +
 drivers/cpufreq/cpufreq.c                          |  10 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |  35 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c            |  35 --
 drivers/gpu/drm/drm_edid.c                         |   3 +
 drivers/gpu/drm/radeon/radeon_drv.c                |   8 -
 drivers/infiniband/hw/cxgb4/mem.c                  |  28 +-
 drivers/input/misc/da9063_onkey.c                  |   5 +-
 drivers/input/rmi4/rmi_driver.c                    |   6 +-
 drivers/md/raid0.c                                 |   2 +-
 drivers/memstick/host/jmb38x_ms.c                  |   2 +-
 drivers/net/dsa/qca8k.c                            |   4 +-
 drivers/net/ethernet/broadcom/genet/bcmgenet.h     |   1 +
 drivers/net/ethernet/broadcom/genet/bcmmii.c       |  11 +-
 drivers/net/ethernet/hisilicon/hns_mdio.c          |   6 +-
 drivers/net/ethernet/i825xx/lasi_82596.c           |   4 +-
 drivers/net/ethernet/i825xx/lib82596.c             |   4 +-
 drivers/net/ethernet/i825xx/sni_82596.c            |   4 +-
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c  |  12 +-
 drivers/net/ieee802154/ca8210.c                    |   2 +-
 drivers/net/usb/r8152.c                            |   3 +-
 drivers/net/xen-netback/interface.c                |   1 -
 drivers/pci/pci.c                                  |  24 +-
 drivers/pinctrl/mvebu/pinctrl-armada-37xx.c        |  26 +-
 drivers/s390/scsi/zfcp_fsf.c                       |  16 +-
 drivers/scsi/ch.c                                  |   1 -
 drivers/scsi/megaraid.c                            |   4 +-
 drivers/scsi/qla2xxx/qla_target.c                  |   4 +
 drivers/scsi/scsi_error.c                          |   3 +
 drivers/scsi/scsi_sysfs.c                          |  11 +-
 drivers/scsi/sd.c                                  |   3 +-
 drivers/scsi/ufs/ufshcd.c                          |   3 +
 drivers/staging/wlan-ng/cfg80211.c                 |   6 +-
 drivers/usb/class/usblp.c                          |   4 +-
 drivers/usb/gadget/udc/lpc32xx_udc.c               |   6 +-
 drivers/usb/misc/ldusb.c                           |  20 +-
 drivers/usb/misc/legousbtower.c                    |   5 +-
 drivers/usb/serial/ti_usb_3410_5052.c              |  10 +-
 fs/btrfs/extent-tree.c                             |   1 +
 fs/cifs/smb1ops.c                                  |   3 +
 fs/ocfs2/journal.c                                 |   3 +-
 fs/ocfs2/localalloc.c                              |   3 +-
 fs/proc/page.c                                     |  28 +-
 include/scsi/scsi_eh.h                             |   1 +
 mm/hugetlb.c                                       |   5 +-
 mm/page_owner.c                                    |   5 +-
 mm/shmem.c                                         |  18 +-
 mm/slub.c                                          |  13 +-
 net/ipv4/route.c                                   |   9 +-
 net/mac80211/debugfs_netdev.c                      |  11 +-
 net/mac80211/mlme.c                                |   5 +-
 net/sched/act_api.c                                |  13 +-
 net/sctp/socket.c                                  |   4 +-
 net/wireless/nl80211.c                             |   3 +
 net/wireless/wext-sme.c                            |   8 +-
 scripts/namespace.pl                               |  13 +-
 sound/pci/hda/patch_realtek.c                      |   3 +
 sound/soc/sh/rcar/core.c                           |   1 +
 112 files changed, 1773 insertions(+), 808 deletions(-)




* [PATCH 4.14 001/119] scsi: ufs: skip shutdown if hba is not powered
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 002/119] scsi: megaraid: disable device when probe failed after enabled device Greg Kroah-Hartman
                   ` (122 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Stanley Chu, Bean Huo,
	Martin K. Petersen, Sasha Levin

From: Stanley Chu <stanley.chu@mediatek.com>

[ Upstream commit f51913eef23f74c3bd07899dc7f1ed6df9e521d8 ]

In some cases, the hba may go through the shutdown flow without having
been successfully initialized, and then hang the system.

For example, if ufshcd_change_power_mode() fails and leads
ufshcd_hba_exit() to release the host's resources, a later shutdown may
hang the system since host registers will be accessed while the host is
unpowered.

To solve this issue, simply add a check that skips shutdown in this kind
of situation.

Link: https://lore.kernel.org/r/1568780438-28753-1-git-send-email-stanley.chu@mediatek.com
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
Acked-by: Bean Huo <beanhuo@micron.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/scsi/ufs/ufshcd.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 60c9184bad3be..07cae5ea608c7 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -7755,6 +7755,9 @@ int ufshcd_shutdown(struct ufs_hba *hba)
 {
 	int ret = 0;
 
+	if (!hba->is_powered)
+		goto out;
+
 	if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba))
 		goto out;
 
-- 
2.20.1
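
The fix above is a guard clause: bail out of the teardown path when the
device was never brought up, so no registers are touched in an unpowered
state.  A minimal stand-alone C sketch of the same pattern (plain
userspace code with made-up names, not the ufshcd driver):

	#include <stdbool.h>
	#include <stdio.h>

	struct fake_hba {
		bool is_powered;	/* set only after successful init */
	};

	/* Teardown must not touch hardware if init never completed. */
	static int fake_shutdown(struct fake_hba *hba)
	{
		if (!hba->is_powered)
			return 0;	/* nothing to do, skip register access */

		printf("powering off controller\n");
		hba->is_powered = false;
		return 0;
	}

	int main(void)
	{
		struct fake_hba hba = { .is_powered = false };

		/* Safe even though init never ran. */
		return fake_shutdown(&hba);
	}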





* [PATCH 4.14 002/119] scsi: megaraid: disable device when probe failed after enabled device
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 001/119] scsi: ufs: skip shutdown if hba is not powered Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 003/119] scsi: qla2xxx: Fix unbound sleep in fcport delete path Greg Kroah-Hartman
                   ` (121 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Xiang Chen, John Garry,
	Martin K. Petersen, Sasha Levin

From: Xiang Chen <chenxiang66@hisilicon.com>

[ Upstream commit 70054aa39a013fa52eff432f2223b8bd5c0048f8 ]

For a PCI device, we need to disable the device when probe fails after
the device has been enabled.

Link: https://lore.kernel.org/r/1567818450-173315-1-git-send-email-chenxiang66@hisilicon.com
Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/scsi/megaraid.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
index 9b6f5d024dbae..f5c09bbf93741 100644
--- a/drivers/scsi/megaraid.c
+++ b/drivers/scsi/megaraid.c
@@ -4221,11 +4221,11 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 		 */
 		if (pdev->subsystem_vendor == PCI_VENDOR_ID_COMPAQ &&
 		    pdev->subsystem_device == 0xC000)
-		   	return -ENODEV;
+			goto out_disable_device;
 		/* Now check the magic signature byte */
 		pci_read_config_word(pdev, PCI_CONF_AMISIG, &magic);
 		if (magic != HBA_SIGNATURE_471 && magic != HBA_SIGNATURE)
-			return -ENODEV;
+			goto out_disable_device;
 		/* Ok it is probably a megaraid */
 	}
 
-- 
2.20.1
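
The hunk above only shows the two converted return statements; the point
of the change is that the error path now jumps to a label (not visible
in this hunk) that presumably undoes pci_enable_device().  A rough,
stand-alone C sketch of that cleanup-label pattern, with invented names
rather than the megaraid code:

	#include <stdio.h>

	static int enable_device(void)   { puts("enable");  return 0; }
	static void disable_device(void) { puts("disable"); }
	static int check_signature(void) { return -1; }	/* pretend this fails */

	static int probe_one(void)
	{
		int ret = enable_device();

		if (ret)
			return ret;		/* nothing to undo yet */

		ret = check_signature();
		if (ret)
			goto out_disable_device;	/* undo enable_device() */

		return 0;

	out_disable_device:
		disable_device();
		return ret;
	}

	int main(void)
	{
		return probe_one() ? 1 : 0;
	}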





* [PATCH 4.14 003/119] scsi: qla2xxx: Fix unbound sleep in fcport delete path.
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 001/119] scsi: ufs: skip shutdown if hba is not powered Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 002/119] scsi: megaraid: disable device when probe failed after enabled device Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 004/119] ARM: OMAP2+: Fix missing reset done flag for am3 and am43 Greg Kroah-Hartman
                   ` (120 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Quinn Tran, Himanshu Madhani,
	Martin K. Petersen, Sasha Levin

From: Quinn Tran <qutran@marvell.com>

[ Upstream commit c3b6a1d397420a0fdd97af2f06abfb78adc370df ]

There are instances, though rare, where a LOGO request cannot be sent
out and the thread in the free-session-done path can wait indefinitely.
Fix this by putting an upper bound on the sleep.

Link: https://lore.kernel.org/r/20190912180918.6436-3-hmadhani@marvell.com
Signed-off-by: Quinn Tran <qutran@marvell.com>
Signed-off-by: Himanshu Madhani <hmadhani@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/scsi/qla2xxx/qla_target.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index 87e04c4a49821..11753ed3433ca 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -996,6 +996,7 @@ static void qlt_free_session_done(struct work_struct *work)
 
 	if (logout_started) {
 		bool traced = false;
+		u16 cnt = 0;
 
 		while (!ACCESS_ONCE(sess->logout_completed)) {
 			if (!traced) {
@@ -1005,6 +1006,9 @@ static void qlt_free_session_done(struct work_struct *work)
 				traced = true;
 			}
 			msleep(100);
+			cnt++;
+			if (cnt > 200)
+				break;
 		}
 
 		ql_dbg(ql_dbg_disc, vha, 0xf087,
-- 
2.20.1
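
The change above turns an unbounded poll into a bounded one: at most 200
iterations of msleep(100), i.e. roughly a 20 second cap.  A small
stand-alone C sketch of that bounded-polling pattern (illustrative only,
not the qla2xxx code):

	#include <stdbool.h>
	#include <stdio.h>
	#include <unistd.h>

	static volatile bool logout_completed;	/* normally set elsewhere */

	static void wait_for_logout(void)
	{
		unsigned int cnt = 0;

		while (!logout_completed) {
			usleep(100 * 1000);	/* 100 ms, like msleep(100) */
			if (++cnt > 200)	/* give up after ~20 s */
				break;
		}
	}

	int main(void)
	{
		/* Flip this to false to watch the ~20 s timeout kick in. */
		logout_completed = true;

		wait_for_logout();
		puts("done waiting");
		return 0;
	}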





* [PATCH 4.14 004/119] ARM: OMAP2+: Fix missing reset done flag for am3 and am43
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (2 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 003/119] scsi: qla2xxx: Fix unbound sleep in fcport delete path Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 005/119] ieee802154: ca8210: prevent memory leak Greg Kroah-Hartman
                   ` (119 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Tony Lindgren, Sasha Levin

From: Tony Lindgren <tony@atomide.com>

[ Upstream commit 8ad8041b98c665b6147e607b749586d6e20ba73a ]

For ti,sysc-omap4 compatible devices with no sysstatus register, we do have
reset done status available in the SOFTRESET bit, which clears when the
reset is done. This is documented, for example, in the am437x TRM for the
DMTIMER_TIOCP_CFG register. The am335x TRM just says that a SOFTRESET bit
value of 1 means the reset is ongoing, but it behaves the same way,
clearing after the reset is done.

With the ti-sysc driver handling this automatically based on no sysstatus
register defined, we see warnings if SYSC_HAS_RESET_STATUS is missing in the
legacy platform data:

ti-sysc 48042000.target-module: sysc_flags 00000222 != 00000022
ti-sysc 48044000.target-module: sysc_flags 00000222 != 00000022
ti-sysc 48046000.target-module: sysc_flags 00000222 != 00000022
...

Let's fix these warnings by adding SYSC_HAS_RESET_STATUS. Let's also
remove the useless parentheses while at it.

If it turns out we do have ti,sysc-omap4 compatible devices without a
working SOFTRESET bit we can set up additional quirk handling for it.

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c b/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
index de06a1d5ffab5..e61c14f590634 100644
--- a/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
+++ b/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
@@ -966,7 +966,8 @@ static struct omap_hwmod_class_sysconfig am33xx_timer_sysc = {
 	.rev_offs	= 0x0000,
 	.sysc_offs	= 0x0010,
 	.syss_offs	= 0x0014,
-	.sysc_flags	= (SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET),
+	.sysc_flags	= SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
+			  SYSC_HAS_RESET_STATUS,
 	.idlemodes	= (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
 			  SIDLE_SMART_WKUP),
 	.sysc_fields	= &omap_hwmod_sysc_type2,
-- 
2.20.1





* [PATCH 4.14 005/119] ieee802154: ca8210: prevent memory leak
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (3 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 004/119] ARM: OMAP2+: Fix missing reset done flag for am3 and am43 Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 006/119] ARM: dts: am4372: Set memory bandwidth limit for DISPC Greg Kroah-Hartman
                   ` (118 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Navid Emamdoost, Stefan Schmidt, Sasha Levin

From: Navid Emamdoost <navid.emamdoost@gmail.com>

[ Upstream commit 6402939ec86eaf226c8b8ae00ed983936b164908 ]

In ca8210_probe, the allocated pdata needs to be assigned to
spi_device->dev.platform_data before calling ca8210_get_platform_data.
Otherwise, when ca8210_get_platform_data fails, pdata cannot be released.

Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com>
Link: https://lore.kernel.org/r/20190917224713.26371-1-navid.emamdoost@gmail.com
Signed-off-by: Stefan Schmidt <stefan@datenfreihafen.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/ieee802154/ca8210.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
index dcd10dba08c72..3a58962babd41 100644
--- a/drivers/net/ieee802154/ca8210.c
+++ b/drivers/net/ieee802154/ca8210.c
@@ -3153,12 +3153,12 @@ static int ca8210_probe(struct spi_device *spi_device)
 		goto error;
 	}
 
+	priv->spi->dev.platform_data = pdata;
 	ret = ca8210_get_platform_data(priv->spi, pdata);
 	if (ret) {
 		dev_crit(&spi_device->dev, "ca8210_get_platform_data failed\n");
 		goto error;
 	}
-	priv->spi->dev.platform_data = pdata;
 
 	ret = ca8210_dev_com_init(priv);
 	if (ret) {
-- 
2.20.1
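
The fix is purely an ordering change: the allocation is published
(assigned to dev.platform_data) before the call that can fail, so the
shared error path can still find and free it.  A hedged, generic C
sketch of that idea, using invented names rather than the ca8210 driver:

	#include <stdlib.h>

	struct dev { void *platform_data; };

	static int get_platform_data(struct dev *d, void *pdata)
	{
		(void)d; (void)pdata;
		return -1;			/* pretend parsing fails */
	}

	static void release_resources(struct dev *d)
	{
		free(d->platform_data);		/* only works if the pointer was published */
		d->platform_data = NULL;
	}

	static int probe(struct dev *d)
	{
		void *pdata = malloc(64);

		if (!pdata)
			return -1;

		d->platform_data = pdata;	/* publish *before* the call that may fail */
		if (get_platform_data(d, pdata)) {
			release_resources(d);	/* pdata is reachable, so no leak */
			return -1;
		}
		return 0;
	}

	int main(void)
	{
		struct dev d = { 0 };

		return probe(&d) ? 1 : 0;
	}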





* [PATCH 4.14 006/119] ARM: dts: am4372: Set memory bandwidth limit for DISPC
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (4 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 005/119] ieee802154: ca8210: prevent memory leak Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 007/119] net: dsa: qca8k: Use up to 7 ports for all operations Greg Kroah-Hartman
                   ` (117 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Peter Ujfalusi, Tomi Valkeinen,
	Tony Lindgren, Sasha Levin

From: Peter Ujfalusi <peter.ujfalusi@ti.com>

[ Upstream commit f90ec6cdf674248dcad85bf9af6e064bf472b841 ]

Set a memory bandwidth limit to filter out resolutions above 720p@60Hz
and avoid underflow errors caused by the bandwidth needs of higher
resolutions.

am43xx cannot provide enough bandwidth to DISPC to correctly handle
'high' resolutions.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm/boot/dts/am4372.dtsi | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm/boot/dts/am4372.dtsi b/arch/arm/boot/dts/am4372.dtsi
index 4714a59fd86df..345c117bd5ef5 100644
--- a/arch/arm/boot/dts/am4372.dtsi
+++ b/arch/arm/boot/dts/am4372.dtsi
@@ -1118,6 +1118,8 @@
 				ti,hwmods = "dss_dispc";
 				clocks = <&disp_clk>;
 				clock-names = "fck";
+
+				max-memory-bandwidth = <230000000>;
 			};
 
 			rfbi: rfbi@4832a800 {
-- 
2.20.1
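
For reference, the value is consistent with the stated 720p@60 cutoff if
a 32 bpp framebuffer is assumed: 1280 x 720 x 60 x 4 bytes is about
221 MB/s, which fits under the 230000000 bytes-per-second limit, while
1080p@60 at the same depth needs roughly 498 MB/s and is rejected.  (The
32 bpp figure is only an assumption for the arithmetic; the property
itself is a raw bytes-per-second cap.)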





* [PATCH 4.14 007/119] net: dsa: qca8k: Use up to 7 ports for all operations
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (5 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 006/119] ARM: dts: am4372: Set memory bandwidth limit for DISPC Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 008/119] MIPS: dts: ar9331: fix interrupt-controller size Greg Kroah-Hartman
                   ` (116 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Michal Vokáč,
	Andrew Lunn, David S. Miller, Sasha Levin

From: Michal Vokáč <michal.vokac@ysoft.com>

[ Upstream commit 7ae6d93c8f052b7a77ba56ed0f654e22a2876739 ]

The QCA8K family supports up to 7 ports. So use the existing
QCA8K_NUM_PORTS define to allocate the switch structure and to limit all
operations on the switch ports.

This was not an issue until commit 0394a63acfe2 ("net: dsa: enable and
disable all ports") disabled all unused ports. Since the unused ports 7-11
are outside of the correct register range on this switch, some registers
were rewritten with invalid content.

Fixes: 6b93fb46480a ("net-next: dsa: add new driver for qca8xxx family")
Fixes: a0c02161ecfc ("net: dsa: variable number of ports")
Fixes: 0394a63acfe2 ("net: dsa: enable and disable all ports")
Signed-off-by: Michal Vokáč <michal.vokac@ysoft.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/dsa/qca8k.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
index c3c9d7e33bd6c..8e49974ffa0ed 100644
--- a/drivers/net/dsa/qca8k.c
+++ b/drivers/net/dsa/qca8k.c
@@ -551,7 +551,7 @@ qca8k_setup(struct dsa_switch *ds)
 		    BIT(0) << QCA8K_GLOBAL_FW_CTRL1_UC_DP_S);
 
 	/* Setup connection between CPU port & user ports */
-	for (i = 0; i < DSA_MAX_PORTS; i++) {
+	for (i = 0; i < QCA8K_NUM_PORTS; i++) {
 		/* CPU port gets connected to all user ports of the switch */
 		if (dsa_is_cpu_port(ds, i)) {
 			qca8k_rmw(priv, QCA8K_PORT_LOOKUP_CTRL(QCA8K_CPU_PORT),
@@ -900,7 +900,7 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
 	if (id != QCA8K_ID_QCA8337)
 		return -ENODEV;
 
-	priv->ds = dsa_switch_alloc(&mdiodev->dev, DSA_MAX_PORTS);
+	priv->ds = dsa_switch_alloc(&mdiodev->dev, QCA8K_NUM_PORTS);
 	if (!priv->ds)
 		return -ENOMEM;
 
-- 
2.20.1
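
The bug class here is a loop bounded by a framework-wide maximum instead
of what the hardware actually has, so writes land outside the valid
register range.  A tiny stand-alone C illustration (the value 12 is an
assumption taken from the ports 7-11 mentioned above; none of this is
the qca8k code):

	#include <stdio.h>

	#define QCA8K_NUM_PORTS 7	/* what this switch actually has */
	#define DSA_MAX_PORTS   12	/* framework-wide maximum */

	static unsigned int port_cfg[QCA8K_NUM_PORTS];

	static void setup_ports(void)
	{
		/* Looping up to DSA_MAX_PORTS would write past port_cfg[6]. */
		for (int i = 0; i < QCA8K_NUM_PORTS; i++)
			port_cfg[i] = 0x1;
	}

	int main(void)
	{
		setup_ports();
		printf("port 0 cfg = 0x%x\n", port_cfg[0]);
		return 0;
	}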





* [PATCH 4.14 008/119] MIPS: dts: ar9331: fix interrupt-controller size
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (6 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 007/119] net: dsa: qca8k: Use up to 7 ports for all operations Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 009/119] xen/efi: Set nonblocking callbacks Greg Kroah-Hartman
                   ` (115 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Oleksij Rempel, Paul Burton,
	Rob Herring, Mark Rutland, Pengutronix Kernel Team, Ralf Baechle,
	James Hogan, devicetree, linux-mips, Sasha Levin

From: Oleksij Rempel <o.rempel@pengutronix.de>

[ Upstream commit 0889d07f3e4b171c453b2aaf2b257f9074cdf624 ]

It is two registers, each 4 bytes wide.

Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Pengutronix Kernel Team <kernel@pengutronix.de>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: devicetree@vger.kernel.org
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/mips/boot/dts/qca/ar9331.dtsi | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/boot/dts/qca/ar9331.dtsi b/arch/mips/boot/dts/qca/ar9331.dtsi
index efd5f07222060..39b6269610d41 100644
--- a/arch/mips/boot/dts/qca/ar9331.dtsi
+++ b/arch/mips/boot/dts/qca/ar9331.dtsi
@@ -99,7 +99,7 @@
 
 			miscintc: interrupt-controller@18060010 {
 				compatible = "qca,ar7240-misc-intc";
-				reg = <0x18060010 0x4>;
+				reg = <0x18060010 0x8>;
 
 				interrupt-parent = <&cpuintc>;
 				interrupts = <6>;
-- 
2.20.1





* [PATCH 4.14 009/119] xen/efi: Set nonblocking callbacks
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (7 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 008/119] MIPS: dts: ar9331: fix interrupt-controller size Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 010/119] nl80211: fix null pointer dereference Greg Kroah-Hartman
                   ` (114 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Ross Lagerwall, Juergen Gross, Sasha Levin

From: Ross Lagerwall <ross.lagerwall@citrix.com>

[ Upstream commit df359f0d09dc029829b66322707a2f558cb720f7 ]

Other parts of the kernel expect these nonblocking EFI callbacks to
exist, and crash when running under Xen because they are not set up
there. Since the implementations of xen_efi_set_variable() and
xen_efi_query_variable_info() do not take any locks, use them for the
nonblocking callbacks too.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm/xen/efi.c | 2 ++
 arch/x86/xen/efi.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm/xen/efi.c b/arch/arm/xen/efi.c
index b4d78959cadf0..bc9a37b3cecd6 100644
--- a/arch/arm/xen/efi.c
+++ b/arch/arm/xen/efi.c
@@ -31,7 +31,9 @@ void __init xen_efi_runtime_setup(void)
 	efi.get_variable             = xen_efi_get_variable;
 	efi.get_next_variable        = xen_efi_get_next_variable;
 	efi.set_variable             = xen_efi_set_variable;
+	efi.set_variable_nonblocking = xen_efi_set_variable;
 	efi.query_variable_info      = xen_efi_query_variable_info;
+	efi.query_variable_info_nonblocking = xen_efi_query_variable_info;
 	efi.update_capsule           = xen_efi_update_capsule;
 	efi.query_capsule_caps       = xen_efi_query_capsule_caps;
 	efi.get_next_high_mono_count = xen_efi_get_next_high_mono_count;
diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
index a18703be9ead9..4769a069d5bd8 100644
--- a/arch/x86/xen/efi.c
+++ b/arch/x86/xen/efi.c
@@ -77,7 +77,9 @@ static efi_system_table_t __init *xen_efi_probe(void)
 	efi.get_variable             = xen_efi_get_variable;
 	efi.get_next_variable        = xen_efi_get_next_variable;
 	efi.set_variable             = xen_efi_set_variable;
+	efi.set_variable_nonblocking = xen_efi_set_variable;
 	efi.query_variable_info      = xen_efi_query_variable_info;
+	efi.query_variable_info_nonblocking = xen_efi_query_variable_info;
 	efi.update_capsule           = xen_efi_update_capsule;
 	efi.query_capsule_caps       = xen_efi_query_capsule_caps;
 	efi.get_next_high_mono_count = xen_efi_get_next_high_mono_count;
-- 
2.20.1
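
The patch relies on a simple property: an implementation that never
blocks (and takes no locks) can safely sit behind both the blocking and
the nonblocking function pointers.  A toy C sketch of that wiring, with
hypothetical names rather than the real EFI/Xen interfaces:

	#include <stdio.h>

	struct efi_ops {
		int (*set_variable)(const char *name);
		int (*set_variable_nonblocking)(const char *name);
	};

	/* Takes no locks and never sleeps, so it is safe in both roles. */
	static int xen_set_variable(const char *name)
	{
		printf("set %s via hypercall\n", name);
		return 0;
	}

	int main(void)
	{
		struct efi_ops ops = {
			/* the same implementation wired up for both callbacks */
			.set_variable             = xen_set_variable,
			.set_variable_nonblocking = xen_set_variable,
		};

		return ops.set_variable_nonblocking("Boot0000");
	}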





* [PATCH 4.14 010/119] nl80211: fix null pointer dereference
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (8 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 009/119] xen/efi: Set nonblocking callbacks Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 011/119] mac80211: fix txq " Greg Kroah-Hartman
                   ` (113 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Miaoqing Pan, Johannes Berg, Sasha Levin

From: Miaoqing Pan <miaoqing@codeaurora.org>

[ Upstream commit b501426cf86e70649c983c52f4c823b3c40d72a3 ]

If the interface is not in MESH mode, the command 'iw wlanx mpath del'
will cause a kernel panic.

The root cause is a null pointer access in mpp_flush_by_proxy(), as the
pointer 'sdata->u.mesh.mpp_paths' is NULL for a non-mesh interface.

Unable to handle kernel NULL pointer dereference at virtual address 00000068
[...]
PC is at _raw_spin_lock_bh+0x20/0x5c
LR is at mesh_path_del+0x1c/0x17c [mac80211]
[...]
Process iw (pid: 4537, stack limit = 0xd83e0238)
[...]
[<c021211c>] (_raw_spin_lock_bh) from [<bf8c7648>] (mesh_path_del+0x1c/0x17c [mac80211])
[<bf8c7648>] (mesh_path_del [mac80211]) from [<bf6cdb7c>] (extack_doit+0x20/0x68 [compat])
[<bf6cdb7c>] (extack_doit [compat]) from [<c05c309c>] (genl_rcv_msg+0x274/0x30c)
[<c05c309c>] (genl_rcv_msg) from [<c05c25d8>] (netlink_rcv_skb+0x58/0xac)
[<c05c25d8>] (netlink_rcv_skb) from [<c05c2e14>] (genl_rcv+0x20/0x34)
[<c05c2e14>] (genl_rcv) from [<c05c1f90>] (netlink_unicast+0x11c/0x204)
[<c05c1f90>] (netlink_unicast) from [<c05c2420>] (netlink_sendmsg+0x30c/0x370)
[<c05c2420>] (netlink_sendmsg) from [<c05886d0>] (sock_sendmsg+0x70/0x84)
[<c05886d0>] (sock_sendmsg) from [<c0589f4c>] (___sys_sendmsg.part.3+0x188/0x228)
[<c0589f4c>] (___sys_sendmsg.part.3) from [<c058add4>] (__sys_sendmsg+0x4c/0x70)
[<c058add4>] (__sys_sendmsg) from [<c0208c80>] (ret_fast_syscall+0x0/0x44)
Code: e2822c02 e2822001 e5832004 f590f000 (e1902f9f)
---[ end trace bbd717600f8f884d ]---

Signed-off-by: Miaoqing Pan <miaoqing@codeaurora.org>
Link: https://lore.kernel.org/r/1569485810-761-1-git-send-email-miaoqing@codeaurora.org
[trim useless data from commit message]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 net/wireless/nl80211.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index ec504c4a397b4..ff31feeee8e3b 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -5504,6 +5504,9 @@ static int nl80211_del_mpath(struct sk_buff *skb, struct genl_info *info)
 	if (!rdev->ops->del_mpath)
 		return -EOPNOTSUPP;
 
+	if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_MESH_POINT)
+		return -EOPNOTSUPP;
+
 	return rdev_del_mpath(rdev, dev, dst);
 }
 
-- 
2.20.1
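
The underlying pattern is to validate an object's mode before
dereferencing mode-specific state.  A compact C illustration using a
tagged union (made-up types, not the cfg80211 structures):

	#include <stdio.h>

	enum iftype { IFTYPE_STATION, IFTYPE_MESH_POINT };

	struct mesh_state { int n_paths; };

	struct iface {
		enum iftype type;
		union {
			struct mesh_state *mesh;	/* valid only for IFTYPE_MESH_POINT */
		} u;
	};

	static int del_mpath(struct iface *ifc)
	{
		if (ifc->type != IFTYPE_MESH_POINT)
			return -1;	/* like -EOPNOTSUPP: never touch u.mesh */

		ifc->u.mesh->n_paths--;
		return 0;
	}

	int main(void)
	{
		struct iface sta = { .type = IFTYPE_STATION };	/* u.mesh is NULL */

		printf("del_mpath on a station: %d\n", del_mpath(&sta));
		return 0;
	}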





* [PATCH 4.14 011/119] mac80211: fix txq null pointer dereference
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (9 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 010/119] nl80211: fix null pointer dereference Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 012/119] mips: Loongson: Fix the link time qualifier of serial_exit() Greg Kroah-Hartman
                   ` (112 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Miaoqing Pan,
	Toke Høiland-Jørgensen, Johannes Berg, Sasha Levin

From: Miaoqing Pan <miaoqing@codeaurora.org>

[ Upstream commit 8ed31a264065ae92058ce54aa3cc8da8d81dc6d7 ]

If the interface type is P2P_DEVICE or NAN, reading the file
'/sys/kernel/debug/ieee80211/phyx/netdev:wlanx/aqm' will trigger a
NULL pointer dereference, because for those interface types the
pointer sdata->vif.txq is NULL.

Unable to handle kernel NULL pointer dereference at virtual address 00000011
CPU: 1 PID: 30936 Comm: cat Not tainted 4.14.104 #1
task: ffffffc0337e4880 task.stack: ffffff800cd20000
PC is at ieee80211_if_fmt_aqm+0x34/0xa0 [mac80211]
LR is at ieee80211_if_fmt_aqm+0x34/0xa0 [mac80211]
[...]
Process cat (pid: 30936, stack limit = 0xffffff800cd20000)
[...]
[<ffffff8000b7cd00>] ieee80211_if_fmt_aqm+0x34/0xa0 [mac80211]
[<ffffff8000b7c414>] ieee80211_if_read+0x60/0xbc [mac80211]
[<ffffff8000b7ccc4>] ieee80211_if_read_aqm+0x28/0x30 [mac80211]
[<ffffff80082eff94>] full_proxy_read+0x2c/0x48
[<ffffff80081eef00>] __vfs_read+0x2c/0xd4
[<ffffff80081ef084>] vfs_read+0x8c/0x108
[<ffffff80081ef494>] SyS_read+0x40/0x7c

Signed-off-by: Miaoqing Pan <miaoqing@codeaurora.org>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/r/1569549796-8223-1-git-send-email-miaoqing@codeaurora.org
[trim useless data from commit message]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 net/mac80211/debugfs_netdev.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
index c813207bb1236..928b6b0464b82 100644
--- a/net/mac80211/debugfs_netdev.c
+++ b/net/mac80211/debugfs_netdev.c
@@ -490,9 +490,14 @@ static ssize_t ieee80211_if_fmt_aqm(
 	const struct ieee80211_sub_if_data *sdata, char *buf, int buflen)
 {
 	struct ieee80211_local *local = sdata->local;
-	struct txq_info *txqi = to_txq_info(sdata->vif.txq);
+	struct txq_info *txqi;
 	int len;
 
+	if (!sdata->vif.txq)
+		return 0;
+
+	txqi = to_txq_info(sdata->vif.txq);
+
 	spin_lock_bh(&local->fq.lock);
 	rcu_read_lock();
 
@@ -659,7 +664,9 @@ static void add_common_files(struct ieee80211_sub_if_data *sdata)
 	DEBUGFS_ADD(rc_rateidx_vht_mcs_mask_5ghz);
 	DEBUGFS_ADD(hw_queues);
 
-	if (sdata->local->ops->wake_tx_queue)
+	if (sdata->local->ops->wake_tx_queue &&
+	    sdata->vif.type != NL80211_IFTYPE_P2P_DEVICE &&
+	    sdata->vif.type != NL80211_IFTYPE_NAN)
 		DEBUGFS_ADD(aqm);
 }
 
-- 
2.20.1





* [PATCH 4.14 012/119] mips: Loongson: Fix the link time qualifier of serial_exit()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (10 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 011/119] mac80211: fix txq " Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 013/119] net: hisilicon: Fix usage of uninitialized variable in function mdio_sc_cfg_reg_write() Greg Kroah-Hartman
                   ` (111 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Christophe JAILLET, Paul Burton,
	chenhc, ralf, jhogan, linux-mips, kernel-janitors, Sasha Levin

From: Christophe JAILLET <christophe.jaillet@wanadoo.fr>

[ Upstream commit 25b69a889b638b0b7e51e2c4fe717a66bec0e566 ]

'exit' functions should be marked as __exit, not __init.

Fixes: 85cc028817ef ("mips: make loongsoon serial driver explicitly modular")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: chenhc@lemote.com
Cc: ralf@linux-mips.org
Cc: jhogan@kernel.org
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: kernel-janitors@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/mips/loongson64/common/serial.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/loongson64/common/serial.c b/arch/mips/loongson64/common/serial.c
index ffefc1cb26121..98c3a7feb10f8 100644
--- a/arch/mips/loongson64/common/serial.c
+++ b/arch/mips/loongson64/common/serial.c
@@ -110,7 +110,7 @@ static int __init serial_init(void)
 }
 module_init(serial_init);
 
-static void __init serial_exit(void)
+static void __exit serial_exit(void)
 {
 	platform_device_unregister(&uart8250_device);
 }
-- 
2.20.1





* [PATCH 4.14 013/119] net: hisilicon: Fix usage of uninitialized variable in function mdio_sc_cfg_reg_write()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (11 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 012/119] mips: Loongson: Fix the link time qualifier of serial_exit() Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 014/119] r8152: Set macpassthru in reset_resume callback Greg Kroah-Hartman
                   ` (110 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Yizhuo, David S. Miller, Sasha Levin

From: Yizhuo <yzhai003@ucr.edu>

[ Upstream commit 53de429f4e88f538f7a8ec2b18be8c0cd9b2c8e1 ]

In mdio_sc_cfg_reg_write(), the variable "reg_value" could be left
uninitialized if regmap_read() fails. However, "reg_value" is then used
to decide the control flow in the following if statement, which is
potentially unsafe.

Signed-off-by: Yizhuo <yzhai003@ucr.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/ethernet/hisilicon/hns_mdio.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/hisilicon/hns_mdio.c b/drivers/net/ethernet/hisilicon/hns_mdio.c
index baf5cc251f329..9a3bc0994a1db 100644
--- a/drivers/net/ethernet/hisilicon/hns_mdio.c
+++ b/drivers/net/ethernet/hisilicon/hns_mdio.c
@@ -156,11 +156,15 @@ static int mdio_sc_cfg_reg_write(struct hns_mdio_device *mdio_dev,
 {
 	u32 time_cnt;
 	u32 reg_value;
+	int ret;
 
 	regmap_write(mdio_dev->subctrl_vbase, cfg_reg, set_val);
 
 	for (time_cnt = MDIO_TIMEOUT; time_cnt; time_cnt--) {
-		regmap_read(mdio_dev->subctrl_vbase, st_reg, &reg_value);
+		ret = regmap_read(mdio_dev->subctrl_vbase, st_reg, &reg_value);
+		if (ret)
+			return ret;
+
 		reg_value &= st_msk;
 		if ((!!check_st) == (!!reg_value))
 			break;
-- 
2.20.1
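
The rule the fix enforces is generic: when a function fills an output
parameter and returns a status, the output must not be consumed unless
the status says it is valid.  A tiny stand-alone C example of the fixed
shape (generic names, not the hns code):

	#include <stdio.h>

	/* Pretend register read: fills *val and returns 0 on success. */
	static int regmap_read_sim(unsigned int reg, unsigned int *val)
	{
		if (reg == 0xdead)
			return -5;	/* simulate -EIO; *val is left untouched */
		*val = reg ^ 0xffu;
		return 0;
	}

	static int poll_status(unsigned int reg)
	{
		unsigned int reg_value;	/* deliberately not pre-initialized */
		int ret = regmap_read_sim(reg, &reg_value);

		if (ret)
			return ret;	/* bail out before reg_value is ever used */

		return (reg_value & 0x1) ? 1 : 0;
	}

	int main(void)
	{
		printf("good read: %d\n", poll_status(0x10));
		printf("bad read:  %d\n", poll_status(0xdead));
		return 0;
	}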





* [PATCH 4.14 014/119] r8152: Set macpassthru in reset_resume callback
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (12 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 013/119] net: hisilicon: Fix usage of uninitialized variable in function mdio_sc_cfg_reg_write() Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 015/119] namespace: fix namespace.pl script to support relative paths Greg Kroah-Hartman
                   ` (109 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Kai-Heng Feng, David S. Miller, Sasha Levin

From: Kai-Heng Feng <kai.heng.feng@canonical.com>

[ Upstream commit a54cdeeb04fc719e4c7f19d6e28dba7ea86cee5b ]

r8152 may fail to establish a network connection after resuming from
system suspend.

If the USB port that the r8152 is connected to loses power during system
suspend, the MAC address that was written before is lost. The reason is
that the MAC address doesn't get written again in the reset_resume
callback.

So let's set the MAC address again in the reset_resume callback. Also
remove the unnecessary lock, as no other locking attempt will happen
during reset_resume.

Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/usb/r8152.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index 455eec3c46942..c0964281ab983 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -4465,10 +4465,9 @@ static int rtl8152_reset_resume(struct usb_interface *intf)
 	struct r8152 *tp = usb_get_intfdata(intf);
 
 	clear_bit(SELECTIVE_SUSPEND, &tp->flags);
-	mutex_lock(&tp->control);
 	tp->rtl_ops.init(tp);
 	queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0);
-	mutex_unlock(&tp->control);
+	set_ethernet_addr(tp);
 	return rtl8152_resume(intf);
 }
 
-- 
2.20.1





* [PATCH 4.14 015/119] namespace: fix namespace.pl script to support relative paths
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (13 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 014/119] r8152: Set macpassthru in reset_resume callback Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 016/119] md/raid0: fix warning message for parameter default_layout Greg Kroah-Hartman
                   ` (108 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Jacob Keller, Randy Dunlap,
	Masahiro Yamada, Sasha Levin

From: Jacob Keller <jacob.e.keller@intel.com>

[ Upstream commit 82fdd12b95727640c9a8233c09d602e4518e71f7 ]

The namespace.pl script does not work properly if objtree is not set to
an absolute path. The do_nm function is run from within the find
function, which changes directories.

Because of this, appending objtree, $File::Find::dir, and $source will
return a path which is not valid from the current directory.

This used to work when objtree was set to an absolute path when using
"make namespacecheck". It appears to have not worked when calling
./scripts/namespace.pl directly.

This behavior was changed in 7e1c04779efd ("kbuild: Use relative path
for $(objtree)", 2014-05-14)

Rather than fixing the Makefile to set objtree to an absolute path, just
fix namespace.pl to work when srctree and objtree are relative. Also fix
the script to use an absolute path for these by default.

Use the File::Spec module for this purpose. It's been part of perl
5 since 5.005.

The curdir() function is used to get the current directory when the
objtree and srctree aren't set in the environment.

rel2abs() is used to convert possibly relative objtree and srctree
environment variables to absolute paths.

Finally, the catfile() function is used instead of appending paths
together as strings, since this is more robust when joining paths.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 scripts/namespace.pl | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/scripts/namespace.pl b/scripts/namespace.pl
index 729c547fc9e1e..30c43e639db8a 100755
--- a/scripts/namespace.pl
+++ b/scripts/namespace.pl
@@ -65,13 +65,14 @@
 use warnings;
 use strict;
 use File::Find;
+use File::Spec;
 
 my $nm = ($ENV{'NM'} || "nm") . " -p";
 my $objdump = ($ENV{'OBJDUMP'} || "objdump") . " -s -j .comment";
-my $srctree = "";
-my $objtree = "";
-$srctree = "$ENV{'srctree'}/" if (exists($ENV{'srctree'}));
-$objtree = "$ENV{'objtree'}/" if (exists($ENV{'objtree'}));
+my $srctree = File::Spec->curdir();
+my $objtree = File::Spec->curdir();
+$srctree = File::Spec->rel2abs($ENV{'srctree'}) if (exists($ENV{'srctree'}));
+$objtree = File::Spec->rel2abs($ENV{'objtree'}) if (exists($ENV{'objtree'}));
 
 if ($#ARGV != -1) {
 	print STDERR "usage: $0 takes no parameters\n";
@@ -231,9 +232,9 @@ sub do_nm
 	}
 	($source = $basename) =~ s/\.o$//;
 	if (-e "$source.c" || -e "$source.S") {
-		$source = "$objtree$File::Find::dir/$source";
+		$source = File::Spec->catfile($objtree, $File::Find::dir, $source)
 	} else {
-		$source = "$srctree$File::Find::dir/$source";
+		$source = File::Spec->catfile($srctree, $File::Find::dir, $source)
 	}
 	if (! -e "$source.c" && ! -e "$source.S") {
 		# No obvious source, exclude the object if it is conglomerate
-- 
2.20.1





* [PATCH 4.14 016/119] md/raid0: fix warning message for parameter default_layout
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (14 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 015/119] namespace: fix namespace.pl script to support relative paths Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 017/119] Revert "drm/radeon: Fix EEH during kexec" Greg Kroah-Hartman
                   ` (107 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, NeilBrown, Ivan Topolsky, Song Liu,
	Sasha Levin

From: Song Liu <songliubraving@fb.com>

[ Upstream commit 3874d73e06c9b9dc15de0b7382fc223986d75571 ]

The message should match the parameter, i.e. raid0.default_layout.

Fixes: c84a1372df92 ("md/raid0: avoid RAID0 data corruption due to layout confusion.")
Cc: NeilBrown <neilb@suse.de>
Reported-by: Ivan Topolsky <doktor.yak@gmail.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/md/raid0.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 28fb717217706..449c4dd060fcd 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -158,7 +158,7 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
 	} else {
 		pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n",
 		       mdname(mddev));
-		pr_err("md/raid0: please set raid.default_layout to 1 or 2\n");
+		pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n");
 		err = -ENOTSUPP;
 		goto abort;
 	}
-- 
2.20.1





* [PATCH 4.14 017/119] Revert "drm/radeon: Fix EEH during kexec"
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (15 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 016/119] md/raid0: fix warning message for parameter default_layout Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 018/119] ocfs2: fix panic due to ocfs2_wq is null Greg Kroah-Hartman
                   ` (106 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Alex Deucher, Sasha Levin

From: Alex Deucher <alexander.deucher@amd.com>

[ Upstream commit 8d13c187c42e110625d60094668a8f778c092879 ]

This reverts commit 6f7fe9a93e6c09bf988c5059403f5f88e17e21e6.

This breaks some boards.  Maybe just enable this on PPC for
now?

Bug: https://bugzilla.kernel.org/show_bug.cgi?id=205147
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/gpu/drm/radeon/radeon_drv.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 54d97dd5780a1..f4becad0a78c0 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -368,19 +368,11 @@ radeon_pci_remove(struct pci_dev *pdev)
 static void
 radeon_pci_shutdown(struct pci_dev *pdev)
 {
-	struct drm_device *ddev = pci_get_drvdata(pdev);
-
 	/* if we are running in a VM, make sure the device
 	 * torn down properly on reboot/shutdown
 	 */
 	if (radeon_device_is_virtual())
 		radeon_pci_remove(pdev);
-
-	/* Some adapters need to be suspended before a
-	* shutdown occurs in order to prevent an error
-	* during kexec.
-	*/
-	radeon_suspend_kms(ddev, true, true, false);
 }
 
 static int radeon_pmops_suspend(struct device *dev)
-- 
2.20.1





* [PATCH 4.14 018/119] ocfs2: fix panic due to ocfs2_wq is null
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (16 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 017/119] Revert "drm/radeon: Fix EEH during kexec" Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 019/119] ipv4: Return -ENETUNREACH if we cant create route but saddr is valid Greg Kroah-Hartman
                   ` (105 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Yi Li, Joseph Qi, Mark Fasheh,
	Joel Becker, Junxiao Bi, Changwei Ge, Gang He, Jun Piao,
	Andrew Morton, Linus Torvalds

From: Yi Li <yilikernel@gmail.com>

commit b918c43021baaa3648de09e19a4a3dd555a45f40 upstream.

mount.ocfs2 fails when reading the ocfs2 filesystem superblock encounters
an error: ocfs2_initialize_super() returns before allocating ocfs2_wq, and
ocfs2_dismount_volume() then triggers the following panic.

  Oct 15 16:09:27 cnwarekv-205120 kernel: On-disk corruption discovered.Please run fsck.ocfs2 once the filesystem is unmounted.
  Oct 15 16:09:27 cnwarekv-205120 kernel: (mount.ocfs2,22804,44): ocfs2_read_locked_inode:537 ERROR: status = -30
  Oct 15 16:09:27 cnwarekv-205120 kernel: (mount.ocfs2,22804,44): ocfs2_init_global_system_inodes:458 ERROR: status = -30
  Oct 15 16:09:27 cnwarekv-205120 kernel: (mount.ocfs2,22804,44): ocfs2_init_global_system_inodes:491 ERROR: status = -30
  Oct 15 16:09:27 cnwarekv-205120 kernel: (mount.ocfs2,22804,44): ocfs2_initialize_super:2313 ERROR: status = -30
  Oct 15 16:09:27 cnwarekv-205120 kernel: (mount.ocfs2,22804,44): ocfs2_fill_super:1033 ERROR: status = -30
  ------------[ cut here ]------------
  Oops: 0002 [#1] SMP NOPTI
  CPU: 1 PID: 11753 Comm: mount.ocfs2 Tainted: G  E
        4.14.148-200.ckv.x86_64 #1
  Hardware name: Sugon H320-G30/35N16-US, BIOS 0SSDX017 12/21/2018
  task: ffff967af0520000 task.stack: ffffa5f05484000
  RIP: 0010:mutex_lock+0x19/0x20
  Call Trace:
    flush_workqueue+0x81/0x460
    ocfs2_shutdown_local_alloc+0x47/0x440 [ocfs2]
    ocfs2_dismount_volume+0x84/0x400 [ocfs2]
    ocfs2_fill_super+0xa4/0x1270 [ocfs2]
    ? ocfs2_initialize_super.isa.211+0xf20/0xf20 [ocfs2]
    mount_bdev+0x17f/0x1c0
    mount_fs+0x3a/0x160
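
As a minimal sketch of the underlying pattern (illustrative only, not
the ocfs2 sources): a tear-down path that can run after a failed setup
must tolerate the workqueue pointer still being NULL.

	#include <linux/workqueue.h>

	struct foo_super {
		struct workqueue_struct *wq;	/* NULL if setup failed early */
	};

	static void foo_shutdown(struct foo_super *s)
	{
		if (s->wq)			/* guard the flush on error paths */
			flush_workqueue(s->wq);
	}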

Link: http://lkml.kernel.org/r/1571139611-24107-1-git-send-email-yili@winhong.com
Signed-off-by: Yi Li <yilikernel@gmail.com>
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Changwei Ge <gechangwei@live.cn>
Cc: Gang He <ghe@suse.com>
Cc: Jun Piao <piaojun@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 fs/ocfs2/journal.c    |    3 ++-
 fs/ocfs2/localalloc.c |    3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

--- a/fs/ocfs2/journal.c
+++ b/fs/ocfs2/journal.c
@@ -231,7 +231,8 @@ void ocfs2_recovery_exit(struct ocfs2_su
 	/* At this point, we know that no more recovery threads can be
 	 * launched, so wait for any recovery completion work to
 	 * complete. */
-	flush_workqueue(osb->ocfs2_wq);
+	if (osb->ocfs2_wq)
+		flush_workqueue(osb->ocfs2_wq);
 
 	/*
 	 * Now that recovery is shut down, and the osb is about to be
--- a/fs/ocfs2/localalloc.c
+++ b/fs/ocfs2/localalloc.c
@@ -391,7 +391,8 @@ void ocfs2_shutdown_local_alloc(struct o
 	struct ocfs2_dinode *alloc = NULL;
 
 	cancel_delayed_work(&osb->la_enable_wq);
-	flush_workqueue(osb->ocfs2_wq);
+	if (osb->ocfs2_wq)
+		flush_workqueue(osb->ocfs2_wq);
 
 	if (osb->local_alloc_state == OCFS2_LA_UNUSED)
 		goto out;




* [PATCH 4.14 019/119] ipv4: Return -ENETUNREACH if we cant create route but saddr is valid
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (17 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 018/119] ocfs2: fix panic due to ocfs2_wq is null Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 020/119] net: bcmgenet: Fix RGMII_MODE_EN value for GENET v1/2/3 Greg Kroah-Hartman
                   ` (104 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Stefan Walter, Stefano Brivio,
	David S. Miller, Benjamin Coddington, Gonzalo Siero

From: Stefano Brivio <sbrivio@redhat.com>

[ Upstream commit 595e0651d0296bad2491a4a29a7a43eae6328b02 ]

...instead of -EINVAL. An issue was found with older kernel versions
while unplugging an NFS client with pending RPCs, and the wrong error
code here prevented it from recovering once the link was back up with a
configured address.

Incidentally, this is not an issue anymore since commit 4f8943f80883
("SUNRPC: Replace direct task wakeups from softirq context"), included
in 5.2-rc7, had the effect of decoupling the forwarding of this error
by using SO_ERROR in xs_wake_error(), as pointed out by Benjamin
Coddington.

To the best of my knowledge, this isn't currently causing any further
issue, but the error code doesn't look appropriate anyway, and we
might hit this in other paths as well.

In detail, as analysed by Gonzalo Siero, once the route is deleted
because the interface is down, and can't be resolved and we return
-EINVAL here, this ends up, courtesy of inet_sk_rebuild_header(),
as the socket error seen by tcp_write_err(), called by
tcp_retransmit_timer().

In turn, tcp_write_err() indirectly calls xs_error_report(), which
wakes up the RPC pending tasks with a status of -EINVAL. This is then
seen by call_status() in the SUN RPC implementation, which aborts the
RPC call calling rpc_exit(), instead of handling this as a
potentially temporary condition, i.e. as a timeout.

Return -EINVAL only if the input parameters passed to
ip_route_output_key_hash_rcu() are actually invalid (this is the case
if the specified source address is multicast, limited broadcast or
all zeroes), but return -ENETUNREACH in all cases where, at the given
moment, the given source address doesn't allow resolving the route.

While at it, drop the initialisation of err to -ENETUNREACH, which
was added to __ip_route_output_key() back then by commit
0315e3827048 ("net: Fix behaviour of unreachable, blackhole and
prohibit routes"), but actually had no effect, as it was, and is,
overwritten by the fib_lookup() return code assignment, and anyway
ignored in all other branches, including the if (fl4->saddr) one:
I find this rather confusing, as it would look like -ENETUNREACH is
the "default" error, while that statement has no effect.

Also note that after commit fc75fc8339e7 ("ipv4: dont create routes
on down devices"), we would get -ENETUNREACH if the device is down,
but -EINVAL if the source address is specified and we can't resolve
the route, and this appears to be rather inconsistent.
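
Schematically, the error selection described above looks like this
(a sketch of the rule, not the exact route.c code; route_resolved is a
stand-in for the outcome of the lookup):

	if (ipv4_is_multicast(saddr) || ipv4_is_lbcast(saddr) ||
	    ipv4_is_zeronet(saddr))
		return ERR_PTR(-EINVAL);	/* the parameter itself is bogus */
	if (!route_resolved)
		return ERR_PTR(-ENETUNREACH);	/* valid saddr, no route right now */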

Reported-by: Stefan Walter <walteste@inf.ethz.ch>
Analysed-by: Benjamin Coddington <bcodding@redhat.com>
Analysed-by: Gonzalo Siero <gsierohu@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/route.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -2351,14 +2351,17 @@ struct rtable *ip_route_output_key_hash_
 	int orig_oif = fl4->flowi4_oif;
 	unsigned int flags = 0;
 	struct rtable *rth;
-	int err = -ENETUNREACH;
+	int err;
 
 	if (fl4->saddr) {
-		rth = ERR_PTR(-EINVAL);
 		if (ipv4_is_multicast(fl4->saddr) ||
 		    ipv4_is_lbcast(fl4->saddr) ||
-		    ipv4_is_zeronet(fl4->saddr))
+		    ipv4_is_zeronet(fl4->saddr)) {
+			rth = ERR_PTR(-EINVAL);
 			goto out;
+		}
+
+		rth = ERR_PTR(-ENETUNREACH);
 
 		/* I removed check for oif == dev_out->oif here.
 		   It was wrong for two reasons:




* [PATCH 4.14 020/119] net: bcmgenet: Fix RGMII_MODE_EN value for GENET v1/2/3
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (18 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 019/119] ipv4: Return -ENETUNREACH if we cant create route but saddr is valid Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 021/119] net: bcmgenet: Set phydev->dev_flags only for internal PHYs Greg Kroah-Hartman
                   ` (103 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Florian Fainelli, Doug Berger,
	David S. Miller

From: Florian Fainelli <f.fainelli@gmail.com>

[ Upstream commit efb86fede98cdc70b674692ff617b1162f642c49 ]

The RGMII_MODE_EN bit sat at position 0 for GENET versions 1 through 3
and moved to position 6 for GENET v4 and above; account for that
difference.

Fixes: aa09677cba42 ("net: bcmgenet: add MDIO routines")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Doug Berger <opendmb@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/broadcom/genet/bcmgenet.h |    1 +
 drivers/net/ethernet/broadcom/genet/bcmmii.c   |    6 +++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -368,6 +368,7 @@ struct bcmgenet_mib_counters {
 #define  EXT_PWR_DOWN_PHY_EN		(1 << 20)
 
 #define EXT_RGMII_OOB_CTRL		0x0C
+#define  RGMII_MODE_EN_V123		(1 << 0)
 #define  RGMII_LINK			(1 << 4)
 #define  OOB_DISABLE			(1 << 5)
 #define  RGMII_MODE_EN			(1 << 6)
--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
@@ -277,7 +277,11 @@ int bcmgenet_mii_config(struct net_devic
 	 */
 	if (priv->ext_phy) {
 		reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
-		reg |= RGMII_MODE_EN | id_mode_dis;
+		reg |= id_mode_dis;
+		if (GENET_IS_V1(priv) || GENET_IS_V2(priv) || GENET_IS_V3(priv))
+			reg |= RGMII_MODE_EN_V123;
+		else
+			reg |= RGMII_MODE_EN;
 		bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
 	}
 




* [PATCH 4.14 021/119] net: bcmgenet: Set phydev->dev_flags only for internal PHYs
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (19 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 020/119] net: bcmgenet: Fix RGMII_MODE_EN value for GENET v1/2/3 Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 20:59 ` [PATCH 4.14 022/119] net: i82596: fix dma_alloc_attr for sni_82596 Greg Kroah-Hartman
                   ` (102 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Florian Fainelli, Doug Berger,
	David S. Miller

From: Florian Fainelli <f.fainelli@gmail.com>

[ Upstream commit 92696286f3bb37ba50e4bd8d1beb24afb759a799 ]

phydev->dev_flags is entirely dependent on the PHY device driver which
is going to be used; setting the internal GENET PHY revision in those
bits only makes sense when drivers/net/phy/bcm7xxx.c is the PHY driver
being used.

Fixes: 487320c54143 ("net: bcmgenet: communicate integrated PHY revision to PHY driver")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Doug Berger <opendmb@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/broadcom/genet/bcmmii.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
@@ -296,11 +296,12 @@ int bcmgenet_mii_probe(struct net_device
 	struct bcmgenet_priv *priv = netdev_priv(dev);
 	struct device_node *dn = priv->pdev->dev.of_node;
 	struct phy_device *phydev;
-	u32 phy_flags;
+	u32 phy_flags = 0;
 	int ret;
 
 	/* Communicate the integrated PHY revision */
-	phy_flags = priv->gphy_rev;
+	if (priv->internal_phy)
+		phy_flags = priv->gphy_rev;
 
 	/* Initialize link state variables that bcmgenet_mii_setup() uses */
 	priv->old_link = -1;




* [PATCH 4.14 022/119] net: i82596: fix dma_alloc_attr for sni_82596
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (20 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 021/119] net: bcmgenet: Set phydev->dev_flags only for internal PHYs Greg Kroah-Hartman
@ 2019-10-27 20:59 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 023/119] net: stmmac: disable/enable ptp_ref_clk in suspend/resume flow Greg Kroah-Hartman
                   ` (101 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 20:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Thomas Bogendoerfer, David S. Miller

From: Thomas Bogendoerfer <tbogendoerfer@suse.de>

[ Upstream commit 61c1d33daf7b5146f44d4363b3322f8cda6a6c43 ]

Commit 7f683b920479 ("i825xx: switch to switch to dma_alloc_attrs")
switched dma allocation over to dma_alloc_attr, but didn't convert
the SNI part to request consistent DMA memory. This broke sni_82596,
since that driver doesn't do dma_cache_sync for performance reasons.
Fix this by using different DMA_ATTRs for lasi_82596 and sni_82596.
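
For context, a rough sketch of the difference (illustrative, not the
driver code): memory obtained with DMA_ATTR_NON_CONSISTENT is not kept
cache-coherent, so every CPU-side update must be written back by hand
before the device may look at it, which is exactly the dma_cache_sync()
step that sni_82596 omits.

	/* non-consistent mapping: explicit cache maintenance required */
	buf = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL,
			      DMA_ATTR_NON_CONSISTENT);
	memcpy(buf, desc, len);
	dma_cache_sync(dev, buf, len, DMA_TO_DEVICE);	/* flush before DMA */

	/* coherent mapping (attrs == 0): no manual maintenance needed */
	buf = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL, 0);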

Fixes: 7f683b920479 ("i825xx: switch to switch to dma_alloc_attrs")
Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/i825xx/lasi_82596.c |    4 +++-
 drivers/net/ethernet/i825xx/lib82596.c   |    4 ++--
 drivers/net/ethernet/i825xx/sni_82596.c  |    4 +++-
 3 files changed, 8 insertions(+), 4 deletions(-)

--- a/drivers/net/ethernet/i825xx/lasi_82596.c
+++ b/drivers/net/ethernet/i825xx/lasi_82596.c
@@ -96,6 +96,8 @@
 
 #define OPT_SWAP_PORT	0x0001	/* Need to wordswp on the MPU port */
 
+#define LIB82596_DMA_ATTR	DMA_ATTR_NON_CONSISTENT
+
 #define DMA_WBACK(ndev, addr, len) \
 	do { dma_cache_sync((ndev)->dev.parent, (void *)addr, len, DMA_TO_DEVICE); } while (0)
 
@@ -199,7 +201,7 @@ static int __exit lan_remove_chip(struct
 
 	unregister_netdev (dev);
 	dma_free_attrs(&pdev->dev, sizeof(struct i596_private), lp->dma,
-		       lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
+		       lp->dma_addr, LIB82596_DMA_ATTR);
 	free_netdev (dev);
 	return 0;
 }
--- a/drivers/net/ethernet/i825xx/lib82596.c
+++ b/drivers/net/ethernet/i825xx/lib82596.c
@@ -1065,7 +1065,7 @@ static int i82596_probe(struct net_devic
 
 	dma = dma_alloc_attrs(dev->dev.parent, sizeof(struct i596_dma),
 			      &lp->dma_addr, GFP_KERNEL,
-			      DMA_ATTR_NON_CONSISTENT);
+			      LIB82596_DMA_ATTR);
 	if (!dma) {
 		printk(KERN_ERR "%s: Couldn't get shared memory\n", __FILE__);
 		return -ENOMEM;
@@ -1087,7 +1087,7 @@ static int i82596_probe(struct net_devic
 	i = register_netdev(dev);
 	if (i) {
 		dma_free_attrs(dev->dev.parent, sizeof(struct i596_dma),
-			       dma, lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
+			       dma, lp->dma_addr, LIB82596_DMA_ATTR);
 		return i;
 	}
 
--- a/drivers/net/ethernet/i825xx/sni_82596.c
+++ b/drivers/net/ethernet/i825xx/sni_82596.c
@@ -23,6 +23,8 @@
 
 static const char sni_82596_string[] = "snirm_82596";
 
+#define LIB82596_DMA_ATTR	0
+
 #define DMA_WBACK(priv, addr, len)     do { } while (0)
 #define DMA_INV(priv, addr, len)       do { } while (0)
 #define DMA_WBACK_INV(priv, addr, len) do { } while (0)
@@ -151,7 +153,7 @@ static int sni_82596_driver_remove(struc
 
 	unregister_netdev(dev);
 	dma_free_attrs(dev->dev.parent, sizeof(struct i596_private), lp->dma,
-		       lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
+		       lp->dma_addr, LIB82596_DMA_ATTR);
 	iounmap(lp->ca);
 	iounmap(lp->mpu_port);
 	free_netdev (dev);




* [PATCH 4.14 023/119] net: stmmac: disable/enable ptp_ref_clk in suspend/resume flow
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (21 preceding siblings ...)
  2019-10-27 20:59 ` [PATCH 4.14 022/119] net: i82596: fix dma_alloc_attr for sni_82596 Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true Greg Kroah-Hartman
                   ` (100 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Biao Huang, David S. Miller

From: Biao Huang <biao.huang@mediatek.com>

[ Upstream commit e497c20e203680aba9ccf7bb475959595908ca7e ]

disable ptp_ref_clk in suspend flow, and enable it in resume flow.

Fixes: f573c0b9c4e0 ("stmmac: move stmmac_clk, pclk, clk_ptp_ref and stmmac_rst to platform structure")
Signed-off-by: Biao Huang <biao.huang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -4402,8 +4402,10 @@ int stmmac_suspend(struct device *dev)
 		priv->hw->mac->set_mac(priv->ioaddr, false);
 		pinctrl_pm_select_sleep_state(priv->device);
 		/* Disable clock in case of PWM is off */
-		clk_disable(priv->plat->pclk);
-		clk_disable(priv->plat->stmmac_clk);
+		if (priv->plat->clk_ptp_ref)
+			clk_disable_unprepare(priv->plat->clk_ptp_ref);
+		clk_disable_unprepare(priv->plat->pclk);
+		clk_disable_unprepare(priv->plat->stmmac_clk);
 	}
 	spin_unlock_irqrestore(&priv->lock, flags);
 
@@ -4468,8 +4470,10 @@ int stmmac_resume(struct device *dev)
 	} else {
 		pinctrl_pm_select_default_state(priv->device);
 		/* enable the clk previously disabled */
-		clk_enable(priv->plat->stmmac_clk);
-		clk_enable(priv->plat->pclk);
+		clk_prepare_enable(priv->plat->stmmac_clk);
+		clk_prepare_enable(priv->plat->pclk);
+		if (priv->plat->clk_ptp_ref)
+			clk_prepare_enable(priv->plat->clk_ptp_ref);
 		/* reset the phy so that it's ready */
 		if (priv->mii)
 			stmmac_mdio_reset(priv->mii);




* [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (22 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 023/119] net: stmmac: disable/enable ptp_ref_clk in suspend/resume flow Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-31  7:54   ` Rantala, Tommi T. (Nokia - FI/Espoo)
  2019-10-27 21:00 ` [PATCH 4.14 025/119] net: avoid potential infinite loop in tc_ctl_action() Greg Kroah-Hartman
                   ` (99 subsequent siblings)
  123 siblings, 1 reply; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, syzbot+d44f7bbebdea49dbc84a,
	Xin Long, Marcelo Ricardo Leitner, David S. Miller

From: Xin Long <lucien.xin@gmail.com>

[ Upstream commit 63dfb7938b13fa2c2fbcb45f34d065769eb09414 ]

syzbot reported a memory leak:

  BUG: memory leak, unreferenced object 0xffff888120b3d380 (size 64):
  backtrace:

    [...] slab_alloc mm/slab.c:3319 [inline]
    [...] kmem_cache_alloc+0x13f/0x2c0 mm/slab.c:3483
    [...] sctp_bucket_create net/sctp/socket.c:8523 [inline]
    [...] sctp_get_port_local+0x189/0x5a0 net/sctp/socket.c:8270
    [...] sctp_do_bind+0xcc/0x200 net/sctp/socket.c:402
    [...] sctp_bindx_add+0x4b/0xd0 net/sctp/socket.c:497
    [...] sctp_setsockopt_bindx+0x156/0x1b0 net/sctp/socket.c:1022
    [...] sctp_setsockopt net/sctp/socket.c:4641 [inline]
    [...] sctp_setsockopt+0xaea/0x2dc0 net/sctp/socket.c:4611
    [...] sock_common_setsockopt+0x38/0x50 net/core/sock.c:3147
    [...] __sys_setsockopt+0x10f/0x220 net/socket.c:2084
    [...] __do_sys_setsockopt net/socket.c:2100 [inline]

It is caused by sending msgs without binding a port: in the path
inet_sendmsg() -> inet_send_prepare() -> inet_autobind() ->
.get_port/sctp_get_port(), sp->bind_hash will be set while bp->port is
not. Later, when binding another port by sctp_setsockopt_bindx(), a new
bucket will be created as bp->port is not set.

sctp's autobind is supposed to go through sctp_autobind(), which does
everything needed, including setting bp->port. Since sctp_autobind() is
called in sctp_sendmsg() if the sk is not yet bound, the inet autobind
should have been skipped.

This patch avoids calling inet_autobind() in inet_send_prepare() by
changing sctp_prot .no_autobind to true, and also removes the unused
.get_port.
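
For reference, the generic path being short-circuited looks roughly like
this (paraphrased from inet_send_prepare(), not quoted verbatim): with
.no_autobind set, the inet core no longer auto-binds the socket and
leaves that entirely to sctp_sendmsg().

	/* paraphrase of the autobind check in inet_send_prepare() */
	if (!inet_sk(sk)->inet_num && !sk->sk_prot->no_autobind &&
	    inet_autobind(sk))
		return -EAGAIN;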

Reported-by: syzbot+d44f7bbebdea49dbc84a@syzkaller.appspotmail.com
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/sctp/socket.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -8313,7 +8313,7 @@ struct proto sctp_prot = {
 	.backlog_rcv =	sctp_backlog_rcv,
 	.hash        =	sctp_hash,
 	.unhash      =	sctp_unhash,
-	.get_port    =	sctp_get_port,
+	.no_autobind =	true,
 	.obj_size    =  sizeof(struct sctp_sock),
 	.sysctl_mem  =  sysctl_sctp_mem,
 	.sysctl_rmem =  sysctl_sctp_rmem,
@@ -8352,7 +8352,7 @@ struct proto sctpv6_prot = {
 	.backlog_rcv	= sctp_backlog_rcv,
 	.hash		= sctp_hash,
 	.unhash		= sctp_unhash,
-	.get_port	= sctp_get_port,
+	.no_autobind	= true,
 	.obj_size	= sizeof(struct sctp6_sock),
 	.sysctl_mem	= sysctl_sctp_mem,
 	.sysctl_rmem	= sysctl_sctp_rmem,




* [PATCH 4.14 025/119] net: avoid potential infinite loop in tc_ctl_action()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (23 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 026/119] MIPS: Treat Loongson Extensions as ASEs Greg Kroah-Hartman
                   ` (98 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Eric Dumazet,
	syzbot+cf0adbb9c28c8866c788, David S. Miller

From: Eric Dumazet <edumazet@google.com>

[ Upstream commit 39f13ea2f61b439ebe0060393e9c39925c9ee28c ]

tc_ctl_action() has the ability to loop forever if tcf_action_add()
returns -EAGAIN.

This special case was put in place for when a module needs to be loaded,
but it turns out that tcf_add_notify() can also return -EAGAIN
if the socket sk_rcvbuf limit is hit.

We need to separate the two cases, and only loop for the module
loading case.

While we are at it, add a limit of 10 attempts since unbounded
loops are always scary.
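
The resulting retry logic is the usual bounded-retry shape (sketch only;
try_to_add_action() is a hypothetical stand-in for the real call):

	int ret, loop;

	for (loop = 0; loop < 10; loop++) {
		ret = try_to_add_action();
		if (ret != -EAGAIN)
			break;		/* success or a hard error: stop retrying */
	}
	if (ret)
		return ret;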

syzbot repro was something like :

socket(PF_NETLINK, SOCK_RAW|SOCK_NONBLOCK, NETLINK_ROUTE) = 3
write(3, ..., 38) = 38
setsockopt(3, SOL_SOCKET, SO_RCVBUF, [0], 4) = 0
sendmsg(3, {msg_name(0)=NULL, msg_iov(1)=[{..., 388}], msg_controllen=0, msg_flags=0x10}, ...)

NMI backtrace for cpu 0
CPU: 0 PID: 1054 Comm: khungtaskd Not tainted 5.4.0-rc1+ #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x172/0x1f0 lib/dump_stack.c:113
 nmi_cpu_backtrace.cold+0x70/0xb2 lib/nmi_backtrace.c:101
 nmi_trigger_cpumask_backtrace+0x23b/0x28b lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:205 [inline]
 watchdog+0x9d0/0xef0 kernel/hung_task.c:289
 kthread+0x361/0x430 kernel/kthread.c:255
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 8859 Comm: syz-executor910 Not tainted 5.4.0-rc1+ #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:arch_local_save_flags arch/x86/include/asm/paravirt.h:751 [inline]
RIP: 0010:lockdep_hardirqs_off+0x1df/0x2e0 kernel/locking/lockdep.c:3453
Code: 5c 08 00 00 5b 41 5c 41 5d 5d c3 48 c7 c0 58 1d f3 88 48 ba 00 00 00 00 00 fc ff df 48 c1 e8 03 80 3c 10 00 0f 85 d3 00 00 00 <48> 83 3d 21 9e 99 07 00 0f 84 b9 00 00 00 9c 58 0f 1f 44 00 00 f6
RSP: 0018:ffff8880a6f3f1b8 EFLAGS: 00000046
RAX: 1ffffffff11e63ab RBX: ffff88808c9c6080 RCX: 0000000000000000
RDX: dffffc0000000000 RSI: 0000000000000000 RDI: ffff88808c9c6914
RBP: ffff8880a6f3f1d0 R08: ffff88808c9c6080 R09: fffffbfff16be5d1
R10: fffffbfff16be5d0 R11: 0000000000000003 R12: ffffffff8746591f
R13: ffff88808c9c6080 R14: ffffffff8746591f R15: 0000000000000003
FS:  00000000011e4880(0000) GS:ffff8880ae900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffff600400 CR3: 00000000a8920000 CR4: 00000000001406e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 trace_hardirqs_off+0x62/0x240 kernel/trace/trace_preemptirq.c:45
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:108 [inline]
 _raw_spin_lock_irqsave+0x6f/0xcd kernel/locking/spinlock.c:159
 __wake_up_common_lock+0xc8/0x150 kernel/sched/wait.c:122
 __wake_up+0xe/0x10 kernel/sched/wait.c:142
 netlink_unlock_table net/netlink/af_netlink.c:466 [inline]
 netlink_unlock_table net/netlink/af_netlink.c:463 [inline]
 netlink_broadcast_filtered+0x705/0xb80 net/netlink/af_netlink.c:1514
 netlink_broadcast+0x3a/0x50 net/netlink/af_netlink.c:1534
 rtnetlink_send+0xdd/0x110 net/core/rtnetlink.c:714
 tcf_add_notify net/sched/act_api.c:1343 [inline]
 tcf_action_add+0x243/0x370 net/sched/act_api.c:1362
 tc_ctl_action+0x3b5/0x4bc net/sched/act_api.c:1410
 rtnetlink_rcv_msg+0x463/0xb00 net/core/rtnetlink.c:5386
 netlink_rcv_skb+0x177/0x450 net/netlink/af_netlink.c:2477
 rtnetlink_rcv+0x1d/0x30 net/core/rtnetlink.c:5404
 netlink_unicast_kernel net/netlink/af_netlink.c:1302 [inline]
 netlink_unicast+0x531/0x710 net/netlink/af_netlink.c:1328
 netlink_sendmsg+0x8a5/0xd60 net/netlink/af_netlink.c:1917
 sock_sendmsg_nosec net/socket.c:637 [inline]
 sock_sendmsg+0xd7/0x130 net/socket.c:657
 ___sys_sendmsg+0x803/0x920 net/socket.c:2311
 __sys_sendmsg+0x105/0x1d0 net/socket.c:2356
 __do_sys_sendmsg net/socket.c:2365 [inline]
 __se_sys_sendmsg net/socket.c:2363 [inline]
 __x64_sys_sendmsg+0x78/0xb0 net/socket.c:2363
 do_syscall_64+0xfa/0x760 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x440939

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot+cf0adbb9c28c8866c788@syzkaller.appspotmail.com
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/sched/act_api.c |   13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -1072,10 +1072,16 @@ tcf_add_notify(struct net *net, struct n
 static int tcf_action_add(struct net *net, struct nlattr *nla,
 			  struct nlmsghdr *n, u32 portid, int ovr)
 {
-	int ret = 0;
+	int loop, ret;
 	LIST_HEAD(actions);
 
-	ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0, &actions);
+	for (loop = 0; loop < 10; loop++) {
+		ret = tcf_action_init(net, NULL, nla, NULL, NULL, ovr, 0,
+				      &actions);
+		if (ret != -EAGAIN)
+			break;
+	}
+
 	if (ret)
 		return ret;
 
@@ -1122,10 +1128,7 @@ static int tc_ctl_action(struct sk_buff
 		 */
 		if (n->nlmsg_flags & NLM_F_REPLACE)
 			ovr = 1;
-replay:
 		ret = tcf_action_add(net, tca[TCA_ACT_TAB], n, portid, ovr);
-		if (ret == -EAGAIN)
-			goto replay;
 		break;
 	case RTM_DELACTION:
 		ret = tca_action_gd(net, tca[TCA_ACT_TAB], n,




* [PATCH 4.14 026/119] MIPS: Treat Loongson Extensions as ASEs
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (24 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 025/119] net: avoid potential infinite loop in tc_ctl_action() Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 027/119] MIPS: elf_hwcap: Export userspace ASEs Greg Kroah-Hartman
                   ` (97 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Jiaxun Yang, Huacai Chen,
	Yunqiang Su, Paul Burton, linux-mips, Sasha Levin

From: Jiaxun Yang <jiaxun.yang@flygoat.com>

[ Upstream commit d2f965549006acb865c4638f1f030ebcefdc71f6 ]

Recently, binutils split the Loongson-3 extensions into four ASEs:
MMI, CAM, EXT, EXT2. This patch does the same thing in the kernel and
exposes them in cpuinfo so applications can probe supported ASEs at
runtime.
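
A userspace program can then detect them by scanning the ASEs line in
/proc/cpuinfo, e.g. (minimal sketch, error handling trimmed):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[1024];
		FILE *f = fopen("/proc/cpuinfo", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f)) {
			if (strstr(line, "loongson-mmi")) {
				puts("Loongson MMI supported");
				break;
			}
		}
		fclose(f);
		return 0;
	}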

Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Yunqiang Su <ysu@wavecomp.com>
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/mips/include/asm/cpu-features.h | 16 ++++++++++++++++
 arch/mips/include/asm/cpu.h          |  4 ++++
 arch/mips/kernel/cpu-probe.c         |  4 ++++
 arch/mips/kernel/proc.c              |  4 ++++
 4 files changed, 28 insertions(+)

diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
index 721b698bfe3cf..1befd483d5a3b 100644
--- a/arch/mips/include/asm/cpu-features.h
+++ b/arch/mips/include/asm/cpu-features.h
@@ -348,6 +348,22 @@
 #define cpu_has_dsp3		(cpu_data[0].ases & MIPS_ASE_DSP3)
 #endif
 
+#ifndef cpu_has_loongson_mmi
+#define cpu_has_loongson_mmi		__ase(MIPS_ASE_LOONGSON_MMI)
+#endif
+
+#ifndef cpu_has_loongson_cam
+#define cpu_has_loongson_cam		__ase(MIPS_ASE_LOONGSON_CAM)
+#endif
+
+#ifndef cpu_has_loongson_ext
+#define cpu_has_loongson_ext		__ase(MIPS_ASE_LOONGSON_EXT)
+#endif
+
+#ifndef cpu_has_loongson_ext2
+#define cpu_has_loongson_ext2		__ase(MIPS_ASE_LOONGSON_EXT2)
+#endif
+
 #ifndef cpu_has_mipsmt
 #define cpu_has_mipsmt		(cpu_data[0].ases & MIPS_ASE_MIPSMT)
 #endif
diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h
index d39324c4adf13..a6fdf13585916 100644
--- a/arch/mips/include/asm/cpu.h
+++ b/arch/mips/include/asm/cpu.h
@@ -433,5 +433,9 @@ enum cpu_type_enum {
 #define MIPS_ASE_MSA		0x00000100 /* MIPS SIMD Architecture */
 #define MIPS_ASE_DSP3		0x00000200 /* Signal Processing ASE Rev 3*/
 #define MIPS_ASE_MIPS16E2	0x00000400 /* MIPS16e2 */
+#define MIPS_ASE_LOONGSON_MMI	0x00000800 /* Loongson MultiMedia extensions Instructions */
+#define MIPS_ASE_LOONGSON_CAM	0x00001000 /* Loongson CAM */
+#define MIPS_ASE_LOONGSON_EXT	0x00002000 /* Loongson EXTensions */
+#define MIPS_ASE_LOONGSON_EXT2	0x00004000 /* Loongson EXTensions R2 */
 
 #endif /* _ASM_CPU_H */
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index cf3fd549e16d0..3007ae1bb616a 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -1478,6 +1478,7 @@ static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
 			__cpu_name[cpu] = "ICT Loongson-3";
 			set_elf_platform(cpu, "loongson3a");
 			set_isa(c, MIPS_CPU_ISA_M64R1);
+			c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_EXT);
 			break;
 		case PRID_REV_LOONGSON3B_R1:
 		case PRID_REV_LOONGSON3B_R2:
@@ -1485,6 +1486,7 @@ static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
 			__cpu_name[cpu] = "ICT Loongson-3";
 			set_elf_platform(cpu, "loongson3b");
 			set_isa(c, MIPS_CPU_ISA_M64R1);
+			c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_EXT);
 			break;
 		}
 
@@ -1845,6 +1847,8 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
 		decode_configs(c);
 		c->options |= MIPS_CPU_FTLB | MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
 		c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+		c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
+			MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2);
 		break;
 	default:
 		panic("Unknown Loongson Processor ID!");
diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
index b2de408a259e4..f8d36710cd581 100644
--- a/arch/mips/kernel/proc.c
+++ b/arch/mips/kernel/proc.c
@@ -124,6 +124,10 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 	if (cpu_has_eva)	seq_printf(m, "%s", " eva");
 	if (cpu_has_htw)	seq_printf(m, "%s", " htw");
 	if (cpu_has_xpa)	seq_printf(m, "%s", " xpa");
+	if (cpu_has_loongson_mmi)	seq_printf(m, "%s", " loongson-mmi");
+	if (cpu_has_loongson_cam)	seq_printf(m, "%s", " loongson-cam");
+	if (cpu_has_loongson_ext)	seq_printf(m, "%s", " loongson-ext");
+	if (cpu_has_loongson_ext2)	seq_printf(m, "%s", " loongson-ext2");
 	seq_printf(m, "\n");
 
 	if (cpu_has_mmips) {
-- 
2.20.1





* [PATCH 4.14 027/119] MIPS: elf_hwcap: Export userspace ASEs
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (25 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 026/119] MIPS: Treat Loongson Extensions as ASEs Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-29 10:50   ` Jiaxun Yang
  2019-10-27 21:00 ` [PATCH 4.14 028/119] loop: Add LOOP_SET_DIRECT_IO to compat ioctl Greg Kroah-Hartman
                   ` (96 subsequent siblings)
  123 siblings, 1 reply; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Meng Zhuo, Jiaxun Yang, linux-mips,
	Paul Burton, Sasha Levin

From: Jiaxun Yang <jiaxun.yang@flygoat.com>

[ Upstream commit 38dffe1e4dde1d3174fdce09d67370412843ebb5 ]

A Golang developer reported that MIPS hwcap isn't reflecting the
instructions the processor actually supports, so programs can't apply
optimized code at runtime.

Thus we export the ASEs that can be used in userspace programs.
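
With this in place, a userspace program can test the new bits via the
auxiliary vector, for example (sketch; the HWCAP value mirrors the
definition added to uapi/asm/hwcap.h above):

	#include <stdio.h>
	#include <sys/auxv.h>

	#define HWCAP_LOONGSON_MMI	(1 << 11)

	int main(void)
	{
		unsigned long hwcap = getauxval(AT_HWCAP);

		if (hwcap & HWCAP_LOONGSON_MMI)
			puts("Loongson MMI reported in AT_HWCAP");
		return 0;
	}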

Reported-by: Meng Zhuo <mengzhuo1203@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-mips@vger.kernel.org
Cc: Paul Burton <paul.burton@mips.com>
Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/mips/include/uapi/asm/hwcap.h | 11 ++++++++++
 arch/mips/kernel/cpu-probe.c       | 33 ++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/arch/mips/include/uapi/asm/hwcap.h b/arch/mips/include/uapi/asm/hwcap.h
index 600ad8fd68356..2475294c3d185 100644
--- a/arch/mips/include/uapi/asm/hwcap.h
+++ b/arch/mips/include/uapi/asm/hwcap.h
@@ -5,5 +5,16 @@
 /* HWCAP flags */
 #define HWCAP_MIPS_R6		(1 << 0)
 #define HWCAP_MIPS_MSA		(1 << 1)
+#define HWCAP_MIPS_MIPS16	(1 << 3)
+#define HWCAP_MIPS_MDMX     (1 << 4)
+#define HWCAP_MIPS_MIPS3D   (1 << 5)
+#define HWCAP_MIPS_SMARTMIPS (1 << 6)
+#define HWCAP_MIPS_DSP      (1 << 7)
+#define HWCAP_MIPS_DSP2     (1 << 8)
+#define HWCAP_MIPS_DSP3     (1 << 9)
+#define HWCAP_MIPS_MIPS16E2 (1 << 10)
+#define HWCAP_LOONGSON_MMI  (1 << 11)
+#define HWCAP_LOONGSON_EXT  (1 << 12)
+#define HWCAP_LOONGSON_EXT2 (1 << 13)
 
 #endif /* _UAPI_ASM_HWCAP_H */
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index 3007ae1bb616a..c38cd62879f4e 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -2080,6 +2080,39 @@ void cpu_probe(void)
 		elf_hwcap |= HWCAP_MIPS_MSA;
 	}
 
+	if (cpu_has_mips16)
+		elf_hwcap |= HWCAP_MIPS_MIPS16;
+
+	if (cpu_has_mdmx)
+		elf_hwcap |= HWCAP_MIPS_MDMX;
+
+	if (cpu_has_mips3d)
+		elf_hwcap |= HWCAP_MIPS_MIPS3D;
+
+	if (cpu_has_smartmips)
+		elf_hwcap |= HWCAP_MIPS_SMARTMIPS;
+
+	if (cpu_has_dsp)
+		elf_hwcap |= HWCAP_MIPS_DSP;
+
+	if (cpu_has_dsp2)
+		elf_hwcap |= HWCAP_MIPS_DSP2;
+
+	if (cpu_has_dsp3)
+		elf_hwcap |= HWCAP_MIPS_DSP3;
+
+	if (cpu_has_loongson_mmi)
+		elf_hwcap |= HWCAP_LOONGSON_MMI;
+
+	if (cpu_has_loongson_mmi)
+		elf_hwcap |= HWCAP_LOONGSON_CAM;
+
+	if (cpu_has_loongson_ext)
+		elf_hwcap |= HWCAP_LOONGSON_EXT;
+
+	if (cpu_has_loongson_ext)
+		elf_hwcap |= HWCAP_LOONGSON_EXT2;
+
 	if (cpu_has_vz)
 		cpu_probe_vz(c);
 
-- 
2.20.1





* [PATCH 4.14 028/119] loop: Add LOOP_SET_DIRECT_IO to compat ioctl
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (26 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 027/119] MIPS: elf_hwcap: Export userspace ASEs Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 029/119] memfd: Fix locking when tagging pins Greg Kroah-Hartman
                   ` (95 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Jens Axboe, Alessio Balsini, Sasha Levin

From: Alessio Balsini <balsini@android.com>

[ Upstream commit fdbe4eeeb1aac219b14f10c0ed31ae5d1123e9b8 ]

Enabling direct I/O with loop devices helps reduce memory usage by
avoiding double caching. 32-bit applications running on 64-bit systems
are currently not able to request direct I/O because the ioctl is
missing from lo_compat_ioctl.

This patch fixes the compatibility issue mentioned above by exporting
LOOP_SET_DIRECT_IO as an additional lo_compat_ioctl() entry.
The input argument for this ioctl is a single long converted to a 1-bit
boolean, so compatibility is preserved.
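
For example, a 32-bit program can now do the following against a 64-bit
kernel (minimal sketch, error handling trimmed; /dev/loop0 is just an
example device):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/loop.h>

	int main(void)
	{
		int fd = open("/dev/loop0", O_RDWR);

		if (fd < 0)
			return 1;
		/* the argument is a plain long: non-zero enables direct I/O */
		return ioctl(fd, LOOP_SET_DIRECT_IO, 1L) ? 1 : 0;
	}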

Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Alessio Balsini <balsini@android.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/loop.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 87d7c42affbc4..ec61dd873c93d 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1605,6 +1605,7 @@ static int lo_compat_ioctl(struct block_device *bdev, fmode_t mode,
 		arg = (unsigned long) compat_ptr(arg);
 	case LOOP_SET_FD:
 	case LOOP_CHANGE_FD:
+	case LOOP_SET_DIRECT_IO:
 		err = lo_ioctl(bdev, mode, cmd, arg);
 		break;
 	default:
-- 
2.20.1





* [PATCH 4.14 029/119] memfd: Fix locking when tagging pins
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (27 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 028/119] loop: Add LOOP_SET_DIRECT_IO to compat ioctl Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 030/119] USB: legousbtower: fix memleak on disconnect Greg Kroah-Hartman
                   ` (94 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, zhong jiang, Matthew Wilcox (Oracle),
	Sasha Levin

From: Matthew Wilcox (Oracle) <willy@infradead.org>

The RCU lock is insufficient to protect the radix tree iteration as
a deletion from the tree can occur before we take the spinlock to
tag the entry.  In 4.19, this has manifested as a bug with the following
trace:

kernel BUG at lib/radix-tree.c:1429!
invalid opcode: 0000 [#1] SMP KASAN PTI
CPU: 7 PID: 6935 Comm: syz-executor.2 Not tainted 4.19.36 #25
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
RIP: 0010:radix_tree_tag_set+0x200/0x2f0 lib/radix-tree.c:1429
Code: 00 00 5b 5d 41 5c 41 5d 41 5e 41 5f c3 48 89 44 24 10 e8 a3 29 7e fe 48 8b 44 24 10 48 0f ab 03 e9 d2 fe ff ff e8 90 29 7e fe <0f> 0b 48 c7 c7 e0 5a 87 84 e8 f0 e7 08 ff 4c 89 ef e8 4a ff ac fe
RSP: 0018:ffff88837b13fb60 EFLAGS: 00010016
RAX: 0000000000040000 RBX: ffff8883c5515d58 RCX: ffffffff82cb2ef0
RDX: 0000000000000b72 RSI: ffffc90004cf2000 RDI: ffff8883c5515d98
RBP: ffff88837b13fb98 R08: ffffed106f627f7e R09: ffffed106f627f7e
R10: 0000000000000001 R11: ffffed106f627f7d R12: 0000000000000004
R13: ffffea000d7fea80 R14: 1ffff1106f627f6f R15: 0000000000000002
FS:  00007fa1b8df2700(0000) GS:ffff8883e2fc0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fa1b8df1db8 CR3: 000000037d4d2001 CR4: 0000000000160ee0
Call Trace:
 memfd_tag_pins mm/memfd.c:51 [inline]
 memfd_wait_for_pins+0x2c5/0x12d0 mm/memfd.c:81
 memfd_add_seals mm/memfd.c:215 [inline]
 memfd_fcntl+0x33d/0x4a0 mm/memfd.c:247
 do_fcntl+0x589/0xeb0 fs/fcntl.c:421
 __do_sys_fcntl fs/fcntl.c:463 [inline]
 __se_sys_fcntl fs/fcntl.c:448 [inline]
 __x64_sys_fcntl+0x12d/0x180 fs/fcntl.c:448
 do_syscall_64+0xc8/0x580 arch/x86/entry/common.c:293

The problem does not occur in mainline due to the XArray rewrite which
changed the locking to exclude modification of the tree during iteration.
At the time, nobody realised this was a bugfix.  Backport the locking
changes to stable.

Cc: stable@vger.kernel.org
Reported-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/shmem.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 037e2ee9ccacc..5b2cc9f9b1f1d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2657,11 +2657,12 @@ static void shmem_tag_pins(struct address_space *mapping)
 	void **slot;
 	pgoff_t start;
 	struct page *page;
+	unsigned int tagged = 0;
 
 	lru_add_drain();
 	start = 0;
-	rcu_read_lock();
 
+	spin_lock_irq(&mapping->tree_lock);
 	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
 		page = radix_tree_deref_slot(slot);
 		if (!page || radix_tree_exception(page)) {
@@ -2670,18 +2671,19 @@ static void shmem_tag_pins(struct address_space *mapping)
 				continue;
 			}
 		} else if (page_count(page) - page_mapcount(page) > 1) {
-			spin_lock_irq(&mapping->tree_lock);
 			radix_tree_tag_set(&mapping->page_tree, iter.index,
 					   SHMEM_TAG_PINNED);
-			spin_unlock_irq(&mapping->tree_lock);
 		}
 
-		if (need_resched()) {
-			slot = radix_tree_iter_resume(slot, &iter);
-			cond_resched_rcu();
-		}
+		if (++tagged % 1024)
+			continue;
+
+		slot = radix_tree_iter_resume(slot, &iter);
+		spin_unlock_irq(&mapping->tree_lock);
+		cond_resched();
+		spin_lock_irq(&mapping->tree_lock);
 	}
-	rcu_read_unlock();
+	spin_unlock_irq(&mapping->tree_lock);
 }
 
 /*
-- 
2.20.1





* [PATCH 4.14 030/119] USB: legousbtower: fix memleak on disconnect
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (28 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 029/119] memfd: Fix locking when tagging pins Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 031/119] ALSA: hda/realtek - Add support for ALC711 Greg Kroah-Hartman
                   ` (93 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Johan Hovold

From: Johan Hovold <johan@kernel.org>

commit b6c03e5f7b463efcafd1ce141bd5a8fc4e583ae2 upstream.

If disconnect() races with release() after a process has been
interrupted, release() could end up returning early and the driver would
fail to free its driver data.
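
The general shape of the bug (illustrative only, not the legousbtower
sources): a release() handler that can bail out on a signal may skip the
final cleanup entirely, and close() is never retried, so nothing frees
the driver data afterwards.

	/* sketch: release() must run to completion, so take the lock
	 * uninterruptibly instead of returning -ERESTARTSYS */
	static int foo_release(struct inode *inode, struct file *file)
	{
		struct foo_dev *dev = file->private_data;

		mutex_lock(&dev->lock);
		/* ... drop the open count, free on final close ... */
		mutex_unlock(&dev->lock);
		return 0;
	}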

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Link: https://lore.kernel.org/r/20191010125835.27031-3-johan@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/usb/misc/legousbtower.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/drivers/usb/misc/legousbtower.c
+++ b/drivers/usb/misc/legousbtower.c
@@ -423,10 +423,7 @@ static int tower_release (struct inode *
 		goto exit;
 	}
 
-	if (mutex_lock_interruptible(&dev->lock)) {
-	        retval = -ERESTARTSYS;
-		goto exit;
-	}
+	mutex_lock(&dev->lock);
 
 	if (dev->open_count != 1) {
 		dev_dbg(&dev->udev->dev, "%s: device not opened exactly once\n",




* [PATCH 4.14 031/119] ALSA: hda/realtek - Add support for ALC711
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (29 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 030/119] USB: legousbtower: fix memleak on disconnect Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 032/119] usb: udc: lpc32xx: fix bad bit shift operation Greg Kroah-Hartman
                   ` (92 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Kailang Yang, Takashi Iwai

From: Kailang Yang <kailang@realtek.com>

commit 83629532ce45ef9df1f297b419b9ea112045685d upstream.

Support new codec ALC711.

Signed-off-by: Kailang Yang <kailang@realtek.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 sound/pci/hda/patch_realtek.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -359,6 +359,7 @@ static void alc_fill_eapd_coef(struct hd
 	case 0x10ec0700:
 	case 0x10ec0701:
 	case 0x10ec0703:
+	case 0x10ec0711:
 		alc_update_coef_idx(codec, 0x10, 1<<15, 0);
 		break;
 	case 0x10ec0662:
@@ -7272,6 +7273,7 @@ static int patch_alc269(struct hda_codec
 	case 0x10ec0700:
 	case 0x10ec0701:
 	case 0x10ec0703:
+	case 0x10ec0711:
 		spec->codec_variant = ALC269_TYPE_ALC700;
 		spec->gen.mixer_nid = 0; /* ALC700 does not have any loopback mixer path */
 		alc_update_coef_idx(codec, 0x4a, 1 << 15, 0); /* Combo jack auto trigger control */
@@ -8365,6 +8367,7 @@ static const struct hda_device_id snd_hd
 	HDA_CODEC_ENTRY(0x10ec0700, "ALC700", patch_alc269),
 	HDA_CODEC_ENTRY(0x10ec0701, "ALC701", patch_alc269),
 	HDA_CODEC_ENTRY(0x10ec0703, "ALC703", patch_alc269),
+	HDA_CODEC_ENTRY(0x10ec0711, "ALC711", patch_alc269),
 	HDA_CODEC_ENTRY(0x10ec0867, "ALC891", patch_alc662),
 	HDA_CODEC_ENTRY(0x10ec0880, "ALC880", patch_alc880),
 	HDA_CODEC_ENTRY(0x10ec0882, "ALC882", patch_alc882),




* [PATCH 4.14 032/119] usb: udc: lpc32xx: fix bad bit shift operation
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (30 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 031/119] ALSA: hda/realtek - Add support for ALC711 Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 033/119] USB: serial: ti_usb_3410_5052: fix port-close races Greg Kroah-Hartman
                   ` (91 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Gustavo A. R. Silva

From: Gustavo A. R. Silva <gustavo@embeddedor.com>

commit b987b66ac3a2bc2f7b03a0ba48a07dc553100c07 upstream.

It seems that the right variable to use in this case is *i*, instead of
*n*; otherwise there is undefined behavior when right-shifting by more
than 31 bits once n is multiplied by 8; notice that *n* can take values
equal to or greater than 4 (4, 8, 16, ...).

Also, notice that under the current conditions (bl = 3), we are skipping
the handling of bytes 3, 7, 11, ... So, fix this by updating this logic
and limiting *bl* to up to 4 instead of up to 3.

This fix is based on function udc_stuff_fifo().
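
A plain-C illustration of the corrected extraction (not the driver code):
the shift count must come from the byte index inside the 32-bit word (i),
never from the running buffer offset (n), which grows by 4 per word.

	#include <stdint.h>

	/* unpack up to bl (<= 4) bytes of a FIFO word into data[n..] */
	static void unpack_word(uint32_t tmp, uint8_t *data,
				unsigned int n, unsigned int bl)
	{
		unsigned int i;

		for (i = 0; i < bl; i++)
			data[n + i] = (uint8_t)((tmp >> (i * 8)) & 0xFF);
	}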

Addresses-Coverity-ID: 1454834 ("Bad bit shift operation")
Fixes: 24a28e428351 ("USB: gadget driver for LPC32xx")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20191014191830.GA10721@embeddedor
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/usb/gadget/udc/lpc32xx_udc.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/drivers/usb/gadget/udc/lpc32xx_udc.c
+++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
@@ -1178,11 +1178,11 @@ static void udc_pop_fifo(struct lpc32xx_
 			tmp = readl(USBD_RXDATA(udc->udp_baseaddr));
 
 			bl = bytes - n;
-			if (bl > 3)
-				bl = 3;
+			if (bl > 4)
+				bl = 4;
 
 			for (i = 0; i < bl; i++)
-				data[n + i] = (u8) ((tmp >> (n * 8)) & 0xFF);
+				data[n + i] = (u8) ((tmp >> (i * 8)) & 0xFF);
 		}
 		break;
 




* [PATCH 4.14 033/119] USB: serial: ti_usb_3410_5052: fix port-close races
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (31 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 032/119] usb: udc: lpc32xx: fix bad bit shift operation Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 034/119] USB: ldusb: fix memleak on disconnect Greg Kroah-Hartman
                   ` (90 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Johan Hovold

From: Johan Hovold <johan@kernel.org>

commit 6f1d1dc8d540a9aa6e39b9cb86d3a67bbc1c8d8d upstream.

Fix races between closing a port and opening or closing another port on
the same device which could lead to a failure to start or stop the
shared interrupt URB. The latter could potentially cause a
use-after-free or worse in the completion handler on driver unbind.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/usb/serial/ti_usb_3410_5052.c |   10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

--- a/drivers/usb/serial/ti_usb_3410_5052.c
+++ b/drivers/usb/serial/ti_usb_3410_5052.c
@@ -780,7 +780,6 @@ static void ti_close(struct usb_serial_p
 	struct ti_port *tport;
 	int port_number;
 	int status;
-	int do_unlock;
 	unsigned long flags;
 
 	tdev = usb_get_serial_data(port->serial);
@@ -804,16 +803,13 @@ static void ti_close(struct usb_serial_p
 			"%s - cannot send close port command, %d\n"
 							, __func__, status);
 
-	/* if mutex_lock is interrupted, continue anyway */
-	do_unlock = !mutex_lock_interruptible(&tdev->td_open_close_lock);
+	mutex_lock(&tdev->td_open_close_lock);
 	--tport->tp_tdev->td_open_port_count;
-	if (tport->tp_tdev->td_open_port_count <= 0) {
+	if (tport->tp_tdev->td_open_port_count == 0) {
 		/* last port is closed, shut down interrupt urb */
 		usb_kill_urb(port->serial->port[0]->interrupt_in_urb);
-		tport->tp_tdev->td_open_port_count = 0;
 	}
-	if (do_unlock)
-		mutex_unlock(&tdev->td_open_close_lock);
+	mutex_unlock(&tdev->td_open_close_lock);
 }
 
 




* [PATCH 4.14 034/119] USB: ldusb: fix memleak on disconnect
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Johan Hovold

From: Johan Hovold <johan@kernel.org>

commit b14a39048c1156cfee76228bf449852da2f14df8 upstream.

If disconnect() races with release() after a process has been
interrupted, release() could end up returning early and the driver would
fail to free its driver data.
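
For context, a rough sketch of the usual "last user frees" pattern with
made-up names (example_dev and its fields are illustrative only): if
release() returns early with -ERESTARTSYS, the free below is skipped
whenever disconnect() has already run.

  /* Sketch only, not the ldusb code. */
  #include <linux/fs.h>
  #include <linux/mutex.h>
  #include <linux/slab.h>

  struct example_dev {
      struct mutex mutex;
      int open_count;
      bool disconnected;
  };

  static int example_release(struct inode *inode, struct file *file)
  {
      struct example_dev *dev = file->private_data;

      mutex_lock(&dev->mutex);    /* not mutex_lock_interruptible() */
      --dev->open_count;

      if (dev->disconnected && dev->open_count == 0) {
          /* disconnect() already ran; we are the last user and
           * nobody else will free the driver data. */
          mutex_unlock(&dev->mutex);
          kfree(dev);
          return 0;
      }

      mutex_unlock(&dev->mutex);
      return 0;
  }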

Fixes: 2824bd250f0b ("[PATCH] USB: add ldusb driver")
Cc: stable <stable@vger.kernel.org>     # 2.6.13
Signed-off-by: Johan Hovold <johan@kernel.org>
Link: https://lore.kernel.org/r/20191010125835.27031-2-johan@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/usb/misc/ldusb.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/drivers/usb/misc/ldusb.c
+++ b/drivers/usb/misc/ldusb.c
@@ -383,10 +383,7 @@ static int ld_usb_release(struct inode *
 		goto exit;
 	}
 
-	if (mutex_lock_interruptible(&dev->mutex)) {
-		retval = -ERESTARTSYS;
-		goto exit;
-	}
+	mutex_lock(&dev->mutex);
 
 	if (dev->open_count != 1) {
 		retval = -ENODEV;




* [PATCH 4.14 035/119] USB: usblp: fix use-after-free on disconnect
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, syzbot+cd24df4d075c319ebfc5, Johan Hovold

From: Johan Hovold <johan@kernel.org>

commit 7a759197974894213621aa65f0571b51904733d6 upstream.

A recent commit addressing a runtime PM use-count regression introduced
a use-after-free by not making sure we held a reference to the struct
usb_interface for the lifetime of the driver data.
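
The rule the fix applies, sketched with a hypothetical driver-data
structure (my_data is not usblp's real layout): take a reference on the
interface when the driver data starts pointing at it, and drop that
reference only when the driver data itself is freed.

  /* Sketch: my_data stands in for a driver-private structure. */
  #include <linux/slab.h>
  #include <linux/usb.h>

  struct my_data {
      struct usb_interface *intf;
  };

  static struct my_data *my_data_alloc(struct usb_interface *intf)
  {
      struct my_data *data = kzalloc(sizeof(*data), GFP_KERNEL);

      if (!data)
          return NULL;
      data->intf = usb_get_intf(intf);    /* hold a reference */
      return data;
  }

  static void my_data_free(struct my_data *data)
  {
      usb_put_intf(data->intf);           /* drop it only here */
      kfree(data);
  }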

Fixes: 9a31535859bf ("USB: usblp: fix runtime PM after driver unbind")
Cc: stable <stable@vger.kernel.org>
Reported-by: syzbot+cd24df4d075c319ebfc5@syzkaller.appspotmail.com
Signed-off-by: Johan Hovold <johan@kernel.org>
Link: https://lore.kernel.org/r/20191015175522.18490-1-johan@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/usb/class/usblp.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/usb/class/usblp.c
+++ b/drivers/usb/class/usblp.c
@@ -458,6 +458,7 @@ static void usblp_cleanup(struct usblp *
 	kfree(usblp->readbuf);
 	kfree(usblp->device_id_string);
 	kfree(usblp->statusbuf);
+	usb_put_intf(usblp->intf);
 	kfree(usblp);
 }
 
@@ -1120,7 +1121,7 @@ static int usblp_probe(struct usb_interf
 	init_waitqueue_head(&usblp->wwait);
 	init_usb_anchor(&usblp->urbs);
 	usblp->ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
-	usblp->intf = intf;
+	usblp->intf = usb_get_intf(intf);
 
 	/* Malloc device ID string buffer to the largest expected length,
 	 * since we can re-query it on an ioctl and a dynamic string
@@ -1209,6 +1210,7 @@ abort:
 	kfree(usblp->readbuf);
 	kfree(usblp->statusbuf);
 	kfree(usblp->device_id_string);
+	usb_put_intf(usblp->intf);
 	kfree(usblp);
 abort_ret:
 	return retval;




* [PATCH 4.14 036/119] USB: ldusb: fix read info leaks
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, syzbot+6fe95b826644f7f12b0b, Johan Hovold

From: Johan Hovold <johan@kernel.org>

commit 7a6f22d7479b7a0b68eadd308a997dd64dda7dae upstream.

Fix broken read implementation, which could be used to trigger slab info
leaks.

The driver failed to check if the custom ring buffer was still empty
when waking up after having waited for more data. This would happen on
every interrupt-in completion, even if no data had been added to the
ring buffer (e.g. on disconnect events).

Due to missing sanity checks and uninitialised (kmalloced) ring-buffer
entries, huge slab info leaks could easily be triggered.

Note that the empty-buffer check after wakeup is enough to fix the info
leak on disconnect, but let's clear the buffer on allocation and add a
sanity check to read() to prevent further leaks.
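
In outline, the fixed read path looks like the sketch below; ring_dev
and its fields are made-up names, and a single reader is assumed. The
two points that matter are re-checking the ring after every wakeup and
refusing any stored length larger than one slot's payload.

  /* Sketch only, not the ldusb code. */
  #include <linux/errno.h>
  #include <linux/spinlock.h>
  #include <linux/wait.h>

  struct ring_dev {
      spinlock_t lock;
      wait_queue_head_t read_wait;
      int data_ready;                 /* set by the completion handler */
      unsigned int ring_head, ring_tail;
      size_t slot_payload_size;
  };

  /* Wait until the ring is non-empty; a wakeup alone proves nothing. */
  static int example_wait_for_data(struct ring_dev *dev)
  {
      spin_lock_irq(&dev->lock);
      while (dev->ring_head == dev->ring_tail) {
          dev->data_ready = 0;
          spin_unlock_irq(&dev->lock);
          if (wait_event_interruptible(dev->read_wait, dev->data_ready))
              return -ERESTARTSYS;
          spin_lock_irq(&dev->lock);  /* re-check under the lock */
      }
      spin_unlock_irq(&dev->lock);
      return 0;
  }

  /* Never copy more than one slot's payload out of the ring. */
  static ssize_t example_check_len(struct ring_dev *dev, size_t len)
  {
      return len > dev->slot_payload_size ? -EIO : (ssize_t)len;
  }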

Fixes: 2824bd250f0b ("[PATCH] USB: add ldusb driver")
Cc: stable <stable@vger.kernel.org>     # 2.6.13
Reported-by: syzbot+6fe95b826644f7f12b0b@syzkaller.appspotmail.com
Signed-off-by: Johan Hovold <johan@kernel.org>
Link: https://lore.kernel.org/r/20191018151955.25135-2-johan@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/usb/misc/ldusb.c |   15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

--- a/drivers/usb/misc/ldusb.c
+++ b/drivers/usb/misc/ldusb.c
@@ -467,7 +467,7 @@ static ssize_t ld_usb_read(struct file *
 
 	/* wait for data */
 	spin_lock_irq(&dev->rbsl);
-	if (dev->ring_head == dev->ring_tail) {
+	while (dev->ring_head == dev->ring_tail) {
 		dev->interrupt_in_done = 0;
 		spin_unlock_irq(&dev->rbsl);
 		if (file->f_flags & O_NONBLOCK) {
@@ -477,12 +477,17 @@ static ssize_t ld_usb_read(struct file *
 		retval = wait_event_interruptible(dev->read_wait, dev->interrupt_in_done);
 		if (retval < 0)
 			goto unlock_exit;
-	} else {
-		spin_unlock_irq(&dev->rbsl);
+
+		spin_lock_irq(&dev->rbsl);
 	}
+	spin_unlock_irq(&dev->rbsl);
 
 	/* actual_buffer contains actual_length + interrupt_in_buffer */
 	actual_buffer = (size_t *)(dev->ring_buffer + dev->ring_tail * (sizeof(size_t)+dev->interrupt_in_endpoint_size));
+	if (*actual_buffer > dev->interrupt_in_endpoint_size) {
+		retval = -EIO;
+		goto unlock_exit;
+	}
 	bytes_to_read = min(count, *actual_buffer);
 	if (bytes_to_read < *actual_buffer)
 		dev_warn(&dev->intf->dev, "Read buffer overflow, %zd bytes dropped\n",
@@ -693,7 +698,9 @@ static int ld_usb_probe(struct usb_inter
 		dev_warn(&intf->dev, "Interrupt out endpoint not found (using control endpoint instead)\n");
 
 	dev->interrupt_in_endpoint_size = usb_endpoint_maxp(dev->interrupt_in_endpoint);
-	dev->ring_buffer = kmalloc(ring_buffer_size*(sizeof(size_t)+dev->interrupt_in_endpoint_size), GFP_KERNEL);
+	dev->ring_buffer = kcalloc(ring_buffer_size,
+			sizeof(size_t) + dev->interrupt_in_endpoint_size,
+			GFP_KERNEL);
 	if (!dev->ring_buffer)
 		goto error;
 	dev->interrupt_in_buffer = kmalloc(dev->interrupt_in_endpoint_size, GFP_KERNEL);




* [PATCH 4.14 037/119] arm64: sysreg: Move to use definitions for all the SCTLR bits
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, James Morse, Catalin Marinas, Ard Biesheuvel

From: James Morse <james.morse@arm.com>

[ Upstream commit 7a00d68ebe5f07cb1db17e7fedfd031f0d87e8bb ]

__cpu_setup() configures SCTLR_EL1 using some hard-coded hex masks,
and el2_setup() duplicates some of this when setting the RES1 bits.

Let's make this the same as KVM's hyp_init, which uses named bits.

First, we add definitions for all the SCTLR_EL{1,2} bits, the RES{1,0}
bits, and those we want to set or clear.

Add build_bug checks to ensure all bits are either set or clear.
This means we don't need to preserve the endianness configuration
generated elsewhere.

Finally, move the head.S and proc.S users of these hard-coded masks
over to the macro versions.
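
The shape of the check, with made-up EXAMPLE_* masks standing in for
the real SCTLR_EL{1,2} definitions (8 bits wide here for brevity):
every bit must land in exactly one of the SET or CLEAR masks, so a
forgotten bit breaks the build of whatever compiles the helper.

  /* Sketch only: hypothetical masks, not the sysreg.h values. */
  #include <linux/build_bug.h>

  #define EXAMPLE_SET     0x35    /* bits we always set   */
  #define EXAMPLE_CLEAR   0xca    /* bits we always clear */

  static inline void example_check_bits(void)
  {
      /* 0x35 ^ 0xca == 0xff: every bit is covered exactly once. */
      BUILD_BUG_ON((EXAMPLE_SET ^ EXAMPLE_CLEAR) != 0xff);
  }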

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/sysreg.h |   65 ++++++++++++++++++++++++++++++++++++++--
 arch/arm64/kernel/head.S        |   13 +-------
 arch/arm64/mm/proc.S            |   24 --------------
 3 files changed, 67 insertions(+), 35 deletions(-)

--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -20,6 +20,7 @@
 #ifndef __ASM_SYSREG_H
 #define __ASM_SYSREG_H
 
+#include <asm/compiler.h>
 #include <linux/stringify.h>
 
 /*
@@ -297,25 +298,81 @@
 
 /* Common SCTLR_ELx flags. */
 #define SCTLR_ELx_EE    (1 << 25)
+#define SCTLR_ELx_WXN	(1 << 19)
 #define SCTLR_ELx_I	(1 << 12)
 #define SCTLR_ELx_SA	(1 << 3)
 #define SCTLR_ELx_C	(1 << 2)
 #define SCTLR_ELx_A	(1 << 1)
 #define SCTLR_ELx_M	1
 
+#define SCTLR_ELx_FLAGS	(SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | \
+			 SCTLR_ELx_SA | SCTLR_ELx_I)
+
+/* SCTLR_EL2 specific flags. */
 #define SCTLR_EL2_RES1	((1 << 4)  | (1 << 5)  | (1 << 11) | (1 << 16) | \
 			 (1 << 18) | (1 << 22) | (1 << 23) | (1 << 28) | \
 			 (1 << 29))
+#define SCTLR_EL2_RES0	((1 << 6)  | (1 << 7)  | (1 << 8)  | (1 << 9)  | \
+			 (1 << 10) | (1 << 13) | (1 << 14) | (1 << 15) | \
+			 (1 << 17) | (1 << 20) | (1 << 21) | (1 << 24) | \
+			 (1 << 26) | (1 << 27) | (1 << 30) | (1 << 31))
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define ENDIAN_SET_EL2		SCTLR_ELx_EE
+#define ENDIAN_CLEAR_EL2	0
+#else
+#define ENDIAN_SET_EL2		0
+#define ENDIAN_CLEAR_EL2	SCTLR_ELx_EE
+#endif
+
+/* SCTLR_EL2 value used for the hyp-stub */
+#define SCTLR_EL2_SET	(ENDIAN_SET_EL2   | SCTLR_EL2_RES1)
+#define SCTLR_EL2_CLEAR	(SCTLR_ELx_M      | SCTLR_ELx_A    | SCTLR_ELx_C   | \
+			 SCTLR_ELx_SA     | SCTLR_ELx_I    | SCTLR_ELx_WXN | \
+			 ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
+
+/* Check all the bits are accounted for */
+#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
 
-#define SCTLR_ELx_FLAGS	(SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | \
-			 SCTLR_ELx_SA | SCTLR_ELx_I)
 
 /* SCTLR_EL1 specific flags. */
 #define SCTLR_EL1_UCI		(1 << 26)
+#define SCTLR_EL1_E0E		(1 << 24)
 #define SCTLR_EL1_SPAN		(1 << 23)
+#define SCTLR_EL1_NTWE		(1 << 18)
+#define SCTLR_EL1_NTWI		(1 << 16)
 #define SCTLR_EL1_UCT		(1 << 15)
+#define SCTLR_EL1_DZE		(1 << 14)
+#define SCTLR_EL1_UMA		(1 << 9)
 #define SCTLR_EL1_SED		(1 << 8)
+#define SCTLR_EL1_ITD		(1 << 7)
 #define SCTLR_EL1_CP15BEN	(1 << 5)
+#define SCTLR_EL1_SA0		(1 << 4)
+
+#define SCTLR_EL1_RES1	((1 << 11) | (1 << 20) | (1 << 22) | (1 << 28) | \
+			 (1 << 29))
+#define SCTLR_EL1_RES0  ((1 << 6)  | (1 << 10) | (1 << 13) | (1 << 17) | \
+			 (1 << 21) | (1 << 27) | (1 << 30) | (1 << 31))
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define ENDIAN_SET_EL1		(SCTLR_EL1_E0E | SCTLR_ELx_EE)
+#define ENDIAN_CLEAR_EL1	0
+#else
+#define ENDIAN_SET_EL1		0
+#define ENDIAN_CLEAR_EL1	(SCTLR_EL1_E0E | SCTLR_ELx_EE)
+#endif
+
+#define SCTLR_EL1_SET	(SCTLR_ELx_M    | SCTLR_ELx_C    | SCTLR_ELx_SA   |\
+			 SCTLR_EL1_SA0  | SCTLR_EL1_SED  | SCTLR_ELx_I    |\
+			 SCTLR_EL1_DZE  | SCTLR_EL1_UCT  | SCTLR_EL1_NTWI |\
+			 SCTLR_EL1_NTWE | SCTLR_EL1_SPAN | ENDIAN_SET_EL1 |\
+			 SCTLR_EL1_UCI  | SCTLR_EL1_RES1)
+#define SCTLR_EL1_CLEAR	(SCTLR_ELx_A   | SCTLR_EL1_CP15BEN | SCTLR_EL1_ITD    |\
+			 SCTLR_EL1_UMA | SCTLR_ELx_WXN     | ENDIAN_CLEAR_EL1 |\
+			 SCTLR_EL1_RES0)
+
+/* Check all the bits are accounted for */
+#define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
 
 /* id_aa64isar0 */
 #define ID_AA64ISAR0_RDM_SHIFT		28
@@ -463,6 +520,7 @@
 
 #else
 
+#include <linux/build_bug.h>
 #include <linux/types.h>
 
 asm(
@@ -519,6 +577,9 @@ static inline void config_sctlr_el1(u32
 {
 	u32 val;
 
+	SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS;
+	SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS;
+
 	val = read_sysreg(sctlr_el1);
 	val &= ~clear;
 	val |= set;
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -388,17 +388,13 @@ ENTRY(el2_setup)
 	mrs	x0, CurrentEL
 	cmp	x0, #CurrentEL_EL2
 	b.eq	1f
-	mrs	x0, sctlr_el1
-CPU_BE(	orr	x0, x0, #(3 << 24)	)	// Set the EE and E0E bits for EL1
-CPU_LE(	bic	x0, x0, #(3 << 24)	)	// Clear the EE and E0E bits for EL1
+	mov_q	x0, (SCTLR_EL1_RES1 | ENDIAN_SET_EL1)
 	msr	sctlr_el1, x0
 	mov	w0, #BOOT_CPU_MODE_EL1		// This cpu booted in EL1
 	isb
 	ret
 
-1:	mrs	x0, sctlr_el2
-CPU_BE(	orr	x0, x0, #(1 << 25)	)	// Set the EE bit for EL2
-CPU_LE(	bic	x0, x0, #(1 << 25)	)	// Clear the EE bit for EL2
+1:	mov_q	x0, (SCTLR_EL2_RES1 | ENDIAN_SET_EL2)
 	msr	sctlr_el2, x0
 
 #ifdef CONFIG_ARM64_VHE
@@ -505,10 +501,7 @@ install_el2_stub:
 	 * requires no configuration, and all non-hyp-specific EL2 setup
 	 * will be done via the _EL1 system register aliases in __cpu_setup.
 	 */
-	/* sctlr_el1 */
-	mov	x0, #0x0800			// Set/clear RES{1,0} bits
-CPU_BE(	movk	x0, #0x33d0, lsl #16	)	// Set EE and E0E on BE systems
-CPU_LE(	movk	x0, #0x30d0, lsl #16	)	// Clear EE and E0E on LE systems
+	mov_q	x0, (SCTLR_EL1_RES1 | ENDIAN_SET_EL1)
 	msr	sctlr_el1, x0
 
 	/* Coprocessor traps. */
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -430,11 +430,7 @@ ENTRY(__cpu_setup)
 	/*
 	 * Prepare SCTLR
 	 */
-	adr	x5, crval
-	ldp	w5, w6, [x5]
-	mrs	x0, sctlr_el1
-	bic	x0, x0, x5			// clear bits
-	orr	x0, x0, x6			// set bits
+	mov_q	x0, SCTLR_EL1_SET
 	/*
 	 * Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for
 	 * both user and kernel.
@@ -470,21 +466,3 @@ ENTRY(__cpu_setup)
 	msr	tcr_el1, x10
 	ret					// return to head.S
 ENDPROC(__cpu_setup)
-
-	/*
-	 * We set the desired value explicitly, including those of the
-	 * reserved bits. The values of bits EE & E0E were set early in
-	 * el2_setup, which are left untouched below.
-	 *
-	 *                 n n            T
-	 *       U E      WT T UD     US IHBS
-	 *       CE0      XWHW CZ     ME TEEA S
-	 * .... .IEE .... NEAI TE.I ..AD DEN0 ACAM
-	 * 0011 0... 1101 ..0. ..0. 10.. .0.. .... < hardware reserved
-	 * .... .1.. .... 01.1 11.1 ..01 0.01 1101 < software settings
-	 */
-	.type	crval, #object
-crval:
-	.word	0xfcffffff			// clear
-	.word	0x34d5d91d			// set
-	.popsection




* [PATCH 4.14 038/119] arm64: Expose support for optional ARMv8-A features
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Mark Rutland, Dave Martin,
	Marc Zyngier, Catalin Marinas, Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit f5e035f8694c3bdddc66ea46ecda965ee6853718 ]

ARMv8-A adds a few optional features for ARMv8.2 and ARMv8.3.
Expose them to userspace via HWCAPs and MRS emulation (a userspace
check is sketched after the list below).

SHA2-512  - Instruction support for the SHA512 hash algorithm
	    (e.g. SHA512H, SHA512H2, SHA512SU0, SHA512SU1)
SHA3	  - SHA3 crypto instructions (EOR3, RAX1, XAR, BCAX).
SM3	  - Instruction support for the Chinese cryptography algorithm SM3
SM4	  - Instruction support for the Chinese cryptography algorithm SM4
DP	  - Dot Product instructions (UDOT, SDOT).
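
A userspace program can probe the new hwcaps the same way it probes any
other; a short sketch (HWCAP_ASIMDDP matches the uapi header hunk below,
everything else is plain glibc):

  /* Sketch: check the Dot Product hwcap from userspace. */
  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP_ASIMDDP
  #define HWCAP_ASIMDDP   (1 << 20)   /* from asm/hwcap.h in this patch */
  #endif

  int main(void)
  {
      unsigned long hwcaps = getauxval(AT_HWCAP);

      printf("asimddp: %s\n", (hwcaps & HWCAP_ASIMDDP) ? "yes" : "no");
      return 0;
  }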

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 Documentation/arm64/cpu-feature-registers.txt |   12 +++++++++++-
 arch/arm64/include/asm/sysreg.h               |    4 ++++
 arch/arm64/include/uapi/asm/hwcap.h           |    5 +++++
 arch/arm64/kernel/cpufeature.c                |    9 +++++++++
 arch/arm64/kernel/cpuinfo.c                   |    5 +++++
 5 files changed, 34 insertions(+), 1 deletion(-)

--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -110,10 +110,20 @@ infrastructure:
      x--------------------------------------------------x
      | Name                         |  bits   | visible |
      |--------------------------------------------------|
-     | RES0                         | [63-32] |    n    |
+     | RES0                         | [63-48] |    n    |
+     |--------------------------------------------------|
+     | DP                           | [47-44] |    y    |
+     |--------------------------------------------------|
+     | SM4                          | [43-40] |    y    |
+     |--------------------------------------------------|
+     | SM3                          | [39-36] |    y    |
+     |--------------------------------------------------|
+     | SHA3                         | [35-32] |    y    |
      |--------------------------------------------------|
      | RDM                          | [31-28] |    y    |
      |--------------------------------------------------|
+     | RES0                         | [27-24] |    n    |
+     |--------------------------------------------------|
      | ATOMICS                      | [23-20] |    y    |
      |--------------------------------------------------|
      | CRC32                        | [19-16] |    y    |
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -375,6 +375,10 @@
 #define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
 
 /* id_aa64isar0 */
+#define ID_AA64ISAR0_DP_SHIFT		44
+#define ID_AA64ISAR0_SM4_SHIFT		40
+#define ID_AA64ISAR0_SM3_SHIFT		36
+#define ID_AA64ISAR0_SHA3_SHIFT		32
 #define ID_AA64ISAR0_RDM_SHIFT		28
 #define ID_AA64ISAR0_ATOMICS_SHIFT	20
 #define ID_AA64ISAR0_CRC32_SHIFT	16
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -37,5 +37,10 @@
 #define HWCAP_FCMA		(1 << 14)
 #define HWCAP_LRCPC		(1 << 15)
 #define HWCAP_DCPOP		(1 << 16)
+#define HWCAP_SHA3		(1 << 17)
+#define HWCAP_SM3		(1 << 18)
+#define HWCAP_SM4		(1 << 19)
+#define HWCAP_ASIMDDP		(1 << 20)
+#define HWCAP_SHA512		(1 << 21)
 
 #endif /* _UAPI__ASM_HWCAP_H */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -107,6 +107,10 @@ cpufeature_pan_not_uao(const struct arm6
  * sync with the documentation of the CPU feature register ABI.
  */
 static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_DP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
@@ -1040,9 +1044,14 @@ static const struct arm64_cpu_capabiliti
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_AES),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA1),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA2),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_SHA512),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_CRC32),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ATOMICS),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDRDM),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA3),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM3),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -69,6 +69,11 @@ static const char *const hwcap_str[] = {
 	"fcma",
 	"lrcpc",
 	"dcpop",
+	"sha3",
+	"sm3",
+	"sm4",
+	"asimddp",
+	"sha512",
 	NULL
 };
 




* [PATCH 4.14 039/119] arm64: Fix the feature type for ID register fields
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Catalin Marinas, Dave Martin, Mark Rutland,
	Will Deacon, Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 5bdecb7971572a1aef828df507558e7a4dfe25ec ]

Now that the ARM ARM clearly specifies the rules for inferring
the values of the ID register fields, fix the types of the
feature bits we have in the kernel.

ARM ARM DDI0487B.b, section D10.1.4 "Principles of the
ID scheme for fields in ID registers", lists the registers to
which the scheme applies along with the exceptions.

This patch changes the relevant feature bits from FTR_EXACT
to FTR_LOWER_SAFE to select the safer value. This will enable
an older kernel running on a new CPU to detect the safer option
rather than completely disabling the feature.
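
For an unsigned field, "lower is safer" simply means taking the minimum
of what the CPUs report; e.g. if one CPU reports SHA2 = 2 (SHA-512
capable) and another reports SHA2 = 1, the system-wide value becomes 1
instead of the whole feature being sanitised away. A sketch of the two
policies (hypothetical helpers, not the arm64 cpufeature code):

  /* LOWER_SAFE: the smaller value is the safe system-wide choice. */
  static inline unsigned int example_lower_safe(unsigned int new, unsigned int cur)
  {
      return new < cur ? new : cur;
  }

  /* EXACT: any mismatch falls back to a fixed safe value. */
  static inline unsigned int example_exact(unsigned int new, unsigned int cur,
                                           unsigned int safe)
  {
      return new == cur ? cur : safe;
  }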

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |  102 ++++++++++++++++++++---------------------
 1 file changed, 51 insertions(+), 51 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -107,11 +107,11 @@ cpufeature_pan_not_uao(const struct arm6
  * sync with the documentation of the CPU feature register ABI.
  */
 static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_DP_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
@@ -121,36 +121,36 @@ static const struct arm64_ftr_bits ftr_i
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64PFR0_GIC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
 	/* Linux doesn't care about the EL3 */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64PFR0_EL3_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
 	ARM64_FTR_END,
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
-	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
-	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
 	/* Linux shouldn't care about secure memory */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
 	/*
 	 * Differing PARange is fine as long as all peripherals and memory are mapped
 	 * within the minimum PARange of all CPUs
@@ -161,20 +161,20 @@ static const struct arm64_ftr_bits ftr_i
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
@@ -201,14 +201,14 @@ struct arm64_ftr_reg arm64_ftr_reg_ctrel
 };
 
 static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
-	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 28, 4, 0xf),	/* InnerShr */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 24, 4, 0),	/* FCSE */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0xf),	/* InnerShr */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0),	/* FCSE */
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 20, 4, 0),	/* AuxReg */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 16, 4, 0),	/* TCM */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 12, 4, 0),	/* ShareLvl */
-	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 8, 4, 0xf),	/* OuterShr */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 4, 4, 0),	/* PMSA */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 4, 0),	/* VMSA */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0),	/* TCM */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0),	/* ShareLvl */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0xf),	/* OuterShr */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0),	/* PMSA */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),	/* VMSA */
 	ARM64_FTR_END,
 };
 
@@ -229,8 +229,8 @@ static const struct arm64_ftr_bits ftr_i
 };
 
 static const struct arm64_ftr_bits ftr_mvfr2[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 4, 4, 0),		/* FPMisc */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 4, 0),		/* SIMDMisc */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0),		/* FPMisc */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),		/* SIMDMisc */
 	ARM64_FTR_END,
 };
 
@@ -242,25 +242,25 @@ static const struct arm64_ftr_bits ftr_d
 
 
 static const struct arm64_ftr_bits ftr_id_isar5[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_RDM_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_CRC32_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_SHA2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_SHA1_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_AES_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_ISAR5_SEVL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
 
 static const struct arm64_ftr_bits ftr_id_mmfr4[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 4, 4, 0),		/* ac2 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0),	/* ac2 */
 	ARM64_FTR_END,
 };
 
 static const struct arm64_ftr_bits ftr_id_pfr0[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 12, 4, 0),	/* State3 */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 8, 4, 0),		/* State2 */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 4, 4, 0),		/* State1 */
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 4, 0),		/* State0 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0),		/* State3 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0),		/* State2 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0),		/* State1 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),		/* State0 */
 	ARM64_FTR_END,
 };
 




* [PATCH 4.14 040/119] arm64: v8.4: Support for new floating point multiplication instructions
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Dave Martin, Suzuki K Poulose, Dongjiu Geng,
	Catalin Marinas, Ard Biesheuvel

From: Dongjiu Geng <gengdongjiu@huawei.com>

[ Upstream commit 3b3b681097fae73b7f5dcdd42db6cfdf32943d4c ]

ARM v8.4 extensions add new NEON instructions for multiplying each FP16
element of one vector with the corresponding FP16 element of a second
vector, and adding or subtracting the result, without intermediate
rounding, to or from the corresponding FP32 element of a third vector.

This patch detects this feature and lets userspace know about it via a
HWCAP bit and MRS emulation.

Cc: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[ardb: fix up for missing SVE in context]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 Documentation/arm64/cpu-feature-registers.txt |    4 +++-
 arch/arm64/include/asm/sysreg.h               |    1 +
 arch/arm64/include/uapi/asm/hwcap.h           |    2 ++
 arch/arm64/kernel/cpufeature.c                |    2 ++
 arch/arm64/kernel/cpuinfo.c                   |    2 ++
 5 files changed, 10 insertions(+), 1 deletion(-)

--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -110,7 +110,9 @@ infrastructure:
      x--------------------------------------------------x
      | Name                         |  bits   | visible |
      |--------------------------------------------------|
-     | RES0                         | [63-48] |    n    |
+     | RES0                         | [63-52] |    n    |
+     |--------------------------------------------------|
+     | FHM                          | [51-48] |    y    |
      |--------------------------------------------------|
      | DP                           | [47-44] |    y    |
      |--------------------------------------------------|
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -375,6 +375,7 @@
 #define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
 
 /* id_aa64isar0 */
+#define ID_AA64ISAR0_FHM_SHIFT		48
 #define ID_AA64ISAR0_DP_SHIFT		44
 #define ID_AA64ISAR0_SM4_SHIFT		40
 #define ID_AA64ISAR0_SM3_SHIFT		36
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -42,5 +42,7 @@
 #define HWCAP_SM4		(1 << 19)
 #define HWCAP_ASIMDDP		(1 << 20)
 #define HWCAP_SHA512		(1 << 21)
+#define HWCAP_SVE		(1 << 22)
+#define HWCAP_ASIMDFHM		(1 << 23)
 
 #endif /* _UAPI__ASM_HWCAP_H */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -107,6 +107,7 @@ cpufeature_pan_not_uao(const struct arm6
  * sync with the documentation of the CPU feature register ABI.
  */
 static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
@@ -1052,6 +1053,7 @@ static const struct arm64_cpu_capabiliti
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM3),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -74,6 +74,8 @@ static const char *const hwcap_str[] = {
 	"sm4",
 	"asimddp",
 	"sha512",
+	"sve",
+	"asimdfhm",
 	NULL
 };
 




* [PATCH 4.14 041/119] arm64: Documentation: cpu-feature-registers: Remove RES0 fields
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Catalin Marinas, Will Deacon, Mark Rutland,
	Dave Martin, Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 847ecd3fa311cde0f10a1b66c572abb136742b1d ]

Remove the invisible RES0 field entries from the table listing
fields in the CPU ID feature registers, as:
 1) We are only interested in the user visible fields.
 2) The field description may not be up-to-date, as the
    field could be assigned a new meaning.
 3) We already explain the rules of the fields which are not
    visible.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[ardb: fix up for missing SVE in context]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 Documentation/arm64/cpu-feature-registers.txt |    8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -110,7 +110,6 @@ infrastructure:
      x--------------------------------------------------x
      | Name                         |  bits   | visible |
      |--------------------------------------------------|
-     | RES0                         | [63-52] |    n    |
      |--------------------------------------------------|
      | FHM                          | [51-48] |    y    |
      |--------------------------------------------------|
@@ -124,8 +123,6 @@ infrastructure:
      |--------------------------------------------------|
      | RDM                          | [31-28] |    y    |
      |--------------------------------------------------|
-     | RES0                         | [27-24] |    n    |
-     |--------------------------------------------------|
      | ATOMICS                      | [23-20] |    y    |
      |--------------------------------------------------|
      | CRC32                        | [19-16] |    y    |
@@ -135,8 +132,6 @@ infrastructure:
      | SHA1                         | [11-8]  |    y    |
      |--------------------------------------------------|
      | AES                          | [7-4]   |    y    |
-     |--------------------------------------------------|
-     | RES0                         | [3-0]   |    n    |
      x--------------------------------------------------x
 
 
@@ -144,7 +139,8 @@ infrastructure:
      x--------------------------------------------------x
      | Name                         |  bits   | visible |
      |--------------------------------------------------|
-     | RES0                         | [63-28] |    n    |
+     |--------------------------------------------------|
+     | SVE                          | [35-32] |    y    |
      |--------------------------------------------------|
      | GIC                          | [27-24] |    n    |
      |--------------------------------------------------|




* [PATCH 4.14 042/119] arm64: Expose Arm v8.4 features
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Catalin Marinas, Will Deacon, Mark Rutland,
	Dave Martin, Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 7206dc93a58fb76421c4411eefa3c003337bcb2d ]

Expose the new features introduced by the Arm v8.4 extensions to
the Arm v8-A profile.

These include (a userspace check of one of them is sketched after
this list):

 1) Data independent timing of instructions (DIT, exposed as HWCAP_DIT).
 2) Unaligned atomic instructions and single-copy atomicity of loads
    and stores (AT, exposed as HWCAP_USCAT).
 3) LDAPR and STLR instructions with immediate offsets (extension to
    LRCPC, exposed as HWCAP_ILRCPC).
 4) Flag manipulation instructions (TS, exposed as HWCAP_FLAGM).
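
Since these are also visible through MRS emulation, userspace can read
the ID register directly and extract a field; a sketch for the TS field
(arm64 only, shift value as in the sysreg.h hunk below):

  /* Sketch: read ID_AA64ISAR0_EL1 via the kernel's MRS emulation. */
  #include <stdio.h>

  int main(void)
  {
      unsigned long isar0, ts;

      asm("mrs %0, ID_AA64ISAR0_EL1" : "=r" (isar0));
      ts = (isar0 >> 52) & 0xf;   /* TS != 0 => FLAGM instructions */
      printf("TS field: %lu\n", ts);
      return 0;
  }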

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[ardb: fix up context for missing SVE]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 Documentation/arm64/cpu-feature-registers.txt |   10 ++++++++++
 arch/arm64/include/asm/sysreg.h               |    3 +++
 arch/arm64/include/uapi/asm/hwcap.h           |    4 ++++
 arch/arm64/kernel/cpufeature.c                |    7 +++++++
 arch/arm64/kernel/cpuinfo.c                   |    4 ++++
 5 files changed, 28 insertions(+)

--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -110,6 +110,7 @@ infrastructure:
      x--------------------------------------------------x
      | Name                         |  bits   | visible |
      |--------------------------------------------------|
+     | TS                           | [55-52] |    y    |
      |--------------------------------------------------|
      | FHM                          | [51-48] |    y    |
      |--------------------------------------------------|
@@ -139,6 +140,7 @@ infrastructure:
      x--------------------------------------------------x
      | Name                         |  bits   | visible |
      |--------------------------------------------------|
+     | DIT                          | [51-48] |    y    |
      |--------------------------------------------------|
      | SVE                          | [35-32] |    y    |
      |--------------------------------------------------|
@@ -191,6 +193,14 @@ infrastructure:
      | DPB                          | [3-0]   |    y    |
      x--------------------------------------------------x
 
+  5) ID_AA64MMFR2_EL1 - Memory model feature register 2
+
+     x--------------------------------------------------x
+     | Name                         |  bits   | visible |
+     |--------------------------------------------------|
+     | AT                           | [35-32] |    y    |
+     x--------------------------------------------------x
+
 Appendix I: Example
 ---------------------------
 
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -375,6 +375,7 @@
 #define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
 
 /* id_aa64isar0 */
+#define ID_AA64ISAR0_TS_SHIFT		52
 #define ID_AA64ISAR0_FHM_SHIFT		48
 #define ID_AA64ISAR0_DP_SHIFT		44
 #define ID_AA64ISAR0_SM4_SHIFT		40
@@ -396,6 +397,7 @@
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
 #define ID_AA64PFR0_CSV2_SHIFT		56
+#define ID_AA64PFR0_DIT_SHIFT		48
 #define ID_AA64PFR0_GIC_SHIFT		24
 #define ID_AA64PFR0_ASIMD_SHIFT		20
 #define ID_AA64PFR0_FP_SHIFT		16
@@ -441,6 +443,7 @@
 #define ID_AA64MMFR1_VMIDBITS_16	2
 
 /* id_aa64mmfr2 */
+#define ID_AA64MMFR2_AT_SHIFT		32
 #define ID_AA64MMFR2_LVA_SHIFT		16
 #define ID_AA64MMFR2_IESB_SHIFT		12
 #define ID_AA64MMFR2_LSM_SHIFT		8
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -44,5 +44,9 @@
 #define HWCAP_SHA512		(1 << 21)
 #define HWCAP_SVE		(1 << 22)
 #define HWCAP_ASIMDFHM		(1 << 23)
+#define HWCAP_DIT		(1 << 24)
+#define HWCAP_USCAT		(1 << 25)
+#define HWCAP_ILRCPC		(1 << 26)
+#define HWCAP_FLAGM		(1 << 27)
 
 #endif /* _UAPI__ASM_HWCAP_H */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -107,6 +107,7 @@ cpufeature_pan_not_uao(const struct arm6
  * sync with the documentation of the CPU feature register ABI.
  */
 static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
@@ -132,6 +133,7 @@ static const struct arm64_ftr_bits ftr_i
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
@@ -171,6 +173,7 @@ static const struct arm64_ftr_bits ftr_i
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
@@ -1054,14 +1057,18 @@ static const struct arm64_cpu_capabiliti
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
 	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FLAGM),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_DIT),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
+	HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
 	{},
 };
 
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -76,6 +76,10 @@ static const char *const hwcap_str[] = {
 	"sha512",
 	"sve",
 	"asimdfhm",
+	"dit",
+	"uscat",
+	"ilrcpc",
+	"flagm",
 	NULL
 };
 




* [PATCH 4.14 043/119] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mark Rutland, Catalin Marinas, Dave Martin,
	James Morse, Will Deacon, Ard Biesheuvel

From: Mark Rutland <mark.rutland@arm.com>

[ Upstream commit 1c312e84c2d71da4101754fa6118f703f7473e01 ]

Currently we assert that the SCTLR_EL{1,2}_{SET,CLEAR} bits are
self-consistent with an assertion in config_sctlr_el1(). This is a bit
unusual, since config_sctlr_el1() doesn't make use of these definitions,
and is far away from the definitions themselves.

We can use the CPP #error directive to have equivalent assertions in
<asm/sysreg.h>, next to the definitions of the set/clear bits, which is
a bit clearer and simpler.

At the same time, let's fill in the upper 32 bits for both registers in
their respective RES0 definitions. This could be a little nicer with
GENMASK_ULL(63, 32), but this currently lives in <linux/bitops.h>, which
cannot safely be included from assembly, as <asm/sysreg.h> can.

Note that when the preprocessor evaluates an expression for an #if
directive, all signed or unsigned values are treated as intmax_t or
uintmax_t respectively. To avoid ambiguity, we explicitly define
the mask of all 64 bits.
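
The same pattern with made-up EXAMPLE_* masks, including the explicit
64-bit width discussed above; an inconsistency now fails at
preprocessing time in every file that includes the header:

  /* Sketch only: hypothetical masks, not the SCTLR_EL{1,2} values. */
  #define EXAMPLE_SET     0x00000000000000ffULL
  #define EXAMPLE_CLEAR   0xffffffffffffff00ULL

  #if (EXAMPLE_SET ^ EXAMPLE_CLEAR) != 0xffffffffffffffff
  #error "Inconsistent EXAMPLE set/clear bits"
  #endif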

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/sysreg.h |   20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -315,7 +315,8 @@
 #define SCTLR_EL2_RES0	((1 << 6)  | (1 << 7)  | (1 << 8)  | (1 << 9)  | \
 			 (1 << 10) | (1 << 13) | (1 << 14) | (1 << 15) | \
 			 (1 << 17) | (1 << 20) | (1 << 21) | (1 << 24) | \
-			 (1 << 26) | (1 << 27) | (1 << 30) | (1 << 31))
+			 (1 << 26) | (1 << 27) | (1 << 30) | (1 << 31) | \
+			 (0xffffffffUL << 32))
 
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL2		SCTLR_ELx_EE
@@ -331,9 +332,9 @@
 			 SCTLR_ELx_SA     | SCTLR_ELx_I    | SCTLR_ELx_WXN | \
 			 ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
 
-/* Check all the bits are accounted for */
-#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
-
+#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff
+#error "Inconsistent SCTLR_EL2 set/clear bits"
+#endif
 
 /* SCTLR_EL1 specific flags. */
 #define SCTLR_EL1_UCI		(1 << 26)
@@ -352,7 +353,8 @@
 #define SCTLR_EL1_RES1	((1 << 11) | (1 << 20) | (1 << 22) | (1 << 28) | \
 			 (1 << 29))
 #define SCTLR_EL1_RES0  ((1 << 6)  | (1 << 10) | (1 << 13) | (1 << 17) | \
-			 (1 << 21) | (1 << 27) | (1 << 30) | (1 << 31))
+			 (1 << 21) | (1 << 27) | (1 << 30) | (1 << 31) | \
+			 (0xffffffffUL << 32))
 
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL1		(SCTLR_EL1_E0E | SCTLR_ELx_EE)
@@ -371,8 +373,9 @@
 			 SCTLR_EL1_UMA | SCTLR_ELx_WXN     | ENDIAN_CLEAR_EL1 |\
 			 SCTLR_EL1_RES0)
 
-/* Check all the bits are accounted for */
-#define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
+#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff
+#error "Inconsistent SCTLR_EL1 set/clear bits"
+#endif
 
 /* id_aa64isar0 */
 #define ID_AA64ISAR0_TS_SHIFT		52
@@ -585,9 +588,6 @@ static inline void config_sctlr_el1(u32
 {
 	u32 val;
 
-	SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS;
-	SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS;
-
 	val = read_sysreg(sctlr_el1);
 	val &= ~clear;
 	val |= set;



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 044/119] arm64: add PSR_AA32_* definitions
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (42 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 043/119] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h> Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 045/119] arm64: Introduce sysreg_clear_set() Greg Kroah-Hartman
                   ` (79 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mark Rutland, Catalin Marinas,
	Christoffer Dall, Marc Zyngier, Suzuki Poulose, Will Deacon,
	Ard Biesheuvel

From: Mark Rutland <mark.rutland@arm.com>

[ Upstream commit 25086263425641c74123f9387426c23072b299ea ]

The AArch32 CPSR/SPSR format is *almost* identical to the AArch64
SPSR_ELx format for exceptions taken from AArch32, but the two have
diverged with the addition of DIT, and we need to treat the two as
logically distinct.

This patch adds new definitions for the SPSR_ELx format for exceptions
taken from AArch32, with a consistent PSR_AA32_ prefix. The existing
COMPAT_PSR_ definitions will be used for the PSR format as seen from
AArch32.

Definitions of DIT are provided for both, and inline functions are
provided to map between the two formats. Note that for SPSR_ELx, the
(RES0) J bit has been re-allocated as the DIT bit.

Once users of the COMPAT_PSR definitions have been migrated over to the
PSR_AA32 definitions, the majority of the former will be removed, so
no effort is made to avoid duplication until then.
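
As a rough usage sketch (not part of this patch, assuming
<asm/ptrace.h> and <linux/bug.h> are available), the conversion helpers
only move the DIT bit, and a round trip is lossless:

static void dit_conversion_example(void)
{
	/* DIT is bit 21 in the AArch32 view (COMPAT_PSR_DIT_BIT) and
	 * bit 24 in the SPSR_ELx view (PSR_AA32_DIT_BIT). */
	unsigned long cpsr = COMPAT_PSR_MODE_USR | COMPAT_PSR_DIT_BIT;
	unsigned long spsr = compat_psr_to_pstate(cpsr);

	/* Converting back recovers the original CPSR value. */
	WARN_ON(pstate_to_compat_psr(spsr) != cpsr);
}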

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/ptrace.h |   57 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 56 insertions(+), 1 deletion(-)

--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -35,7 +35,37 @@
 #define COMPAT_PTRACE_GETHBPREGS	29
 #define COMPAT_PTRACE_SETHBPREGS	30
 
-/* AArch32 CPSR bits */
+/* SPSR_ELx bits for exceptions taken from AArch32 */
+#define PSR_AA32_MODE_MASK	0x0000001f
+#define PSR_AA32_MODE_USR	0x00000010
+#define PSR_AA32_MODE_FIQ	0x00000011
+#define PSR_AA32_MODE_IRQ	0x00000012
+#define PSR_AA32_MODE_SVC	0x00000013
+#define PSR_AA32_MODE_ABT	0x00000017
+#define PSR_AA32_MODE_HYP	0x0000001a
+#define PSR_AA32_MODE_UND	0x0000001b
+#define PSR_AA32_MODE_SYS	0x0000001f
+#define PSR_AA32_T_BIT		0x00000020
+#define PSR_AA32_F_BIT		0x00000040
+#define PSR_AA32_I_BIT		0x00000080
+#define PSR_AA32_A_BIT		0x00000100
+#define PSR_AA32_E_BIT		0x00000200
+#define PSR_AA32_DIT_BIT	0x01000000
+#define PSR_AA32_Q_BIT		0x08000000
+#define PSR_AA32_V_BIT		0x10000000
+#define PSR_AA32_C_BIT		0x20000000
+#define PSR_AA32_Z_BIT		0x40000000
+#define PSR_AA32_N_BIT		0x80000000
+#define PSR_AA32_IT_MASK	0x0600fc00	/* If-Then execution state mask */
+#define PSR_AA32_GE_MASK	0x000f0000
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define PSR_AA32_ENDSTATE	PSR_AA32_E_BIT
+#else
+#define PSR_AA32_ENDSTATE	0
+#endif
+
+/* AArch32 CPSR bits, as seen in AArch32 */
 #define COMPAT_PSR_MODE_MASK	0x0000001f
 #define COMPAT_PSR_MODE_USR	0x00000010
 #define COMPAT_PSR_MODE_FIQ	0x00000011
@@ -50,6 +80,7 @@
 #define COMPAT_PSR_I_BIT	0x00000080
 #define COMPAT_PSR_A_BIT	0x00000100
 #define COMPAT_PSR_E_BIT	0x00000200
+#define COMPAT_PSR_DIT_BIT	0x00200000
 #define COMPAT_PSR_J_BIT	0x01000000
 #define COMPAT_PSR_Q_BIT	0x08000000
 #define COMPAT_PSR_V_BIT	0x10000000
@@ -111,6 +142,30 @@
 #define compat_sp_fiq	regs[29]
 #define compat_lr_fiq	regs[30]
 
+static inline unsigned long compat_psr_to_pstate(const unsigned long psr)
+{
+	unsigned long pstate;
+
+	pstate = psr & ~COMPAT_PSR_DIT_BIT;
+
+	if (psr & COMPAT_PSR_DIT_BIT)
+		pstate |= PSR_AA32_DIT_BIT;
+
+	return pstate;
+}
+
+static inline unsigned long pstate_to_compat_psr(const unsigned long pstate)
+{
+	unsigned long psr;
+
+	psr = pstate & ~PSR_AA32_DIT_BIT;
+
+	if (pstate & PSR_AA32_DIT_BIT)
+		psr |= COMPAT_PSR_DIT_BIT;
+
+	return psr;
+}
+
 /*
  * This struct defines the way the registers are stored on the stack during an
  * exception. Note that sizeof(struct pt_regs) has to be a multiple of 16 (for



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 045/119] arm64: Introduce sysreg_clear_set()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (43 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 044/119] arm64: add PSR_AA32_* definitions Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 046/119] arm64: capabilities: Update prototype for enable call back Greg Kroah-Hartman
                   ` (78 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mark Rutland, Dave Martin, Catalin Marinas,
	Marc Zyngier, Ard Biesheuvel

From: Mark Rutland <mark.rutland@arm.com>

[ Upstream commit 6ebdf4db8fa564a150f46d32178af0873eb5abbb ]

Currently we have a couple of helpers to manipulate bits in particular
sysregs:

 * config_sctlr_el1(u32 clear, u32 set)

 * change_cpacr(u64 val, u64 mask)

The parameters of these differ in naming convention, order, and size,
which is unfortunate. They also differ slightly in behaviour, as
change_cpacr() skips the sysreg write if the bits are unchanged, which
is a useful optimization when sysreg writes are expensive.

Before we gain yet another sysreg manipulation function, let's
unify these with a common helper, providing a consistent order for
clear/set operands, and the write skipping behaviour from
change_cpacr(). Code will be migrated to the new helper in subsequent
patches.
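
As a rough sketch of the intended use (not part of this patch), a caller
that clears SCTLR_EL1.UCT today via config_sctlr_el1() could instead
write:

/* Clear SCTLR_EL1.UCT, set nothing; the write is skipped if the
 * register already holds the desired value. */
sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);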

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/sysreg.h |   11 +++++++++++
 1 file changed, 11 insertions(+)

--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -584,6 +584,17 @@ asm(
 	asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val));	\
 } while (0)
 
+/*
+ * Modify bits in a sysreg. Bits in the clear mask are zeroed, then bits in the
+ * set mask are set. Other bits are left as-is.
+ */
+#define sysreg_clear_set(sysreg, clear, set) do {			\
+	u64 __scs_val = read_sysreg(sysreg);				\
+	u64 __scs_new = (__scs_val & ~(u64)(clear)) | (set);		\
+	if (__scs_new != __scs_val)					\
+		write_sysreg(__scs_new, sysreg);			\
+} while (0)
+
 static inline void config_sctlr_el1(u32 clear, u32 set)
 {
 	u32 val;



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 046/119] arm64: capabilities: Update prototype for enable call back
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (44 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 045/119] arm64: Introduce sysreg_clear_set() Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 047/119] arm64: capabilities: Move errata work around check on boot CPU Greg Kroah-Hartman
                   ` (77 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Catalin Marinas, Mark Rutland,
	Andre Przywara, James Morse, Robin Murphy, Julien Thierry,
	Dave Martin, Suzuki K Poulose, Ard Biesheuvel

From: Dave Martin <dave.martin@arm.com>

[ Upstream commit c0cda3b8ee6b4b6851b2fd8b6db91fd7b0e2524a ]

We issue the enable() call back for all CPU hwcaps capabilities
available on the system, on all the CPUs. So far we have ignored
the argument passed to the call back, which had a prototype to
accept a "void *" for use with on_each_cpu() and later with
stop_machine(). However, with commit 0a0d111d40fd1
("arm64: cpufeature: Pass capability structure to ->enable callback"),
there are some users of the argument who want the matching capability
struct pointer where there are multiple matching criteria for a single
capability. Clean up the declaration of the call back to make it clear.

 1) Renamed to cpu_enable(), to imply taking the necessary actions on the
    called CPU for the entry.
 2) Pass a const pointer to the capability, to allow the call back to
    check the entry (e.g., to check if any action is needed on the CPU).
 3) We don't care about the result of the call back, so turn the return
    type into void.
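
As a hypothetical sketch of the new shape (the foo_* names are
illustrative only and not from this series), a capability entry and its
callback now look roughly like:

static void foo_cpu_enable(const struct arm64_cpu_capabilities *cap)
{
	/* cap points at the matching table entry; nothing is returned */
	config_sctlr_el1(SCTLR_EL1_UCI, 0);
}

static const struct arm64_cpu_capabilities foo_caps[] = {
	{
		.desc		= "Hypothetical capability",
		.capability	= ARM64_WORKAROUND_CLEAN_CACHE,	/* placeholder */
		.def_scope	= SCOPE_LOCAL_CPU,
		.matches	= has_mismatched_cache_type,	/* placeholder */
		.cpu_enable	= foo_cpu_enable,
	},
	{ },
};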

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: James Morse <james.morse@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Dave Martin <dave.martin@arm.com>
[suzuki: convert more users, rename call back and drop results]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    7 +++-
 arch/arm64/include/asm/processor.h  |    5 +--
 arch/arm64/kernel/cpu_errata.c      |   55 +++++++++++++++++-------------------
 arch/arm64/kernel/cpufeature.c      |   34 +++++++++++++---------
 arch/arm64/kernel/fpsimd.c          |    1 
 arch/arm64/kernel/traps.c           |    4 +-
 arch/arm64/mm/fault.c               |    3 -
 7 files changed, 60 insertions(+), 49 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -96,7 +96,12 @@ struct arm64_cpu_capabilities {
 	u16 capability;
 	int def_scope;			/* default scope */
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
-	int (*enable)(void *);		/* Called on all active CPUs */
+	/*
+	 * Take the appropriate actions to enable this capability for this CPU.
+	 * For each successfully booted CPU, this method is called for each
+	 * globally detected capability.
+	 */
+	void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -37,6 +37,7 @@
 #include <linux/string.h>
 
 #include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/fpsimd.h>
 #include <asm/hw_breakpoint.h>
 #include <asm/lse.h>
@@ -222,8 +223,8 @@ static inline void spin_lock_prefetch(co
 
 #endif
 
-int cpu_enable_pan(void *__unused);
-int cpu_enable_cache_maint_trap(void *__unused);
+void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused);
+void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused);
 
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_PROCESSOR_H */
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -61,11 +61,11 @@ has_mismatched_cache_type(const struct a
 	       (arm64_ftr_reg_ctrel0.sys_val & mask);
 }
 
-static int cpu_enable_trap_ctr_access(void *__unused)
+static void
+cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 {
 	/* Clear SCTLR_EL1.UCT */
 	config_sctlr_el1(SCTLR_EL1_UCT, 0);
-	return 0;
 }
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -169,25 +169,25 @@ static void call_hvc_arch_workaround_1(v
 	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
 }
 
-static int enable_smccc_arch_workaround_1(void *data)
+static void
+enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
-	const struct arm64_cpu_capabilities *entry = data;
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
 
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return 0;
+		return;
 
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return 0;
+		return;
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return 0;
+			return;
 		cb = call_hvc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_hvc_start;
 		smccc_end = __smccc_workaround_1_hvc_end;
@@ -197,19 +197,19 @@ static int enable_smccc_arch_workaround_
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return 0;
+			return;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
-		return 0;
+		return;
 	}
 
 	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
 
-	return 0;
+	return;
 }
 
 static void qcom_link_stack_sanitization(void)
@@ -224,15 +224,12 @@ static void qcom_link_stack_sanitization
 		     : "=&r" (tmp));
 }
 
-static int qcom_enable_link_stack_sanitization(void *data)
+static void
+qcom_enable_link_stack_sanitization(const struct arm64_cpu_capabilities *entry)
 {
-	const struct arm64_cpu_capabilities *entry = data;
-
 	install_bp_hardening_cb(entry, qcom_link_stack_sanitization,
 				__qcom_hyp_sanitize_link_stack_start,
 				__qcom_hyp_sanitize_link_stack_end);
-
-	return 0;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
@@ -431,7 +428,7 @@ const struct arm64_cpu_capabilities arm6
 		.desc = "ARM errata 826319, 827319, 824069",
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
 		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x02),
-		.enable = cpu_enable_cache_maint_trap,
+		.cpu_enable = cpu_enable_cache_maint_trap,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_819472
@@ -440,7 +437,7 @@ const struct arm64_cpu_capabilities arm6
 		.desc = "ARM errata 819472",
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
 		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x01),
-		.enable = cpu_enable_cache_maint_trap,
+		.cpu_enable = cpu_enable_cache_maint_trap,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_832075
@@ -521,14 +518,14 @@ const struct arm64_cpu_capabilities arm6
 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
 		.matches = has_mismatched_cache_type,
 		.def_scope = SCOPE_LOCAL_CPU,
-		.enable = cpu_enable_trap_ctr_access,
+		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 	{
 		.desc = "Mismatched cache type",
 		.capability = ARM64_MISMATCHED_CACHE_TYPE,
 		.matches = has_mismatched_cache_type,
 		.def_scope = SCOPE_LOCAL_CPU,
-		.enable = cpu_enable_trap_ctr_access,
+		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
 	{
@@ -567,27 +564,27 @@ const struct arm64_cpu_capabilities arm6
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-		.enable = qcom_enable_link_stack_sanitization,
+		.cpu_enable = qcom_enable_link_stack_sanitization,
 	},
 	{
 		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
@@ -596,7 +593,7 @@ const struct arm64_cpu_capabilities arm6
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-		.enable = qcom_enable_link_stack_sanitization,
+		.cpu_enable = qcom_enable_link_stack_sanitization,
 	},
 	{
 		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
@@ -605,12 +602,12 @@ const struct arm64_cpu_capabilities arm6
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 #endif
 #ifdef CONFIG_ARM64_SSBD
@@ -636,8 +633,8 @@ void verify_local_cpu_errata_workarounds
 
 	for (; caps->matches; caps++) {
 		if (cpus_have_cap(caps->capability)) {
-			if (caps->enable)
-				caps->enable((void *)caps);
+			if (caps->cpu_enable)
+				caps->cpu_enable(caps);
 		} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
 			pr_crit("CPU%d: Requires work around for %s, not detected"
 					" at boot time\n",
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -859,7 +859,8 @@ static bool unmap_kernel_at_el0(const st
 						     ID_AA64PFR0_CSV3_SHIFT);
 }
 
-static int kpti_install_ng_mappings(void *__unused)
+static void
+kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 {
 	typedef void (kpti_remap_fn)(int, int, phys_addr_t);
 	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
@@ -869,7 +870,7 @@ static int kpti_install_ng_mappings(void
 	int cpu = smp_processor_id();
 
 	if (kpti_applied)
-		return 0;
+		return;
 
 	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
 
@@ -880,7 +881,7 @@ static int kpti_install_ng_mappings(void
 	if (!cpu)
 		kpti_applied = true;
 
-	return 0;
+	return;
 }
 
 static int __init parse_kpti(char *str)
@@ -897,7 +898,7 @@ static int __init parse_kpti(char *str)
 early_param("kpti", parse_kpti);
 #endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
-static int cpu_copy_el2regs(void *__unused)
+static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
 	 * Copy register values that aren't redirected by hardware.
@@ -909,8 +910,6 @@ static int cpu_copy_el2regs(void *__unus
 	 */
 	if (!alternatives_applied)
 		write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
-
-	return 0;
 }
 
 static const struct arm64_cpu_capabilities arm64_features[] = {
@@ -934,7 +933,7 @@ static const struct arm64_cpu_capabiliti
 		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
-		.enable = cpu_enable_pan,
+		.cpu_enable = cpu_enable_pan,
 	},
 #endif /* CONFIG_ARM64_PAN */
 #if defined(CONFIG_AS_LSE) && defined(CONFIG_ARM64_LSE_ATOMICS)
@@ -982,7 +981,7 @@ static const struct arm64_cpu_capabiliti
 		.capability = ARM64_HAS_VIRT_HOST_EXTN,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = runs_at_el2,
-		.enable = cpu_copy_el2regs,
+		.cpu_enable = cpu_copy_el2regs,
 	},
 	{
 		.desc = "32-bit EL0 Support",
@@ -1006,7 +1005,7 @@ static const struct arm64_cpu_capabiliti
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,
-		.enable = kpti_install_ng_mappings,
+		.cpu_enable = kpti_install_ng_mappings,
 	},
 #endif
 	{
@@ -1169,6 +1168,14 @@ void update_cpu_capabilities(const struc
 	}
 }
 
+static int __enable_cpu_capability(void *arg)
+{
+	const struct arm64_cpu_capabilities *cap = arg;
+
+	cap->cpu_enable(cap);
+	return 0;
+}
+
 /*
  * Run through the enabled capabilities and enable() it on all active
  * CPUs
@@ -1184,14 +1191,15 @@ void __init enable_cpu_capabilities(cons
 		/* Ensure cpus_have_const_cap(num) works */
 		static_branch_enable(&cpu_hwcap_keys[num]);
 
-		if (caps->enable) {
+		if (caps->cpu_enable) {
 			/*
 			 * Use stop_machine() as it schedules the work allowing
 			 * us to modify PSTATE, instead of on_each_cpu() which
 			 * uses an IPI, giving us a PSTATE that disappears when
 			 * we return.
 			 */
-			stop_machine(caps->enable, (void *)caps, cpu_online_mask);
+			stop_machine(__enable_cpu_capability, (void *)caps,
+				     cpu_online_mask);
 		}
 	}
 }
@@ -1249,8 +1257,8 @@ verify_local_cpu_features(const struct a
 					smp_processor_id(), caps->desc);
 			cpu_die_early();
 		}
-		if (caps->enable)
-			caps->enable((void *)caps);
+		if (caps->cpu_enable)
+			caps->cpu_enable(caps);
 	}
 }
 
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -28,6 +28,7 @@
 #include <linux/signal.h>
 
 #include <asm/fpsimd.h>
+#include <asm/cpufeature.h>
 #include <asm/cputype.h>
 #include <asm/simd.h>
 
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -38,6 +38,7 @@
 
 #include <asm/atomic.h>
 #include <asm/bug.h>
+#include <asm/cpufeature.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
 #include <asm/insn.h>
@@ -436,10 +437,9 @@ asmlinkage void __exception do_undefinst
 	force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
 }
 
-int cpu_enable_cache_maint_trap(void *__unused)
+void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_UCI, 0);
-	return 0;
 }
 
 #define __user_cache_maint(insn, address, res)			\
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -875,7 +875,7 @@ asmlinkage int __exception do_debug_exce
 NOKPROBE_SYMBOL(do_debug_exception);
 
 #ifdef CONFIG_ARM64_PAN
-int cpu_enable_pan(void *__unused)
+void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
 	 * We modify PSTATE. This won't work from irq context as the PSTATE
@@ -885,6 +885,5 @@ int cpu_enable_pan(void *__unused)
 
 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
 	asm(SET_PSTATE_PAN(1));
-	return 0;
 }
 #endif /* CONFIG_ARM64_PAN */



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 047/119] arm64: capabilities: Move errata work around check on boot CPU
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (45 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 046/119] arm64: capabilities: Update prototype for enable call back Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 048/119] arm64: capabilities: Move errata processing code Greg Kroah-Hartman
                   ` (76 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Catalin Marinas, Mark Rutland,
	Dave Martin, Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 5e91107b06811f0ca147cebbedce53626c9c4443 ]

We trigger CPU errata work around check on the boot CPU from
smp_prepare_boot_cpu() to make sure that we run the checks only
after the CPU feature infrastructure is initialised. While this
is correct, we can also do this from init_cpu_features(), which
initialises the infrastructure and is called only on the
boot CPU. This helps to consolidate the CPU capability handling
to cpufeature.c. No functional changes.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |    5 +++++
 arch/arm64/kernel/smp.c        |    6 ------
 2 files changed, 5 insertions(+), 6 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -521,6 +521,11 @@ void __init init_cpu_features(struct cpu
 		init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
 	}
 
+	/*
+	 * Run the errata work around checks on the boot CPU, once we have
+	 * initialised the cpu feature infrastructure.
+	 */
+	update_cpu_errata_workarounds();
 }
 
 static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -449,12 +449,6 @@ void __init smp_prepare_boot_cpu(void)
 	jump_label_init();
 	cpuinfo_store_boot_cpu();
 	save_boot_cpu_run_el();
-	/*
-	 * Run the errata work around checks on the boot CPU, once we have
-	 * initialised the cpu feature infrastructure from
-	 * cpuinfo_store_boot_cpu() above.
-	 */
-	update_cpu_errata_workarounds();
 }
 
 static u64 __init of_get_cpu_mpidr(struct device_node *dn)



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 048/119] arm64: capabilities: Move errata processing code
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (46 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 047/119] arm64: capabilities: Move errata work around check on boot CPU Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 049/119] arm64: capabilities: Prepare for fine grained capabilities Greg Kroah-Hartman
                   ` (75 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Marc Zyngier, Mark Rutland,
	Andre Przywara, Dave Martin, Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 1e89baed5d50d2b8d9fd420830902570270703f1 ]

We have errata work around processing code in cpu_errata.c,
which calls back into helpers defined in cpufeature.c. Now
that we are going to make the handling of capabilities
generic, by adding the information to each capability,
move the errata work around specific processing code into cpufeature.c.
No functional changes.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    7 -----
 arch/arm64/kernel/cpu_errata.c      |   33 ---------------------------
 arch/arm64/kernel/cpufeature.c      |   43 +++++++++++++++++++++++++++++++++---
 3 files changed, 40 insertions(+), 43 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -230,15 +230,8 @@ static inline bool id_aa64pfr0_32bit_el0
 }
 
 void __init setup_cpu_features(void);
-
-void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
-			    const char *info);
-void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps);
 void check_local_cpu_capabilities(void);
 
-void update_cpu_errata_workarounds(void);
-void __init enable_errata_workarounds(void);
-void verify_local_cpu_errata_workarounds(void);
 
 u64 read_sanitised_ftr_reg(u32 id);
 
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -621,36 +621,3 @@ const struct arm64_cpu_capabilities arm6
 	{
 	}
 };
-
-/*
- * The CPU Errata work arounds are detected and applied at boot time
- * and the related information is freed soon after. If the new CPU requires
- * an errata not detected at boot, fail this CPU.
- */
-void verify_local_cpu_errata_workarounds(void)
-{
-	const struct arm64_cpu_capabilities *caps = arm64_errata;
-
-	for (; caps->matches; caps++) {
-		if (cpus_have_cap(caps->capability)) {
-			if (caps->cpu_enable)
-				caps->cpu_enable(caps);
-		} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
-			pr_crit("CPU%d: Requires work around for %s, not detected"
-					" at boot time\n",
-				smp_processor_id(),
-				caps->desc ? : "an erratum");
-			cpu_die_early();
-		}
-	}
-}
-
-void update_cpu_errata_workarounds(void)
-{
-	update_cpu_capabilities(arm64_errata, "enabling workaround for");
-}
-
-void __init enable_errata_workarounds(void)
-{
-	enable_cpu_capabilities(arm64_errata);
-}
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -484,6 +484,9 @@ static void __init init_cpu_ftr_reg(u32
 	reg->user_mask = user_mask;
 }
 
+extern const struct arm64_cpu_capabilities arm64_errata[];
+static void update_cpu_errata_workarounds(void);
+
 void __init init_cpu_features(struct cpuinfo_arm64 *info)
 {
 	/* Before we start using the tables, make sure it is sorted */
@@ -1160,8 +1163,8 @@ static bool __this_cpu_has_cap(const str
 	return false;
 }
 
-void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
-			    const char *info)
+static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+				    const char *info)
 {
 	for (; caps->matches; caps++) {
 		if (!caps->matches(caps, caps->def_scope))
@@ -1185,7 +1188,8 @@ static int __enable_cpu_capability(void
  * Run through the enabled capabilities and enable() it on all active
  * CPUs
  */
-void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
+static void __init
+enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 {
 	for (; caps->matches; caps++) {
 		unsigned int num = caps->capability;
@@ -1268,6 +1272,39 @@ verify_local_cpu_features(const struct a
 }
 
 /*
+ * The CPU Errata work arounds are detected and applied at boot time
+ * and the related information is freed soon after. If the new CPU requires
+ * an errata not detected at boot, fail this CPU.
+ */
+static void verify_local_cpu_errata_workarounds(void)
+{
+	const struct arm64_cpu_capabilities *caps = arm64_errata;
+
+	for (; caps->matches; caps++) {
+		if (cpus_have_cap(caps->capability)) {
+			if (caps->cpu_enable)
+				caps->cpu_enable(caps);
+		} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
+			pr_crit("CPU%d: Requires work around for %s, not detected"
+					" at boot time\n",
+				smp_processor_id(),
+				caps->desc ? : "an erratum");
+			cpu_die_early();
+		}
+	}
+}
+
+static void update_cpu_errata_workarounds(void)
+{
+	update_cpu_capabilities(arm64_errata, "enabling workaround for");
+}
+
+static void __init enable_errata_workarounds(void)
+{
+	enable_cpu_capabilities(arm64_errata);
+}
+
+/*
  * Run through the enabled system capabilities and enable() it on this CPU.
  * The capabilities were decided based on the available CPUs at the boot time.
  * Any new CPU should match the system wide status of the capability. If the



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 049/119] arm64: capabilities: Prepare for fine grained capabilities
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (47 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 048/119] arm64: capabilities: Move errata processing code Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 050/119] arm64: capabilities: Add flags to handle the conflicts on late CPU Greg Kroah-Hartman
                   ` (74 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mark Rutland, Dave Martin, Suzuki K Poulose,
	Will Deacon, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 143ba05d867af34827faf99e0eed4de27106c7cb ]

We use arm64_cpu_capabilities to represent CPU ELF HWCAPs exposed
to userspace and the CPU hwcaps used by the kernel, which
include CPU features and CPU errata work arounds. Capabilities
have some properties that decide how they should be treated:

 1) Detection, i.e. scope: A cap could be "detected" either:
    - if it is present on at least one CPU (SCOPE_LOCAL_CPU)
	Or
    - if it is present on all the CPUs (SCOPE_SYSTEM)

 2) When is it enabled? - A cap is treated as "enabled" when the
  system takes some action based on whether the capability is detected or
  not, e.g., setting some control register or patching the kernel code.
  Right now, we treat all caps as enabled at boot time, after all
  the CPUs are brought up by the kernel. But there are certain caps
  which are enabled early during boot (e.g., VHE, GIC_CPUIF for NMI)
  and which the kernel starts using even before the secondary CPUs are
  brought up. We would need a way to describe this for each capability.

 3) Conflict on a late CPU - When a CPU is brought up, it is checked
  against the caps that are known to be enabled on the system (via
  verify_local_cpu_capabilities()). Based on the state of the capability
  on the CPU vs. that of the system, we could have the following
  combinations of conflict.

	x-----------------------------x
	| Type	| System   | Late CPU |
	------------------------------|
	|  a    |   y      |    n     |
	------------------------------|
	|  b    |   n      |    y     |
	x-----------------------------x

  Case (a) is not permitted for caps which are system features, which the
  system expects all the CPUs to have (e.g., VHE), while (a) is ignored for
  all errata work arounds. However, there could be exceptions to the plain
  filtering approach, e.g., KPTI is an optional feature for a late CPU as
  long as the system already enables it.

  Case (b) is not permitted for errata work arounds which require some
  action that cannot be delayed. And we ignore (b) for features. Here,
  yet again, KPTI is an exception: if a late CPU needs KPTI we are too
  late to enable it (because we change the allocation of ASIDs etc).

So this calls for a lot more fine-grained behavior for each capability.
And if we define all the attributes to control their behavior properly,
we may be able to use a single table for the CPU hwcaps (which cover
errata and features, not the ELF HWCAPs). This is a preparatory step
to get there. More bits would be added for the properties listed above.

We are going to use a bit-mask to encode all the properties of a
capability. This patch encodes the "SCOPE" of the capability.

As such there is no change in how the capabilities are treated.
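
As a hypothetical sketch (the example_* names are illustrative only),
the scope is now carried in the ->type bit-mask and read back through
cpucap_default_scope():

static const struct arm64_cpu_capabilities example_cap = {
	.desc		= "Hypothetical capability",
	.capability	= ARM64_HAS_NO_HW_PREFETCH,	/* placeholder */
	.type		= ARM64_CPUCAP_SCOPE_SYSTEM,
	.matches	= has_no_hw_prefetch,		/* placeholder */
};

static bool example_detect(void)
{
	/* The default scope comes out of the type bit-mask */
	return example_cap.matches(&example_cap,
				   cpucap_default_scope(&example_cap));
}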

Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |  105 +++++++++++++++++++++++++++++++++---
 arch/arm64/kernel/cpu_errata.c      |   12 ++--
 arch/arm64/kernel/cpufeature.c      |   34 +++++------
 3 files changed, 122 insertions(+), 29 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -85,16 +85,104 @@ struct arm64_ftr_reg {
 
 extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 
-/* scope of capability check */
-enum {
-	SCOPE_SYSTEM,
-	SCOPE_LOCAL_CPU,
-};
+/*
+ * CPU capabilities:
+ *
+ * We use arm64_cpu_capabilities to represent system features, errata work
+ * arounds (both used internally by kernel and tracked in cpu_hwcaps) and
+ * ELF HWCAPs (which are exposed to user).
+ *
+ * To support systems with heterogeneous CPUs, we need to make sure that we
+ * detect the capabilities correctly on the system and take appropriate
+ * measures to ensure there are no incompatibilities.
+ *
+ * This comment tries to explain how we treat the capabilities.
+ * Each capability has the following list of attributes :
+ *
+ * 1) Scope of Detection : The system detects a given capability by
+ *    performing some checks at runtime. This could be, e.g, checking the
+ *    value of a field in CPU ID feature register or checking the cpu
+ *    model. The capability provides a call back ( @matches() ) to
+ *    perform the check. Scope defines how the checks should be performed.
+ *    There are two cases:
+ *
+ *     a) SCOPE_LOCAL_CPU: check all the CPUs and "detect" if at least one
+ *        matches. This implies, we have to run the check on all the
+ *        booting CPUs, until the system decides that state of the
+ *        capability is finalised. (See section 2 below)
+ *		Or
+ *     b) SCOPE_SYSTEM: check all the CPUs and "detect" if all the CPUs
+ *        matches. This implies, we run the check only once, when the
+ *        system decides to finalise the state of the capability. If the
+ *        capability relies on a field in one of the CPU ID feature
+ *        registers, we use the sanitised value of the register from the
+ *        CPU feature infrastructure to make the decision.
+ *
+ *    The process of detection is usually denoted by "update" capability
+ *    state in the code.
+ *
+ * 2) Finalise the state : The kernel should finalise the state of a
+ *    capability at some point during its execution and take necessary
+ *    actions if any. Usually, this is done, after all the boot-time
+ *    enabled CPUs are brought up by the kernel, so that it can make
+ *    better decision based on the available set of CPUs. However, there
+ *    are some special cases, where the action is taken during the early
+ *    boot by the primary boot CPU. (e.g, running the kernel at EL2 with
+ *    Virtualisation Host Extensions). The kernel usually disallows any
+ *    changes to the state of a capability once it finalises the capability
+ *    and takes any action, as it may be impossible to execute the actions
+ *    safely. A CPU brought up after a capability is "finalised" is
+ *    referred to as "Late CPU" w.r.t the capability. e.g, all secondary
+ *    CPUs are treated "late CPUs" for capabilities determined by the boot
+ *    CPU.
+ *
+ * 3) Verification: When a CPU is brought online (e.g, by user or by the
+ *    kernel), the kernel should make sure that it is safe to use the CPU,
+ *    by verifying that the CPU is compliant with the state of the
+ *    capabilities finalised already. This happens via :
+ *
+ *	secondary_start_kernel()-> check_local_cpu_capabilities()
+ *
+ *    As explained in (2) above, capabilities could be finalised at
+ *    different points in the execution. Each CPU is verified against the
+ *    "finalised" capabilities and if there is a conflict, the kernel takes
+ *    an action, based on the severity (e.g, a CPU could be prevented from
+ *    booting or cause a kernel panic). The CPU is allowed to "affect" the
+ *    state of the capability, if it has not been finalised already.
+ *
+ * 4) Action: As mentioned in (2), the kernel can take an action for each
+ *    detected capability, on all CPUs on the system. Appropriate actions
+ *    include, turning on an architectural feature, modifying the control
+ *    registers (e.g, SCTLR, TCR etc.) or patching the kernel via
+ *    alternatives. The kernel patching is batched and performed at later
+ *    point. The actions are always initiated only after the capability
+ *    is finalised. This is usually denoted by "enabling" the capability.
+ *    The actions are initiated as follows :
+ *	a) Action is triggered on all online CPUs, after the capability is
+ *	finalised, invoked within the stop_machine() context from
+ *	enable_cpu_capabilities().
+ *
+ *	b) Any late CPU, brought up after (1), the action is triggered via:
+ *
+ *	  check_local_cpu_capabilities() -> verify_local_cpu_capabilities()
+ *
+ */
+
+
+/* Decide how the capability is detected. On a local CPU vs System wide */
+#define ARM64_CPUCAP_SCOPE_LOCAL_CPU		((u16)BIT(0))
+#define ARM64_CPUCAP_SCOPE_SYSTEM		((u16)BIT(1))
+#define ARM64_CPUCAP_SCOPE_MASK			\
+	(ARM64_CPUCAP_SCOPE_SYSTEM	|	\
+	 ARM64_CPUCAP_SCOPE_LOCAL_CPU)
+
+#define SCOPE_SYSTEM				ARM64_CPUCAP_SCOPE_SYSTEM
+#define SCOPE_LOCAL_CPU				ARM64_CPUCAP_SCOPE_LOCAL_CPU
 
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
-	int def_scope;			/* default scope */
+	u16 type;
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
 	/*
 	 * Take the appropriate actions to enable this capability for this CPU.
@@ -119,6 +207,11 @@ struct arm64_cpu_capabilities {
 	};
 };
 
+static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
+{
+	return cap->type & ARM64_CPUCAP_SCOPE_MASK;
+}
+
 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
 extern struct static_key_false arm64_const_caps_ready;
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -406,14 +406,14 @@ static bool has_ssbd_mitigation(const st
 #endif	/* CONFIG_ARM64_SSBD */
 
 #define MIDR_RANGE(model, min, max) \
-	.def_scope = SCOPE_LOCAL_CPU, \
+	.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \
 	.matches = is_affected_midr_range, \
 	.midr_model = model, \
 	.midr_range_min = min, \
 	.midr_range_max = max
 
 #define MIDR_ALL_VERSIONS(model) \
-	.def_scope = SCOPE_LOCAL_CPU, \
+	.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \
 	.matches = is_affected_midr_range, \
 	.midr_model = model, \
 	.midr_range_min = 0, \
@@ -517,14 +517,14 @@ const struct arm64_cpu_capabilities arm6
 		.desc = "Mismatched cache line size",
 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
 		.matches = has_mismatched_cache_type,
-		.def_scope = SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
 		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 	{
 		.desc = "Mismatched cache type",
 		.capability = ARM64_MISMATCHED_CACHE_TYPE,
 		.matches = has_mismatched_cache_type,
-		.def_scope = SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
 		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
@@ -538,7 +538,7 @@ const struct arm64_cpu_capabilities arm6
 	{
 		.desc = "Qualcomm Technologies Kryo erratum 1003",
 		.capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
-		.def_scope = SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
 		.midr_model = MIDR_QCOM_KRYO,
 		.matches = is_kryo_midr,
 	},
@@ -613,7 +613,7 @@ const struct arm64_cpu_capabilities arm6
 #ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypass Disable",
-		.def_scope = SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
 		.capability = ARM64_SSBD,
 		.matches = has_ssbd_mitigation,
 	},
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -924,7 +924,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "GIC system register CPU interface",
 		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = has_useable_gicv3_cpuif,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.field_pos = ID_AA64PFR0_GIC_SHIFT,
@@ -935,7 +935,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Privileged Access Never",
 		.capability = ARM64_HAS_PAN,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64MMFR1_EL1,
 		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
@@ -948,7 +948,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "LSE atomic instructions",
 		.capability = ARM64_HAS_LSE_ATOMICS,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64ISAR0_EL1,
 		.field_pos = ID_AA64ISAR0_ATOMICS_SHIFT,
@@ -959,14 +959,14 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Software prefetching using PRFM",
 		.capability = ARM64_HAS_NO_HW_PREFETCH,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = has_no_hw_prefetch,
 	},
 #ifdef CONFIG_ARM64_UAO
 	{
 		.desc = "User Access Override",
 		.capability = ARM64_HAS_UAO,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64MMFR2_EL1,
 		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
@@ -980,21 +980,21 @@ static const struct arm64_cpu_capabiliti
 #ifdef CONFIG_ARM64_PAN
 	{
 		.capability = ARM64_ALT_PAN_NOT_UAO,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = cpufeature_pan_not_uao,
 	},
 #endif /* CONFIG_ARM64_PAN */
 	{
 		.desc = "Virtualization Host Extensions",
 		.capability = ARM64_HAS_VIRT_HOST_EXTN,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = runs_at_el2,
 		.cpu_enable = cpu_copy_el2regs,
 	},
 	{
 		.desc = "32-bit EL0 Support",
 		.capability = ARM64_HAS_32BIT_EL0,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
@@ -1004,14 +1004,14 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Reduced HYP mapping offset",
 		.capability = ARM64_HYP_OFFSET_LOW,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = hyp_offset_low,
 	},
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},
@@ -1019,7 +1019,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.min_field_value = 0,
 		.matches = has_no_fpsimd,
 	},
@@ -1027,7 +1027,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Data cache clean to Point of Persistence",
 		.capability = ARM64_HAS_DCPOP,
-		.def_scope = SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.field_pos = ID_AA64ISAR1_DPB_SHIFT,
@@ -1037,16 +1037,16 @@ static const struct arm64_cpu_capabiliti
 	{},
 };
 
-#define HWCAP_CAP(reg, field, s, min_value, type, cap)	\
+#define HWCAP_CAP(reg, field, s, min_value, cap_type, cap)	\
 	{							\
 		.desc = #cap,					\
-		.def_scope = SCOPE_SYSTEM,			\
+		.type = ARM64_CPUCAP_SCOPE_SYSTEM,		\
 		.matches = has_cpuid_feature,			\
 		.sys_reg = reg,					\
 		.field_pos = field,				\
 		.sign = s,					\
 		.min_field_value = min_value,			\
-		.hwcap_type = type,				\
+		.hwcap_type = cap_type,				\
 		.hwcap = cap,					\
 	}
 
@@ -1140,7 +1140,7 @@ static void __init setup_elf_hwcaps(cons
 	/* We support emulation of accesses to CPU ID feature registers */
 	elf_hwcap |= HWCAP_CPUID;
 	for (; hwcaps->matches; hwcaps++)
-		if (hwcaps->matches(hwcaps, hwcaps->def_scope))
+		if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps)))
 			cap_set_elf_hwcap(hwcaps);
 }
 
@@ -1167,7 +1167,7 @@ static void update_cpu_capabilities(cons
 				    const char *info)
 {
 	for (; caps->matches; caps++) {
-		if (!caps->matches(caps, caps->def_scope))
+		if (!caps->matches(caps, cpucap_default_scope(caps)))
 			continue;
 
 		if (!cpus_have_cap(caps->capability) && caps->desc)



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 050/119] arm64: capabilities: Add flags to handle the conflicts on late CPU
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (48 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 049/119] arm64: capabilities: Prepare for fine grained capabilities Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 051/119] arm64: capabilities: Unify the verification Greg Kroah-Hartman
                   ` (73 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Mark Rutland, Dave Martin,
	Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 5b4747c5dce7a873e1e7fe1608835825f714267a ]

When a CPU is brought up, it is checked against the caps that are
known to be enabled on the system (via verify_local_cpu_capabilities()).
Based on the state of the capability on the CPU vs. that of System we
could have the following combinations of conflict.

	x-----------------------------x
	| Type  | System   | Late CPU |
	|-----------------------------|
	|  a    |   y      |    n     |
	|-----------------------------|
	|  b    |   n      |    y     |
	x-----------------------------x

Case (a) is not permitted for caps which are system features, which the
system expects all the CPUs to have (e.g., VHE), while (a) is ignored for
all errata work arounds. However, there could be exceptions to the plain
filtering approach, e.g., KPTI is an optional feature for a late CPU as
long as the system already enables it.

Case (b) is not permitted for errata work arounds that cannot be activated
after the kernel has finished booting. And we ignore (b) for features. Here,
yet again, KPTI is an exception: if a late CPU needs KPTI we are too
late to enable it (because we change the allocation of ASIDs etc).

Add two different flags to indicate how the conflict should be handled.

 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - CPUs may have the capability
 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU - CPUs may not have the capability.

Now that we have the flags to describe how the errata and the features
are treated, define capability types for ERRATUM and FEATURE.
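
As a hypothetical sketch (the example_* names are illustrative only), an
erratum entry built from the new type value, and how the conflict rules
read back out of it:

static const struct arm64_cpu_capabilities example_erratum = {
	.desc		= "Hypothetical erratum",
	.capability	= ARM64_WORKAROUND_CLEAN_CACHE,	/* placeholder */
	/* SCOPE_LOCAL_CPU | OPTIONAL_FOR_LATE_CPU */
	.type		= ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
	.matches	= has_mismatched_cache_type,	/* placeholder */
};

/*
 * cpucap_late_cpu_optional(&example_erratum) is true: a late CPU that
 * does not need this erratum is accepted even though the system has
 * enabled the workaround. cpucap_late_cpu_permitted(&example_erratum)
 * is false: a late CPU that does need the workaround, when the system
 * has not enabled it, cannot be accepted.
 */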

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |   68 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c      |   12 +++---
 arch/arm64/kernel/cpufeature.c      |   26 ++++++-------
 3 files changed, 87 insertions(+), 19 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -149,6 +149,7 @@ extern struct arm64_ftr_reg arm64_ftr_re
  *    an action, based on the severity (e.g, a CPU could be prevented from
  *    booting or cause a kernel panic). The CPU is allowed to "affect" the
  *    state of the capability, if it has not been finalised already.
+ *    See section 5 for more details on conflicts.
  *
  * 4) Action: As mentioned in (2), the kernel can take an action for each
  *    detected capability, on all CPUs on the system. Appropriate actions
@@ -166,6 +167,34 @@ extern struct arm64_ftr_reg arm64_ftr_re
  *
  *	  check_local_cpu_capabilities() -> verify_local_cpu_capabilities()
  *
+ * 5) Conflicts: Based on the state of the capability on a late CPU vs.
+ *    the system state, we could have the following combinations :
+ *
+ *		x-----------------------------x
+ *		| Type  | System   | Late CPU |
+ *		|-----------------------------|
+ *		|  a    |   y      |    n     |
+ *		|-----------------------------|
+ *		|  b    |   n      |    y     |
+ *		x-----------------------------x
+ *
+ *     Two separate flag bits are defined to indicate whether each kind of
+ *     conflict can be allowed:
+ *		ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU - Case(a) is allowed
+ *		ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - Case(b) is allowed
+ *
+ *     Case (a) is not permitted for a capability that the system requires
+ *     all CPUs to have in order for the capability to be enabled. This is
+ *     typical for capabilities that represent enhanced functionality.
+ *
+ *     Case (b) is not permitted for a capability that must be enabled
+ *     during boot if any CPU in the system requires it in order to run
+ *     safely. This is typical for erratum work arounds that cannot be
+ *     enabled after the corresponding capability is finalised.
+ *
+ *     In some non-typical cases either both (a) and (b), or neither,
+ *     should be permitted. This can be described by including neither
+ *     or both flags in the capability's type field.
  */
 
 
@@ -179,6 +208,33 @@ extern struct arm64_ftr_reg arm64_ftr_re
 #define SCOPE_SYSTEM				ARM64_CPUCAP_SCOPE_SYSTEM
 #define SCOPE_LOCAL_CPU				ARM64_CPUCAP_SCOPE_LOCAL_CPU
 
+/*
+ * Is it permitted for a late CPU to have this capability when system
+ * hasn't already enabled it ?
+ */
+#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((u16)BIT(4))
+/* Is it safe for a late CPU to miss this capability when system has it */
+#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(5))
+
+/*
+ * CPU errata workarounds that need to be enabled at boot time if one or
+ * more CPUs in the system requires it. When one of these capabilities
+ * has been enabled, it is safe to allow any CPU to boot that doesn't
+ * require the workaround. However, it is not safe if a "late" CPU
+ * requires a workaround and the system hasn't enabled it already.
+ */
+#define ARM64_CPUCAP_LOCAL_CPU_ERRATUM		\
+	(ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
+/*
+ * CPU feature detected at boot time based on system-wide value of a
+ * feature. It is safe for a late CPU to have this feature even though
+ * the system hasn't enabled it, although the feature will not be used
+ * by Linux in this case. If the system has enabled this feature already,
+ * then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_SYSTEM_FEATURE	\
+	(ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
@@ -212,6 +268,18 @@ static inline int cpucap_default_scope(c
 	return cap->type & ARM64_CPUCAP_SCOPE_MASK;
 }
 
+static inline bool
+cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
+}
+
+static inline bool
+cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
+}
+
 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
 extern struct static_key_false arm64_const_caps_ready;
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -406,14 +406,14 @@ static bool has_ssbd_mitigation(const st
 #endif	/* CONFIG_ARM64_SSBD */
 
 #define MIDR_RANGE(model, min, max) \
-	.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \
+	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
 	.matches = is_affected_midr_range, \
 	.midr_model = model, \
 	.midr_range_min = min, \
 	.midr_range_max = max
 
 #define MIDR_ALL_VERSIONS(model) \
-	.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \
+	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
 	.matches = is_affected_midr_range, \
 	.midr_model = model, \
 	.midr_range_min = 0, \
@@ -517,14 +517,14 @@ const struct arm64_cpu_capabilities arm6
 		.desc = "Mismatched cache line size",
 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
 		.matches = has_mismatched_cache_type,
-		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 	{
 		.desc = "Mismatched cache type",
 		.capability = ARM64_MISMATCHED_CACHE_TYPE,
 		.matches = has_mismatched_cache_type,
-		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
@@ -538,7 +538,7 @@ const struct arm64_cpu_capabilities arm6
 	{
 		.desc = "Qualcomm Technologies Kryo erratum 1003",
 		.capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
-		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.midr_model = MIDR_QCOM_KRYO,
 		.matches = is_kryo_midr,
 	},
@@ -613,7 +613,7 @@ const struct arm64_cpu_capabilities arm6
 #ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypass Disable",
-		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.capability = ARM64_SSBD,
 		.matches = has_ssbd_mitigation,
 	},
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -924,7 +924,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "GIC system register CPU interface",
 		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_useable_gicv3_cpuif,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.field_pos = ID_AA64PFR0_GIC_SHIFT,
@@ -935,7 +935,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Privileged Access Never",
 		.capability = ARM64_HAS_PAN,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64MMFR1_EL1,
 		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
@@ -948,7 +948,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "LSE atomic instructions",
 		.capability = ARM64_HAS_LSE_ATOMICS,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64ISAR0_EL1,
 		.field_pos = ID_AA64ISAR0_ATOMICS_SHIFT,
@@ -959,14 +959,14 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Software prefetching using PRFM",
 		.capability = ARM64_HAS_NO_HW_PREFETCH,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_no_hw_prefetch,
 	},
 #ifdef CONFIG_ARM64_UAO
 	{
 		.desc = "User Access Override",
 		.capability = ARM64_HAS_UAO,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64MMFR2_EL1,
 		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
@@ -980,21 +980,21 @@ static const struct arm64_cpu_capabiliti
 #ifdef CONFIG_ARM64_PAN
 	{
 		.capability = ARM64_ALT_PAN_NOT_UAO,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = cpufeature_pan_not_uao,
 	},
 #endif /* CONFIG_ARM64_PAN */
 	{
 		.desc = "Virtualization Host Extensions",
 		.capability = ARM64_HAS_VIRT_HOST_EXTN,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = runs_at_el2,
 		.cpu_enable = cpu_copy_el2regs,
 	},
 	{
 		.desc = "32-bit EL0 Support",
 		.capability = ARM64_HAS_32BIT_EL0,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
@@ -1004,14 +1004,14 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Reduced HYP mapping offset",
 		.capability = ARM64_HYP_OFFSET_LOW,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = hyp_offset_low,
 	},
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},
@@ -1019,7 +1019,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.min_field_value = 0,
 		.matches = has_no_fpsimd,
 	},
@@ -1027,7 +1027,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Data cache clean to Point of Persistence",
 		.capability = ARM64_HAS_DCPOP,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.field_pos = ID_AA64ISAR1_DPB_SHIFT,
@@ -1040,7 +1040,7 @@ static const struct arm64_cpu_capabiliti
 #define HWCAP_CAP(reg, field, s, min_value, cap_type, cap)	\
 	{							\
 		.desc = #cap,					\
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,		\
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,		\
 		.matches = has_cpuid_feature,			\
 		.sys_reg = reg,					\
 		.field_pos = field,				\



^ permalink raw reply	[flat|nested] 135+ messages in thread
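
A quick illustration of the type encoding introduced above: each composite type is just a scope bit ORed with zero or more "late CPU" policy bits, and the two new helpers merely test those policy bits. The standalone sketch below mirrors that layout with simplified types; the bit positions chosen for the OPTIONAL/PERMITTED flags are not visible in the quoted hunks and are assumed here purely for illustration.

/* Standalone sketch; values and types are simplified, not the kernel header. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIT(n)				(1u << (n))
#define ARM64_CPUCAP_SCOPE_LOCAL_CPU	((uint16_t)BIT(0))
#define ARM64_CPUCAP_SCOPE_SYSTEM	((uint16_t)BIT(1))
/* Policy bit positions below are assumed for the sketch. */
#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((uint16_t)BIT(4))
#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((uint16_t)BIT(5))

#define ARM64_CPUCAP_LOCAL_CPU_ERRATUM \
	(ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
#define ARM64_CPUCAP_SYSTEM_FEATURE \
	(ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)

struct cap { uint16_t type; };

static bool late_cpu_optional(const struct cap *c)
{
	return c->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU;
}

static bool late_cpu_permitted(const struct cap *c)
{
	return c->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU;
}

int main(void)
{
	struct cap erratum = { ARM64_CPUCAP_LOCAL_CPU_ERRATUM };
	struct cap feature = { ARM64_CPUCAP_SYSTEM_FEATURE };

	/* Erratum: a late CPU may lack it, but must not newly require it. */
	printf("erratum: optional=%d permitted=%d\n",
	       late_cpu_optional(&erratum), late_cpu_permitted(&erratum));
	/* Feature: a late CPU may have it unused, but must not lack it once enabled. */
	printf("feature: optional=%d permitted=%d\n",
	       late_cpu_optional(&feature), late_cpu_permitted(&feature));
	return 0;
}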

* [PATCH 4.14 051/119] arm64: capabilities: Unify the verification
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (49 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 050/119] arm64: capabilities: Add flags to handle the conflicts on late CPU Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 052/119] arm64: capabilities: Filter the entries based on a given mask Greg Kroah-Hartman
                   ` (72 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Dave Martin, Suzuki K Poulose, Will Deacon,
	Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit eaac4d83daa50fc1b9b7850346e9a62adfd4647e ]

Now that each capability describes how to treat conflicts
between the per-CPU cap state and the system-wide cap state, we can
unify the verification logic in a single place.

Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |   91 ++++++++++++++++++++++++++---------------
 1 file changed, 58 insertions(+), 33 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1229,6 +1229,58 @@ static inline void set_sys_caps_initiali
 }
 
 /*
+ * Run through the list of capabilities to check for conflicts.
+ * If the system has already detected a capability, take necessary
+ * action on this CPU.
+ *
+ * Returns "false" on conflicts.
+ */
+static bool
+__verify_local_cpu_caps(const struct arm64_cpu_capabilities *caps_list)
+{
+	bool cpu_has_cap, system_has_cap;
+	const struct arm64_cpu_capabilities *caps;
+
+	for (caps = caps_list; caps->matches; caps++) {
+		cpu_has_cap = __this_cpu_has_cap(caps_list, caps->capability);
+		system_has_cap = cpus_have_cap(caps->capability);
+
+		if (system_has_cap) {
+			/*
+			 * Check if the new CPU misses an advertised feature,
+			 * which is not safe to miss.
+			 */
+			if (!cpu_has_cap && !cpucap_late_cpu_optional(caps))
+				break;
+			/*
+			 * We have to issue cpu_enable() irrespective of
+			 * whether the CPU has it or not, as it is enabled
+			 * system wide. It is up to the callback to take
+			 * appropriate action on this CPU.
+			 */
+			if (caps->cpu_enable)
+				caps->cpu_enable(caps);
+		} else {
+			/*
+			 * Check if the CPU has this capability if it isn't
+			 * safe to have when the system doesn't.
+			 */
+			if (cpu_has_cap && !cpucap_late_cpu_permitted(caps))
+				break;
+		}
+	}
+
+	if (caps->matches) {
+		pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n",
+			smp_processor_id(), caps->capability,
+			caps->desc, system_has_cap, cpu_has_cap);
+		return false;
+	}
+
+	return true;
+}
+
+/*
  * Check for CPU features that are used in early boot
  * based on the Boot CPU value.
  */
@@ -1250,25 +1302,10 @@ verify_local_elf_hwcaps(const struct arm
 		}
 }
 
-static void
-verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
+static void verify_local_cpu_features(void)
 {
-	const struct arm64_cpu_capabilities *caps = caps_list;
-	for (; caps->matches; caps++) {
-		if (!cpus_have_cap(caps->capability))
-			continue;
-		/*
-		 * If the new CPU misses an advertised feature, we cannot proceed
-		 * further, park the cpu.
-		 */
-		if (!__this_cpu_has_cap(caps_list, caps->capability)) {
-			pr_crit("CPU%d: missing feature: %s\n",
-					smp_processor_id(), caps->desc);
-			cpu_die_early();
-		}
-		if (caps->cpu_enable)
-			caps->cpu_enable(caps);
-	}
+	if (!__verify_local_cpu_caps(arm64_features))
+		cpu_die_early();
 }
 
 /*
@@ -1278,20 +1315,8 @@ verify_local_cpu_features(const struct a
  */
 static void verify_local_cpu_errata_workarounds(void)
 {
-	const struct arm64_cpu_capabilities *caps = arm64_errata;
-
-	for (; caps->matches; caps++) {
-		if (cpus_have_cap(caps->capability)) {
-			if (caps->cpu_enable)
-				caps->cpu_enable(caps);
-		} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
-			pr_crit("CPU%d: Requires work around for %s, not detected"
-					" at boot time\n",
-				smp_processor_id(),
-				caps->desc ? : "an erratum");
-			cpu_die_early();
-		}
-	}
+	if (!__verify_local_cpu_caps(arm64_errata))
+		cpu_die_early();
 }
 
 static void update_cpu_errata_workarounds(void)
@@ -1315,7 +1340,7 @@ static void __init enable_errata_workaro
 static void verify_local_cpu_capabilities(void)
 {
 	verify_local_cpu_errata_workarounds();
-	verify_local_cpu_features(arm64_features);
+	verify_local_cpu_features();
 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
 	if (system_supports_32bit_el0())
 		verify_local_elf_hwcaps(compat_elf_hwcaps);



^ permalink raw reply	[flat|nested] 135+ messages in thread
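
The conflict rules applied by __verify_local_cpu_caps() above boil down to a small decision table. The following user-space sketch restates that logic with plain booleans instead of the real capability list; it is an illustration of the rules, not kernel code.

/* Simplified rendition of the late-CPU conflict rules. */
#include <stdbool.h>
#include <stdio.h>

struct cap_policy {
	bool optional_for_late_cpu;	/* a late CPU may lack the capability */
	bool permitted_for_late_cpu;	/* a late CPU may have it while the system does not */
};

/* True when a late CPU is acceptable given the finalised system state. */
static bool late_cpu_ok(const struct cap_policy *p,
			bool system_has_cap, bool cpu_has_cap)
{
	if (system_has_cap)
		/* Missing an enabled capability is fatal unless it is optional. */
		return cpu_has_cap || p->optional_for_late_cpu;
	/* Having an extra capability is fatal unless it is permitted. */
	return !cpu_has_cap || p->permitted_for_late_cpu;
}

int main(void)
{
	const struct cap_policy erratum = { .optional_for_late_cpu = true };
	const struct cap_policy feature = { .permitted_for_late_cpu = true };

	printf("%d\n", late_cpu_ok(&erratum, true, false));	/* 1: workaround on, CPU unaffected */
	printf("%d\n", late_cpu_ok(&erratum, false, true));	/* 0: CPU needs a missing workaround */
	printf("%d\n", late_cpu_ok(&feature, true, false));	/* 0: CPU lacks an enabled feature */
	printf("%d\n", late_cpu_ok(&feature, false, true));	/* 1: extra feature simply stays unused */
	return 0;
}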

* [PATCH 4.14 052/119] arm64: capabilities: Filter the entries based on a given mask
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (50 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 051/119] arm64: capabilities: Unify the verification Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 053/119] arm64: capabilities: Prepare for grouping features and errata work arounds Greg Kroah-Hartman
                   ` (71 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Mark Rutland, Dave Martin,
	Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit cce360b54ce6ca1bcf4b0a870ec076d83606775e ]

While processing the list of capabilities, it is useful to
filter out some of the entries based on a given mask of capability
scopes, to allow better control. This can be used later for handling
LOCAL vs SYSTEM wide capabilities and more. All capabilities should
have their scope set to either LOCAL_CPU or SYSTEM. No functional/flow
change.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    1 +
 arch/arm64/kernel/cpufeature.c      |   33 ++++++++++++++++++++++-----------
 2 files changed, 23 insertions(+), 11 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -207,6 +207,7 @@ extern struct arm64_ftr_reg arm64_ftr_re
 
 #define SCOPE_SYSTEM				ARM64_CPUCAP_SCOPE_SYSTEM
 #define SCOPE_LOCAL_CPU				ARM64_CPUCAP_SCOPE_LOCAL_CPU
+#define SCOPE_ALL				ARM64_CPUCAP_SCOPE_MASK
 
 /*
  * Is it permitted for a late CPU to have this capability when system
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1164,10 +1164,12 @@ static bool __this_cpu_has_cap(const str
 }
 
 static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
-				    const char *info)
+				    u16 scope_mask, const char *info)
 {
+	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
 	for (; caps->matches; caps++) {
-		if (!caps->matches(caps, cpucap_default_scope(caps)))
+		if (!(caps->type & scope_mask) ||
+		    !caps->matches(caps, cpucap_default_scope(caps)))
 			continue;
 
 		if (!cpus_have_cap(caps->capability) && caps->desc)
@@ -1189,12 +1191,14 @@ static int __enable_cpu_capability(void
  * CPUs
  */
 static void __init
-enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
+enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+			u16 scope_mask)
 {
+	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
 	for (; caps->matches; caps++) {
 		unsigned int num = caps->capability;
 
-		if (!cpus_have_cap(num))
+		if (!(caps->type & scope_mask) || !cpus_have_cap(num))
 			continue;
 
 		/* Ensure cpus_have_const_cap(num) works */
@@ -1236,12 +1240,18 @@ static inline void set_sys_caps_initiali
  * Returns "false" on conflicts.
  */
 static bool
-__verify_local_cpu_caps(const struct arm64_cpu_capabilities *caps_list)
+__verify_local_cpu_caps(const struct arm64_cpu_capabilities *caps_list,
+			u16 scope_mask)
 {
 	bool cpu_has_cap, system_has_cap;
 	const struct arm64_cpu_capabilities *caps;
 
+	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
+
 	for (caps = caps_list; caps->matches; caps++) {
+		if (!(caps->type & scope_mask))
+			continue;
+
 		cpu_has_cap = __this_cpu_has_cap(caps_list, caps->capability);
 		system_has_cap = cpus_have_cap(caps->capability);
 
@@ -1304,7 +1314,7 @@ verify_local_elf_hwcaps(const struct arm
 
 static void verify_local_cpu_features(void)
 {
-	if (!__verify_local_cpu_caps(arm64_features))
+	if (!__verify_local_cpu_caps(arm64_features, SCOPE_ALL))
 		cpu_die_early();
 }
 
@@ -1315,18 +1325,19 @@ static void verify_local_cpu_features(vo
  */
 static void verify_local_cpu_errata_workarounds(void)
 {
-	if (!__verify_local_cpu_caps(arm64_errata))
+	if (!__verify_local_cpu_caps(arm64_errata, SCOPE_ALL))
 		cpu_die_early();
 }
 
 static void update_cpu_errata_workarounds(void)
 {
-	update_cpu_capabilities(arm64_errata, "enabling workaround for");
+	update_cpu_capabilities(arm64_errata, SCOPE_ALL,
+				"enabling workaround for");
 }
 
 static void __init enable_errata_workarounds(void)
 {
-	enable_cpu_capabilities(arm64_errata);
+	enable_cpu_capabilities(arm64_errata, SCOPE_ALL);
 }
 
 /*
@@ -1368,8 +1379,8 @@ void check_local_cpu_capabilities(void)
 
 static void __init setup_feature_capabilities(void)
 {
-	update_cpu_capabilities(arm64_features, "detected feature:");
-	enable_cpu_capabilities(arm64_features);
+	update_cpu_capabilities(arm64_features, SCOPE_ALL, "detected:");
+	enable_cpu_capabilities(arm64_features, SCOPE_ALL);
 }
 
 DEFINE_STATIC_KEY_FALSE(arm64_const_caps_ready);



^ permalink raw reply	[flat|nested] 135+ messages in thread
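
The filtering added here is a plain bitwise test of caps->type against the caller's mask, after stripping any non-scope bits from that mask. A minimal illustration (simplified constants, not the kernel code):

#include <stdint.h>
#include <stdio.h>

#define SCOPE_LOCAL_CPU	(1u << 0)
#define SCOPE_SYSTEM	(1u << 1)
#define SCOPE_ALL	(SCOPE_LOCAL_CPU | SCOPE_SYSTEM)

static int in_scope(uint16_t cap_type, uint16_t scope_mask)
{
	scope_mask &= SCOPE_ALL;	/* ignore non-scope bits in the mask */
	return !!(cap_type & scope_mask);
}

int main(void)
{
	/* A LOCAL_CPU-scoped entry is skipped by a SCOPE_SYSTEM pass... */
	printf("%d\n", in_scope(SCOPE_LOCAL_CPU, SCOPE_SYSTEM));	/* 0 */
	/* ...but is always visited when the caller passes SCOPE_ALL. */
	printf("%d\n", in_scope(SCOPE_LOCAL_CPU, SCOPE_ALL));		/* 1 */
	return 0;
}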

* [PATCH 4.14 053/119] arm64: capabilities: Prepare for grouping features and errata work arounds
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (51 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 052/119] arm64: capabilities: Filter the entries based on a given mask Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 054/119] arm64: capabilities: Split the processing of " Greg Kroah-Hartman
                   ` (70 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Dave Martin, Suzuki K Poulose, Will Deacon,
	Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 600b9c919c2f4d07a7bf67864086aa3432224674 ]

We are about to group the handling of all capabilities (features
and errata workarounds). This patch open-codes the wrapper routines
to make it easier to merge the handling.

Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |   58 ++++++++++++-----------------------------
 1 file changed, 18 insertions(+), 40 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -485,7 +485,8 @@ static void __init init_cpu_ftr_reg(u32
 }
 
 extern const struct arm64_cpu_capabilities arm64_errata[];
-static void update_cpu_errata_workarounds(void);
+static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+				    u16 scope_mask, const char *info);
 
 void __init init_cpu_features(struct cpuinfo_arm64 *info)
 {
@@ -528,7 +529,8 @@ void __init init_cpu_features(struct cpu
 	 * Run the errata work around checks on the boot CPU, once we have
 	 * initialised the cpu feature infrastructure.
 	 */
-	update_cpu_errata_workarounds();
+	update_cpu_capabilities(arm64_errata, SCOPE_ALL,
+				"enabling workaround for");
 }
 
 static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
@@ -1312,33 +1314,6 @@ verify_local_elf_hwcaps(const struct arm
 		}
 }
 
-static void verify_local_cpu_features(void)
-{
-	if (!__verify_local_cpu_caps(arm64_features, SCOPE_ALL))
-		cpu_die_early();
-}
-
-/*
- * The CPU Errata work arounds are detected and applied at boot time
- * and the related information is freed soon after. If the new CPU requires
- * an errata not detected at boot, fail this CPU.
- */
-static void verify_local_cpu_errata_workarounds(void)
-{
-	if (!__verify_local_cpu_caps(arm64_errata, SCOPE_ALL))
-		cpu_die_early();
-}
-
-static void update_cpu_errata_workarounds(void)
-{
-	update_cpu_capabilities(arm64_errata, SCOPE_ALL,
-				"enabling workaround for");
-}
-
-static void __init enable_errata_workarounds(void)
-{
-	enable_cpu_capabilities(arm64_errata, SCOPE_ALL);
-}
 
 /*
  * Run through the enabled system capabilities and enable() it on this CPU.
@@ -1350,8 +1325,15 @@ static void __init enable_errata_workaro
  */
 static void verify_local_cpu_capabilities(void)
 {
-	verify_local_cpu_errata_workarounds();
-	verify_local_cpu_features();
+	/*
+	 * The CPU Errata work arounds are detected and applied at boot time
+	 * and the related information is freed soon after. If the new CPU
+	 * requires an errata not detected at boot, fail this CPU.
+	 */
+	if (!__verify_local_cpu_caps(arm64_errata, SCOPE_ALL))
+		cpu_die_early();
+	if (!__verify_local_cpu_caps(arm64_features, SCOPE_ALL))
+		cpu_die_early();
 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
 	if (system_supports_32bit_el0())
 		verify_local_elf_hwcaps(compat_elf_hwcaps);
@@ -1372,17 +1354,12 @@ void check_local_cpu_capabilities(void)
 	 * advertised capabilities.
 	 */
 	if (!sys_caps_initialised)
-		update_cpu_errata_workarounds();
+		update_cpu_capabilities(arm64_errata, SCOPE_ALL,
+					"enabling workaround for");
 	else
 		verify_local_cpu_capabilities();
 }
 
-static void __init setup_feature_capabilities(void)
-{
-	update_cpu_capabilities(arm64_features, SCOPE_ALL, "detected:");
-	enable_cpu_capabilities(arm64_features, SCOPE_ALL);
-}
-
 DEFINE_STATIC_KEY_FALSE(arm64_const_caps_ready);
 EXPORT_SYMBOL(arm64_const_caps_ready);
 
@@ -1405,8 +1382,9 @@ void __init setup_cpu_features(void)
 	int cls;
 
 	/* Set the CPU feature capabilies */
-	setup_feature_capabilities();
-	enable_errata_workarounds();
+	update_cpu_capabilities(arm64_features, SCOPE_ALL, "detected:");
+	enable_cpu_capabilities(arm64_features, SCOPE_ALL);
+	enable_cpu_capabilities(arm64_errata, SCOPE_ALL);
 	mark_const_caps_ready();
 	setup_elf_hwcaps(arm64_elf_hwcaps);
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 054/119] arm64: capabilities: Split the processing of errata work arounds
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (52 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 053/119] arm64: capabilities: Prepare for grouping features and errata work arounds Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 055/119] arm64: capabilities: Allow features based on local CPU scope Greg Kroah-Hartman
                   ` (69 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Dave Martin, Suzuki K Poulose, Will Deacon,
	Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit d69fe9a7e7214d49fe157ec20889892388d0fe23 ]

Right now we run the errata workaround checks on all boot-time active
CPUs, with SCOPE_ALL. This wouldn't help with detecting erratum
workarounds that have SCOPE_SYSTEM. There are none yet, but we plan to
introduce some: let us clean this up so that such workarounds can be
detected and enabled correctly.

So, we run the checks with SCOPE_LOCAL_CPU on all CPUs, and the
SCOPE_SYSTEM checks are run only once, after all the boot time CPUs
are active.

Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -529,7 +529,7 @@ void __init init_cpu_features(struct cpu
 	 * Run the errata work around checks on the boot CPU, once we have
 	 * initialised the cpu feature infrastructure.
 	 */
-	update_cpu_capabilities(arm64_errata, SCOPE_ALL,
+	update_cpu_capabilities(arm64_errata, SCOPE_LOCAL_CPU,
 				"enabling workaround for");
 }
 
@@ -1354,7 +1354,7 @@ void check_local_cpu_capabilities(void)
 	 * advertised capabilities.
 	 */
 	if (!sys_caps_initialised)
-		update_cpu_capabilities(arm64_errata, SCOPE_ALL,
+		update_cpu_capabilities(arm64_errata, SCOPE_LOCAL_CPU,
 					"enabling workaround for");
 	else
 		verify_local_cpu_capabilities();
@@ -1383,6 +1383,8 @@ void __init setup_cpu_features(void)
 
 	/* Set the CPU feature capabilies */
 	update_cpu_capabilities(arm64_features, SCOPE_ALL, "detected:");
+	update_cpu_capabilities(arm64_errata, SCOPE_SYSTEM,
+				"enabling workaround for");
 	enable_cpu_capabilities(arm64_features, SCOPE_ALL);
 	enable_cpu_capabilities(arm64_errata, SCOPE_ALL);
 	mark_const_caps_ready();



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 055/119] arm64: capabilities: Allow features based on local CPU scope
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (53 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 054/119] arm64: capabilities: Split the processing of " Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 056/119] arm64: capabilities: Group handling of features and errata workarounds Greg Kroah-Hartman
                   ` (68 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Dave Martin, Suzuki K Poulose, Will Deacon,
	Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit fbd890b9b8497bab04c1d338bd97579a7bc53fab ]

So far we have treated the feature capabilities as system wide,
and this wouldn't help with features that could be detected locally
on one or more CPUs (e.g., KPTI, software prefetch). This patch
splits the feature detection into two phases:

 1) Local CPU features are checked on all boot time active CPUs.
 2) System wide features are checked only once after all CPUs are
    active.

Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |   17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -485,6 +485,7 @@ static void __init init_cpu_ftr_reg(u32
 }
 
 extern const struct arm64_cpu_capabilities arm64_errata[];
+static const struct arm64_cpu_capabilities arm64_features[];
 static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 				    u16 scope_mask, const char *info);
 
@@ -526,11 +527,12 @@ void __init init_cpu_features(struct cpu
 	}
 
 	/*
-	 * Run the errata work around checks on the boot CPU, once we have
-	 * initialised the cpu feature infrastructure.
+	 * Run the errata work around and local feature checks on the
+	 * boot CPU, once we have initialised the cpu feature infrastructure.
 	 */
 	update_cpu_capabilities(arm64_errata, SCOPE_LOCAL_CPU,
 				"enabling workaround for");
+	update_cpu_capabilities(arm64_features, SCOPE_LOCAL_CPU, "detected:");
 }
 
 static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
@@ -1349,15 +1351,18 @@ void check_local_cpu_capabilities(void)
 
 	/*
 	 * If we haven't finalised the system capabilities, this CPU gets
-	 * a chance to update the errata work arounds.
+	 * a chance to update the errata work arounds and local features.
 	 * Otherwise, this CPU should verify that it has all the system
 	 * advertised capabilities.
 	 */
-	if (!sys_caps_initialised)
+	if (!sys_caps_initialised) {
 		update_cpu_capabilities(arm64_errata, SCOPE_LOCAL_CPU,
 					"enabling workaround for");
-	else
+		update_cpu_capabilities(arm64_features, SCOPE_LOCAL_CPU,
+					"detected:");
+	} else {
 		verify_local_cpu_capabilities();
+	}
 }
 
 DEFINE_STATIC_KEY_FALSE(arm64_const_caps_ready);
@@ -1382,7 +1387,7 @@ void __init setup_cpu_features(void)
 	int cls;
 
 	/* Set the CPU feature capabilies */
-	update_cpu_capabilities(arm64_features, SCOPE_ALL, "detected:");
+	update_cpu_capabilities(arm64_features, SCOPE_SYSTEM, "detected:");
 	update_cpu_capabilities(arm64_errata, SCOPE_SYSTEM,
 				"enabling workaround for");
 	enable_cpu_capabilities(arm64_features, SCOPE_ALL);



^ permalink raw reply	[flat|nested] 135+ messages in thread
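
Taken together with the previous patch, detection now runs in two passes: LOCAL_CPU-scoped entries are checked on every boot-time CPU as it comes up, and SYSTEM-scoped entries are decided once after all boot CPUs are active, after which everything detected is enabled. The stub program below only mocks that call order; the function names follow the patches, but the bodies and the CPU loop are invented for illustration.

#include <stdio.h>

#define SCOPE_LOCAL_CPU	(1 << 0)
#define SCOPE_SYSTEM	(1 << 1)
#define SCOPE_ALL	(SCOPE_LOCAL_CPU | SCOPE_SYSTEM)

/* Print-only stubs standing in for the real routines. */
static void update_cpu_capabilities(const char *table, int scope)
{
	printf("update %-14s scope=%s\n", table,
	       scope == SCOPE_LOCAL_CPU ? "LOCAL_CPU" : "SYSTEM");
}

static void enable_cpu_capabilities(const char *table, int scope)
{
	(void)scope;
	printf("enable %-14s scope=ALL\n", table);
}

int main(void)
{
	/* Pass 1: each boot-time CPU checks LOCAL_CPU-scoped entries as it boots. */
	for (int cpu = 0; cpu < 2; cpu++) {
		update_cpu_capabilities("arm64_errata", SCOPE_LOCAL_CPU);
		update_cpu_capabilities("arm64_features", SCOPE_LOCAL_CPU);
	}

	/* Pass 2: once all boot CPUs are up, SYSTEM-scoped entries are decided
	 * once, and everything detected so far is enabled. */
	update_cpu_capabilities("arm64_features", SCOPE_SYSTEM);
	update_cpu_capabilities("arm64_errata", SCOPE_SYSTEM);
	enable_cpu_capabilities("arm64_features", SCOPE_ALL);
	enable_cpu_capabilities("arm64_errata", SCOPE_ALL);
	return 0;
}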

* [PATCH 4.14 056/119] arm64: capabilities: Group handling of features and errata workarounds
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (54 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 055/119] arm64: capabilities: Allow features based on local CPU scope Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 057/119] arm64: capabilities: Introduce weak features based on local CPU Greg Kroah-Hartman
                   ` (67 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Dave Martin, Suzuki K Poulose, Will Deacon,
	Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit ed478b3f9e4ac97fdbe07007fb2662415de8fe25 ]

Now that the features and errata workarounds have the same
rules and flow, group the handling of the tables.

Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |   73 +++++++++++++++++++++++------------------
 1 file changed, 42 insertions(+), 31 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -485,9 +485,7 @@ static void __init init_cpu_ftr_reg(u32
 }
 
 extern const struct arm64_cpu_capabilities arm64_errata[];
-static const struct arm64_cpu_capabilities arm64_features[];
-static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
-				    u16 scope_mask, const char *info);
+static void update_cpu_capabilities(u16 scope_mask);
 
 void __init init_cpu_features(struct cpuinfo_arm64 *info)
 {
@@ -530,9 +528,7 @@ void __init init_cpu_features(struct cpu
 	 * Run the errata work around and local feature checks on the
 	 * boot CPU, once we have initialised the cpu feature infrastructure.
 	 */
-	update_cpu_capabilities(arm64_errata, SCOPE_LOCAL_CPU,
-				"enabling workaround for");
-	update_cpu_capabilities(arm64_features, SCOPE_LOCAL_CPU, "detected:");
+	update_cpu_capabilities(SCOPE_LOCAL_CPU);
 }
 
 static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
@@ -1167,8 +1163,8 @@ static bool __this_cpu_has_cap(const str
 	return false;
 }
 
-static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
-				    u16 scope_mask, const char *info)
+static void __update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+				      u16 scope_mask, const char *info)
 {
 	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
 	for (; caps->matches; caps++) {
@@ -1182,6 +1178,13 @@ static void update_cpu_capabilities(cons
 	}
 }
 
+static void update_cpu_capabilities(u16 scope_mask)
+{
+	__update_cpu_capabilities(arm64_features, scope_mask, "detected:");
+	__update_cpu_capabilities(arm64_errata, scope_mask,
+				  "enabling workaround for");
+}
+
 static int __enable_cpu_capability(void *arg)
 {
 	const struct arm64_cpu_capabilities *cap = arg;
@@ -1195,8 +1198,8 @@ static int __enable_cpu_capability(void
  * CPUs
  */
 static void __init
-enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
-			u16 scope_mask)
+__enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+			  u16 scope_mask)
 {
 	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
 	for (; caps->matches; caps++) {
@@ -1221,6 +1224,12 @@ enable_cpu_capabilities(const struct arm
 	}
 }
 
+static void __init enable_cpu_capabilities(u16 scope_mask)
+{
+	__enable_cpu_capabilities(arm64_features, scope_mask);
+	__enable_cpu_capabilities(arm64_errata, scope_mask);
+}
+
 /*
  * Flag to indicate if we have computed the system wide
  * capabilities based on the boot time active CPUs. This
@@ -1294,6 +1303,12 @@ __verify_local_cpu_caps(const struct arm
 	return true;
 }
 
+static bool verify_local_cpu_caps(u16 scope_mask)
+{
+	return __verify_local_cpu_caps(arm64_errata, scope_mask) &&
+	       __verify_local_cpu_caps(arm64_features, scope_mask);
+}
+
 /*
  * Check for CPU features that are used in early boot
  * based on the Boot CPU value.
@@ -1327,15 +1342,9 @@ verify_local_elf_hwcaps(const struct arm
  */
 static void verify_local_cpu_capabilities(void)
 {
-	/*
-	 * The CPU Errata work arounds are detected and applied at boot time
-	 * and the related information is freed soon after. If the new CPU
-	 * requires an errata not detected at boot, fail this CPU.
-	 */
-	if (!__verify_local_cpu_caps(arm64_errata, SCOPE_ALL))
-		cpu_die_early();
-	if (!__verify_local_cpu_caps(arm64_features, SCOPE_ALL))
+	if (!verify_local_cpu_caps(SCOPE_ALL))
 		cpu_die_early();
+
 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
 	if (system_supports_32bit_el0())
 		verify_local_elf_hwcaps(compat_elf_hwcaps);
@@ -1355,14 +1364,10 @@ void check_local_cpu_capabilities(void)
 	 * Otherwise, this CPU should verify that it has all the system
 	 * advertised capabilities.
 	 */
-	if (!sys_caps_initialised) {
-		update_cpu_capabilities(arm64_errata, SCOPE_LOCAL_CPU,
-					"enabling workaround for");
-		update_cpu_capabilities(arm64_features, SCOPE_LOCAL_CPU,
-					"detected:");
-	} else {
+	if (!sys_caps_initialised)
+		update_cpu_capabilities(SCOPE_LOCAL_CPU);
+	else
 		verify_local_cpu_capabilities();
-	}
 }
 
 DEFINE_STATIC_KEY_FALSE(arm64_const_caps_ready);
@@ -1381,17 +1386,23 @@ bool this_cpu_has_cap(unsigned int cap)
 		__this_cpu_has_cap(arm64_errata, cap));
 }
 
+static void __init setup_system_capabilities(void)
+{
+	/*
+	 * We have finalised the system-wide safe feature
+	 * registers, finalise the capabilities that depend
+	 * on it. Also enable all the available capabilities.
+	 */
+	update_cpu_capabilities(SCOPE_SYSTEM);
+	enable_cpu_capabilities(SCOPE_ALL);
+}
+
 void __init setup_cpu_features(void)
 {
 	u32 cwg;
 	int cls;
 
-	/* Set the CPU feature capabilies */
-	update_cpu_capabilities(arm64_features, SCOPE_SYSTEM, "detected:");
-	update_cpu_capabilities(arm64_errata, SCOPE_SYSTEM,
-				"enabling workaround for");
-	enable_cpu_capabilities(arm64_features, SCOPE_ALL);
-	enable_cpu_capabilities(arm64_errata, SCOPE_ALL);
+	setup_system_capabilities();
 	mark_const_caps_ready();
 	setup_elf_hwcaps(arm64_elf_hwcaps);
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 057/119] arm64: capabilities: Introduce weak features based on local CPU
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (55 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 056/119] arm64: capabilities: Group handling of features and errata workarounds Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 058/119] arm64: capabilities: Restrict KPTI detection to boot-time CPUs Greg Kroah-Hartman
                   ` (66 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Dave Martin, Suzuki K Poulose,
	Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 5c137714dd8cae464dbd5f028c07af149e6d09fc ]

Now that we have the flexibility of defining system features based
on individual CPUs, introduce a CPU feature type that can be detected
with a local scope and that ignores conflicts on late CPUs. This is
applicable for ARM64_HAS_NO_HW_PREFETCH, where it is fine for
the system to have CPUs without hardware prefetch turning up
later. We only suffer a performance penalty, nothing fatal.

Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    8 ++++++++
 arch/arm64/kernel/cpufeature.c      |    2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -235,6 +235,14 @@ extern struct arm64_ftr_reg arm64_ftr_re
  */
 #define ARM64_CPUCAP_SYSTEM_FEATURE	\
 	(ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+/*
+ * CPU feature detected at boot time based on a feature of one or more CPUs.
+ * All possible conflicts for a late CPU are ignored.
+ */
+#define ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE		\
+	(ARM64_CPUCAP_SCOPE_LOCAL_CPU		|	\
+	 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	|	\
+	 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
 
 struct arm64_cpu_capabilities {
 	const char *desc;
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -959,7 +959,7 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Software prefetching using PRFM",
 		.capability = ARM64_HAS_NO_HW_PREFETCH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
 		.matches = has_no_hw_prefetch,
 	},
 #ifdef CONFIG_ARM64_UAO



^ permalink raw reply	[flat|nested] 135+ messages in thread
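
With both late-CPU policy bits set, neither direction of mismatch counts as a conflict: a late CPU without hardware prefetch is simply tolerated and only costs performance. The small sketch below walks the four system/CPU combinations through the same conflict rule shown earlier; it is illustrative only.

#include <stdbool.h>
#include <stdio.h>

/* Same rule as __verify_local_cpu_caps(), reduced to booleans. */
static bool late_cpu_ok(bool optional, bool permitted,
			bool system_has_cap, bool cpu_has_cap)
{
	return system_has_cap ? (cpu_has_cap || optional)
			      : (!cpu_has_cap || permitted);
}

int main(void)
{
	/* A "weak" local feature: optional and permitted for late CPUs. */
	for (int sys = 0; sys <= 1; sys++)
		for (int cpu = 0; cpu <= 1; cpu++)
			printf("system=%d cpu=%d -> ok=%d\n",
			       sys, cpu, late_cpu_ok(true, true, sys, cpu)); /* always 1 */
	return 0;
}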

* [PATCH 4.14 058/119] arm64: capabilities: Restrict KPTI detection to boot-time CPUs
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (56 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 057/119] arm64: capabilities: Introduce weak features based on local CPU Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 059/119] arm64: capabilities: Add support for features enabled early Greg Kroah-Hartman
                   ` (65 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Dave Martin, Suzuki K Poulose,
	Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit d3aec8a28be3b88bf75442e7c24fd9da8d69a6df ]

KPTI is treated as a system wide feature and is only detected if all
the CPUs in the system need the defense, unless it is forced via the
kernel command line. This leaves a system with a mix of CPUs with and
without the defense vulnerable. Also, if a late CPU needs KPTI but KPTI
was not activated at boot time, the CPU is currently allowed to boot,
which is a potential security vulnerability.
This patch ensures that KPTI is turned on if at least one CPU detects
the capability (i.e., the scope is changed to SCOPE_LOCAL_CPU). It also
rejects a late CPU that requires the defense when the system hasn't
enabled it.

Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    9 +++++++++
 arch/arm64/kernel/cpufeature.c      |   16 +++++++++++-----
 2 files changed, 20 insertions(+), 5 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -244,6 +244,15 @@ extern struct arm64_ftr_reg arm64_ftr_re
 	 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	|	\
 	 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
 
+/*
+ * CPU feature detected at boot time, on one or more CPUs. A late CPU
+ * is not allowed to have the capability when the system doesn't have it.
+ * It is OK for a late CPU to miss the feature.
+ */
+#define ARM64_CPUCAP_BOOT_RESTRICTED_CPU_LOCAL_FEATURE	\
+	(ARM64_CPUCAP_SCOPE_LOCAL_CPU		|	\
+	 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -824,10 +824,9 @@ static bool has_no_fpsimd(const struct a
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
-				int __unused)
+				int scope)
 {
 	char const *str = "command line option";
-	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
 
 	/*
 	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -863,8 +862,7 @@ static bool unmap_kernel_at_el0(const st
 	}
 
 	/* Defer to CPU feature registers */
-	return !cpuid_feature_extract_unsigned_field(pfr0,
-						     ID_AA64PFR0_CSV3_SHIFT);
+	return !has_cpuid_feature(entry, scope);
 }
 
 static void
@@ -1011,7 +1009,15 @@ static const struct arm64_cpu_capabiliti
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_RESTRICTED_CPU_LOCAL_FEATURE,
+		/*
+		 * The ID feature fields below are used to indicate that
+		 * the CPU doesn't need KPTI. See unmap_kernel_at_el0 for
+		 * more details.
+		 */
+		.sys_reg = SYS_ID_AA64PFR0_EL1,
+		.field_pos = ID_AA64PFR0_CSV3_SHIFT,
+		.min_field_value = 1,
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},



^ permalink raw reply	[flat|nested] 135+ messages in thread
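
The .sys_reg/.field_pos/.min_field_value triplet added to the KPTI entry reuses the generic has_cpuid_feature() path: the 4-bit CSV3 field of ID_AA64PFR0_EL1 is extracted and compared against the minimum, and unmap_kernel_at_el0() inverts the result, since CSV3 >= 1 means the CPU itself does not need KPTI. The simplified extraction below assumes the usual bit-60 position for CSV3 and uses an invented register value; it is not the kernel implementation.

#include <stdint.h>
#include <stdio.h>

#define ID_AA64PFR0_CSV3_SHIFT	60	/* assumed to match the kernel header */

/* ID register fields are 4 bits wide. */
static unsigned int extract_unsigned_field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;
}

int main(void)
{
	uint64_t pfr0 = (uint64_t)1 << ID_AA64PFR0_CSV3_SHIFT;	/* pretend CSV3 == 1 */
	unsigned int csv3 = extract_unsigned_field(pfr0, ID_AA64PFR0_CSV3_SHIFT);

	/* min_field_value = 1: CSV3 >= 1 means this CPU does not need KPTI,
	 * so unmap_kernel_at_el0() will not force it on for this CPU. */
	printf("CSV3=%u -> needs KPTI: %s\n", csv3, csv3 >= 1 ? "no" : "yes");
	return 0;
}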

* [PATCH 4.14 059/119] arm64: capabilities: Add support for features enabled early
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (57 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 058/119] arm64: capabilities: Restrict KPTI detection to boot-time CPUs Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 060/119] arm64: capabilities: Change scope of VHE to Boot CPU feature Greg Kroah-Hartman
                   ` (64 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Julien Thierry, Will Deacon, Mark Rutland,
	Marc Zyngier, Dave Martin, Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit fd9d63da17daf09c0099e3d5e3f0c0f03d9b251b ]

The kernel detects and uses some of the features based on the boot
CPU and expects that all the following CPUs conform to it. For example,
with VHE and the boot CPU running at EL2, the kernel decides to
keep the kernel running at EL2. If another CPU is brought up without
this capability, we use custom hooks (via check_early_cpu_features())
to handle it. To handle such capabilities, add support for detecting
and enabling capabilities based on the boot CPU.

A bit is added to indicate if the capability should be detected
early on the boot CPU. The infrastructure then ensures that such
capabilities are probed and "enabled" early on the boot CPU,
and then enabled on the subsequent CPUs.

Cc: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |   48 ++++++++++++++++++++++++------
 arch/arm64/kernel/cpufeature.c      |   57 +++++++++++++++++++++++++++---------
 2 files changed, 83 insertions(+), 22 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -104,7 +104,7 @@ extern struct arm64_ftr_reg arm64_ftr_re
  *    value of a field in CPU ID feature register or checking the cpu
  *    model. The capability provides a call back ( @matches() ) to
  *    perform the check. Scope defines how the checks should be performed.
- *    There are two cases:
+ *    There are three cases:
  *
  *     a) SCOPE_LOCAL_CPU: check all the CPUs and "detect" if at least one
  *        matches. This implies, we have to run the check on all the
@@ -117,6 +117,11 @@ extern struct arm64_ftr_reg arm64_ftr_re
  *        capability relies on a field in one of the CPU ID feature
  *        registers, we use the sanitised value of the register from the
  *        CPU feature infrastructure to make the decision.
+ *		Or
+ *     c) SCOPE_BOOT_CPU: Check only on the primary boot CPU to detect the
+ *        feature. This category is for features that are "finalised"
+ *        (or used) by the kernel very early even before the SMP cpus
+ *        are brought up.
  *
  *    The process of detection is usually denoted by "update" capability
  *    state in the code.
@@ -136,6 +141,11 @@ extern struct arm64_ftr_reg arm64_ftr_re
  *    CPUs are treated "late CPUs" for capabilities determined by the boot
  *    CPU.
  *
+ *    At the moment there are two passes of finalising the capabilities.
+ *      a) Boot CPU scope capabilities - Finalised by primary boot CPU via
+ *         setup_boot_cpu_capabilities().
+ *      b) Everything except (a) - Run via setup_system_capabilities().
+ *
  * 3) Verification: When a CPU is brought online (e.g, by user or by the
  *    kernel), the kernel should make sure that it is safe to use the CPU,
  *    by verifying that the CPU is compliant with the state of the
@@ -144,12 +154,21 @@ extern struct arm64_ftr_reg arm64_ftr_re
  *	secondary_start_kernel()-> check_local_cpu_capabilities()
  *
  *    As explained in (2) above, capabilities could be finalised at
- *    different points in the execution. Each CPU is verified against the
- *    "finalised" capabilities and if there is a conflict, the kernel takes
- *    an action, based on the severity (e.g, a CPU could be prevented from
- *    booting or cause a kernel panic). The CPU is allowed to "affect" the
- *    state of the capability, if it has not been finalised already.
- *    See section 5 for more details on conflicts.
+ *    different points in the execution. Each newly booted CPU is verified
+ *    against the capabilities that have been finalised by the time it
+ *    boots.
+ *
+ *	a) SCOPE_BOOT_CPU : All CPUs are verified against the capability
+ *	except for the primary boot CPU.
+ *
+ *	b) SCOPE_LOCAL_CPU, SCOPE_SYSTEM: All CPUs hotplugged on by the
+ *	user after the kernel boot are verified against the capability.
+ *
+ *    If there is a conflict, the kernel takes an action, based on the
+ *    severity (e.g, a CPU could be prevented from booting or cause a
+ *    kernel panic). The CPU is allowed to "affect" the state of the
+ *    capability, if it has not been finalised already. See section 5
+ *    for more details on conflicts.
  *
  * 4) Action: As mentioned in (2), the kernel can take an action for each
  *    detected capability, on all CPUs on the system. Appropriate actions
@@ -198,15 +217,26 @@ extern struct arm64_ftr_reg arm64_ftr_re
  */
 
 
-/* Decide how the capability is detected. On a local CPU vs System wide */
+/*
+ * Decide how the capability is detected.
+ * On any local CPU vs System wide vs the primary boot CPU
+ */
 #define ARM64_CPUCAP_SCOPE_LOCAL_CPU		((u16)BIT(0))
 #define ARM64_CPUCAP_SCOPE_SYSTEM		((u16)BIT(1))
+/*
+ * The capability is detected on the Boot CPU and is used by the kernel
+ * during early boot, i.e., the capability should be "detected" and
+ * "enabled" as early as possible on all booting CPUs.
+ */
+#define ARM64_CPUCAP_SCOPE_BOOT_CPU		((u16)BIT(2))
 #define ARM64_CPUCAP_SCOPE_MASK			\
 	(ARM64_CPUCAP_SCOPE_SYSTEM	|	\
-	 ARM64_CPUCAP_SCOPE_LOCAL_CPU)
+	 ARM64_CPUCAP_SCOPE_LOCAL_CPU	|	\
+	 ARM64_CPUCAP_SCOPE_BOOT_CPU)
 
 #define SCOPE_SYSTEM				ARM64_CPUCAP_SCOPE_SYSTEM
 #define SCOPE_LOCAL_CPU				ARM64_CPUCAP_SCOPE_LOCAL_CPU
+#define SCOPE_BOOT_CPU				ARM64_CPUCAP_SCOPE_BOOT_CPU
 #define SCOPE_ALL				ARM64_CPUCAP_SCOPE_MASK
 
 /*
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -485,7 +485,7 @@ static void __init init_cpu_ftr_reg(u32
 }
 
 extern const struct arm64_cpu_capabilities arm64_errata[];
-static void update_cpu_capabilities(u16 scope_mask);
+static void __init setup_boot_cpu_capabilities(void);
 
 void __init init_cpu_features(struct cpuinfo_arm64 *info)
 {
@@ -525,10 +525,10 @@ void __init init_cpu_features(struct cpu
 	}
 
 	/*
-	 * Run the errata work around and local feature checks on the
-	 * boot CPU, once we have initialised the cpu feature infrastructure.
+	 * Detect and enable early CPU capabilities based on the boot CPU,
+	 * after we have initialised the CPU feature infrastructure.
 	 */
-	update_cpu_capabilities(SCOPE_LOCAL_CPU);
+	setup_boot_cpu_capabilities();
 }
 
 static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
@@ -1219,13 +1219,24 @@ __enable_cpu_capabilities(const struct a
 
 		if (caps->cpu_enable) {
 			/*
-			 * Use stop_machine() as it schedules the work allowing
-			 * us to modify PSTATE, instead of on_each_cpu() which
-			 * uses an IPI, giving us a PSTATE that disappears when
-			 * we return.
+			 * Capabilities with SCOPE_BOOT_CPU scope are finalised
+			 * before any secondary CPU boots. Thus, each secondary
+			 * will enable the capability as appropriate via
+			 * check_local_cpu_capabilities(). The only exception is
+			 * the boot CPU, for which the capability must be
+			 * enabled here. This approach avoids costly
+			 * stop_machine() calls for this case.
+			 *
+			 * Otherwise, use stop_machine() as it schedules the
+			 * work allowing us to modify PSTATE, instead of
+			 * on_each_cpu() which uses an IPI, giving us a PSTATE
+			 * that disappears when we return.
 			 */
-			stop_machine(__enable_cpu_capability, (void *)caps,
-				     cpu_online_mask);
+			if (scope_mask & SCOPE_BOOT_CPU)
+				caps->cpu_enable(caps);
+			else
+				stop_machine(__enable_cpu_capability,
+					     (void *)caps, cpu_online_mask);
 		}
 	}
 }
@@ -1323,6 +1334,12 @@ static void check_early_cpu_features(voi
 {
 	verify_cpu_run_el();
 	verify_cpu_asid_bits();
+	/*
+	 * Early features are used by the kernel already. If there
+	 * is a conflict, we cannot proceed further.
+	 */
+	if (!verify_local_cpu_caps(SCOPE_BOOT_CPU))
+		cpu_panic_kernel();
 }
 
 static void
@@ -1348,7 +1365,12 @@ verify_local_elf_hwcaps(const struct arm
  */
 static void verify_local_cpu_capabilities(void)
 {
-	if (!verify_local_cpu_caps(SCOPE_ALL))
+	/*
+	 * The capabilities with SCOPE_BOOT_CPU are checked from
+	 * check_early_cpu_features(), as they need to be verified
+	 * on all secondary CPUs.
+	 */
+	if (!verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU))
 		cpu_die_early();
 
 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
@@ -1376,6 +1398,14 @@ void check_local_cpu_capabilities(void)
 		verify_local_cpu_capabilities();
 }
 
+static void __init setup_boot_cpu_capabilities(void)
+{
+	/* Detect capabilities with either SCOPE_BOOT_CPU or SCOPE_LOCAL_CPU */
+	update_cpu_capabilities(SCOPE_BOOT_CPU | SCOPE_LOCAL_CPU);
+	/* Enable the SCOPE_BOOT_CPU capabilities alone right away */
+	enable_cpu_capabilities(SCOPE_BOOT_CPU);
+}
+
 DEFINE_STATIC_KEY_FALSE(arm64_const_caps_ready);
 EXPORT_SYMBOL(arm64_const_caps_ready);
 
@@ -1397,10 +1427,11 @@ static void __init setup_system_capabili
 	/*
 	 * We have finalised the system-wide safe feature
 	 * registers, finalise the capabilities that depend
-	 * on it. Also enable all the available capabilities.
+	 * on it. Also enable all the available capabilities,
+	 * that are not enabled already.
 	 */
 	update_cpu_capabilities(SCOPE_SYSTEM);
-	enable_cpu_capabilities(SCOPE_ALL);
+	enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU);
 }
 
 void __init setup_cpu_features(void)



^ permalink raw reply	[flat|nested] 135+ messages in thread
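
In terms of ordering, the new SCOPE_BOOT_CPU bit means the boot CPU detects BOOT- and LOCAL_CPU-scoped capabilities and enables the BOOT-scoped ones immediately, every secondary verifies the BOOT-scoped set very early (a conflict is a panic), and the remaining scopes are finalised and enabled after all boot CPUs are up. The print-only mock below just shows that ordering; the mask arithmetic matches the patch, the rest is invented.

#include <stdio.h>

#define SCOPE_LOCAL_CPU	(1 << 0)
#define SCOPE_SYSTEM	(1 << 1)
#define SCOPE_BOOT_CPU	(1 << 2)
#define SCOPE_ALL	(SCOPE_LOCAL_CPU | SCOPE_SYSTEM | SCOPE_BOOT_CPU)

/* Print-only stubs standing in for the real routines. */
static void update_cpu_capabilities(int mask) { printf("update mask=0x%x\n", mask); }
static void enable_cpu_capabilities(int mask) { printf("enable mask=0x%x\n", mask); }
static void verify_local_cpu_caps(int mask)   { printf("verify mask=0x%x\n", mask); }

int main(void)
{
	/* Boot CPU: detect BOOT + LOCAL_CPU scope, enable BOOT scope right away. */
	update_cpu_capabilities(SCOPE_BOOT_CPU | SCOPE_LOCAL_CPU);
	enable_cpu_capabilities(SCOPE_BOOT_CPU);

	/* Every secondary: BOOT-scoped caps are verified very early; a CPU
	 * brought up after the system caps are finalised also verifies the rest. */
	verify_local_cpu_caps(SCOPE_BOOT_CPU);
	verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);

	/* After all boot CPUs are up: finalise SYSTEM scope, enable what is left. */
	update_cpu_capabilities(SCOPE_SYSTEM);
	enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU);
	return 0;
}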

* [PATCH 4.14 060/119] arm64: capabilities: Change scope of VHE to Boot CPU feature
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (58 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 059/119] arm64: capabilities: Add support for features enabled early Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 061/119] arm64: capabilities: Clean up midr range helpers Greg Kroah-Hartman
                   ` (63 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Marc Zyngier, Will Deacon, Dave Martin,
	Suzuki K Poulose, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 830dcc9f9a7cd26a812522a26efaacf7df6fc365 ]

We expect all CPUs to be running at the same EL inside the kernel
with or without VHE enabled and we have strict checks to ensure
that any mismatch triggers a kernel panic. If VHE is enabled,
we use the feature based on the boot CPU and all other CPUs
should follow. This makes it a perfect candidate for a capability
based on the boot CPU, which should be matched by all the CPUs
(both when it is ON and OFF). This saves us some not-so-pretty
hooks and special code, just for verifying the conflict.

The patch also makes the VHE capability entry depend on
CONFIG_ARM64_VHE.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    6 +++++
 arch/arm64/include/asm/virt.h       |    6 -----
 arch/arm64/kernel/cpufeature.c      |    5 ++--
 arch/arm64/kernel/smp.c             |   38 ------------------------------------
 4 files changed, 9 insertions(+), 46 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -283,6 +283,12 @@ extern struct arm64_ftr_reg arm64_ftr_re
 	(ARM64_CPUCAP_SCOPE_LOCAL_CPU		|	\
 	 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
 
+/*
+ * CPU feature used early in the boot based on the boot CPU. All secondary
+ * CPUs must match the state of the capability as detected by the boot CPU.
+ */
+#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -102,12 +102,6 @@ static inline bool has_vhe(void)
 	return false;
 }
 
-#ifdef CONFIG_ARM64_VHE
-extern void verify_cpu_run_el(void);
-#else
-static inline void verify_cpu_run_el(void) {}
-#endif
-
 #endif /* __ASSEMBLY__ */
 
 #endif /* ! __ASM__VIRT_H */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -982,13 +982,15 @@ static const struct arm64_cpu_capabiliti
 		.matches = cpufeature_pan_not_uao,
 	},
 #endif /* CONFIG_ARM64_PAN */
+#ifdef CONFIG_ARM64_VHE
 	{
 		.desc = "Virtualization Host Extensions",
 		.capability = ARM64_HAS_VIRT_HOST_EXTN,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
 		.matches = runs_at_el2,
 		.cpu_enable = cpu_copy_el2regs,
 	},
+#endif	/* CONFIG_ARM64_VHE */
 	{
 		.desc = "32-bit EL0 Support",
 		.capability = ARM64_HAS_32BIT_EL0,
@@ -1332,7 +1334,6 @@ static bool verify_local_cpu_caps(u16 sc
  */
 static void check_early_cpu_features(void)
 {
-	verify_cpu_run_el();
 	verify_cpu_asid_bits();
 	/*
 	 * Early features are used by the kernel already. If there
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -83,43 +83,6 @@ enum ipi_msg_type {
 	IPI_WAKEUP
 };
 
-#ifdef CONFIG_ARM64_VHE
-
-/* Whether the boot CPU is running in HYP mode or not*/
-static bool boot_cpu_hyp_mode;
-
-static inline void save_boot_cpu_run_el(void)
-{
-	boot_cpu_hyp_mode = is_kernel_in_hyp_mode();
-}
-
-static inline bool is_boot_cpu_in_hyp_mode(void)
-{
-	return boot_cpu_hyp_mode;
-}
-
-/*
- * Verify that a secondary CPU is running the kernel at the same
- * EL as that of the boot CPU.
- */
-void verify_cpu_run_el(void)
-{
-	bool in_el2 = is_kernel_in_hyp_mode();
-	bool boot_cpu_el2 = is_boot_cpu_in_hyp_mode();
-
-	if (in_el2 ^ boot_cpu_el2) {
-		pr_crit("CPU%d: mismatched Exception Level(EL%d) with boot CPU(EL%d)\n",
-					smp_processor_id(),
-					in_el2 ? 2 : 1,
-					boot_cpu_el2 ? 2 : 1);
-		cpu_panic_kernel();
-	}
-}
-
-#else
-static inline void save_boot_cpu_run_el(void) {}
-#endif
-
 #ifdef CONFIG_HOTPLUG_CPU
 static int op_cpu_kill(unsigned int cpu);
 #else
@@ -448,7 +411,6 @@ void __init smp_prepare_boot_cpu(void)
 	 */
 	jump_label_init();
 	cpuinfo_store_boot_cpu();
-	save_boot_cpu_run_el();
 }
 
 static u64 __init of_get_cpu_mpidr(struct device_node *dn)



^ permalink raw reply	[flat|nested] 135+ messages in thread
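
Because ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE carries no late-CPU policy bits, the conflict rule used in the earlier sketches rejects any secondary whose EL2 state differs from the boot CPU, in either direction, which is exactly what the removed verify_cpu_run_el() hook used to check by hand. A quick check against that rule (illustrative only):

#include <stdbool.h>
#include <stdio.h>

static bool late_cpu_ok(bool optional, bool permitted,
			bool system_has_cap, bool cpu_has_cap)
{
	return system_has_cap ? (cpu_has_cap || optional)
			      : (!cpu_has_cap || permitted);
}

int main(void)
{
	/* Strict boot CPU feature: optional = permitted = false. */
	printf("boot EL2, secondary EL1 -> ok=%d\n",
	       late_cpu_ok(false, false, true, false));	/* 0: conflict */
	printf("boot EL1, secondary EL2 -> ok=%d\n",
	       late_cpu_ok(false, false, false, true));	/* 0: conflict */
	printf("both at the same EL     -> ok=%d\n",
	       late_cpu_ok(false, false, true, true));	/* 1 */
	return 0;
}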

* [PATCH 4.14 061/119] arm64: capabilities: Clean up midr range helpers
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (59 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 060/119] arm64: capabilities: Change scope of VHE to Boot CPU feature Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 062/119] arm64: Add helpers for checking CPU MIDR against a range Greg Kroah-Hartman
                   ` (62 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Mark Rutland, Ard Biesheuvel,
	Dave Martin, Suzuki K Poulose

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 5e7951ce19abf4113645ae789c033917356ee96f ]

We are about to introduce generic MIDR range helpers. Clean
up the existing helpers in erratum handling, preparing them
to use generic version.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu_errata.c |  109 +++++++++++++++++++++++------------------
 1 file changed, 62 insertions(+), 47 deletions(-)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -405,20 +405,38 @@ static bool has_ssbd_mitigation(const st
 }
 #endif	/* CONFIG_ARM64_SSBD */
 
-#define MIDR_RANGE(model, min, max) \
-	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
-	.matches = is_affected_midr_range, \
-	.midr_model = model, \
-	.midr_range_min = min, \
-	.midr_range_max = max
-
-#define MIDR_ALL_VERSIONS(model) \
-	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
-	.matches = is_affected_midr_range, \
-	.midr_model = model, \
-	.midr_range_min = 0, \
+#define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
+	.matches = is_affected_midr_range,			\
+	.midr_model = model,					\
+	.midr_range_min = MIDR_CPU_VAR_REV(v_min, r_min),	\
+	.midr_range_max = MIDR_CPU_VAR_REV(v_max, r_max)
+
+#define CAP_MIDR_ALL_VERSIONS(model)					\
+	.matches = is_affected_midr_range,				\
+	.midr_model = model,						\
+	.midr_range_min = MIDR_CPU_VAR_REV(0, 0),			\
 	.midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)
 
+#define MIDR_FIXED(rev, revidr_mask) \
+	.fixed_revs = (struct arm64_midr_revidr[]){{ (rev), (revidr_mask) }, {}}
+
+#define ERRATA_MIDR_RANGE(model, v_min, r_min, v_max, r_max)		\
+	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,				\
+	CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)
+
+/* Errata affecting a range of revisions of  given model variant */
+#define ERRATA_MIDR_REV_RANGE(m, var, r_min, r_max)	 \
+	ERRATA_MIDR_RANGE(m, var, r_min, var, r_max)
+
+/* Errata affecting a single variant/revision of a model */
+#define ERRATA_MIDR_REV(model, var, rev)	\
+	ERRATA_MIDR_RANGE(model, var, rev, var, rev)
+
+/* Errata affecting all variants/revisions of a given a model */
+#define ERRATA_MIDR_ALL_VERSIONS(model)				\
+	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
+	CAP_MIDR_ALL_VERSIONS(model)
+
 const struct arm64_cpu_capabilities arm64_errata[] = {
 #if	defined(CONFIG_ARM64_ERRATUM_826319) || \
 	defined(CONFIG_ARM64_ERRATUM_827319) || \
@@ -427,7 +445,7 @@ const struct arm64_cpu_capabilities arm6
 	/* Cortex-A53 r0p[012] */
 		.desc = "ARM errata 826319, 827319, 824069",
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
-		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x02),
+		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 2),
 		.cpu_enable = cpu_enable_cache_maint_trap,
 	},
 #endif
@@ -436,7 +454,7 @@ const struct arm64_cpu_capabilities arm6
 	/* Cortex-A53 r0p[01] */
 		.desc = "ARM errata 819472",
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
-		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x01),
+		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 1),
 		.cpu_enable = cpu_enable_cache_maint_trap,
 	},
 #endif
@@ -445,9 +463,9 @@ const struct arm64_cpu_capabilities arm6
 	/* Cortex-A57 r0p0 - r1p2 */
 		.desc = "ARM erratum 832075",
 		.capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
-		MIDR_RANGE(MIDR_CORTEX_A57,
-			   MIDR_CPU_VAR_REV(0, 0),
-			   MIDR_CPU_VAR_REV(1, 2)),
+		ERRATA_MIDR_RANGE(MIDR_CORTEX_A57,
+				  0, 0,
+				  1, 2),
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_834220
@@ -455,9 +473,9 @@ const struct arm64_cpu_capabilities arm6
 	/* Cortex-A57 r0p0 - r1p2 */
 		.desc = "ARM erratum 834220",
 		.capability = ARM64_WORKAROUND_834220,
-		MIDR_RANGE(MIDR_CORTEX_A57,
-			   MIDR_CPU_VAR_REV(0, 0),
-			   MIDR_CPU_VAR_REV(1, 2)),
+		ERRATA_MIDR_RANGE(MIDR_CORTEX_A57,
+				  0, 0,
+				  1, 2),
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_845719
@@ -465,7 +483,7 @@ const struct arm64_cpu_capabilities arm6
 	/* Cortex-A53 r0p[01234] */
 		.desc = "ARM erratum 845719",
 		.capability = ARM64_WORKAROUND_845719,
-		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04),
+		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
 	},
 #endif
 #ifdef CONFIG_CAVIUM_ERRATUM_23154
@@ -473,7 +491,7 @@ const struct arm64_cpu_capabilities arm6
 	/* Cavium ThunderX, pass 1.x */
 		.desc = "Cavium erratum 23154",
 		.capability = ARM64_WORKAROUND_CAVIUM_23154,
-		MIDR_RANGE(MIDR_THUNDERX, 0x00, 0x01),
+		ERRATA_MIDR_REV_RANGE(MIDR_THUNDERX, 0, 0, 1),
 	},
 #endif
 #ifdef CONFIG_CAVIUM_ERRATUM_27456
@@ -481,15 +499,15 @@ const struct arm64_cpu_capabilities arm6
 	/* Cavium ThunderX, T88 pass 1.x - 2.1 */
 		.desc = "Cavium erratum 27456",
 		.capability = ARM64_WORKAROUND_CAVIUM_27456,
-		MIDR_RANGE(MIDR_THUNDERX,
-			   MIDR_CPU_VAR_REV(0, 0),
-			   MIDR_CPU_VAR_REV(1, 1)),
+		ERRATA_MIDR_RANGE(MIDR_THUNDERX,
+				  0, 0,
+				  1, 1),
 	},
 	{
 	/* Cavium ThunderX, T81 pass 1.0 */
 		.desc = "Cavium erratum 27456",
 		.capability = ARM64_WORKAROUND_CAVIUM_27456,
-		MIDR_RANGE(MIDR_THUNDERX_81XX, 0x00, 0x00),
+		ERRATA_MIDR_REV(MIDR_THUNDERX_81XX, 0, 0),
 	},
 #endif
 #ifdef CONFIG_CAVIUM_ERRATUM_30115
@@ -497,20 +515,21 @@ const struct arm64_cpu_capabilities arm6
 	/* Cavium ThunderX, T88 pass 1.x - 2.2 */
 		.desc = "Cavium erratum 30115",
 		.capability = ARM64_WORKAROUND_CAVIUM_30115,
-		MIDR_RANGE(MIDR_THUNDERX, 0x00,
-			   (1 << MIDR_VARIANT_SHIFT) | 2),
+		ERRATA_MIDR_RANGE(MIDR_THUNDERX,
+				      0, 0,
+				      1, 2),
 	},
 	{
 	/* Cavium ThunderX, T81 pass 1.0 - 1.2 */
 		.desc = "Cavium erratum 30115",
 		.capability = ARM64_WORKAROUND_CAVIUM_30115,
-		MIDR_RANGE(MIDR_THUNDERX_81XX, 0x00, 0x02),
+		ERRATA_MIDR_REV_RANGE(MIDR_THUNDERX_81XX, 0, 0, 2),
 	},
 	{
 	/* Cavium ThunderX, T83 pass 1.0 */
 		.desc = "Cavium erratum 30115",
 		.capability = ARM64_WORKAROUND_CAVIUM_30115,
-		MIDR_RANGE(MIDR_THUNDERX_83XX, 0x00, 0x00),
+		ERRATA_MIDR_REV(MIDR_THUNDERX_83XX, 0, 0),
 	},
 #endif
 	{
@@ -531,9 +550,7 @@ const struct arm64_cpu_capabilities arm6
 	{
 		.desc = "Qualcomm Technologies Falkor erratum 1003",
 		.capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
-		MIDR_RANGE(MIDR_QCOM_FALKOR_V1,
-			   MIDR_CPU_VAR_REV(0, 0),
-			   MIDR_CPU_VAR_REV(0, 0)),
+		ERRATA_MIDR_REV(MIDR_QCOM_FALKOR_V1, 0, 0),
 	},
 	{
 		.desc = "Qualcomm Technologies Kryo erratum 1003",
@@ -547,9 +564,7 @@ const struct arm64_cpu_capabilities arm6
 	{
 		.desc = "Qualcomm Technologies Falkor erratum 1009",
 		.capability = ARM64_WORKAROUND_REPEAT_TLBI,
-		MIDR_RANGE(MIDR_QCOM_FALKOR_V1,
-			   MIDR_CPU_VAR_REV(0, 0),
-			   MIDR_CPU_VAR_REV(0, 0)),
+		ERRATA_MIDR_REV(MIDR_QCOM_FALKOR_V1, 0, 0),
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_858921
@@ -557,56 +572,56 @@ const struct arm64_cpu_capabilities arm6
 	/* Cortex-A73 all versions */
 		.desc = "ARM erratum 858921",
 		.capability = ARM64_WORKAROUND_858921,
-		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	},
 #endif
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
 		.cpu_enable = qcom_enable_link_stack_sanitization,
 	},
 	{
 		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
-		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
 		.cpu_enable = qcom_enable_link_stack_sanitization,
 	},
 	{
 		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
-		MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 #endif



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 062/119] arm64: Add helpers for checking CPU MIDR against a range
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (60 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 061/119] arm64: capabilities: Clean up midr range helpers Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 063/119] arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35 Greg Kroah-Hartman
                   ` (61 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Mark Rutland, Ard Biesheuvel,
	Dave Martin, Suzuki K Poulose

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 1df310505d6d544802016f6bae49aab836ae8510 ]

Add helpers for checking whether a given CPU MIDR falls in a
range of variants/revisions for a given model.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    4 ++--
 arch/arm64/include/asm/cputype.h    |   30 ++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c      |   18 +++++++-----------
 3 files changed, 39 insertions(+), 13 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -10,6 +10,7 @@
 #define __ASM_CPUFEATURE_H
 
 #include <asm/cpucaps.h>
+#include <asm/cputype.h>
 #include <asm/hwcap.h>
 #include <asm/sysreg.h>
 
@@ -302,8 +303,7 @@ struct arm64_cpu_capabilities {
 	void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);
 	union {
 		struct {	/* To be used for erratum handling only */
-			u32 midr_model;
-			u32 midr_range_min, midr_range_max;
+			struct midr_range midr_range;
 		};
 
 		struct {	/* Feature register checking */
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -126,6 +126,36 @@
 #define read_cpuid(reg)			read_sysreg_s(SYS_ ## reg)
 
 /*
+ * Represent a range of MIDR values for a given CPU model and a
+ * range of variant/revision values.
+ *
+ * @model	- CPU model as defined by MIDR_CPU_MODEL
+ * @rv_min	- Minimum value for the revision/variant as defined by
+ *		  MIDR_CPU_VAR_REV
+ * @rv_max	- Maximum value for the variant/revision for the range.
+ */
+struct midr_range {
+	u32 model;
+	u32 rv_min;
+	u32 rv_max;
+};
+
+#define MIDR_RANGE(m, v_min, r_min, v_max, r_max)		\
+	{							\
+		.model = m,					\
+		.rv_min = MIDR_CPU_VAR_REV(v_min, r_min),	\
+		.rv_max = MIDR_CPU_VAR_REV(v_max, r_max),	\
+	}
+
+#define MIDR_ALL_VERSIONS(m) MIDR_RANGE(m, 0, 0, 0xf, 0xf)
+
+static inline bool is_midr_in_range(u32 midr, struct midr_range const *range)
+{
+	return MIDR_IS_CPU_MODEL_RANGE(midr, range->model,
+				 range->rv_min, range->rv_max);
+}
+
+/*
  * The CPU ID never changes at run time, so we might as well tell the
  * compiler that it's constant.  Use this function to read the CPU ID
  * rather than directly reading processor_id or read_cpuid() directly.
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -26,10 +26,10 @@
 static bool __maybe_unused
 is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
 {
+	u32 midr = read_cpuid_id();
+
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
-	return MIDR_IS_CPU_MODEL_RANGE(read_cpuid_id(), entry->midr_model,
-				       entry->midr_range_min,
-				       entry->midr_range_max);
+	return is_midr_in_range(midr, &entry->midr_range);
 }
 
 static bool __maybe_unused
@@ -43,7 +43,7 @@ is_kryo_midr(const struct arm64_cpu_capa
 	model &= MIDR_IMPLEMENTOR_MASK | (0xf00 << MIDR_PARTNUM_SHIFT) |
 		 MIDR_ARCHITECTURE_MASK;
 
-	return model == entry->midr_model;
+	return model == entry->midr_range.model;
 }
 
 static bool
@@ -407,15 +407,11 @@ static bool has_ssbd_mitigation(const st
 
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
 	.matches = is_affected_midr_range,			\
-	.midr_model = model,					\
-	.midr_range_min = MIDR_CPU_VAR_REV(v_min, r_min),	\
-	.midr_range_max = MIDR_CPU_VAR_REV(v_max, r_max)
+	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
 
 #define CAP_MIDR_ALL_VERSIONS(model)					\
 	.matches = is_affected_midr_range,				\
-	.midr_model = model,						\
-	.midr_range_min = MIDR_CPU_VAR_REV(0, 0),			\
-	.midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)
+	.midr_range = MIDR_ALL_VERSIONS(model)
 
 #define MIDR_FIXED(rev, revidr_mask) \
 	.fixed_revs = (struct arm64_midr_revidr[]){{ (rev), (revidr_mask) }, {}}
@@ -556,7 +552,7 @@ const struct arm64_cpu_capabilities arm6
 		.desc = "Qualcomm Technologies Kryo erratum 1003",
 		.capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
-		.midr_model = MIDR_QCOM_KRYO,
+		.midr_range.model = MIDR_QCOM_KRYO,
 		.matches = is_kryo_midr,
 	},
 #endif
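
As an illustration (not part of the patch), a stand-alone user-space model of
the midr_range check introduced here might look like the sketch below. It
assumes the usual MIDR field layout (variant in bits [23:20], revision in
bits [3:0], the remaining fields identifying the model) and uses made-up
values.

/* Illustrative, user-space model of the midr_range helpers (made-up values). */
#include <stdint.h>
#include <stdio.h>

#define MODEL_MASK	0xff0ffff0u	/* implementer, architecture, part number */
#define VAR_REV_MASK	0x00f0000fu	/* variant [23:20] plus revision [3:0] */
#define VAR_REV(v, r)	((uint32_t)(((v) << 20) | (r)))

struct midr_range {
	uint32_t model;
	uint32_t rv_min;
	uint32_t rv_max;
};

static int is_midr_in_range(uint32_t midr, const struct midr_range *range)
{
	uint32_t rv = midr & VAR_REV_MASK;

	return (midr & MODEL_MASK) == range->model &&
	       rv >= range->rv_min && rv <= range->rv_max;
}

int main(void)
{
	/* an r0p0 - r0p2 window for some model, in the spirit of the table above */
	struct midr_range window = { 0x410fd030u, VAR_REV(0, 0), VAR_REV(0, 2) };
	uint32_t midr = 0x410fd031u;	/* same model, variant 0, revision 1 */

	printf("affected: %d\n", is_midr_in_range(midr, &window));
	return 0;
}

Built with a plain C compiler this prints "affected: 1" for a variant/revision
inside the window, mirroring the per-CPU comparison that is_affected_midr_range()
performs against read_cpuid_id().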



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 063/119] arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (61 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 062/119] arm64: Add helpers for checking CPU MIDR against a range Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 064/119] arm64: capabilities: Add support for checks based on a list of MIDRs Greg Kroah-Hartman
                   ` (60 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mark Rutland, Dave Martin, Suzuki K Poulose,
	Will Deacon, Ard Biesheuvel

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit 6e616864f21160d8d503523b60a53a29cecc6f24 ]

Update the MIDR encodings for the Cortex-A55 and Cortex-A35.

Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cputype.h |    4 ++++
 1 file changed, 4 insertions(+)

--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -85,6 +85,8 @@
 #define ARM_CPU_PART_CORTEX_A53		0xD03
 #define ARM_CPU_PART_CORTEX_A73		0xD09
 #define ARM_CPU_PART_CORTEX_A75		0xD0A
+#define ARM_CPU_PART_CORTEX_A35		0xD04
+#define ARM_CPU_PART_CORTEX_A55		0xD05
 
 #define APM_CPU_PART_POTENZA		0x000
 
@@ -108,6 +110,8 @@
 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
 #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
 #define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
+#define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
+#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
 #define MIDR_THUNDERX	MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 064/119] arm64: capabilities: Add support for checks based on a list of MIDRs
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (62 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 063/119] arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35 Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 065/119] arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening Greg Kroah-Hartman
                   ` (59 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Mark Rutland, Ard Biesheuvel,
	Dave Martin, Suzuki K Poulose

From: Suzuki K Poulose <suzuki.poulose@arm.com>

[ Upstream commit be5b299830c63ed76e0357473c4218c85fb388b3 ]

Add helpers for detecting an erratum on a list of MIDR ranges
of affected CPUs that share the same workaround.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[ardb: add Cortex-A35 to kpti_safe_list[] as well]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    1 
 arch/arm64/include/asm/cputype.h    |    9 ++++
 arch/arm64/kernel/cpu_errata.c      |   81 +++++++++++++++++++-----------------
 arch/arm64/kernel/cpufeature.c      |   21 +++++----
 4 files changed, 66 insertions(+), 46 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -306,6 +306,7 @@ struct arm64_cpu_capabilities {
 			struct midr_range midr_range;
 		};
 
+		const struct midr_range *midr_range_list;
 		struct {	/* Feature register checking */
 			u32 sys_reg;
 			u8 field_pos;
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -159,6 +159,15 @@ static inline bool is_midr_in_range(u32
 				 range->rv_min, range->rv_max);
 }
 
+static inline bool
+is_midr_in_range_list(u32 midr, struct midr_range const *ranges)
+{
+	while (ranges->model)
+		if (is_midr_in_range(midr, ranges++))
+			return true;
+	return false;
+}
+
 /*
  * The CPU ID never changes at run time, so we might as well tell the
  * compiler that it's constant.  Use this function to read the CPU ID
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -33,6 +33,14 @@ is_affected_midr_range(const struct arm6
 }
 
 static bool __maybe_unused
+is_affected_midr_range_list(const struct arm64_cpu_capabilities *entry,
+			    int scope)
+{
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+	return is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+}
+
+static bool __maybe_unused
 is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
 {
 	u32 model;
@@ -420,6 +428,10 @@ static bool has_ssbd_mitigation(const st
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,				\
 	CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)
 
+#define CAP_MIDR_RANGE_LIST(list)				\
+	.matches = is_affected_midr_range_list,			\
+	.midr_range_list = list
+
 /* Errata affecting a range of revisions of  given model variant */
 #define ERRATA_MIDR_REV_RANGE(m, var, r_min, r_max)	 \
 	ERRATA_MIDR_RANGE(m, var, r_min, var, r_max)
@@ -433,6 +445,35 @@ static bool has_ssbd_mitigation(const st
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_ALL_VERSIONS(model)
 
+/* Errata affecting a list of midr ranges, with same work around */
+#define ERRATA_MIDR_RANGE_LIST(midr_list)			\
+	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
+	CAP_MIDR_RANGE_LIST(midr_list)
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+
+/*
+ * List of CPUs where we need to issue a psci call to
+ * harden the branch predictor.
+ */
+static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+	{},
+};
+
+static const struct midr_range qcom_bp_harden_cpus[] = {
+	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
+	{},
+};
+
+#endif
+
 const struct arm64_cpu_capabilities arm64_errata[] = {
 #if	defined(CONFIG_ARM64_ERRATUM_826319) || \
 	defined(CONFIG_ARM64_ERRATUM_827319) || \
@@ -574,51 +615,17 @@ const struct arm64_cpu_capabilities arm6
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-		.cpu_enable = qcom_enable_link_stack_sanitization,
-	},
-	{
-		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
+		ERRATA_MIDR_RANGE_LIST(qcom_bp_harden_cpus),
 		.cpu_enable = qcom_enable_link_stack_sanitization,
 	},
 	{
 		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-		.cpu_enable = enable_smccc_arch_workaround_1,
+		ERRATA_MIDR_RANGE_LIST(qcom_bp_harden_cpus),
 	},
 #endif
 #ifdef CONFIG_ARM64_SSBD
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -826,6 +826,17 @@ static int __kpti_forced; /* 0: not forc
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				int scope)
 {
+	/* List of CPUs that are not vulnerable and don't need KPTI */
+	static const struct midr_range kpti_safe_list[] = {
+		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+	};
 	char const *str = "command line option";
 
 	/*
@@ -850,16 +861,8 @@ static bool unmap_kernel_at_el0(const st
 		return true;
 
 	/* Don't force KPTI for CPUs that are not vulnerable */
-	switch (read_cpuid_id() & MIDR_CPU_MODEL_MASK) {
-	case MIDR_CAVIUM_THUNDERX2:
-	case MIDR_BRCM_VULCAN:
-	case MIDR_CORTEX_A53:
-	case MIDR_CORTEX_A55:
-	case MIDR_CORTEX_A57:
-	case MIDR_CORTEX_A72:
-	case MIDR_CORTEX_A73:
+	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
 		return false;
-	}
 
 	/* Defer to CPU feature registers */
 	return !has_cpuid_feature(entry, scope);
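
The midr_range_list handling relies on an empty terminator entry, much like
other sentinel-terminated tables in the kernel. A minimal user-space sketch of
that walk (illustrative only, with made-up model values) could be:

/* Illustrative sketch of a sentinel-terminated midr_range list walk. */
#include <stdint.h>
#include <stdio.h>

#define MODEL_MASK	0xff0ffff0u
#define VAR_REV_MASK	0x00f0000fu
/* MIDR_ALL_VERSIONS in the patch covers the full variant/revision span. */
#define ALL_VERSIONS(m)	{ (m), 0x0u, 0x00f0000fu }

struct midr_range {
	uint32_t model;
	uint32_t rv_min;
	uint32_t rv_max;
};

static int in_range(uint32_t midr, const struct midr_range *r)
{
	uint32_t rv = midr & VAR_REV_MASK;

	return (midr & MODEL_MASK) == r->model &&
	       rv >= r->rv_min && rv <= r->rv_max;
}

static int in_range_list(uint32_t midr, const struct midr_range *ranges)
{
	/* the terminating entry has model == 0 and ends the walk */
	for (; ranges->model; ranges++)
		if (in_range(midr, ranges))
			return 1;
	return 0;
}

int main(void)
{
	static const struct midr_range safe_list[] = {
		ALL_VERSIONS(0x410fd030u),	/* made-up model A */
		ALL_VERSIONS(0x410fd090u),	/* made-up model B */
		{ 0, 0, 0 },			/* sentinel, like the {} in the patch */
	};

	printf("in list: %d\n", in_range_list(0x410fd092u, safe_list));
	return 0;
}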



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 065/119] arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (63 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 064/119] arm64: capabilities: Add support for checks based on a list of MIDRs Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 066/119] arm64: dont zero DIT on signal return Greg Kroah-Hartman
                   ` (58 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Shanker Donthineni, Marc Zyngier,
	Will Deacon, Ard Biesheuvel

From: Shanker Donthineni <shankerd@codeaurora.org>

[ Upstream commit 4bc352ffb39e4eec253e70f8c076f2f48a6c1926 ]

The SMCCC_ARCH_WORKAROUND_1 function was introduced as part of the
SMC Calling Convention (SMCCC) v1.1 to mitigate CVE-2017-5715. This
patch uses the standard SMCCC_ARCH_WORKAROUND_1 call for Falkor chips
instead of the silicon provider service ID 0xC2001700.

Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
[maz: reworked errata framework integration]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpucaps.h |    7 ++---
 arch/arm64/include/asm/kvm_asm.h |    2 -
 arch/arm64/kernel/bpi.S          |    7 -----
 arch/arm64/kernel/cpu_errata.c   |   54 ++++++++++++---------------------------
 arch/arm64/kvm/hyp/entry.S       |   12 --------
 arch/arm64/kvm/hyp/switch.c      |   10 -------
 6 files changed, 20 insertions(+), 72 deletions(-)

--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -42,10 +42,9 @@
 #define ARM64_HAS_DCPOP				21
 #define ARM64_UNMAP_KERNEL_AT_EL0		23
 #define ARM64_HARDEN_BRANCH_PREDICTOR		24
-#define ARM64_HARDEN_BP_POST_GUEST_EXIT		25
-#define ARM64_SSBD				26
-#define ARM64_MISMATCHED_CACHE_TYPE		27
+#define ARM64_SSBD				25
+#define ARM64_MISMATCHED_CACHE_TYPE		26
 
-#define ARM64_NCAPS				28
+#define ARM64_NCAPS				27
 
 #endif /* __ASM_CPUCAPS_H */
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,8 +70,6 @@ extern u32 __kvm_get_mdcr_el2(void);
 
 extern u32 __init_stage2_translation(void);
 
-extern void __qcom_hyp_sanitize_btac_predictors(void);
-
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)						\
 	({								\
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -55,13 +55,6 @@ ENTRY(__bp_harden_hyp_vecs_start)
 	.endr
 ENTRY(__bp_harden_hyp_vecs_end)
 
-ENTRY(__qcom_hyp_sanitize_link_stack_start)
-	stp     x29, x30, [sp, #-16]!
-	.rept	16
-	bl	. + 4
-	.endr
-	ldp	x29, x30, [sp], #16
-ENTRY(__qcom_hyp_sanitize_link_stack_end)
 
 .macro smccc_workaround_1 inst
 	sub	sp, sp, #(8 * 4)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -83,8 +83,6 @@ cpu_enable_trap_ctr_access(const struct
 DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
 #ifdef CONFIG_KVM
-extern char __qcom_hyp_sanitize_link_stack_start[];
-extern char __qcom_hyp_sanitize_link_stack_end[];
 extern char __smccc_workaround_1_smc_start[];
 extern char __smccc_workaround_1_smc_end[];
 extern char __smccc_workaround_1_hvc_start[];
@@ -131,8 +129,6 @@ static void __install_bp_hardening_cb(bp
 	spin_unlock(&bp_lock);
 }
 #else
-#define __qcom_hyp_sanitize_link_stack_start	NULL
-#define __qcom_hyp_sanitize_link_stack_end	NULL
 #define __smccc_workaround_1_smc_start		NULL
 #define __smccc_workaround_1_smc_end		NULL
 #define __smccc_workaround_1_hvc_start		NULL
@@ -177,12 +173,25 @@ static void call_hvc_arch_workaround_1(v
 	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
 }
 
+static void qcom_link_stack_sanitization(void)
+{
+	u64 tmp;
+
+	asm volatile("mov	%0, x30		\n"
+		     ".rept	16		\n"
+		     "bl	. + 4		\n"
+		     ".endr			\n"
+		     "mov	x30, %0		\n"
+		     : "=&r" (tmp));
+}
+
 static void
 enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
+	u32 midr = read_cpuid_id();
 
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
@@ -215,30 +224,14 @@ enable_smccc_arch_workaround_1(const str
 		return;
 	}
 
+	if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
+	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
+		cb = qcom_link_stack_sanitization;
+
 	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
 
 	return;
 }
-
-static void qcom_link_stack_sanitization(void)
-{
-	u64 tmp;
-
-	asm volatile("mov	%0, x30		\n"
-		     ".rept	16		\n"
-		     "bl	. + 4		\n"
-		     ".endr			\n"
-		     "mov	x30, %0		\n"
-		     : "=&r" (tmp));
-}
-
-static void
-qcom_enable_link_stack_sanitization(const struct arm64_cpu_capabilities *entry)
-{
-	install_bp_hardening_cb(entry, qcom_link_stack_sanitization,
-				__qcom_hyp_sanitize_link_stack_start,
-				__qcom_hyp_sanitize_link_stack_end);
-}
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 #ifdef CONFIG_ARM64_SSBD
@@ -463,10 +456,6 @@ static const struct midr_range arm64_bp_
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
 	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
 	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-	{},
-};
-
-static const struct midr_range qcom_bp_harden_cpus[] = {
 	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
 	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
 	{},
@@ -618,15 +607,6 @@ const struct arm64_cpu_capabilities arm6
 		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_RANGE_LIST(qcom_bp_harden_cpus),
-		.cpu_enable = qcom_enable_link_stack_sanitization,
-	},
-	{
-		.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
-		ERRATA_MIDR_RANGE_LIST(qcom_bp_harden_cpus),
-	},
 #endif
 #ifdef CONFIG_ARM64_SSBD
 	{
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -196,15 +196,3 @@ alternative_endif
 
 	eret
 ENDPROC(__fpsimd_guest_restore)
-
-ENTRY(__qcom_hyp_sanitize_btac_predictors)
-	/**
-	 * Call SMC64 with Silicon provider serviceID 23<<8 (0xc2001700)
-	 * 0xC2000000-0xC200FFFF: assigned to SiP Service Calls
-	 * b15-b0: contains SiP functionID
-	 */
-	movz    x0, #0x1700
-	movk    x0, #0xc200, lsl #16
-	smc     #0
-	ret
-ENDPROC(__qcom_hyp_sanitize_btac_predictors)
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -405,16 +405,6 @@ again:
 
 	__set_host_arch_workaround_state(vcpu);
 
-	if (cpus_have_const_cap(ARM64_HARDEN_BP_POST_GUEST_EXIT)) {
-		u32 midr = read_cpuid_id();
-
-		/* Apply BTAC predictors mitigation to all Falkor chips */
-		if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
-		    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)) {
-			__qcom_hyp_sanitize_btac_predictors();
-		}
-	}
-
 	fp_enabled = __fpsimd_enabled();
 
 	__sysreg_save_guest_state(guest_ctxt);



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 066/119] arm64: dont zero DIT on signal return
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (64 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 065/119] arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 067/119] arm64: Get rid of __smccc_workaround_1_hvc_* Greg Kroah-Hartman
                   ` (57 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mark Rutland, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ard Biesheuvel

From: Mark Rutland <mark.rutland@arm.com>

[ Upstream commit 1265132127b63502d34e0f58c8bdef3a4dc927c2 ]

Currently valid_user_regs() treats SPSR_ELx.DIT as a RES0 bit, causing
it to be zeroed upon exception return, rather than preserved. Thus, code
relying on DIT will not function as expected, and may expose an
unexpected timing side channel.

Let's remove DIT from the set of RES0 bits, such that it is preserved.
At the same time, the related comment is updated to better describe the
situation, and to take into account the most recent documentation of
SPSR_ELx, in ARM DDI 0487C.a.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 7206dc93a58fb764 ("arm64: Expose Arm v8.4 features")
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/ptrace.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1402,15 +1402,19 @@ asmlinkage void syscall_trace_exit(struc
 }
 
 /*
- * Bits which are always architecturally RES0 per ARM DDI 0487A.h
+ * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a
+ * We also take into account DIT (bit 24), which is not yet documented, and
+ * treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be
+ * allocated an EL0 meaning in future.
  * Userspace cannot use these until they have an architectural meaning.
+ * Note that this follows the SPSR_ELx format, not the AArch32 PSR format.
  * We also reserve IL for the kernel; SS is handled dynamically.
  */
 #define SPSR_EL1_AARCH64_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(27, 22) | GENMASK_ULL(20, 10) | \
-	 GENMASK_ULL(5, 5))
+	(GENMASK_ULL(63,32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
+	 GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5))
 #define SPSR_EL1_AARCH32_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(24, 22) | GENMASK_ULL(20,20))
+	(GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20))
 
 static int valid_compat_regs(struct user_pt_regs *regs)
 {
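
A rough model of the sanitization this patch adjusts: RES0 bits are cleared
from the user-supplied pstate, so whether DIT survives depends entirely on
whether bit 24 is part of the mask. The sketch below is illustrative only and
builds the AArch64 mask as it looks after this patch:

/* Illustrative model only: clear RES0 bits from a user-supplied pstate. */
#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l)	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/* AArch64 RES0 set as it looks after this patch: bit 24 (DIT) excluded */
#define SPSR_RES0_BITS						\
	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) |		\
	 GENMASK_ULL(23, 22) | GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5))

#define PSR_DIT_BIT	(1ULL << 24)

int main(void)
{
	uint64_t pstate = PSR_DIT_BIT;		/* user asked for DIT to be set */

	pstate &= ~SPSR_RES0_BITS;		/* the sanitization step */

	printf("DIT %s after sanitization\n",
	       (pstate & PSR_DIT_BIT) ? "preserved" : "zeroed");
	return 0;
}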



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 067/119] arm64: Get rid of __smccc_workaround_1_hvc_*
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (65 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 066/119] arm64: dont zero DIT on signal return Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 068/119] arm64: cpufeature: Detect SSBS and advertise to userspace Greg Kroah-Hartman
                   ` (56 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Marc Zyngier, Will Deacon, Ard Biesheuvel

From: Marc Zyngier <marc.zyngier@arm.com>

[ Upstream commit 22765f30dbaf1118c6ff0fcb8b99c9f2b4d396d5 ]

The very existence of __smccc_workaround_1_hvc_* is a thinko, as
KVM will never use an HVC call to perform the branch prediction
invalidation. Even as a nested hypervisor, it would use an SMC
instruction.

Let's get rid of it.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/bpi.S        |   12 ++----------
 arch/arm64/kernel/cpu_errata.c |    9 +++------
 2 files changed, 5 insertions(+), 16 deletions(-)

--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -56,21 +56,13 @@ ENTRY(__bp_harden_hyp_vecs_start)
 ENTRY(__bp_harden_hyp_vecs_end)
 
 
-.macro smccc_workaround_1 inst
+ENTRY(__smccc_workaround_1_smc_start)
 	sub	sp, sp, #(8 * 4)
 	stp	x2, x3, [sp, #(8 * 0)]
 	stp	x0, x1, [sp, #(8 * 2)]
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_1
-	\inst	#0
+	smc	#0
 	ldp	x2, x3, [sp, #(8 * 0)]
 	ldp	x0, x1, [sp, #(8 * 2)]
 	add	sp, sp, #(8 * 4)
-.endm
-
-ENTRY(__smccc_workaround_1_smc_start)
-	smccc_workaround_1	smc
 ENTRY(__smccc_workaround_1_smc_end)
-
-ENTRY(__smccc_workaround_1_hvc_start)
-	smccc_workaround_1	hvc
-ENTRY(__smccc_workaround_1_hvc_end)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -85,8 +85,6 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_har
 #ifdef CONFIG_KVM
 extern char __smccc_workaround_1_smc_start[];
 extern char __smccc_workaround_1_smc_end[];
-extern char __smccc_workaround_1_hvc_start[];
-extern char __smccc_workaround_1_hvc_end[];
 
 static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 				const char *hyp_vecs_end)
@@ -131,8 +129,6 @@ static void __install_bp_hardening_cb(bp
 #else
 #define __smccc_workaround_1_smc_start		NULL
 #define __smccc_workaround_1_smc_end		NULL
-#define __smccc_workaround_1_hvc_start		NULL
-#define __smccc_workaround_1_hvc_end		NULL
 
 static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 				      const char *hyp_vecs_start,
@@ -206,8 +202,9 @@ enable_smccc_arch_workaround_1(const str
 		if ((int)res.a0 < 0)
 			return;
 		cb = call_hvc_arch_workaround_1;
-		smccc_start = __smccc_workaround_1_hvc_start;
-		smccc_end = __smccc_workaround_1_hvc_end;
+		/* This is a guest, no need to patch KVM vectors */
+		smccc_start = NULL;
+		smccc_end = NULL;
 		break;
 
 	case PSCI_CONDUIT_SMC:



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 068/119] arm64: cpufeature: Detect SSBS and advertise to userspace
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (66 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 067/119] arm64: Get rid of __smccc_workaround_1_hvc_* Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 069/119] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 Greg Kroah-Hartman
                   ` (55 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Suzuki K Poulose, Will Deacon,
	Catalin Marinas, Ard Biesheuvel

From: Will Deacon <will.deacon@arm.com>

[ Upstream commit d71be2b6c0e19180b5f80a6d42039cc074a693a2 ]

Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass
Safe (SSBS) which can be used as a mitigation against Spectre variant 4.

Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS
directly, so that userspace can toggle the SSBS control without trapping
to the kernel.

This patch probes for the existence of SSBS and advertises the new instructions
to userspace if they exist.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpucaps.h    |    3 ++-
 arch/arm64/include/asm/sysreg.h     |   16 ++++++++++++----
 arch/arm64/include/uapi/asm/hwcap.h |    1 +
 arch/arm64/kernel/cpufeature.c      |   19 +++++++++++++++++--
 arch/arm64/kernel/cpuinfo.c         |    1 +
 5 files changed, 33 insertions(+), 7 deletions(-)

--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -44,7 +44,8 @@
 #define ARM64_HARDEN_BRANCH_PREDICTOR		24
 #define ARM64_SSBD				25
 #define ARM64_MISMATCHED_CACHE_TYPE		26
+#define ARM64_SSBS				27
 
-#define ARM64_NCAPS				27
+#define ARM64_NCAPS				28
 
 #endif /* __ASM_CPUCAPS_H */
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -297,6 +297,7 @@
 #define SYS_ICH_LR15_EL2		__SYS__LR8_EL2(7)
 
 /* Common SCTLR_ELx flags. */
+#define SCTLR_ELx_DSSBS	(1UL << 44)
 #define SCTLR_ELx_EE    (1 << 25)
 #define SCTLR_ELx_WXN	(1 << 19)
 #define SCTLR_ELx_I	(1 << 12)
@@ -316,7 +317,7 @@
 			 (1 << 10) | (1 << 13) | (1 << 14) | (1 << 15) | \
 			 (1 << 17) | (1 << 20) | (1 << 21) | (1 << 24) | \
 			 (1 << 26) | (1 << 27) | (1 << 30) | (1 << 31) | \
-			 (0xffffffffUL << 32))
+			 (0xffffefffUL << 32))
 
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL2		SCTLR_ELx_EE
@@ -330,7 +331,7 @@
 #define SCTLR_EL2_SET	(ENDIAN_SET_EL2   | SCTLR_EL2_RES1)
 #define SCTLR_EL2_CLEAR	(SCTLR_ELx_M      | SCTLR_ELx_A    | SCTLR_ELx_C   | \
 			 SCTLR_ELx_SA     | SCTLR_ELx_I    | SCTLR_ELx_WXN | \
-			 ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
+			 SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
 
 #if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff
 #error "Inconsistent SCTLR_EL2 set/clear bits"
@@ -354,7 +355,7 @@
 			 (1 << 29))
 #define SCTLR_EL1_RES0  ((1 << 6)  | (1 << 10) | (1 << 13) | (1 << 17) | \
 			 (1 << 21) | (1 << 27) | (1 << 30) | (1 << 31) | \
-			 (0xffffffffUL << 32))
+			 (0xffffefffUL << 32))
 
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL1		(SCTLR_EL1_E0E | SCTLR_ELx_EE)
@@ -371,7 +372,7 @@
 			 SCTLR_EL1_UCI  | SCTLR_EL1_RES1)
 #define SCTLR_EL1_CLEAR	(SCTLR_ELx_A   | SCTLR_EL1_CP15BEN | SCTLR_EL1_ITD    |\
 			 SCTLR_EL1_UMA | SCTLR_ELx_WXN     | ENDIAN_CLEAR_EL1 |\
-			 SCTLR_EL1_RES0)
+			 SCTLR_ELx_DSSBS | SCTLR_EL1_RES0)
 
 #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff
 #error "Inconsistent SCTLR_EL1 set/clear bits"
@@ -417,6 +418,13 @@
 #define ID_AA64PFR0_EL0_64BIT_ONLY	0x1
 #define ID_AA64PFR0_EL0_32BIT_64BIT	0x2
 
+/* id_aa64pfr1 */
+#define ID_AA64PFR1_SSBS_SHIFT		4
+
+#define ID_AA64PFR1_SSBS_PSTATE_NI	0
+#define ID_AA64PFR1_SSBS_PSTATE_ONLY	1
+#define ID_AA64PFR1_SSBS_PSTATE_INSNS	2
+
 /* id_aa64mmfr0 */
 #define ID_AA64MMFR0_TGRAN4_SHIFT	28
 #define ID_AA64MMFR0_TGRAN64_SHIFT	24
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -48,5 +48,6 @@
 #define HWCAP_USCAT		(1 << 25)
 #define HWCAP_ILRCPC		(1 << 26)
 #define HWCAP_FLAGM		(1 << 27)
+#define HWCAP_SSBS		(1 << 28)
 
 #endif /* _UAPI__ASM_HWCAP_H */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -145,6 +145,11 @@ static const struct arm64_ftr_bits ftr_i
 	ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
+	ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
@@ -345,7 +350,7 @@ static const struct __ftr_reg_entry {
 
 	/* Op1 = 0, CRn = 0, CRm = 4 */
 	ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
-	ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_raz),
+	ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1),
 
 	/* Op1 = 0, CRn = 0, CRm = 5 */
 	ARM64_FTR_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0),
@@ -625,7 +630,6 @@ void update_cpu_features(int cpu,
 
 	/*
 	 * EL3 is not our concern.
-	 * ID_AA64PFR1 is currently RES0.
 	 */
 	taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
 				      info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
@@ -1045,6 +1049,16 @@ static const struct arm64_cpu_capabiliti
 		.min_field_value = 1,
 	},
 #endif
+	{
+		.desc = "Speculative Store Bypassing Safe (SSBS)",
+		.capability = ARM64_SSBS,
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64PFR1_EL1,
+		.field_pos = ID_AA64PFR1_SSBS_SHIFT,
+		.sign = FTR_UNSIGNED,
+		.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
+	},
 	{},
 };
 
@@ -1087,6 +1101,7 @@ static const struct arm64_cpu_capabiliti
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
 	HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
+	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
 	{},
 };
 
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -80,6 +80,7 @@ static const char *const hwcap_str[] = {
 	"uscat",
 	"ilrcpc",
 	"flagm",
+	"ssbs",
 	NULL
 };
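
Once the hwcap is advertised, user space can observe it through the auxiliary
vector. A small sketch (assuming the HWCAP_SSBS value added here, bit 28, and
a libc that provides getauxval()):

/* Illustrative sketch: query the new "ssbs" hwcap from user space. */
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_SSBS
#define HWCAP_SSBS	(1 << 28)	/* value added by this patch on arm64 */
#endif

int main(void)
{
	unsigned long hwcaps = getauxval(AT_HWCAP);

	printf("SSBS instructions %s\n",
	       (hwcaps & HWCAP_SSBS) ? "advertised" : "not advertised");
	return 0;
}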
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 069/119] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (67 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 068/119] arm64: cpufeature: Detect SSBS and advertise to userspace Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 070/119] KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe Greg Kroah-Hartman
                   ` (54 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Will Deacon, Catalin Marinas, Ard Biesheuvel

From: Will Deacon <will.deacon@arm.com>

[ Upstream commit 8f04e8e6e29c93421a95b61cad62e3918425eac7 ]

On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD
state without needing to call into firmware.

This patch hooks into the existing SSBD infrastructure so that SSBS is
used on CPUs that support it, but it's all made horribly complicated by
the very real possibility of big/little systems that don't uniformly
provide the new capability.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[ardb: add #include of asm/compat.h]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/processor.h   |    7 +++++
 arch/arm64/include/asm/ptrace.h      |    1 
 arch/arm64/include/asm/sysreg.h      |    3 ++
 arch/arm64/include/uapi/asm/ptrace.h |    1 
 arch/arm64/kernel/cpu_errata.c       |   26 ++++++++++++++++++--
 arch/arm64/kernel/cpufeature.c       |   45 +++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/process.c          |    4 +++
 arch/arm64/kernel/ssbd.c             |   22 +++++++++++++++++
 8 files changed, 107 insertions(+), 2 deletions(-)

--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -153,6 +153,10 @@ static inline void start_thread(struct p
 {
 	start_thread_common(regs, pc);
 	regs->pstate = PSR_MODE_EL0t;
+
+	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
+		regs->pstate |= PSR_SSBS_BIT;
+
 	regs->sp = sp;
 }
 
@@ -169,6 +173,9 @@ static inline void compat_start_thread(s
 	regs->pstate |= COMPAT_PSR_E_BIT;
 #endif
 
+	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
+		regs->pstate |= PSR_AA32_SSBS_BIT;
+
 	regs->compat_sp = sp;
 }
 #endif
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -50,6 +50,7 @@
 #define PSR_AA32_I_BIT		0x00000080
 #define PSR_AA32_A_BIT		0x00000100
 #define PSR_AA32_E_BIT		0x00000200
+#define PSR_AA32_SSBS_BIT	0x00800000
 #define PSR_AA32_DIT_BIT	0x01000000
 #define PSR_AA32_Q_BIT		0x08000000
 #define PSR_AA32_V_BIT		0x10000000
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -86,11 +86,14 @@
 
 #define REG_PSTATE_PAN_IMM		sys_reg(0, 0, 4, 0, 4)
 #define REG_PSTATE_UAO_IMM		sys_reg(0, 0, 4, 0, 3)
+#define REG_PSTATE_SSBS_IMM		sys_reg(0, 3, 4, 0, 1)
 
 #define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM |	\
 				      (!!x)<<8 | 0x1f)
 #define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM |	\
 				      (!!x)<<8 | 0x1f)
+#define SET_PSTATE_SSBS(x) __emit_inst(0xd5000000 | REG_PSTATE_SSBS_IMM | \
+				       (!!x)<<8 | 0x1f)
 
 #define SYS_DC_ISW			sys_insn(1, 0, 7, 6, 2)
 #define SYS_DC_CSW			sys_insn(1, 0, 7, 10, 2)
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -45,6 +45,7 @@
 #define PSR_I_BIT	0x00000080
 #define PSR_A_BIT	0x00000100
 #define PSR_D_BIT	0x00000200
+#define PSR_SSBS_BIT	0x00001000
 #define PSR_PAN_BIT	0x00400000
 #define PSR_UAO_BIT	0x00800000
 #define PSR_Q_BIT	0x08000000
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -304,6 +304,14 @@ void __init arm64_enable_wa2_handling(st
 
 void arm64_set_ssbd_mitigation(bool state)
 {
+	if (this_cpu_has_cap(ARM64_SSBS)) {
+		if (state)
+			asm volatile(SET_PSTATE_SSBS(0));
+		else
+			asm volatile(SET_PSTATE_SSBS(1));
+		return;
+	}
+
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
@@ -328,6 +336,11 @@ static bool has_ssbd_mitigation(const st
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
+	if (this_cpu_has_cap(ARM64_SSBS)) {
+		required = false;
+		goto out_printmsg;
+	}
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
 		ssbd_state = ARM64_SSBD_UNKNOWN;
 		return false;
@@ -376,7 +389,6 @@ static bool has_ssbd_mitigation(const st
 
 	switch (ssbd_state) {
 	case ARM64_SSBD_FORCE_DISABLE:
-		pr_info_once("%s disabled from command-line\n", entry->desc);
 		arm64_set_ssbd_mitigation(false);
 		required = false;
 		break;
@@ -389,7 +401,6 @@ static bool has_ssbd_mitigation(const st
 		break;
 
 	case ARM64_SSBD_FORCE_ENABLE:
-		pr_info_once("%s forced from command-line\n", entry->desc);
 		arm64_set_ssbd_mitigation(true);
 		required = true;
 		break;
@@ -399,6 +410,17 @@ static bool has_ssbd_mitigation(const st
 		break;
 	}
 
+out_printmsg:
+	switch (ssbd_state) {
+	case ARM64_SSBD_FORCE_DISABLE:
+		pr_info_once("%s disabled from command-line\n", entry->desc);
+		break;
+
+	case ARM64_SSBD_FORCE_ENABLE:
+		pr_info_once("%s forced from command-line\n", entry->desc);
+		break;
+	}
+
 	return required;
 }
 #endif	/* CONFIG_ARM64_SSBD */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -925,6 +925,48 @@ static void cpu_copy_el2regs(const struc
 		write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
 }
 
+#ifdef CONFIG_ARM64_SSBD
+static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
+{
+	if (user_mode(regs))
+		return 1;
+
+	if (instr & BIT(CRm_shift))
+		regs->pstate |= PSR_SSBS_BIT;
+	else
+		regs->pstate &= ~PSR_SSBS_BIT;
+
+	arm64_skip_faulting_instruction(regs, 4);
+	return 0;
+}
+
+static struct undef_hook ssbs_emulation_hook = {
+	.instr_mask	= ~(1U << CRm_shift),
+	.instr_val	= 0xd500001f | REG_PSTATE_SSBS_IMM,
+	.fn		= ssbs_emulation_handler,
+};
+
+static void cpu_enable_ssbs(const struct arm64_cpu_capabilities *__unused)
+{
+	static bool undef_hook_registered = false;
+	static DEFINE_SPINLOCK(hook_lock);
+
+	spin_lock(&hook_lock);
+	if (!undef_hook_registered) {
+		register_undef_hook(&ssbs_emulation_hook);
+		undef_hook_registered = true;
+	}
+	spin_unlock(&hook_lock);
+
+	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) {
+		sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS);
+		arm64_set_ssbd_mitigation(false);
+	} else {
+		arm64_set_ssbd_mitigation(true);
+	}
+}
+#endif /* CONFIG_ARM64_SSBD */
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -1049,6 +1091,7 @@ static const struct arm64_cpu_capabiliti
 		.min_field_value = 1,
 	},
 #endif
+#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypassing Safe (SSBS)",
 		.capability = ARM64_SSBS,
@@ -1058,7 +1101,9 @@ static const struct arm64_cpu_capabiliti
 		.field_pos = ID_AA64PFR1_SSBS_SHIFT,
 		.sign = FTR_UNSIGNED,
 		.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
+		.cpu_enable = cpu_enable_ssbs,
 	},
+#endif
 	{},
 };
 
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -296,6 +296,10 @@ int copy_thread(unsigned long clone_flag
 		if (IS_ENABLED(CONFIG_ARM64_UAO) &&
 		    cpus_have_const_cap(ARM64_HAS_UAO))
 			childregs->pstate |= PSR_UAO_BIT;
+
+		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
+			childregs->pstate |= PSR_SSBS_BIT;
+
 		p->thread.cpu_context.x19 = stack_start;
 		p->thread.cpu_context.x20 = stk_sz;
 	}
--- a/arch/arm64/kernel/ssbd.c
+++ b/arch/arm64/kernel/ssbd.c
@@ -3,13 +3,32 @@
  * Copyright (C) 2018 ARM Ltd, All Rights Reserved.
  */
 
+#include <linux/compat.h>
 #include <linux/errno.h>
 #include <linux/prctl.h>
 #include <linux/sched.h>
+#include <linux/sched/task_stack.h>
 #include <linux/thread_info.h>
 
+#include <asm/compat.h>
 #include <asm/cpufeature.h>
 
+static void ssbd_ssbs_enable(struct task_struct *task)
+{
+	u64 val = is_compat_thread(task_thread_info(task)) ?
+		  PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
+
+	task_pt_regs(task)->pstate |= val;
+}
+
+static void ssbd_ssbs_disable(struct task_struct *task)
+{
+	u64 val = is_compat_thread(task_thread_info(task)) ?
+		  PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
+
+	task_pt_regs(task)->pstate &= ~val;
+}
+
 /*
  * prctl interface for SSBD
  */
@@ -45,12 +64,14 @@ static int ssbd_prctl_set(struct task_st
 			return -EPERM;
 		task_clear_spec_ssb_disable(task);
 		clear_tsk_thread_flag(task, TIF_SSBD);
+		ssbd_ssbs_enable(task);
 		break;
 	case PR_SPEC_DISABLE:
 		if (state == ARM64_SSBD_FORCE_DISABLE)
 			return -EPERM;
 		task_set_spec_ssb_disable(task);
 		set_tsk_thread_flag(task, TIF_SSBD);
+		ssbd_ssbs_disable(task);
 		break;
 	case PR_SPEC_FORCE_DISABLE:
 		if (state == ARM64_SSBD_FORCE_DISABLE)
@@ -58,6 +79,7 @@ static int ssbd_prctl_set(struct task_st
 		task_set_spec_ssb_disable(task);
 		task_set_spec_ssb_force_disable(task);
 		set_tsk_thread_flag(task, TIF_SSBD);
+		ssbd_ssbs_disable(task);
 		break;
 	default:
 		return -ERANGE;
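
From user space this is driven through the existing speculation-control prctl
interface rather than anything SSBS-specific. A sketch of a task opting into
the mitigation (the PR_* constants are taken from linux/prctl.h and guarded in
case the toolchain headers predate them):

/* Illustrative sketch: per-task SSB mitigation control via prctl(). */
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_SPECULATION_CTRL
#define PR_GET_SPECULATION_CTRL	52
#define PR_SET_SPECULATION_CTRL	53
#define PR_SPEC_STORE_BYPASS	0
#define PR_SPEC_DISABLE		(1UL << 2)
#endif

int main(void)
{
	/*
	 * Request the mitigation for this task; on kernels with this patch
	 * PR_SPEC_DISABLE also clears the task's SSBS bit (ssbd_ssbs_disable()).
	 */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0))
		perror("PR_SET_SPECULATION_CTRL");

	printf("speculation ctrl state: %ld\n",
	       (long)prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0));
	return 0;
}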



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 070/119] KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (68 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 069/119] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 071/119] arm64: fix SSBS sanitization Greg Kroah-Hartman
                   ` (53 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Christoffer Dall, Will Deacon,
	Catalin Marinas, Ard Biesheuvel

From: Will Deacon <will.deacon@arm.com>

[ Upstream commit 7c36447ae5a090729e7b129f24705bb231a07e0b ]

When running without VHE, it is necessary to set SCTLR_EL2.DSSBS if SSBD
has been forcefully disabled on the kernel command-line.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/kvm_host.h |   11 +++++++++++
 arch/arm64/kvm/hyp/sysreg-sr.c    |   11 +++++++++++
 2 files changed, 22 insertions(+)

--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -356,6 +356,8 @@ struct kvm_vcpu *kvm_mpidr_to_vcpu(struc
 void __kvm_set_tpidr_el2(u64 tpidr_el2);
 DECLARE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);
 
+void __kvm_enable_ssbs(void);
+
 static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
 				       unsigned long vector_ptr)
@@ -380,6 +382,15 @@ static inline void __cpu_init_hyp_mode(p
 		- (u64)kvm_ksym_ref(kvm_host_cpu_state);
 
 	kvm_call_hyp(__kvm_set_tpidr_el2, tpidr_el2);
+
+	/*
+	 * Disabling SSBD on a non-VHE system requires us to enable SSBS
+	 * at EL2.
+	 */
+	if (!has_vhe() && this_cpu_has_cap(ARM64_SSBS) &&
+	    arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) {
+		kvm_call_hyp(__kvm_enable_ssbs);
+	}
 }
 
 static inline void kvm_arch_hardware_unsetup(void) {}
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -188,3 +188,14 @@ void __hyp_text __kvm_set_tpidr_el2(u64
 {
 	asm("msr tpidr_el2, %0": : "r" (tpidr_el2));
 }
+
+void __hyp_text __kvm_enable_ssbs(void)
+{
+	u64 tmp;
+
+	asm volatile(
+	"mrs	%0, sctlr_el2\n"
+	"orr	%0, %0, %1\n"
+	"msr	sctlr_el2, %0"
+	: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
+}



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 071/119] arm64: fix SSBS sanitization
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (69 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 070/119] KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 072/119] arm64: Add sysfs vulnerability show for spectre-v1 Greg Kroah-Hartman
                   ` (52 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mark Rutland, Catalin Marinas,
	Suzuki K Poulose, Will Deacon, Ard Biesheuvel

From: Mark Rutland <mark.rutland@arm.com>

[ Upstream commit f54dada8274643e3ff4436df0ea124aeedc43cae ]

In valid_user_regs() we treat SSBS as a RES0 bit, and consequently it is
unexpectedly cleared when we restore a sigframe or fiddle with GPRs via
ptrace.

This patch fixes valid_user_regs() to account for this, updating the
function to refer to the latest ARM ARM (ARM DDI 0487D.a). For AArch32
tasks, SSBS appears in bit 23 of SPSR_EL1, matching its position in the
AArch32-native PSR format, and we don't need to translate it as we have
to for DIT.

There are no other bit assignments that we need to account for today.
As the recent documentation describes the DIT bit, we can drop our
comment regarding DIT.

While removing SSBS from the RES0 masks, existing inconsistent
whitespace is corrected.
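
As a quick sanity check of the new mask, the stand-alone snippet below
(not part of the patch; GENMASK_ULL is re-derived locally rather than
taken from the kernel headers) rebuilds the updated AArch64 RES0 mask and
confirms that bit 12 (SSBS) is no longer treated as RES0:

	#include <stdint.h>
	#include <stdio.h>

	/* Simplified stand-in for the kernel's GENMASK_ULL(h, l). */
	#define GENMASK_ULL(h, l) \
		((~0ULL << (l)) & (~0ULL >> (63 - (h))))

	int main(void)
	{
		uint64_t res0 = GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) |
				GENMASK_ULL(23, 22) | GENMASK_ULL(20, 13) |
				GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5);

		/* Bit 12 (SSBS) must no longer be part of the RES0 set. */
		printf("bit 12 is RES0: %s\n",
		       (res0 & (1ULL << 12)) ? "yes" : "no"); /* prints "no" */
		return 0;
	}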

Fixes: d71be2b6c0e19180 ("arm64: cpufeature: Detect SSBS and advertise to userspace")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/ptrace.c |   15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1402,19 +1402,20 @@ asmlinkage void syscall_trace_exit(struc
 }
 
 /*
- * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a
- * We also take into account DIT (bit 24), which is not yet documented, and
- * treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be
- * allocated an EL0 meaning in future.
+ * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
+ * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
+ * not described in ARM DDI 0487D.a.
+ * We treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may
+ * be allocated an EL0 meaning in future.
  * Userspace cannot use these until they have an architectural meaning.
  * Note that this follows the SPSR_ELx format, not the AArch32 PSR format.
  * We also reserve IL for the kernel; SS is handled dynamically.
  */
 #define SPSR_EL1_AARCH64_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
-	 GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5))
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
+	 GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
 #define SPSR_EL1_AARCH32_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20))
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))
 
 static int valid_compat_regs(struct user_pt_regs *regs)
 {



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 072/119] arm64: Add sysfs vulnerability show for spectre-v1
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (70 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 071/119] arm64: fix SSBS sanitization Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 073/119] arm64: add sysfs vulnerability show for meltdown Greg Kroah-Hartman
                   ` (51 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mian Yousaf Kaukab, Jeremy Linton,
	Andre Przywara, Catalin Marinas, Stefan Wahren, Suzuki K Poulose,
	Will Deacon, Ard Biesheuvel

From: Mian Yousaf Kaukab <ykaukab@suse.de>

[ Upstream commit 3891ebccace188af075ce143d8b072b65e90f695 ]

spectre-v1 has been mitigated and the mitigation is always active.
Report this to userspace via sysfs.
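
The function added below overrides a __weak default in drivers/base/cpu.c,
which is what exposes it as
/sys/devices/system/cpu/vulnerabilities/spectre_v1. From memory, the
generic fallback looks roughly like this (sketch, not part of the patch):

	ssize_t __weak cpu_show_spectre_v1(struct device *dev,
					   struct device_attribute *attr,
					   char *buf)
	{
		return sprintf(buf, "Not affected\n");
	}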

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu_errata.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -638,3 +638,9 @@ const struct arm64_cpu_capabilities arm6
 	{
 	}
 };
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+			    char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 073/119] arm64: add sysfs vulnerability show for meltdown
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (71 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 072/119] arm64: Add sysfs vulnerability show for spectre-v1 Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 074/119] arm64: enable generic CPU vulnerabilites support Greg Kroah-Hartman
                   ` (50 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Jeremy Linton, Suzuki K Poulose,
	Andre Przywara, Catalin Marinas, Stefan Wahren, Will Deacon,
	Ard Biesheuvel

From: Jeremy Linton <jeremy.linton@arm.com>

[ Upstream commit 1b3ccf4be0e7be8c4bd8522066b6cbc92591e912 ]

We implement page table isolation as a mitigation for meltdown.
Report this to userspace via sysfs.
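
cpu_show_meltdown() below distinguishes "Mitigation: PTI" from
"Vulnerable" via arm64_kernel_unmapped_at_el0(), which is not visible in
the diff; it boils down to a capability check, roughly as follows (sketch
from memory, not part of the patch):

	static inline bool arm64_kernel_unmapped_at_el0(void)
	{
		return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
		       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
	}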

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpufeature.c |   58 +++++++++++++++++++++++++++++++----------
 1 file changed, 44 insertions(+), 14 deletions(-)

--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -824,7 +824,7 @@ static bool has_no_fpsimd(const struct a
 					ID_AA64PFR0_FP_SHIFT) < 0;
 }
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static bool __meltdown_safe = true;
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -842,6 +842,16 @@ static bool unmap_kernel_at_el0(const st
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	};
 	char const *str = "command line option";
+	bool meltdown_safe;
+
+	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
+
+	/* Defer to CPU feature registers */
+	if (has_cpuid_feature(entry, scope))
+		meltdown_safe = true;
+
+	if (!meltdown_safe)
+		__meltdown_safe = false;
 
 	/*
 	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -853,6 +863,19 @@ static bool unmap_kernel_at_el0(const st
 		__kpti_forced = -1;
 	}
 
+	/* Useful for KASLR robustness */
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
+		if (!__kpti_forced) {
+			str = "KASLR";
+			__kpti_forced = 1;
+		}
+	}
+
+	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
+		pr_info_once("kernel page table isolation disabled by kernel configuration\n");
+		return false;
+	}
+
 	/* Forced? */
 	if (__kpti_forced) {
 		pr_info_once("kernel page table isolation forced %s by %s\n",
@@ -860,18 +883,10 @@ static bool unmap_kernel_at_el0(const st
 		return __kpti_forced > 0;
 	}
 
-	/* Useful for KASLR robustness */
-	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
-		return true;
-
-	/* Don't force KPTI for CPUs that are not vulnerable */
-	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
-		return false;
-
-	/* Defer to CPU feature registers */
-	return !has_cpuid_feature(entry, scope);
+	return !meltdown_safe;
 }
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static void
 kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 {
@@ -896,6 +911,12 @@ kpti_install_ng_mappings(const struct ar
 
 	return;
 }
+#else
+static void
+kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+{
+}
+#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 static int __init parse_kpti(char *str)
 {
@@ -909,7 +930,6 @@ static int __init parse_kpti(char *str)
 	return 0;
 }
 early_param("kpti", parse_kpti);
-#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
 {
@@ -1056,7 +1076,6 @@ static const struct arm64_cpu_capabiliti
 		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = hyp_offset_low,
 	},
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
@@ -1072,7 +1091,6 @@ static const struct arm64_cpu_capabiliti
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},
-#endif
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
@@ -1629,3 +1647,15 @@ static int __init enable_mrs_emulation(v
 }
 
 core_initcall(enable_mrs_emulation);
+
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+			  char *buf)
+{
+	if (__meltdown_safe)
+		return sprintf(buf, "Not affected\n");
+
+	if (arm64_kernel_unmapped_at_el0())
+		return sprintf(buf, "Mitigation: PTI\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 074/119] arm64: enable generic CPU vulnerabilites support
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (72 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 073/119] arm64: add sysfs vulnerability show for meltdown Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 075/119] arm64: Always enable ssb vulnerability detection Greg Kroah-Hartman
                   ` (49 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Mian Yousaf Kaukab, Jeremy Linton,
	Andre Przywara, Catalin Marinas, Stefan Wahren, Will Deacon,
	Ard Biesheuvel

From: Mian Yousaf Kaukab <ykaukab@suse.de>

[ Upstream commit 61ae1321f06c4489c724c803e9b8363dea576da3 ]

Enable CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and store-bypass.
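
Once GENERIC_CPU_VULNERABILITIES is selected, the per-vulnerability state
can be read back from sysfs. A minimal userspace sketch (not part of the
patch; the spec_store_bypass entry only reports meaningful state once the
later patches in this series land):

	#include <stdio.h>

	int main(void)
	{
		static const char *files[] = {
			"/sys/devices/system/cpu/vulnerabilities/meltdown",
			"/sys/devices/system/cpu/vulnerabilities/spectre_v1",
			"/sys/devices/system/cpu/vulnerabilities/spectre_v2",
			"/sys/devices/system/cpu/vulnerabilities/spec_store_bypass",
		};
		char line[128];
		unsigned int i;

		for (i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
			FILE *f = fopen(files[i], "r");

			if (!f)
				continue; /* entry absent on this kernel */
			if (fgets(line, sizeof(line), f))
				printf("%s: %s", files[i], line);
			fclose(f);
		}
		return 0;
	}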

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/Kconfig |    1 +
 1 file changed, 1 insertion(+)

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -49,6 +49,7 @@ config ARM64
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST
 	select GENERIC_CPU_AUTOPROBE
+	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IRQ_PROBE



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 075/119] arm64: Always enable ssb vulnerability detection
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (73 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 074/119] arm64: enable generic CPU vulnerabilites support Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 076/119] arm64: Provide a command line to disable spectre_v2 mitigation Greg Kroah-Hartman
                   ` (48 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Jeremy Linton, Andre Przywara,
	Catalin Marinas, Stefan Wahren, Will Deacon, Ard Biesheuvel

From: Jeremy Linton <jeremy.linton@arm.com>

[ Upstream commit d42281b6e49510f078ace15a8ea10f71e6262581 ]

Ensure we are always able to detect whether or not the CPU is affected
by SSB, so that we can later advertise this to userspace.
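
The conversion relies on the IS_ENABLED() pattern visible in the hunk
below: the detection code always builds, and only the mitigation action
is gated on the config option. In generic form (CONFIG_EXAMPLE is a
placeholder, sketch only):

	if (!IS_ENABLED(CONFIG_EXAMPLE)) {
		pr_info_once("mitigation disabled by kernel configuration\n");
		return;
	}
	/* ... apply the mitigation ... */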

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
[will: Use IS_ENABLED instead of #ifdef]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/cpufeature.h |    4 ----
 arch/arm64/kernel/cpu_errata.c      |    9 +++++----
 2 files changed, 5 insertions(+), 8 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -493,11 +493,7 @@ static inline int arm64_get_ssbd_state(v
 #endif
 }
 
-#ifdef CONFIG_ARM64_SSBD
 void arm64_set_ssbd_mitigation(bool state);
-#else
-static inline void arm64_set_ssbd_mitigation(bool state) {}
-#endif
 
 #endif /* __ASSEMBLY__ */
 
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -231,7 +231,6 @@ enable_smccc_arch_workaround_1(const str
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
-#ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
@@ -304,6 +303,11 @@ void __init arm64_enable_wa2_handling(st
 
 void arm64_set_ssbd_mitigation(bool state)
 {
+	if (!IS_ENABLED(CONFIG_ARM64_SSBD)) {
+		pr_info_once("SSBD disabled by kernel configuration\n");
+		return;
+	}
+
 	if (this_cpu_has_cap(ARM64_SSBS)) {
 		if (state)
 			asm volatile(SET_PSTATE_SSBS(0));
@@ -423,7 +427,6 @@ out_printmsg:
 
 	return required;
 }
-#endif	/* CONFIG_ARM64_SSBD */
 
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
 	.matches = is_affected_midr_range,			\
@@ -627,14 +630,12 @@ const struct arm64_cpu_capabilities arm6
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 #endif
-#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypass Disable",
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.capability = ARM64_SSBD,
 		.matches = has_ssbd_mitigation,
 	},
-#endif
 	{
 	}
 };



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 076/119] arm64: Provide a command line to disable spectre_v2 mitigation
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (74 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 075/119] arm64: Always enable ssb vulnerability detection Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 077/119] arm64: Advertise mitigation of Spectre-v2, or lack thereof Greg Kroah-Hartman
                   ` (47 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Jeremy Linton, Suzuki K Poulose,
	Andre Przywara, Catalin Marinas, Stefan Wahren, Jonathan Corbet,
	linux-doc, Will Deacon, Ard Biesheuvel

From: Jeremy Linton <jeremy.linton@arm.com>

[ Upstream commit e5ce5e7267ddcbe13ab9ead2542524e1b7993e5a ]

There are various reasons, such as benchmarking, to disable spectrev2
mitigation on a machine. Provide a command-line option to do so.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 Documentation/admin-guide/kernel-parameters.txt |    8 ++++----
 arch/arm64/kernel/cpu_errata.c                  |   13 +++++++++++++
 2 files changed, 17 insertions(+), 4 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2745,10 +2745,10 @@
 			(bounds check bypass). With this option data leaks
 			are possible in the system.
 
-	nospectre_v2	[X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
-			(indirect branch prediction) vulnerability. System may
-			allow data leaks with this option, which is equivalent
-			to spectre_v2=off.
+	nospectre_v2	[X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
+			the Spectre variant 2 (indirect branch prediction)
+			vulnerability. System may allow data leaks with this
+			option.
 
 	nospec_store_bypass_disable
 			[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -181,6 +181,14 @@ static void qcom_link_stack_sanitization
 		     : "=&r" (tmp));
 }
 
+static bool __nospectre_v2;
+static int __init parse_nospectre_v2(char *str)
+{
+	__nospectre_v2 = true;
+	return 0;
+}
+early_param("nospectre_v2", parse_nospectre_v2);
+
 static void
 enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
@@ -192,6 +200,11 @@ enable_smccc_arch_workaround_1(const str
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		return;
+	}
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
 		return;
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 077/119] arm64: Advertise mitigation of Spectre-v2, or lack thereof
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (75 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 076/119] arm64: Provide a command line to disable spectre_v2 mitigation Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 078/119] arm64: Always enable spectre-v2 vulnerability detection Greg Kroah-Hartman
                   ` (46 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Marc Zyngier, Jeremy Linton, Andre Przywara,
	Suzuki K Poulose, Catalin Marinas, Stefan Wahren, Will Deacon,
	Ard Biesheuvel

From: Marc Zyngier <marc.zyngier@arm.com>

[ Upstream commit 73f38166095947f3b86b02fbed6bd592223a7ac8 ]

We currently have a list of CPUs affected by Spectre-v2, for which
we check that the firmware implements ARCH_WORKAROUND_1. It turns
out that not all firmware implementations provide the required
mitigation, and that we fail to let the user know about it.

Instead, let's slightly revamp our checks, and rely on a whitelist
of cores that are known to be non-vulnerable, and let the user know
the status of the mitigation in the kernel log.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu_errata.c |  108 +++++++++++++++++++++--------------------
 1 file changed, 56 insertions(+), 52 deletions(-)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -98,9 +98,9 @@ static void __copy_hyp_vect_bpi(int slot
 	flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
 }
 
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
+				    const char *hyp_vecs_start,
+				    const char *hyp_vecs_end)
 {
 	static int last_slot = -1;
 	static DEFINE_SPINLOCK(bp_lock);
@@ -130,7 +130,7 @@ static void __install_bp_hardening_cb(bp
 #define __smccc_workaround_1_smc_start		NULL
 #define __smccc_workaround_1_smc_end		NULL
 
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
 				      const char *hyp_vecs_start,
 				      const char *hyp_vecs_end)
 {
@@ -138,23 +138,6 @@ static void __install_bp_hardening_cb(bp
 }
 #endif	/* CONFIG_KVM */
 
-static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
-				     bp_hardening_cb_t fn,
-				     const char *hyp_vecs_start,
-				     const char *hyp_vecs_end)
-{
-	u64 pfr0;
-
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
-	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
-		return;
-
-	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
-}
-
 #include <uapi/linux/psci.h>
 #include <linux/arm-smccc.h>
 #include <linux/psci.h>
@@ -189,31 +172,27 @@ static int __init parse_nospectre_v2(cha
 }
 early_param("nospectre_v2", parse_nospectre_v2);
 
-static void
-enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+/*
+ * -1: No workaround
+ *  0: No workaround required
+ *  1: Workaround installed
+ */
+static int detect_harden_bp_fw(void)
 {
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
 	u32 midr = read_cpuid_id();
 
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	if (__nospectre_v2) {
-		pr_info_once("spectrev2 mitigation disabled by command line option\n");
-		return;
-	}
-
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return;
+		return -1;
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -224,23 +203,23 @@ enable_smccc_arch_workaround_1(const str
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
-		return;
+		return -1;
 	}
 
 	if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;
 
-	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
+	install_bp_hardening_cb(cb, smccc_start, smccc_end);
 
-	return;
+	return 1;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
@@ -479,23 +458,48 @@ out_printmsg:
 	CAP_MIDR_RANGE_LIST(midr_list)
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
-
 /*
- * List of CPUs where we need to issue a psci call to
- * harden the branch predictor.
+ * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
-static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-	{},
+static const struct midr_range spectre_v2_safe_list[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	{ /* sentinel */ }
 };
 
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	int need_wa;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	/* If the CPU has CSV2 set, we're safe */
+	if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
+						 ID_AA64PFR0_CSV2_SHIFT))
+		return false;
+
+	/* Alternatively, we have a list of unaffected CPUs */
+	if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
+		return false;
+
+	/* Fallback to firmware detection */
+	need_wa = detect_harden_bp_fw();
+	if (!need_wa)
+		return false;
+
+	/* forced off */
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		return false;
+	}
+
+	if (need_wa < 0)
+		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+
+	return (need_wa > 0);
+}
 #endif
 
 const struct arm64_cpu_capabilities arm64_errata[] = {
@@ -639,8 +643,8 @@ const struct arm64_cpu_capabilities arm6
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
-		.cpu_enable = enable_smccc_arch_workaround_1,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
 	},
 #endif
 	{



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 078/119] arm64: Always enable spectre-v2 vulnerability detection
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (76 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 077/119] arm64: Advertise mitigation of Spectre-v2, or lack thereof Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 079/119] arm64: add sysfs vulnerability show for spectre-v2 Greg Kroah-Hartman
                   ` (45 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Jeremy Linton, Andre Przywara,
	Catalin Marinas, Stefan Wahren, Will Deacon, Ard Biesheuvel

From: Jeremy Linton <jeremy.linton@arm.com>

[ Upstream commit 8c1e3d2bb44cbb998cb28ff9a18f105fee7f1eb3 ]

Ensure we are always able to detect whether or not the CPU is affected
by Spectre-v2, so that we can later advertise this to userspace.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu_errata.c |   15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -76,7 +76,6 @@ cpu_enable_trap_ctr_access(const struct
 	config_sctlr_el1(SCTLR_EL1_UCT, 0);
 }
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
@@ -217,11 +216,11 @@ static int detect_harden_bp_fw(void)
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;
 
-	install_bp_hardening_cb(cb, smccc_start, smccc_end);
+	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+		install_bp_hardening_cb(cb, smccc_start, smccc_end);
 
 	return 1;
 }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
@@ -457,7 +456,6 @@ out_printmsg:
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -489,6 +487,12 @@ check_branch_predictor(const struct arm6
 	if (!need_wa)
 		return false;
 
+	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
+		pr_warn_once("spectrev2 mitigation disabled by kernel configuration\n");
+		__hardenbp_enab = false;
+		return false;
+	}
+
 	/* forced off */
 	if (__nospectre_v2) {
 		pr_info_once("spectrev2 mitigation disabled by command line option\n");
@@ -500,7 +504,6 @@ check_branch_predictor(const struct arm6
 
 	return (need_wa > 0);
 }
-#endif
 
 const struct arm64_cpu_capabilities arm64_errata[] = {
 #if	defined(CONFIG_ARM64_ERRATUM_826319) || \
@@ -640,13 +643,11 @@ const struct arm64_cpu_capabilities arm6
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	},
 #endif
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = check_branch_predictor,
 	},
-#endif
 	{
 		.desc = "Speculative Store Bypass Disable",
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 079/119] arm64: add sysfs vulnerability show for spectre-v2
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (77 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 078/119] arm64: Always enable spectre-v2 vulnerability detection Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 080/119] arm64: add sysfs vulnerability show for speculative store bypass Greg Kroah-Hartman
                   ` (44 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Jeremy Linton, Andre Przywara,
	Catalin Marinas, Stefan Wahren, Will Deacon, Ard Biesheuvel

From: Jeremy Linton <jeremy.linton@arm.com>

[ Upstream commit d2532e27b5638bb2e2dd52b80b7ea2ec65135377 ]

Track whether all the cores in the machine are vulnerable to Spectre-v2,
and whether all the vulnerable cores have been mitigated. We then expose
this information to userspace via sysfs.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu_errata.c |   27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -456,6 +456,10 @@ out_printmsg:
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static bool __hardenbp_enab = true;
+static bool __spectrev2_safe = true;
+
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -466,6 +470,10 @@ static const struct midr_range spectre_v
 	{ /* sentinel */ }
 };
 
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
 static bool __maybe_unused
 check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 {
@@ -487,6 +495,8 @@ check_branch_predictor(const struct arm6
 	if (!need_wa)
 		return false;
 
+	__spectrev2_safe = false;
+
 	if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
 		pr_warn_once("spectrev2 mitigation disabled by kernel configuration\n");
 		__hardenbp_enab = false;
@@ -496,11 +506,14 @@ check_branch_predictor(const struct arm6
 	/* forced off */
 	if (__nospectre_v2) {
 		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		__hardenbp_enab = false;
 		return false;
 	}
 
-	if (need_wa < 0)
+	if (need_wa < 0) {
 		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+		__hardenbp_enab = false;
+	}
 
 	return (need_wa > 0);
 }
@@ -663,3 +676,15 @@ ssize_t cpu_show_spectre_v1(struct devic
 {
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (__spectrev2_safe)
+		return sprintf(buf, "Not affected\n");
+
+	if (__hardenbp_enab)
+		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 080/119] arm64: add sysfs vulnerability show for speculative store bypass
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (78 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 079/119] arm64: add sysfs vulnerability show for spectre-v2 Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 081/119] arm64: ssbs: Dont treat CPUs with SSBS as unaffected by SSB Greg Kroah-Hartman
                   ` (43 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Stefan Wahren, Jeremy Linton, Will Deacon,
	Ard Biesheuvel

From: Jeremy Linton <jeremy.linton@arm.com>

[ Upstream commit 526e065dbca6df0b5a130b84b836b8b3c9f54e21 ]

Return a status based on ssbd_state and __ssb_safe. If the
mitigation is disabled, or the firmware isn't responding, then
return the expected machine state based on a whitelist of known
good cores.

Given a heterogeneous machine, the overall machine vulnerability
defaults to safe but is reset to unsafe when we miss the whitelist
and the firmware doesn't explicitly tell us the core is safe.
To make that work, we delay transitioning to vulnerable until we
know the firmware isn't responding; this avoids the case where a
core misses the whitelist but the firmware nevertheless reports
that it is not vulnerable. If all the cores in the machine have
SSBS, then __ssb_safe will remain true.

Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu_errata.c |   42 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -225,6 +225,7 @@ static int detect_harden_bp_fw(void)
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
+static bool __ssb_safe = true;
 
 static const struct ssbd_options {
 	const char	*str;
@@ -328,6 +329,7 @@ static bool has_ssbd_mitigation(const st
 	struct arm_smccc_res res;
 	bool required = true;
 	s32 val;
+	bool this_cpu_safe = false;
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
@@ -336,8 +338,14 @@ static bool has_ssbd_mitigation(const st
 		goto out_printmsg;
 	}
 
+	/* delay setting __ssb_safe until we get a firmware response */
+	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
+		this_cpu_safe = true;
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
 		ssbd_state = ARM64_SSBD_UNKNOWN;
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		return false;
 	}
 
@@ -354,6 +362,8 @@ static bool has_ssbd_mitigation(const st
 
 	default:
 		ssbd_state = ARM64_SSBD_UNKNOWN;
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		return false;
 	}
 
@@ -362,14 +372,18 @@ static bool has_ssbd_mitigation(const st
 	switch (val) {
 	case SMCCC_RET_NOT_SUPPORTED:
 		ssbd_state = ARM64_SSBD_UNKNOWN;
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		return false;
 
+	/* machines with mixed mitigation requirements must not return this */
 	case SMCCC_RET_NOT_REQUIRED:
 		pr_info_once("%s mitigation not required\n", entry->desc);
 		ssbd_state = ARM64_SSBD_MITIGATED;
 		return false;
 
 	case SMCCC_RET_SUCCESS:
+		__ssb_safe = false;
 		required = true;
 		break;
 
@@ -379,6 +393,8 @@ static bool has_ssbd_mitigation(const st
 
 	default:
 		WARN_ON(1);
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		return false;
 	}
 
@@ -419,6 +435,14 @@ out_printmsg:
 	return required;
 }
 
+/* known invulnerable cores */
+static const struct midr_range arm64_ssb_cpus[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	{},
+};
+
 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
 	.matches = is_affected_midr_range,			\
 	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
@@ -666,6 +690,7 @@ const struct arm64_cpu_capabilities arm6
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.capability = ARM64_SSBD,
 		.matches = has_ssbd_mitigation,
+		.midr_range_list = arm64_ssb_cpus,
 	},
 	{
 	}
@@ -688,3 +713,20 @@ ssize_t cpu_show_spectre_v2(struct devic
 
 	return sprintf(buf, "Vulnerable\n");
 }
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	if (__ssb_safe)
+		return sprintf(buf, "Not affected\n");
+
+	switch (ssbd_state) {
+	case ARM64_SSBD_KERNEL:
+	case ARM64_SSBD_FORCE_ENABLE:
+		if (IS_ENABLED(CONFIG_ARM64_SSBD))
+			return sprintf(buf,
+			    "Mitigation: Speculative Store Bypass disabled via prctl\n");
+	}
+
+	return sprintf(buf, "Vulnerable\n");
+}



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 081/119] arm64: ssbs: Dont treat CPUs with SSBS as unaffected by SSB
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (79 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 080/119] arm64: add sysfs vulnerability show for speculative store bypass Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:00 ` [PATCH 4.14 082/119] arm64: Force SSBS on context switch Greg Kroah-Hartman
                   ` (42 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable; +Cc: Greg Kroah-Hartman, Will Deacon, Ard Biesheuvel

From: Will Deacon <will.deacon@arm.com>

[ Upstream commit eb337cdfcd5dd3b10522c2f34140a73a4c285c30 ]

SSBS provides a relatively cheap mitigation for SSB, but it is still a
mitigation and its presence does not indicate that the CPU is unaffected
by the vulnerability.

Tweak the mitigation logic so that we report the correct string in sysfs.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu_errata.c |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -333,15 +333,17 @@ static bool has_ssbd_mitigation(const st
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
+	/* delay setting __ssb_safe until we get a firmware response */
+	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
+		this_cpu_safe = true;
+
 	if (this_cpu_has_cap(ARM64_SSBS)) {
+		if (!this_cpu_safe)
+			__ssb_safe = false;
 		required = false;
 		goto out_printmsg;
 	}
 
-	/* delay setting __ssb_safe until we get a firmware response */
-	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
-		this_cpu_safe = true;
-
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
 		ssbd_state = ARM64_SSBD_UNKNOWN;
 		if (!this_cpu_safe)



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 082/119] arm64: Force SSBS on context switch
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (80 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 081/119] arm64: ssbs: Dont treat CPUs with SSBS as unaffected by SSB Greg Kroah-Hartman
@ 2019-10-27 21:00 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 083/119] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Greg Kroah-Hartman
                   ` (41 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:00 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Marc Zyngier, Will Deacon, Ard Biesheuvel

From: Marc Zyngier <marc.zyngier@arm.com>

[ Upstream commit cbdf8a189a66001c36007bf0f5c975d0376c5c3a ]

On a CPU that doesn't support SSBS, PSTATE[12] is RES0.  In a system
where only some of the CPUs implement SSBS, we end up losing track of
the SSBS bit across task migration.

To address this issue, let's force the SSBS bit on context switch.

Fixes: 8f04e8e6e29c ("arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: inverted logic and added comments]
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/include/asm/processor.h |   14 ++++++++++++--
 arch/arm64/kernel/process.c        |   29 ++++++++++++++++++++++++++++-
 2 files changed, 40 insertions(+), 3 deletions(-)

--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -148,6 +148,16 @@ static inline void start_thread_common(s
 	regs->pc = pc;
 }
 
+static inline void set_ssbs_bit(struct pt_regs *regs)
+{
+	regs->pstate |= PSR_SSBS_BIT;
+}
+
+static inline void set_compat_ssbs_bit(struct pt_regs *regs)
+{
+	regs->pstate |= PSR_AA32_SSBS_BIT;
+}
+
 static inline void start_thread(struct pt_regs *regs, unsigned long pc,
 				unsigned long sp)
 {
@@ -155,7 +165,7 @@ static inline void start_thread(struct p
 	regs->pstate = PSR_MODE_EL0t;
 
 	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
-		regs->pstate |= PSR_SSBS_BIT;
+		set_ssbs_bit(regs);
 
 	regs->sp = sp;
 }
@@ -174,7 +184,7 @@ static inline void compat_start_thread(s
 #endif
 
 	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
-		regs->pstate |= PSR_AA32_SSBS_BIT;
+		set_compat_ssbs_bit(regs);
 
 	regs->compat_sp = sp;
 }
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -298,7 +298,7 @@ int copy_thread(unsigned long clone_flag
 			childregs->pstate |= PSR_UAO_BIT;
 
 		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
-			childregs->pstate |= PSR_SSBS_BIT;
+			set_ssbs_bit(childregs);
 
 		p->thread.cpu_context.x19 = stack_start;
 		p->thread.cpu_context.x20 = stk_sz;
@@ -340,6 +340,32 @@ void uao_thread_switch(struct task_struc
 }
 
 /*
+ * Force SSBS state on context-switch, since it may be lost after migrating
+ * from a CPU which treats the bit as RES0 in a heterogeneous system.
+ */
+static void ssbs_thread_switch(struct task_struct *next)
+{
+	struct pt_regs *regs = task_pt_regs(next);
+
+	/*
+	 * Nothing to do for kernel threads, but 'regs' may be junk
+	 * (e.g. idle task) so check the flags and bail early.
+	 */
+	if (unlikely(next->flags & PF_KTHREAD))
+		return;
+
+	/* If the mitigation is enabled, then we leave SSBS clear. */
+	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
+	    test_tsk_thread_flag(next, TIF_SSBD))
+		return;
+
+	if (compat_user_mode(regs))
+		set_compat_ssbs_bit(regs);
+	else if (user_mode(regs))
+		set_ssbs_bit(regs);
+}
+
+/*
  * We store our current task in sp_el0, which is clobbered by userspace. Keep a
  * shadow copy so that we can restore this upon entry from userspace.
  *
@@ -367,6 +393,7 @@ __notrace_funcgraph struct task_struct *
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
+	ssbs_thread_switch(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 083/119] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (81 preceding siblings ...)
  2019-10-27 21:00 ` [PATCH 4.14 082/119] arm64: Force SSBS on context switch Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 084/119] arm64/speculation: Support mitigations= cmdline option Greg Kroah-Hartman
                   ` (40 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Marc Zyngier, Jeremy Linton, Andre Przywara,
	Catalin Marinas, Stefan Wahren, Will Deacon, Ard Biesheuvel

From: Marc Zyngier <marc.zyngier@arm.com>

[ Upstream commit 517953c2c47f9c00a002f588ac856a5bc70cede3 ]

The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
firmware knows about the Spectre-v2 mitigation, this particular
CPU is not vulnerable, and it is thus not necessary to call
the firmware on this CPU.

Let's use this information to our benefit.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kernel/cpu_errata.c |   32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -190,22 +190,36 @@ static int detect_harden_bp_fw(void)
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_hvc_arch_workaround_1;
+			/* This is a guest, no need to patch KVM vectors */
+			smccc_start = NULL;
+			smccc_end = NULL;
+			break;
+		default:
 			return -1;
-		cb = call_hvc_arch_workaround_1;
-		/* This is a guest, no need to patch KVM vectors */
-		smccc_start = NULL;
-		smccc_end = NULL;
+		}
 		break;
 
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_smc_arch_workaround_1;
+			smccc_start = __smccc_workaround_1_smc_start;
+			smccc_end = __smccc_workaround_1_smc_end;
+			break;
+		default:
 			return -1;
-		cb = call_smc_arch_workaround_1;
-		smccc_start = __smccc_workaround_1_smc_start;
-		smccc_end = __smccc_workaround_1_smc_end;
+		}
 		break;
 
 	default:



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 084/119] arm64/speculation: Support mitigations= cmdline option
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (82 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 083/119] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 085/119] MIPS: tlbex: Fix build_restore_pagemask KScratch restore Greg Kroah-Hartman
                   ` (39 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Greg Kroah-Hartman, Josh Poimboeuf, Will Deacon, Ard Biesheuvel

From: Josh Poimboeuf <jpoimboe@redhat.com>

[ Upstream commit a111b7c0f20e13b54df2fa959b3dc0bdf1925ae6 ]

Configure arm64 runtime CPU speculation bug mitigations in accordance
with the 'mitigations=' cmdline option.  This affects Meltdown, Spectre
v2, and Speculative Store Bypass.

The default behavior is unchanged.
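
The arm64 hunks below call cpu_mitigations_off(), which comes from the
generic "mitigations=" handling in kernel/cpu.c; from memory it reduces
to a simple state check (sketch, not part of the patch):

	/* kernel/cpu.c, paraphrased */
	bool cpu_mitigations_off(void)
	{
		return cpu_mitigations == CPU_MITIGATIONS_OFF;
	}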

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
[will: reorder checks so KASLR implies KPTI and SSBS is affected by cmdline]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 Documentation/admin-guide/kernel-parameters.txt |    8 +++++---
 arch/arm64/kernel/cpu_errata.c                  |    6 +++++-
 arch/arm64/kernel/cpufeature.c                  |    8 +++++++-
 3 files changed, 17 insertions(+), 5 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2389,8 +2389,8 @@
 			http://repo.or.cz/w/linux-2.6/mini2440.git
 
 	mitigations=
-			[X86,PPC,S390] Control optional mitigations for CPU
-			vulnerabilities.  This is a set of curated,
+			[X86,PPC,S390,ARM64] Control optional mitigations for
+			CPU vulnerabilities.  This is a set of curated,
 			arch-independent options, each of which is an
 			aggregation of existing arch-specific options.
 
@@ -2399,12 +2399,14 @@
 				improves system performance, but it may also
 				expose users to several CPU vulnerabilities.
 				Equivalent to: nopti [X86,PPC]
+					       kpti=0 [ARM64]
 					       nospectre_v1 [PPC]
 					       nobp=0 [S390]
 					       nospectre_v1 [X86]
-					       nospectre_v2 [X86,PPC,S390]
+					       nospectre_v2 [X86,PPC,S390,ARM64]
 					       spectre_v2_user=off [X86]
 					       spec_store_bypass_disable=off [X86,PPC]
+					       ssbd=force-off [ARM64]
 					       l1tf=off [X86]
 					       mds=off [X86]
 
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -19,6 +19,7 @@
 #include <linux/arm-smccc.h>
 #include <linux/psci.h>
 #include <linux/types.h>
+#include <linux/cpu.h>
 #include <asm/cpu.h>
 #include <asm/cputype.h>
 #include <asm/cpufeature.h>
@@ -347,6 +348,9 @@ static bool has_ssbd_mitigation(const st
 
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
+	if (cpu_mitigations_off())
+		ssbd_state = ARM64_SSBD_FORCE_DISABLE;
+
 	/* delay setting __ssb_safe until we get a firmware response */
 	if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
 		this_cpu_safe = true;
@@ -544,7 +548,7 @@ check_branch_predictor(const struct arm6
 	}
 
 	/* forced off */
-	if (__nospectre_v2) {
+	if (__nospectre_v2 || cpu_mitigations_off()) {
 		pr_info_once("spectrev2 mitigation disabled by command line option\n");
 		__hardenbp_enab = false;
 		return false;
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -24,6 +24,7 @@
 #include <linux/stop_machine.h>
 #include <linux/types.h>
 #include <linux/mm.h>
+#include <linux/cpu.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
 #include <asm/cpu_ops.h>
@@ -841,7 +842,7 @@ static bool unmap_kernel_at_el0(const st
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	};
-	char const *str = "command line option";
+	char const *str = "kpti command line option";
 	bool meltdown_safe;
 
 	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
@@ -871,6 +872,11 @@ static bool unmap_kernel_at_el0(const st
 		}
 	}
 
+	if (cpu_mitigations_off() && !__kpti_forced) {
+		str = "mitigations=off";
+		__kpti_forced = -1;
+	}
+
 	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
 		pr_info_once("kernel page table isolation disabled by kernel configuration\n");
 		return false;



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 085/119] MIPS: tlbex: Fix build_restore_pagemask KScratch restore
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (83 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 084/119] arm64/speculation: Support mitigations= cmdline option Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 086/119] staging: wlan-ng: fix exit return when sme->key_idx >= NUM_WEPKEYS Greg Kroah-Hartman
                   ` (38 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Paul Burton, Dmitry Korotin, linux-mips

From: Paul Burton <paulburton@kernel.org>

commit b42aa3fd5957e4daf4b69129e5ce752a2a53e7d6 upstream.

build_restore_pagemask() will restore the value of register $1/$at when
its restore_scratch argument is non-zero, and aims to do so by filling a
branch delay slot. Commit 0b24cae4d535 ("MIPS: Add missing EHB in mtc0
-> mfc0 sequence.") added an EHB instruction (Execution Hazard Barrier)
prior to restoring $1 from a KScratch register, in order to resolve a
hazard that can result in stale values of the KScratch register being
observed. In particular, P-class CPUs from MIPS with out of order
execution pipelines such as the P5600 & P6600 are affected.

Unfortunately this EHB instruction was inserted in the branch delay slot
causing the MFC0 instruction which performs the restoration to no longer
execute along with the branch. The result is that the $1 register isn't
actually restored, i.e. the TLB refill exception handler clobbers it -
which is exactly the problem the EHB is meant to avoid for the P-class
CPUs.

Similarly build_get_pgd_vmalloc() will restore the value of $1/$at when
its mode argument equals refill_scratch, and suffers from the same
problem.

Fix this by moving the EHB earlier in the emitted code in both cases.
There's no reason it needs to immediately precede the MFC0 - it simply
needs to be between the MTC0 & MFC0.

This bug only affects Cavium Octeon systems which use
build_fast_tlb_refill_handler().

Signed-off-by: Paul Burton <paulburton@kernel.org>
Fixes: 0b24cae4d535 ("MIPS: Add missing EHB in mtc0 -> mfc0 sequence.")
Cc: Dmitry Korotin <dkorotin@wavecomp.com>
Cc: stable@vger.kernel.org # v3.15+
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/mips/mm/tlbex.c |   23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -658,6 +658,13 @@ static void build_restore_pagemask(u32 *
 				   int restore_scratch)
 {
 	if (restore_scratch) {
+		/*
+		 * Ensure the MFC0 below observes the value written to the
+		 * KScratch register by the prior MTC0.
+		 */
+		if (scratch_reg >= 0)
+			uasm_i_ehb(p);
+
 		/* Reset default page size */
 		if (PM_DEFAULT_MASK >> 16) {
 			uasm_i_lui(p, tmp, PM_DEFAULT_MASK >> 16);
@@ -672,12 +679,10 @@ static void build_restore_pagemask(u32 *
 			uasm_i_mtc0(p, 0, C0_PAGEMASK);
 			uasm_il_b(p, r, lid);
 		}
-		if (scratch_reg >= 0) {
-			uasm_i_ehb(p);
+		if (scratch_reg >= 0)
 			UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
-		} else {
+		else
 			UASM_i_LW(p, 1, scratchpad_offset(0), 0);
-		}
 	} else {
 		/* Reset default page size */
 		if (PM_DEFAULT_MASK >> 16) {
@@ -926,6 +931,10 @@ build_get_pgd_vmalloc64(u32 **p, struct
 	}
 	if (mode != not_refill && check_for_high_segbits) {
 		uasm_l_large_segbits_fault(l, *p);
+
+		if (mode == refill_scratch && scratch_reg >= 0)
+			uasm_i_ehb(p);
+
 		/*
 		 * We get here if we are an xsseg address, or if we are
 		 * an xuseg address above (PGDIR_SHIFT+PGDIR_BITS) boundary.
@@ -942,12 +951,10 @@ build_get_pgd_vmalloc64(u32 **p, struct
 		uasm_i_jr(p, ptr);
 
 		if (mode == refill_scratch) {
-			if (scratch_reg >= 0) {
-				uasm_i_ehb(p);
+			if (scratch_reg >= 0)
 				UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
-			} else {
+			else
 				UASM_i_LW(p, 1, scratchpad_offset(0), 0);
-			}
 		} else {
 			uasm_i_nop(p);
 		}



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 086/119] staging: wlan-ng: fix exit return when sme->key_idx >= NUM_WEPKEYS
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (84 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 085/119] MIPS: tlbex: Fix build_restore_pagemask KScratch restore Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 087/119] scsi: sd: Ignore a failure to sync cache due to lack of authorization Greg Kroah-Hartman
                   ` (37 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Colin Ian King

From: Colin Ian King <colin.king@canonical.com>

commit 153c5d8191c26165dbbd2646448ca7207f7796d0 upstream.

Currently, when sme->key_idx >= NUM_WEPKEYS, the exit path goes via the
label 'exit', which checks whether result is non-zero; however, result
has not been initialized and contains garbage.  Fix this by replacing
the goto with a direct return of the error code.
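
As an illustrative aside (not part of the patch), a minimal plain-C model
of this bug class; the names do_connect_*, MAX_KEYS and the literal -22
(-EINVAL) are made up here:

  #include <stdio.h>

  #define MAX_KEYS 4

  /* Shape of the bug: the early exit skips every assignment to result,
   * so the function would return whatever garbage is on the stack. */
  int do_connect_buggy(int key_idx)
  {
          int result;

          if (key_idx >= MAX_KEYS)
                  goto exit;
          result = 0;
  exit:
          return result;          /* uninitialized when key_idx is bad */
  }

  /* Shape of the fix: report the error code directly. */
  int do_connect_fixed(int key_idx)
  {
          if (key_idx >= MAX_KEYS)
                  return -22;     /* -EINVAL */
          return 0;
  }

  int main(void)
  {
          printf("fixed path returns %d\n", do_connect_fixed(9));
          return 0;
  }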

Addresses-Coverity: ("Uninitialized scalar variable")
Fixes: 0ca6d8e74489 ("Staging: wlan-ng: replace switch-case statements with macro")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191014110201.9874-1-colin.king@canonical.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/staging/wlan-ng/cfg80211.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

--- a/drivers/staging/wlan-ng/cfg80211.c
+++ b/drivers/staging/wlan-ng/cfg80211.c
@@ -490,10 +490,8 @@ static int prism2_connect(struct wiphy *
 	/* Set the encryption - we only support wep */
 	if (is_wep) {
 		if (sme->key) {
-			if (sme->key_idx >= NUM_WEPKEYS) {
-				err = -EINVAL;
-				goto exit;
-			}
+			if (sme->key_idx >= NUM_WEPKEYS)
+				return -EINVAL;
 
 			result = prism2_domibset_uint32(wlandev,
 				DIDmib_dot11smt_dot11PrivacyTable_dot11WEPDefaultKeyID,



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 087/119] scsi: sd: Ignore a failure to sync cache due to lack of authorization
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (85 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 086/119] staging: wlan-ng: fix exit return when sme->key_idx >= NUM_WEPKEYS Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 088/119] scsi: core: save/restore command resid for error handling Greg Kroah-Hartman
                   ` (36 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Oliver Neukum, Martin K. Petersen

From: Oliver Neukum <oneukum@suse.com>

commit 21e3d6c81179bbdfa279efc8de456c34b814cfd2 upstream.

I've got a report about a UAS drive enclosure reporting back "Sense:
Logical unit access not authorized" if the drive it holds is password
protected. While the drive is obviously unusable as a mass storage
device in that state, it still exists as an sd device, and when the
system is asked to suspend the drive, it will be sent a SYNCHRONIZE
CACHE. If that fails due to password protection, the error must be
ignored.
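
As an illustrative aside (not part of the patch), the general shape of
such an "ignore these sense codes" check can be written as a small
table-driven helper in plain C; all names below are invented:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  /* asc/ascq pairs whose SYNCHRONIZE CACHE failure should not be treated
   * as an error; any_ascq means the additional qualifier does not matter. */
  struct ignored_sense { unsigned char asc, ascq; bool any_ascq; };

  static const struct ignored_sense ignore_list[] = {
          { 0x3a, 0x00, true  },  /* medium not present */
          { 0x20, 0x00, true  },  /* invalid command */
          { 0x74, 0x71, false },  /* drive is password locked */
  };

  bool sense_is_ignorable(unsigned char asc, unsigned char ascq)
  {
          for (size_t i = 0; i < sizeof(ignore_list) / sizeof(ignore_list[0]); i++)
                  if (ignore_list[i].asc == asc &&
                      (ignore_list[i].any_ascq || ignore_list[i].ascq == ascq))
                          return true;
          return false;
  }

  int main(void)
  {
          printf("0x74/0x71 ignorable: %d\n", sense_is_ignorable(0x74, 0x71));
          return 0;
  }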

Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20190903101840.16483-1-oneukum@suse.com
Signed-off-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/scsi/sd.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1658,7 +1658,8 @@ static int sd_sync_cache(struct scsi_dis
 		/* we need to evaluate the error return  */
 		if (scsi_sense_valid(sshdr) &&
 			(sshdr->asc == 0x3a ||	/* medium not present */
-			 sshdr->asc == 0x20))	/* invalid command */
+			 sshdr->asc == 0x20 ||	/* invalid command */
+			 (sshdr->asc == 0x74 && sshdr->ascq == 0x71)))	/* drive is password locked */
 				/* this is no error here */
 				return 0;
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 088/119] scsi: core: save/restore command resid for error handling
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (86 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 087/119] scsi: sd: Ignore a failure to sync cache due to lack of authorization Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 089/119] scsi: core: try to get module before removing device Greg Kroah-Hartman
                   ` (35 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Damien Le Moal, Bart Van Assche,
	Martin K. Petersen

From: Damien Le Moal <damien.lemoal@wdc.com>

commit 8f8fed0cdbbd6cdbf28d9ebe662f45765d2f7d39 upstream.

When a non-passthrough command is terminated with CHECK CONDITION, request
sense is executed by hijacking the command descriptor. Since
scsi_eh_prep_cmnd() and scsi_eh_restore_cmnd() do not save/restore the
original command resid, the value returned on failure of the original
command is lost and replaced with the value set by the execution of the
request sense command. This value may in many instances be unaligned to the
device sector size, causing sd_done() to print a warning message about the
incorrect unaligned resid before the command is retried.

Fix this problem by saving the original command residual in struct
scsi_eh_save using scsi_eh_prep_cmnd() and restoring it in
scsi_eh_restore_cmnd(). In addition, to make sure that the request sense
command is executed with a correctly initialized command structure, also
reset the residual to 0 in scsi_eh_prep_cmnd() after saving the original
command value in struct scsi_eh_save.
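
As an illustrative aside (not part of the patch), the save, reset and
restore lifecycle can be sketched in plain C; the structures and field
names below are made up:

  #include <assert.h>

  struct cmd      { unsigned int resid_len; int result; };
  struct cmd_save { unsigned int resid_len; int result; };

  /* Remember the original values, then hand the command over clean. */
  void eh_prep(struct cmd *c, struct cmd_save *s)
  {
          s->resid_len = c->resid_len;
          s->result    = c->result;
          c->resid_len = 0;
          c->result    = 0;
  }

  /* Put the original values back once the borrowed command is done. */
  void eh_restore(struct cmd *c, const struct cmd_save *s)
  {
          c->resid_len = s->resid_len;
          c->result    = s->result;
  }

  int main(void)
  {
          struct cmd c = { .resid_len = 512, .result = 2 };
          struct cmd_save s;

          eh_prep(&c, &s);        /* c is now clean for the request sense */
          eh_restore(&c, &s);
          assert(c.resid_len == 512 && c.result == 2);
          return 0;
  }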

Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20191001074839.1994-1-damien.lemoal@wdc.com
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/scsi/scsi_error.c |    3 +++
 include/scsi/scsi_eh.h    |    1 +
 2 files changed, 4 insertions(+)

--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -935,6 +935,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd
 	ses->sdb = scmd->sdb;
 	ses->next_rq = scmd->request->next_rq;
 	ses->result = scmd->result;
+	ses->resid_len = scmd->req.resid_len;
 	ses->underflow = scmd->underflow;
 	ses->prot_op = scmd->prot_op;
 	ses->eh_eflags = scmd->eh_eflags;
@@ -946,6 +947,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd
 	memset(&scmd->sdb, 0, sizeof(scmd->sdb));
 	scmd->request->next_rq = NULL;
 	scmd->result = 0;
+	scmd->req.resid_len = 0;
 
 	if (sense_bytes) {
 		scmd->sdb.length = min_t(unsigned, SCSI_SENSE_BUFFERSIZE,
@@ -999,6 +1001,7 @@ void scsi_eh_restore_cmnd(struct scsi_cm
 	scmd->sdb = ses->sdb;
 	scmd->request->next_rq = ses->next_rq;
 	scmd->result = ses->result;
+	scmd->req.resid_len = ses->resid_len;
 	scmd->underflow = ses->underflow;
 	scmd->prot_op = ses->prot_op;
 	scmd->eh_eflags = ses->eh_eflags;
--- a/include/scsi/scsi_eh.h
+++ b/include/scsi/scsi_eh.h
@@ -32,6 +32,7 @@ extern int scsi_ioctl_reset(struct scsi_
 struct scsi_eh_save {
 	/* saved state */
 	int result;
+	unsigned int resid_len;
 	int eh_eflags;
 	enum dma_data_direction data_direction;
 	unsigned underflow;



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 089/119] scsi: core: try to get module before removing device
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (87 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 088/119] scsi: core: save/restore command resid for error handling Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 090/119] scsi: ch: Make it possible to open a ch device multiple times again Greg Kroah-Hartman
                   ` (34 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Yufen Yu, Bart Van Assche,
	Martin K. Petersen

From: Yufen Yu <yuyufen@huawei.com>

commit 77c301287ebae86cc71d03eb3806f271cb14da79 upstream.

We have a test case like block/001 in blktests, which creates a scsi
device by loading the scsi_debug module and then tries to delete the
device through the sysfs interface. At the same time, it may remove the
scsi_debug module.

This can trigger an invalid paging request BUG as follows:

[   34.625854] BUG: unable to handle page fault for address: ffffffffa0016bb8
[   34.629189] Oops: 0000 [#1] SMP PTI
[   34.629618] CPU: 1 PID: 450 Comm: bash Tainted: G        W         5.4.0-rc3+ #473
[   34.632524] RIP: 0010:scsi_proc_hostdir_rm+0x5/0xa0
[   34.643555] CR2: ffffffffa0016bb8 CR3: 000000012cd88000 CR4: 00000000000006e0
[   34.644545] Call Trace:
[   34.644907]  scsi_host_dev_release+0x6b/0x1f0
[   34.645511]  device_release+0x74/0x110
[   34.646046]  kobject_put+0x116/0x390
[   34.646559]  put_device+0x17/0x30
[   34.647041]  scsi_target_dev_release+0x2b/0x40
[   34.647652]  device_release+0x74/0x110
[   34.648186]  kobject_put+0x116/0x390
[   34.648691]  put_device+0x17/0x30
[   34.649157]  scsi_device_dev_release_usercontext+0x2e8/0x360
[   34.649953]  execute_in_process_context+0x29/0x80
[   34.650603]  scsi_device_dev_release+0x20/0x30
[   34.651221]  device_release+0x74/0x110
[   34.651732]  kobject_put+0x116/0x390
[   34.652230]  sysfs_unbreak_active_protection+0x3f/0x50
[   34.652935]  sdev_store_delete.cold.4+0x71/0x8f
[   34.653579]  dev_attr_store+0x1b/0x40
[   34.654103]  sysfs_kf_write+0x3d/0x60
[   34.654603]  kernfs_fop_write+0x174/0x250
[   34.655165]  __vfs_write+0x1f/0x60
[   34.655639]  vfs_write+0xc7/0x280
[   34.656117]  ksys_write+0x6d/0x140
[   34.656591]  __x64_sys_write+0x1e/0x30
[   34.657114]  do_syscall_64+0xb1/0x400
[   34.657627]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   34.658335] RIP: 0033:0x7f156f337130

While the scsi target is being deleted, the scsi_debug module may
already have been removed. Then sdebug_driver_template, which belongs to
that module, can no longer be accessed, resulting in the BUG in
scsi_proc_hostdir_rm().

To fix the bug, add scsi_device_get() in sdev_store_delete() to take a
reference on the module, preventing it from being removed while the
device is being deleted.
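
As an illustrative aside (not part of the patch), the underlying
"pin the owner before tearing things down" pattern looks roughly like
this in plain C; every name and the -19 (-ENODEV) literal are invented:

  #include <assert.h>

  struct owner { int refs; };

  /* Take a reference; fail if the owner is already gone. */
  int owner_get(struct owner *o)
  {
          if (o->refs <= 0)
                  return -19;     /* -ENODEV */
          o->refs++;
          return 0;
  }

  void owner_put(struct owner *o)
  {
          o->refs--;              /* the final put would free the owner */
  }

  int delete_device(struct owner *o)
  {
          if (owner_get(o))       /* pin the owner for the whole removal */
                  return -19;
          /* removal work that still dereferences the owner goes here */
          owner_put(o);
          return 0;
  }

  int main(void)
  {
          struct owner module = { .refs = 1 };

          assert(delete_device(&module) == 0);
          assert(module.refs == 1);       /* references balanced */
          return 0;
  }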

Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20191015130556.18061-1-yuyufen@huawei.com
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/scsi/scsi_sysfs.c |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -722,6 +722,14 @@ sdev_store_delete(struct device *dev, st
 		  const char *buf, size_t count)
 {
 	struct kernfs_node *kn;
+	struct scsi_device *sdev = to_scsi_device(dev);
+
+	/*
+	 * We need to try to get module, avoiding the module been removed
+	 * during delete.
+	 */
+	if (scsi_device_get(sdev))
+		return -ENODEV;
 
 	kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
 	WARN_ON_ONCE(!kn);
@@ -736,9 +744,10 @@ sdev_store_delete(struct device *dev, st
 	 * state into SDEV_DEL.
 	 */
 	device_remove_file(dev, attr);
-	scsi_remove_device(to_scsi_device(dev));
+	scsi_remove_device(sdev);
 	if (kn)
 		sysfs_unbreak_active_protection(kn);
+	scsi_device_put(sdev);
 	return count;
 };
 static DEVICE_ATTR(delete, S_IWUSR, NULL, sdev_store_delete);



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 090/119] scsi: ch: Make it possible to open a ch device multiple times again
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (88 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 089/119] scsi: core: try to get module before removing device Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 091/119] Input: da9063 - fix capability and drop KEY_SLEEP Greg Kroah-Hartman
                   ` (33 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Hannes Reinecke, Rob Turk,
	Bart Van Assche, Martin K. Petersen

From: Bart Van Assche <bvanassche@acm.org>

commit 6a0990eaa768dfb7064f06777743acc6d392084b upstream.

Clearing ch->device in ch_release() is wrong because that pointer must
remain valid until ch_remove() is called. This patch fixes the following
crash the second time a ch device is opened:

BUG: kernel NULL pointer dereference, address: 0000000000000790
RIP: 0010:scsi_device_get+0x5/0x60
Call Trace:
 ch_open+0x4c/0xa0 [ch]
 chrdev_open+0xa2/0x1c0
 do_dentry_open+0x13a/0x380
 path_openat+0x591/0x1470
 do_filp_open+0x91/0x100
 do_sys_open+0x184/0x220
 do_syscall_64+0x5f/0x1a0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: 085e56766f74 ("scsi: ch: add refcounting")
Cc: Hannes Reinecke <hare@suse.de>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191009173536.247889-1-bvanassche@acm.org
Reported-by: Rob Turk <robtu@rtist.nl>
Suggested-by: Rob Turk <robtu@rtist.nl>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/scsi/ch.c |    1 -
 1 file changed, 1 deletion(-)

--- a/drivers/scsi/ch.c
+++ b/drivers/scsi/ch.c
@@ -578,7 +578,6 @@ ch_release(struct inode *inode, struct f
 	scsi_changer *ch = file->private_data;
 
 	scsi_device_put(ch->device);
-	ch->device = NULL;
 	file->private_data = NULL;
 	kref_put(&ch->ref, ch_destroy);
 	return 0;



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 091/119] Input: da9063 - fix capability and drop KEY_SLEEP
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (89 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 090/119] scsi: ch: Make it possible to open a ch device multiple times again Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 092/119] Input: synaptics-rmi4 - avoid processing unknown IRQs Greg Kroah-Hartman
                   ` (32 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Marco Felsch, Dmitry Torokhov

From: Marco Felsch <m.felsch@pengutronix.de>

commit afce285b859cea91c182015fc9858ea58c26cd0e upstream.

Since commit f889beaaab1c ("Input: da9063 - report KEY_POWER instead of
KEY_SLEEP during power key-press") KEY_SLEEP isn't supported anymore. This
caused the input device to not generate any events if "dlg,disable-key-power"
is set.

Fix this by unconditionally setting KEY_POWER capability, and not
declaring KEY_SLEEP.

Fixes: f889beaaab1c ("Input: da9063 - report KEY_POWER instead of KEY_SLEEP during power key-press")
Signed-off-by: Marco Felsch <m.felsch@pengutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/input/misc/da9063_onkey.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/drivers/input/misc/da9063_onkey.c
+++ b/drivers/input/misc/da9063_onkey.c
@@ -248,10 +248,7 @@ static int da9063_onkey_probe(struct pla
 	onkey->input->phys = onkey->phys;
 	onkey->input->dev.parent = &pdev->dev;
 
-	if (onkey->key_power)
-		input_set_capability(onkey->input, EV_KEY, KEY_POWER);
-
-	input_set_capability(onkey->input, EV_KEY, KEY_SLEEP);
+	input_set_capability(onkey->input, EV_KEY, KEY_POWER);
 
 	INIT_DELAYED_WORK(&onkey->work, da9063_poll_on);
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 092/119] Input: synaptics-rmi4 - avoid processing unknown IRQs
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (90 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 091/119] Input: da9063 - fix capability and drop KEY_SLEEP Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 093/119] ASoC: rsnd: Reinitialize bit clock inversion flag for every format setting Greg Kroah-Hartman
                   ` (31 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Evan Green, Andrew Duggan, Dmitry Torokhov

From: Evan Green <evgreen@chromium.org>

commit 363c53875aef8fce69d4a2d0873919ccc7d9e2ad upstream.

rmi_process_interrupt_requests() calls handle_nested_irq() for
each interrupt status bit it finds. If the irq domain mapping for
this bit had not yet been set up, then it ends up calling
handle_nested_irq(0), which causes a NULL pointer dereference.

There's already code that masks the irq_status bits coming out of the
hardware with current_irq_mask, presumably to avoid this situation.
However, current_irq_mask seems to reflect the actual mask set in the
hardware rather than the IRQs software has set up and registered
for. For example, in rmi_driver_reset_handler(), the current_irq_mask
is initialized based on what is read from the hardware. If the reset
value of this mask enables IRQs that Linux has not set up yet, then
we end up in this situation.

There appears to be a third unused bitmask that used to serve this
purpose, fn_irq_bits. Use that bitmask instead of current_irq_mask
to avoid calling handle_nested_irq() on IRQs that have not yet been
set up.
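
As an illustrative aside (not part of the patch), the masking idea can be
modelled with plain bitmasks: only status bits for IRQs that software has
actually registered get dispatched. All names below are invented:

  #include <stdio.h>

  static void dispatch(unsigned int bit) { printf("handling IRQ bit %u\n", bit); }

  /* Pending hardware status is ANDed with the set of registered IRQs
   * before dispatch, so stray bits are never handed to a handler. */
  static void process_pending(unsigned long status, unsigned long registered)
  {
          unsigned long pending = status & registered;

          for (unsigned int bit = 0; bit < sizeof(pending) * 8; bit++)
                  if (pending & (1UL << bit))
                          dispatch(bit);
  }

  int main(void)
  {
          /* Hardware reports bits 0, 3 and 5; only bits 0, 1 and 3 are
           * registered, so bit 5 is silently dropped. */
          process_pending(0x29UL, 0x0bUL);
          return 0;
  }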

Signed-off-by: Evan Green <evgreen@chromium.org>
Reviewed-by: Andrew Duggan <aduggan@synaptics.com>
Link: https://lore.kernel.org/r/20191008223657.163366-1-evgreen@chromium.org
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/input/rmi4/rmi_driver.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

--- a/drivers/input/rmi4/rmi_driver.c
+++ b/drivers/input/rmi4/rmi_driver.c
@@ -165,7 +165,7 @@ static int rmi_process_interrupt_request
 	}
 
 	mutex_lock(&data->irq_mutex);
-	bitmap_and(data->irq_status, data->irq_status, data->current_irq_mask,
+	bitmap_and(data->irq_status, data->irq_status, data->fn_irq_bits,
 	       data->irq_count);
 	/*
 	 * At this point, irq_status has all bits that are set in the
@@ -412,6 +412,8 @@ static int rmi_driver_set_irq_bits(struc
 	bitmap_copy(data->current_irq_mask, data->new_irq_mask,
 		    data->num_of_irq_regs);
 
+	bitmap_or(data->fn_irq_bits, data->fn_irq_bits, mask, data->irq_count);
+
 error_unlock:
 	mutex_unlock(&data->irq_mutex);
 	return error;
@@ -425,6 +427,8 @@ static int rmi_driver_clear_irq_bits(str
 	struct device *dev = &rmi_dev->dev;
 
 	mutex_lock(&data->irq_mutex);
+	bitmap_andnot(data->fn_irq_bits,
+		      data->fn_irq_bits, mask, data->irq_count);
 	bitmap_andnot(data->new_irq_mask,
 		  data->current_irq_mask, mask, data->irq_count);
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 093/119] ASoC: rsnd: Reinitialize bit clock inversion flag for every format setting
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (91 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 092/119] Input: synaptics-rmi4 - avoid processing unknown IRQs Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 094/119] cfg80211: wext: avoid copying malformed SSIDs Greg Kroah-Hartman
                   ` (30 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Andrew Gabbasov, Jiada Wang,
	Timo Wischer, Junya Monden, Eugeniu Rosca, Kuninori Morimoto,
	Mark Brown

From: Junya Monden <jmonden@jp.adit-jv.com>

commit 22e58665a01006d05f0239621f7d41cacca96cc4 upstream.

Unlike other format-related DAI parameters, the rdai->bit_clk_inv flag
is not properly re-initialized when setting the format for new stream
processing. The inversion, if requested, is then applied not to the
default, but to a previous value, which leads to the SCKP bit in the
SSICR register being set incorrectly.
Fix this by re-setting the flag to its initial value, determined by format.

Fixes: 1a7889ca8aba3 ("ASoC: rsnd: fixup SND_SOC_DAIFMT_xB_xF behavior")
Cc: Andrew Gabbasov <andrew_gabbasov@mentor.com>
Cc: Jiada Wang <jiada_wang@mentor.com>
Cc: Timo Wischer <twischer@de.adit-jv.com>
Cc: stable@vger.kernel.org # v3.17+
Signed-off-by: Junya Monden <jmonden@jp.adit-jv.com>
Signed-off-by: Eugeniu Rosca <erosca@de.adit-jv.com>
Acked-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Link: https://lore.kernel.org/r/20191016124255.7442-1-erosca@de.adit-jv.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 sound/soc/sh/rcar/core.c |    1 +
 1 file changed, 1 insertion(+)

--- a/sound/soc/sh/rcar/core.c
+++ b/sound/soc/sh/rcar/core.c
@@ -676,6 +676,7 @@ static int rsnd_soc_dai_set_fmt(struct s
 	}
 
 	/* set format */
+	rdai->bit_clk_inv = 0;
 	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
 	case SND_SOC_DAIFMT_I2S:
 		rdai->sys_delay = 0;



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 094/119] cfg80211: wext: avoid copying malformed SSIDs
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (92 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 093/119] ASoC: rsnd: Reinitialize bit clock inversion flag for every format setting Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 095/119] mac80211: Reject malformed SSID elements Greg Kroah-Hartman
                   ` (29 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Kees Cook, Nicolas Waisman,
	Will Deacon, Johannes Berg

From: Will Deacon <will@kernel.org>

commit 4ac2813cc867ae563a1ba5a9414bfb554e5796fa upstream.

Ensure the SSID element is bounds-checked prior to invoking memcpy()
with its length field, when copying to userspace.
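
As an illustrative aside (not part of the patch), the general pattern of
validating an element's length byte against the destination's capacity
before calling memcpy() looks like this in plain C; all names are
invented:

  #include <stdio.h>
  #include <string.h>

  #define SSID_MAX 32

  /* ie[0] is the element ID, ie[1] the payload length, ie[2..] the payload. */
  int copy_ssid(char *dst, const unsigned char *ie)
  {
          unsigned char len = ie[1];

          if (len > SSID_MAX)
                  return -22;     /* -EINVAL: never trust the length blindly */
          memcpy(dst, ie + 2, len);
          dst[len] = '\0';
          return len;
  }

  int main(void)
  {
          const unsigned char ie[] = { 0x00, 7, 'e', 'x', 'a', 'm', 'p', 'l', 'e' };
          char ssid[SSID_MAX + 1];

          if (copy_ssid(ssid, ie) >= 0)
                  printf("SSID: %s\n", ssid);
          return 0;
  }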

Cc: <stable@vger.kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Reported-by: Nicolas Waisman <nico@semmle.com>
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20191004095132.15777-2-will@kernel.org
[adjust commit log a bit]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 net/wireless/wext-sme.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/net/wireless/wext-sme.c
+++ b/net/wireless/wext-sme.c
@@ -202,6 +202,7 @@ int cfg80211_mgd_wext_giwessid(struct ne
 			       struct iw_point *data, char *ssid)
 {
 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+	int ret = 0;
 
 	/* call only for station! */
 	if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION))
@@ -219,7 +220,10 @@ int cfg80211_mgd_wext_giwessid(struct ne
 		if (ie) {
 			data->flags = 1;
 			data->length = ie[1];
-			memcpy(ssid, ie + 2, data->length);
+			if (data->length > IW_ESSID_MAX_SIZE)
+				ret = -EINVAL;
+			else
+				memcpy(ssid, ie + 2, data->length);
 		}
 		rcu_read_unlock();
 	} else if (wdev->wext.connect.ssid && wdev->wext.connect.ssid_len) {
@@ -229,7 +233,7 @@ int cfg80211_mgd_wext_giwessid(struct ne
 	}
 	wdev_unlock(wdev);
 
-	return 0;
+	return ret;
 }
 
 int cfg80211_mgd_wext_siwap(struct net_device *dev,



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 095/119] mac80211: Reject malformed SSID elements
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (93 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 094/119] cfg80211: wext: avoid copying malformed SSIDs Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 096/119] drm/edid: Add 6 bpc quirk for SDC panel in Lenovo G50 Greg Kroah-Hartman
                   ` (28 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Kees Cook, Nicolas Waisman,
	Will Deacon, Johannes Berg

From: Will Deacon <will@kernel.org>

commit 4152561f5da3fca92af7179dd538ea89e248f9d0 upstream.

Although this shouldn't occur in practice, it's a good idea to bounds
check the length field of the SSID element prior to using it for things
like allocations or memcpy operations.

Cc: <stable@vger.kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Reported-by: Nicolas Waisman <nico@semmle.com>
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20191004095132.15777-1-will@kernel.org
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 net/mac80211/mlme.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -2430,7 +2430,8 @@ struct sk_buff *ieee80211_ap_probereq_ge
 
 	rcu_read_lock();
 	ssid = ieee80211_bss_get_ie(cbss, WLAN_EID_SSID);
-	if (WARN_ON_ONCE(ssid == NULL))
+	if (WARN_ONCE(!ssid || ssid[1] > IEEE80211_MAX_SSID_LEN,
+		      "invalid SSID element (len=%d)", ssid ? ssid[1] : -1))
 		ssid_len = 0;
 	else
 		ssid_len = ssid[1];
@@ -4756,7 +4757,7 @@ int ieee80211_mgd_assoc(struct ieee80211
 
 	rcu_read_lock();
 	ssidie = ieee80211_bss_get_ie(req->bss, WLAN_EID_SSID);
-	if (!ssidie) {
+	if (!ssidie || ssidie[1] > sizeof(assoc_data->ssid)) {
 		rcu_read_unlock();
 		kfree(assoc_data);
 		return -EINVAL;



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 096/119] drm/edid: Add 6 bpc quirk for SDC panel in Lenovo G50
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (94 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 095/119] mac80211: Reject malformed SSID elements Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 097/119] drm/amdgpu: Bail earlier when amdgpu.cik_/si_support is not set to 1 Greg Kroah-Hartman
                   ` (27 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Alex Deucher, Kai-Heng Feng

From: Kai-Heng Feng <kai.heng.feng@canonical.com>

commit 11bcf5f78905b90baae8fb01e16650664ed0cb00 upstream.

Another panel that needs the 6 bpc quirk.

BugLink: https://bugs.launchpad.net/bugs/1819968
Cc: <stable@vger.kernel.org> # v4.8+
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190402033037.21877-1-kai.heng.feng@canonical.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/gpu/drm/drm_edid.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -164,6 +164,9 @@ static const struct edid_quirk {
 	/* Medion MD 30217 PG */
 	{ "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 },
 
+	/* Lenovo G50 */
+	{ "SDC", 18514, EDID_QUIRK_FORCE_6BPC },
+
 	/* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
 	{ "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC },
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 097/119] drm/amdgpu: Bail earlier when amdgpu.cik_/si_support is not set to 1
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (95 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 096/119] drm/edid: Add 6 bpc quirk for SDC panel in Lenovo G50 Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 098/119] drivers/base/memory.c: dont access uninitialized memmaps in soft_offline_page_store() Greg Kroah-Hartman
                   ` (26 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Daniel Vetter, Hans de Goede, Alex Deucher

From: Hans de Goede <hdegoede@redhat.com>

commit 984d7a929ad68b7be9990fc9c5cfa5d5c9fc7942 upstream.

Bail from the pci_driver probe function instead of from the drm_driver
load function.

This avoids /dev/dri/card0 temporarily getting registered and then
unregistered again, sending unwanted add / remove udev events to
userspace.

Specifically this avoids triggering the (userspace) bug fixed by this
plymouth merge-request:
https://gitlab.freedesktop.org/plymouth/plymouth/merge_requests/59

Note that despite that being a userspace bug, not sending unnecessary
udev events is a good idea in general.

BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1490490
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |   35 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |   35 --------------------------------
 2 files changed, 35 insertions(+), 35 deletions(-)

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -572,6 +572,41 @@ static int amdgpu_pci_probe(struct pci_d
 	if (ret == -EPROBE_DEFER)
 		return ret;
 
+#ifdef CONFIG_DRM_AMDGPU_SI
+	if (!amdgpu_si_support) {
+		switch (flags & AMD_ASIC_MASK) {
+		case CHIP_TAHITI:
+		case CHIP_PITCAIRN:
+		case CHIP_VERDE:
+		case CHIP_OLAND:
+		case CHIP_HAINAN:
+			dev_info(&pdev->dev,
+				 "SI support provided by radeon.\n");
+			dev_info(&pdev->dev,
+				 "Use radeon.si_support=0 amdgpu.si_support=1 to override.\n"
+				);
+			return -ENODEV;
+		}
+	}
+#endif
+#ifdef CONFIG_DRM_AMDGPU_CIK
+	if (!amdgpu_cik_support) {
+		switch (flags & AMD_ASIC_MASK) {
+		case CHIP_KAVERI:
+		case CHIP_BONAIRE:
+		case CHIP_HAWAII:
+		case CHIP_KABINI:
+		case CHIP_MULLINS:
+			dev_info(&pdev->dev,
+				 "CIK support provided by radeon.\n");
+			dev_info(&pdev->dev,
+				 "Use radeon.cik_support=0 amdgpu.cik_support=1 to override.\n"
+				);
+			return -ENODEV;
+		}
+	}
+#endif
+
 	/* Get rid of things like offb */
 	ret = amdgpu_kick_out_firmware_fb(pdev);
 	if (ret)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -87,41 +87,6 @@ int amdgpu_driver_load_kms(struct drm_de
 	struct amdgpu_device *adev;
 	int r, acpi_status;
 
-#ifdef CONFIG_DRM_AMDGPU_SI
-	if (!amdgpu_si_support) {
-		switch (flags & AMD_ASIC_MASK) {
-		case CHIP_TAHITI:
-		case CHIP_PITCAIRN:
-		case CHIP_VERDE:
-		case CHIP_OLAND:
-		case CHIP_HAINAN:
-			dev_info(dev->dev,
-				 "SI support provided by radeon.\n");
-			dev_info(dev->dev,
-				 "Use radeon.si_support=0 amdgpu.si_support=1 to override.\n"
-				);
-			return -ENODEV;
-		}
-	}
-#endif
-#ifdef CONFIG_DRM_AMDGPU_CIK
-	if (!amdgpu_cik_support) {
-		switch (flags & AMD_ASIC_MASK) {
-		case CHIP_KAVERI:
-		case CHIP_BONAIRE:
-		case CHIP_HAWAII:
-		case CHIP_KABINI:
-		case CHIP_MULLINS:
-			dev_info(dev->dev,
-				 "CIK support provided by radeon.\n");
-			dev_info(dev->dev,
-				 "Use radeon.cik_support=0 amdgpu.cik_support=1 to override.\n"
-				);
-			return -ENODEV;
-		}
-	}
-#endif
-
 	adev = kzalloc(sizeof(struct amdgpu_device), GFP_KERNEL);
 	if (adev == NULL) {
 		return -ENOMEM;



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 098/119] drivers/base/memory.c: dont access uninitialized memmaps in soft_offline_page_store()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (96 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 097/119] drm/amdgpu: Bail earlier when amdgpu.cik_/si_support is not set to 1 Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 099/119] fs/proc/page.c: dont access uninitialized memmaps in fs/proc/page.c Greg Kroah-Hartman
                   ` (25 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, David Hildenbrand, Naoya Horiguchi,
	Michal Hocko, Rafael J. Wysocki, Andrew Morton, Linus Torvalds

From: David Hildenbrand <david@redhat.com>

commit 641fe2e9387a36f9ee01d7c69382d1fe147a5e98 upstream.

Uninitialized memmaps contain garbage and in the worst case trigger kernel
BUGs, especially with CONFIG_PAGE_POISONING.  They should not get touched.

Right now, when trying to soft-offline a PFN that resides on a memory
block that was never onlined, one gets a misleading error with
CONFIG_PAGE_POISONING:

  :/# echo 5637144576 > /sys/devices/system/memory/soft_offline_page
  [   23.097167] soft offline: 0x150000 page already poisoned

But the actual result depends on the garbage in the memmap.

soft_offline_page() can only work with online pages; it returns -EIO in
case of ZONE_DEVICE.  Make sure to only forward pages that are online
(in other words, managed by the buddy) and, therefore, have an
initialized memmap.

Add a check against pfn_to_online_page() and similarly return -EIO.

Link: http://lkml.kernel.org/r/20191010141200.8985-1-david@redhat.com
Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online")	[visible after d0dc12e86b319]
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: <stable@vger.kernel.org>	[4.13+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/base/memory.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -552,6 +552,9 @@ store_soft_offline_page(struct device *d
 	pfn >>= PAGE_SHIFT;
 	if (!pfn_valid(pfn))
 		return -ENXIO;
+	/* Only online pages can be soft-offlined (esp., not ZONE_DEVICE). */
+	if (!pfn_to_online_page(pfn))
+		return -EIO;
 	ret = soft_offline_page(pfn_to_page(pfn), 0);
 	return ret == 0 ? count : ret;
 }



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 099/119] fs/proc/page.c: dont access uninitialized memmaps in fs/proc/page.c
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (97 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 098/119] drivers/base/memory.c: dont access uninitialized memmaps in soft_offline_page_store() Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 100/119] scsi: zfcp: fix reaction on bit error threshold notification Greg Kroah-Hartman
                   ` (24 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, David Hildenbrand, Qian Cai,
	Michal Hocko, Dan Williams, Alexey Dobriyan, Stephen Rothwell,
	Toshiki Fukasawa, Pankaj gupta, Mike Rapoport, Anthony Yznaga,
	Aneesh Kumar K.V, Andrew Morton, Linus Torvalds

From: David Hildenbrand <david@redhat.com>

commit aad5f69bc161af489dbb5934868bd347282f0764 upstream.

There are three places where we access uninitialized memmaps, namely:
- /proc/kpagecount
- /proc/kpageflags
- /proc/kpagecgroup

We have initialized memmaps either when the section is online or when the
page was initialized to the ZONE_DEVICE.  Uninitialized memmaps contain
garbage and in the worst case trigger kernel BUGs, especially with
CONFIG_PAGE_POISONING.

For example, not onlining a DIMM during boot and calling /proc/kpagecount
with CONFIG_PAGE_POISONING:

  :/# cat /proc/kpagecount > tmp.test
  BUG: unable to handle page fault for address: fffffffffffffffe
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 114616067 P4D 114616067 PUD 114618067 PMD 0
  Oops: 0000 [#1] SMP NOPTI
  CPU: 0 PID: 469 Comm: cat Not tainted 5.4.0-rc1-next-20191004+ #11
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.4
  RIP: 0010:kpagecount_read+0xce/0x1e0
  Code: e8 09 83 e0 3f 48 0f a3 02 73 2d 4c 89 e7 48 c1 e7 06 48 03 3d ab 51 01 01 74 1d 48 8b 57 08 480
  RSP: 0018:ffffa14e409b7e78 EFLAGS: 00010202
  RAX: fffffffffffffffe RBX: 0000000000020000 RCX: 0000000000000000
  RDX: 0000000000000001 RSI: 00007f76b5595000 RDI: fffff35645000000
  RBP: 00007f76b5595000 R08: 0000000000000001 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000140000
  R13: 0000000000020000 R14: 00007f76b5595000 R15: ffffa14e409b7f08
  FS:  00007f76b577d580(0000) GS:ffff8f41bd400000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: fffffffffffffffe CR3: 0000000078960000 CR4: 00000000000006f0
  Call Trace:
   proc_reg_read+0x3c/0x60
   vfs_read+0xc5/0x180
   ksys_read+0x68/0xe0
   do_syscall_64+0x5c/0xa0
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

For now, let's drop support for ZONE_DEVICE from the three pseudo files
in order to fix this.  To distinguish offline memory (with garbage
memmap) from ZONE_DEVICE memory with properly initialized memmaps, we
would have to check get_dev_pagemap() and pfn_zone_device_reserved()
right now.  The usage of both (especially, special casing devmem) is
frowned upon and needs to be reworked.

The fundamental issue we have is:

	if (pfn_to_online_page(pfn)) {
		/* memmap initialized */
	} else if (pfn_valid(pfn)) {
		/*
		 * ???
		 * a) offline memory. memmap garbage.
		 * b) devmem: memmap initialized to ZONE_DEVICE.
		 * c) devmem: reserved for driver. memmap garbage.
		 * (d) devmem: memmap currently initializing - garbage)
		 */
	}

We'll leave the pfn_zone_device_reserved() check in stable_page_flags()
in place, as that function is also used from memory failure.  We no
longer dump information about offline pages, since they are not in use
anymore.

Link: http://lkml.kernel.org/r/20191009142435.3975-2-david@redhat.com
Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online")	[visible after d0dc12e86b319]
Signed-off-by: David Hildenbrand <david@redhat.com>
Reported-by: Qian Cai <cai@lca.pw>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com>
Cc: Pankaj gupta <pagupta@redhat.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Anthony Yznaga <anthony.yznaga@oracle.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: <stable@vger.kernel.org>	[4.13+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 fs/proc/page.c |   28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -42,10 +42,12 @@ static ssize_t kpagecount_read(struct fi
 		return -EINVAL;
 
 	while (count > 0) {
-		if (pfn_valid(pfn))
-			ppage = pfn_to_page(pfn);
-		else
-			ppage = NULL;
+		/*
+		 * TODO: ZONE_DEVICE support requires to identify
+		 * memmaps that were actually initialized.
+		 */
+		ppage = pfn_to_online_page(pfn);
+
 		if (!ppage || PageSlab(ppage))
 			pcount = 0;
 		else
@@ -214,10 +216,11 @@ static ssize_t kpageflags_read(struct fi
 		return -EINVAL;
 
 	while (count > 0) {
-		if (pfn_valid(pfn))
-			ppage = pfn_to_page(pfn);
-		else
-			ppage = NULL;
+		/*
+		 * TODO: ZONE_DEVICE support requires to identify
+		 * memmaps that were actually initialized.
+		 */
+		ppage = pfn_to_online_page(pfn);
 
 		if (put_user(stable_page_flags(ppage), out)) {
 			ret = -EFAULT;
@@ -259,10 +262,11 @@ static ssize_t kpagecgroup_read(struct f
 		return -EINVAL;
 
 	while (count > 0) {
-		if (pfn_valid(pfn))
-			ppage = pfn_to_page(pfn);
-		else
-			ppage = NULL;
+		/*
+		 * TODO: ZONE_DEVICE support requires to identify
+		 * memmaps that were actually initialized.
+		 */
+		ppage = pfn_to_online_page(pfn);
 
 		if (ppage)
 			ino = page_cgroup_ino(ppage);



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 100/119] scsi: zfcp: fix reaction on bit error threshold notification
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (98 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 099/119] fs/proc/page.c: dont access uninitialized memmaps in fs/proc/page.c Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 101/119] mm/slub: fix a deadlock in show_slab_objects() Greg Kroah-Hartman
                   ` (23 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Jens Remus, Benjamin Block,
	Steffen Maier, Martin K. Petersen, Sasha Levin

From: Steffen Maier <maier@linux.ibm.com>

[ Upstream commit 2190168aaea42c31bff7b9a967e7b045f07df095 ]

On excessive bit errors for the FCP channel ingress fibre path, the channel
notifies us.  Previously, we only emitted a kernel message and a trace
record.  Since performance can become suboptimal with I/O timeouts due
to bit errors, we now stop using an FCP device by default on channel
notification, so that multipath on top can fail over to other paths in a
timely manner.  A new module parameter, zfcp.ber_stop, can be used to
restore the old zfcp behavior.

User explanation of new kernel message:

 * Description:
 * The FCP channel reported that its bit error threshold has been exceeded.
 * These errors might result from a problem with the physical components
 * of the local fibre link into the FCP channel.
 * The problem might be damage or malfunction of the cable or
 * cable connection between the FCP channel and
 * the adjacent fabric switch port or the point-to-point peer.
 * Find details about the errors in the HBA trace for the FCP device.
 * The zfcp device driver closed down the FCP device
 * to limit the performance impact from possible I/O command timeouts.
 * User action:
 * Check for problems on the local fibre link, ensure that fibre optics are
 * clean and functional, and all cables are properly plugged.
 * After the repair action, you can manually recover the FCP device by
 * writing "0" into its "failed" sysfs attribute.
 * If recovery through sysfs is not possible, set the CHPID of the device
 * offline and back online on the service element.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: <stable@vger.kernel.org> #2.6.30+
Link: https://lore.kernel.org/r/20191001104949.42810-1-maier@linux.ibm.com
Reviewed-by: Jens Remus <jremus@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/s390/scsi/zfcp_fsf.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
index 00fb98f7b2cd0..94d1bcc83fa2e 100644
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -21,6 +21,11 @@
 
 struct kmem_cache *zfcp_fsf_qtcb_cache;
 
+static bool ber_stop = true;
+module_param(ber_stop, bool, 0600);
+MODULE_PARM_DESC(ber_stop,
+		 "Shuts down FCP devices for FCP channels that report a bit-error count in excess of its threshold (default on)");
+
 static void zfcp_fsf_request_timeout_handler(unsigned long data)
 {
 	struct zfcp_adapter *adapter = (struct zfcp_adapter *) data;
@@ -230,10 +235,15 @@ static void zfcp_fsf_status_read_handler(struct zfcp_fsf_req *req)
 	case FSF_STATUS_READ_SENSE_DATA_AVAIL:
 		break;
 	case FSF_STATUS_READ_BIT_ERROR_THRESHOLD:
-		dev_warn(&adapter->ccw_device->dev,
-			 "The error threshold for checksum statistics "
-			 "has been exceeded\n");
 		zfcp_dbf_hba_bit_err("fssrh_3", req);
+		if (ber_stop) {
+			dev_warn(&adapter->ccw_device->dev,
+				 "All paths over this FCP device are disused because of excessive bit errors\n");
+			zfcp_erp_adapter_shutdown(adapter, 0, "fssrh_b");
+		} else {
+			dev_warn(&adapter->ccw_device->dev,
+				 "The error threshold for checksum statistics has been exceeded\n");
+		}
 		break;
 	case FSF_STATUS_READ_LINK_DOWN:
 		zfcp_fsf_status_read_link_down(req);
-- 
2.20.1




^ permalink raw reply related	[flat|nested] 135+ messages in thread

* [PATCH 4.14 101/119] mm/slub: fix a deadlock in show_slab_objects()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (99 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 100/119] scsi: zfcp: fix reaction on bit error threshold notification Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 102/119] mm/page_owner: dont access uninitialized memmaps when reading /proc/pagetypeinfo Greg Kroah-Hartman
                   ` (22 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Qian Cai, Michal Hocko,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Tejun Heo, Vladimir Davydov, Roman Gushchin, Andrew Morton,
	Linus Torvalds

From: Qian Cai <cai@lca.pw>

commit e4f8e513c3d353c134ad4eef9fd0bba12406c7c8 upstream.

A long time ago we fixed a similar deadlock in show_slab_objects() [1].
However, apparently due to commits like 01fb58bcba63 ("slab:
remove synchronous synchronize_sched() from memcg cache deactivation
path") and 03afc0e25f7f ("slab: get_online_mems for
kmem_cache_{create,destroy,shrink}"), this kind of deadlock is back:
just reading files in /sys/kernel/slab will generate the lockdep
splat below.

Since the "mem_hotplug_lock" here is only to obtain a stable online node
mask while racing with NUMA node hotplug, in the worst case, the results
may me miscalculated while doing NUMA node hotplug, but they shall be
corrected by later reads of the same files.

  WARNING: possible circular locking dependency detected
  ------------------------------------------------------
  cat/5224 is trying to acquire lock:
  ffff900012ac3120 (mem_hotplug_lock.rw_sem){++++}, at:
  show_slab_objects+0x94/0x3a8

  but task is already holding lock:
  b8ff009693eee398 (kn->count#45){++++}, at: kernfs_seq_start+0x44/0xf0

  which lock already depends on the new lock.

  the existing dependency chain (in reverse order) is:

  -> #2 (kn->count#45){++++}:
         lock_acquire+0x31c/0x360
         __kernfs_remove+0x290/0x490
         kernfs_remove+0x30/0x44
         sysfs_remove_dir+0x70/0x88
         kobject_del+0x50/0xb0
         sysfs_slab_unlink+0x2c/0x38
         shutdown_cache+0xa0/0xf0
         kmemcg_cache_shutdown_fn+0x1c/0x34
         kmemcg_workfn+0x44/0x64
         process_one_work+0x4f4/0x950
         worker_thread+0x390/0x4bc
         kthread+0x1cc/0x1e8
         ret_from_fork+0x10/0x18

  -> #1 (slab_mutex){+.+.}:
         lock_acquire+0x31c/0x360
         __mutex_lock_common+0x16c/0xf78
         mutex_lock_nested+0x40/0x50
         memcg_create_kmem_cache+0x38/0x16c
         memcg_kmem_cache_create_func+0x3c/0x70
         process_one_work+0x4f4/0x950
         worker_thread+0x390/0x4bc
         kthread+0x1cc/0x1e8
         ret_from_fork+0x10/0x18

  -> #0 (mem_hotplug_lock.rw_sem){++++}:
         validate_chain+0xd10/0x2bcc
         __lock_acquire+0x7f4/0xb8c
         lock_acquire+0x31c/0x360
         get_online_mems+0x54/0x150
         show_slab_objects+0x94/0x3a8
         total_objects_show+0x28/0x34
         slab_attr_show+0x38/0x54
         sysfs_kf_seq_show+0x198/0x2d4
         kernfs_seq_show+0xa4/0xcc
         seq_read+0x30c/0x8a8
         kernfs_fop_read+0xa8/0x314
         __vfs_read+0x88/0x20c
         vfs_read+0xd8/0x10c
         ksys_read+0xb0/0x120
         __arm64_sys_read+0x54/0x88
         el0_svc_handler+0x170/0x240
         el0_svc+0x8/0xc

  other info that might help us debug this:

  Chain exists of:
    mem_hotplug_lock.rw_sem --> slab_mutex --> kn->count#45

   Possible unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(kn->count#45);
                                 lock(slab_mutex);
                                 lock(kn->count#45);
    lock(mem_hotplug_lock.rw_sem);

   *** DEADLOCK ***

  3 locks held by cat/5224:
   #0: 9eff00095b14b2a0 (&p->lock){+.+.}, at: seq_read+0x4c/0x8a8
   #1: 0eff008997041480 (&of->mutex){+.+.}, at: kernfs_seq_start+0x34/0xf0
   #2: b8ff009693eee398 (kn->count#45){++++}, at:
  kernfs_seq_start+0x44/0xf0

  stack backtrace:
  Call trace:
   dump_backtrace+0x0/0x248
   show_stack+0x20/0x2c
   dump_stack+0xd0/0x140
   print_circular_bug+0x368/0x380
   check_noncircular+0x248/0x250
   validate_chain+0xd10/0x2bcc
   __lock_acquire+0x7f4/0xb8c
   lock_acquire+0x31c/0x360
   get_online_mems+0x54/0x150
   show_slab_objects+0x94/0x3a8
   total_objects_show+0x28/0x34
   slab_attr_show+0x38/0x54
   sysfs_kf_seq_show+0x198/0x2d4
   kernfs_seq_show+0xa4/0xcc
   seq_read+0x30c/0x8a8
   kernfs_fop_read+0xa8/0x314
   __vfs_read+0x88/0x20c
   vfs_read+0xd8/0x10c
   ksys_read+0xb0/0x120
   __arm64_sys_read+0x54/0x88
   el0_svc_handler+0x170/0x240
   el0_svc+0x8/0xc

I think it is important to mention that this doesn't expose
show_slab_objects() to a use-after-free.  There is only a single path
that might really race here, and that is the slab hotplug notifier
callback
__kmem_cache_shrink (via slab_mem_going_offline_callback) but that path
doesn't really destroy kmem_cache_node data structures.

[1] http://lkml.iu.edu/hypermail/linux/kernel/1101.0/02850.html

[akpm@linux-foundation.org: add comment explaining why we don't need mem_hotplug_lock]
Link: http://lkml.kernel.org/r/1570192309-10132-1-git-send-email-cai@lca.pw
Fixes: 01fb58bcba63 ("slab: remove synchronous synchronize_sched() from memcg cache deactivation path")
Fixes: 03afc0e25f7f ("slab: get_online_mems for kmem_cache_{create,destroy,shrink}")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/slub.c |   13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4790,7 +4790,17 @@ static ssize_t show_slab_objects(struct
 		}
 	}
 
-	get_online_mems();
+	/*
+	 * It is impossible to take "mem_hotplug_lock" here with "kernfs_mutex"
+	 * already held which will conflict with an existing lock order:
+	 *
+	 * mem_hotplug_lock->slab_mutex->kernfs_mutex
+	 *
+	 * We don't really need mem_hotplug_lock (to hold off
+	 * slab_mem_going_offline_callback) here because slab's memory hot
+	 * unplug code doesn't destroy the kmem_cache->node[] data.
+	 */
+
 #ifdef CONFIG_SLUB_DEBUG
 	if (flags & SO_ALL) {
 		struct kmem_cache_node *n;
@@ -4831,7 +4841,6 @@ static ssize_t show_slab_objects(struct
 			x += sprintf(buf + x, " N%d=%lu",
 					node, nodes[node]);
 #endif
-	put_online_mems();
 	kfree(nodes);
 	return x + sprintf(buf + x, "\n");
 }



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 102/119] mm/page_owner: dont access uninitialized memmaps when reading /proc/pagetypeinfo
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (100 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 101/119] mm/slub: fix a deadlock in show_slab_objects() Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 103/119] hugetlbfs: dont access uninitialized memmaps in pfn_range_valid_gigantic() Greg Kroah-Hartman
                   ` (21 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Qian Cai, David Hildenbrand,
	Michal Hocko, Vlastimil Babka, Thomas Gleixner,
	Peter Zijlstra (Intel),
	Miles Chen, Mike Rapoport, Andrew Morton, Linus Torvalds

From: Qian Cai <cai@lca.pw>

commit a26ee565b6cd8dc2bf15ff6aa70bbb28f928b773 upstream.

Uninitialized memmaps contain garbage and in the worst case trigger
kernel BUGs, especially with CONFIG_PAGE_POISONING.  They should not get
touched.

For example, when not onlining a memory block that is spanned by a zone
and reading /proc/pagetypeinfo with CONFIG_DEBUG_VM_PGFLAGS and
CONFIG_PAGE_POISONING, we can trigger a kernel BUG:

  :/# echo 1 > /sys/devices/system/memory/memory40/online
  :/# echo 1 > /sys/devices/system/memory/memory42/online
  :/# cat /proc/pagetypeinfo > test.file
   page:fffff2c585200000 is uninitialized and poisoned
   raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
   There is not page extension available.
   ------------[ cut here ]------------
   kernel BUG at include/linux/mm.h:1107!
   invalid opcode: 0000 [#1] SMP NOPTI

Please note that this change does not affect ZONE_DEVICE, because
pagetypeinfo_showmixedcount_print() is called from
mm/vmstat.c:pagetypeinfo_showmixedcount() only for populated zones, and
ZONE_DEVICE is never populated (zone->present_pages always 0).

[david@redhat.com: move check to outer loop, add comment, rephrase description]
Link: http://lkml.kernel.org/r/20191011140638.8160-1-david@redhat.com
Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online") # visible after d0dc12e86b319
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Miles Chen <miles.chen@mediatek.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: <stable@vger.kernel.org>	[4.13+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/page_owner.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -273,7 +273,8 @@ void pagetypeinfo_showmixedcount_print(s
 	 * not matter as the mixed block count will still be correct
 	 */
 	for (; pfn < end_pfn; ) {
-		if (!pfn_valid(pfn)) {
+		page = pfn_to_online_page(pfn);
+		if (!page) {
 			pfn = ALIGN(pfn + 1, MAX_ORDER_NR_PAGES);
 			continue;
 		}
@@ -281,13 +282,13 @@ void pagetypeinfo_showmixedcount_print(s
 		block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
 		block_end_pfn = min(block_end_pfn, end_pfn);
 
-		page = pfn_to_page(pfn);
 		pageblock_mt = get_pageblock_migratetype(page);
 
 		for (; pfn < block_end_pfn; pfn++) {
 			if (!pfn_valid_within(pfn))
 				continue;
 
+			/* The pageblock is online, no need to recheck. */
 			page = pfn_to_page(pfn);
 
 			if (page_zone(page) != zone)



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 103/119] hugetlbfs: dont access uninitialized memmaps in pfn_range_valid_gigantic()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (101 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 102/119] mm/page_owner: dont access uninitialized memmaps when reading /proc/pagetypeinfo Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 104/119] xtensa: drop EXPORT_SYMBOL for outs*/ins* Greg Kroah-Hartman
                   ` (20 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, David Hildenbrand, Michal Hocko,
	Michal Hocko, Mike Kravetz, Anshuman Khandual, Andrew Morton,
	Linus Torvalds

From: David Hildenbrand <david@redhat.com>

commit f231fe4235e22e18d847e05cbe705deaca56580a upstream.

Uninitialized memmaps contain garbage and in the worst case trigger
kernel BUGs, especially with CONFIG_PAGE_POISONING.  They should not get
touched.

Let's make sure that we only consider online memory (managed by the
buddy) that has initialized memmaps.  ZONE_DEVICE is not applicable.

page_zone() will call page_to_nid(), which will trigger
VM_BUG_ON_PGFLAGS(PagePoisoned(page), page) with CONFIG_PAGE_POISONING
and CONFIG_DEBUG_VM_PGFLAGS when called on uninitialized memmaps.  This
can be the case when an offline memory block (e.g., never onlined) is
spanned by a zone.

Note: As explained by Michal in [1], alloc_contig_range() will verify
the range.  So it boils down to the wrong access in this function.

[1] http://lkml.kernel.org/r/20180423000943.GO17484@dhcp22.suse.cz

Link: http://lkml.kernel.org/r/20191015120717.4858-1-david@redhat.com
Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online")	[visible after d0dc12e86b319]
Signed-off-by: David Hildenbrand <david@redhat.com>
Reported-by: Michal Hocko <mhocko@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: <stable@vger.kernel.org>	[4.13+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/hugetlb.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1081,11 +1081,10 @@ static bool pfn_range_valid_gigantic(str
 	struct page *page;
 
 	for (i = start_pfn; i < end_pfn; i++) {
-		if (!pfn_valid(i))
+		page = pfn_to_online_page(i);
+		if (!page)
 			return false;
 
-		page = pfn_to_page(i);
-
 		if (page_zone(page) != z)
 			return false;
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 104/119] xtensa: drop EXPORT_SYMBOL for outs*/ins*
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (102 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 103/119] hugetlbfs: dont access uninitialized memmaps in pfn_range_valid_gigantic() Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 105/119] parisc: Fix vmap memory leak in ioremap()/iounmap() Greg Kroah-Hartman
                   ` (19 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Max Filippov

From: Max Filippov <jcmvbkbc@gmail.com>

commit 8b39da985194aac2998dd9e3a22d00b596cebf1e upstream.

Custom outs*/ins* implementations are long gone from the xtensa port,
so remove the matching EXPORT_SYMBOLs.
This fixes the following build warnings issued by modpost since commit
15bfc2348d54 ("modpost: check for static EXPORT_SYMBOL* functions"):

  WARNING: "insb" [vmlinux] is a static EXPORT_SYMBOL
  WARNING: "insw" [vmlinux] is a static EXPORT_SYMBOL
  WARNING: "insl" [vmlinux] is a static EXPORT_SYMBOL
  WARNING: "outsb" [vmlinux] is a static EXPORT_SYMBOL
  WARNING: "outsw" [vmlinux] is a static EXPORT_SYMBOL
  WARNING: "outsl" [vmlinux] is a static EXPORT_SYMBOL

Cc: stable@vger.kernel.org
Fixes: d38efc1f150f ("xtensa: adopt generic io routines")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/xtensa/kernel/xtensa_ksyms.c |    7 -------
 1 file changed, 7 deletions(-)

--- a/arch/xtensa/kernel/xtensa_ksyms.c
+++ b/arch/xtensa/kernel/xtensa_ksyms.c
@@ -114,13 +114,6 @@ EXPORT_SYMBOL(__invalidate_icache_range)
 // FIXME EXPORT_SYMBOL(screen_info);
 #endif
 
-EXPORT_SYMBOL(outsb);
-EXPORT_SYMBOL(outsw);
-EXPORT_SYMBOL(outsl);
-EXPORT_SYMBOL(insb);
-EXPORT_SYMBOL(insw);
-EXPORT_SYMBOL(insl);
-
 extern long common_exception_return;
 EXPORT_SYMBOL(common_exception_return);
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 105/119] parisc: Fix vmap memory leak in ioremap()/iounmap()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (103 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 104/119] xtensa: drop EXPORT_SYMBOL for outs*/ins* Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 106/119] CIFS: avoid using MID 0xFFFF Greg Kroah-Hartman
                   ` (18 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Helge Deller, Sven Schnelle

From: Helge Deller <deller@gmx.de>

commit 513f7f747e1cba81f28a436911fba0b485878ebd upstream.

Sven noticed that calling ioremap() and iounmap() multiple times leads
to a vmap memory leak:
	vmap allocation for size 4198400 failed:
	use vmalloc=<size> to increase size

It seems we missed calling vunmap() in iounmap().

Signed-off-by: Helge Deller <deller@gmx.de>
Noticed-by: Sven Schnelle <svens@stackframe.org>
Cc: <stable@vger.kernel.org> # v3.16+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/parisc/mm/ioremap.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

--- a/arch/parisc/mm/ioremap.c
+++ b/arch/parisc/mm/ioremap.c
@@ -3,7 +3,7 @@
  * arch/parisc/mm/ioremap.c
  *
  * (C) Copyright 1995 1996 Linus Torvalds
- * (C) Copyright 2001-2006 Helge Deller <deller@gmx.de>
+ * (C) Copyright 2001-2019 Helge Deller <deller@gmx.de>
  * (C) Copyright 2005 Kyle McMartin <kyle@parisc-linux.org>
  */
 
@@ -84,7 +84,7 @@ void __iomem * __ioremap(unsigned long p
 	addr = (void __iomem *) area->addr;
 	if (ioremap_page_range((unsigned long)addr, (unsigned long)addr + size,
 			       phys_addr, pgprot)) {
-		vfree(addr);
+		vunmap(addr);
 		return NULL;
 	}
 
@@ -92,9 +92,11 @@ void __iomem * __ioremap(unsigned long p
 }
 EXPORT_SYMBOL(__ioremap);
 
-void iounmap(const volatile void __iomem *addr)
+void iounmap(const volatile void __iomem *io_addr)
 {
-	if (addr > high_memory)
-		return vfree((void *) (PAGE_MASK & (unsigned long __force) addr));
+	unsigned long addr = (unsigned long)io_addr & PAGE_MASK;
+
+	if (is_vmalloc_addr((void *)addr))
+		vunmap((void *)addr);
 }
 EXPORT_SYMBOL(iounmap);



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 106/119] CIFS: avoid using MID 0xFFFF
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (104 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 105/119] parisc: Fix vmap memory leak in ioremap()/iounmap() Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 107/119] x86/boot/64: Make level2_kernel_pgt pages invalid outside kernel area Greg Kroah-Hartman
                   ` (17 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Roberto Bergantinos Corpas,
	Ronnie Sahlberg, Aurelien Aptel, Steve French

From: Roberto Bergantinos Corpas <rbergant@redhat.com>

commit 03d9a9fe3f3aec508e485dd3dcfa1e99933b4bdb upstream.

According to the MS-CIFS specification, MID 0xFFFF should not be used by the
CIFS client, but we actually do. Besides, this has proven to cause races
leading to oopses between SendReceive2/cifs_demultiplex_thread. On SMB1,
the MID is a 2-byte value that is easy to reach in CurrentMid and may
conflict with an oplock break notification request coming from the server.

Signed-off-by: Roberto Bergantinos Corpas <rbergant@redhat.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
CC: Stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 fs/cifs/smb1ops.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/fs/cifs/smb1ops.c
+++ b/fs/cifs/smb1ops.c
@@ -181,6 +181,9 @@ cifs_get_next_mid(struct TCP_Server_Info
 	/* we do not want to loop forever */
 	last_mid = cur_mid;
 	cur_mid++;
+	/* avoid 0xFFFF MID */
+	if (cur_mid == 0xffff)
+		cur_mid++;
 
 	/*
 	 * This nested loop looks more expensive than it is.
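
As a standalone illustration (not the cifs allocator, whose loop above also has to avoid MIDs that are still in flight; the starting value below is chosen only for the demonstration), the 16-bit MID arithmetic the change affects looks like this: the counter wraps quickly, and the reserved value 0xFFFF is simply skipped.

#include <stdio.h>
#include <stdint.h>

/* Hand out 16-bit MIDs while never returning the reserved value 0xFFFF. */
static uint16_t next_mid(uint16_t *cur)
{
	(*cur)++;
	if (*cur == 0xffff)	/* reserved by MS-CIFS, skip it */
		(*cur)++;	/* the 16-bit counter wraps to 0 here */
	return *cur;
}

int main(void)
{
	uint16_t mid = 0xfffd;

	for (int i = 0; i < 4; i++)
		printf("0x%04x\n", next_mid(&mid));	/* fffe, 0000, 0001, 0002 */
	return 0;
}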



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 107/119] x86/boot/64: Make level2_kernel_pgt pages invalid outside kernel area
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (105 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 106/119] CIFS: avoid using MID 0xFFFF Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 108/119] pinctrl: armada-37xx: fix control of pins 32 and up Greg Kroah-Hartman
                   ` (16 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Steve Wahl, Borislav Petkov,
	Dave Hansen, Kirill A. Shutemov, Baoquan He, Brijesh Singh,
	dimitri.sivanich, Feng Tang, H. Peter Anvin, Ingo Molnar,
	Jordan Borgner, Juergen Gross, mike.travis, russ.anderson,
	Thomas Gleixner, x86-ml, Zhenzhong Duan

From: Steve Wahl <steve.wahl@hpe.com>

commit 2aa85f246c181b1fa89f27e8e20c5636426be624 upstream.

Our hardware (UV aka Superdome Flex) has address ranges marked
reserved by the BIOS. Access to these ranges is caught as an error,
causing the BIOS to halt the system.

Initial page tables mapped a large range of physical addresses that
were not checked against the list of BIOS reserved addresses, and
sometimes included reserved addresses in part of the mapped range.
Including the reserved range in the map allowed processor speculative
accesses to the reserved range, triggering a BIOS halt.

Used early in booting, the page table level2_kernel_pgt addresses 1 GiB
divided into 2 MiB pages, and it was set up to linearly map a full 1 GiB
of physical addresses that included the physical address range of the
kernel image, as chosen by KASLR.  But this also included a large range
of unused addresses on either side of the kernel image.
And unlike the kernel image's physical address range, this extra
mapped space was not checked against the BIOS tables of usable RAM
addresses.  So there were times when the addresses chosen by KASLR
would result in processor accessible mappings of BIOS reserved
physical addresses.

The kernel code did not directly access any of this extra mapped
space, but having it mapped allowed the processor to issue speculative
accesses into reserved memory, causing system halts.

This was encountered somewhat rarely on a normal system boot, and much
more often when starting the crash kernel if "crashkernel=512M,high"
was specified on the command line (this heavily restricts the physical
address of the crash kernel, in our case usually within 1 GiB of
reserved space).

The solution is to invalidate the pages of this table outside the kernel
image's space before the page table is activated. It fixes this problem
on our hardware.

 [ bp: Touchups. ]

Signed-off-by: Steve Wahl <steve.wahl@hpe.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: dimitri.sivanich@hpe.com
Cc: Feng Tang <feng.tang@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jordan Borgner <mail@jordan-borgner.de>
Cc: Juergen Gross <jgross@suse.com>
Cc: mike.travis@hpe.com
Cc: russ.anderson@hpe.com
Cc: stable@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Cc: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Link: https://lkml.kernel.org/r/9c011ee51b081534a7a15065b1681d200298b530.1569358539.git.steve.wahl@hpe.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/kernel/head64.c |   22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -145,13 +145,31 @@ unsigned long __head __startup_64(unsign
 	 * we might write invalid pmds, when the kernel is relocated
 	 * cleanup_highmap() fixes this up along with the mappings
 	 * beyond _end.
+	 *
+	 * Only the region occupied by the kernel image has so far
+	 * been checked against the table of usable memory regions
+	 * provided by the firmware, so invalidate pages outside that
+	 * region. A page table entry that maps to a reserved area of
+	 * memory would allow processor speculation into that area,
+	 * and on some hardware (particularly the UV platform) even
+	 * speculative access to some reserved areas is caught as an
+	 * error, causing the BIOS to halt the system.
 	 */
 
 	pmd = fixup_pointer(level2_kernel_pgt, physaddr);
-	for (i = 0; i < PTRS_PER_PMD; i++) {
+
+	/* invalidate pages before the kernel image */
+	for (i = 0; i < pmd_index((unsigned long)_text); i++)
+		pmd[i] &= ~_PAGE_PRESENT;
+
+	/* fixup pages that are part of the kernel image */
+	for (; i <= pmd_index((unsigned long)_end); i++)
 		if (pmd[i] & _PAGE_PRESENT)
 			pmd[i] += load_delta;
-	}
+
+	/* invalidate pages after the kernel image */
+	for (; i < PTRS_PER_PMD; i++)
+		pmd[i] &= ~_PAGE_PRESENT;
 
 	/*
 	 * Fixup phys_base - remove the memory encryption mask to obtain
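
A toy, self-contained rendering of the three loops above (plain C, not kernel code; the index values for the kernel image span are invented): entries covering the image keep their mapping, everything else loses the present bit so no address outside the verified range stays mapped.

#include <stdio.h>

#define PTRS_PER_PMD	512
#define _PAGE_PRESENT	0x1UL

int main(void)
{
	unsigned long pmd[PTRS_PER_PMD];
	unsigned int text_idx = 100, end_idx = 140;	/* hypothetical kernel image span */
	unsigned int i, present = 0;

	for (i = 0; i < PTRS_PER_PMD; i++)	/* pretend the whole GiB was mapped */
		pmd[i] = _PAGE_PRESENT;

	for (i = 0; i < text_idx; i++)		/* before the image: invalidate */
		pmd[i] &= ~_PAGE_PRESENT;
	for (; i <= end_idx; i++)		/* the image itself: left mapped */
		;
	for (; i < PTRS_PER_PMD; i++)		/* after the image: invalidate */
		pmd[i] &= ~_PAGE_PRESENT;

	for (i = 0; i < PTRS_PER_PMD; i++)
		present += pmd[i] & _PAGE_PRESENT;
	printf("%u of %d entries still present\n", present, PTRS_PER_PMD);	/* 41 of 512 */
	return 0;
}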



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 108/119] pinctrl: armada-37xx: fix control of pins 32 and up
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (106 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 107/119] x86/boot/64: Make level2_kernel_pgt pages invalid outside kernel area Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 109/119] pinctrl: armada-37xx: swap polarity on LED group Greg Kroah-Hartman
                   ` (15 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Patrick Williams, Gregory CLEMENT,
	Linus Walleij

From: Patrick Williams <alpawi@amazon.com>

commit 20504fa1d2ffd5d03cdd9dc9c9dd4ed4579b97ef upstream.

The 37xx configuration registers are only 32 bits long, so
pins 32-35 spill over into the next register.  The register address
was calculated correctly, but the bit offset within the register was
not adjusted, so any configuration of pin 32 or above resulted in a
bitmask that overflowed and performed no action.

Fix the register / offset calculation to also adjust the offset.

Fixes: 5715092a458c ("pinctrl: armada-37xx: Add gpio support")
Signed-off-by: Patrick Williams <alpawi@amazon.com>
Acked-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191001154634.96165-1-alpawi@amazon.com
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/pinctrl/mvebu/pinctrl-armada-37xx.c |   18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
@@ -205,11 +205,11 @@ static const struct armada_37xx_pin_data
 };
 
 static inline void armada_37xx_update_reg(unsigned int *reg,
-					  unsigned int offset)
+					  unsigned int *offset)
 {
 	/* We never have more than 2 registers */
-	if (offset >= GPIO_PER_REG) {
-		offset -= GPIO_PER_REG;
+	if (*offset >= GPIO_PER_REG) {
+		*offset -= GPIO_PER_REG;
 		*reg += sizeof(u32);
 	}
 }
@@ -373,7 +373,7 @@ static inline void armada_37xx_irq_updat
 {
 	int offset = irqd_to_hwirq(d);
 
-	armada_37xx_update_reg(reg, offset);
+	armada_37xx_update_reg(reg, &offset);
 }
 
 static int armada_37xx_gpio_direction_input(struct gpio_chip *chip,
@@ -383,7 +383,7 @@ static int armada_37xx_gpio_direction_in
 	unsigned int reg = OUTPUT_EN;
 	unsigned int mask;
 
-	armada_37xx_update_reg(&reg, offset);
+	armada_37xx_update_reg(&reg, &offset);
 	mask = BIT(offset);
 
 	return regmap_update_bits(info->regmap, reg, mask, 0);
@@ -396,7 +396,7 @@ static int armada_37xx_gpio_get_directio
 	unsigned int reg = OUTPUT_EN;
 	unsigned int val, mask;
 
-	armada_37xx_update_reg(&reg, offset);
+	armada_37xx_update_reg(&reg, &offset);
 	mask = BIT(offset);
 	regmap_read(info->regmap, reg, &val);
 
@@ -410,7 +410,7 @@ static int armada_37xx_gpio_direction_ou
 	unsigned int reg = OUTPUT_EN;
 	unsigned int mask, val, ret;
 
-	armada_37xx_update_reg(&reg, offset);
+	armada_37xx_update_reg(&reg, &offset);
 	mask = BIT(offset);
 
 	ret = regmap_update_bits(info->regmap, reg, mask, mask);
@@ -431,7 +431,7 @@ static int armada_37xx_gpio_get(struct g
 	unsigned int reg = INPUT_VAL;
 	unsigned int val, mask;
 
-	armada_37xx_update_reg(&reg, offset);
+	armada_37xx_update_reg(&reg, &offset);
 	mask = BIT(offset);
 
 	regmap_read(info->regmap, reg, &val);
@@ -446,7 +446,7 @@ static void armada_37xx_gpio_set(struct
 	unsigned int reg = OUTPUT_VAL;
 	unsigned int mask, val;
 
-	armada_37xx_update_reg(&reg, offset);
+	armada_37xx_update_reg(&reg, &offset);
 	mask = BIT(offset);
 	val = value ? mask : 0;
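
To make the overflow concrete, here is a standalone program (not driver code; the helper name is invented) showing why a pin number of 32 or more must be split into a register index plus an in-register offset before the mask is built, which is exactly the adjustment the patch makes by passing the offset by pointer.

#include <stdio.h>
#include <inttypes.h>

#define GPIO_PER_REG 32

/* Split a pin number into a 32-bit register index and a bit mask. */
static void pin_to_reg_and_mask(unsigned int pin,
				unsigned int *reg_index, uint32_t *mask)
{
	*reg_index = pin / GPIO_PER_REG;		/* which 32-bit register */
	*mask = UINT32_C(1) << (pin % GPIO_PER_REG);	/* bit inside that register */
}

int main(void)
{
	unsigned int reg;
	uint32_t mask;

	pin_to_reg_and_mask(33, &reg, &mask);
	printf("pin 33 -> register %u, mask 0x%08" PRIx32 "\n", reg, mask);	/* register 1, mask 0x00000002 */

	/*
	 * Without the adjustment the driver effectively computed 1 << 33 on a
	 * 32-bit value, which is undefined and in practice yielded a mask
	 * that changed nothing.
	 */
	return 0;
}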
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 109/119] pinctrl: armada-37xx: swap polarity on LED group
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (107 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 108/119] pinctrl: armada-37xx: fix control of pins 32 and up Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 110/119] btrfs: block-group: Fix a memory leak due to missing btrfs_put_block_group() Greg Kroah-Hartman
                   ` (14 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Patrick Williams, Linus Walleij

From: Patrick Williams <alpawi@amazon.com>

commit b835d6953009dc350d61402a854b5a7178d8c615 upstream.

The configuration registers for the LED group have inverted
polarity, which puts the GPIO into open-drain state when used in
GPIO mode.  Switch to '0' for GPIO and '1' for LED modes.

Fixes: 87466ccd9401 ("pinctrl: armada-37xx: Add pin controller support for Armada 37xx")
Signed-off-by: Patrick Williams <alpawi@amazon.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191001155154.99710-1-alpawi@amazon.com
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/pinctrl/mvebu/pinctrl-armada-37xx.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
@@ -170,10 +170,10 @@ static struct armada_37xx_pin_group arma
 	PIN_GRP_EXTRA("uart2", 9, 2, BIT(1) | BIT(13) | BIT(14) | BIT(19),
 		      BIT(1) | BIT(13) | BIT(14), BIT(1) | BIT(19),
 		      18, 2, "gpio", "uart"),
-	PIN_GRP_GPIO("led0_od", 11, 1, BIT(20), "led"),
-	PIN_GRP_GPIO("led1_od", 12, 1, BIT(21), "led"),
-	PIN_GRP_GPIO("led2_od", 13, 1, BIT(22), "led"),
-	PIN_GRP_GPIO("led3_od", 14, 1, BIT(23), "led"),
+	PIN_GRP_GPIO_2("led0_od", 11, 1, BIT(20), BIT(20), 0, "led"),
+	PIN_GRP_GPIO_2("led1_od", 12, 1, BIT(21), BIT(21), 0, "led"),
+	PIN_GRP_GPIO_2("led2_od", 13, 1, BIT(22), BIT(22), 0, "led"),
+	PIN_GRP_GPIO_2("led3_od", 14, 1, BIT(23), BIT(23), 0, "led"),
 
 };
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 110/119] btrfs: block-group: Fix a memory leak due to missing btrfs_put_block_group()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (108 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 109/119] pinctrl: armada-37xx: swap polarity on LED group Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 111/119] memstick: jmb38x_ms: Fix an error handling path in jmb38x_ms_probe() Greg Kroah-Hartman
                   ` (13 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Anand Jain, Johannes Thumshirn,
	Qu Wenruo, David Sterba

From: Qu Wenruo <wqu@suse.com>

commit 4b654acdae850f48b8250b9a578a4eaa518c7a6f upstream.

In btrfs_read_block_groups(), if we have an invalid block group which
has mixed type (DATA|METADATA) while the fs doesn't have MIXED_GROUPS
feature, we error out without freeing the block group cache.

This patch adds the missing btrfs_put_block_group() to prevent the
memory leak.

Note for stable backports: the file to patch in versions <= 5.3 is
fs/btrfs/extent-tree.c

Fixes: 49303381f19a ("Btrfs: bail out if block group has different mixed flag")
CC: stable@vger.kernel.org # 4.9+
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 fs/btrfs/extent-tree.c |    1 +
 1 file changed, 1 insertion(+)

--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -10255,6 +10255,7 @@ int btrfs_read_block_groups(struct btrfs
 			btrfs_err(info,
 "bg %llu is a mixed block group but filesystem hasn't enabled mixed block groups",
 				  cache->key.objectid);
+			btrfs_put_block_group(cache);
 			ret = -EINVAL;
 			goto error;
 		}



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 111/119] memstick: jmb38x_ms: Fix an error handling path in jmb38x_ms_probe()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (109 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 110/119] btrfs: block-group: Fix a memory leak due to missing btrfs_put_block_group() Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 112/119] cpufreq: Avoid cpufreq_suspend() deadlock on system shutdown Greg Kroah-Hartman
                   ` (12 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel; +Cc: Greg Kroah-Hartman, stable, Christophe JAILLET, Ulf Hansson

From: Christophe JAILLET <christophe.jaillet@wanadoo.fr>

commit 28c9fac09ab0147158db0baeec630407a5e9b892 upstream.

If 'jmb38x_ms_count_slots()' returns 0, we must undo the previous
'pci_request_regions()' call.

Goto 'err_out_int' to fix it.

Fixes: 60fdd931d577 ("memstick: add support for JMicron jmb38x MemoryStick host controller")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/memstick/host/jmb38x_ms.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/memstick/host/jmb38x_ms.c
+++ b/drivers/memstick/host/jmb38x_ms.c
@@ -947,7 +947,7 @@ static int jmb38x_ms_probe(struct pci_de
 	if (!cnt) {
 		rc = -ENODEV;
 		pci_dev_busy = 1;
-		goto err_out;
+		goto err_out_int;
 	}
 
 	jm = kzalloc(sizeof(struct jmb38x_ms)



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 112/119] cpufreq: Avoid cpufreq_suspend() deadlock on system shutdown
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (110 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 111/119] memstick: jmb38x_ms: Fix an error handling path in jmb38x_ms_probe() Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 113/119] xen/netback: fix error path of xenvif_connect_data() Greg Kroah-Hartman
                   ` (11 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Ville Syrjälä,
	Rafael J. Wysocki, Viresh Kumar

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

commit 65650b35133ff20f0c9ef0abd5c3c66dbce3ae57 upstream.

It is incorrect to set the cpufreq syscore shutdown callback pointer
to cpufreq_suspend(), because that function cannot be run in the
syscore stage of system shutdown for two reasons: (a) it may attempt
to carry out actions depending on devices that have already been shut
down at that point and (b) the RCU synchronization carried out by it
may not be able to make progress then.

The latter issue has been present since commit 45975c7d21a1 ("rcu:
Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds"),
but the former one has been there since commit 90de2a4aa9f3 ("cpufreq:
suspend cpufreq governors on shutdown") regardless.

Fix that by dropping cpufreq_syscore_ops altogether and making
device_shutdown() call cpufreq_suspend() directly before shutting
down devices, which is along the lines of what system-wide power
management does.

Fixes: 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
Fixes: 90de2a4aa9f3 ("cpufreq: suspend cpufreq governors on shutdown")
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 4.0+ <stable@vger.kernel.org> # 4.0+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/base/core.c       |    3 +++
 drivers/cpufreq/cpufreq.c |   10 ----------
 2 files changed, 3 insertions(+), 10 deletions(-)

--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -10,6 +10,7 @@
  *
  */
 
+#include <linux/cpufreq.h>
 #include <linux/device.h>
 #include <linux/err.h>
 #include <linux/fwnode.h>
@@ -2845,6 +2846,8 @@ void device_shutdown(void)
 	wait_for_device_probe();
 	device_block_probing();
 
+	cpufreq_suspend();
+
 	spin_lock(&devices_kset->list_lock);
 	/*
 	 * Walk the devices list backward, shutting down each in turn.
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -2570,14 +2570,6 @@ int cpufreq_unregister_driver(struct cpu
 }
 EXPORT_SYMBOL_GPL(cpufreq_unregister_driver);
 
-/*
- * Stop cpufreq at shutdown to make sure it isn't holding any locks
- * or mutexes when secondary CPUs are halted.
- */
-static struct syscore_ops cpufreq_syscore_ops = {
-	.shutdown = cpufreq_suspend,
-};
-
 struct kobject *cpufreq_global_kobject;
 EXPORT_SYMBOL(cpufreq_global_kobject);
 
@@ -2589,8 +2581,6 @@ static int __init cpufreq_core_init(void
 	cpufreq_global_kobject = kobject_create_and_add("cpufreq", &cpu_subsys.dev_root->kobj);
 	BUG_ON(!cpufreq_global_kobject);
 
-	register_syscore_ops(&cpufreq_syscore_ops);
-
 	return 0;
 }
 module_param(off, int, 0444);
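
A minimal sequencing sketch (plain C, not kernel code; the functions only print) of the ordering the patch establishes: cpufreq is quiesced at the top of device shutdown, while the devices and RCU it depends on are still usable, rather than in the later syscore stage.

#include <stdio.h>

static void cpufreq_suspend(void)  { puts("1. suspend cpufreq governors"); }
static void shutdown_devices(void) { puts("2. shut down devices"); }
static void syscore_shutdown(void) { puts("3. syscore shutdown (too late for cpufreq work)"); }

int main(void)
{
	cpufreq_suspend();	/* previously attempted in step 3, where it could deadlock */
	shutdown_devices();
	syscore_shutdown();
	return 0;
}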



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 113/119] xen/netback: fix error path of xenvif_connect_data()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (111 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 112/119] cpufreq: Avoid cpufreq_suspend() deadlock on system shutdown Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 114/119] PCI: PM: Fix pci_power_up() Greg Kroah-Hartman
                   ` (10 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Juergen Gross, Paul Durrant, Wei Liu,
	David S. Miller

From: Juergen Gross <jgross@suse.com>

commit 3d5c1a037d37392a6859afbde49be5ba6a70a6b3 upstream.

xenvif_connect_data() calls module_put() in case of error. This is
wrong as there is no related module_get().

Remove the superfluous module_put().

Fixes: 279f438e36c0a7 ("xen-netback: Don't destroy the netdev until the vif is shut down")
Cc: <stable@vger.kernel.org> # 3.12
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/net/xen-netback/interface.c |    1 -
 1 file changed, 1 deletion(-)

--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -718,7 +718,6 @@ err_unmap:
 	xenvif_unmap_frontend_data_rings(queue);
 	netif_napi_del(&queue->napi);
 err:
-	module_put(THIS_MODULE);
 	return err;
 }
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 114/119] PCI: PM: Fix pci_power_up()
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (112 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 113/119] xen/netback: fix error path of xenvif_connect_data() Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 115/119] KVM: X86: introduce invalidate_gpa argument to tlb flush Greg Kroah-Hartman
                   ` (9 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Daniel Drake, Rafael J. Wysocki,
	Bjorn Helgaas

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

commit 45144d42f299455911cc29366656c7324a3a7c97 upstream.

There is an arbitrary difference between the system resume and
runtime resume code paths for PCI devices regarding the delay to
apply when switching the devices from D3cold to D0.

Namely, pci_restore_standard_config() used in the runtime resume
code path calls pci_set_power_state() which in turn invokes
__pci_start_power_transition() to power up the device through the
platform firmware and that function applies the transition delay
(as per PCI Express Base Specification Revision 2.0, Section 6.6.1).
However, pci_pm_default_resume_early() used in the system resume
code path calls pci_power_up() which doesn't apply the delay at
all and that causes issues to occur during resume from
suspend-to-idle on some systems where the delay is required.

Since there is no reason for that difference to exist, modify
pci_power_up() to follow pci_set_power_state() more closely and
invoke __pci_start_power_transition() from there to call the
platform firmware to power up the device (in case that's necessary).

Fixes: db288c9c5f9d ("PCI / PM: restore the original behavior of pci_set_power_state()")
Reported-by: Daniel Drake <drake@endlessm.com>
Tested-by: Daniel Drake <drake@endlessm.com>
Link: https://lore.kernel.org/linux-pm/CAD8Lp44TYxrMgPLkHCqF9hv6smEurMXvmmvmtyFhZ6Q4SE+dig@mail.gmail.com/T/#m21be74af263c6a34f36e0fc5c77c5449d9406925
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: 3.10+ <stable@vger.kernel.org> # 3.10+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/pci/pci.c |   24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -749,19 +749,6 @@ void pci_update_current_state(struct pci
 }
 
 /**
- * pci_power_up - Put the given device into D0 forcibly
- * @dev: PCI device to power up
- */
-void pci_power_up(struct pci_dev *dev)
-{
-	if (platform_pci_power_manageable(dev))
-		platform_pci_set_power_state(dev, PCI_D0);
-
-	pci_raw_set_power_state(dev, PCI_D0);
-	pci_update_current_state(dev, PCI_D0);
-}
-
-/**
  * pci_platform_power_transition - Use platform to change device power state
  * @dev: PCI device to handle.
  * @state: State to put the device into.
@@ -940,6 +927,17 @@ int pci_set_power_state(struct pci_dev *
 EXPORT_SYMBOL(pci_set_power_state);
 
 /**
+ * pci_power_up - Put the given device into D0 forcibly
+ * @dev: PCI device to power up
+ */
+void pci_power_up(struct pci_dev *dev)
+{
+	__pci_start_power_transition(dev, PCI_D0);
+	pci_raw_set_power_state(dev, PCI_D0);
+	pci_update_current_state(dev, PCI_D0);
+}
+
+/**
  * pci_choose_state - Choose the power state of a PCI device
  * @dev: PCI device to be suspended
  * @state: target sleep state for the whole system. This is the value



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 115/119] KVM: X86: introduce invalidate_gpa argument to tlb flush
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (113 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 114/119] PCI: PM: Fix pci_power_up() Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 116/119] kvm: vmx: Introduce lapic_mode enumeration Greg Kroah-Hartman
                   ` (8 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Paolo Bonzini,
	Radim Krčmář,
	Peter Zijlstra, Jitindar SIngh, Suraj, Wanpeng Li

From: Wanpeng Li <wanpeng.li@hotmail.com>

commit c2ba05ccfde2f069a66c0462e5b5ef8a517dcc9c upstream.

Introduce a new bool invalidate_gpa argument to kvm_x86_ops->tlb_flush;
it will be used by later patches to flush only the guest TLB.

For VMX, this will use INVVPID instead of INVEPT, which will invalidate
combined mappings while keeping guest-physical mappings.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Jitindar SIngh, Suraj" <surajjs@amazon.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/include/asm/kvm_host.h |    2 +-
 arch/x86/kvm/svm.c              |   14 +++++++-------
 arch/x86/kvm/vmx.c              |   21 +++++++++++----------
 arch/x86/kvm/x86.c              |    6 +++---
 4 files changed, 22 insertions(+), 21 deletions(-)

--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -973,7 +973,7 @@ struct kvm_x86_ops {
 	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
 	void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
 
-	void (*tlb_flush)(struct kvm_vcpu *vcpu);
+	void (*tlb_flush)(struct kvm_vcpu *vcpu, bool invalidate_gpa);
 
 	void (*run)(struct kvm_vcpu *vcpu);
 	int (*handle_exit)(struct kvm_vcpu *vcpu);
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -299,7 +299,7 @@ static int vgif = true;
 module_param(vgif, int, 0444);
 
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
-static void svm_flush_tlb(struct kvm_vcpu *vcpu);
+static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa);
 static void svm_complete_interrupts(struct vcpu_svm *svm);
 
 static int nested_svm_exit_handled(struct vcpu_svm *svm);
@@ -2097,7 +2097,7 @@ static int svm_set_cr4(struct kvm_vcpu *
 		return 1;
 
 	if (npt_enabled && ((old_cr4 ^ cr4) & X86_CR4_PGE))
-		svm_flush_tlb(vcpu);
+		svm_flush_tlb(vcpu, true);
 
 	vcpu->arch.cr4 = cr4;
 	if (!npt_enabled)
@@ -2438,7 +2438,7 @@ static void nested_svm_set_tdp_cr3(struc
 
 	svm->vmcb->control.nested_cr3 = __sme_set(root);
 	mark_dirty(svm->vmcb, VMCB_NPT);
-	svm_flush_tlb(vcpu);
+	svm_flush_tlb(vcpu, true);
 }
 
 static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
@@ -3111,7 +3111,7 @@ static bool nested_svm_vmrun(struct vcpu
 	svm->nested.intercept_exceptions = nested_vmcb->control.intercept_exceptions;
 	svm->nested.intercept            = nested_vmcb->control.intercept;
 
-	svm_flush_tlb(&svm->vcpu);
+	svm_flush_tlb(&svm->vcpu, true);
 	svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK;
 	if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK)
 		svm->vcpu.arch.hflags |= HF_VINTR_MASK;
@@ -4947,7 +4947,7 @@ static int svm_set_tss_addr(struct kvm *
 	return 0;
 }
 
-static void svm_flush_tlb(struct kvm_vcpu *vcpu)
+static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5288,7 +5288,7 @@ static void svm_set_cr3(struct kvm_vcpu
 
 	svm->vmcb->save.cr3 = __sme_set(root);
 	mark_dirty(svm->vmcb, VMCB_CR);
-	svm_flush_tlb(vcpu);
+	svm_flush_tlb(vcpu, true);
 }
 
 static void set_tdp_cr3(struct kvm_vcpu *vcpu, unsigned long root)
@@ -5302,7 +5302,7 @@ static void set_tdp_cr3(struct kvm_vcpu
 	svm->vmcb->save.cr3 = kvm_read_cr3(vcpu);
 	mark_dirty(svm->vmcb, VMCB_CR);
 
-	svm_flush_tlb(vcpu);
+	svm_flush_tlb(vcpu, true);
 }
 
 static int is_disabled(void)
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4427,9 +4427,10 @@ static void exit_lmode(struct kvm_vcpu *
 
 #endif
 
-static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid)
+static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
+				bool invalidate_gpa)
 {
-	if (enable_ept) {
+	if (enable_ept && (invalidate_gpa || !enable_vpid)) {
 		if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 			return;
 		ept_sync_context(construct_eptp(vcpu, vcpu->arch.mmu.root_hpa));
@@ -4438,15 +4439,15 @@ static inline void __vmx_flush_tlb(struc
 	}
 }
 
-static void vmx_flush_tlb(struct kvm_vcpu *vcpu)
+static void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 {
-	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid);
+	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
 }
 
 static void vmx_flush_tlb_ept_only(struct kvm_vcpu *vcpu)
 {
 	if (enable_ept)
-		vmx_flush_tlb(vcpu);
+		vmx_flush_tlb(vcpu, true);
 }
 
 static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
@@ -4644,7 +4645,7 @@ static void vmx_set_cr3(struct kvm_vcpu
 		ept_load_pdptrs(vcpu);
 	}
 
-	vmx_flush_tlb(vcpu);
+	vmx_flush_tlb(vcpu, true);
 	vmcs_writel(GUEST_CR3, guest_cr3);
 }
 
@@ -8314,7 +8315,7 @@ static int handle_invvpid(struct kvm_vcp
 		return kvm_skip_emulated_instruction(vcpu);
 	}
 
-	__vmx_flush_tlb(vcpu, vmx->nested.vpid02);
+	__vmx_flush_tlb(vcpu, vmx->nested.vpid02, true);
 	nested_vmx_succeed(vcpu);
 
 	return kvm_skip_emulated_instruction(vcpu);
@@ -11214,11 +11215,11 @@ static int prepare_vmcs02(struct kvm_vcp
 			vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02);
 			if (vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
 				vmx->nested.last_vpid = vmcs12->virtual_processor_id;
-				__vmx_flush_tlb(vcpu, to_vmx(vcpu)->nested.vpid02);
+				__vmx_flush_tlb(vcpu, to_vmx(vcpu)->nested.vpid02, true);
 			}
 		} else {
 			vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
-			vmx_flush_tlb(vcpu);
+			vmx_flush_tlb(vcpu, true);
 		}
 
 	}
@@ -11921,7 +11922,7 @@ static void load_vmcs12_host_state(struc
 		 * L1's vpid. TODO: move to a more elaborate solution, giving
 		 * each L2 its own vpid and exposing the vpid feature to L1.
 		 */
-		vmx_flush_tlb(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 	/* Restore posted intr vector. */
 	if (nested_cpu_has_posted_intr(vmcs12))
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6943,10 +6943,10 @@ static void vcpu_scan_ioapic(struct kvm_
 	kvm_x86_ops->load_eoi_exitmap(vcpu, eoi_exit_bitmap);
 }
 
-static void kvm_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
+static void kvm_vcpu_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 {
 	++vcpu->stat.tlb_flush;
-	kvm_x86_ops->tlb_flush(vcpu);
+	kvm_x86_ops->tlb_flush(vcpu, invalidate_gpa);
 }
 
 void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
@@ -7017,7 +7017,7 @@ static int vcpu_enter_guest(struct kvm_v
 		if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
 			kvm_mmu_sync_roots(vcpu);
 		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
-			kvm_vcpu_flush_tlb(vcpu);
+			kvm_vcpu_flush_tlb(vcpu, true);
 		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
 			vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
 			r = 0;
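
A compact standalone rendering (not KVM code; the enum and helper are invented) of the decision the new argument enables on VMX: fall back to an EPT-wide flush only when guest-physical mappings really have to go away or when VPIDs are not available.

#include <stdio.h>
#include <stdbool.h>

enum flush_kind { FLUSH_EPT, FLUSH_VPID };

static enum flush_kind pick_flush(bool enable_ept, bool enable_vpid,
				  bool invalidate_gpa)
{
	if (enable_ept && (invalidate_gpa || !enable_vpid))
		return FLUSH_EPT;	/* INVEPT: drops guest-physical and combined mappings */
	return FLUSH_VPID;		/* INVVPID: keeps guest-physical mappings */
}

int main(void)
{
	/* guest TLB only: a VPID flush is enough */
	printf("%s\n", pick_flush(true, true, false) == FLUSH_VPID ? "vpid" : "ept");
	/* guest-physical mappings changed: an EPT flush is required */
	printf("%s\n", pick_flush(true, true, true) == FLUSH_EPT ? "ept" : "vpid");
	return 0;
}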



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 116/119] kvm: vmx: Introduce lapic_mode enumeration
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (114 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 115/119] KVM: X86: introduce invalidate_gpa argument to tlb flush Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 117/119] kvm: apic: Flush TLB after APIC mode/address change if VPIDs are in use Greg Kroah-Hartman
                   ` (7 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Jim Mattson, Krish Sadhukhan,
	Paolo Bonzini, Jitindar SIngh, Suraj

From: Jim Mattson <jmattson@google.com>

commit 588716494258899389206fa50426e78cc9df89b9 upstream.

The local APIC can be in one of three modes: disabled, xAPIC or
x2APIC. (A fourth mode, "invalid," is included for completeness.)

Using the new enumeration can make some of the APIC mode logic easier
to read. In kvm_set_apic_base, for instance, it is clear that one
cannot transition directly from x2APIC mode to xAPIC mode or directly
from APIC disabled to x2APIC mode.

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
[Check invalid bits even if msr_info->host_initiated.  Reported by
 Wanpeng Li. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Jitindar SIngh, Suraj" <surajjs@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/kvm/lapic.h |   14 ++++++++++++++
 arch/x86/kvm/x86.c   |   26 +++++++++++++++-----------
 2 files changed, 29 insertions(+), 11 deletions(-)

--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -16,6 +16,13 @@
 #define APIC_BUS_CYCLE_NS       1
 #define APIC_BUS_FREQUENCY      (1000000000ULL / APIC_BUS_CYCLE_NS)
 
+enum lapic_mode {
+	LAPIC_MODE_DISABLED = 0,
+	LAPIC_MODE_INVALID = X2APIC_ENABLE,
+	LAPIC_MODE_XAPIC = MSR_IA32_APICBASE_ENABLE,
+	LAPIC_MODE_X2APIC = MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE,
+};
+
 struct kvm_timer {
 	struct hrtimer timer;
 	s64 period; 				/* unit: ns */
@@ -89,6 +96,7 @@ u64 kvm_get_apic_base(struct kvm_vcpu *v
 int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
 int kvm_apic_get_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s);
 int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s);
+enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu);
 int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu);
 
 u64 kvm_get_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu);
@@ -220,4 +228,10 @@ void kvm_lapic_switch_to_hv_timer(struct
 void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu);
 bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu);
 void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu);
+
+static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
+{
+	return apic_base & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE);
+}
+
 #endif
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -306,23 +306,27 @@ u64 kvm_get_apic_base(struct kvm_vcpu *v
 }
 EXPORT_SYMBOL_GPL(kvm_get_apic_base);
 
+enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu)
+{
+	return kvm_apic_mode(kvm_get_apic_base(vcpu));
+}
+EXPORT_SYMBOL_GPL(kvm_get_apic_mode);
+
 int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
-	u64 old_state = vcpu->arch.apic_base &
-		(MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE);
-	u64 new_state = msr_info->data &
-		(MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE);
+	enum lapic_mode old_mode = kvm_get_apic_mode(vcpu);
+	enum lapic_mode new_mode = kvm_apic_mode(msr_info->data);
 	u64 reserved_bits = ((~0ULL) << cpuid_maxphyaddr(vcpu)) | 0x2ff |
 		(guest_cpuid_has(vcpu, X86_FEATURE_X2APIC) ? 0 : X2APIC_ENABLE);
 
-	if ((msr_info->data & reserved_bits) || new_state == X2APIC_ENABLE)
-		return 1;
-	if (!msr_info->host_initiated &&
-	    ((new_state == MSR_IA32_APICBASE_ENABLE &&
-	      old_state == (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE)) ||
-	     (new_state == (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE) &&
-	      old_state == 0)))
+	if ((msr_info->data & reserved_bits) != 0 || new_mode == LAPIC_MODE_INVALID)
 		return 1;
+	if (!msr_info->host_initiated) {
+		if (old_mode == LAPIC_MODE_X2APIC && new_mode == LAPIC_MODE_XAPIC)
+			return 1;
+		if (old_mode == LAPIC_MODE_DISABLED && new_mode == LAPIC_MODE_X2APIC)
+			return 1;
+	}
 
 	kvm_lapic_set_base(vcpu, msr_info->data);
 	return 0;
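
The enumeration works because the two relevant APIC-base MSR bits already encode the mode. A standalone sketch (not KVM code; only those two architectural bits are modelled, everything else is omitted) of the encoding and of the transition check that becomes easy to read with it:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define APICBASE_ENABLE	(1u << 11)	/* xAPIC global enable (EN) */
#define X2APIC_ENABLE	(1u << 10)	/* x2APIC mode (EXTD) */

enum lapic_mode {
	LAPIC_MODE_DISABLED = 0,
	LAPIC_MODE_INVALID  = X2APIC_ENABLE,
	LAPIC_MODE_XAPIC    = APICBASE_ENABLE,
	LAPIC_MODE_X2APIC   = APICBASE_ENABLE | X2APIC_ENABLE,
};

static enum lapic_mode apic_mode(uint64_t apic_base)
{
	return apic_base & (APICBASE_ENABLE | X2APIC_ENABLE);
}

/* The direct transitions a guest-initiated MSR write must not make. */
static bool valid_transition(enum lapic_mode old, enum lapic_mode new)
{
	if (old == LAPIC_MODE_X2APIC && new == LAPIC_MODE_XAPIC)
		return false;	/* must pass through disabled first */
	if (old == LAPIC_MODE_DISABLED && new == LAPIC_MODE_X2APIC)
		return false;	/* must enable xAPIC first */
	return true;
}

int main(void)
{
	printf("%d\n", apic_mode(APICBASE_ENABLE | X2APIC_ENABLE) == LAPIC_MODE_X2APIC);	/* 1 */
	printf("%d\n", valid_transition(LAPIC_MODE_X2APIC, LAPIC_MODE_XAPIC));			/* 0 */
	return 0;
}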



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 117/119] kvm: apic: Flush TLB after APIC mode/address change if VPIDs are in use
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (115 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 116/119] kvm: vmx: Introduce lapic_mode enumeration Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 118/119] kvm: vmx: Basic APIC virtualization controls have three settings Greg Kroah-Hartman
                   ` (6 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Junaid Shahid, Jim Mattson,
	Radim Krčmář,
	Jitindar SIngh, Suraj

From: Junaid Shahid <junaids@google.com>

commit a468f2dbf921d02f5107378501693137a812999b upstream.

Currently, KVM flushes the TLB after a change to the APIC access page
address or the APIC mode when EPT mode is enabled. However, even in
shadow paging mode, a TLB flush is needed if VPIDs are being used, as
specified in the Intel SDM Section 29.4.5.

So replace vmx_flush_tlb_ept_only() with vmx_flush_tlb(), which will
flush if either EPT or VPIDs are in use.

Signed-off-by: Junaid Shahid <junaids@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Cc: "Jitindar SIngh, Suraj" <surajjs@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/kvm/vmx.c |   14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4444,12 +4444,6 @@ static void vmx_flush_tlb(struct kvm_vcp
 	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
 }
 
-static void vmx_flush_tlb_ept_only(struct kvm_vcpu *vcpu)
-{
-	if (enable_ept)
-		vmx_flush_tlb(vcpu, true);
-}
-
 static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
 {
 	ulong cr0_guest_owned_bits = vcpu->arch.cr0_guest_owned_bits;
@@ -9320,7 +9314,7 @@ static void vmx_set_virtual_x2apic_mode(
 	} else {
 		sec_exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
 		sec_exec_control |= SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 	vmcs_write32(SECONDARY_VM_EXEC_CONTROL, sec_exec_control);
 
@@ -9348,7 +9342,7 @@ static void vmx_set_apic_access_page_add
 	    !nested_cpu_has2(get_vmcs12(&vmx->vcpu),
 			     SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
 		vmcs_write64(APIC_ACCESS_ADDR, hpa);
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 }
 
@@ -11243,7 +11237,7 @@ static int prepare_vmcs02(struct kvm_vcp
 		}
 	} else if (nested_cpu_has2(vmcs12,
 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 
 	/*
@@ -12198,7 +12192,7 @@ static void nested_vmx_vmexit(struct kvm
 	} else if (!nested_cpu_has_ept(vmcs12) &&
 		   nested_cpu_has2(vmcs12,
 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 
 	/* This is needed for same reason as it was needed in prepare_vmcs02 */



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 118/119] kvm: vmx: Basic APIC virtualization controls have three settings
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (116 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 117/119] kvm: apic: Flush TLB after APIC mode/address change if VPIDs are in use Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-27 21:01 ` [PATCH 4.14 119/119] RDMA/cxgb4: Do not dma memory off of the stack Greg Kroah-Hartman
                   ` (5 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Jim Mattson, Krish Sadhukhan,
	Paolo Bonzini, Jitindar SIngh, Suraj

From: Jim Mattson <jmattson@google.com>

commit 8d860bbeedef97fe981d28fa7b71d77f3b29563f upstream.

Previously, we toggled between SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE
and SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES, depending on whether or
not the EXTD bit was set in MSR_IA32_APICBASE. However, if the local
APIC is disabled, we should not set either of these APIC
virtualization control bits.

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Jitindar SIngh, Suraj" <surajjs@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/include/asm/kvm_host.h |    2 -
 arch/x86/kvm/lapic.c            |   12 ++++------
 arch/x86/kvm/svm.c              |    4 +--
 arch/x86/kvm/vmx.c              |   48 +++++++++++++++++++++++++---------------
 4 files changed, 38 insertions(+), 28 deletions(-)

--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -998,7 +998,7 @@ struct kvm_x86_ops {
 	void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
 	void (*hwapic_isr_update)(struct kvm_vcpu *vcpu, int isr);
 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
-	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
+	void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
 	void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu, hpa_t hpa);
 	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
 	int (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1967,13 +1967,11 @@ void kvm_lapic_set_base(struct kvm_vcpu
 		}
 	}
 
-	if ((old_value ^ value) & X2APIC_ENABLE) {
-		if (value & X2APIC_ENABLE) {
-			kvm_apic_set_x2apic_id(apic, vcpu->vcpu_id);
-			kvm_x86_ops->set_virtual_x2apic_mode(vcpu, true);
-		} else
-			kvm_x86_ops->set_virtual_x2apic_mode(vcpu, false);
-	}
+	if (((old_value ^ value) & X2APIC_ENABLE) && (value & X2APIC_ENABLE))
+		kvm_apic_set_x2apic_id(apic, vcpu->vcpu_id);
+
+	if ((old_value ^ value) & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE))
+		kvm_x86_ops->set_virtual_apic_mode(vcpu);
 
 	apic->base_address = apic->vcpu->arch.apic_base &
 			     MSR_IA32_APICBASE_BASE;
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4589,7 +4589,7 @@ static void update_cr8_intercept(struct
 		set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
 }
 
-static void svm_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
+static void svm_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 {
 	return;
 }
@@ -5713,7 +5713,7 @@ static struct kvm_x86_ops svm_x86_ops __
 	.enable_nmi_window = enable_nmi_window,
 	.enable_irq_window = enable_irq_window,
 	.update_cr8_intercept = update_cr8_intercept,
-	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
+	.set_virtual_apic_mode = svm_set_virtual_apic_mode,
 	.get_enable_apicv = svm_get_enable_apicv,
 	.refresh_apicv_exec_ctrl = svm_refresh_apicv_exec_ctrl,
 	.load_eoi_exitmap = svm_load_eoi_exitmap,
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -591,7 +591,8 @@ struct nested_vmx {
 	 */
 	bool sync_shadow_vmcs;
 
-	bool change_vmcs01_virtual_x2apic_mode;
+	bool change_vmcs01_virtual_apic_mode;
+
 	/* L2 must run next, and mustn't decide to exit to L1. */
 	bool nested_run_pending;
 
@@ -9290,31 +9291,43 @@ static void update_cr8_intercept(struct
 	vmcs_write32(TPR_THRESHOLD, irr);
 }
 
-static void vmx_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
+static void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 {
 	u32 sec_exec_control;
 
+	if (!lapic_in_kernel(vcpu))
+		return;
+
 	/* Postpone execution until vmcs01 is the current VMCS. */
 	if (is_guest_mode(vcpu)) {
-		to_vmx(vcpu)->nested.change_vmcs01_virtual_x2apic_mode = true;
+		to_vmx(vcpu)->nested.change_vmcs01_virtual_apic_mode = true;
 		return;
 	}
 
-	if (!cpu_has_vmx_virtualize_x2apic_mode())
-		return;
-
 	if (!cpu_need_tpr_shadow(vcpu))
 		return;
 
 	sec_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
+	sec_exec_control &= ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+			      SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE);
 
-	if (set) {
-		sec_exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-		sec_exec_control |= SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
-	} else {
-		sec_exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
-		sec_exec_control |= SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-		vmx_flush_tlb(vcpu, true);
+	switch (kvm_get_apic_mode(vcpu)) {
+	case LAPIC_MODE_INVALID:
+		WARN_ONCE(true, "Invalid local APIC state");
+	case LAPIC_MODE_DISABLED:
+		break;
+	case LAPIC_MODE_XAPIC:
+		if (flexpriority_enabled) {
+			sec_exec_control |=
+				SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+			vmx_flush_tlb(vcpu, true);
+		}
+		break;
+	case LAPIC_MODE_X2APIC:
+		if (cpu_has_vmx_virtualize_x2apic_mode())
+			sec_exec_control |=
+				SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
+		break;
 	}
 	vmcs_write32(SECONDARY_VM_EXEC_CONTROL, sec_exec_control);
 
@@ -12185,10 +12198,9 @@ static void nested_vmx_vmexit(struct kvm
 	if (kvm_has_tsc_control)
 		decache_tsc_multiplier(vmx);
 
-	if (vmx->nested.change_vmcs01_virtual_x2apic_mode) {
-		vmx->nested.change_vmcs01_virtual_x2apic_mode = false;
-		vmx_set_virtual_x2apic_mode(vcpu,
-				vcpu->arch.apic_base & X2APIC_ENABLE);
+	if (vmx->nested.change_vmcs01_virtual_apic_mode) {
+		vmx->nested.change_vmcs01_virtual_apic_mode = false;
+		vmx_set_virtual_apic_mode(vcpu);
 	} else if (!nested_cpu_has_ept(vmcs12) &&
 		   nested_cpu_has2(vmcs12,
 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
@@ -12749,7 +12761,7 @@ static struct kvm_x86_ops vmx_x86_ops __
 	.enable_nmi_window = enable_nmi_window,
 	.enable_irq_window = enable_irq_window,
 	.update_cr8_intercept = update_cr8_intercept,
-	.set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
+	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
 	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
 	.get_enable_apicv = vmx_get_enable_apicv,
 	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
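
A standalone decision sketch (not VMX code; the control bit values are placeholders) of the "three settings" in the title: a disabled APIC selects neither virtualization control, xAPIC selects APIC-access virtualization when flexpriority is available, and x2APIC selects x2APIC-mode virtualization when the CPU supports it.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define VIRT_APIC_ACCESSES	(1u << 0)	/* placeholder bit values */
#define VIRT_X2APIC_MODE	(1u << 1)

enum lapic_mode { MODE_DISABLED, MODE_XAPIC, MODE_X2APIC };

static uint32_t apic_virt_controls(enum lapic_mode mode,
				   bool flexpriority, bool x2apic_virt)
{
	switch (mode) {
	case MODE_XAPIC:
		return flexpriority ? VIRT_APIC_ACCESSES : 0;
	case MODE_X2APIC:
		return x2apic_virt ? VIRT_X2APIC_MODE : 0;
	case MODE_DISABLED:
	default:
		return 0;		/* neither control when the APIC is off */
	}
}

int main(void)
{
	printf("disabled -> %#x\n", apic_virt_controls(MODE_DISABLED, true, true));
	printf("xapic    -> %#x\n", apic_virt_controls(MODE_XAPIC,    true, true));
	printf("x2apic   -> %#x\n", apic_virt_controls(MODE_X2APIC,   true, true));
	return 0;
}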



^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH 4.14 119/119] RDMA/cxgb4: Do not dma memory off of the stack
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (117 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 118/119] kvm: vmx: Basic APIC virtualization controls have three settings Greg Kroah-Hartman
@ 2019-10-27 21:01 ` Greg Kroah-Hartman
  2019-10-28  2:35 ` [PATCH 4.14 000/119] 4.14.151-stable review kernelci.org bot
                   ` (4 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-27 21:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Nicolas Waisman, Potnuri Bharat Teja,
	Jason Gunthorpe

From: Greg KH <gregkh@linuxfoundation.org>

commit 3840c5b78803b2b6cc1ff820100a74a092c40cbb upstream.

Nicolas pointed out that the cxgb4 driver is doing dma off of the stack,
which is generally considered a very bad thing.  On some architectures it
could be a security problem, but odds are none of them actually run this
driver, so it's just a "normal" bug.

Resolve this by allocating the memory for a message off of the heap
instead of the stack.  kmalloc() will always give us a proper memory
location that DMA will work correctly from.

Link: https://lore.kernel.org/r/20191001165611.GA3542072@kroah.com
Reported-by: Nicolas Waisman <nico@semmle.com>
Tested-by: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
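
The general pattern, shown here only as a rough sketch (struct fw_msg is a
made-up placeholder, not a cxgb4 structure), is to take any buffer the device
may DMA to or from off the heap:

	struct fw_msg *msg;

	/* kmalloc memory is physically contiguous and safe to hand to DMA */
	msg = kmalloc(sizeof(*msg), GFP_KERNEL);
	if (!msg)
		return -ENOMEM;
	/* ... fill *msg and pass it down the firmware/DMA path ... */
	kfree(msg);

The actual conversion of write_tpt_entry() follows the same shape below.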
 drivers/infiniband/hw/cxgb4/mem.c |   28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

--- a/drivers/infiniband/hw/cxgb4/mem.c
+++ b/drivers/infiniband/hw/cxgb4/mem.c
@@ -260,13 +260,17 @@ static int write_tpt_entry(struct c4iw_r
 			   struct sk_buff *skb)
 {
 	int err;
-	struct fw_ri_tpte tpt;
+	struct fw_ri_tpte *tpt;
 	u32 stag_idx;
 	static atomic_t key;
 
 	if (c4iw_fatal_error(rdev))
 		return -EIO;
 
+	tpt = kmalloc(sizeof(*tpt), GFP_KERNEL);
+	if (!tpt)
+		return -ENOMEM;
+
 	stag_state = stag_state > 0;
 	stag_idx = (*stag) >> 8;
 
@@ -276,6 +280,7 @@ static int write_tpt_entry(struct c4iw_r
 			mutex_lock(&rdev->stats.lock);
 			rdev->stats.stag.fail++;
 			mutex_unlock(&rdev->stats.lock);
+			kfree(tpt);
 			return -ENOMEM;
 		}
 		mutex_lock(&rdev->stats.lock);
@@ -290,28 +295,28 @@ static int write_tpt_entry(struct c4iw_r
 
 	/* write TPT entry */
 	if (reset_tpt_entry)
-		memset(&tpt, 0, sizeof(tpt));
+		memset(tpt, 0, sizeof(*tpt));
 	else {
-		tpt.valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F |
+		tpt->valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F |
 			FW_RI_TPTE_STAGKEY_V((*stag & FW_RI_TPTE_STAGKEY_M)) |
 			FW_RI_TPTE_STAGSTATE_V(stag_state) |
 			FW_RI_TPTE_STAGTYPE_V(type) | FW_RI_TPTE_PDID_V(pdid));
-		tpt.locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) |
+		tpt->locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) |
 			(bind_enabled ? FW_RI_TPTE_MWBINDEN_F : 0) |
 			FW_RI_TPTE_ADDRTYPE_V((zbva ? FW_RI_ZERO_BASED_TO :
 						      FW_RI_VA_BASED_TO))|
 			FW_RI_TPTE_PS_V(page_size));
-		tpt.nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32(
+		tpt->nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32(
 			FW_RI_TPTE_PBLADDR_V(PBL_OFF(rdev, pbl_addr)>>3));
-		tpt.len_lo = cpu_to_be32((u32)(len & 0xffffffffUL));
-		tpt.va_hi = cpu_to_be32((u32)(to >> 32));
-		tpt.va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL));
-		tpt.dca_mwbcnt_pstag = cpu_to_be32(0);
-		tpt.len_hi = cpu_to_be32((u32)(len >> 32));
+		tpt->len_lo = cpu_to_be32((u32)(len & 0xffffffffUL));
+		tpt->va_hi = cpu_to_be32((u32)(to >> 32));
+		tpt->va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL));
+		tpt->dca_mwbcnt_pstag = cpu_to_be32(0);
+		tpt->len_hi = cpu_to_be32((u32)(len >> 32));
 	}
 	err = write_adapter_mem(rdev, stag_idx +
 				(rdev->lldi.vr->stag.start >> 5),
-				sizeof(tpt), &tpt, skb);
+				sizeof(*tpt), tpt, skb);
 
 	if (reset_tpt_entry) {
 		c4iw_put_resource(&rdev->resource.tpt_table, stag_idx);
@@ -319,6 +324,7 @@ static int write_tpt_entry(struct c4iw_r
 		rdev->stats.stag.cur -= 32;
 		mutex_unlock(&rdev->stats.lock);
 	}
+	kfree(tpt);
 	return err;
 }
 



^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 000/119] 4.14.151-stable review
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (118 preceding siblings ...)
  2019-10-27 21:01 ` [PATCH 4.14 119/119] RDMA/cxgb4: Do not dma memory off of the stack Greg Kroah-Hartman
@ 2019-10-28  2:35 ` kernelci.org bot
  2019-10-28  5:59 ` Didik Setiawan
                   ` (3 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: kernelci.org bot @ 2019-10-28  2:35 UTC (permalink / raw)
  To: Greg Kroah-Hartman, linux-kernel
  Cc: Greg Kroah-Hartman, torvalds, akpm, linux, shuah, patches,
	ben.hutchings, lkft-triage, stable

stable-rc/linux-4.14.y boot: 117 boots: 0 failed, 110 passed with 7 offline (v4.14.150-120-gea1df089eebe)

Full Boot Summary: https://kernelci.org/boot/all/job/stable-rc/branch/linux-4.14.y/kernel/v4.14.150-120-gea1df089eebe/
Full Build Summary: https://kernelci.org/build/stable-rc/branch/linux-4.14.y/kernel/v4.14.150-120-gea1df089eebe/

Tree: stable-rc
Branch: linux-4.14.y
Git Describe: v4.14.150-120-gea1df089eebe
Git Commit: ea1df089eebe2babf969ff53de3fefe3898c2362
Git URL: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
Tested: 63 unique boards, 21 SoC families, 13 builds out of 201

Offline Platforms:

arm:

    multi_v7_defconfig:
        gcc-8
            qcom-apq8064-cm-qs600: 1 offline lab
            sun5i-r8-chip: 1 offline lab
            sun7i-a20-bananapi: 1 offline lab

    sunxi_defconfig:
        gcc-8
            sun5i-r8-chip: 1 offline lab
            sun7i-a20-bananapi: 1 offline lab

    davinci_all_defconfig:
        gcc-8
            dm365evm,legacy: 1 offline lab

    qcom_defconfig:
        gcc-8
            qcom-apq8064-cm-qs600: 1 offline lab

---
For more info write to <info@kernelci.org>

^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 000/119] 4.14.151-stable review
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (119 preceding siblings ...)
  2019-10-28  2:35 ` [PATCH 4.14 000/119] 4.14.151-stable review kernelci.org bot
@ 2019-10-28  5:59 ` Didik Setiawan
  2019-10-28 13:35 ` Guenter Roeck
                   ` (2 subsequent siblings)
  123 siblings, 0 replies; 135+ messages in thread
From: Didik Setiawan @ 2019-10-28  5:59 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-kernel, torvalds, akpm, linux, shuah, patches,
	ben.hutchings, lkft-triage, stable

On Sun, Oct 27, 2019 at 09:59:37PM +0100, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.14.151 release.
> There are 119 patches in this series, all will be posted as a response
> to this one.  If anyone has any issues with these being applied, please
> let me know.
> 
> Responses should be made by Tue 29 Oct 2019 08:27:02 PM UTC.
> Anything received after that time might be too late.
> 
> The whole patch series can be found in one patch at:
> 	https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.151-rc1.gz
> or in the git tree and branch at:
> 	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
> and the diffstat can be found below.
> 
> thanks,
> 
> greg k-h

Compiled, booted, and no regressions found on my x86_64 system.

Thanks,
Didik Setiawan


^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 000/119] 4.14.151-stable review
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (120 preceding siblings ...)
  2019-10-28  5:59 ` Didik Setiawan
@ 2019-10-28 13:35 ` Guenter Roeck
  2019-10-28 21:39 ` Jon Hunter
  2019-10-29  7:15 ` Naresh Kamboju
  123 siblings, 0 replies; 135+ messages in thread
From: Guenter Roeck @ 2019-10-28 13:35 UTC (permalink / raw)
  To: Greg Kroah-Hartman, linux-kernel
  Cc: torvalds, akpm, shuah, patches, ben.hutchings, lkft-triage, stable

On 10/27/19 1:59 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.14.151 release.
> There are 119 patches in this series, all will be posted as a response
> to this one.  If anyone has any issues with these being applied, please
> let me know.
> 
> Responses should be made by Tue 29 Oct 2019 08:27:02 PM UTC.
> Anything received after that time might be too late.
> 


Build results:
	total: 172 pass: 159 fail: 13
Failed builds:
	All mips
Qemu test results:
	total: 372 pass: 312 fail: 60
Failed tests:
	All mips

Guenter

^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 000/119] 4.14.151-stable review
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (121 preceding siblings ...)
  2019-10-28 13:35 ` Guenter Roeck
@ 2019-10-28 21:39 ` Jon Hunter
  2019-10-29  7:15 ` Naresh Kamboju
  123 siblings, 0 replies; 135+ messages in thread
From: Jon Hunter @ 2019-10-28 21:39 UTC (permalink / raw)
  To: Greg Kroah-Hartman, linux-kernel
  Cc: torvalds, akpm, linux, shuah, patches, ben.hutchings,
	lkft-triage, stable, linux-tegra


On 27/10/2019 20:59, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.14.151 release.
> There are 119 patches in this series, all will be posted as a response
> to this one.  If anyone has any issues with these being applied, please
> let me know.
> 
> Responses should be made by Tue 29 Oct 2019 08:27:02 PM UTC.
> Anything received after that time might be too late.
> 
> The whole patch series can be found in one patch at:
> 	https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.151-rc1.gz
> or in the git tree and branch at:
> 	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
> and the diffstat can be found below.
> 
> thanks,
> 
> greg k-h

All tests are passing for Tegra ...

Test results for stable-v4.14:
    8 builds:	8 pass, 0 fail
    16 boots:	16 pass, 0 fail
    24 tests:	24 pass, 0 fail

Linux version:	4.14.151-rc1-g22148a87efce
Boards tested:	tegra124-jetson-tk1, tegra20-ventana,
                tegra210-p2371-2180, tegra30-cardhu-a04

Cheers
Jon

-- 
nvpublic

^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 000/119] 4.14.151-stable review
  2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
                   ` (122 preceding siblings ...)
  2019-10-28 21:39 ` Jon Hunter
@ 2019-10-29  7:15 ` Naresh Kamboju
  123 siblings, 0 replies; 135+ messages in thread
From: Naresh Kamboju @ 2019-10-29  7:15 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: open list, Shuah Khan, patches, lkft-triage, Ben Hutchings,
	linux- stable, Andrew Morton, Linus Torvalds, Guenter Roeck,
	LTP List

On Mon, 28 Oct 2019 at 02:38, Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
>
> This is the start of the stable review cycle for the 4.14.151 release.
> There are 119 patches in this series, all will be posted as a response
> to this one.  If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Tue 29 Oct 2019 08:27:02 PM UTC.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
>         https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.151-rc1.gz
> or in the git tree and branch at:
>         git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

Results from Linaro’s test farm.
No regressions on arm64, arm, x86_64, and i386.

Note:
The new test case from the LTP version upgrade, syscalls sync_file_range02, is
an intermittent failure. We are investigating it.
The fixes listed in the section below are due to the LTP upgrade to v20190930.

Summary
------------------------------------------------------------------------

kernel: 4.14.151-rc2
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.14.y
git commit: 80117985de0635c8d7fa58fa198b7bbbd465542d
git describe: v4.14.150-118-g80117985de06
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.14-oe/build/v4.14.150-118-g80117985de06

No regressions (compared to build v4.14.149-66-g66f69184d722)


Fixes (compared to build v4.14.149-66-g66f69184d722)
------------------------------------------------------------------------

ltp-syscalls-tests:
    * ustat02
    * ioctl_ns05
    * ioctl_ns06

Ran 17364 total tests in the following environments and test suites.

Environments
--------------
- dragonboard-410c - arm64
- hi6220-hikey - arm64
- i386
- juno-r2 - arm64
- qemu_arm
- qemu_arm64
- qemu_i386
- qemu_x86_64
- x15 - arm
- x86_64

Test Suites
-----------
* build
* install-android-platform-tools-r2600
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-containers-tests
* ltp-cpuhotplug-tests
* ltp-cve-tests
* ltp-dio-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-mm-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* perf
* spectre-meltdown-checker-test
* v4l2-compliance
* network-basic-tests
* ltp-open-posix-tests
* ssuite
* kselftest-vsyscall-mode-native
* kselftest-vsyscall-mode-none
* kvm-unit-tests

-- 
Linaro LKFT
https://lkft.linaro.org

^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 027/119] MIPS: elf_hwcap: Export userspace ASEs
  2019-10-27 21:00 ` [PATCH 4.14 027/119] MIPS: elf_hwcap: Export userspace ASEs Greg Kroah-Hartman
@ 2019-10-29 10:50   ` Jiaxun Yang
  2019-10-30  9:02     ` Greg Kroah-Hartman
  0 siblings, 1 reply; 135+ messages in thread
From: Jiaxun Yang @ 2019-10-29 10:50 UTC (permalink / raw)
  To: Greg Kroah-Hartman, linux-kernel
  Cc: stable, Meng Zhuo, linux-mips, Paul Burton, Sasha Levin


On 2019/10/28 at 5:00 AM, Greg Kroah-Hartman wrote:
> From: Jiaxun Yang <jiaxun.yang@flygoat.com>
>
> [ Upstream commit 38dffe1e4dde1d3174fdce09d67370412843ebb5 ]
>
> A Golang developer reported MIPS hwcap isn't reflecting instructions
> that the processor actually supported so programs can't apply optimized
> code at runtime.
>
> Thus we export the ASEs that can be used in userspace programs.
>
> Reported-by: Meng Zhuo <mengzhuo1203@gmail.com>
> Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
> Cc: linux-mips@vger.kernel.org
> Cc: Paul Burton <paul.burton@mips.com>
> Cc: <stable@vger.kernel.org> # 4.14+
> Signed-off-by: Paul Burton <paul.burton@mips.com>
> Signed-off-by: Sasha Levin <sashal@kernel.org>
> ---
>   arch/mips/include/uapi/asm/hwcap.h | 11 ++++++++++
>   arch/mips/kernel/cpu-probe.c       | 33 ++++++++++++++++++++++++++++++
>   2 files changed, 44 insertions(+)
>
> diff --git a/arch/mips/include/uapi/asm/hwcap.h b/arch/mips/include/uapi/asm/hwcap.h
> index 600ad8fd68356..2475294c3d185 100644
> --- a/arch/mips/include/uapi/asm/hwcap.h
> +++ b/arch/mips/include/uapi/asm/hwcap.h
> @@ -5,5 +5,16 @@
>   /* HWCAP flags */
>   #define HWCAP_MIPS_R6		(1 << 0)
>   #define HWCAP_MIPS_MSA		(1 << 1)
> +#define HWCAP_MIPS_MIPS16	(1 << 3)
> +#define HWCAP_MIPS_MDMX     (1 << 4)
> +#define HWCAP_MIPS_MIPS3D   (1 << 5)
> +#define HWCAP_MIPS_SMARTMIPS (1 << 6)
> +#define HWCAP_MIPS_DSP      (1 << 7)
> +#define HWCAP_MIPS_DSP2     (1 << 8)
> +#define HWCAP_MIPS_DSP3     (1 << 9)
> +#define HWCAP_MIPS_MIPS16E2 (1 << 10)
> +#define HWCAP_LOONGSON_MMI  (1 << 11)
> +#define HWCAP_LOONGSON_EXT  (1 << 12)
> +#define HWCAP_LOONGSON_EXT2 (1 << 13)
>   
>   #endif /* _UAPI_ASM_HWCAP_H */
> diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
> index 3007ae1bb616a..c38cd62879f4e 100644
> --- a/arch/mips/kernel/cpu-probe.c
> +++ b/arch/mips/kernel/cpu-probe.c
> @@ -2080,6 +2080,39 @@ void cpu_probe(void)
>   		elf_hwcap |= HWCAP_MIPS_MSA;
>   	}
>   
> +	if (cpu_has_mips16)
> +		elf_hwcap |= HWCAP_MIPS_MIPS16;
> +
> +	if (cpu_has_mdmx)
> +		elf_hwcap |= HWCAP_MIPS_MDMX;
> +
> +	if (cpu_has_mips3d)
> +		elf_hwcap |= HWCAP_MIPS_MIPS3D;
> +
> +	if (cpu_has_smartmips)
> +		elf_hwcap |= HWCAP_MIPS_SMARTMIPS;
> +
> +	if (cpu_has_dsp)
> +		elf_hwcap |= HWCAP_MIPS_DSP;
> +
> +	if (cpu_has_dsp2)
> +		elf_hwcap |= HWCAP_MIPS_DSP2;
> +
> +	if (cpu_has_dsp3)
> +		elf_hwcap |= HWCAP_MIPS_DSP3;
> +
> +	if (cpu_has_loongson_mmi)
> +		elf_hwcap |= HWCAP_LOONGSON_MMI;
> +
> +	if (cpu_has_loongson_mmi)
> +		elf_hwcap |= HWCAP_LOONGSON_CAM;

Hi:

Sorry, there is a typo causing a build failure.

Should be:

---
  arch/mips/include/uapi/asm/hwcap.h | 11 ++++++++++
  arch/mips/kernel/cpu-probe.c       | 33 ++++++++++++++++++++++++++++++
  2 files changed, 44 insertions(+)

diff --git a/arch/mips/include/uapi/asm/hwcap.h b/arch/mips/include/uapi/asm/hwcap.h
index a2aba4b059e6..1ade1daa4921 100644
--- a/arch/mips/include/uapi/asm/hwcap.h
+++ b/arch/mips/include/uapi/asm/hwcap.h
@@ -6,5 +6,16 @@
  #define HWCAP_MIPS_R6		(1 << 0)
  #define HWCAP_MIPS_MSA		(1 << 1)
  #define HWCAP_MIPS_CRC32	(1 << 2)
+#define HWCAP_MIPS_MIPS16	(1 << 3)
+#define HWCAP_MIPS_MDMX     (1 << 4)
+#define HWCAP_MIPS_MIPS3D   (1 << 5)
+#define HWCAP_MIPS_SMARTMIPS (1 << 6)
+#define HWCAP_MIPS_DSP      (1 << 7)
+#define HWCAP_MIPS_DSP2     (1 << 8)
+#define HWCAP_MIPS_DSP3     (1 << 9)
+#define HWCAP_MIPS_MIPS16E2 (1 << 10)
+#define HWCAP_LOONGSON_MMI  (1 << 11)
+#define HWCAP_LOONGSON_EXT  (1 << 12)
+#define HWCAP_LOONGSON_EXT2 (1 << 13)
  
  #endif /* _UAPI_ASM_HWCAP_H */
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index c2eb392597bf..f521cbf934e7 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -2180,6 +2180,39 @@ void cpu_probe(void)
  		elf_hwcap |= HWCAP_MIPS_MSA;
  	}
  
+	if (cpu_has_mips16)
+		elf_hwcap |= HWCAP_MIPS_MIPS16;
+
+	if (cpu_has_mdmx)
+		elf_hwcap |= HWCAP_MIPS_MDMX;
+
+	if (cpu_has_mips3d)
+		elf_hwcap |= HWCAP_MIPS_MIPS3D;
+
+	if (cpu_has_smartmips)
+		elf_hwcap |= HWCAP_MIPS_SMARTMIPS;
+
+	if (cpu_has_dsp)
+		elf_hwcap |= HWCAP_MIPS_DSP;
+
+	if (cpu_has_dsp2)
+		elf_hwcap |= HWCAP_MIPS_DSP2;
+
+	if (cpu_has_dsp3)
+		elf_hwcap |= HWCAP_MIPS_DSP3;
+
+	if (cpu_has_mips16e2)
+		elf_hwcap |= HWCAP_MIPS_MIPS16E2;
+
+	if (cpu_has_loongson_mmi)
+		elf_hwcap |= HWCAP_LOONGSON_MMI;
+
+	if (cpu_has_loongson_ext)
+		elf_hwcap |= HWCAP_LOONGSON_EXT;
+
+	if (cpu_has_loongson_ext2)
+		elf_hwcap |= HWCAP_LOONGSON_EXT2;
+
  	if (cpu_has_vz)
  		cpu_probe_vz(c);
  
-- 2.23.0
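
For reference, userspace can then pick these bits up from the auxiliary
vector. A minimal standalone sketch, not part of the patch (the two constants
are repeated from the uapi hunk above so the example builds on its own):

	#include <stdio.h>
	#include <sys/auxv.h>

	#define HWCAP_MIPS_DSP      (1 << 7)	/* values from asm/hwcap.h above */
	#define HWCAP_LOONGSON_EXT2 (1 << 13)

	int main(void)
	{
		unsigned long hwcap = getauxval(AT_HWCAP);

		printf("DSP ASE:       %s\n", (hwcap & HWCAP_MIPS_DSP) ? "yes" : "no");
		printf("Loongson EXT2: %s\n", (hwcap & HWCAP_LOONGSON_EXT2) ? "yes" : "no");
		return 0;
	}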



^ permalink raw reply related	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 027/119] MIPS: elf_hwcap: Export userspace ASEs
  2019-10-29 10:50   ` Jiaxun Yang
@ 2019-10-30  9:02     ` Greg Kroah-Hartman
  2019-10-30 13:22       ` [PATCH] " Jiaxun Yang
  2019-10-30 13:23       ` Jiaxun Yang
  0 siblings, 2 replies; 135+ messages in thread
From: Greg Kroah-Hartman @ 2019-10-30  9:02 UTC (permalink / raw)
  To: Jiaxun Yang
  Cc: linux-kernel, stable, Meng Zhuo, linux-mips, Paul Burton, Sasha Levin

On Tue, Oct 29, 2019 at 06:50:38PM +0800, Jiaxun Yang wrote:
> 
> On 2019/10/28 at 5:00 AM, Greg Kroah-Hartman wrote:
> > From: Jiaxun Yang <jiaxun.yang@flygoat.com>
> > 
> > [ Upstream commit 38dffe1e4dde1d3174fdce09d67370412843ebb5 ]
> > 
> > A Golang developer reported MIPS hwcap isn't reflecting instructions
> > that the processor actually supported so programs can't apply optimized
> > code at runtime.
> > 
> > Thus we export the ASEs that can be used in userspace programs.
> > 
> > Reported-by: Meng Zhuo <mengzhuo1203@gmail.com>
> > Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
> > Cc: linux-mips@vger.kernel.org
> > Cc: Paul Burton <paul.burton@mips.com>
> > Cc: <stable@vger.kernel.org> # 4.14+
> > Signed-off-by: Paul Burton <paul.burton@mips.com>
> > Signed-off-by: Sasha Levin <sashal@kernel.org>
> > ---
> >   arch/mips/include/uapi/asm/hwcap.h | 11 ++++++++++
> >   arch/mips/kernel/cpu-probe.c       | 33 ++++++++++++++++++++++++++++++
> >   2 files changed, 44 insertions(+)
> > 
> > diff --git a/arch/mips/include/uapi/asm/hwcap.h b/arch/mips/include/uapi/asm/hwcap.h
> > index 600ad8fd68356..2475294c3d185 100644
> > --- a/arch/mips/include/uapi/asm/hwcap.h
> > +++ b/arch/mips/include/uapi/asm/hwcap.h
> > @@ -5,5 +5,16 @@
> >   /* HWCAP flags */
> >   #define HWCAP_MIPS_R6		(1 << 0)
> >   #define HWCAP_MIPS_MSA		(1 << 1)
> > +#define HWCAP_MIPS_MIPS16	(1 << 3)
> > +#define HWCAP_MIPS_MDMX     (1 << 4)
> > +#define HWCAP_MIPS_MIPS3D   (1 << 5)
> > +#define HWCAP_MIPS_SMARTMIPS (1 << 6)
> > +#define HWCAP_MIPS_DSP      (1 << 7)
> > +#define HWCAP_MIPS_DSP2     (1 << 8)
> > +#define HWCAP_MIPS_DSP3     (1 << 9)
> > +#define HWCAP_MIPS_MIPS16E2 (1 << 10)
> > +#define HWCAP_LOONGSON_MMI  (1 << 11)
> > +#define HWCAP_LOONGSON_EXT  (1 << 12)
> > +#define HWCAP_LOONGSON_EXT2 (1 << 13)
> >   #endif /* _UAPI_ASM_HWCAP_H */
> > diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
> > index 3007ae1bb616a..c38cd62879f4e 100644
> > --- a/arch/mips/kernel/cpu-probe.c
> > +++ b/arch/mips/kernel/cpu-probe.c
> > @@ -2080,6 +2080,39 @@ void cpu_probe(void)
> >   		elf_hwcap |= HWCAP_MIPS_MSA;
> >   	}
> > +	if (cpu_has_mips16)
> > +		elf_hwcap |= HWCAP_MIPS_MIPS16;
> > +
> > +	if (cpu_has_mdmx)
> > +		elf_hwcap |= HWCAP_MIPS_MDMX;
> > +
> > +	if (cpu_has_mips3d)
> > +		elf_hwcap |= HWCAP_MIPS_MIPS3D;
> > +
> > +	if (cpu_has_smartmips)
> > +		elf_hwcap |= HWCAP_MIPS_SMARTMIPS;
> > +
> > +	if (cpu_has_dsp)
> > +		elf_hwcap |= HWCAP_MIPS_DSP;
> > +
> > +	if (cpu_has_dsp2)
> > +		elf_hwcap |= HWCAP_MIPS_DSP2;
> > +
> > +	if (cpu_has_dsp3)
> > +		elf_hwcap |= HWCAP_MIPS_DSP3;
> > +
> > +	if (cpu_has_loongson_mmi)
> > +		elf_hwcap |= HWCAP_LOONGSON_MMI;
> > +
> > +	if (cpu_has_loongson_mmi)
> > +		elf_hwcap |= HWCAP_LOONGSON_CAM;
> 
> Hi:
> 
> Sorry, there is a typo causing a build failure.
> 
> Should be:

Can you resend this in a format we can apply it in?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 135+ messages in thread

* [PATCH] MIPS: elf_hwcap: Export userspace ASEs
  2019-10-30  9:02     ` Greg Kroah-Hartman
@ 2019-10-30 13:22       ` Jiaxun Yang
  2019-12-03 12:25         ` Greg KH
  2019-10-30 13:23       ` Jiaxun Yang
  1 sibling, 1 reply; 135+ messages in thread
From: Jiaxun Yang @ 2019-10-30 13:22 UTC (permalink / raw)
  To: gregkh; +Cc: stable

A Golang developer reported MIPS hwcap isn't reflecting instructions
that the processor actually supported so programs can't apply optimized
code at runtime.

Thus we export the ASEs that can be used in userspace programs.

Reported-by: Meng Zhuo <mengzhuo1203@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-mips@vger.kernel.org
Cc: Paul Burton <paul.burton@mips.com>
Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Paul Burton <paul.burton@mips.com>
---
 arch/mips/include/uapi/asm/hwcap.h | 11 ++++++++++
 arch/mips/kernel/cpu-probe.c       | 33 ++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/arch/mips/include/uapi/asm/hwcap.h b/arch/mips/include/uapi/asm/hwcap.h
index a2aba4b059e63..1ade1daa49210 100644
--- a/arch/mips/include/uapi/asm/hwcap.h
+++ b/arch/mips/include/uapi/asm/hwcap.h
@@ -6,5 +6,16 @@
 #define HWCAP_MIPS_R6		(1 << 0)
 #define HWCAP_MIPS_MSA		(1 << 1)
 #define HWCAP_MIPS_CRC32	(1 << 2)
+#define HWCAP_MIPS_MIPS16	(1 << 3)
+#define HWCAP_MIPS_MDMX     (1 << 4)
+#define HWCAP_MIPS_MIPS3D   (1 << 5)
+#define HWCAP_MIPS_SMARTMIPS (1 << 6)
+#define HWCAP_MIPS_DSP      (1 << 7)
+#define HWCAP_MIPS_DSP2     (1 << 8)
+#define HWCAP_MIPS_DSP3     (1 << 9)
+#define HWCAP_MIPS_MIPS16E2 (1 << 10)
+#define HWCAP_LOONGSON_MMI  (1 << 11)
+#define HWCAP_LOONGSON_EXT  (1 << 12)
+#define HWCAP_LOONGSON_EXT2 (1 << 13)
 
 #endif /* _UAPI_ASM_HWCAP_H */
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index c2eb392597bf6..f521cbf934e76 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -2180,6 +2180,39 @@ void cpu_probe(void)
 		elf_hwcap |= HWCAP_MIPS_MSA;
 	}
 
+	if (cpu_has_mips16)
+		elf_hwcap |= HWCAP_MIPS_MIPS16;
+
+	if (cpu_has_mdmx)
+		elf_hwcap |= HWCAP_MIPS_MDMX;
+
+	if (cpu_has_mips3d)
+		elf_hwcap |= HWCAP_MIPS_MIPS3D;
+
+	if (cpu_has_smartmips)
+		elf_hwcap |= HWCAP_MIPS_SMARTMIPS;
+
+	if (cpu_has_dsp)
+		elf_hwcap |= HWCAP_MIPS_DSP;
+
+	if (cpu_has_dsp2)
+		elf_hwcap |= HWCAP_MIPS_DSP2;
+
+	if (cpu_has_dsp3)
+		elf_hwcap |= HWCAP_MIPS_DSP3;
+
+	if (cpu_has_mips16e2)
+		elf_hwcap |= HWCAP_MIPS_MIPS16E2;
+
+	if (cpu_has_loongson_mmi)
+		elf_hwcap |= HWCAP_LOONGSON_MMI;
+
+	if (cpu_has_loongson_ext)
+		elf_hwcap |= HWCAP_LOONGSON_EXT;
+
+	if (cpu_has_loongson_ext2)
+		elf_hwcap |= HWCAP_LOONGSON_EXT2;
+
 	if (cpu_has_vz)
 		cpu_probe_vz(c);
 

^ permalink raw reply related	[flat|nested] 135+ messages in thread

* [PATCH] MIPS: elf_hwcap: Export userspace ASEs
  2019-10-30  9:02     ` Greg Kroah-Hartman
  2019-10-30 13:22       ` [PATCH] " Jiaxun Yang
@ 2019-10-30 13:23       ` Jiaxun Yang
  1 sibling, 0 replies; 135+ messages in thread
From: Jiaxun Yang @ 2019-10-30 13:23 UTC (permalink / raw)
  To: gregkh; +Cc: stable

A Golang developer reported MIPS hwcap isn't reflecting instructions
that the processor actually supported so programs can't apply optimized
code at runtime.

Thus we export the ASEs that can be used in userspace programs.

Reported-by: Meng Zhuo <mengzhuo1203@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-mips@vger.kernel.org
Cc: Paul Burton <paul.burton@mips.com>
Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Paul Burton <paul.burton@mips.com>
---
 arch/mips/include/uapi/asm/hwcap.h | 11 ++++++++++
 arch/mips/kernel/cpu-probe.c       | 33 ++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/arch/mips/include/uapi/asm/hwcap.h b/arch/mips/include/uapi/asm/hwcap.h
index a2aba4b059e63..1ade1daa49210 100644
--- a/arch/mips/include/uapi/asm/hwcap.h
+++ b/arch/mips/include/uapi/asm/hwcap.h
@@ -6,5 +6,16 @@
 #define HWCAP_MIPS_R6		(1 << 0)
 #define HWCAP_MIPS_MSA		(1 << 1)
 #define HWCAP_MIPS_CRC32	(1 << 2)
+#define HWCAP_MIPS_MIPS16	(1 << 3)
+#define HWCAP_MIPS_MDMX     (1 << 4)
+#define HWCAP_MIPS_MIPS3D   (1 << 5)
+#define HWCAP_MIPS_SMARTMIPS (1 << 6)
+#define HWCAP_MIPS_DSP      (1 << 7)
+#define HWCAP_MIPS_DSP2     (1 << 8)
+#define HWCAP_MIPS_DSP3     (1 << 9)
+#define HWCAP_MIPS_MIPS16E2 (1 << 10)
+#define HWCAP_LOONGSON_MMI  (1 << 11)
+#define HWCAP_LOONGSON_EXT  (1 << 12)
+#define HWCAP_LOONGSON_EXT2 (1 << 13)
 
 #endif /* _UAPI_ASM_HWCAP_H */
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index c2eb392597bf6..f521cbf934e76 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -2180,6 +2180,39 @@ void cpu_probe(void)
 		elf_hwcap |= HWCAP_MIPS_MSA;
 	}
 
+	if (cpu_has_mips16)
+		elf_hwcap |= HWCAP_MIPS_MIPS16;
+
+	if (cpu_has_mdmx)
+		elf_hwcap |= HWCAP_MIPS_MDMX;
+
+	if (cpu_has_mips3d)
+		elf_hwcap |= HWCAP_MIPS_MIPS3D;
+
+	if (cpu_has_smartmips)
+		elf_hwcap |= HWCAP_MIPS_SMARTMIPS;
+
+	if (cpu_has_dsp)
+		elf_hwcap |= HWCAP_MIPS_DSP;
+
+	if (cpu_has_dsp2)
+		elf_hwcap |= HWCAP_MIPS_DSP2;
+
+	if (cpu_has_dsp3)
+		elf_hwcap |= HWCAP_MIPS_DSP3;
+
+	if (cpu_has_mips16e2)
+		elf_hwcap |= HWCAP_MIPS_MIPS16E2;
+
+	if (cpu_has_loongson_mmi)
+		elf_hwcap |= HWCAP_LOONGSON_MMI;
+
+	if (cpu_has_loongson_ext)
+		elf_hwcap |= HWCAP_LOONGSON_EXT;
+
+	if (cpu_has_loongson_ext2)
+		elf_hwcap |= HWCAP_LOONGSON_EXT2;
+
 	if (cpu_has_vz)
 		cpu_probe_vz(c);
 

^ permalink raw reply related	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true
  2019-10-27 21:00 ` [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true Greg Kroah-Hartman
@ 2019-10-31  7:54   ` Rantala, Tommi T. (Nokia - FI/Espoo)
  2019-10-31  9:14     ` Xin Long
  0 siblings, 1 reply; 135+ messages in thread
From: Rantala, Tommi T. (Nokia - FI/Espoo) @ 2019-10-31  7:54 UTC (permalink / raw)
  To: gregkh, linux-kernel
  Cc: lucien.xin, stable, davem, syzbot+d44f7bbebdea49dbc84a, marcelo.leitner

On Sun, 2019-10-27 at 22:00 +0100, Greg Kroah-Hartman wrote:
> From: Xin Long <lucien.xin@gmail.com>
> 
> [ Upstream commit 63dfb7938b13fa2c2fbcb45f34d065769eb09414 ]
> 
> syzbot reported a memory leak:
> 
>   BUG: memory leak, unreferenced object 0xffff888120b3d380 (size 64):
>   backtrace:
> 
>     [...] slab_alloc mm/slab.c:3319 [inline]
>     [...] kmem_cache_alloc+0x13f/0x2c0 mm/slab.c:3483
>     [...] sctp_bucket_create net/sctp/socket.c:8523 [inline]
>     [...] sctp_get_port_local+0x189/0x5a0 net/sctp/socket.c:8270
>     [...] sctp_do_bind+0xcc/0x200 net/sctp/socket.c:402
>     [...] sctp_bindx_add+0x4b/0xd0 net/sctp/socket.c:497
>     [...] sctp_setsockopt_bindx+0x156/0x1b0 net/sctp/socket.c:1022
>     [...] sctp_setsockopt net/sctp/socket.c:4641 [inline]
>     [...] sctp_setsockopt+0xaea/0x2dc0 net/sctp/socket.c:4611
>     [...] sock_common_setsockopt+0x38/0x50 net/core/sock.c:3147
>     [...] __sys_setsockopt+0x10f/0x220 net/socket.c:2084
>     [...] __do_sys_setsockopt net/socket.c:2100 [inline]
> 
> It was caused by when sending msgs without binding a port, in the path:
> inet_sendmsg() -> inet_send_prepare() -> inet_autobind() ->
> .get_port/sctp_get_port(), sp->bind_hash will be set while bp->port is
> not. Later when binding another port by sctp_setsockopt_bindx(), a new
> bucket will be created as bp->port is not set.
> 
> sctp's autobind is supposed to call sctp_autobind() where it does all
> things including setting bp->port. Since sctp_autobind() is called in
> sctp_sendmsg() if the sk is not yet bound, it should have skipped the
> auto bind.
> 
> This patch is to avoid calling inet_autobind() in inet_send_prepare()
> by changing sctp_prot .no_autobind with true, also remove the unused
> .get_port.

Hi,

I'm seeing SCTP oops in 4.14.151, reproducible easily with iperf:

# iperf3 -s -1 &
# iperf3 -c localhost --sctp

This patch was also included in 4.19.81, but there it seems to be working
fine.

Any ideas if this patch is valid for 4.14, or what's missing in 4.14 to
make this work?


[   29.179116] sctp: Hash tables configured (bind 256/256)
[   29.188846] BUG: unable to handle kernel NULL pointer dereference at           (null)
[   29.190189] IP:           (null)
[   29.190758] PGD 0 P4D 0 
[   29.191224] Oops: 0010 [#1] SMP PTI
[   29.191786] Modules linked in: hmac sctp libcrc32c isofs kvm_intel kvm irqbypass sch_fq_codel pcbc aesni_intel aes_x86_64 crypto_simd cryptd glue_helper ata_piix dm_mirror dm_region_hash dm_log dm_mod dax autofs4
[   29.194585] CPU: 5 PID: 733 Comm: iperf3 Not tainted 4.14.151-1.x86_64 #1
[   29.195689] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-2.fc30 04/01/2014
[   29.197009] task: ffff93edb0e65bc0 task.stack: ffff9fcdc11b8000
[   29.197916] RIP: 0010:          (null)
[   29.198532] RSP: 0018:ffff9fcdc11bbe50 EFLAGS: 00010246
[   29.199349] RAX: 0000000000000000 RBX: ffff93edb02d0680 RCX: 0000000000000002
[   29.200426] RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff93edb02d0680
[   29.201497] RBP: 000000000000001c R08: 0100000000000000 R09: 0000564277abb4e8
[   29.202577] R10: 0000000000000000 R11: 0000000000000000 R12: ffff9fcdc11bbe90
[   29.203656] R13: 0000564277abb4e0 R14: 0000000000000000 R15: 0000000000000000
[   29.204737] FS:  00007f0f6242cb80(0000) GS:ffff93edbfd40000(0000) knlGS:0000000000000000
[   29.205967] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   29.206863] CR2: 0000000000000000 CR3: 000000023037c002 CR4: 00000000003606e0
[   29.207958] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   29.209079] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   29.210162] Call Trace:
[   29.210577]  inet_autobind+0x2c/0x60
[   29.211172]  inet_dgram_connect+0x45/0x80
[   29.211808]  SYSC_connect+0x89/0xb0
[   29.212384]  ? sock_map_fd+0x3d/0x60
[   29.212960]  do_syscall_64+0x74/0x190
[   29.213517]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[   29.214212] RIP: 0033:0x7f0f626b5758
[   29.214710] RSP: 002b:00007ffc7ca624f8 EFLAGS: 00000246 ORIG_RAX: 000000000000002a
[   29.215727] RAX: ffffffffffffffda RBX: 0000564277aba260 RCX: 00007f0f626b5758
[   29.216660] RDX: 000000000000001c RSI: 0000564277abb4e0 RDI: 0000000000000005
[   29.217613] RBP: 0000000000000005 R08: 0000564277abc9d0 R09: 0000564277abb4e8
[   29.218604] R10: 0000000000000000 R11: 0000000000000246 R12: 00007f0f627a7170
[   29.219606] R13: 00007ffc7ca62520 R14: 0000564277aba260 R15: 0000000000000001
[   29.220596] Code:  Bad RIP value.
[   29.221075] RIP:           (null) RSP: ffff9fcdc11bbe50
[   29.221772] CR2: 0000000000000000
[   29.222260] ---[ end trace 831c4c1f11109ca0 ]---


> Reported-by: syzbot+d44f7bbebdea49dbc84a@syzkaller.appspotmail.com
> Signed-off-by: Xin Long <lucien.xin@gmail.com>
> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
> Signed-off-by: David S. Miller <davem@davemloft.net>
> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> ---
>  net/sctp/socket.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> --- a/net/sctp/socket.c
> +++ b/net/sctp/socket.c
> @@ -8313,7 +8313,7 @@ struct proto sctp_prot = {
>  	.backlog_rcv =	sctp_backlog_rcv,
>  	.hash        =	sctp_hash,
>  	.unhash      =	sctp_unhash,
> -	.get_port    =	sctp_get_port,
> +	.no_autobind =	true,
>  	.obj_size    =  sizeof(struct sctp_sock),
>  	.sysctl_mem  =  sysctl_sctp_mem,
>  	.sysctl_rmem =  sysctl_sctp_rmem,
> @@ -8352,7 +8352,7 @@ struct proto sctpv6_prot = {
>  	.backlog_rcv	= sctp_backlog_rcv,
>  	.hash		= sctp_hash,
>  	.unhash		= sctp_unhash,
> -	.get_port	= sctp_get_port,
> +	.no_autobind	= true,
>  	.obj_size	= sizeof(struct sctp6_sock),
>  	.sysctl_mem	= sysctl_sctp_mem,
>  	.sysctl_rmem	= sysctl_sctp_rmem,
> 
> 


^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true
  2019-10-31  7:54   ` Rantala, Tommi T. (Nokia - FI/Espoo)
@ 2019-10-31  9:14     ` Xin Long
  2019-10-31 12:09       ` Sasha Levin
  0 siblings, 1 reply; 135+ messages in thread
From: Xin Long @ 2019-10-31  9:14 UTC (permalink / raw)
  To: Rantala, Tommi T. (Nokia - FI/Espoo)
  Cc: gregkh, linux-kernel, stable, davem, syzbot+d44f7bbebdea49dbc84a,
	marcelo.leitner

On Thu, Oct 31, 2019 at 3:54 PM Rantala, Tommi T. (Nokia - FI/Espoo)
<tommi.t.rantala@nokia.com> wrote:
>
> On Sun, 2019-10-27 at 22:00 +0100, Greg Kroah-Hartman wrote:
> > From: Xin Long <lucien.xin@gmail.com>
> >
> > [ Upstream commit 63dfb7938b13fa2c2fbcb45f34d065769eb09414 ]
> >
> > syzbot reported a memory leak:
> >
> >   BUG: memory leak, unreferenced object 0xffff888120b3d380 (size 64):
> >   backtrace:
> >
> >     [...] slab_alloc mm/slab.c:3319 [inline]
> >     [...] kmem_cache_alloc+0x13f/0x2c0 mm/slab.c:3483
> >     [...] sctp_bucket_create net/sctp/socket.c:8523 [inline]
> >     [...] sctp_get_port_local+0x189/0x5a0 net/sctp/socket.c:8270
> >     [...] sctp_do_bind+0xcc/0x200 net/sctp/socket.c:402
> >     [...] sctp_bindx_add+0x4b/0xd0 net/sctp/socket.c:497
> >     [...] sctp_setsockopt_bindx+0x156/0x1b0 net/sctp/socket.c:1022
> >     [...] sctp_setsockopt net/sctp/socket.c:4641 [inline]
> >     [...] sctp_setsockopt+0xaea/0x2dc0 net/sctp/socket.c:4611
> >     [...] sock_common_setsockopt+0x38/0x50 net/core/sock.c:3147
> >     [...] __sys_setsockopt+0x10f/0x220 net/socket.c:2084
> >     [...] __do_sys_setsockopt net/socket.c:2100 [inline]
> >
> > It was caused by when sending msgs without binding a port, in the path:
> > inet_sendmsg() -> inet_send_prepare() -> inet_autobind() ->
> > .get_port/sctp_get_port(), sp->bind_hash will be set while bp->port is
> > not. Later when binding another port by sctp_setsockopt_bindx(), a new
> > bucket will be created as bp->port is not set.
> >
> > sctp's autobind is supposed to call sctp_autobind() where it does all
> > things including setting bp->port. Since sctp_autobind() is called in
> > sctp_sendmsg() if the sk is not yet bound, it should have skipped the
> > auto bind.
> >
> > This patch is to avoid calling inet_autobind() in inet_send_prepare()
> > by changing sctp_prot .no_autobind with true, also remove the unused
> > .get_port.
>
> Hi,
>
> I'm seeing SCTP oops in 4.14.151, reproducible easily with iperf:
>
> # iperf3 -s -1 &
> # iperf3 -c localhost --sctp
>
> This patch was also included in 4.19.81, but there it seems to be working
> fine.
>
> Any ideas if this patch is valid for 4.14, or what's missing in 4.14 to
> make this work?
pls get this commit into 4.14, which has been in 4.19:

commit 644fbdeacf1d3edd366e44b8ba214de9d1dd66a9
Author: Xin Long <lucien.xin@gmail.com>
Date:   Sun May 20 16:39:10 2018 +0800

    sctp: fix the issue that flags are ignored when using kernel_connect
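
Without that commit, connect() on an SCTP socket in 4.14 still goes through
inet_dgram_connect() -> inet_autobind(), which looks roughly like this
(simplified and paraphrased from memory of net/ipv4/af_inet.c, not copied
verbatim):

	static int inet_autobind(struct sock *sk)
	{
		struct inet_sock *inet = inet_sk(sk);

		lock_sock(sk);
		if (!inet->inet_num) {
			/* sctp_prot.get_port is NULL after this patch,
			 * hence the jump to address 0 in the trace below */
			if (sk->sk_prot->get_port(sk, 0)) {
				release_sock(sk);
				return -EAGAIN;
			}
			inet->inet_sport = htons(inet->inet_num);
		}
		release_sock(sk);
		return 0;
	}

644fbdeacf1d routes SCTP's connect() through sctp_inet_connect() instead of
inet_dgram_connect(), so this path is no longer hit.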

>
>
> [   29.179116] sctp: Hash tables configured (bind 256/256)
> [   29.188846] BUG: unable to handle kernel NULL pointer dereference
> at           (null)
> [   29.190189] IP:           (null)
> [   29.190758] PGD 0 P4D 0
> [   29.191224] Oops: 0010 [#1] SMP PTI
> [   29.191786] Modules linked in: hmac sctp libcrc32c isofs kvm_intel kvm
> irqbypass sch_fq_codel pcbc aesni_intel aes_x86_64 crypto_simd cryptd
> glue_helper ata_piix dm_mirror dm_region_hash dm_log dm_mod dax autofs4
> [   29.194585] CPU: 5 PID: 733 Comm: iperf3 Not tainted 4.14.151-1.x86_64
> #1
> [   29.195689] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> 1.12.0-2.fc30 04/01/2014
> [   29.197009] task: ffff93edb0e65bc0 task.stack: ffff9fcdc11b8000
> [   29.197916] RIP: 0010:          (null)
> [   29.198532] RSP: 0018:ffff9fcdc11bbe50 EFLAGS: 00010246
> [   29.199349] RAX: 0000000000000000 RBX: ffff93edb02d0680 RCX:
> 0000000000000002
> [   29.200426] RDX: 0000000000000001 RSI: 0000000000000000 RDI:
> ffff93edb02d0680
> [   29.201497] RBP: 000000000000001c R08: 0100000000000000 R09:
> 0000564277abb4e8
> [   29.202577] R10: 0000000000000000 R11: 0000000000000000 R12:
> ffff9fcdc11bbe90
> [   29.203656] R13: 0000564277abb4e0 R14: 0000000000000000 R15:
> 0000000000000000
> [   29.204737] FS:  00007f0f6242cb80(0000) GS:ffff93edbfd40000(0000)
> knlGS:0000000000000000
> [   29.205967] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [   29.206863] CR2: 0000000000000000 CR3: 000000023037c002 CR4:
> 00000000003606e0
> [   29.207958] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [   29.209079] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
> 0000000000000400
> [   29.210162] Call Trace:
> [   29.210577]  inet_autobind+0x2c/0x60
> [   29.211172]  inet_dgram_connect+0x45/0x80
> [   29.211808]  SYSC_connect+0x89/0xb0
> [   29.212384]  ? sock_map_fd+0x3d/0x60
> [   29.212960]  do_syscall_64+0x74/0x190
> [   29.213517]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
> [   29.214212] RIP: 0033:0x7f0f626b5758
> [   29.214710] RSP: 002b:00007ffc7ca624f8 EFLAGS: 00000246 ORIG_RAX:
> 000000000000002a
> [   29.215727] RAX: ffffffffffffffda RBX: 0000564277aba260 RCX:
> 00007f0f626b5758
> [   29.216660] RDX: 000000000000001c RSI: 0000564277abb4e0 RDI:
> 0000000000000005
> [   29.217613] RBP: 0000000000000005 R08: 0000564277abc9d0 R09:
> 0000564277abb4e8
> [   29.218604] R10: 0000000000000000 R11: 0000000000000246 R12:
> 00007f0f627a7170
> [   29.219606] R13: 00007ffc7ca62520 R14: 0000564277aba260 R15:
> 0000000000000001
> [   29.220596] Code:  Bad RIP value.
> [   29.221075] RIP:           (null) RSP: ffff9fcdc11bbe50
> [   29.221772] CR2: 0000000000000000
> [   29.222260] ---[ end trace 831c4c1f11109ca0 ]---
>
>
> > Reported-by: syzbot+d44f7bbebdea49dbc84a@syzkaller.appspotmail.com
> > Signed-off-by: Xin Long <lucien.xin@gmail.com>
> > Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
> > Signed-off-by: David S. Miller <davem@davemloft.net>
> > Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > ---
> >  net/sctp/socket.c |    4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > --- a/net/sctp/socket.c
> > +++ b/net/sctp/socket.c
> > @@ -8313,7 +8313,7 @@ struct proto sctp_prot = {
> >       .backlog_rcv =  sctp_backlog_rcv,
> >       .hash        =  sctp_hash,
> >       .unhash      =  sctp_unhash,
> > -     .get_port    =  sctp_get_port,
> > +     .no_autobind =  true,
> >       .obj_size    =  sizeof(struct sctp_sock),
> >       .sysctl_mem  =  sysctl_sctp_mem,
> >       .sysctl_rmem =  sysctl_sctp_rmem,
> > @@ -8352,7 +8352,7 @@ struct proto sctpv6_prot = {
> >       .backlog_rcv    = sctp_backlog_rcv,
> >       .hash           = sctp_hash,
> >       .unhash         = sctp_unhash,
> > -     .get_port       = sctp_get_port,
> > +     .no_autobind    = true,
> >       .obj_size       = sizeof(struct sctp6_sock),
> >       .sysctl_mem     = sysctl_sctp_mem,
> >       .sysctl_rmem    = sysctl_sctp_rmem,
> >
> >
>

^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true
  2019-10-31  9:14     ` Xin Long
@ 2019-10-31 12:09       ` Sasha Levin
  2019-11-01 17:58         ` Xin Long
  0 siblings, 1 reply; 135+ messages in thread
From: Sasha Levin @ 2019-10-31 12:09 UTC (permalink / raw)
  To: Xin Long
  Cc: Rantala, Tommi T. (Nokia - FI/Espoo),
	gregkh, linux-kernel, stable, davem, syzbot+d44f7bbebdea49dbc84a,
	marcelo.leitner

On Thu, Oct 31, 2019 at 05:14:15PM +0800, Xin Long wrote:
>On Thu, Oct 31, 2019 at 3:54 PM Rantala, Tommi T. (Nokia - FI/Espoo)
><tommi.t.rantala@nokia.com> wrote:
>>
>> On Sun, 2019-10-27 at 22:00 +0100, Greg Kroah-Hartman wrote:
>> > From: Xin Long <lucien.xin@gmail.com>
>> >
>> > [ Upstream commit 63dfb7938b13fa2c2fbcb45f34d065769eb09414 ]
>> >
>> > syzbot reported a memory leak:
>> >
>> >   BUG: memory leak, unreferenced object 0xffff888120b3d380 (size 64):
>> >   backtrace:
>> >
>> >     [...] slab_alloc mm/slab.c:3319 [inline]
>> >     [...] kmem_cache_alloc+0x13f/0x2c0 mm/slab.c:3483
>> >     [...] sctp_bucket_create net/sctp/socket.c:8523 [inline]
>> >     [...] sctp_get_port_local+0x189/0x5a0 net/sctp/socket.c:8270
>> >     [...] sctp_do_bind+0xcc/0x200 net/sctp/socket.c:402
>> >     [...] sctp_bindx_add+0x4b/0xd0 net/sctp/socket.c:497
>> >     [...] sctp_setsockopt_bindx+0x156/0x1b0 net/sctp/socket.c:1022
>> >     [...] sctp_setsockopt net/sctp/socket.c:4641 [inline]
>> >     [...] sctp_setsockopt+0xaea/0x2dc0 net/sctp/socket.c:4611
>> >     [...] sock_common_setsockopt+0x38/0x50 net/core/sock.c:3147
>> >     [...] __sys_setsockopt+0x10f/0x220 net/socket.c:2084
>> >     [...] __do_sys_setsockopt net/socket.c:2100 [inline]
>> >
>> > It was caused by when sending msgs without binding a port, in the path:
>> > inet_sendmsg() -> inet_send_prepare() -> inet_autobind() ->
>> > .get_port/sctp_get_port(), sp->bind_hash will be set while bp->port is
>> > not. Later when binding another port by sctp_setsockopt_bindx(), a new
>> > bucket will be created as bp->port is not set.
>> >
>> > sctp's autobind is supposed to call sctp_autobind() where it does all
>> > things including setting bp->port. Since sctp_autobind() is called in
>> > sctp_sendmsg() if the sk is not yet bound, it should have skipped the
>> > auto bind.
>> >
>> > This patch is to avoid calling inet_autobind() in inet_send_prepare()
>> > by changing sctp_prot .no_autobind with true, also remove the unused
>> > .get_port.
>>
>> Hi,
>>
>> I'm seeing SCTP oops in 4.14.151, reproducible easily with iperf:
>>
>> # iperf3 -s -1 &
>> # iperf3 -c localhost --sctp
>>
>> This patch was also included in 4.19.81, but there it seems to be working
>> fine.
>>
>> Any ideas if this patch is valid for 4.14, or what's missing in 4.14 to
>> make this work?
>pls get this commit into 4.14, which has been in 4.19:
>
>commit 644fbdeacf1d3edd366e44b8ba214de9d1dd66a9
>Author: Xin Long <lucien.xin@gmail.com>
>Date:   Sun May 20 16:39:10 2018 +0800
>
>    sctp: fix the issue that flags are ignored when using kernel_connect

Care to send a backport?

-- 
Thanks,
Sasha

^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true
  2019-10-31 12:09       ` Sasha Levin
@ 2019-11-01 17:58         ` Xin Long
  2019-11-01 18:48           ` Sasha Levin
  0 siblings, 1 reply; 135+ messages in thread
From: Xin Long @ 2019-11-01 17:58 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Rantala, Tommi T. (Nokia - FI/Espoo),
	gregkh, linux-kernel, stable, davem, syzbot+d44f7bbebdea49dbc84a,
	marcelo.leitner

On Thu, Oct 31, 2019 at 8:10 PM Sasha Levin <sashal@kernel.org> wrote:
>
> On Thu, Oct 31, 2019 at 05:14:15PM +0800, Xin Long wrote:
> >On Thu, Oct 31, 2019 at 3:54 PM Rantala, Tommi T. (Nokia - FI/Espoo)
> ><tommi.t.rantala@nokia.com> wrote:
> >>
> >> On Sun, 2019-10-27 at 22:00 +0100, Greg Kroah-Hartman wrote:
> >> > From: Xin Long <lucien.xin@gmail.com>
> >> >
> >> > [ Upstream commit 63dfb7938b13fa2c2fbcb45f34d065769eb09414 ]
> >> >
> >> > syzbot reported a memory leak:
> >> >
> >> >   BUG: memory leak, unreferenced object 0xffff888120b3d380 (size 64):
> >> >   backtrace:
> >> >
> >> >     [...] slab_alloc mm/slab.c:3319 [inline]
> >> >     [...] kmem_cache_alloc+0x13f/0x2c0 mm/slab.c:3483
> >> >     [...] sctp_bucket_create net/sctp/socket.c:8523 [inline]
> >> >     [...] sctp_get_port_local+0x189/0x5a0 net/sctp/socket.c:8270
> >> >     [...] sctp_do_bind+0xcc/0x200 net/sctp/socket.c:402
> >> >     [...] sctp_bindx_add+0x4b/0xd0 net/sctp/socket.c:497
> >> >     [...] sctp_setsockopt_bindx+0x156/0x1b0 net/sctp/socket.c:1022
> >> >     [...] sctp_setsockopt net/sctp/socket.c:4641 [inline]
> >> >     [...] sctp_setsockopt+0xaea/0x2dc0 net/sctp/socket.c:4611
> >> >     [...] sock_common_setsockopt+0x38/0x50 net/core/sock.c:3147
> >> >     [...] __sys_setsockopt+0x10f/0x220 net/socket.c:2084
> >> >     [...] __do_sys_setsockopt net/socket.c:2100 [inline]
> >> >
> >> > It was caused by when sending msgs without binding a port, in the path:
> >> > inet_sendmsg() -> inet_send_prepare() -> inet_autobind() ->
> >> > .get_port/sctp_get_port(), sp->bind_hash will be set while bp->port is
> >> > not. Later when binding another port by sctp_setsockopt_bindx(), a new
> >> > bucket will be created as bp->port is not set.
> >> >
> >> > sctp's autobind is supposed to call sctp_autobind() where it does all
> >> > things including setting bp->port. Since sctp_autobind() is called in
> >> > sctp_sendmsg() if the sk is not yet bound, it should have skipped the
> >> > auto bind.
> >> >
> >> > This patch is to avoid calling inet_autobind() in inet_send_prepare()
> >> > by changing sctp_prot .no_autobind with true, also remove the unused
> >> > .get_port.
> >>
> >> Hi,
> >>
> >> I'm seeing SCTP oops in 4.14.151, reproducible easily with iperf:
> >>
> >> # iperf3 -s -1 &
> >> # iperf3 -c localhost --sctp
> >>
> >> This patch was also included in 4.19.81, but there it seems to be working
> >> fine.
> >>
> >> Any ideas if this patch is valid for 4.14, or what's missing in 4.14 to
> >> make this work?
> >pls get this commit into 4.14, which has been in 4.19:
> >
> >commit 644fbdeacf1d3edd366e44b8ba214de9d1dd66a9
> >Author: Xin Long <lucien.xin@gmail.com>
> >Date:   Sun May 20 16:39:10 2018 +0800
> >
> >    sctp: fix the issue that flags are ignored when using kernel_connect
>
> Care to send a backport?
Sure, I haven't sent a backport for 4.14.y yet.
After I do the cherry-pick, what's the next step? Post it upstream
and Cc someone?

>
> --
> Thanks,
> Sasha

^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true
  2019-11-01 17:58         ` Xin Long
@ 2019-11-01 18:48           ` Sasha Levin
  0 siblings, 0 replies; 135+ messages in thread
From: Sasha Levin @ 2019-11-01 18:48 UTC (permalink / raw)
  To: Xin Long
  Cc: Rantala, Tommi T. (Nokia - FI/Espoo),
	gregkh, linux-kernel, stable, davem, syzbot+d44f7bbebdea49dbc84a,
	marcelo.leitner

On Sat, Nov 02, 2019 at 01:58:33AM +0800, Xin Long wrote:
>On Thu, Oct 31, 2019 at 8:10 PM Sasha Levin <sashal@kernel.org> wrote:
>>
>> On Thu, Oct 31, 2019 at 05:14:15PM +0800, Xin Long wrote:
>> >On Thu, Oct 31, 2019 at 3:54 PM Rantala, Tommi T. (Nokia - FI/Espoo)
>> ><tommi.t.rantala@nokia.com> wrote:
>> >>
>> >> On Sun, 2019-10-27 at 22:00 +0100, Greg Kroah-Hartman wrote:
>> >> > From: Xin Long <lucien.xin@gmail.com>
>> >> >
>> >> > [ Upstream commit 63dfb7938b13fa2c2fbcb45f34d065769eb09414 ]
>> >> >
>> >> > syzbot reported a memory leak:
>> >> >
>> >> >   BUG: memory leak, unreferenced object 0xffff888120b3d380 (size 64):
>> >> >   backtrace:
>> >> >
>> >> >     [...] slab_alloc mm/slab.c:3319 [inline]
>> >> >     [...] kmem_cache_alloc+0x13f/0x2c0 mm/slab.c:3483
>> >> >     [...] sctp_bucket_create net/sctp/socket.c:8523 [inline]
>> >> >     [...] sctp_get_port_local+0x189/0x5a0 net/sctp/socket.c:8270
>> >> >     [...] sctp_do_bind+0xcc/0x200 net/sctp/socket.c:402
>> >> >     [...] sctp_bindx_add+0x4b/0xd0 net/sctp/socket.c:497
>> >> >     [...] sctp_setsockopt_bindx+0x156/0x1b0 net/sctp/socket.c:1022
>> >> >     [...] sctp_setsockopt net/sctp/socket.c:4641 [inline]
>> >> >     [...] sctp_setsockopt+0xaea/0x2dc0 net/sctp/socket.c:4611
>> >> >     [...] sock_common_setsockopt+0x38/0x50 net/core/sock.c:3147
>> >> >     [...] __sys_setsockopt+0x10f/0x220 net/socket.c:2084
>> >> >     [...] __do_sys_setsockopt net/socket.c:2100 [inline]
>> >> >
>> >> > It was caused by when sending msgs without binding a port, in the path:
>> >> > inet_sendmsg() -> inet_send_prepare() -> inet_autobind() ->
>> >> > .get_port/sctp_get_port(), sp->bind_hash will be set while bp->port is
>> >> > not. Later when binding another port by sctp_setsockopt_bindx(), a new
>> >> > bucket will be created as bp->port is not set.
>> >> >
>> >> > sctp's autobind is supposed to call sctp_autobind() where it does all
>> >> > things including setting bp->port. Since sctp_autobind() is called in
>> >> > sctp_sendmsg() if the sk is not yet bound, it should have skipped the
>> >> > auto bind.
>> >> >
>> >> > This patch is to avoid calling inet_autobind() in inet_send_prepare()
>> >> > by changing sctp_prot .no_autobind with true, also remove the unused
>> >> > .get_port.
>> >>
>> >> Hi,
>> >>
>> >> I'm seeing SCTP oops in 4.14.151, reproducible easily with iperf:
>> >>
>> >> # iperf3 -s -1 &
>> >> # iperf3 -c localhost --sctp
>> >>
>> >> This patch was also included in 4.19.81, but there it seems to be working
>> >> fine.
>> >>
>> >> Any ideas if this patch is valid for 4.14, or what's missing in 4.14 to
>> >> make this work?
>> >pls get this commit into 4.14, which has been in 4.19:
>> >
>> >commit 644fbdeacf1d3edd366e44b8ba214de9d1dd66a9
>> >Author: Xin Long <lucien.xin@gmail.com>
>> >Date:   Sun May 20 16:39:10 2018 +0800
>> >
>> >    sctp: fix the issue that flags are ignored when using kernel_connect
>>
>> Care to send a backport?
>Sure, I haven't sent a backport for 4.14.y yet.
>After I do the cherry-pick, what's the next step? Post it upstream
>and Cc someone?

Just make sure stable@vger.kernel.org is Cc'ed.

-- 
Thanks,
Sasha

^ permalink raw reply	[flat|nested] 135+ messages in thread

* Re: [PATCH] MIPS: elf_hwcap: Export userspace ASEs
  2019-10-30 13:22       ` [PATCH] " Jiaxun Yang
@ 2019-12-03 12:25         ` Greg KH
  0 siblings, 0 replies; 135+ messages in thread
From: Greg KH @ 2019-12-03 12:25 UTC (permalink / raw)
  To: Jiaxun Yang; +Cc: stable

On Wed, Oct 30, 2019 at 09:22:24PM +0800, Jiaxun Yang wrote:
> A Golang developer reported MIPS hwcap isn't reflecting instructions
> that the processor actually supported so programs can't apply optimized
> code at runtime.
> 
> Thus we export the ASEs that can be used in userspace programs.
> 
> Reported-by: Meng Zhuo <mengzhuo1203@gmail.com>
> Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
> Cc: linux-mips@vger.kernel.org
> Cc: Paul Burton <paul.burton@mips.com>
> Cc: <stable@vger.kernel.org> # 4.14+
> Signed-off-by: Paul Burton <paul.burton@mips.com>
> ---
>  arch/mips/include/uapi/asm/hwcap.h | 11 ++++++++++
>  arch/mips/kernel/cpu-probe.c       | 33 ++++++++++++++++++++++++++++++
>  2 files changed, 44 insertions(+)

This fails to apply to the current 4.14.y tree:

	checking file arch/mips/include/uapi/asm/hwcap.h
	Hunk #1 FAILED at 6.
	1 out of 1 hunk FAILED
	checking file arch/mips/kernel/cpu-probe.c
	Hunk #1 succeeded at 2076 (offset -104 lines).

Can you refresh it and resend?  Also remember to include the git id that
this patch has in Linus's tree; I had to look it up by hand :(

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 135+ messages in thread

end of thread, other threads:[~2019-12-03 12:25 UTC | newest]

Thread overview: 135+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-27 20:59 [PATCH 4.14 000/119] 4.14.151-stable review Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 001/119] scsi: ufs: skip shutdown if hba is not powered Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 002/119] scsi: megaraid: disable device when probe failed after enabled device Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 003/119] scsi: qla2xxx: Fix unbound sleep in fcport delete path Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 004/119] ARM: OMAP2+: Fix missing reset done flag for am3 and am43 Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 005/119] ieee802154: ca8210: prevent memory leak Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 006/119] ARM: dts: am4372: Set memory bandwidth limit for DISPC Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 007/119] net: dsa: qca8k: Use up to 7 ports for all operations Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 008/119] MIPS: dts: ar9331: fix interrupt-controller size Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 009/119] xen/efi: Set nonblocking callbacks Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 010/119] nl80211: fix null pointer dereference Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 011/119] mac80211: fix txq " Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 012/119] mips: Loongson: Fix the link time qualifier of serial_exit() Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 013/119] net: hisilicon: Fix usage of uninitialized variable in function mdio_sc_cfg_reg_write() Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 014/119] r8152: Set macpassthru in reset_resume callback Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 015/119] namespace: fix namespace.pl script to support relative paths Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 016/119] md/raid0: fix warning message for parameter default_layout Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 017/119] Revert "drm/radeon: Fix EEH during kexec" Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 018/119] ocfs2: fix panic due to ocfs2_wq is null Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 019/119] ipv4: Return -ENETUNREACH if we cant create route but saddr is valid Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 020/119] net: bcmgenet: Fix RGMII_MODE_EN value for GENET v1/2/3 Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 021/119] net: bcmgenet: Set phydev->dev_flags only for internal PHYs Greg Kroah-Hartman
2019-10-27 20:59 ` [PATCH 4.14 022/119] net: i82596: fix dma_alloc_attr for sni_82596 Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 023/119] net: stmmac: disable/enable ptp_ref_clk in suspend/resume flow Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 024/119] sctp: change sctp_prot .no_autobind with true Greg Kroah-Hartman
2019-10-31  7:54   ` Rantala, Tommi T. (Nokia - FI/Espoo)
2019-10-31  9:14     ` Xin Long
2019-10-31 12:09       ` Sasha Levin
2019-11-01 17:58         ` Xin Long
2019-11-01 18:48           ` Sasha Levin
2019-10-27 21:00 ` [PATCH 4.14 025/119] net: avoid potential infinite loop in tc_ctl_action() Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 026/119] MIPS: Treat Loongson Extensions as ASEs Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 027/119] MIPS: elf_hwcap: Export userspace ASEs Greg Kroah-Hartman
2019-10-29 10:50   ` Jiaxun Yang
2019-10-30  9:02     ` Greg Kroah-Hartman
2019-10-30 13:22       ` [PATCH] " Jiaxun Yang
2019-12-03 12:25         ` Greg KH
2019-10-30 13:23       ` Jiaxun Yang
2019-10-27 21:00 ` [PATCH 4.14 028/119] loop: Add LOOP_SET_DIRECT_IO to compat ioctl Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 029/119] memfd: Fix locking when tagging pins Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 030/119] USB: legousbtower: fix memleak on disconnect Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 031/119] ALSA: hda/realtek - Add support for ALC711 Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 032/119] usb: udc: lpc32xx: fix bad bit shift operation Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 033/119] USB: serial: ti_usb_3410_5052: fix port-close races Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 034/119] USB: ldusb: fix memleak on disconnect Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 035/119] USB: usblp: fix use-after-free " Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 036/119] USB: ldusb: fix read info leaks Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 037/119] arm64: sysreg: Move to use definitions for all the SCTLR bits Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 038/119] arm64: Expose support for optional ARMv8-A features Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 039/119] arm64: Fix the feature type for ID register fields Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 040/119] arm64: v8.4: Support for new floating point multiplication instructions Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 041/119] arm64: Documentation: cpu-feature-registers: Remove RES0 fields Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 042/119] arm64: Expose Arm v8.4 features Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 043/119] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h> Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 044/119] arm64: add PSR_AA32_* definitions Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 045/119] arm64: Introduce sysreg_clear_set() Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 046/119] arm64: capabilities: Update prototype for enable call back Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 047/119] arm64: capabilities: Move errata work around check on boot CPU Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 048/119] arm64: capabilities: Move errata processing code Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 049/119] arm64: capabilities: Prepare for fine grained capabilities Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 050/119] arm64: capabilities: Add flags to handle the conflicts on late CPU Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 051/119] arm64: capabilities: Unify the verification Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 052/119] arm64: capabilities: Filter the entries based on a given mask Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 053/119] arm64: capabilities: Prepare for grouping features and errata work arounds Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 054/119] arm64: capabilities: Split the processing of " Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 055/119] arm64: capabilities: Allow features based on local CPU scope Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 056/119] arm64: capabilities: Group handling of features and errata workarounds Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 057/119] arm64: capabilities: Introduce weak features based on local CPU Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 058/119] arm64: capabilities: Restrict KPTI detection to boot-time CPUs Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 059/119] arm64: capabilities: Add support for features enabled early Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 060/119] arm64: capabilities: Change scope of VHE to Boot CPU feature Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 061/119] arm64: capabilities: Clean up midr range helpers Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 062/119] arm64: Add helpers for checking CPU MIDR against a range Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 063/119] arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35 Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 064/119] arm64: capabilities: Add support for checks based on a list of MIDRs Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 065/119] arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 066/119] arm64: don't zero DIT on signal return Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 067/119] arm64: Get rid of __smccc_workaround_1_hvc_* Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 068/119] arm64: cpufeature: Detect SSBS and advertise to userspace Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 069/119] arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3 Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 070/119] KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 071/119] arm64: fix SSBS sanitization Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 072/119] arm64: Add sysfs vulnerability show for spectre-v1 Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 073/119] arm64: add sysfs vulnerability show for meltdown Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 074/119] arm64: enable generic CPU vulnerabilites support Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 075/119] arm64: Always enable ssb vulnerability detection Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 076/119] arm64: Provide a command line to disable spectre_v2 mitigation Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 077/119] arm64: Advertise mitigation of Spectre-v2, or lack thereof Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 078/119] arm64: Always enable spectre-v2 vulnerability detection Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 079/119] arm64: add sysfs vulnerability show for spectre-v2 Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 080/119] arm64: add sysfs vulnerability show for speculative store bypass Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 081/119] arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB Greg Kroah-Hartman
2019-10-27 21:00 ` [PATCH 4.14 082/119] arm64: Force SSBS on context switch Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 083/119] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2 Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 084/119] arm64/speculation: Support mitigations= cmdline option Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 085/119] MIPS: tlbex: Fix build_restore_pagemask KScratch restore Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 086/119] staging: wlan-ng: fix exit return when sme->key_idx >= NUM_WEPKEYS Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 087/119] scsi: sd: Ignore a failure to sync cache due to lack of authorization Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 088/119] scsi: core: save/restore command resid for error handling Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 089/119] scsi: core: try to get module before removing device Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 090/119] scsi: ch: Make it possible to open a ch device multiple times again Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 091/119] Input: da9063 - fix capability and drop KEY_SLEEP Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 092/119] Input: synaptics-rmi4 - avoid processing unknown IRQs Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 093/119] ASoC: rsnd: Reinitialize bit clock inversion flag for every format setting Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 094/119] cfg80211: wext: avoid copying malformed SSIDs Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 095/119] mac80211: Reject malformed SSID elements Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 096/119] drm/edid: Add 6 bpc quirk for SDC panel in Lenovo G50 Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 097/119] drm/amdgpu: Bail earlier when amdgpu.cik_/si_support is not set to 1 Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 098/119] drivers/base/memory.c: don't access uninitialized memmaps in soft_offline_page_store() Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 099/119] fs/proc/page.c: don't access uninitialized memmaps in fs/proc/page.c Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 100/119] scsi: zfcp: fix reaction on bit error threshold notification Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 101/119] mm/slub: fix a deadlock in show_slab_objects() Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 102/119] mm/page_owner: don't access uninitialized memmaps when reading /proc/pagetypeinfo Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 103/119] hugetlbfs: don't access uninitialized memmaps in pfn_range_valid_gigantic() Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 104/119] xtensa: drop EXPORT_SYMBOL for outs*/ins* Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 105/119] parisc: Fix vmap memory leak in ioremap()/iounmap() Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 106/119] CIFS: avoid using MID 0xFFFF Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 107/119] x86/boot/64: Make level2_kernel_pgt pages invalid outside kernel area Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 108/119] pinctrl: armada-37xx: fix control of pins 32 and up Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 109/119] pinctrl: armada-37xx: swap polarity on LED group Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 110/119] btrfs: block-group: Fix a memory leak due to missing btrfs_put_block_group() Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 111/119] memstick: jmb38x_ms: Fix an error handling path in 'jmb38x_ms_probe()' Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 112/119] cpufreq: Avoid cpufreq_suspend() deadlock on system shutdown Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 113/119] xen/netback: fix error path of xenvif_connect_data() Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 114/119] PCI: PM: Fix pci_power_up() Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 115/119] KVM: X86: introduce invalidate_gpa argument to tlb flush Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 116/119] kvm: vmx: Introduce lapic_mode enumeration Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 117/119] kvm: apic: Flush TLB after APIC mode/address change if VPIDs are in use Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 118/119] kvm: vmx: Basic APIC virtualization controls have three settings Greg Kroah-Hartman
2019-10-27 21:01 ` [PATCH 4.14 119/119] RDMA/cxgb4: Do not dma memory off of the stack Greg Kroah-Hartman
2019-10-28  2:35 ` [PATCH 4.14 000/119] 4.14.151-stable review kernelci.org bot
2019-10-28  5:59 ` Didik Setiawan
2019-10-28 13:35 ` Guenter Roeck
2019-10-28 21:39 ` Jon Hunter
2019-10-29  7:15 ` Naresh Kamboju
