1. 25 Jan, 2017 3 commits
  2. 20 Jan, 2017 5 commits
  3. 19 Jan, 2017 3 commits
    • arm64: avoid returning from bad_mode · 7d9e8f71
      Mark Rutland authored
      Generally, taking an unexpected exception should be a fatal event, and
      bad_mode is intended to cater for this. However, it should be possible
      to contain unexpected synchronous exceptions from EL0 without bringing
      the kernel down, by sending a SIGILL to the task.
      
      We tried to apply this approach in commit 9955ac47 ("arm64: don't kill
      the kernel on a bad esr from el0"), by sending a signal for any bad_mode
      call resulting from an EL0 exception.
      
      However, this also applies to other unexpected exceptions, such as
      SError and FIQ. The entry paths for these exceptions branch to bad_mode
      without configuring the link register, and have no kernel_exit. Thus, if
      we take one of these exceptions from EL0, bad_mode will eventually
      return to the original user link register value.
      
      This patch fixes this by introducing a new bad_el0_sync handler to cater
      for the recoverable case, and restoring bad_mode to its original state,
      whereby it calls panic() and never returns. The recoverable case
      branches to bad_el0_sync with a bl, and returns to userspace via the
      usual ret_to_user mechanism.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: 9955ac47 ("arm64: don't kill the kernel on a bad esr from el0")
      Reported-by: Mark Salter <msalter@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7d9e8f71
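      A rough C sketch of the recoverable path described above (illustrative
      only, not the actual patch; the handler name is hypothetical and the
      usual kernel signal helpers such as force_sig_info() are assumed):

        /* Hypothetical sketch: contain a bad EL0 synchronous exception by
         * signalling the offending task instead of panicking the kernel. */
        #include <linux/printk.h>
        #include <linux/ptrace.h>
        #include <linux/sched.h>
        #include <linux/signal.h>

        static void bad_el0_sync_sketch(struct pt_regs *regs, unsigned int esr)
        {
                siginfo_t info;

                pr_crit("Bad EL0 synchronous exception, ESR 0x%08x\n", esr);

                info.si_signo = SIGILL;
                info.si_errno = 0;
                info.si_code  = ILL_ILLOPC;
                info.si_addr  = (void __user *)instruction_pointer(regs);

                /* Only the current task dies; the kernel keeps running and
                 * the entry code returns through ret_to_user as usual. */
                force_sig_info(SIGILL, &info, current);
        }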
    • ARC: Revert "ARC: mm: IOC: Don't enable IOC by default" · d0e73e2a
      Vineet Gupta authored
      The programming model has been fixed by the previous patches, so re-enable
      it by default.

      This reverts commit 23cb1f64.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      d0e73e2a
    • ARC: mm: split arc_cache_init to allow __init reaping of bulk · 76894a72
      Vineet Gupta authored
      
      arc_cache_init() is called for each core, so it can't be tagged __init.
      However, the bulk of it is only executed by the master core and is thus a
      candidate for __init reaping.

      So split it up to allow that.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      76894a72
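      A minimal sketch of the split pattern described above (the helper name,
      the __ref annotation and the empty bodies are illustrative assumptions,
      not a quote of the patch):

        #include <linux/init.h>
        #include <linux/smp.h>

        /* One-time setup run only by the master core; tagged __init so its
         * code and data can be reaped once boot is done. */
        static void __init arc_cache_init_master_sketch(void)
        {
                /* probe cache geometry, pick flush routines, set up IOC, ... */
        }

        /* Called on every core (including hotplugged ones), so it must stay
         * resident; __ref marks the call into __init code as intentional,
         * since it only happens on the boot (master) core. */
        void __ref arc_cache_init_sketch(void)
        {
                unsigned int cpu = smp_processor_id();

                /* per-core setup stays here ... */

                if (!cpu)
                        arc_cache_init_master_sketch();
        }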
  4. 18 Jan, 2017 17 commits
  5. 17 Jan, 2017 6 commits
    • ARM: dts: omap3: Fix Card Detect and Write Protect on Logic PD SOM-LV · 1ea6af32
      Adam Ford authored
      This fixes commit ab8dd3ae ("ARM: DTS: Add minimal Support for
      Logic PD DM3730 SOM-LV") where the Card Detect and Write Protect
      pins were improperly configured.
      
      Fixes: ab8dd3ae ("ARM: DTS: Add minimal Support for Logic PD DM3730 SOM-LV")
      Signed-off-by: Adam Ford <aford173@gmail.com>
      Signed-off-by: Tony Lindgren <tony@atomide.com>
      1ea6af32
    • ARM64: dts: meson-gxbb-odroidc2: Disable SCPI DVFS · f7bcd4b6
      Neil Armstrong authored
      The current hardware is not able to run with all cores enabled at a
      cluster frequency higher than 1536 MHz.
      But the currently shipped u-boot for the platform still reports an OPP
      table with possible DVFS frequencies up to 2 GHz, and this will not change
      since the out-of-tree Linux tree supports limiting the OPPs with a kernel
      parameter.
      A recent u-boot change reports the boot-time DVFS at around 100 MHz, and
      the default performance cpufreq governor then sets the maximum frequency.
      Previous versions of u-boot reported the OPP to already be at the maximum
      and left it as-is.
      Nevertheless, other governors like ondemand could set the maximum
      frequency and make the system crash.
      
      This patch disables the DVFS clock and disables cpufreq.
      
      Fixes: 70db166a ("ARM64: dts: meson-gxbb: Add SCPI with cpufreq & sensors Nodes")
      Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
      Signed-off-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Olof Johansson <olof@lixom.net>
      f7bcd4b6
    • KVM: x86: fix fixing of hypercalls · ce2e852e
      Dmitry Vyukov authored
      
      emulator_fix_hypercall() replaces the hypercall with the vmcall
      instruction, but it does not handle the GP exception properly when writing
      the new instruction. It can return X86EMUL_PROPAGATE_FAULT without setting
      exception information. This leads to incorrect emulation and triggers
      WARN_ON(ctxt->exception.vector > 0x1f) in x86_emulate_insn(),
      as discovered by the syzkaller fuzzer:
      
      WARNING: CPU: 2 PID: 18646 at arch/x86/kvm/emulate.c:5558
      Call Trace:
       warn_slowpath_null+0x2c/0x40 kernel/panic.c:582
       x86_emulate_insn+0x16a5/0x4090 arch/x86/kvm/emulate.c:5572
       x86_emulate_instruction+0x403/0x1cc0 arch/x86/kvm/x86.c:5618
       emulate_instruction arch/x86/include/asm/kvm_host.h:1127 [inline]
       handle_exception+0x594/0xfd0 arch/x86/kvm/vmx.c:5762
       vmx_handle_exit+0x2b7/0x38b0 arch/x86/kvm/vmx.c:8625
       vcpu_enter_guest arch/x86/kvm/x86.c:6888 [inline]
       vcpu_run arch/x86/kvm/x86.c:6947 [inline]
      
      Set the exception information when the write in emulator_fix_hypercall()
      fails.
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: kvm@vger.kernel.org
      Cc: syzkaller@googlegroups.com
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      ce2e852e
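      A sketch of the idea (simplified and hedged, not a verbatim quote of the
      patch): let the emulated write fill in ctxt->exception and propagate its
      return value, instead of returning X86EMUL_PROPAGATE_FAULT with empty
      exception state. The _sketch suffix marks the body as my paraphrase of
      the changelog.

        static int emulator_fix_hypercall_sketch(struct x86_emulate_ctxt *ctxt)
        {
                struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
                char instruction[3];
                unsigned long rip = kvm_rip_read(vcpu);

                /* Build the vmcall replacement for the guest's hypercall. */
                kvm_x86_ops->patch_hypercall(vcpu, instruction);

                /* Passing &ctxt->exception lets a faulting write record its
                 * vector and error code, so x86_emulate_insn() can inject a
                 * proper fault instead of tripping the WARN_ON above. */
                return emulator_write_emulated(ctxt, rip, instruction, 3,
                                               &ctxt->exception);
        }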
    • arm64: Fix swiotlb fallback allocation · 524dabe1
      Alexander Graf authored
      Commit b67a8b29 introduced logic to skip swiotlb allocation when all memory
      is DMA accessible anyway.
      
      While this is a great idea, __dma_alloc still calls the swiotlb code
      unconditionally to allocate memory when there is no CMA memory available.
      The swiotlb code is used there to ensure that we at least try
      get_free_pages().

      Without initialization, the swiotlb allocation code tries to access
      io_tlb_list, which is NULL. That results in a stack trace like this:
      
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
        [...]
        [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
        [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
        [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
        [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
        [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
        [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
        [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
        [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
        [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
        [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
        [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
        [...]
      
      Thankfully, the swiotlb code just learned how to not do allocations with
      the FORCE_NO option. This patch configures the swiotlb code to use that
      mode if we decide not to initialize the swiotlb framework.
      
      Fixes: b67a8b29 ("arm64: mm: only initialize swiotlb when necessary")
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Cc: Jisheng Zhang <jszhang@marvell.com>
      Cc: Geert Uytterhoeven <geert+renesas@glider.be>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      524dabe1
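      A sketch of the init-time decision described above (assuming the
      SWIOTLB_NO_FORCE flag the swiotlb core grew in the same cycle;
      arm64_dma_phys_limit is the arm64-internal DMA limit, and the function
      name and exact condition are paraphrased from the changelog, not quoted
      from the patch):

        #include <linux/bootmem.h>
        #include <linux/swiotlb.h>

        /* Either initialize the bounce buffers, or tell the swiotlb core to
         * refuse allocations rather than dereference an uninitialized
         * io_tlb_list later. */
        static void __init arm64_swiotlb_decision_sketch(void)
        {
                if (swiotlb_force == SWIOTLB_FORCE ||
                    max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
                        swiotlb_init(1);
                else
                        swiotlb_force = SWIOTLB_NO_FORCE;
        }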
    • perf/x86/intel: Handle exclusive threadid correctly on CPU hotplug · 4e71de79
      Zhou Chengming authored
      
      The CPU hotplug function intel_pmu_cpu_starting() sets
      cpu_hw_events.excl_thread_id unconditionally to 1 when the shared exclusive
      counters data structure is already available for the sibling thread.
      
      This works during the boot process because the first sibling gets threadid
      0 assigned and the second sibling which shares the data structure gets 1.
      
      But when the first thread of the core is offlined and onlined again it
      shares the data structure with the second thread and gets exclusive thread
      id 1 assigned as well.
      
      Prevent this by checking the threadid of the already online thread.
      
      [ tglx: Rewrote changelog ]
      Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
      Cc: NuoHan Qiao <qiaonuohan@huawei.com>
      Cc: ak@linux.intel.com
      Cc: peterz@infradead.org
      Cc: kan.liang@intel.com
      Cc: dave.hansen@linux.intel.com
      Cc: eranian@google.com
      Cc: qiaonuohan@huawei.com
      Cc: davidcc@google.com
      Cc: guohanjun@huawei.com
      Link: http://lkml.kernel.org/r/1484536871-3131-1-git-send-email-zhouchengming1@huawei.com
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      4e71de79
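      The id-assignment problem is easy to see in a tiny user-space model
      (purely illustrative; no kernel code or data structures are used): with
      the unconditional rule both siblings end up claiming thread id 1 after an
      offline/online cycle, while checking the already-online sibling keeps the
      ids distinct.

        #include <stdio.h>

        struct sibling { int online; int excl_thread_id; };

        /* Buggy rule: "if the shared structure already exists, I am thread 1". */
        static void online_buggy(struct sibling *self, const struct sibling *other)
        {
                self->online = 1;
                self->excl_thread_id = other->online ? 1 : 0;
        }

        /* Fixed rule: take whichever id the already-online sibling does not hold. */
        static void online_fixed(struct sibling *self, const struct sibling *other)
        {
                self->online = 1;
                self->excl_thread_id =
                        (other->online && other->excl_thread_id == 0) ? 1 : 0;
        }

        int main(void)
        {
                struct sibling t0 = {0, 0}, t1 = {0, 0};

                /* Boot: t0 comes up first, then t1; then offline/online t0. */
                online_buggy(&t0, &t1); online_buggy(&t1, &t0);
                t0.online = 0; online_buggy(&t0, &t1);
                printf("buggy: t0=%d t1=%d\n", t0.excl_thread_id, t1.excl_thread_id);

                t0 = (struct sibling){0, 0}; t1 = (struct sibling){0, 0};
                online_fixed(&t0, &t1); online_fixed(&t1, &t0);
                t0.online = 0; online_fixed(&t0, &t1);
                printf("fixed: t0=%d t1=%d\n", t0.excl_thread_id, t1.excl_thread_id);
                return 0;
        }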
    • powerpc/icp-opal: Fix missing KVM case and harden replay · 9728a7c8
      Benjamin Herrenschmidt authored
      The icp-opal backend is missing the code from icp-native that recovers
      interrupts snatched by KVM. Without it, when running KVM, we can get into
      a situation where an interrupt is lost and the CPU is stuck with an
      elevated CPPR.

      Also harden the replay path by always checking the return value of
      opal_int_eoi().
      
      Fixes: d7436188 ("powerpc/xics: Add ICP OPAL backend")
      Cc: stable@vger.kernel.org # v4.8+
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      9728a7c8
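      A sketch of the replay-hardening part (assumed shape only; the argument
      encoding of the OPAL call and the KVM recovery path are omitted, and the
      function name is hypothetical):

        /* Always look at what opal_int_eoi() returns: a positive value means
         * more interrupts are pending, but the hardware may not raise another
         * exception for them, so force a software replay. */
        static void icp_opal_eoi_sketch(uint32_t xirr)
        {
                int64_t rc = opal_int_eoi(xirr);

                if (rc > 0)
                        force_external_irq_replay();
        }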
  6. 16 Jan, 2017 4 commits
  7. 14 Jan, 2017 2 commits
    • efi/x86: Prune invalid memory map entries and fix boot regression · 0100a3e6
      Peter Jones authored
      Some machines, such as the Lenovo ThinkPad W541 with firmware GNET80WW
      (2.28), include memory map entries with phys_addr=0x0 and num_pages=0.
      
      These machines fail to boot after the following commit:

        commit 8e80632f ("efi/esrt: Use efi_mem_reserve() and avoid a kmalloc()")

      Fix this by removing such bogus entries from the memory map.
      
      Furthermore, currently the log output for this case (with efi=debug)
      looks like:
      
       [    0.000000] efi: mem45: [Reserved           |   |  |  |  |  |  |  |  |  |  |  |  ] range=[0x0000000000000000-0xffffffffffffffff] (0MB)
      
      This is clearly wrong, and also not as informative as it could be.  This
      patch changes it so that if we find obviously invalid memory map
      entries, we print an error and skip those entries.  It also detects
      overflow in the displayed address-range calculation, so the new output is:
      
       [    0.000000] efi: [Firmware Bug]: Invalid EFI memory map entries:
       [    0.000000] efi: mem45: [Reserved           |   |  |  |  |  |  |  |   |  |  |  |  ] range=[0x0000000000000000-0x0000000000000000] (invalid)
      
      It also detects memory map sizes that would overflow the physical
      address, for example phys_addr=0xfffffffffffff000 and
      num_pages=0x0200000000000001, and prints:
      
       [    0.000000] efi: [Firmware Bug]: Invalid EFI memory map entries:
       [    0.000000] efi: mem45: [Reserved           |   |  |  |  |  |  |  |   |  |  |  |  ] range=[phys_addr=0xfffffffffffff000-0x20ffffffffffffffff] (invalid)
      
      It then removes these entries from the memory map.
      Signed-off-by: Peter Jones <pjones@redhat.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [ardb: refactor for clarity with no functional changes, avoid PAGE_SHIFT]
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      [Matt: Include bugzilla info in commit log]
      Cc: <stable@vger.kernel.org> # v4.9+
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=191121
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0100a3e6
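      Both failure modes (zero-sized entries, and ranges whose end wraps past
      2^64) come down to plain 64-bit arithmetic. A self-contained sketch using
      the example values from the log above (it mirrors the idea, not the
      kernel's exact helper; names are made up for illustration):

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define EFI_PAGE_SHIFT 12
        #define EFI_PAGES_MAX  (UINT64_MAX >> EFI_PAGE_SHIFT)

        static bool memmap_entry_valid(uint64_t phys_addr, uint64_t num_pages)
        {
                uint64_t end;

                if (num_pages == 0)
                        return false;            /* zero-sized range */
                if (num_pages > EFI_PAGES_MAX)
                        return false;            /* size alone exceeds 2^64 */
                end = phys_addr + (num_pages << EFI_PAGE_SHIFT) - 1;
                if (end < phys_addr)
                        return false;            /* end wraps past 2^64 */
                return true;
        }

        int main(void)
        {
                /* The two firmware-bug entries quoted above, plus a sane one. */
                printf("%d\n", memmap_entry_valid(0x0, 0x0));                /* 0 */
                printf("%d\n", memmap_entry_valid(0xfffffffffffff000ULL,
                                                  0x0200000000000001ULL));   /* 0 */
                printf("%d\n", memmap_entry_valid(0x100000, 0x100));         /* 1 */
                return 0;
        }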
    • perf/x86: Reject non sampling events with precise_ip · 18e7a45a
      Jiri Olsa authored
      As Peter suggested [1], reject non-sampling PEBS events,
      because they don't make any sense and could cause bugs
      in the NMI handler [2].
      
        [1] http://lkml.kernel.org/r/20170103094059.GC3093@worktop
        [2] http://lkml.kernel.org/r/1482931866-6018-3-git-send-email-jolsa@kernel.org
      
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vince@deater.net>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20170103142454.GA26251@krava
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      18e7a45a
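      A sketch of the rejection (the placement and exact condition are my
      reading of the summary, not a quote of the patch; is_sampling_event() is
      the existing generic helper that checks for a non-zero sample period):

        /* Refuse precise_ip (PEBS) on events that will never sample, before
         * any PEBS constraint or buffer handling gets a chance to run. */
        static int precise_ip_check_sketch(struct perf_event *event)
        {
                if (event->attr.precise_ip && !is_sampling_event(event))
                        return -EINVAL;

                return 0;
        }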