1. 07 Jul, 2021 1 commit
    • IOMMU: make DMA containment of quarantined devices optional · 980d6acf
      Jan Beulich authored
      Containing still in flight DMA was introduced to work around certain
      devices / systems hanging hard upon hitting a "not-present" IOMMU fault.
      Passing through (such) devices (on such systems) is inherently insecure
      (as guests could easily arrange for IOMMU faults of any kind to occur).
      Defaulting to a mode where admins may not even become aware of issues
      with devices can be considered undesirable. Therefore convert this mode
      of operation to an optional one, not one enabled by default.
      
      This involves resurrecting, in a slightly extended and abstracted
      fashion, code that commit ea388678 ("x86 / iommu: set up a scratch
      page in the quarantine domain") had removed. Here, instead of
      reintroducing a pretty pointless use of "goto" in
      domain_context_unmap(), and instead of making the function (at least
      temporarily) inconsistent, take the opportunity to replace the other
      similarly pointless "goto" as well.
      
      In order to key the re-instated bypasses off of there (not) being a root
      page table this further requires moving the allocate_domain_resources()
      invocation from reassign_device() to amd_iommu_setup_domain_device() (or
      else reassign_device() would allocate a root page table anyway); this is
      benign to the second caller of the latter function.
      
      In VT-d's domain_context_unmap(), instead of adding yet another
      "goto out" when all that's wanted is a "return", eliminate the "out"
      label at the same time.
      
      Take the opportunity and also limit the control to builds supporting
      PCI.
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Paul Durrant <paul@xen.org>
      Reviewed-by: Kevin Tian <kevin.tian@intel.com>
      980d6acf
  2. 28 May, 2021 1 commit
  3. 03 May, 2021 1 commit
  4. 20 Apr, 2021 1 commit
    • x86/dpci: remove the dpci EOI timer · 730d0f60
      Roger Pau Monné authored
      The current interrupt pass-through code sets up a timer for each
      interrupt injected into the guest that requires an EOI from the
      guest. Such a timer performs two actions if the guest doesn't EOI
      the interrupt within a given period of time: the first is deasserting
      the virtual line, the second is performing an EOI of the physical
      interrupt source if it requires one.
      
      Deasserting the guest virtual line is wrong, since it messes with
      the interrupt status of the guest. This seems to have been done to
      compensate for missing deasserts when certain interrupt controller
      actions are performed. The original motivation for introducing the
      timer was to fix issues when a GSI was shared between different
      guests. We believe other changes in the interrupt handling code
      (i.e. proper propagation of EOI related actions to dpci) have fixed
      such errors by now.
      
      Performing an EOI of the physical interrupt source is redundant, since
      there's already a timer that takes care of this for all interrupts,
      not just the HVM dpci ones, see irq_guest_action_t struct eoi_timer
      field.
      
      Since neither of the actions performed by the dpci timer is
      required, remove it altogether.
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      730d0f60
  5. 13 Apr, 2021 1 commit
  6. 02 Mar, 2021 1 commit
    • xen/iommu: x86: Clear the root page-table before freeing the page-tables · 9bd9695a
      Julien Grall authored
      The new per-domain IOMMU page-table allocator will now free the
      page-tables when the domain's resources are relinquished. However,
      the per-domain IOMMU structure will still contain a dangling pointer
      to the root page-table.
      
      Xen may access the IOMMU page-tables afterwards, at least in the
      case of a PV domain:
      
      (XEN) Xen call trace:
      (XEN)    [<ffff82d04025b4b2>] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
      (XEN)    [<ffff82d04025b695>] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
      (XEN)    [<ffff82d0402695f3>] F iommu_unmap+0x9c/0x129
      (XEN)    [<ffff82d0402696a6>] F iommu_legacy_unmap+0x26/0x63
      (XEN)    [<ffff82d04033c5c7>] F mm.c#cleanup_page_mappings+0x139/0x144
      (XEN)    [<ffff82d04033c61d>] F put_page+0x4b/0xb3
      (XEN)    [<ffff82d04033c87f>] F put_page_from_l1e+0x136/0x13b
      (XEN)    [<ffff82d04033cada>] F devalidate_page+0x256/0x8dc
      (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
      (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
      (XEN)    [<ffff82d04033d8d6>] F mm.c#put_page_from_l2e+0x8a/0xcf
      (XEN)    [<ffff82d04033cc27>] F devalidate_page+0x3a3/0x8dc
      (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
      (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
      (XEN)    [<ffff82d04033d807>] F mm.c#put_page_from_l3e+0x8a/0xcf
      (XEN)    [<ffff82d04033cdf0>] F devalidate_page+0x56c/0x8dc
      (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
      (XEN)    [<ffff82d04033d64d>] F mm.c#put_pt_page+0x6f/0x80
      (XEN)    [<ffff82d04033d6c7>] F mm.c#put_page_from_l4e+0x69/0x6d
      (XEN)    [<ffff82d04033cf24>] F devalidate_page+0x6a0/0x8dc
      (XEN)    [<ffff82d04033d396>] F mm.c#_put_page_type+0x236/0x47e
      (XEN)    [<ffff82d04033d92e>] F put_page_type_preemptible+0x13/0x15
      (XEN)    [<ffff82d04032598a>] F domain.c#relinquish_memory+0x1ff/0x4e9
      (XEN)    [<ffff82d0403295f2>] F domain_relinquish_resources+0x2b6/0x36a
      (XEN)    [<ffff82d040205cdf>] F domain_kill+0xb8/0x141
      (XEN)    [<ffff82d040236cac>] F do_domctl+0xb6f/0x18e5
      (XEN)    [<ffff82d04031d098>] F pv_hypercall+0x2f0/0x55f
      (XEN)    [<ffff82d04039b432>] F lstar_enter+0x112/0x120
      
      This will result in a use-after-free and possibly a host crash or
      memory corruption.
      
      It would not be possible to free the page-tables further down in
      domain_relinquish_resources() because cleanup_page_mappings() will
      only be called when the last reference on the page is dropped. This
      may happen much later if another domain still holds a reference.
      
      After all the PCI devices have been de-assigned, nobody should use the
      IOMMU page-tables and it is therefore pointless to try to modify them.
      
      So we can simply clear any reference to the root page-table in the
      per-domain IOMMU structure. This requires introducing a new
      callback, whose implementation depends on the IOMMU driver used.
      
      Take the opportunity to add an ASSERT() in arch_iommu_domain_destroy()
      to check if we freed all the IOMMU page tables.
      
      Fixes: 3eef6d07 ("x86/iommu: convert VT-d code to use new page table allocator")
      Signed-off-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Kevin Tian <kevin.tian@intel.com>
      Release-Acked-by: Ian Jackson <iwj@xenproject.org>
      9bd9695a
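The clear-the-root-before-freeing pattern from this commit can be sketched as a driver hook. This is an illustrative C sketch only; the struct and function names (`iommu_ops`, `clear_root_pgtable`, `iommu_clear_root`) are simplified stand-ins, not Xen's exact definitions.

```c
#include <assert.h>
#include <stddef.h>

struct domain_iommu;

/* Hypothetical hook table modelled on the pattern described above;
 * the names are illustrative, not Xen's exact ones. */
struct iommu_ops {
    void (*clear_root_pgtable)(struct domain_iommu *hd);
};

struct domain_iommu {
    void *root_pgtable;              /* would dangle once tables are freed */
    const struct iommu_ops *ops;
};

/* Driver-specific implementation: simply drop the cached pointer. */
static void demo_clear_root_pgtable(struct domain_iommu *hd)
{
    hd->root_pgtable = NULL;
}

static const struct iommu_ops demo_ops = {
    .clear_root_pgtable = demo_clear_root_pgtable,
};

/* Called once all devices are de-assigned, before the allocator frees
 * the page tables, so nothing can walk a stale root afterwards. */
static void iommu_clear_root(struct domain_iommu *hd)
{
    if ( hd->ops && hd->ops->clear_root_pgtable )
        hd->ops->clear_root_pgtable(hd);
}
```

The per-driver callback is needed because where the root pointer lives (and how map/unmap bail out when it is absent) differs between the AMD and VT-d implementations.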
  7. 15 Jan, 2021 1 commit
    • mm: split out mfn_t / gfn_t / pfn_t definitions and helpers · ced9795c
      Jan Beulich authored
      xen/mm.h has heavy dependencies, while in a number of cases only these
      type definitions are needed. This separation then also allows pulling in
      these definitions when including xen/mm.h would cause cyclic
      dependencies.
      
      Replace xen/mm.h inclusion where possible in include/xen/. (In
      xen/iommu.h also take the opportunity and correct the few remaining
      sorting issues.)
      
      While the change could be dropped, remove an unnecessary asm/io.h
      inclusion from xen/arch/x86/acpi/power.c. This was the initial attempt
      to address build issues with it, until it became clear that the header
      itself needs adjustment.
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>
      ced9795c
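The kind of self-contained definitions being split out here can be sketched in a few lines; the struct-wrapper style mirrors Xen's typesafe frame-number idiom, though this is a simplified illustration rather than the actual header contents.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal sketch of typed frame-number definitions that can live in a
 * lightweight header of their own, so other headers can use them
 * without pulling in all of xen/mm.h. Wrapping the integer in a struct
 * makes mfn/gfn values non-interchangeable at compile time. */
typedef struct { uint64_t mfn; } mfn_t;   /* machine frame number */
typedef struct { uint64_t gfn; } gfn_t;   /* guest frame number */

static inline mfn_t _mfn(uint64_t m) { mfn_t f = { m }; return f; }
static inline uint64_t mfn_x(mfn_t f) { return f.mfn; }

static inline gfn_t _gfn(uint64_t g) { gfn_t f = { g }; return f; }
static inline uint64_t gfn_x(gfn_t f) { return f.gfn; }
```

Because such a header needs only fixed-width integer types, it carries none of xen/mm.h's heavy dependencies and can safely be included from headers that xen/mm.h itself (transitively) includes.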
  8. 27 Nov, 2020 3 commits
  9. 28 Sep, 2020 1 commit
  10. 22 Sep, 2020 2 commits
  11. 07 Jul, 2020 1 commit
    • x86/iommu: introduce a cache sync hook · 91526b46
      Roger Pau Monné authored
      The hook is only implemented for VT-d and it uses the already existing
      iommu_sync_cache function present in VT-d code. The new hook is
      added so that the cache can be flushed by code outside of VT-d when
      using shared page tables.
      
      Note that alloc_pgtable_maddr must use the now locally defined
      sync_cache function, because IOMMU ops are not yet set up the first
      time the function gets called during IOMMU initialization.
      
      No functional change intended.
      
      This is part of XSA-321.
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      91526b46
  12. 10 Mar, 2020 6 commits
  13. 29 Nov, 2019 1 commit
    • x86 / iommu: set up a scratch page in the quarantine domain · ea388678
      Paul Durrant authored
      This patch introduces a new iommu_op to facilitate a per-implementation
      quarantine set up, and then further code for x86 implementations
      (amd and vtd) to set up a read-only scratch page to serve as the source
      for DMA reads whilst a device is assigned to dom_io. DMA writes will
      continue to fault as before.
      
      The reason for doing this is that some hardware may continue to
      re-try DMA (despite FLR, or even with BME cleared) in the event of
      an error, and will fail to deal with DMA read faults gracefully.
      Having a scratch page mapped will allow pending DMA reads to
      complete and thus such buggy hardware will eventually be quiesced.
      
      NOTE: These modifications are restricted to x86 implementations only as
            the buggy h/w I am aware of is only used with Xen in an x86
            environment. ARM may require similar code but, since I am not
            aware of the need, this patch does not modify any ARM implementation.
      Signed-off-by: Paul Durrant <pdurrant@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Release-acked-by: Juergen Gross <jgross@suse.com>
      ea388678
  14. 26 Nov, 2019 1 commit
    • IOMMU: default to always quarantining PCI devices · ba2ab00b
      Jan Beulich authored
      XSA-302 relies on the use of libxl's "assignable-add" feature to prepare
      devices to be assigned to untrusted guests.
      
      Unfortunately, this is not considered a strictly required step for
      device assignment. The PCI passthrough documentation on the wiki
      describes alternate ways of preparing devices for assignment, and
      libvirt uses its own ways as well. Hosts where these alternate methods
      are used will still leave the system in a vulnerable state after the
      device comes back from a guest.
      
      Default to always quarantining PCI devices, but provide a command line
      option to revert back to prior behavior (such that people who both
      sufficiently trust their guests and want to be able to use devices in
      Dom0 again after they had been in use by a guest wouldn't need to
      "manually" move such devices back from DomIO to Dom0).
      
      This is XSA-306.
      Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Wei Liu <wl@xen.org>
      ba2ab00b
  15. 26 Sep, 2019 1 commit
    • iommu/arm: Introduce iommu_add_dt_device API · 3e27f7d4
      Oleksandr Tyshchenko authored
      The main purpose of this patch is to add a way to register a DT
      device (which is behind the IOMMU) using the generic IOMMU DT
      bindings [1] before assigning that device to a domain.
      
      So, this patch adds a new "iommu_add_dt_device" API for adding a DT
      device to the IOMMU using the generic IOMMU DT bindings and the
      previously added "iommu_fwspec" support. As devices can be assigned
      to the hardware domain and other domains, this function is called
      from two places: handle_device() and iommu_do_dt_domctl().
      
      Besides that, this patch adds a new "dt_xlate" callback (borrowed
      from Linux's "of_xlate") for providing the driver with the DT IOMMU
      specifier which describes the IOMMU master interfaces of that device
      (device IDs, etc). According to the generic IOMMU DT bindings, the
      exact set of required properties for an IOMMU device/master node
      (#iommu-cells, iommus) depends on many factors and is really a
      driver-dependent thing.
      
      Please note, all IOMMU drivers which support generic IOMMU DT bindings
      should use "dt_xlate" and "add_device" callbacks.
      
      [1] https://www.kernel.org/doc/Documentation/devicetree/bindings/iommu/iommu.txt
      Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
      Acked-by: Julien Grall <julien.grall@arm.com>
      Acked-by: Jan Beulich <jbeulich@suse.com>
      3e27f7d4
  16. 25 Sep, 2019 3 commits
    • introduce a 'passthrough' configuration option to xl.cfg... · babde47a
      Paul Durrant authored
      ...and hence the ability to disable IOMMU mappings, and control EPT
      sharing.
      
      This patch introduces a new 'libxl_passthrough' enumeration into
      libxl_domain_create_info. The value will be set by xl either when it parses
      a new 'passthrough' option in xl.cfg, or implicitly if there is passthrough
      hardware specified for the domain.
      
      If the value of the passthrough configuration option is 'disabled' then
      the XEN_DOMCTL_CDF_iommu flag will be clear in the xen_domctl_createdomain
      flags, thus allowing the toolstack to control whether the domain gets
      IOMMU mappings or not (where previously they were globally set).
      
      If the value of the passthrough configuration option is 'sync_pt' then
      a new 'iommu_opts' field in xen_domctl_createdomain will be set with the
      value XEN_DOMCTL_IOMMU_no_sharept. This will override the global default
      set in iommu_hap_pt_share, thus allowing the toolstack to control whether
      EPT sharing is used for the domain.
      
      If the value of passthrough is 'enabled' then xl will choose an appropriate
      default according to the type of domain and hardware support.
      
      NOTE: The 'iommu_memkb' overhead in libxl_domain_build_info will now only
            be set if passthrough is 'sync_pt' (or xl has chosen this mode as
            a default).
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Christian Lindig <christian.lindig@citrix.com>
      Acked-by: Julien Grall <julien.grall@arm.com>
      Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
      babde47a
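An illustrative xl.cfg fragment for the option described above; the value semantics follow this commit's description, while the PCI BDF is a made-up example:

```
# Illustrative xl.cfg fragment (values per the description above):
#   "disabled" - clear XEN_DOMCTL_CDF_iommu: no IOMMU mappings for the domain
#   "sync_pt"  - additionally set XEN_DOMCTL_IOMMU_no_sharept: no EPT sharing
#   "enabled"  - let xl pick a default from domain type and hardware support
passthrough = "sync_pt"
pci = [ "0000:01:00.0" ]
```

Note that specifying passthrough hardware (as with the `pci` line) implicitly enables the option if it is not set explicitly.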
    • iommu: tidy up iommu_use_hap_pt() and need_iommu_pt_sync() macros · 80ff3d33
      Paul Durrant authored
      These macros really ought to live in the common xen/iommu.h header
      rather than being distributed amongst architecture-specific iommu
      headers and xen/sched.h. This patch moves them there.
      
      NOTE: Disabling 'sharept' in the command line iommu options should
            really be a hard error on ARM (as opposed to just being
            ignored), so define 'iommu_hap_pt_share' to be true for ARM
            (via the ARM-selected CONFIG_IOMMU_FORCE_PT_SHARE).
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <julien.grall@arm.com>
      80ff3d33
    • remove late (on-demand) construction of IOMMU page tables · f89f5558
      Paul Durrant authored
      Now that there is a per-domain IOMMU-enable flag, which should be set if
      any device is going to be passed through, stop deferring page table
      construction until the assignment is done. Also don't tear down the tables
      again when the last device is de-assigned; defer that task until domain
      destruction.
      
      This allows the has_iommu_pt() helper and iommu_status enumeration to be
      removed. Calls to has_iommu_pt() are simply replaced by calls to
      is_iommu_enabled(). Remaining open-coded tests of iommu_hap_pt_share can
      also be replaced by calls to iommu_use_hap_pt().
      The arch_iommu_populate_page_table() and iommu_construct() functions
      become redundant, as does the 'strict mode' dom0 page_list mapping
      code in iommu_hwdom_init(), and iommu_teardown() can be made static
      as its only remaining caller, iommu_domain_destroy(), is within the
      same source module.
      
      All in all, about 220 lines of code are removed from the hypervisor (at
      the expense of some additions in the toolstack).
      
      NOTE: This patch will cause a small amount of extra resource to be used
            to accommodate IOMMU page tables that may never be used, since the
            per-domain IOMMU-enable flag is currently set to the value of the
            global iommu_enable flag. A subsequent patch will add an option to
            the toolstack to allow it to be turned off if there is no intention
            to assign passthrough hardware to the domain.
            To account for the extra resource, 'iommu_memkb' has been added to
            domain_build_info. This patch sets it to a value calculated based
            on the domain's maximum memory when the P2M sharing is either not
            supported or globally disabled, or zero otherwise. However, when
            the toolstack option mentioned above is added, it will also be zero
            if the per-domain IOMMU-enable flag is turned off.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Reviewed-by: Alexandru Isaila <aisaila@bitdefender.com>
      Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <julien.grall@arm.com>
      Acked-by: Wei Liu <wl@xen.org>
      f89f5558
  17. 17 Sep, 2019 1 commit
  18. 05 Sep, 2019 1 commit
    • VT-d: avoid PCI device lookup · 4067bbfa
      Jan Beulich authored
      The two uses of pci_get_pdev_by_domain() lack proper locking, but are
      also only used to get hold of a NUMA node ID. Calculate and store the
      node ID earlier on and remove the lookups (in lieu of fixing the
      locking).
      
      While doing this it became apparent that iommu_alloc()'s use of
      alloc_pgtable_maddr() would occur before RHSAs would have been parsed:
      iommu_alloc() gets called from the DRHD parsing routine, which - on
      spec conforming platforms - happens strictly before RHSA parsing.
      Defer the allocation until after all ACPI table parsing has
      finished, establishing the node ID there first.
      Suggested-by: Kevin Tian <kevin.tian@intel.com>
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Kevin Tian <kevin.tian@intel.com>
      4067bbfa
  19. 23 Aug, 2019 1 commit
  20. 17 May, 2019 1 commit
  21. 09 Apr, 2019 1 commit
  22. 08 Apr, 2019 2 commits
  23. 22 Mar, 2019 1 commit
  24. 23 Jan, 2019 1 commit
    • xen/dom0: Deprecate iommu_hwdom_inclusive and leave it disabled by default · b7e8dee0
      Andrew Cooper authored
      This option is unique to x86 PV dom0's, but it is not sensible to have a
      catch-all which blindly maps all non-RAM regions into the IOMMU.
      
      The map-reserved option remains, and covers all the buggy firmware issues that
      I am aware of.  The two common cases are legacy USB keyboard
      emulation, and the BMC mailbox used by vendor firmware in NICs/HBAs
      to report information back to the iLO/iDRAC/etc for remote
      management purposes.
      
      A specific advantage of this change is that x86 dom0's IOMMU setup is now
      consistent between PV and PVH.
      
      This change is not expected to have any impact, due to map-reserved remaining.
      In the unlikely case that it does cause an issue, we should introduce other
      map-$SPECIFIC options rather than re-introducing this catch-all.
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Release-acked-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      b7e8dee0
  25. 03 Jan, 2019 2 commits
    • iommu: elide flushing for higher order map/unmap operations · e8afe112
      Paul Durrant authored
      This patch removes any implicit flushing that occurs in the implementation
      of map and unmap operations and adds new iommu_map/unmap() wrapper
      functions. To maintain semantics of the iommu_legacy_map/unmap() wrapper
      functions, these are modified to call the new wrapper functions and then
      perform an explicit flush operation.
      
      Because VT-d currently performs two different types of flush
      dependent upon whether a PTE is being modified versus merely added
      (i.e. replacing a non-present PTE), 'iommu flush flags' are defined
      by this patch and the iommu_ops map_page() and unmap_page() methods
      are modified to OR the type of flush necessary for the PTE that has
      been populated or depopulated into an accumulated flags value. The
      accumulated value can then be passed into the explicit flush
      operation.
      
      The ARM SMMU implementations of map_page() and unmap_page() currently
      perform no implicit flushing and therefore the modified methods do not
      adjust the flush flags.
      
      NOTE: The per-cpu 'iommu_dont_flush_iotlb' is respected by the
            iommu_legacy_map/unmap() wrapper functions and therefore this now
            applies to all IOMMU implementations rather than just VT-d.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Kevin Tian <kevin.tian@intel.com>
      Acked-by: Julien Grall <julien.grall@arm.com>
      Acked-by: Brian Woods <brian.woods@amd.com>
      e8afe112
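The accumulated-flush-flags scheme described above can be sketched in a few lines of C. The flag and function names here are illustrative (modelled on the commit's description), not the exact Xen definitions: each PTE update ORs the kind of flush it needs into a caller-held value, and one explicit flush is issued for the whole batch.

```c
#include <assert.h>

/* Illustrative flush-flag bits: which kind of IOTLB flush the touched
 * PTEs require. A merely-added (previously non-present) entry needs a
 * weaker flush than a modified/cleared one on VT-d. */
#define IOMMU_FLUSHF_added    (1u << 0)   /* non-present PTE populated */
#define IOMMU_FLUSHF_modified (1u << 1)   /* present PTE changed/cleared */

/* Called once per PTE by hypothetical map_page()/unmap_page() methods:
 * accumulate the required flush type instead of flushing immediately. */
static void record_pte_update(int pte_was_present, unsigned int *flush_flags)
{
    *flush_flags |= pte_was_present ? IOMMU_FLUSHF_modified
                                    : IOMMU_FLUSHF_added;
}
```

A higher-order map of N pages thus pays for one flush keyed off the accumulated value, rather than N implicit per-page flushes.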
    • iommu: rename wrapper functions · d40029c8
      Paul Durrant authored
      A subsequent patch will add semantically different versions of
      iommu_map/unmap() so, in advance of that change, this patch renames the
      existing functions to iommu_legacy_map/unmap() and modifies all call-sites.
      It also adjusts a comment that refers to iommu_map_page(), which was re-
      named by a previous patch.
      
      This patch is purely cosmetic. No functional change.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Acked-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Kevin Tian <kevin.tian@intel.com>
      d40029c8
  26. 21 Nov, 2018 1 commit
  27. 15 Nov, 2018 1 commit
  28. 05 Oct, 2018 1 commit
    • mm / iommu: split need_iommu() into has_iommu_pt() and need_iommu_pt_sync() · 91d4eca7
      Paul Durrant authored
      The name 'need_iommu()' is a little confusing as it suggests a domain needs
      to use the IOMMU but something might not be set up yet, when in fact it
      represents a tri-state value (not a boolean as might be expected) where
      -1 means 'IOMMU mappings being set up' and 1 means 'IOMMU mappings have
      been fully set up'.
      
      Two different meanings are also inferred from the macro in various
      places in the code:
      
      - Some callers want to test whether a domain has IOMMU mappings at all
      - Some callers want to test whether they need to synchronize the domain's
        P2M and IOMMU mappings
      
      This patch replaces the 'need_iommu' tri-state value with a defined
      enumeration and adds a boolean flag 'need_sync' to separate these meanings,
      and places both of these in struct domain_iommu, rather than directly in
      struct domain.
      This patch also creates two new boolean macros:
      
      - 'has_iommu_pt()' evaluates to true if a domain has IOMMU mappings, even
        if they are still under construction.
      - 'need_iommu_pt_sync()' evaluates to true if a domain requires explicit
        synchronization of the P2M and IOMMU mappings.
      
      All callers of need_iommu() are then modified to use the macro appropriate
      to what they are trying to test, except for the instance in
      xen/drivers/passthrough/pci.c:assign_device() which has simply been
      removed since it appears to be unnecessary.
      
      NOTE: There are some callers of need_iommu() that strictly operate on
            the hardware domain. In some of these case a more global flag is
            used instead.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: George Dunlap <george.dunlap@citrix.com>
      Acked-by: Julien Grall <julien.grall@arm.com>
      91d4eca7
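The tri-state-to-enum-plus-flag split described above can be sketched as follows. This is a simplified illustration following the commit's description; field layout and helper bodies are stand-ins, not the exact Xen code.

```c
#include <assert.h>
#include <stdbool.h>

/* The old tri-state need_iommu (-1 = being set up, 0 = none,
 * 1 = fully set up) becomes an explicit status enumeration plus a
 * separate need_sync boolean, both in the per-domain IOMMU structure. */
enum iommu_status {
    IOMMU_STATUS_disabled,
    IOMMU_STATUS_initializing,   /* mappings being set up */
    IOMMU_STATUS_initialized,    /* mappings fully set up */
};

struct domain_iommu {
    enum iommu_status status;
    bool need_sync;              /* P2M/IOMMU mappings synced by hand */
};

/* True if the domain has IOMMU mappings at all, even if they are
 * still under construction. */
static bool has_iommu_pt(const struct domain_iommu *hd)
{
    return hd->status != IOMMU_STATUS_disabled;
}

/* True if P2M updates must be explicitly propagated to the IOMMU. */
static bool need_iommu_pt_sync(const struct domain_iommu *hd)
{
    return hd->need_sync;
}
```

Splitting the two questions lets each call site state which one it actually means, instead of both being answered by one overloaded tri-state test.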