'virt' Generic Virtual Platform (``virt``)
==========================================

The ``virt`` board is a platform which does not correspond to any real hardware;
it is designed for use in virtual machines. It is the recommended board type
if you simply want to run a guest such as Linux and do not care about
reproducing the idiosyncrasies and limitations of a particular bit of
real-world hardware.

Supported devices
-----------------

The ``virt`` machine supports the following devices:

* Up to 8 generic RV32GC/RV64GC cores, with optional extensions
* Core Local Interruptor (CLINT)
* Platform-Level Interrupt Controller (PLIC)
* CFI parallel NOR flash memory
* 1 NS16550 compatible UART
* 1 Google Goldfish RTC
* 1 SiFive Test device
* 8 virtio-mmio transport devices
* 1 generic PCIe host bridge
* The fw_cfg device that allows a guest to obtain data from QEMU (see the
  example below)
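
As a minimal sketch of the fw_cfg device (the ``opt/com.example/greeting``
name is a made-up example; user-defined names must begin with ``opt/``, and
reading the blob back this way assumes a Linux guest with the ``qemu_fw_cfg``
sysfs driver enabled):

.. code-block:: bash

  # Host: attach a string blob to the fw_cfg device
  $ qemu-system-riscv64 -M virt \
      ... other args ... \
      -fw_cfg name=opt/com.example/greeting,string=hello

  # Guest: read the blob back through sysfs
  $ cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example/greeting/raw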

The hypervisor extension is enabled for the default CPU, so virtual machines
that use the hypervisor extension work without declaring it explicitly.
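
Individual ISA extensions can also be toggled as ``-cpu`` properties. A
hedged sketch (property names such as ``h`` and ``v`` vary between QEMU
versions, and older builds used experimental ``x-`` prefixes; ``-cpu help``
lists the CPU types available in your build):

.. code-block:: bash

  # Disable the hypervisor extension and enable the vector extension
  $ qemu-system-riscv64 -M virt -cpu rv64,h=false,v=true \
      ... other args ...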

Hardware configuration information
----------------------------------

The ``virt`` machine automatically generates a device tree blob ("dtb")
which it passes to the guest, if there is no ``-dtb`` option. This provides
information about the addresses, interrupt lines and other configuration of
the various devices in the system. Guest software should discover the devices
that are present in the generated DTB.

If users want to provide their own DTB, they can use the ``-dtb`` option
(see the workflow sketch below). Such DTBs must meet the following
requirements:

* The number of subnodes of the /cpus node should match QEMU's ``-smp`` option
* The /memory reg size should match QEMU's selected ram_size via ``-m``
* Should contain a node for the CLINT device with a compatible string
  "riscv,clint0" when used with OpenSBI BIOS images

Boot options
------------

The ``virt`` machine can start using the standard -kernel functionality
for loading a Linux kernel, a VxWorks kernel, or an S-mode U-Boot bootloader,
with the default OpenSBI firmware image as the -bios. It also supports
the recommended RISC-V boot flow: U-Boot SPL (M-mode) loads OpenSBI fw_dynamic
firmware and U-Boot proper (S-mode), using the standard -bios functionality.
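
For example (paths are placeholders; on RISC-V targets ``-bios`` accepts
``default``, ``none``, or a firmware image path):

.. code-block:: bash

  # Default behaviour: the bundled OpenSBI firmware boots the supplied kernel
  $ qemu-system-riscv64 -M virt -kernel /path/to/Image

  # A custom firmware image instead of the bundled OpenSBI
  $ qemu-system-riscv64 -M virt -bios /path/to/fw_jump.bin -kernel /path/to/Image

  # No firmware at all; the loaded image itself must run in M-mode
  $ qemu-system-riscv64 -M virt -bios none -kernel /path/to/m-mode-image.elf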

Machine-specific options
------------------------

The following machine-specific options are supported (a combined invocation
example follows the list):

- aclint=[on|off]

  When this option is "on", ACLINT devices will be emulated instead of
  the SiFive CLINT. When not specified, this option is assumed to be "off".

- aia=[none|aplic|aplic-imsic]

  This option allows selecting an interrupt controller defined by the AIA
  (advanced interrupt architecture) specification. "aia=aplic" selects the
  APLIC (advanced platform-level interrupt controller) to handle wired
  interrupts, whereas "aia=aplic-imsic" selects the APLIC plus the IMSIC
  (incoming message-signaled interrupt controller) to handle both wired
  interrupts and MSIs. When not specified, this option is assumed to be
  "none", which selects the SiFive PLIC to handle wired interrupts.

- aia-guests=nnn

  The number of per-HART VS-level AIA IMSIC pages to be emulated for a guest
  having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
  the default number of per-HART VS-level AIA IMSIC pages is 0.
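
A minimal sketch combining these options (values are arbitrary examples and
the remaining arguments are elided):

.. code-block:: bash

  # Emulate ACLINT devices instead of the SiFive CLINT
  $ qemu-system-riscv64 -M virt,aclint=on ... other args ...

  # APLIC plus IMSIC, with 2 VS-level IMSIC guest pages per HART
  $ qemu-system-riscv64 -M virt,aia=aplic-imsic,aia-guests=2 ... other args ...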

Running Linux kernel
--------------------

The Linux mainline v5.12 release was tested at the time of writing. To build a
Linux mainline kernel that can be booted by the ``virt`` machine in
64-bit mode, simply configure the kernel using the defconfig configuration:

.. code-block:: bash

  $ export ARCH=riscv
  $ export CROSS_COMPILE=riscv64-linux-
  $ make defconfig
  $ make

To boot the newly built Linux kernel in QEMU with the ``virt`` machine:

.. code-block:: bash

  $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
      -display none -serial stdio \
      -kernel arch/riscv/boot/Image \
      -initrd /path/to/rootfs.cpio \
      -append "root=/dev/ram"

To build a Linux mainline kernel that can be booted by the ``virt`` machine
in 32-bit mode, use the rv32_defconfig configuration. A patch is required to
fix the 32-bit boot issue for Linux kernel v5.12.

.. code-block:: bash

  $ export ARCH=riscv
  $ export CROSS_COMPILE=riscv64-linux-
  $ curl https://patchwork.kernel.org/project/linux-riscv/patch/20210627135117.28641-1-bmeng.cn@gmail.com/mbox/ > riscv.patch
  $ git am riscv.patch
  $ make rv32_defconfig
  $ make

Replace ``qemu-system-riscv64`` with ``qemu-system-riscv32`` in the command
line above to boot the 32-bit Linux kernel, as shown below. A rootfs image
containing 32-bit applications must be used for the kernel to boot to user
space.
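
Spelled out, the 32-bit boot command looks like this (the rootfs path is a
placeholder for a 32-bit cpio image, as above):

.. code-block:: bash

  $ qemu-system-riscv32 -M virt -smp 4 -m 2G \
      -display none -serial stdio \
      -kernel arch/riscv/boot/Image \
      -initrd /path/to/rootfs.cpio \
      -append "root=/dev/ram"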

Running U-Boot
--------------

The U-Boot mainline v2021.04 release was tested at the time of writing. To
build an S-mode U-Boot bootloader that can be booted by the ``virt`` machine,
use qemu-riscv64_smode_defconfig with commands similar to those described
above for Linux:

.. code-block:: bash

  $ export CROSS_COMPILE=riscv64-linux-
  $ make qemu-riscv64_smode_defconfig
  $ make

Boot the 64-bit U-Boot S-mode image directly:

.. code-block:: bash

  $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
      -display none -serial stdio \
      -kernel /path/to/u-boot.bin

To test booting U-Boot SPL, which runs in M-mode and in turn loads a FIT image
that bundles OpenSBI fw_dynamic firmware and U-Boot proper (S-mode) together,
build the U-Boot images using qemu-riscv64_spl_defconfig:

.. code-block:: bash

  $ export CROSS_COMPILE=riscv64-linux-
  $ export OPENSBI=/path/to/opensbi-riscv64-generic-fw_dynamic.bin
  $ make qemu-riscv64_spl_defconfig
  $ make

The minimal QEMU commands to run U-Boot SPL are:

.. code-block:: bash

  $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
      -display none -serial stdio \
      -bios /path/to/u-boot-spl \
      -device loader,file=/path/to/u-boot.itb,addr=0x80200000

To test 32-bit U-Boot images, switch to the qemu-riscv32_smode_defconfig and
qemu-riscv32_spl_defconfig builds, and replace ``qemu-system-riscv64`` with
``qemu-system-riscv32`` in the command lines above to boot the 32-bit U-Boot.

Enabling TPM
------------

A TPM device can be connected to the virt board by following the steps below.

First launch the TPM emulator (the state directory must already exist):

.. code-block:: bash

  $ mkdir -p /tmp/tpm
  $ swtpm socket --tpm2 -t -d --tpmstate dir=/tmp/tpm \
        --ctrl type=unixio,path=swtpm-sock

Then launch QEMU with some additional arguments to link a TPM device to the
backend:

.. code-block:: bash

  $ qemu-system-riscv64 \
    ... other args ... \
    -chardev socket,id=chrtpm,path=swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis-device,tpmdev=tpm0

The TPM device can be seen in the memory tree and in the generated device
tree, and should be accessible from the guest software.
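
As a quick check, assuming a Linux guest with the ``tpm_tis`` driver, the TPM
character device should appear once the guest has booted:

.. code-block:: bash

  # Inside the guest
  $ ls /dev/tpm*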