1. 29 Jun, 2020 2 commits
  2. 26 Jun, 2020 1 commit
    • Fix ld error in elf maketarget · 49f1389a
      Boris Shingarov authored
      
      The sdram_init ELF fails to link:
      
      powerpc64le-linux-gnu-ld -static -nostdlib -T sdram_init.lds \
          --gc-sections -o sdram_init.elf head.o main.o sdram.o console.o \
          libc.o sdram_init.lds
      powerpc64le-linux-gnu-ld: error: linker script file 'sdram_init.lds'
          appears multiple times
      make: *** [Makefile:70: sdram_init.elf] Error 1
      
      This is because sdram_init.lds is one of the prerequisites, and thus is
      contained in $^.  However, it is also explicitly specified as part of
      LDFLAGS, as the argument to -T, so the script reaches ld twice.
      Signed-off-by: Boris Shingarov <shingarov@labware.com>
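
      A minimal sketch of one possible fix (rule shape assumed; the actual
      commit may instead drop -T from LDFLAGS): keep sdram_init.lds as a
      prerequisite so the ELF relinks when the script changes, but filter
      it out of the link inputs so it is only passed to ld once, via -T.

          sdram_init.elf: sdram_init.lds head.o main.o sdram.o console.o libc.o
          	$(LD) -static -nostdlib -T sdram_init.lds --gc-sections \
          		-o $@ $(filter-out %.lds,$^)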
  3. 23 Jun, 2020 4 commits
  4. 22 Jun, 2020 1 commit
  5. 19 Jun, 2020 6 commits
  6. 17 Jun, 2020 1 commit
  7. 16 Jun, 2020 2 commits
  8. 15 Jun, 2020 3 commits
    • execute1: Reduce width of the result mux to help timing · ec2fa617
      Paul Mackerras authored
      
      This reduces the number of different things that are assigned to
      the result variable.
      
      - The computations for the popcnt, prty, cmpb and exts instruction
        families are moved into the logical unit.
      - The result of mfspr from the slow SPRs is computed in 'spr_val'
        before being assigned to 'result'.
      - Writes to LR as a result of a blr or bclr instruction are done
        through the exc_write path to writeback.
      
      This eases timing considerably.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
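
      A minimal sketch of the spr_val idea (signal and SPR names assumed,
      not the actual Microwatt code): the slow-SPR read value is selected
      into an intermediate first, so the result mux sees one source
      instead of one per SPR.

          -- inside the execute1 process
          case spr_sel is
              when SPR_DEC => spr_val := ctrl.dec;
              when SPR_TB  => spr_val := ctrl.tb;
              when others  => spr_val := (others => '0');
          end case;
          result := spr_val;   -- a single entry in the result mux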
    • core: Implement a simple branch predictor · 6687aae4
      Paul Mackerras authored

      This implements a simple branch predictor in the decode1 stage.  If it
      sees that the instruction is b or bc and the branch is predicted to be
      taken, it sends a flush and redirect upstream (to icache and fetch1)
      to redirect fetching to the branch target.  The prediction is sent
      downstream with the branch instruction, and execute1 now only sends
      a flush/redirect upstream if the prediction was wrong.  Unconditional
      branches are always predicted to be taken, and conditional branches
      are predicted to be taken if and only if the offset is negative.
      Branches that take the branch address from a register (bclr, bcctr)
      are predicted not taken, as we don't have any way to predict the
      branch address.
      
      Since we can now have an mflr being executed immediately after a bl
      or bcl, we now track the update to LR in the hazard tracker, using
      the second write register field that is used to track RA updates for
      update-form loads and stores.
      
      For those branches that update LR but don't w...
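
      A minimal sketch of the static prediction rule described above
      (signal names assumed): b is always predicted taken, bc is taken
      iff its displacement is negative, and bclr/bcctr are never
      predicted taken.

          -- in decode1; insn is the 32-bit instruction word
          if is_b = '1' then
              br_pred_taken <= '1';          -- unconditional: always taken
          elsif is_bc = '1' then
              br_pred_taken <= insn(15);     -- sign bit of the BD offset
          else
              br_pred_taken <= '0';          -- register target: unknown
          end if;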
    • decode1: Improve timing for slow SPR decode path · 09ae2ce5
      Paul Mackerras authored
      
      This makes the logic that works out decode.unit and decode.sgl_pipe
      for mtspr/mfspr to/from slow SPRs detect the fact that the
      instruction is mtspr/mfspr based on a match with the instruction
      word rather than looking at v.decode.insn_type.  This improves timing
      substantially, as the ROM lookup to get v.decode is relatively slow.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
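
      A minimal sketch of matching on the instruction word directly
      (signal names assumed; opcodes per the Power ISA, where mfspr and
      mtspr are major opcode 31 with extended opcodes 339 and 467):

          -- runs in parallel with the decode ROM lookup
          is_mfspr <= '1' when insn(31 downto 26) = "011111" and
                               insn(10 downto 1) = "0101010011"    -- XO 339
                      else '0';
          is_mtspr <= '1' when insn(31 downto 26) = "011111" and
                               insn(10 downto 1) = "0111010011"    -- XO 467
                      else '0';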
  9. 14 Jun, 2020 4 commits
  10. 13 Jun, 2020 16 commits
    • dcache: Reduce back-to-back store latency from 3 cycles to 2 · a4500c63
      Paul Mackerras authored
      
      This uses the machinery we already had for comparing the real address
      of a new request with the tag of a previous request (r1.reload_tag)
      to get better timing on comparing the address of a second store with
      the one in progress.  The comparison is now on the set size rather
      than the page size, but since set size can't be larger than the page
      size (and usually will equal the page size), that is OK.
      
      The same comparison can also be used to tell when we can satisfy
      a load miss during a cache line refill.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
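
      A minimal sketch of the comparison (names assumed): the new store's
      tag and set index are checked against those of the write in
      progress, i.e. a match within the set size rather than the page
      size.

          same_line <= '1' when get_tag(req.addr) = r1.reload_tag and
                                get_index(req.addr) = r1.store_index
                       else '0';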
    • soc: Don't require dram wishbones signals to be wired by toplevel · bf7def55
      Benjamin Herrenschmidt authored
      
      Currently, when not using litedram, the top level still has to hook
      up "dummy" wishbones to the main dram and control dram busses coming
      out of the SoC and provide ack signals.
      
      Instead, make the SoC generate the acks internally when not using
      litedram and use defaults to make the wiring entirely optional.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
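
      A minimal sketch of the technique (port, record and generic names
      assumed): a default value on the inbound wishbone port makes wiring
      it optional, and the ack is generated internally when litedram is
      not present.

          -- in the soc entity: top level may leave this unconnected
          wb_dram_in : in wishbone_slave_out := wishbone_slave_out_init;

          -- inside the SoC: take the real ack, or generate one
          dram_ack <= wb_dram_in.ack when HAS_DRAM
                      else wb_dram_out.cyc and wb_dram_out.stb;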
    • soc: Add defaults for some input signals · 1ffc89e5
      Benjamin Herrenschmidt authored
      
      That way the top levels don't need to assign them.

      Also remove generics that are set to the default anyway.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • mmu: Take an extra cycle to do TLB invalidations · aebd915f
      Paul Mackerras authored
      
      This makes the TLB invalidations that occur as a result of a tlbie,
      slbia or mtspr instruction take one more cycle.  This breaks some
      long combinatorial chains from decode2 to dcache and icache and
      thus eases timing.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
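
      The change amounts to a register stage on the invalidation path;
      a minimal sketch (signal names assumed):

          -- registering the request breaks the combinatorial chain
          -- from decode2 through to the dcache and icache
          process(clk)
          begin
              if rising_edge(clk) then
                  tlb_inval_r <= tlb_inval;
              end if;
          end process;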
    • dcache: Reduce latencies and improve timing · b5959632
      Paul Mackerras authored
      
      This implements various improvements to the dcache with the aim of
      making it go faster.
      
      - We can now execute operations that don't need to access main memory
        (cacheable loads that hit in the cache and TLB operations) as soon
        as any previous operation has completed, without waiting for the
        state machine to become idle.
      
      - Cache line refills start with the doubleword that is needed to
        satisfy the load that initiated them.
      
      - Cacheable loads that miss return their data and complete as soon as
        the requested doubleword comes back from memory; they don't wait for
        the refill to finish.
      
      - We now have per-doubleword valid bits for the cache line being
        refilled, meaning that if a load comes in for a line that is in the
        process of being refilled, we can return the data and complete it
        within a couple of cycles of the doubleword coming in from memory.
      
      - There is now a bypass path for data being written to the cache RAM
        so that we can do a store hit followed immediately by a load hit to
        the same doubleword.  This also makes the data from a refill
        available to load hits one cycle earlier than it would be otherwise.
      
      - Stores complete in the cycle where their wishbone operation is
        initiated, without waiting for the wishbone cycle to complete.
      
      - During the wishbone cycle for a store, if another store comes in
        that is to the same page, and we don't have a stall from the
        wishbone, we can send out the write for the second store in the same
        wishbone cycle and without going through the IDLE state first.  We
        limit it to 7 outstanding writes that have not yet been
        acknowledged.
      
      - The cache tag RAM is now read on a clock edge rather than being
        combinatorial for reading.  Its width is rounded up to a multiple of
        8 bits per way so that byte enables can be used for writing
        individual tags.
      
      - The cache tag RAM is now written a cycle later than previously, in
        order to ease timing.
      
      - Data for a store hit is now written one cycle later than
        previously.  This eases timing since we don't have to get through
        the tag matching and on to the write enable within a single cycle.
        The 2-stage bypass path means we can still handle a load hit on
        either of the two cycles after the store and return the correct
        data.  (A load hit 3 or more cycles later will get the correct data
        from the BRAM.)
      
      - Operations can sit in r0 while there is an uncompleted operation in
        r1.  Once the operation in r1 is completed, the operation in r0
        spends one cycle in r0 for TLB/cache tag lookup and then gets put
        into r1.req.  This can happen before r1 gets to the IDLE state.
        Some operations can then be completed before r1 gets to the IDLE
        state - a load miss to the cache line being refilled, or a store to
        the same page as a previous store.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
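
      A minimal sketch of the per-doubleword valid-bit idea from the list
      above (names assumed): a load to the line being refilled can hit as
      soon as the bit for its doubleword is set.

          -- one valid bit per doubleword of the line being refilled
          signal rows_valid : std_ulogic_vector(ROW_PER_LINE - 1 downto 0);

          refill_hit <= '1' when req_tag = r1.reload_tag and
                                 req_index = r1.store_index and
                                 rows_valid(req_row_in_line) = '1'
                        else '0';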
    • decode: Work out ispr1/ispr2 in parallel with decode ROM lookup · 65a36cc0
      Paul Mackerras authored
      
      This makes the logic that calculates which SPRs are being accessed
      work in parallel with the instruction decode ROM lookup instead of
      being dependent on the opcode found in the decode ROM.  The reason
      for doing that is that the path from icache through the decode ROM
      to the ispr1/ispr2 fields has become a critical path.
      
      Thus we are now using only a very partial decode of the instruction
      word in the logic for ispr1/ispr2, and we therefore can no longer rely
      on them being zero in all cases where no SPR is being accessed.
      Instead, decode2 now ignores ispr1/ispr2 in all cases except when the
      relevant decode.input_reg_a/b or decode.output_reg_a is set to SPR.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
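
      The SPR number sits at fixed bit positions in mfspr/mtspr, so it
      can be extracted without the ROM result; a minimal sketch (helper
      names assumed):

          -- spr(5..9) is in bits 15..11, spr(0..4) in bits 20..16
          sprn := insn(15 downto 11) & insn(20 downto 16);
          ispr1 <= fast_spr_num(sprn);  -- only used when decode says SPR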
    • loadstore1: Reduce busy cycles · 209aa9ce
      Paul Mackerras authored
      
      This reduces the number of cycles where loadstore1 asserts its busy
      output, leading to increased throughput of loads and stores.  Loads
      that hit in the cache can now be executed at the rate of one every two
      cycles.  Stores take 4 cycles assuming the wishbone slave responds
      with an ack the cycle after we assert strobe.
      
      To achieve this, the state machine code is split into two parts, one
      for when we have an existing instruction in progress, and one for
      starting a new instruction.  We can now combinatorially clear busy and
      start a new instruction in the same cycle that we get a done signal
      from the dcache; in other words we are completing one instruction and
      potentially writing back results in the same cycle that we start a new
      instruction and send its address and data to the dcache.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
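
      A minimal sketch of the two-part structure (a simplification of
      what the commit describes): one arm advances the operation in
      progress, the other accepts a new one, and busy clears
      combinatorially in the cycle the dcache signals done.

          if r.state /= IDLE and d_in.done = '0' then
              busy <= '1';              -- still working on the last op
          else
              busy <= '0';              -- completing (or idle) this cycle
              if l_in.valid = '1' then
                  d_out.valid <= '1';   -- issue the new op to the dcache
              end if;
          end if;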
    • loadstore1: Complete mfspr/mtspr a cycle later · 1d09daae
      Paul Mackerras authored
      
      This makes mfspr and mtspr complete (and mfspr write back) on the
      cycle after the instruction is received from execute1, rather than
      on the same cycle.  This makes them match all other instructions
      that execute in one cycle.  Because these instructions are marked
      as single-issue, there wasn't the possibility of having two
      instructions complete on the same cycle (which we can't cope with),
      but it is better to fix this.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • core: Use a busy signal rather than a stall · 6701e734
      Paul Mackerras authored

      This changes the instruction dependency tracking so that we can
      generate a "busy" signal from execute1 and loadstore1 which comes
      along one cycle later than the current "stall" signal.  This will
      enable us to signal busy cycles only when we need to from loadstore1.
      
      The "busy" signal from execute1/loadstore1 indicates "I didn't take
      the thing you gave me on this cycle", as distinct from the previous
      stall signal which meant "I took that but don't give me anything
      next cycle".  That means that decode2 proactively gives execute1
      a new instruction as soon as it has taken the previous one (assuming
      there is a valid instruction available from decode1), and that then
      sits in decode2's output until execute1 can take it.  So instructions
      are issued by decode2 somewhat earlier than they used to be.
      
      Decode2 now only signals a stall upstream when its output buffer is
      full, meaning that we can fill up bubbles in the upstream pipe while a
      long instruction is executing.  Thi...
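
      A minimal sketch of the handshake (signal names assumed): decode2
      holds its output while execute1 is busy, and only stalls upstream
      when that output buffer is already occupied.

          process(clk)
          begin
              if rising_edge(clk) then
                  if e_busy = '0' then
                      r.e <= next_instr;   -- execute1 took the last one
                  end if;                  -- else hold the output buffer
              end if;
          end process;

          stall_out <= r.e.valid and e_busy;  -- buffer full: stall upstream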
    • icache: Improve latencies when reloading cache lines · 62b24a8d
      Paul Mackerras authored
      
      The icache can now detect a hit on a line being refilled from memory,
      as we have an array of individual valid bits per row for the line
      that is currently being loaded.  This enables the request that
      initiated the refill to be satisfied earlier, and also enables
      following requests to the same cache line to be satisfied before the
      line is completely refilled.  Furthermore, the refill now starts
      at the row that is needed.  This should reduce the latency for an
      icache miss.
      
      We now get a 'sequential' indication from fetch1, and use that to know
      when we can deliver an instruction word using the other half of the
      64-bit doubleword that was read last cycle.  This doesn't make much
      difference at the moment, but it frees up cycles where we could test
      whether the next line is present in the cache so that we could
      prefetch it if not.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
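
      A minimal sketch of the 'sequential' fast path (signal names
      assumed): when fetch1 marks the request sequential and it falls in
      the doubleword read last cycle, the other 32-bit half is returned
      without a new BRAM access.

          if seq_in = '1' and same_dword = '1' then
              insn_out <= cached_dword(63 downto 32);  -- the odd word
          end if;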
    • multiply: Use DSP48 slices for multiplication on Xilinx FPGAs · 0809bc89
      Paul Mackerras authored
      
      This adds a custom implementation of the multiplier which uses 16
      DSP48E1 slices to do a 64x64 bit multiplication in 2 cycles.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
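
      One way to see where the 16 slices come from (an illustration of
      the arithmetic, not necessarily the exact structure used): split
      each 64-bit operand into four 16-bit chunks; the full product is
      then the sum of 4 x 4 = 16 partial products, each a 16x16 multiply
      that fits in a single DSP48E1.

          -- a * b = sum over i,j in 0..3 of a16(i)*b16(j) * 2**(16*(i+j))
          for i in 0 to 3 loop
              for j in 0 to 3 loop
                  part(i, j) <= a16(i) * b16(j);  -- one DSP48E1 each
              end loop;
          end loop;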
    • multiply: Move selection of result bits into execute1 · 9880fc74
      Paul Mackerras authored
      
      This puts the logic that selects which bits of the multiplier result
      get written into the destination GPR into execute1, moved out from
      multiply.
      
      The multiplier is now expected to do an unsigned multiplication of
      64-bit operands, optionally negate the result, detect 32-bit
      or 64-bit signed overflow of the result, and return a full 128-bit
      result.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
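
      A minimal sketch of the 32-bit signed overflow test on the 128-bit
      result (VHDL-2008 reduction operators; names assumed): the result
      fits in 32 bits iff bits 127 down to 31 are all copies of the sign
      bit.

          -- overflow iff those bits are neither all zeros nor all ones
          ov32 <= (or full_result(127 downto 31)) and
                  not (and full_result(127 downto 31));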
    • core: Double the dcache and icache sizes · f80da657
      Paul Mackerras authored
      
      This makes the dcache and icache both be 8kB.  This still only uses
      one BRAM per way per cache on the Artix-7, since the BRAMs were only
      half-used previously.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • core: Remove fetch2 pipeline stage · b5a7dbb7
      Paul Mackerras authored
      
      The fetch2 stage existed primarily to provide a stash buffer for the
      output of icache when a stall occurred.  However, we can get the same
      effect -- of having the input to decode1 stay unchanged on a stall
      cycle -- by using the read enable of the BRAMs in icache, and by
      adding logic to keep the outputs unchanged on a clock cycle when
      stall_in = 1.  This reduces branch and interrupt latency by one
      cycle.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
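
      A minimal sketch of the read-enable trick (names assumed): gating
      the BRAM read enable with the stall keeps the output, and hence the
      input to decode1, unchanged across a stall cycle.

          process(clk)
          begin
              if rising_edge(clk) then
                  if stall_in = '0' then
                      dout <= bram(to_integer(unsigned(addr)));
                  end if;  -- on stall, dout holds its previous value
              end if;
          end process;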
    • Add core logging · 49a4d9f6
      Paul Mackerras authored

      This logs 256 bits of data per cycle to a ring buffer in BRAM.  The
      data collected can be read out through 2 new SPRs or through the
      debug interface.
      
      The new SPRs are LOG_ADDR (724) and LOG_DATA (725).  LOG_ADDR contains
      the buffer write pointer in the upper 32 bits (in units of entries,
      i.e. 32 bytes) and the read pointer in the lower 32 bits (in units of
      doublewords, i.e. 8 bytes).  Reading LOG_DATA gives the doubleword
      from the buffer at the read pointer and increments the read pointer.
      Setting bit 31 of LOG_ADDR inhibits the trace log system from writing
      to the log buffer, so the contents are stable and can be read.
      
      There are two new debug addresses which function similarly to the
      LOG_ADDR and LOG_DATA SPRs.  The log is frozen while either or both of
      the LOG_ADDR SPR bit 31 or the debug LOG_ADDR register bit 31 are set.
      
      The buffer defaults to 2048 entries, i.e. 64kB.  The size is set by
      the LOG_LENGTH generic on the core_debug module.  Software can
      determine the length of the buffe...
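
      A minimal sketch of the LOG_DATA read behaviour described above
      (names assumed): the read returns the doubleword at the read
      pointer and post-increments the pointer.

          if read_log_data = '1' then
              log_dout <= log_ram(to_integer(read_ptr));
              read_ptr <= read_ptr + 1;  -- post-increment on each read
          end if;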