Testbench

Hardware Description Languages

Sarah L. Harris , David Harris , in Digital Design and Computer Architecture, 2022

4.9 Testbenches

A testbench is an HDL module that is used to test another module, called the device under test (DUT). The testbench contains statements to apply inputs to the DUT and, ideally, to check that the correct outputs are produced. The input and desired output patterns are called test vectors.

Some tools also call the module to be tested the unit under test (UUT).

Consider testing the sillyfunction module from Section 4.1.1 that computes $y = \bar{a}\,\bar{b}\,\bar{c} + a\,\bar{b}\,\bar{c} + a\,\bar{b}\,c$. This is a simple module, so we can perform exhaustive testing by applying all 8 possible test vectors.
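For reference, a SystemVerilog sketch of the module under test, inferred from the equation above rather than copied from Section 4.1.1, might look like this:

  module sillyfunction(input  logic a, b, c,
                       output logic y);
    // y = ~a~b~c + a~b~c + a~bc
    assign y = ~a & ~b & ~c | a & ~b & ~c | a & ~b & c;
  endmodule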

HDL Example 4.37 demonstrates a simple testbench. It instantiates the DUT, then applies the inputs. Blocking assignments and delays are used to apply the inputs in the appropriate order. The user must view the results of the simulation and verify by inspection that the correct outputs are produced. Testbenches are simulated the same as other HDL modules. However, they are not synthesizable.

HDL Example 4.37

Testbench

SystemVerilog

module testbench1();

  logic a, b, c, y;

  // instantiate device under test

  sillyfunction dut(a, b, c, y);

  // apply inputs one at a time

  initial begin

  a = 0; b = 0; c = 0; #10;

  c = 1;   #10;

  b = 1; c = 0;   #10;

  c = 1;   #10;

  a = 1; b = 0; c = 0; #10;

  c = 1;   #10;

  b = 1; c = 0;   #10;

  c = 1;   #10;

  end

endmodule

The initial statement executes the statements in its body at the start of simulation. In this example, it first applies the input pattern 000 and waits for 10 time units. It then applies 001 and waits 10 more units, and so forth until all eight possible inputs have been applied. initial statements should be used only in testbenches for simulation, not in modules intended to be synthesized into actual hardware. Hardware has no way of magically executing a sequence of special steps when it is first turned on.

VHDL

library IEEE; use IEEE.STD_LOGIC_1164.all;

entity testbench1 is -- no inputs or outputs

end;

architecture sim of testbench1 is

  component sillyfunction

  port(a, b, c: in   STD_LOGIC;

  y: out STD_LOGIC);

  end component;

  signal a, b, c, y: STD_LOGIC;

begin

  -- instantiate device under test

  dut: sillyfunction port map(a, b, c, y);

  -- apply inputs one at a time

  process begin

  a <= '0'; b <= '0'; c <= '0'; wait for 10 ns;

  c <= '1';   wait for 10 ns;

  b <= '1'; c <= '0'; wait for 10 ns;

  c <= '1';   wait for 10 ns;

  a <= '1'; b <= '0'; c <= '0'; wait for 10 ns;

  c <= '1';   wait for 10 ns;

  b <= '1'; c <= '0';   wait for 10 ns;

  c <= '1';   wait for 10 ns;

  wait; -- wait forever

  end process;

end;

The process statement first applies the input pattern 000 and waits for 10 ns. It then applies 001 and waits 10 more ns, and so forth until all eight possible inputs have been applied.

At the end, the process waits indefinitely. Otherwise, the process would begin again, repeatedly applying the pattern of test vectors.

Checking for correct outputs is tedious and error-prone. Moreover, determining the correct outputs is much easier when the design is fresh in your mind. If you make minor changes and need to retest weeks later, determining the correct outputs becomes a hassle. A much better approach is to write a self-checking testbench, shown in HDL Example 4.38.

HDL Example 4.38

Self-Checking Testbench

SystemVerilog

module testbench2();

  logic a, b, c, y;

  // instantiate device under test

  sillyfunction dut(a, b, c, y);

  // apply inputs one at a time

  // checking results

  initial begin

  a = 0; b = 0; c = 0; #10;

  assert (y === 1) else $error("000 failed.");

  c = 1; #10;

  assert (y === 0) else $error("001 failed.");

  b = 1; c = 0; #10;

  assert (y === 0) else $error("010 failed.");

  c = 1; #10;

  assert (y === 0) else $error("011 failed.");

  a = 1; b = 0; c = 0; #10;

  assert (y === 1) else $error("100 failed.");

  c = 1; #10;

  assert (y === 1) else $error("101 failed.");

  b = 1; c = 0; #10;

  assert (y === 0) else $error("110 failed.");

  c = 1; #10;

  assert (y === 0) else $error("111 failed.");

end

endmodule

The SystemVerilog assert statement checks whether a specified condition is true. If not, it executes the else statement. The $error system task in the else statement prints an error message describing the assertion failure. assert is ignored during synthesis.

In SystemVerilog, comparison using == or != is effective between signals that do not take on the values of x and z. Testbenches use the === and !== operators for comparisons of equality and inequality, respectively, because these operators work correctly with operands that could be x or z.
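As a brief illustration of the difference (a sketch; the names are arbitrary):

  module eq_demo();
    logic y = 1'bx;               // y deliberately unknown
    initial begin
      $display("%b", y == 1);     // prints x: equality with an unknown is unknown
      $display("%b", y === 1);    // prints 0: x does not literally match 1
      $display("%b", y === 1'bx); // prints 1: === compares x and z literally
    end
  endmodule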

VHDL

library IEEE; use IEEE.STD_LOGIC_1164.all;

entity testbench2 is -- no inputs or outputs

end;

architecture sim of testbench2 is

  component sillyfunction

  port(a, b, c: in   STD_LOGIC;

  y:   out STD_LOGIC);

  end component;

  signal a, b, c, y: STD_LOGIC;

begin

  -- instantiate device under test

  dut: sillyfunction port map(a, b, c, y);

  -- apply inputs one at a time

  -- checking results

  process begin

  a <= '0'; b <= '0'; c <= '0'; wait for 10 ns;

  assert y = '1' report "000 failed.";

  c <= '1'; wait for 10 ns;

  assert y = '0' report "001 failed.";

  b <= '1'; c <= '0'; wait for 10 ns;

  assert y = '0' report "010 failed.";

  c <= '1'; wait for 10 ns;

  assert y = '0' report "011 failed.";

  a <= '1'; b <= '0'; c <= '0'; wait for 10 ns;

  assert y = '1' report "100 failed.";

  c <= '1'; wait for 10 ns;

  assert y = '1' report "101 failed.";

  b <= '1'; c <= '0'; wait for 10 ns;

  assert y = '0' report "110 failed.";

  c <= '1'; wait for x   ns;

  assert y = '0' report "111 failed.";

  wait; -- wait forever

  end process;

end;

The assert statement checks a condition and prints the message given in the report clause if the condition is not satisfied. assert is meaningful only in simulation, not in synthesis.

Writing code for each test vector also becomes tedious, particularly for modules that require a large number of vectors. An even better approach is to place the test vectors in a separate file. The testbench simply reads the test vectors from the file, applies the input test vector to the DUT, waits, checks that the output values from the DUT match the output vector, and repeats until reaching the end of the test vector file.

HDL Example 4.39 demonstrates such a testbench. The testbench generates a clock using an always/process statement with no sensitivity list, so that it is continuously reevaluated. At the beginning of the simulation, it reads the test vectors from a text file and pulses reset for two cycles. Although the clock and reset aren't necessary to test combinational logic, they are included because they would be important when testing a sequential DUT. example.txt is a text file containing the test vectors, the inputs and expected output written in binary:

000_1
001_0
010_0
011_0
100_1
101_1
110_0
111_0

HDL Example 4.39

Testbench with Test Vector File

SystemVerilog

module testbench3();

  logic   clk, reset;

  logic   a, b, c, y, yexpected;

  logic [31:0] vectornum, errors;

  logic [3:0] testvectors[10000:0];

  // instantiate device under test

  sillyfunction dut(a, b, c, y);

  // generate clock

  always

  begin

  clk = 1; #5; clk = 0; #5;

  end

  // at start of test, load vectors

  // and pulse reset

  initial

  begin

  $readmemb("example.txt", testvectors);

  vectornum = 0; errors = 0;

  reset = 1; #22; reset = 0;

  end

  // apply test vectors on rising edge of clk

  always @(posedge clk)

  begin

  #1; {a, b, c, yexpected} = testvectors[vectornum];

  end

// check results on falling edge of clk

always @(negedge clk)

  if (~reset) begin // skip during reset

  if (y !== yexpected) begin // check result

  $display("Error: inputs = %b", {a, b, c});

  $display(" outputs = %b (%b expected)", y, yexpected);

  errors = errors + 1;

  end

  vectornum = vectornum + 1;

  if (testvectors[vectornum] === 4'bx) begin

  $display("%d tests completed with %d errors",

  vectornum, errors);

  $stop;

  end

  end

endmodule

$readmemb reads a file of binary numbers into the testvectors array. $readmemh is similar but reads a file of hexadecimal numbers.

The next block of code waits one time unit after the rising edge of the clock (to avoid any confusion if clock and data change simultaneously), then sets the three inputs and the expected output based on the four bits in the current test vector.

The testbench compares the generated output, y, with the expected output, yexpected, and prints an error if they don't match. %b and %d indicate to print the values in binary and decimal, respectively. $display is a system task to print in the simulator window. For example, $display("%b %b", y, yexpected); prints the two values, y and yexpected, in binary. %h prints a value in hexadecimal.

This process repeats until there are no more valid test vectors in the testvectors array. $stop stops the simulation.

Note that even though the SystemVerilog module supports up to 10,001 test vectors, it will stop the simulation after executing the eight vectors in the file.

VHDL

library IEEE; use IEEE.STD_LOGIC_1164.all;

use IEEE.STD_LOGIC_TEXTIO.ALL; use STD.TEXTIO.all;

entity testbench3 is -- no inputs or outputs

end;

architecture sim of testbench3 is

  component sillyfunction

  port(a, b, c: in STD_LOGIC;

  y: out STD_LOGIC);

  end component;

  signal a, b, c, y:   STD_LOGIC;

  signal y_expected: STD_LOGIC;

  signal clk, reset:   STD_LOGIC;

begin

  -- instantiate device under test

  dut: sillyfunction port map(a, b, c, y);

  -- generate clock

  process begin

  clk <= '1'; wait for 5 ns;

  clk <= '0'; wait for 5 ns;

  end process;

  -- at start of test, pulse reset

  process begin

  reset <= '1'; wait for 27 ns; reset <= '0';

  wait;

  end process;

  -- run tests

  process is

  file tv: text;

  variable L: line;

  variable vector_in: std_logic_vector(2 downto 0);

  variable dummy: character;

  variable vector_out: std_logic;

  variable vectornum: integer := 0;

  variable errors: integer := 0;

  begin

  FILE_OPEN(tv, "example.txt", READ_MODE);

  while not endfile(tv) loop

  -- change vectors on rising edge

  wait until rising_edge(clk);

  -- read the next line of testvectors and split into pieces

  readline(tv, L);

  read(L, vector_in);

  read(L, dummy); -- skip over underscore

  read(L, vector_out);

  (a, b, c) <= vector_in(2 downto 0) after 1 ns;

  y_expected <= vector_out after 1 ns;

  -- check results on falling edge

  wait until falling_edge(clk);

  if y /= y_expected then

  report "Error: y = " & std_logic'image(y);

  errors := errors + 1;

  end if;

  vectornum := vectornum + 1;

  end loop;

  -- summarize results at end of simulation

  if (errors = 0) then

  report "NO ERRORS -- " &

  integer'image(vectornum) &

  " tests completed successfully."

  severity failure;

  else

  report integer'image(vectornum) &

  " tests completed, errors = " &

  integer'image(errors)

  severity failure;

  end if;

  end process;

end;

The VHDL code uses file reading commands beyond the scope of this chapter, but it gives the sense of what a self-checking testbench looks like.


New inputs are applied on the rising edge of the clock, and the output is checked on the falling edge of the clock. Errors are reported as they occur. At the end of the simulation, the testbench prints the total number of test vectors applied and the number of errors detected.

The testbench in HDL Example 4.39 is overkill for such a simple circuit. However, it can easily be modified to test more complex circuits by changing the example.txt file, instantiating the new DUT, and changing a few lines of code to set the inputs and check the outputs.

URL: https://www.sciencedirect.com/science/article/pii/B9780128200643000040

Functional Verification

In Top-Down Digital VLSI Design, 2015

5.5 Testbench coding and HDL simulation

A testbench provides the following services during a simulation run (a skeleton illustrating them follows the list):

a)

Generate a periodic clock signal for driving simulation and clocked circuit models.

b)

Obtain stimuli vectors and apply them to the MUT at well-defined moments of time.

c)

Acquire the signal waveforms that emanate from the MUT as actual response vectors.

d)

Obtain expected response vectors and use them as a reference against which to compare.

e)

Establish a simulation report that lists functional discrepancies and timing violations.
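As a rough illustration of services a) through e) in SystemVerilog (a minimal sketch only; the module name mut, the file names, and all timing values are assumptions, not taken from the book):

  module tb_skeleton();
    // all timing in one place: clock half-period, stimulus application
    // offset after the clock edge, and response acquisition offset
    localparam T_HALF = 5, T_APPLY = 1, T_ACQUIRE = 9;
    logic clk = 1'b0;
    logic [7:0] stim, resp, expresp;
    int errors = 0, fd_stim, fd_exp;

    mut dut(.clk(clk), .in(stim), .out(resp)); // the model under test

    always #(T_HALF) clk = ~clk;               // a) periodic clock

    initial begin
      fd_stim = $fopen("stimuli.txt", "r");    // b) stimuli from disk
      fd_exp  = $fopen("expected.txt", "r");   // d) expected responses
      while (!$feof(fd_stim)) begin
        @(posedge clk);
        #(T_APPLY);                            // b) apply the stimulus
        void'($fscanf(fd_stim, "%b", stim));
        void'($fscanf(fd_exp, "%b", expresp));
        #(T_ACQUIRE - T_APPLY);                // c) acquire the response
        if (resp !== expresp) begin            // d) compare against reference
          errors++;                            // e) record the discrepancy
          $display("mismatch: got %b, expected %b", resp, expresp);
        end
      end
      $display("%0d functional discrepancies found", errors); // e) report
      $finish;
    end
  endmodule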

VHDL and SystemVerilog not only support the modeling of digital circuits, they also provide the necessary instruments for implementing simulation testbenches. 26 This section gives guidelines for doing so based on the general principles established earlier in this chapter.

5.5.1 Modularity and reuse are the keys to testbench design

File-based and golden-model-based simulation are not the only meaningful ways to process test vectors; see fig. 5.16 for more options. The coding effort can be kept more reasonable by recognizing that all sorts of simulation set-ups can be readily assembled from a very small number of versatile and reusable software modules. Adapting to a new design or a new simulation set-up can then largely be confined to a couple of minor adjustments to existing code. 27

Observation 5.16

With testbenches being major pieces of software, it pays to have a look at them from a software engineering perspective.

Figure 5.16. Software modules from which simulation set-ups can be assembled to serve a variety of needs. Basic simulation set-up that operates with a test suite previously stored on disk (a). Alternative set-up that generates stimuli and expected responses at run time (b). Preparing stimulus/response pairs with the help of a golden model (c). Fully assertion-based verification with no evaluation of responses (d). A hybrid arrangement that combines traits from (a and b) (e). A special set-up for involutory ciphers (f). Another special set-up for a situation where the stimuli exist as source code for a program-controlled processor (g). 28

Preparing verification aids is not the same as circuit design. VLSI architects are limited to a synthesizable subset of their HDL and primarily think in block diagram and RTL categories. Verification engineers, in contrast, must think in terms of functionality and behavioral properties, but are not restricted to any particular subset of the language, as testbenches and assertions are not for synthesis. They are free to use other language constructs or to use the same constructs in totally different ways. VHDL users can resort to shared variables at this point, while adopters of SystemVerilog will likely take advantage of classes, inheritance, and various high-level synchronization mechanisms offered by that language.

Observation 5.17

Verification requires a different mindset than coding synthesis models.

5.5.2 Anatomy of a file-based testbench

Consistent with what has been said in section 5.4.1, our focus here is on the set-up for a file-based format- and cycle-true simulation. Source code organized along the lines shown in fig. 5.17 is available from the book's companion website. As a circuit example, we have chosen the Gray counter of fig. 5.3. Obtaining a good understanding requires working through that code. The comments below are intended as a kind of travel guide that points to major attractions.

Figure 5.17. Organization of HDL testbench code for simulation set-up of fig. 5.11.

All disk files stored in ASCII format

File-based simulation involves at least three files, namely the stimuli, the expected responses, and a simulation report. ASCII text files are definitely preferred over binary data because the former are human-readable and platform-independent, whereas the latter are not. What's more, the usage of ASCII characters makes it possible to take advantage of the full MVL-9 (SysVer: MVL-4) set of values for specifying stimuli and expected responses. Designers are thus put in a position to check for a high-impedance condition by entering a z as expected response, for example, or to neutralize a response by entering a don't care - (SysVer: x).

Separate processes for stimulus application and for response acquisition

Recall from section 5.4.1 that the correct scheduling of simulation events is crucial:

The application of a new stimulus, denoted as Δ (for Application),

The acquisition and evaluation of the response, denoted as ⊤ (for Test),

The two clock edges symbolically denoted as ↑ and ↓.

It would be naive to have the times of occurrence of those key events hardcoded into a multitude of wait for statements or after clauses (SysVer: # terms) dispersed throughout the testbench code. A much better solution is to assign stimulus application and response acquisition to separate processes that get periodically activated at times Δ and ⊤, respectively. All relevant timing parameters are expressed as constants or as generics, thereby making it possible to adjust them from a single place in the testbench code. 29

The stimulus application process is in charge of opening, reading, and closing the stimuli file. The response acquisition process does the same with the expected responses file. In addition, it handles the simulation report file. When the stimuli file has been exhausted, the stimulus application process notifies its counterpart via an auxiliary two-valued signal named EndOfSim_S.

Stimuli and responses collected in records

Two measures contribute towards rendering the two processes that apply the stimuli and that acquire the responses, respectively, independent of the MUT and, hence, highly reusable.

a)

All input signals are collected into one stimulus record and, analogously, all output signals into one response record. 30

b)

The subsequent operations are delegated to specialized subprograms:

All file read and write operations,

Unpacking of stimuli records (where applicable),

Unpacking of expected response records (where applicable),

Packing of actual response records (if written to file),

Response checking (actual against expected), and

Compiling a simulation report.

The main processes that make part of the testbench are thus put in a position to handle stimulus and response records as wholesale quantities without having to know about their detailed composition. The writing of custom code is confined to a handful of subprograms. 31

Simulation to proceed even after expected responses have been exhausted

Occasionally, a designer may want to run a simulation before having prepared a complete set of expected responses. There will be more stimuli vectors than expected responses in this situation. To support this policy, the testbench has been designed to continue until the end of the stimuli file, no matter how many expected responses are actually available. The number of responses that went unchecked gets reflected in the simulation report.

Stoppable clock generator

A simulation run draws to a close when the processing of the last entry in the stimuli file has been completed and the pertaining response has been acquired. A mundane difficulty is to halt the simulator. There exist three alternatives for doing so in VHDL, namely

a)

Have the simulator stop after a predetermined amount of time,

b)

Cause a failure in an assert or report statement to abort the run, or

c)

Starve the event queue, in which case simulation comes to a natural end.

Alternative a) is as restrictive as b) is ugly, so c) is the option to retain. A clock generator that can be shut down is implemented as a concurrent procedure call, essentially a shorthand notation for a procedure call embedded in a VHDL process. 32

While starving the event queue also works in SystemVerilog, the customary way to end a simulation run is to use either the $stop or $finish system task listed in table 4.6.
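In SystemVerilog, the event-starvation alternative c) can be sketched as follows (signal names are assumptions): the clock generator simply stops toggling once an end-of-simulation flag is raised, after which the event queue runs dry and the simulator exits on its own:

  module clkgen_demo();
    logic clk = 1'b0;
    logic EndOfSim = 1'b0;

    // stoppable clock: once EndOfSim is set, no further events are
    // scheduled and the run comes to a natural end
    initial while (!EndOfSim) #5 clk = ~clk;

    initial #100 EndOfSim = 1'b1; // raised when the stimuli are exhausted
  endmodule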

Reset treated as an ordinary stimulus bit

Timing-wise, the reset signal, irrespective of whether synchronous or asynchronous, gets updated at time Δ like any other stimulus bit. It is, therefore, made part of the stimuli record.

URL: https://www.sciencedirect.com/science/article/pii/B9780128007303000058

Microarchitecture

Sarah L. Harris , David Harris , in Digital Design and Computer Architecture, 2022

7.6.3 Testbench

The testbench loads a program into the memories. The program in Figure 7.64 exercises all of the instructions by performing a computation that should produce the correct result only if all of the instructions are functioning correctly. Specifically, the program will write the value 25 to address 100 if it runs correctly, but it is unlikely to do so if the hardware is buggy. This is an example of ad hoc testing.

Figure 7.64. riscvtest.s

The machine code is stored in a text file called riscvtest.txt (Figure 7.65), which is loaded by the testbench during simulation. The file consists of the machine code for the instructions written in hexadecimal, one instruction per line.

Figure 7.65. riscvtest.txt

The testbench, top-level RISC-V module (that instantiates the RISC-V processor and memories), and external memory HDL code are given in the following examples. The testbench instantiates the top-level module being tested and generates a periodic clock and a reset at the start of the simulation. It checks for memory writes and reports success if the correct value (25) is written to address 100. The memories in this example hold 64 32-bit words each.

HDL Example 7.12

Testbench

SystemVerilog

module testbench();

  logic   clk;

  logic   reset;

  logic [31:0] WriteData, DataAdr;

  logic   MemWrite;

  // instantiate device to be tested

  top dut(clk, reset, WriteData, DataAdr, MemWrite);

  // initialize test

  initial

  begin

  reset <= 1; # 22; reset <= 0;

  end

  // generate clock to sequence tests

  always

  begin

  clk <= 1; # 5; clk <= 0; # 5;

  end

  // check results

  always @(negedge clk)

  begin

  if(MemWrite) begin

  if(DataAdr === 100 & WriteData === 25) begin

  $display("Simulation succeeded");

  $stop;

  end else if (DataAdr !== 96) begin

  $display("Simulation failed");

  $stop;

  end

  end

  end

endmodule

VHDL

library IEEE;

use IEEE.STD_LOGIC_1164.all;

use IEEE.NUMERIC_STD_UNSIGNED.all;

entity testbench is

end;

architecture test of testbench is

  component top

  port(clk, reset:   in STD_LOGIC;

  WriteData, DataAdr: out STD_LOGIC_VECTOR(31 downto 0);

  MemWrite:   out STD_LOGIC);

  end component;

  signal WriteData, DataAdr:   STD_LOGIC_VECTOR(31 downto 0);

  signal clk, reset, MemWrite: STD_LOGIC;

begin

  -- instantiate device to be tested

  dut: top port map(clk, reset, WriteData, DataAdr, MemWrite);

  -- Generate clock with 10 ns period

  process begin

  clk <= '1';

  wait for 5 ns;

  clk <= '0';

  wait for 5 ns;

  end process;

  -- Generate reset for first two clock cycles

  process begin

  reset <= '1';

  wait for 22 ns;

  reset <= '0';

  wait;

  end process;

  -- check that 25 gets written to address 100 at end of program

  process(clk) begin

  if(clk'event and clk = '0' and MemWrite = '1') then

  if(to_integer(DataAdr) = 100 and to_integer(writedata) = 25) then

  written report "NO ERRORS: Simulation succeeded" severity failure;

  elsif (DataAdr /= 96) then

  report "Simulation failed" severity failure;

  end if;

  end if;

  end process;

end;

HDL Example 7.13

Top-level Module

SystemVerilog

module top(input   logic   clk, reset,

  output logic [31:0] WriteData, DataAdr,

  output logic   MemWrite);

  logic [31:0] PC, Instr, ReadData;

  // instantiate processor and memories

  riscvsingle rvsingle(clk, reset, PC, Instr, MemWrite, DataAdr,

  WriteData, ReadData);

  imem imem(PC, Instr);

  dmem dmem(clk, MemWrite, DataAdr, WriteData, ReadData);

endmodule

VHDL

library IEEE;

use IEEE.STD_LOGIC_1164.all;

use IEEE.NUMERIC_STD_UNSIGNED.all;

entity top is

  port(clk, reset:   in   STD_LOGIC;

  WriteData, DataAdr: buffer STD_LOGIC_VECTOR(31 downto 0);

  MemWrite:   buffer STD_LOGIC);

end;

architecture test of top is

  component riscvsingle

  port(clk, reset:   in   STD_LOGIC;

  PC:   out STD_LOGIC_VECTOR(31 downto 0);

  Instr:   in   STD_LOGIC_VECTOR(31 downto 0);

  MemWrite:   out STD_LOGIC;

  ALUResult, WriteData: out STD_LOGIC_VECTOR(31 downto 0);

  ReadData:   in   STD_LOGIC_VECTOR(31 downto 0));

    end component;

    component imem

    port(a:   in   STD_LOGIC_VECTOR(31 downto 0);

  rd: out STD_LOGIC_VECTOR(31 downto 0));

    end component;

    component dmem

    port(clk, we: in   STD_LOGIC;

  a, wd:   in   STD_LOGIC_VECTOR(31 downto 0);

  rd:   out STD_LOGIC_VECTOR(31 downto 0));

    end component;

    signal PC, Instr, ReadData: STD_LOGIC_VECTOR(31 downto 0);

begin

    –– instantiate processor and memories

    rvsingle: riscvsingle port map(clk, reset, PC, Instr, MemWrite, DataAdr,

  WriteData, ReadData);

    imem1: imem port map(PC, Instr);

    dmem1: dmem port map(clk, MemWrite, DataAdr, WriteData, ReadData);

end;

HDL Example 7.14

Instruction Memory

SystemVerilog

module imem(input   logic [31:0] a,

  output logic [31:0] rd);

  logic [31:0] RAM[63:0];

  initial

  $readmemh("riscvtest.txt",RAM);

  assign rd = RAM[a[31:2]]; // word aligned

endmodule

VHDL

library IEEE;

use IEEE.STD_LOGIC_1164.all;

use STD.TEXTIO.all;

use IEEE.NUMERIC_STD_UNSIGNED.all;

use ieee.std_logic_textio.all;

entity imem is

  port(a:   in   STD_LOGIC_VECTOR(31 downto 0);

  rd: out STD_LOGIC_VECTOR(31 downto 0));

end;

architecture behave of imem is

  type ramtype is array(63 downto 0) of STD_LOGIC_VECTOR(31 downto 0);

  -- initialize memory from file

  impure function init_ram_hex return ramtype is

  file text_file : text open read_mode is "riscvtest.txt";

  variable text_line : line;

  variable ram_content : ramtype;

  variable i : integer := 0;

  begin

  for i in 0 to 63 loop -- set all contents low

  ram_content(i) := (others => '0');

  end loop;

    while not endfile(text_file) loop -- set contents from file

  readline(text_file, text_line);

  hread(text_line, ram_content(i));

  i := i + 1;

    end loop;

    return ram_content;

  end function;

  signal mem : ramtype := init_ram_hex;

  begin

  -- read memory

  process(a) begin

  rd <= mem(to_integer(a(31 downto 2)));

  end process;

end;

HDL Example 7.15

Data Memory

SystemVerilog

module dmem(input   logic   clk, we,

  input   logic [31:0] a, wd,

  output logic [31:0] rd);

  logic [31:0] RAM[63:0];

  assign rd = RAM[a[31:2]]; // word aligned

  always_ff @(posedge clk)

  if (we) RAM[a[31:2]] <= wd;

endmodule

VHDL

library IEEE;

use IEEE.STD_LOGIC_1164.all;

use STD.TEXTIO.all;

use IEEE.NUMERIC_STD_UNSIGNED.all;

entity dmem is

  port(clk, we: in   STD_LOGIC;

  a, wd:   in   STD_LOGIC_VECTOR(31 downto 0);

  rd:   out STD_LOGIC_VECTOR(31 downto 0));

end;

architecture behave of dmem is

begin

  process is

  type ramtype is array (63 downto 0) of STD_LOGIC_VECTOR(31 downto 0);

  variable mem: ramtype;

  begin

  -- read or write memory

  loop

  if rising_edge(clk) then

  if (we = '1') then mem(to_integer(a(7 downto 2))) := wd;

        end if;

      end if;

      rd <= mem(to_integer(a(7 downto 2)));

      wait on clk, a;

      end loop;

    end process;

end;

URL: https://www.sciencedirect.com/science/article/pii/B9780128200643000076

The Role of Hardware Description Languages in the Design Process of Multinature Systems

Sorin A. Huss , in The Electrical Engineering Handbook, 2005

3.4.2 Models of Functional Units

The Testbench

The entity denoted testbench is the top-level entity. It contains the environment and the device under test. The testbench is self-contained and used to define the interconnections between the single layers. Moreover, it selects the architectures to be used. The code is given in the following:

The Environmental Model

The environment is modeled to produce all the input data needed to test the depth gauge. In consequence, only observable effects for the device under test are reproduced.

The generic timescale can be used as a workaround for simulators that do not allow changing the resolution limit for the type time. Simulations based on femtoseconds reduce the guaranteed simulation time to 2.14 μs, whereas typical applications run for several hours or even days. This requires range extensions beyond 2³¹ or changing the resolution limit to milliseconds. If both are not feasible, a timescale to shrink the simulation's duration may be used.

The port terminals Pressure_sensor and Temp_sensor are modeled as sources that guarantee pressure and temperature potentials without side effects. These potentials can be gained from the scuba diver's position. To simplify the model, only the resulting potentials are described in the following two architectures. The first is characteristic for lake diving, and the second is aimed at ocean diving.

The model for the lake dive outlined in the following reproduces the strong coherence between diving depth and water temperature. This temperature profile is typical for deep lakes during the summer: the surface is heated by the sun, whereas wind and day/night changes form a several-meter-deep warm surface layer. Beneath this layer, the temperature drops rapidly below 10°C.

Temperature Sensor

The temperature sensor we used is a platinum resistor made in thin-layer technology on an Al2O3 substrate. We use a 1-kΩ sensor with a temperature sensitivity (Tk) of 3.85 ⋅ 10⁻³/°C (as specified in IEC 751). These sensors are quite popular in automotive applications. This results in low cost and good availability of subcomponents. The sensor is used in a voltage divider together with the constant resistor Rfix. The temperature information is represented in the output voltage Uout. The multinature property is thus obvious, as shown in Figure 3.13.
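A hedged sketch of the relations involved, assuming the supply voltage feeds the series pair and $U_{out}$ is taken across the sensor (the exact topology of Figure 3.13 may differ): the sensor resistance is approximately $R_{PT}(T) = 1\,\mathrm{k\Omega}\cdot(1 + 3.85\cdot10^{-3}\cdot T/^{\circ}\mathrm{C})$, so the divider yields $U_{out} = U_{supply}\cdot R_{PT}(T)\,/\,(R_{PT}(T) + R_{fix})$.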

FIGURE 3.13. Temperature Sensor

The simple architecture is based on the Sensor Nite PT1000 from Heraeus. Thus, the modeling process is constrained to an exploitation of commercially available subsystems:

Digital Block

The depth is determined in the digital part from data obtained through pressure and temperature measurements and represented as electrical quantities. In the first intermediate step, the actual temperature is calculated using the temperature information represented in six bits. This is followed by the calculation of the actual pressure from the pressure data given in 12 bits. The conversion of the specified pressure to a depth beneath the water surface is obtained by means of the following equation:

(3.1) $\text{depth} = \dfrac{1\,\text{m}}{0.1\,\text{bar}} \cdot (\text{pressure} - 1\,\text{bar})$

The model outlined in this section implements digital signal transformations aimed only at compensating for the nonlinearities of the sensors used. The depth is calculated with a guaranteed accuracy of 0.1 m.
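A minimal synthesizable sketch of equation (3.1), assuming the pressure is available as an unsigned 12-bit value in millibar (the function name and encoding are assumptions; the paper's code may differ): since 0.1 bar of water corresponds to 1 m, the depth in 0.1-m steps is the pressure above 1 bar divided by 10 mbar:

  // depth in 0.1 m steps from pressure in mbar (assumed encoding)
  function automatic logic [11:0] depth_dm(input logic [11:0] p_mbar);
    if (p_mbar < 12'd1000) return '0;     // at or above the surface
    return (p_mbar - 12'd1000) / 12'd10;  // (pressure - 1 bar) / 0.1 bar, in 0.1 m
  endfunction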

The digital_algorithmic architecture makes use of the resource sharing and scheduling features of the synthesis products from Synopsys, which results in a relatively small circuit design. The synthesized code consists of one multiplication unit, one division unit, and some glue logic. The code given in the following is algorithmic-level code and is fully synthesizable by the Behavioral Compiler.

This example highlights the features of VHDL in denoting high-level, yet still synthesizable, models for the digital domain.

URL: https://www.sciencedirect.com/science/article/pii/B9780121709600500207

Functional verification

Hung-Pin (Charles) Wen , ... Kwang-Ting (Tim) Cheng , in Electronic Design Automation, 2009

9.4.1 Testbench and simulation environment development

In general, the testbench is an HDL description used to create a closed system on top of the design under verification. A testbench consists of three fundamental components: a stimuli driver, a monitor, and a checker.

The stimuli driver is responsible for providing stimuli to the DUV. The stimuli can be either predetermined or generated during simulation. The purpose of the stimuli driver is not to mimic the behavior of the entire neighboring blocks but to maintain the interface coherence to the DUV.

The monitor is used to observe signals at the inputs, outputs, and any internal wires of interest on the DUV. The values at the input and output signals must be consistent with the interface protocol, and the monitor will issue an error if any exception occurs.

A checker can be viewed as a special type of monitor for checking the functionality of the design intent. Traditionally, designers create the functionality checkers manually and use them to compare the responses from the design with the specification. As designs get more complicated, the need to automate the development of such checkers increases.

On the basis of the coverage metrics, verification engineers attempt to prepare a set of test cases to cover the target functional events. In developing such test cases, experience plays a crucial role. Creating meaningful test cases for some specific events often relies heavily on a designer's knowledge and interpretation of the specifications.

Consider a 16-bit one-hot encoding bus protocol. To achieve optimal coverage of all scenarios, the test cases would require each bit taking a turn to be 1 with the others being 0. In deriving the test cases, it could be difficult to notice the regularity solely from the structure of a design implementation. However, having knowledge of the functionality of the protocol would help capture the regularity and similarity for each bit, making test generation easier and more efficient.
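For instance, the regularity can be exploited directly in a short SystemVerilog loop (signal names assumed) that walks a single 1 across the 16-bit bus:

  module onehot_gen();
    logic [15:0] bus_onehot;
    initial
      for (int i = 0; i < 16; i++) begin
        bus_onehot = 16'h1 << i; // exactly one bit high per test case
        #10;                     // hold each pattern for one test slot
      end
  endmodule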

Enumerating deterministic test cases to cover all functions is tedious. An alternative is to convert a design specification into an HDL model to automate the checking. Such a testbench is called a self-checking testbench, because checking instrumentation is no longer needed. The self-checking testbench paradigms can be divided into three types: checking with golden vectors, checking against a reference model, and transaction-based checking.

Checking with golden vectors is the most widely used approach among the three. Given coverage metrics, the verification engineers search for test cases at inputs and derive the corresponding output responses manually or by use of an auxiliary program. Such combinations of input and output vectors are called the golden vectors. After the testbench applies the input vectors to the DUV, the actual responses are captured and compared with the golden vectors. A bug is found when a mismatch occurs between the golden and the actual responses. Figure 9.13 shows the components of this method.

FIGURE 9.13. Self-checking testbench with golden vectors.

The checking-against-a-reference-model paradigm uses a reference model that captures all functions in the specification. The reference model is typically implemented at a more abstract level with either a high-level programming language or a verification language. All input vectors are applied to both the reference model and the DUV, and their responses are evaluated and compared. If the comparison takes place at the end of each cycle, the reference model must be cycle-accurate. The checker compares the responses from both the DUV and the reference model, as illustrated in Figure 9.14. If the specifications change, the reference model would need to be modified appropriately. This modification effort is usually much lower than the effort of reproducing all golden vectors required for the checking-with-golden-vectors paradigm.

Figure 9.14. Self-checking testbench with a reference model.

Transaction-based checking is applicable to a DUV whose behavior can be represented as commands and data in transactions. It uses a scoreboard to record the verified commands and data. The checker is used to query the scoreboard. It issues an error if the identifier cannot match any transaction in the scoreboard or if the command and data are not the expected values given by the scoreboard. This concept is illustrated in Figure 9.15.

FIGURE 9.15. Transaction-based self-checking testbench.
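A rough SystemVerilog sketch of such a scoreboard, keyed by a transaction identifier (types and names are assumptions, not taken from the chapter):

  module scoreboard_demo();
    typedef struct { logic [7:0] cmd; logic [31:0] data; } txn_t;
    txn_t expected[int]; // scoreboard: expected transactions by id

    // the driver records each transaction it issues
    function automatic void record(input int id, input txn_t t);
      expected[id] = t;
    endfunction

    // the checker queries the scoreboard when a response arrives
    function automatic void check(input int id, input txn_t t);
      if (!expected.exists(id))
        $error("id %0d matches no transaction in the scoreboard", id);
      else if (expected[id].cmd !== t.cmd || expected[id].data !== t.data)
        $error("id %0d: command or data not the expected values", id);
      else
        expected.delete(id); // transaction verified
    endfunction
  endmodule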

URL: https://www.sciencedirect.com/science/article/pii/B9780123743640500163

Design Simulation

R.C. Cofer , Benjamin F. Harding , in Rapid System Prototyping with FPGAs, 2006

8.6 Common Simulation Mistakes and Tips

One of the most common mistakes associated with simulation in rapid system development is the use of waveform stimulus. Waveform stimulus generation is a time-consuming process that typically forces the design team to leave out many test cases. This can lead to a less mature design at the beginning of the board-level debug and testing phase. Resources are better spent generating test cases with higher levels of automation through the implementation of testbenches.

A common simulation mistake when using testbenches is inadequate test case coverage. This can cause a sub-optimal design to be debugged at the board level, wasting effort and schedule.

Another common mistake involves implementing an inflexible, nonscalable test model. This can cause the design team to have to re-implement significant portions of the test code to accommodate design changes or updates, leading to schedule erosion and wasted effort.

To help further streamline the simulation process and assist in the engineering trade-offs associated with design testing, we will present some simulation tips. The first tip relates to potential differences between pre- and post-synthesis simulation results. It is important to realize that pre-synthesis simulation results will often be different from post-synthesis simulation results. For example, control statements in synthesis tools may produce longer delays for "if/else" structures when compared to the delays generated by "switch/case" structures. This is due to the fact that "if/else" statements generate priority-based structures. Thus, it may be possible to experience differences in timing between simulation and the synthesized board-level implementation. The second design tip is related to statement grouping and ordering. Figure 8.3 shows two differently formatted HDL statements and their potential pre- and post-synthesis simulation results. Synthesis tools have the potential to implement the second function as a parallel structure, while design simulation may implement both equations the same way. Thus, the pre- and post-synthesis simulation results may be different.

Figure 8.3. Pre- and post-synthesis simulation results
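As a generic sketch of the kind of coding-style difference involved (not the specific code of Figure 8.3): the if/else form below implies a priority chain, while the case form can be synthesized as a parallel multiplexer, so post-synthesis delays may differ even though simulation treats both alike:

  module mux_styles(input  logic [1:0] sel,
                    input  logic a, b, c, d,
                    output logic y1, y2);
    // if/else: synthesis builds a priority-based structure
    always_comb
      if      (sel == 2'd0) y1 = a;
      else if (sel == 2'd1) y1 = b;
      else if (sel == 2'd2) y1 = c;
      else                  y1 = d;

    // case: may be implemented as a parallel structure
    always_comb
      case (sel)
        2'd0:    y2 = a;
        2'd1:    y2 = b;
        2'd2:    y2 = c;
        default: y2 = d;
      endcase
  endmodule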

Synthesis tools generally (but not universally) ignore initial values. However, a simulated design may not ignore initial values. A result of this discrepancy is that there may be a difference between pre- and post-synthesis simulation results, and the design team should take this into consideration during testing. Thus, it is important to understand the implementation details of the selected synthesis tool set.
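For example (a generic sketch): the declaration initializer below starts the register at 0 in simulation, but a given synthesis tool may ignore it, whereas the explicit reset behaves the same before and after synthesis:

  module init_demo(input  logic clk, reset, d,
                   output logic q);
    logic r = 1'b0;             // initializer: honored in simulation,
                                // possibly ignored by synthesis
    always_ff @(posedge clk)
      r <= d;
    always_ff @(posedge clk)
      if (reset) q <= 1'b0;     // explicit reset: consistent behavior
      else       q <= r;
  endmodule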

Simulation results can be improved by exercising design blocks with captured real-world data streams in addition to exercising the block with generated input data. An example is the simulation of an encryption/decryption block pair with a captured data stream from the intended application in addition to simulation with computer-generated inputs.

It is desirable to assign an individual different from the original block designer to simulate a design block. While this may take a little longer, since the second individual will need to come up to speed on the design block, it can avoid many simulation errors and oversights. The designer of a block will bring biases and preconceptions to any simulation effort that can preclude comprehensive block simulation. In addition, testing other designers' design blocks can be a good initial assignment for new HDL designers. Without a comprehensive design verification philosophy standard, design verification will ultimately be as individual as each designer's personality. The implementation of uniform code standards and code reviews can dramatically reduce design development risk.

A final design tip is the implementation of "hardware in the loop" simulation. If this feature is supported by the selected tool set, large-scale simulation cycle time can be dramatically reduced. This approach takes advantage of the acceleration of parallel hardware implementation over sequential software-based simulation. The following checklist identifies simulation topics to consider.

Simulation Checklist
Use behavioral and timing simulation with testbenches for simulation
Add complexity incrementally
Focus simulation efforts on critical design areas and new design functionality
Develop testbenches which can evaluate simulation results automatically
Develop modular testbenches with reuse in mind
Understand simulator details – different simulators have different features, capabilities and performance characteristics
When possible use event-based simulators
Event-driven testbenches should specify an explicit stimulus sequence
Develop flexible testbenches which can accommodate design changes
Implement testbenches to validate functionality over a wide range of conditions
Develop a simulation test plan
Implement block-level simulation before integration to the next design level
Create testbenches for each board-level component external to the FPGA

URL: https://www.sciencedirect.com/science/article/pii/B9780750678667500093

Foreword for "Formal Verification: An Essential Toolkit for Modern VLSI Pattern"

Robert Bentley , in Formal Verification, 2015

However, this approach has a number of drawbacks:

Testbench development can be a lengthy process, typically of the order of months for complex areas of the design

Testbench development is an error-prone activity, often creating a number of bugs in the testbench equal to or greater than the number that exist in the actual RTL

Test execution is an expensive business: Intel dedicates tens of thousands of high-performance compute servers to this problem running around the clock, along with dedicated emulation hardware and FPGA-based solutions

Tests themselves may contain errors that either mask problems in the RTL (false positives) or wrongly indicate errors that do not in fact exist (false negatives)

Debug of failing tests is a major effort drain, often the largest single component of validation effort, in part because the failure is often detected only long after the point at which it occurred

It is in general hard to tell how much of the design has been exercised ("covered") by any given set of tests, so even if all tests are passing it still isn't clear that the design is actually clean

Bug detection via dynamic testing is an inherently sequential process, often referred to as "peeling the onion," since bugs can and do hide behind other bugs

Some bugs, like the Pentium® FDIV bug mentioned earlier, are data-dependent or involve such a complex set of microarchitectural conditions that it is highly unlikely that they will be hit by random testing on an RTL model

URL: https://www.sciencedirect.com/science/article/pii/B9780128007273000162

Formal property verification

Erik Seligman , ... M V Achutha Kiran Kumar , in Formal Verification, 2015

Simulation

In a typical simulation environment, you need to design a testbench that drives active values to the model. Then we need to supply a procedure for deciding what random values to drive and then check if any of the values exercise our desired conditions.

In our combination lock example, if you are worried about trying to find alternate combinations that will incorrectly open your lock, you will need to design your testbench to drive lots and lots of random values, hoping to luckily guess the random one that results in incorrectly seeing open == 1 on the output. In this example, where each attempt at opening the lock requires three cycles during which 10^4 possible values are being driven on each cycle, we will need to go through 10^(4×3)/2, or about 500 billion, simulation cycles on average before we stumble upon the needed result.
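A sketch of what such a brute-force random testbench might look like (the lock's interface and module name are assumptions; filtering out the one correct combination is omitted):

  module lock_random_tb();
    logic        clk = 1'b0;
    logic [13:0] digits; // one of 10^4 values driven per cycle
    logic        open_sig;

    combination_lock dut(.clk(clk), .digits(digits), .open(open_sig));

    always #5 clk = ~clk;

    always @(posedge clk) begin
      digits <= $urandom_range(9999); // 10^4 possibilities per cycle
      if (open_sig) begin
        $display("lock opened: possible alternate combination");
        $stop;
      end
    end
  endmodule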

Furthermore, even after spending all those simulation cycles to detect the alternate combination, and then fixing the problems, we will still have no idea whether other alternate combinations exist. We will have to run a trillion cycles for complete coverage.

URL: https://www.sciencedirect.com/science/article/pii/B9780128007273000046

Cores and Intellectual Property

R.C. Cofer , Benjamin F. Harding , in Rapid System Prototyping with FPGAs, 2006

13.5.3 Qualifying an IP Vendor

One of the biggest challenges associated with selecting an IP solution is the qualification and evaluation of an IP vendor. The IP partner evaluation process can be challenging since the selection process is subjective and usually must be based on incomplete information. Many factors must be taken into consideration and evaluated.

Unfortunately, few projects are similar in scope, scale or functionality, and the staff of the IP vendor (and their availability) are subject to significant changes. Thus, the knowledge and experience regarding prior IP vendor partnerships may be of limited applicability in subsequent IP vendor evaluations. The experience a client had previously may not reflect the experience with a new IP block engagement.

The process can be further complicated when the standard IP block offered by the vendor does not implement the exact functionality required for the project. In this case, an evaluation must also be made regarding how modifications to the offered IP block may be made. Making the modifications may require an additional contract with the IP vendor or a third party.

When evaluating an IP vendor, ask open-ended questions. For example, ask them to explain their configuration management process. Determine if they have coding standards. Ask them to define their verification and validation process. Is it independent? Does the same group that developed the IP also test the IP? The answers to these questions can help the design team better understand the potential IP vendor.

The following list presents some topics that should be addressed when evaluating an IP vendor or evaluating modification of an available IP core.

IP Vendor Qualification Question Checklist
The level of design pre-verification completed
Availability of testbenches and test results
Supplier experience with the targeted FPGA vendor/architecture/component
IP vendor tool set used to generate, synthesize, and simulate IP blocks
IP design flow and testing procedures
Documentation philosophy
Level and completeness of IP documentation
Contract options: support, modification, guarantees
How changes or updates are implemented
Licensing requirements and use limitations
IP delivery format and design collateral provided
Evidence of IP performance on the targeted FPGA platform
Organization history
Staff size and qualifications
Who on staff will provide required IP support
Organizational expertise in critical specialization areas
Number of successful commercial IP design implementations
Has the design been optimized for the targeted device family?
Previous implementations within the targeted device family
Testbench and test result availability

Down Selecting

To select the best solution for a specific application or project, information must be gathered about the potential IP offerings and their suppliers. The review process must start with the development of an abstract of the technical requirements, to be provided to potential candidates if the implementation is anything other than a standard off-the-shelf part. The responses from the IP vendors should eliminate any solutions that can't meet the system-level requirements. This process should be repeated with increasingly fine levels of technical detail. With each review process iteration, IP cores that don't meet the operational requirements can be eliminated. The final selection round should involve only two or three potential candidates. A final trade study with more than three candidates is likely to be quite complex and may require an extended selection period.

The final selection stage should include additional detailed discussions with the IP suppliers regarding design details. This is a critical phase where diligence can pay dividends. Any candidates that are in doubt should be reviewed closely and eliminated from selection if possible. Once a decision has been made on the final candidates, more detailed technical reviews should be held. The process of selecting an IP vendor may include the need to reveal some proprietary information to the IP suppliers being evaluated. In cases where highly sensitive material is involved, this can be a risk. Putting a confidentiality agreement or nondisclosure agreement (NDA) in place may be necessary, although this may involve a significant delay if legal departments get entangled in the process.

The analysis and evaluation stage is when hands-on evaluation will likely take place. This process includes the prototyping of solutions. This is the point in the IP selection process where the level of a supplier's support can best be evaluated.

During the final stages of the decision process, it is possible to see a setback with the selected IP vendor, so it is desirable to identify and maintain a backup approach with one or more alternate vendors if possible. Try to maintain good relationships with the IP vendors who were not ultimately selected.

Before eliminating a potential candidate, give them an opportunity to respond. In cases where a supplier has invested heavily in assisting the design effort, it is desirable to give them the choice and let them decide if they want to suggest an alternative approach.

Try to keep an open mind during the qualification process. Consider if a candidate may be trying to "buy" their way into the design. This can prove counterproductive during the product development cycle. It is possible that later price adjustments will occur.

URL: https://www.sciencedirect.com/science/article/pii/B9780750678667500147

Hardware-Software Prototyping from LOTOS

LUIS SÁNCHEZ FERNÁNDEZ , ... WOLFGANG ROSENSTIEL , in Readings in Hardware/Software Co-Design, 2002

7.2 Prototyping

Several different implementations have been mapped to the prototyping environment. These range from a pure software implementation to testbench versions which allowed for better design space exploration and different debugging levels. The C and VHDL code for software and hardware were mainly automatically generated from the specification. Manual changes were made in the process of tailoring the C code for the Hyperstone processor used in the prototype. Manual additions were necessary to implement the communication interfaces for hardware/software communication. These additions were made in the software and in the hardware. This kind of manual work is expected to be required for each new target architecture, since every migration to another target architecture involves slight changes of the communication scheme and a different software development environment, which imposes different constraints on the software.

A major advantage of prototyping derives from the existence of such manually coded parts. These parts are typically very difficult to simulate, since they mostly refer to interfacing, be it hardware/software interfaces or interfaces to the outside world. A prototype allows for much deeper validation by running under realistic conditions and by running in real time, thus covering system times of hours, while simulation can only cover system times of a few seconds. This was also true in the case of the example presented in this paper.

The Ethernet bridge was implemented by a processor-coprocessor architecture. The communication between processor and coprocessor is done directly via the 32-bit wide processor bus. The coprocessor appears as a set of I/O addresses to the processor and the software. For the presented application, five addresses were used, two for reading from the hardware and three for writing to the hardware. The communication is synchronised via the transferred data, not via a status register. The whole address decoding and bus protocol handling is done by the hardware on the FPGAs, which represents the coprocessor. The RAM needed by the coprocessor is implemented as on-chip SRAM directly on the FPGAs.
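A rough SystemVerilog sketch of the kind of address decoding involved (all addresses, widths, and names are invented for illustration; the original design simply used five I/O addresses, two readable and three writable):

  module coproc_decode(input  logic        clk,
                       input  logic [31:0] addr, wdata,
                       input  logic        wr,
                       output logic [31:0] rdata);
    // five I/O addresses seen by the processor (placeholder values)
    localparam logic [31:0] RD0 = 32'h8000_0000, RD1 = 32'h8000_0004,
                            WR0 = 32'h8000_0008, WR1 = 32'h8000_000C,
                            WR2 = 32'h8000_0010;
    logic [31:0] reg_wr0, reg_wr1, reg_wr2;
    logic [31:0] result0, result1; // computed elsewhere in the coprocessor

    // three writable addresses accept data from the processor
    always_ff @(posedge clk)
      if (wr)
        case (addr)
          WR0: reg_wr0 <= wdata;
          WR1: reg_wr1 <= wdata;
          WR2: reg_wr2 <= wdata;
          default: ;
        endcase

    // two readable addresses return coprocessor results
    always_comb
      case (addr)
        RD0:     rdata = result0;
        RD1:     rdata = result1;
        default: rdata = '0;
      endcase
  endmodule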

URL: https://www.sciencedirect.com/science/article/pii/B9781558607026500557