In the past month I’ve been experimenting with various approaches to creating automated test benches for my Verilog I/O cores that use the Wishbone bus. The “automated” part simply means that the test bench verifies the correctness of the received outputs and displays a Pass or Fail message. This is in contrast to ordinary test benches, where a waveform or textual dump of signals is generated and the user has to verify the correctness of the outputs themselves. In this blog post I’ll explain the various algorithms I’ve experimented with and their pros and cons.

  • The first method is one where the IP core is connected to two modules: one that loads test vectors into the device, and another that monitors the inputs, predicts the expected correct outputs, and compares them with the actual obtained outputs. In this approach the designer has to build a circuit that models the behavior of the IP core to generate the expected outputs.
    • Pros
      • Almost all of the test bench is coded in Verilog. The designer doesn’t need to create scripts to generate the test vectors or verify the obtained outputs, so no knowledge of other scripting or programming languages is required.
      • Since the test bench is not going to be synthesized, the designer is free to use non-synthesizable constructs.
    • Cons
      • If the developer is not careful, the Verilog code may end up comparing the expected outputs against themselves rather than against the actual ones, in which case the check always passes.
      • If non-synthesizable logic is used, simulators like Verilator that don’t support those constructs can’t be used.
      • Can get complicated with sophisticated devices.
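
To make the first method concrete, here is a minimal sketch of such a self-checking bench. The `adder` core, its port names, and the vector count are all illustrative stand-ins, not anything from an actual Wishbone core; the point is that the expected output is computed by an independent behavioral model inside the monitor, never read back from the DUT itself.

```verilog
// Trivial stand-in for the core under test (illustrative only).
module adder (input [7:0] a, input [7:0] b, output [8:0] sum);
  assign sum = a + b;
endmodule

module tb_method1;
  reg  [7:0] a, b;
  wire [8:0] sum;
  integer errors;

  adder dut (.a(a), .b(b), .sum(sum));  // device under test

  initial begin
    errors = 0;
    // Driver: load random test vectors into the device.
    repeat (100) begin
      a = $random; b = $random;
      #1;
      // Monitor: an independent behavioral model (a + b) predicts the
      // expected output and compares it with the actual one.
      if (sum !== a + b) begin
        $display("FAIL: %0d + %0d gave %0d", a, b, sum);
        errors = errors + 1;
      end
    end
    if (errors == 0) $display("PASS");
    $finish;
  end
endmodule
```

Note that the comparison uses the DUT output on one side and the model’s expression on the other; the pitfall mentioned above is accidentally putting the model’s value on both sides.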
  • The second method is one where both the modeling and the test generation are handled by a script. For example, a bash script generates the test vectors into a text file, and the behavior of the IP core under test is modeled inside the same script, which produces the expected outputs. A Verilog module then simply loads the test vectors into the core and compares the generated outputs with the expected ones.
    • Pros
      • Modeling can be much simpler when written in a programming or shell language.
    • Cons
      • Requires coding in several languages.
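
For the second method, the Verilog side shrinks to a loader/comparator. A sketch is below, assuming a script has already written the driving vectors and the modeled outputs to two hex files (`vectors.txt` and `expected.txt` are hypothetical names, as is the 8-bit `adder` core):

```verilog
// Trivial stand-in for the core under test (illustrative only).
module adder (input [7:0] a, input [7:0] b, output [8:0] sum);
  assign sum = a + b;
endmodule

module tb_method2;
  reg [15:0] vectors  [0:99];  // packed {a, b} pairs from the script
  reg [8:0]  expected [0:99];  // outputs modeled by the script
  reg  [7:0] a, b;
  wire [8:0] sum;
  integer i, errors;

  adder dut (.a(a), .b(b), .sum(sum));

  initial begin
    $readmemh("vectors.txt",  vectors);
    $readmemh("expected.txt", expected);
    errors = 0;
    for (i = 0; i < 100; i = i + 1) begin
      {a, b} = vectors[i];
      #1;
      // The model lives in the script; here we only compare.
      if (sum !== expected[i]) begin
        $display("FAIL at vector %0d", i);
        errors = errors + 1;
      end
    end
    if (errors == 0) $display("PASS");
    $finish;
  end
endmodule
```

All the interesting logic (vector generation and modeling) now lives in the script, which is where the multi-language requirement comes from.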
  • The third approach, which is the one being implemented, involves no modeling at all. In this approach two kinds of tests are used: fixed and random. The necessary fixed tests are determined by the designer, and their driving test vectors are generated manually or with a script. These fixed tests cover all known scenarios the device should be able to handle. The test output is produced by running the test vectors through the core under test and printing out its outputs. The obtained outputs are then verified manually against the spec, and once proven correct, they are saved as reference outputs that future test runs are compared against. The second kind, random tests, is verified simply by comparing the outputs of two simulators running the same code, to ensure no language or simulator problems are present.
    • Pros
      • No modeling takes place, so it’s simpler to code and avoids the need to verify that your model itself is working properly.
      • The Verilog part of the test bench is very limited: it simply loads the test vectors and dumps the outputs.
      • Can be turned into a generic testing device, since no modeling is involved. Hence, if all differences between cores are accounted for in one piece of code, that code can be used to test every single IP core.
    • Cons
      • Time consuming, as the designer needs to verify each reference output manually before it can be used to check future tests.
      • Consumes more space, as the test vectors and reference outputs need to be saved along with the script files.
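
The bench for the third method is a pure dump-only module. A minimal sketch, again with illustrative names (`vectors.txt`, `obtained.txt`, and the `adder` core are assumptions): the pass/fail decision happens outside the simulator, e.g. by diffing `obtained.txt` against the manually verified reference file.

```verilog
// Trivial stand-in for the core under test (illustrative only).
module adder (input [7:0] a, input [7:0] b, output [8:0] sum);
  assign sum = a + b;
endmodule

module tb_method3;
  reg [15:0] vectors [0:99];  // packed {a, b} driving pairs
  reg  [7:0] a, b;
  wire [8:0] sum;
  integer i, fd;

  adder dut (.a(a), .b(b), .sum(sum));

  initial begin
    $readmemh("vectors.txt", vectors);
    fd = $fopen("obtained.txt", "w");
    for (i = 0; i < 100; i = i + 1) begin
      {a, b} = vectors[i];
      #1;
      // No checking here: just dump the obtained outputs for a later
      // comparison against the saved reference file.
      $fwrite(fd, "%h\n", sum);
    end
    $fclose(fd);
    $finish;
  end
endmodule
```

Because this module knows nothing about what the core is supposed to do, the same skeleton can be reused for any IP core, which is what makes the approach a candidate for a generic test device.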

That’s all about the algorithms I experimented with. I’ve chosen to adopt the final method, since it’s the one that can be used to create a generic test device, our ultimate goal. However, due to time constraints, I didn’t manage to actually build the full generic test module. In the next two posts I’ll explain what I’ve accomplished and the hurdles that need to be overcome to create a generic test bench.