My journey with C++ continues with three engines up and running: the bitgenerator, the software that encodes bitstreams to suit our system; the software loader, which runs data2mem to load the ELF file into the RAM blocks of the bitstream; and, last but not least, the synthesizer, which runs the Xilinx command-line tools to transform the Verilog design into a bitstream.
Picking up from last week, the software encoding the bitstream, the bitgenerator, wasn’t originally supposed to use the Boost library for Base64 encoding. After writing my own Base64 functions and testing them successfully, we decided to use Boost instead. The main reason is to avoid careless mistakes due to my lack of programming experience. I used Boost’s binary-to-Base64 conversion as explained in the first answer to this question. If, like me, you don’t care about line breaks, just remove that part. Moreover, instead of using an iterator as in the example to collect the Base64 output, you can use a character pointer into a memory block large enough to hold the encoded result. Watch out: this code doesn’t append the ‘=’ padding characters at the end of each block, so you should add code at the end of each encoding pass to pad the output character array.
For the synthesizer, my code uses the system() function, which invokes the command processor to run the commands passed to it as a string. The concept is pretty simple, though it gets slippery when escape characters are needed to write backslashes and double quotes inside the command string. The getenv() function is a great help for checking the values of environment variables while the program is running. I’ve put some effort into making the values and naming of the various directories and flags as clear as possible to help anyone looking to use this class for compiling code. The data2mem tool uses the same concepts as the synthesizer code.
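A minimal sketch of the idea, not the actual class: `build_command` and `run_tool` are hypothetical helper names, and `XILINX` is an assumption about the environment variable the ISE settings script exports on your machine.

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Hypothetical helper: build a tool invocation, quoting the file name so
// paths with spaces survive the shell. Note the escaping inside the C++
// string literal: "\"" yields one literal double quote, and "\\" would
// yield a single backslash.
std::string build_command(const std::string &tool, const std::string &file)
{
    return tool + " -ifn \"" + file + "\"";
}

// Run one synthesis step, first checking the environment with getenv().
// (XILINX is an assumption; adjust to your installation.)
int run_tool(const std::string &tool, const std::string &file)
{
    if (std::getenv("XILINX") == NULL) {
        std::fprintf(stderr, "Xilinx environment not set\n");
        return 1;
    }
    // system() hands the whole string to the command processor and
    // blocks until the command finishes; its return value encodes the
    // command's exit status.
    return std::system(build_command(tool, file).c_str());
}
```

Keeping the string-building separate from the system() call makes the escaping easy to inspect before anything actually runs.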
This is my first time writing classes, but I’ve tried to make sure mine meet the two basic design goals of classes: stability with any code and catering for various user scenarios. I can’t claim to have met those goals as fully as I’d like, but I’ve done what I could considering my other pending HDL tasks.
Speaking of HDL, I started finalizing all my devices. Each one had bits and pieces missing, and this week I checked the GPIO to fix one tiny thing that had been nagging me: the SEL signal. According to the Wishbone standard, when a device receives a read or a write it should only send, or consider, the data bytes selected by the SEL signal. However, in the special case of our system, the AEMB itself, after reading from the data bus, only considers the bytes selected by SEL. Thus, slaves can still send the full word and the AEMB will pick out the suitable bytes itself. SEL becomes more crucial when the processor is writing to the GPIO, where we must ensure that only the selected bytes change. Still, our GPIO drivers always write a full word to the device. Bottom line: the GPIO’s SEL signal selects nothing. The device only requires at least one SEL line to be high to perform a transaction; otherwise it ignores the transaction.
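To make the byte-lane behaviour concrete, here is a toy C++ model (a sketch of the semantics, not our actual Verilog) of a Wishbone write with a 4-bit SEL: only the lanes whose SEL bit is high update the register, and an all-zero SEL leaves it untouched.

```cpp
#include <cstdint>

// Model one Wishbone write to a 32-bit register. sel[0] selects the
// least-significant byte lane, sel[3] the most-significant one.
uint32_t wb_write(uint32_t reg, uint32_t data, unsigned sel)
{
    if (sel == 0)           // no lane selected: ignore the transaction
        return reg;

    // Expand each SEL bit into an 8-bit lane mask.
    uint32_t mask = 0;
    for (int lane = 0; lane < 4; ++lane)
        if (sel & (1u << lane))
            mask |= 0xFFu << (8 * lane);

    // Selected bytes come from the bus, the rest keep their old value.
    return (reg & ~mask) | (data & mask);
}
```

Since our drivers always drive SEL with all lanes high, every write degenerates to the full-word case, which is why the GPIO gets away with only checking that SEL is non-zero.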
To conclude, the only engine left is the creator of the Verilog top level. Meanwhile, a huge list of small fixes is still pending for our HDL devices.