This week was a real eye-opener in many ways. Physically I was unwell, and mentally I was fatigued. Throughout the week, however, I picked up a lot of advice from the supervisor, and I hope I was able to implement some of it in my work.
Bitstream Header File
The first task of the week was to review what was learned from the research done on the Bitstream Header File. As explained previously (https://blog.aeste.my/archives/2892), the Bitstream Header file contains some information about the bitstream itself. To better understand this information, and the exact nature of the bitstream length given in the header, a test bitstream was generated by my colleague so that we could observe exactly what data it held. The findings were very encouraging. A hex-dump of the header revealed that the data was laid out exactly as described in the Xilinx datasheets. Furthermore, it was found that the bitstream length given in the header is the total length of the bitstream excluding the header itself. Having reported this finding to the supervisor, we agreed that this was a very useful piece of information: it means the size of the bitstream does not need to be calculated at all. Taking the length from the header itself makes the overall program much simpler and removes the complicated bitstream size counter (https://blog.aeste.my/archives/2866) from the program altogether.
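As a rough illustration, extracting the length from the header could look like the sketch below. This is based on the commonly documented (unofficial) layout of a Xilinx .bit file, not on our actual code: a fixed 13-byte preamble, then key/value fields 'a' through 'd' (a 2-byte big-endian length followed by the data), and finally key 'e' followed by a 4-byte big-endian bitstream length.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a .bit header parser (an illustration, not the project code).
 * Walks the header fields and returns the bitstream length recorded in
 * field 'e', or 0 if the header looks malformed. */
uint32_t bit_header_length(const uint8_t *buf, size_t len)
{
    size_t pos = 13;                      /* skip the fixed preamble */
    while (pos + 1 < len) {
        uint8_t key = buf[pos++];
        if (key == 'e') {                 /* 4-byte big-endian length */
            if (pos + 4 > len) return 0;
            return ((uint32_t)buf[pos]     << 24) |
                   ((uint32_t)buf[pos + 1] << 16) |
                   ((uint32_t)buf[pos + 2] << 8)  |
                    (uint32_t)buf[pos + 3];
        }
        if (pos + 2 > len) return 0;
        uint16_t flen = ((uint16_t)buf[pos] << 8) | buf[pos + 1];
        pos += 2 + flen;                  /* skip this field's payload */
    }
    return 0;
}
```

Because the header carries this length, the rest of the program can simply trust it instead of counting bytes.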
Base64 Decoder with CRC
Having worked a bit with my colleague on the Base64 decoder complete with CRC check the previous week, my initial task was to test the code for bugs and performance. The first problem I faced was generating test files of varying lengths in the correct format (512 bytes of data + 2 bytes of CRC). The online translator and calculator used thus far proved inefficient for larger files. The supervisor then provided a useful solution in the form of a free checksum utility called Jacksum.
The supervisor helped write a short script to generate dummy files for testing the code. During testing, a few bugs were found, and for larger files a CRC error was flagged where there should not have been one. Having reported this to the supervisor, I was asked to redesign the Base64 decoder and simplify its mechanism to avoid these bugs, as the code currently catered for too many different file conditions.
The previous decoder catered for files encoded in the following way:
1) Get a 512-byte block;
2) Generate CRC16 for the 512-byte block;
3) Get another 512-byte block;
4) Generate CRC16 for the next 512-byte block;
5) Base64-encode the entire file.
This raises many issues, as the code must cater for different block sizes. The supervisor then suggested a much simpler alternative: a decoder for files encoded in the following way:
1) Get a 512-byte block;
2) Generate CRC16 for the 512-byte block;
3) Base64-encode this 512+2-byte block;
4) Get the next 512-byte block;
5) Generate CRC16 for the next 512-byte block;
6) Base64-encode the next 512+2-byte block;
7) Append it to the earlier Base64 block.
This allows a repeated decoding pattern to be implemented: every block is the same size and has exactly the same structure.
Two bytes of padding bring the block size up to 516 bytes (divisible by 3, as Base64 encoding requires) and are represented by the character "=" in the file itself (http://en.wikipedia.org/wiki/Base64). This makes the decoding and writing process much easier: the decoder can decode exactly 516 bytes of data, write the data, check the CRC, and move on to the next 516-byte block; a fixed pattern that the code can follow.
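The per-block encoding can be sketched as follows. This is a host-side illustration rather than the actual project code, and it assumes the CRC-16/CCITT-FALSE variant (polynomial 0x1021, initial value 0xFFFF) and a high-byte-first CRC order, since the post does not name the exact variant used:

```c
#include <stdint.h>
#include <stddef.h>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* CRC-16/CCITT-FALSE -- an assumption; the post does not name the variant. */
uint16_t crc16(const uint8_t *p, size_t n)
{
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)*p++ << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Encode one 512-byte block: append its CRC16, then Base64 the resulting
 * 514 bytes.  Since 514 % 3 == 1, the final 4-char group carries two '='
 * pads -- the two padding bytes the decoder later discards.  Output is
 * exactly 688 characters (not NUL-terminated). */
void encode_block(const uint8_t in[512], char out[688])
{
    uint8_t buf[514];
    size_t i, o = 0;
    uint16_t crc;

    for (i = 0; i < 512; i++) buf[i] = in[i];
    crc = crc16(in, 512);
    buf[512] = (uint8_t)(crc >> 8);       /* CRC high byte first (assumed) */
    buf[513] = (uint8_t)(crc & 0xFF);

    for (i = 0; i + 3 <= 514; i += 3) {   /* 171 full 3-byte groups */
        uint32_t v = ((uint32_t)buf[i] << 16) |
                     ((uint32_t)buf[i + 1] << 8) | buf[i + 2];
        out[o++] = B64[(v >> 18) & 63];
        out[o++] = B64[(v >> 12) & 63];
        out[o++] = B64[(v >> 6) & 63];
        out[o++] = B64[v & 63];
    }
    out[o++] = B64[buf[513] >> 2];        /* one leftover byte -> "XX==" */
    out[o++] = B64[(buf[513] & 3) << 4];
    out[o++] = '=';
    out[o++] = '=';
}
```

Each 512-byte block thus becomes a fixed 688-character Base64 chunk, which is exactly what makes the decoder's job repetitive and simple.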
Implementation of the new method
The method was implemented after a further review of the current code to understand where the changes needed to be made. The following diagram describes the flow of the data and how it is processed by the PIC:
The first few steps of the code remain the same, apart from the block size being changed to 516 bytes instead of the 514 previously set. As data is received from the TCP socket, 4 characters are taken at a time and decoded into 3 bytes, which are then written to the SD card. The block counter (initially 516) is decremented by 3 at every write. Eventually the counter reaches 3, which means the block boundary has been reached and the first of the two CRC bytes has been sent to the SD card. Of the remaining 3 decoded bytes, only the first (the second byte of the CRC) is written to the SD card; the other two bytes of padding are discarded. The same method is followed for the next 516-byte block. As can be seen, the repeatable pattern makes the code simpler.
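The loop described above can be sketched as follows. This is a simplified host-side illustration, not the actual PIC code: the real code streams characters from the TCP socket and writes to the SD card, and the CRC comparison itself is left out here to show just the counter mechanism.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static int b64val(char c)
{
    if (c >= 'A' && c <= 'Z') return c - 'A';
    if (c >= 'a' && c <= 'z') return c - 'a' + 26;
    if (c >= '0' && c <= '9') return c - '0' + 52;
    if (c == '+') return 62;
    if (c == '/') return 63;
    return 0;                       /* '=' pads decode to zero bits */
}

/* Decode one 688-character block into 512 data bytes + 2 CRC bytes.
 * Returns the number of payload bytes kept (514). */
size_t decode_block(const char in[688], uint8_t out[514])
{
    int counter = 516;              /* logical block size, as in the post */
    size_t o = 0;

    for (size_t i = 0; i < 688; i += 4) {
        uint32_t v = ((uint32_t)b64val(in[i])     << 18) |
                     ((uint32_t)b64val(in[i + 1]) << 12) |
                     ((uint32_t)b64val(in[i + 2]) << 6)  |
                      (uint32_t)b64val(in[i + 3]);
        uint8_t b[3] = { (uint8_t)(v >> 16), (uint8_t)(v >> 8), (uint8_t)v };

        if (counter > 3) {
            memcpy(out + o, b, 3);  /* 3 data/CRC bytes per step */
            o += 3;
        } else {
            out[o++] = b[0];        /* 2nd CRC byte; drop the 2 pad bytes */
        }
        counter -= 3;
    }
    return o;                       /* 171*3 + 1 = 514 */
}
```

Because every block is 688 characters, the same loop runs unchanged for each block until the bitstream length from the header is exhausted.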
To test out the code, the script for generating the base64 encoded files was modified slightly and some testing was done.
Performance of the writing process (with Base64 Decoding):
Once the reliability of the code was established, a speed test was run to determine the performance of the writing speed of the entire process. The following figure shows the results of the test:
The test results show that the performance of the data write operations has actually gotten slower, largely down to the Base64 decoder and perhaps the CRC check. A full-length bitstream takes about 51 seconds to write, which is much slower than required. Much more work needs to be put into improving the performance.
SD card initialization issues
This feels like a topic beaten to death, but it needed to be investigated out of necessity this week. During testing of the Base64 decoder code, the current SD card stopped responding completely, both to the PIC and to the PC. The only working SD card left was the 8GB card that has so far failed to initialize on the PICTail. Therefore, using the hardware debugger on the PICKit3 programmer, Microchip's MDD initialization code was stepped through to find out exactly where it failed. Initial debugging showed that the code failed during the DiskMount process, in the Media Initialize function. Delving deeper, it was discovered that the problem occurred when the SD card would not respond to the PIC's request to echo back the operating conditions check.
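For context, the "echo back" step corresponds to the SD interface-condition command (CMD8, SEND_IF_COND) of SPI-mode initialization: the host sends a voltage range and a check pattern, and a healthy card echoes both back in its R7 response. The sketch below illustrates that exchange; `spi_xfer()` here is a hypothetical one-byte SPI transfer standing in for Microchip's MDD SPI routines, and the mock card reply exists only to make the example self-contained.

```c
#include <stdint.h>
#include <stddef.h>

/* Mock SPI transfer for illustration only: simulates a card that answers
 * CMD8 correctly.  On the PIC this would be the real MDD SPI routine. */
static const uint8_t mock_reply[] = { 0xFF, 0x01, 0x00, 0x00, 0x01, 0xAA };
static unsigned mock_pos;
static uint8_t spi_xfer(uint8_t out)
{
    if (out != 0xFF) return 0xFF;   /* command byte going out: card idle */
    if (mock_pos < sizeof mock_reply) return mock_reply[mock_pos++];
    return 0xFF;
}

/* Returns 1 if the card echoes the voltage range and check pattern,
 * 0 if it never responds -- the failure observed this week. */
int sd_check_if_cond(void)
{
    const uint8_t cmd8[6] = { 0x48,        /* 0x40 | CMD8 */
                              0x00, 0x00,
                              0x01,        /* VHS: 2.7-3.6 V */
                              0xAA,        /* check pattern */
                              0x87 };      /* precomputed CRC7 | stop bit */
    uint8_t r1 = 0xFF, r7[4];
    int i;

    for (i = 0; i < 6; i++) spi_xfer(cmd8[i]);

    /* wait for R1: the top bit clears once the card responds */
    for (i = 0; i < 8 && ((r1 = spi_xfer(0xFF)) & 0x80); i++) ;
    if (r1 & 0x80) return 0;               /* card never responded */

    for (i = 0; i < 4; i++) r7[i] = spi_xfer(0xFF);
    return r7[2] == 0x01 && r7[3] == 0xAA; /* card must echo VHS + pattern */
}
```

In the failing case, the wait loop above would time out with no R1 response at all, which matches what the debugger showed inside the Media Initialize function.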