This week was a real eye-opener in many ways. Physically I was unwell, and mentally I was fatigued. Even so, throughout the week I picked up a lot of advice from the supervisor, and I hope I was able to implement some of it in my work.

Bitstream Header File

The first task of the week was to follow up on the research done on the Bitstream Header File. As explained previously, the Bitstream Header contains some information about the bitstream itself (https://blog.aeste.my/archives/2892). To understand this information better, and in particular the exact nature of the bitstream length given in the header, a test bitstream was generated by my colleague so that we could observe exactly what data it held.

The findings were very encouraging. A hex dump of the header revealed that the data was laid out exactly as explained in the Xilinx datasheets. Furthermore, it was found that the bitstream length given in the header is the total length of the bitstream excluding the header itself. Having reported this finding to the supervisor, we agreed that it is a very useful piece of information, as it means the size of the bitstream does not need to be calculated at all. Taking the length straight from the header makes the overall program much simpler and removes the complicated bitstream size counter (https://blog.aeste.my/archives/2866) from the program altogether.
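
For illustration, a minimal sketch of pulling that length straight out of the header is shown below. The 'a' to 'e' field tags and the leading preamble follow the commonly documented .bit layout rather than anything verified here, so treat the exact offsets as assumptions to be checked against a hex dump of a real file.

```c
/*
 * Sketch: extract the bitstream length from a Xilinx .bit header.
 * The field layout ('a'..'d' string fields, then 'e' with a 32-bit
 * big-endian length) is assumed from the commonly documented format.
 */
#include <stdio.h>
#include <stdint.h>

static uint16_t read_u16(FILE *f)            /* big-endian 16-bit field */
{
    int hi = fgetc(f);
    int lo = fgetc(f);
    return (uint16_t)((hi << 8) | lo);
}

static uint32_t read_u32(FILE *f)            /* big-endian 32-bit field */
{
    uint32_t v = 0;
    for (int i = 0; i < 4; i++)
        v = (v << 8) | (uint32_t)fgetc(f);
    return v;
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    /* Skip the fixed preamble: a 16-bit length, that many bytes,
     * then one more 16-bit field. */
    fseek(f, read_u16(f), SEEK_CUR);
    read_u16(f);

    /* Walk the 'a'..'d' fields (design name, part, date, time), each a
     * tag byte followed by a 16-bit length and that many bytes. */
    int tag;
    while ((tag = fgetc(f)) != EOF && tag != 'e')
        fseek(f, read_u16(f), SEEK_CUR);

    /* 'e' is followed by the 32-bit length of the raw bitstream that
     * comes after it, i.e. the length excluding this header. */
    if (tag == 'e')
        printf("bitstream length: %lu bytes\n", (unsigned long)read_u32(f));

    fclose(f);
    return 0;
}
```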

Base64 Decoder with CRC

Having worked a bit with my colleague on the Base64 decoder complete with CRC check the previous week, my initial task was to test the code for bugs and performance. The first problem I faced was generating test files of varying lengths in the correct format (512 bytes of data + 2 bytes of CRC). The online translator and calculator used thus far proved inefficient for larger files, so the supervisor provided a useful solution: a free checksum utility called Jacksum.
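
For reference, the per-block check works along the lines of the sketch below. The exact CRC-16 variant used by the project is not stated here, so the CCITT/XModem polynomial with a zero seed is an assumption; whatever algorithm Jacksum was asked to compute is what the firmware would actually have to match, byte order included.

```c
/*
 * Sketch: frame a 512-byte block as 512 bytes of data + 2 bytes of CRC.
 * Assumes CRC-16 CCITT/XModem (poly 0x1021, seed 0x0000) and big-endian
 * CRC placement; both choices must match the real encoder/decoder pair.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0x0000;                               /* assumed seed */

    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* 512 data bytes in, 514-byte framed block out. */
static void make_block(uint8_t block[514], const uint8_t data[512])
{
    uint16_t crc = crc16_ccitt(data, 512);

    memcpy(block, data, 512);
    block[512] = (uint8_t)(crc >> 8);                    /* CRC high byte */
    block[513] = (uint8_t)(crc & 0xFF);                  /* CRC low byte  */
}
```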

The supervisor helped write a short script to generate dummy files for testing the code. During testing, a few bugs were found, and for larger files a CRC error was flagged that should not have been. Having reported this to the supervisor, I was asked to redesign the Base64 decoder and simplify its mechanism to avoid these bugs, as the current code catered to too many different conditions for the file.

The previous version of the decoder catered for files encoded as follows:

1) Get a 512-byte block;
2) Generate CRC16 for the 512-byte block;
3) Get another 512-byte block;
4) Generate CRC16 for the next 512-byte block;
…
10) Base64-encode the entire file.

This brings up many issues, as the code must cater to different block sizes. The supervisor then suggested a better method that simplifies the code massively: develop a decoder for files encoded as follows:

1) Get a 512-byte block;
2) Generate CRC16 for the 512-byte block;
3) Base64-encode this 512+2 byte block;
4) Get the next 512-byte block;
5) Generate CRC16 for the next 512-byte block;
6) Base64-encode the next 512+2 byte block;
7) Append it to the earlier Base64 block.

This allows the decoder to follow a repeated pattern: every block has the same size and exactly the same layout.

Fig 1: 516-byte block

The two bytes of padding bring the block size up to 516 bytes (divisible by 3, which completes the Base64 encoding) and are represented by the character “=” in the file itself (http://en.wikipedia.org/wiki/Base64). This makes the decoding and writing process much easier: the decoder decodes exactly 516 bytes of data, writes the data, checks the CRC and moves on to the next block of 516 bytes; a fixed pattern that the code can follow.
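
As a small illustration of that fixed pattern, the core Base64 step is sketched below: four encoded characters always decode to three raw bytes, and “=” simply decodes to a zero value that the caller later throws away with the padding. The lookup routine is a generic sketch, not the project's actual table.

```c
/*
 * Sketch: decode one Base64 quad (4 characters) into 3 raw bytes.
 * '=' padding is mapped to zero so the quad always yields 3 bytes;
 * discarding the padding bytes is left to the caller.
 */
#include <stdint.h>

static uint8_t b64_value(char c)
{
    if (c >= 'A' && c <= 'Z') return (uint8_t)(c - 'A');
    if (c >= 'a' && c <= 'z') return (uint8_t)(c - 'a' + 26);
    if (c >= '0' && c <= '9') return (uint8_t)(c - '0' + 52);
    if (c == '+') return 62;
    if (c == '/') return 63;
    return 0;                     /* '=' padding: decode as zero, don't stop */
}

static void b64_decode_quad(const char in[4], uint8_t out[3])
{
    uint32_t v = ((uint32_t)b64_value(in[0]) << 18) |
                 ((uint32_t)b64_value(in[1]) << 12) |
                 ((uint32_t)b64_value(in[2]) <<  6) |
                  (uint32_t)b64_value(in[3]);

    out[0] = (uint8_t)(v >> 16);
    out[1] = (uint8_t)(v >> 8);
    out[2] = (uint8_t)(v);
}
```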

Implementation of the new method

The method was implemented after a further review of the current code to understand where the changes needed to be made. The following diagram describes the flow of the data and how it is processed by the PIC:

Fig 2: Base64 Decoder with CRC check

The first few steps of the code remain the same, apart from the block size being changed to 516 instead of the 514 set previously. As data is received from the TCP socket, 4 bytes are taken at a time and decoded into 3 bytes, which are then written to the SD card. The block counter (initially 516) is decremented by 3 every time 3 bytes are written. Eventually the block counter reaches 3, which means the block boundary has been reached and the first of the two CRC bytes is sent to the SD card. Of the remaining 3 bytes, only the first (which is the second byte of the CRC) is written to the SD card; the other two bytes of padding are discarded. The same method is followed for the next block of 516 bytes. As can be seen, the repeatable pattern makes the code simpler.
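
A rough sketch of that loop is shown below. The recv_quad(), sd_write_byte() and crc16_update() names are placeholders for the project's TCP, MDD write and CRC routines, b64_decode_quad() is the helper from the earlier sketch, and both the quads in which the two CRC bytes emerge and the decision to verify them rather than store them are assumptions based on the 516/3 arithmetic, not the actual code.

```c
/*
 * Sketch of the per-block decode loop: 4 Base64 characters in, 3 raw
 * bytes out, with a counter that starts at 516 and drops by 3 per quad.
 * Block layout assumed: 512 data bytes, 2 CRC bytes, 2 padding bytes.
 */
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_BYTES 516

extern void     recv_quad(char quad[4]);         /* 4 chars from the TCP socket      */
extern void     sd_write_byte(uint8_t b);        /* one byte into the MDD write path */
extern void     b64_decode_quad(const char in[4], uint8_t out[3]);
extern uint16_t crc16_update(uint16_t crc, uint8_t b);

/* Returns true when the block's CRC matched. */
bool handle_block(void)
{
    char     quad[4];
    uint8_t  raw[3];
    uint16_t running = 0x0000;                   /* assumed CRC seed */
    uint16_t received_crc = 0;
    int      remaining = BLOCK_BYTES;

    while (remaining > 0) {
        recv_quad(quad);
        b64_decode_quad(quad, raw);

        if (remaining > 6) {
            /* Plain data: write all 3 bytes and fold them into the CRC. */
            for (int i = 0; i < 3; i++) {
                sd_write_byte(raw[i]);
                running = crc16_update(running, raw[i]);
            }
        } else if (remaining == 6) {
            /* Last two data bytes, then the first CRC byte. */
            sd_write_byte(raw[0]);  running = crc16_update(running, raw[0]);
            sd_write_byte(raw[1]);  running = crc16_update(running, raw[1]);
            received_crc = (uint16_t)raw[2] << 8;
        } else {                                 /* remaining == 3 */
            /* Second CRC byte; the two '=' padding bytes are discarded. */
            received_crc |= raw[0];
        }
        remaining -= 3;
    }
    return received_crc == running;
}
```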

To test out the code, the script for generating the base64 encoded files was modified slightly and some testing was done.
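
The script itself is not reproduced here, but a rough C equivalent of what it has to do per block might look like the sketch below, reusing the crc16_ccitt() routine from the earlier sketch. It only illustrates the framing; it is not the actual generator used for testing.

```c
/*
 * Sketch: encode one framed block (512 data bytes + 2 CRC bytes) as
 * Base64.  514 = 171 * 3 + 1, so the final group carries one byte and
 * ends in "==", which is the per-block padding the decoder expects.
 */
#include <stdint.h>
#include <stdio.h>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

extern uint16_t crc16_ccitt(const uint8_t *data, size_t len);   /* earlier sketch */

static void encode_block(FILE *out, const uint8_t data[512])
{
    uint8_t  block[514];
    uint16_t crc = crc16_ccitt(data, 512);

    for (int i = 0; i < 512; i++)
        block[i] = data[i];
    block[512] = (uint8_t)(crc >> 8);
    block[513] = (uint8_t)(crc & 0xFF);

    /* Standard Base64: 3 bytes -> 4 characters, '=' for missing bytes. */
    for (int i = 0; i < 514; i += 3) {
        uint32_t v = (uint32_t)block[i] << 16;
        int      n = 1;
        if (i + 1 < 514) { v |= (uint32_t)block[i + 1] << 8; n = 2; }
        if (i + 2 < 514) { v |= (uint32_t)block[i + 2];      n = 3; }

        fputc(B64[(v >> 18) & 0x3F], out);
        fputc(B64[(v >> 12) & 0x3F], out);
        fputc(n > 1 ? B64[(v >> 6) & 0x3F] : '=', out);
        fputc(n > 2 ? B64[v & 0x3F]        : '=', out);
    }
}
```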

Performance of the writing process (with Base64 decoding)

Once the reliability of the code was established, a speed test was run to measure the write speed of the entire process. The following figure shows the results of the test:

Fig 3: Upload Speed Performance Data

The test results show that the data write operations have actually become slower, which is largely down to the Base64 decoder and perhaps the CRC check. A full-length bitstream takes about 51 seconds, which is much slower than required. Much more work needs to be put in to improve the performance.

SD card initialization issues

This feels like a topic beaten to death, but it had to be investigated out of necessity this week. During the testing of the Base64 decoder code, the current SD card stopped responding completely, both to the PIC and to the PC. The only working SD card left was the 8GB card that has so far failed to initialize on the PICTail. Therefore, using the hardware debugger on the PICKit3 programmer, Microchip's MDD initialization code was stepped through to find out exactly where it failed. Initial debugging showed that the code failed during the DiskMount process in the Media Initialize function. Delving deeper, it was discovered that the problem occurred when the SD card would not respond to the PIC's request to echo back the operating-conditions check.

After the card is detected, the power-on cycle has completed and the bus is activated, the PIC starts the initialization and identification process. During bus activation the SD card is sent a Go_Idle_State command, which resets whatever the card may be doing at the time. A CMD8 command is then issued by the PIC to verify whether the SD card can operate under the conditions provided by the PICTail. If the card complies, it should echo back a dummy argument sent with CMD8. This is where the card fails to initialize: it never responds and remains in the Idle State, eventually timing out the Media Initialize function and causing a disk mounting failure.
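
As a rough illustration of the exchange where the card gives up, the sketch below shows a generic SPI-mode CMD8 transaction. The command framing (0x48, argument 0x000001AA, CRC 0x87) and the R7 response layout follow the SD Physical Layer Simplified Specification; spi_xfer() is a placeholder, and this is a sketch of the idea, not the MDD library's actual routine.

```c
/*
 * Sketch: SPI-mode CMD8 (SEND_IF_COND).  A Ver 2.00+ card should echo
 * the supplied voltage range and check pattern in its R7 response; a
 * card that never answers is what produces the Media Initialize
 * time-out and the DiskMount failure seen in the debugger.
 */
#include <stdint.h>
#include <stdbool.h>

extern uint8_t spi_xfer(uint8_t out);    /* clock one byte out, return the byte read */

bool send_if_cond(void)
{
    const uint8_t cmd8[] = { 0x48, 0x00, 0x00, 0x01, 0xAA, 0x87 };
    uint8_t r1 = 0xFF, r7[4];

    for (unsigned i = 0; i < sizeof cmd8; i++)
        spi_xfer(cmd8[i]);

    /* Poll for the R1 byte (top bit clears when the card answers). */
    for (int tries = 0; tries < 8 && (r1 & 0x80); tries++)
        r1 = spi_xfer(0xFF);

    if (r1 & 0x80)
        return false;                    /* no response at all            */
    if (r1 & 0x04)
        return false;                    /* illegal command: Ver 1.x card */

    for (int i = 0; i < 4; i++)          /* R7: 4 more bytes follow R1    */
        r7[i] = spi_xfer(0xFF);

    /* Byte 2 carries the accepted voltage range, byte 3 echoes 0xAA. */
    return (r7[2] & 0x0F) == 0x01 && r7[3] == 0xAA;
}
```
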
This summary of the preliminary stages agrees with the SD card Physical Layer documentation. According to the SD Specifications Part 1: Physical Layer Simplified Specification (2013), the following is the card initialization and identification flow; it also states that a Ver 2.00 or later SD memory card may not respond to this command if there is a voltage mismatch.

Fig 4: SD Card Initialization flow

The debugging was continued by trying to force the SD card through the initialization process, changing the code so that the PIC does not expect an echo back from the SD card. The next problem occurs when the PIC checks the OCR register to load the operating conditions of the SD card: the response from the card tells the PIC that it is an SD standard-capacity card, whereas in reality it is an SDHC card. The program was allowed to continue, but it failed when trying to load the partition table because it could not correctly read the MBR of the card.
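
For reference, the capacity information the PIC is looking for sits in the OCR, and a generic SPI-mode sketch of reading it is shown below. The CMD58 framing and the OCR bit positions come from the SD specification; spi_xfer() is again a placeholder rather than the MDD routine, and the sketch is only meant to show why an SDHC card can be misreported.

```c
/*
 * Sketch: SPI-mode CMD58 (READ_OCR).  OCR bit 31 is the power-up status
 * and bit 30 is CCS (Card Capacity Status).  CCS is only valid once the
 * card has finished initializing (bit 31 set); read too early, an SDHC
 * card reports CCS = 0 and looks like a standard-capacity card.
 */
#include <stdint.h>
#include <stdbool.h>

extern uint8_t spi_xfer(uint8_t out);    /* clock one byte out, return the byte read */

bool card_is_high_capacity(void)
{
    const uint8_t cmd58[] = { 0x7A, 0x00, 0x00, 0x00, 0x00, 0xFF };
    uint8_t r1 = 0xFF, ocr[4];

    for (unsigned i = 0; i < sizeof cmd58; i++)
        spi_xfer(cmd58[i]);

    for (int tries = 0; tries < 8 && (r1 & 0x80); tries++)
        r1 = spi_xfer(0xFF);             /* wait for the R1 byte */

    for (int i = 0; i < 4; i++)          /* R3: OCR, most significant byte first */
        ocr[i] = spi_xfer(0xFF);

    /* High capacity only if the card is powered up AND CCS is set. */
    return (ocr[0] & 0x80) && (ocr[0] & 0x40);
}
```
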
Further debugging will be done to try to modify Microchip's MDD initialization process so that it can initialize the newer SD cards. More research is also needed into how the responses from newer SD cards differ from those of older ones.

Final Thoughts

Some good progress was made on the Base64 decoder front and with the information gathered about the bitstreams. The biggest downside was the damage to the SD card, which halted further testing and bug detection of the code. One of the bugs found in both versions of the Base64 decoder was resolved. It occurred because the decoder function treated the “=” character as End-Of-File, which caused errors when running the code. This was rectified by making the decoder ignore this case and carry on decoding, as the padding does not mark the End-Of-File when there are multiple blocks and the decoded padding data is discarded anyway.

