Time Response and Conclusions
Following the problems faced during the last weeks of the project, the attempts to hack the toy quad-copter were abandoned so that the project could be completed on time. It was quite clear that the envisioned system would not be finished within the time scope of the project, so the rest of the system was designed to accommodate all the work done on location sensing (via image segmentation), the swarm algorithm modelling and the output for the system. After successfully troubleshooting the NRF24L01+ modules, the system was tested for time response and the errors were dealt with so that the overall project could be evaluated as a proof of concept. The system was also tested with a custom-made quad-copter; however, these tests were less conclusive due to the design limitations of that quad-copter, as it could not be calibrated within the time frame.
Troubleshooting the RF Communication
For the last few months, the RF communication had been quite the enigma, choosing to operate properly at certain times and completely shutting down at others. Since the evaluation of the project hinged on the effectiveness of the RF communication, the datasheet for the NRF24L01+ module was scoured for inconsistencies with the current stack of code used to operate the RF module. After much in-depth analysis, the problem was found to be that the code put the module into the wrong operating mode following a transmission (the module was powered down) rather than the recommended one (go to Standby after transmission). This was duly corrected in the RF library used for the project, and the performance and connectivity of the modules improved greatly. In fact, since this correction the modules have rarely shown faults, and all of the modules previously thought to be damaged were recovered. It should still be noted, however, that tests conducted on the module showed that adding a capacitor across its VCC and GND pins greatly improves its range.
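The corrected flow can be sketched as follows. This is an illustrative Python sketch, not the project's actual Arduino RF library; the class and its methods are hypothetical stand-ins for the SPI register operations (the `PWR_UP` bit of the CONFIG register and the CE pin) on the real module.

```python
# Illustrative sketch of the transmit flow, NOT the project's actual RF code.
# The key change: after a transmission the module is left in Standby-I
# (CE low, PWR_UP still set) instead of being powered down, which avoids
# the ~1.5 ms power-up settling time before every subsequent transmit.

class Nrf24Sketch:
    def __init__(self):
        self.powered_up = True   # models the PWR_UP bit in the CONFIG register
        self.ce_high = False     # models the CE pin level

    def transmit(self, payload):
        # ... payload would be written to the TX FIFO over SPI (omitted) ...
        self.ce_high = True      # pulse CE to start the transmission
        self.ce_high = False

    def after_tx_old(self):
        # Old behaviour: Power Down after every transmit.
        self.powered_up = False  # forces a slow power-up before the next send

    def after_tx_corrected(self):
        # Corrected behaviour: stay in Standby-I (CE low, PWR_UP set).
        self.ce_high = False
        # powered_up stays True, so the next transmit can start immediately


radio = Nrf24Sketch()
radio.transmit(b"\x01\x02")
radio.after_tx_corrected()
assert radio.powered_up  # module remains ready for the next packet
```

The sketch only models the state flags, but it captures why the correction mattered: with the old flow, every transmission paid the power-up settling cost again.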
Assembling the Overall System
So far during the project, the different components had been developed separately while waiting on a suitable solution to the quad-copter problem. However, for the overall system to be operational, a suitable interface was needed to give easy control over the different parts of the operation and to ensure that the specific sequence of the event-based program ran smoothly. Therefore, a simple Matlab GUI was designed to incorporate the location sensing, the swarm modelling and the USART communication with the Arduino.
The GUI was designed to allow the user to take the reference image needed for the segmentation process and only then start the swarm system. One axis was used to display the output from the webcam as the program runs, and another axis was used to show a two-dimensional plot of the positions of the bots in the system. In the overall system, once the positions of the quad-copters are determined via the segmentation process, the x-y coordinates are fed into the swarm model (uniform dispersion for this application of the project), which outputs a single-digit number for each of the 5 bots in the system corresponding to the movement that bot should make (for example, '1' means move forwards, '2' means move left, etc.). The program then sends this data via the USART to the Arduino: 2 bytes for each copter in the system, the first byte specifying which copter, the second byte specifying which movement to execute. Although this was a very simplistic implementation of the entire system, it allowed some data to be acquired from the setup that could be used to estimate the time responses and the errors within the system.
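The 2-byte-per-copter stream can be sketched as below. The only command codes taken from the text are '1' (forwards) and '2' (left); the hover code and the function name are illustrative assumptions, and the real packing was done on the Matlab side rather than in Python.

```python
# Sketch of the 2-byte-per-copter USART frame described above.
# Codes 1 (forwards) and 2 (left) come from the text; 0 ("hover") is
# a placeholder added here purely for illustration.

def build_frame(commands):
    """Pack {copter_id: command_code} into the byte stream sent to the
    Arduino: for each copter, one byte identifying the copter followed
    by one byte giving the movement command."""
    stream = bytearray()
    for copter_id, command in sorted(commands.items()):
        stream.append(copter_id)   # first byte: which copter
        stream.append(command)     # second byte: which movement
    return bytes(stream)

# Five bots: bot 1 moves forwards (1), bot 2 moves left (2), the rest hover (0).
frame = build_frame({1: 1, 2: 2, 3: 0, 4: 0, 5: 0})
assert len(frame) == 10  # 2 bytes per copter, 5 copters
```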
Measuring Time Response
Since the project was facing so many issues from all ends, the most significant and useful data that could be gathered from the available setup was the overall time response of the system. This was done by running the system on the computer side and simulating the quad-copter with a separate Arduino, using LEDs to indicate whether the state changes occurred. It was a crude methodology, intended to find out whether the delays in the system were too long for the overall system to work and to try to pinpoint the bottleneck. The stimulus was provided by flying the quad-copter in front of the webcam. A garden-variety stopwatch was used to time from the moment the quad-copter was in position until the system's reaction was shown by the LEDs on the quad-copter stand-in.
The system was tested for time response over 30 iterations with an increasing number of bots simulated in the system, using more toys as stimuli. In each case, the simulated quad-copter was set as the last bot in the system (i.e. with 5 bots in the system, the simulated quad-copter would be bot no. 5). This ensured that the time delay measured represented the maximum delay that can occur in the overall system. The graph below shows the time response of the system over the 30 iterations for an increasing number of bots.
The overall time response is quite sporadic: at times the response of the system is quite fast, but at other times the delays are quite unreasonable. The correlation between the number of bots and the time delays is not very concrete until the average of the delays over the 30 iterations is taken into perspective, where it can be clearly seen that as the number of bots in the system increases, the system response gets slower.
To find the bottleneck, the layers of delay within the system must be peeled away one by one.
The first delay that can be directly measured is the USART communication delay. The initial setting for the simulation was a baud rate of 9600. With standard framing, each byte on the wire carries a start bit and a stop bit, so the 2 bytes per bot amount to 20 bits, or 100 bits in total for the 5 bots, giving a calculated delay of roughly 10.4 ms. Compared with the delays at hand this is negligible, and it is surely not the cause of the overall increase in delay for the system.
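The arithmetic is simple enough to spell out. This assumes standard 8N1 framing (one start bit, eight data bits, one stop bit per byte), which is the usual default for Arduino serial links.

```python
# USART delay estimate, assuming standard 8N1 framing:
# 1 start + 8 data + 1 stop = 10 bits on the wire per byte.
BITS_PER_BYTE_ON_WIRE = 10
BAUD = 9600
BOTS = 5
BYTES_PER_BOT = 2

total_bits = BOTS * BYTES_PER_BOT * BITS_PER_BYTE_ON_WIRE
delay_ms = total_bits / BAUD * 1000
print(f"{total_bits} bits -> {delay_ms:.2f} ms")  # 100 bits -> 10.42 ms
```

Even at this conservative estimate, the serial link contributes an order of magnitude less delay than the RF stage discussed next.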
RF Communication Delays
The second part that needs to be analyzed is the RF communication itself. Here, the only measurable delay is the transmission delay that must be applied in the code for the module to operate optimally. Throughout the testing of the NRF24L01+ this delay was kept at 10 ms, so for the sake of consistency it was kept the same here; indeed, reducing the delay greatly jeopardizes the connectivity of the RF modules. A further issue was that the connectivity of the RF modules was poor over a distance of about 30 ft (the distance at which the response was measured). To overcome this, the same data was transmitted 10 times in a loop, to ensure that there was no data loss and the receiver could always receive data from the transmitter. However, this adds greatly to the delay, as the transmission delay must be maintained on every repetition to ensure connectivity. With a 10 ms delay looped 10 times, and processing delays assumed negligible at this scope, the delay comes to 100 ms per bot. This is significant, since adding more bots multiplies this delay: with 5 bots in the system, the 5th bot would sometimes receive its data after a delay of 500 ms! This argument holds up against the graph above, as the delays with a full complement of bots in the system never dip below 500 ms, showing that regardless of the other components this delay is currently unavoidable. It also suggests that a better RF setup would reduce this delay and could produce a more efficient system.
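The worst-case figure above can be sketched as follows. The function name is an illustrative assumption; the scheme itself (bots serviced in order, each frame repeated 10 times with a 10 ms gap) is as described in the text.

```python
# Worst-case RF latency under the scheme above: each bot's command is
# retransmitted 10 times with a 10 ms inter-transmission delay, and bots
# are serviced in order, so the Nth bot waits N * 10 * 10 ms.

TX_DELAY_MS = 10   # delay kept between transmissions for reliable operation
REPEATS = 10       # each frame re-sent 10 times to mask packet loss

def worst_case_latency_ms(bot_index):
    """Delay before bot `bot_index` (1-based) receives its command."""
    return bot_index * REPEATS * TX_DELAY_MS

for bot in range(1, 6):
    print(bot, worst_case_latency_ms(bot))
# Bot 5 comes to 500 ms, matching the floor seen in the measured response graph.
```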
Webcam Image Capture and Segmentation Delays
The last delay to be discussed is the delay within the image-capture process from the webcam and the segmentation process that follows. Due to time constraints, the different components of the segmentation could not be individually timed. It is therefore unknown whether the segmentation delay is down to the filtering process or is simply hit or miss in terms of how fast the image can be processed. The graph certainly shows inconsistencies on that front: at times the image capture and processing was much faster, while at other times it was slow and cumbersome. However, considering that this is the only other delay in the system that takes a significant chunk of time away from the further processes, it is fair to deduce that this aspect is currently the bottleneck in the system. Further analysis and work is needed to confirm this deduction, and also to determine whether there are any unnecessary processes slowing down the segmentation or whether it is simply slow.
Now that the project has reached its conclusion, some recommendations can be made about how to execute it better in the future and what could have been done to produce a more satisfying set of results:
- Hack the toys! Even a sloppy hack of the analog sticks would have sufficed. The supervisor pointed this out a little too late, but in reality it would have made everything much easier and helped gather better results than the ones obtained above.
- Improve the RF communication delays and connectivity. Work on limiting the number of retries so that the RF communication does not slow the system down too much.
- Check the webcam's capabilities and the delays within the filters in the segmentation process. The outcome of the entire system was greatly hindered by the slow responsiveness of the segmentation when paired with the other processes in the system.
- Running the swarm model in the manner described above worked quite well, but this implementation was in 2D. A 3D model would be of much greater use to the system dynamics and to the implementation of more complex swarm algorithms.
It is fair to say that I muddled up the execution of the project immensely. In my zeal, I made the simple complicated and chose the most unnecessary solutions, ones that my objective did not demand. The results obtained were not satisfactory enough, and that is quite clearly evident. However, some of the important aspects of the desired system were touched upon, and there was great promise in the project at certain times. The only way for me is forward, and even though my FYP is over, I will keep working at it; that is exactly what I plan to do. It was a tough year, but the work is not over. During this FYP I barely scratched the surface of the potential of the system being developed. Perhaps a bit more time on my end would allow me to overcome the problems I have been facing and produce better, more significant results.