Unlike the past few weeks, progress this week was a bit bumpy and slow. Most of the work built on, and continued, the research done last week. While some time was spent writing and implementing code, the biggest chunk of the week went into research and trying to understand the problems at hand.
The first test performed on the data samples measured how good the rhythm of a performance is by tracking beats and tempo, checking for consistency in playing. Measuring the standard deviation of the tempo relative to its average gives an indication of how steady the rhythm of a musical performance is. This took us a step closer to developing the machine learning model. However, while there is a somewhat visible trend in the data, it is not very prominent, which means this feature cannot be used as a direct factor in the detection but rather as an additional measure of likelihood. The distribution of the data can be seen below.
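The consistency measure described above can be sketched roughly as follows. This is only an illustration, not the project's actual code: it assumes the beat timestamps have already been extracted by some beat tracker (for example, a library such as librosa provides one), and it reports the standard deviation of the instantaneous tempo divided by the average tempo, so a lower value suggests a steadier rhythm.

```python
from statistics import mean, stdev

def tempo_consistency(beat_times):
    """Given beat timestamps in seconds, return the average tempo (BPM),
    the standard deviation of the instantaneous tempo, and their ratio."""
    # Instantaneous tempo between consecutive beats, in beats per minute.
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    tempos = [60.0 / iv for iv in intervals]
    avg = mean(tempos)
    sd = stdev(tempos)
    # A lower sd/avg ratio indicates more consistent playing.
    return avg, sd, sd / avg

# A perfectly steady 120 BPM performance (a beat every 0.5 s)...
steady = [i * 0.5 for i in range(9)]
# ...versus one with slight timing jitter (made-up example values).
jittery = [0.0, 0.52, 0.98, 1.55, 2.01, 2.49, 3.06, 3.50, 4.02]
```

For the steady sequence the ratio is zero, while the jittery one yields a small positive value; the interesting question, as noted above, is whether real performances separate cleanly on this measure.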
As for the pitch-detection part of the project, there wasn’t much progress, mostly due to errors and false assumptions on my end. Initially, pitch was measured by comparing the frequency of the most prominent onsets in each track to the frequencies of ideal notes: the distance from an onset’s frequency to the nearest ideal note is divided by the distance between the two ideal notes surrounding it. While this seemed like a good idea at the time, it was brought to my attention that the intent behind each played note was not taken into consideration, which resulted in a lot of noise. On top of that, in many cases the detected onsets were external noises, which hurt the accuracy of the detection immensely.
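The per-note measurement described above can be sketched like this. It is a simplified illustration under two assumptions not stated in the post: the ideal notes are the equal-tempered scale, and the tuning reference is A4 = 440 Hz. Working in the log-frequency (MIDI note) domain, the fractional part of the note number is exactly the distance to the nearest ideal note normalized by the spacing between its two neighbors.

```python
import math

A4_HZ = 440.0  # assumed tuning reference

def pitch_deviation(freq_hz):
    """Return (nearest MIDI note number, fractional deviation in semitones).
    The deviation is the distance to the closest equal-tempered note,
    normalized so that +/-0.5 means exactly halfway to a neighboring note."""
    midi = 69 + 12 * math.log2(freq_hz / A4_HZ)
    nearest = round(midi)
    return nearest, midi - nearest

note, dev = pitch_deviation(440.0)  # A4 exactly in tune: deviation 0.0
note, dev = pitch_deviation(450.0)  # a sharp A4: positive deviation
```

Note that this sketch reproduces the flaw discussed above: it always snaps to the nearest ideal note, so it cannot tell a badly missed note from a deliberately played neighbor, and a spurious onset from background noise still gets scored as if it were a note.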
Even though this week’s work was not very fruitful, it is still early days for the project, and hopefully there is more to come.