This morning, I figured out why the graphs I created with my code looked odd. Because there were so many data points (around 3,000) spread over a large interval (0 s to ~1,400 s), matplotlib had to zoom out considerably to fit all of the data. By adjusting the minimum and maximum of the x-axis, I was able to create graphs that looked more like a typical angular velocity versus time graph.

After lunch, I added labels and gridlines to the graphs, and I also learnt about a Python package called Plotly, which will help me create box-and-whisker plots and Gaussian curves to represent the distribution of the data. We then had some of the other interns help us collect eye tracking data, since we already knew many details about the eye movements being tested and were therefore biased in our own data collection. However, after the first sample of eye tracking data was collected, we were unable to calibrate the eye tracking headset and record more data. Kamran then showed us how to use the VR headset, and many of us tried it out.
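As a rough sketch of the axis-limit and labeling steps described above (the data here is a made-up stand-in for the real angular velocity recording, and the window 200 s to 260 s is just an example choice):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical stand-in for the recorded data:
# ~3,000 samples spanning 0 s to ~1,400 s.
t = np.linspace(0, 1400, 3000)
omega = np.sin(2 * np.pi * t / 60) * np.exp(-t / 700)

fig, ax = plt.subplots()
ax.plot(t, omega, linewidth=0.5)

# Zoom in on a shorter time window so the shape of the
# angular-velocity trace is visible instead of a squashed line.
ax.set_xlim(200, 260)

# Labels and gridlines.
ax.set_xlabel("Time (s)")
ax.set_ylabel("Angular velocity (deg/s)")
ax.grid(True)

fig.savefig("angular_velocity.png")
```

Changing only the arguments to `set_xlim` lets you inspect different slices of the recording without touching the data itself.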
Here is an image of one of the graphs I created: