Wednesday, August 16, 2017

Day 30—8/16/17

Today was the last work day of the internship, since tomorrow we will be making our final presentations.  I worked on labelling data in the morning, and after lunch a few of us practiced our presentations in the auditorium.  I spent the rest of the day practicing and reviewing my presentation and managed to get it under 10 minutes.  This internship has been a very enjoyable experience for me, and I have learnt a lot about eye tracking.

Tuesday, August 15, 2017

Day 29—8/15/17

Today, instead of having the morning meeting, we went to the auditorium and practiced all of our presentations from 9 to 12.  I found it very helpful to practice my presentation in the same room and with the same computer that I will be using for the actual presentation on Thursday.  My presentation took about 11 minutes, so I will practice it a few more times to try to get it down to 10 minutes.  After lunch, I spent the rest of the day labelling data.  I think I labelled over 300 separate eye movements.

Monday, August 14, 2017

Day 28—8/14/17

Today I mentally rehearsed what I would say for my presentation and edited some of my slides.  We were also able to run the reinforcement learning sample code and find the frame-by-frame images it produced, which showed the machine learning output for each action taken in the Pong-like game.  I had to leave early today since it was the first day of preseason sports.

Friday, August 11, 2017

Day 27—8/11/17

Today I fixed the code and was able to create layered graphs comparing the filtered data and unfiltered data.  I also took videos of the data labelling software and added them to my presentation.  Then in the afternoon I was able to see how the PowerPoint I created looked on the projector in the auditorium where we will be presenting on Thursday.  After this, we looked at sample reinforcement learning code that trained a computer to play a simple Pong-like game.
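A layered graph like this can be made by drawing the unfiltered and filtered angular velocity on the same matplotlib axes.  Here is a minimal sketch with made-up data (not my actual code; the signal, window size, and file name are just placeholders):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without needing a display
import matplotlib.pyplot as plt

# Made-up example data: a noisy angular velocity signal and a smoothed version
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 500)                      # time in seconds
raw = 50 + 30 * np.sin(2 * np.pi * t) + rng.normal(0, 20, t.size)
smoothed = np.convolve(raw, np.ones(25) / 25, mode="same")  # simple mean filter

fig, ax = plt.subplots()
ax.plot(t, raw, alpha=0.4, label="unfiltered")   # raw data drawn faintly underneath
ax.plot(t, smoothed, label="filtered")           # filtered curve layered on top
ax.set_xlabel("time (s)")
ax.set_ylabel("angular velocity (deg/s)")
ax.legend()
fig.savefig("layered_velocity.png")
```

Plotting both curves on one set of axes, with the raw data partly transparent, is what makes the smoothing effect easy to see at a glance.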

Here is one of the layered graphs I created:

Thursday, August 10, 2017

Day 26—8/10/17

Today I continued to work on my PowerPoint, and in the afternoon I presented it at the MVRL meeting.  The rest of the afternoon was spent adding information and editing my presentation based on the advice I got at the meeting.  Almost all of my presentation is finished, except for the graphs that I want to add.  Since the graphs involve layering the filtered data on top of the unfiltered data, I had to go into older versions of my code to find the functions that created the graphs of the unfiltered data.  When I tried to create the layered graphs, I kept getting an error when the angular velocity of the unfiltered data was calculated.  I will work on fixing that error tomorrow and hopefully will have a final version of my presentation by the end of the day.
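The angular velocity step that was failing computes, for each pair of consecutive unit gaze vectors, the angle between them divided by the sampling interval.  A minimal version of that calculation (my own sketch, not the lab's code) looks like this:

```python
import numpy as np

def angular_velocity(gaze, dt):
    """Angular velocity (deg/s) from an (N, 3) array of unit gaze vectors
    sampled every dt seconds."""
    dots = np.sum(gaze[:-1] * gaze[1:], axis=1)
    # clip guards against dot products like 1.0000001 from floating-point
    # error, which would make arccos return NaN
    angles = np.degrees(np.arccos(np.clip(dots, -1.0, 1.0)))
    return angles / dt

# A 90-degree rotation over one 0.01 s sample gives 9000 deg/s
gaze = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(angular_velocity(gaze, 0.01))  # [9000.]
```

The clip call is one easy thing to forget; without it, vectors that are almost identical can produce NaN values, which is exactly the kind of error that only shows up on some data sets.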

Wednesday, August 9, 2017

Day 25—8/9/17

Today I worked on my presentation and edited eye tracking gaze videos for it.  I was able to finish a draft of my PowerPoint with most of the content included, along with images for almost every slide.  I also edited some eye tracking videos so that they can be used to explain what saccades and fixations are.
While working on my presentation, I found a Google Q&A that I think does a good job of summarizing the basics of machine learning (it mainly describes supervised learning) in simple terms.  
Here is a link to that article:

Tuesday, August 8, 2017

Day 24—8/8/17

In the morning, I took notes on videos about reinforcement learning and the example program that we will work on.  After lunch I continued to take notes on the videos and also learnt about the relationship between the period of a function and its frequency (frequency = 1/period).  Then I added a mean filter to the angular velocity code.  I spent the rest of the day working on my presentation, since on Thursday I will be presenting a draft of my PowerPoint at the Multidisciplinary Vision Research Lab (MVRL) meeting.  Also in the afternoon, I had the chance to go and see what Ronny and Henry were doing in the optics/laser-based manufacturing lab.  It was very interesting to see their work.
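A mean filter just replaces each sample with the average of a small window around it.  A sketch of how it might be applied to the angular velocity values, assuming they are in a NumPy array (an illustration, not the actual lab code):

```python
import numpy as np

def mean_filter(values, window=5):
    """Smooth a 1-D signal by averaging over a sliding window."""
    kernel = np.ones(window) / window           # uniform weights that sum to 1
    return np.convolve(values, kernel, mode="same")

velocity = np.array([10.0, 10.0, 100.0, 10.0, 10.0])  # one noisy spike
print(mean_filter(velocity, window=5))  # [24. 26. 28. 26. 24.]
```

The spike at 100 gets averaged down toward its neighbors, which is the same flattening effect the filter has on noise in the angular velocity graph.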

Here is an image of the angular velocity graph that is cleaned and filtered with the Gaussian, mean, and biological filters:

Monday, August 7, 2017

Day 23—8/7/17

In the morning today, we finished watching and analyzing the BeGaze eye tracking videos.  One interesting observation that we made was that when we navigate an environment, we tend to scan an area from right to left instead of left to right.  After we analyzed the videos, I worked on my presentation and added more content and images to it.  Then I finished the code for the Gaussian filter and tried many different values of sigma (in a Gaussian filter, sigma controls the width of the smoothing kernel, so it determines how strongly outliers are suppressed and the data is smoothed).  I found that a sigma value of 0.5 smoothed the angular velocity graph the best.
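A Gaussian filter works like the mean filter but weights the window with a bell curve, so nearby samples count more than distant ones; a larger sigma widens the curve and smooths more strongly.  A hand-rolled sketch (my own illustration, assuming evenly spaced samples):

```python
import numpy as np

def gaussian_filter(values, sigma):
    """Smooth a 1-D signal by convolving with a normalized Gaussian kernel."""
    radius = int(3 * sigma) + 1                 # kernel covers roughly +/- 3 sigma
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))     # bell-curve weights
    kernel /= kernel.sum()                      # normalize so weights sum to 1
    return np.convolve(values, kernel, mode="same")

signal = np.array([0.0, 0.0, 10.0, 0.0, 0.0])   # a single spike
print(gaussian_filter(signal, sigma=0.5))        # spike spread over its neighbors
```

With a small sigma like 0.5, most of the weight stays on the center sample, so the data is smoothed gently; a larger sigma would flatten the spike much more aggressively.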

Here are graphs that are cleaned with the Gaussian filter (the first image on top has a sigma value of 0.2 and the image below has a sigma value of 0.5):

Friday, August 4, 2017

Day 22—8/4/17

Today we went over more outlines at the morning meeting, and Matt gave me the idea of using eye tracking videos as a way to help explain various eye movements.  Then I went to my lab and worked on programming the Gaussian filter to smooth the data.  This took longer than I expected because, for some reason, the part of my code that normalized the gaze vectors (turned them into unit vectors) was not working.
In between my work on the code, I went to the undergraduate research symposium.  I attended a talk on the LFEPR project, which is similar to what Anjana is doing.  I also saw many posters, including one on the "Computational Power of Quantum Turing Machines", one on "Developing Instrumentation and Analysis tools for Single-Walled Carbon Nanotubes", and one on "Laser Light Scattering of Ternary Mixtures to Derive the Gibbs Free Energy".  I found the posters and talks very interesting, and they gave me ideas on how to describe and present what I have been doing here.  After seeing the posters, I went back to the lab and discovered that quitting PyCharm and retyping the unit vector function made the code work again.  Once I fixed my code, I helped write down the observations Titus and I made while watching the eye tracking data we had collected earlier.  I had to leave early today since I had a college interview.
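The normalization step that was misbehaving just divides each gaze vector by its length so that every vector has length one.  A minimal version, assuming the vectors are rows of a NumPy array (an illustration, not the lab's code):

```python
import numpy as np

def normalize(vectors):
    """Scale each row of an (N, 3) array to unit length."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)  # length of each row
    return vectors / norms                                  # broadcast division

gaze = np.array([[3.0, 0.0, 4.0],
                 [0.0, 2.0, 0.0]])
unit = normalize(gaze)
print(unit)                           # rows [0.6, 0, 0.8] and [0, 1, 0]
print(np.linalg.norm(unit, axis=1))   # [1. 1.]
```

Keeping `keepdims=True` is the detail that makes the division broadcast correctly across each row; dropping it is a common way for this function to silently break.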

Here is an image that illustrates the smoothing effect of different filters on raw data (and what the angular velocity versus time graphs I have been creating should look like after the raw data is passed through the various filters):

Thursday, August 3, 2017

Day 21—8/3/17

In the morning today, Kamran taught us a little bit about the different types of machine learning.  I found it very interesting to learn the general ideas behind supervised learning, unsupervised learning, and reinforcement learning (the three main types of machine learning).  Below I have included a graphic that I found useful in describing the different types of machine learning.  Then I read up on and watched a few videos describing Gaussian filters and some of the math behind them.  For the rest of the day, I worked on my presentation and outline.

Wednesday, August 2, 2017

Day 20—8/2/17

Today I continued work on filtering the data by converting the Cartesian (x, y, z) vectors to spherical coordinates (radius, azimuth, elevation).  Then, since I had already converted the vectors to unit vectors and thus knew that their radius was one, I created graphs of the azimuth (in degrees) versus time and the elevation (in degrees) versus time.  Both of these graphs were noisy, which meant the vectors had to be filtered more, because differentiating the data to calculate the angular velocity amplifies the noise.  Tomorrow, I will be working on creating functions that filter the data through a Gaussian filter and a mean filter.
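For unit vectors (radius = 1), the conversion to azimuth and elevation reduces to an arctangent and an arcsine.  Here is a sketch of that conversion (my own illustration; the axis convention of x = right, y = up, z = forward is an assumption, since different eye trackers define their axes differently):

```python
import numpy as np

def to_azimuth_elevation(vectors):
    """Convert (N, 3) unit gaze vectors to azimuth and elevation in degrees.
    Assumes x = right, y = up, z = forward (a common but not universal choice)."""
    x, y, z = vectors[:, 0], vectors[:, 1], vectors[:, 2]
    azimuth = np.degrees(np.arctan2(x, z))   # left/right angle from straight ahead
    elevation = np.degrees(np.arcsin(y))     # up/down angle; valid for unit vectors
    return azimuth, elevation

gaze = np.array([[0.0, 0.0, 1.0],    # looking straight ahead
                 [1.0, 0.0, 0.0]])   # looking 90 degrees to the right
az, el = to_azimuth_elevation(gaze)
print(az, el)  # [ 0. 90.] [0. 0.]
```

Because the radius is always one for unit vectors, the azimuth and elevation graphs carry all of the gaze direction information, which is why those were the two graphs worth plotting.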
I also worked on labelling some eye tracking data using software created in MATLAB.  To label the data, I would watch the eye tracking video and, whenever I saw a blink, fixation, or saccade, select the respective option in the sidebar and then highlight the time interval of that gaze movement on the graph next to the video.

Here is an image of the data labelling software I used.  The gaze point is marked with the red cross; based on the movement of the gaze point and on whether the person's head was also moving, I classified the various types of eye movements.

Tuesday, August 1, 2017

Day 19—8/1/17

Today I created two more filters to clean the graphs.  The first filter removed all the angular velocity values greater than 900 degrees per second (since it is not biologically possible for human gaze velocities to exceed that).  Then I worked on coding a filter that would interpolate values for the right and left eye vectors when they went outside the bounds of the screen capture image.  After this, I edited my code so that it would be more efficient and run faster.  Today I also learnt about the data labelling that I will be doing.
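The biological filter can be sketched as masking out samples whose angular velocity exceeds the physiological limit and then filling the gaps by linear interpolation from the surrounding valid samples.  This is a minimal illustration with a hypothetical velocity array, not my actual code:

```python
import numpy as np

MAX_VELOCITY = 900.0  # deg/s; faster gaze movements are not biologically possible

def biological_filter(velocity):
    """Replace impossible velocity samples with linearly interpolated values."""
    velocity = velocity.astype(float).copy()
    bad = velocity > MAX_VELOCITY
    good_idx = np.flatnonzero(~bad)
    # np.interp fills each bad sample from its nearest valid neighbors
    velocity[bad] = np.interp(np.flatnonzero(bad), good_idx, velocity[good_idx])
    return velocity

v = np.array([100.0, 200.0, 5000.0, 300.0])  # one impossible spike
print(biological_filter(v))  # [100. 200. 250. 300.]
```

The same interpolation idea applies to the off-screen eye vectors: mark the out-of-bounds samples as invalid, then fill them in from the valid samples on either side.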