Wednesday, August 16, 2017

Day 30—8/16/17

Today was the last work day of the internship, since tomorrow we will be making our final presentations.  I worked on labelling data in the morning, and after lunch a few of us practiced our presentations in the auditorium.  I spent the rest of the day practicing and reviewing my presentation and managed to get it under 10 minutes.  This internship has been a very enjoyable experience for me, and I have learnt a lot about eye tracking.

Tuesday, August 15, 2017

Day 29—8/15/17

Today, instead of having the morning meeting, we went to the auditorium and practiced all of our presentations from 9 to 12.  I found it very helpful to practice my presentation in the room and on the same computer that I will be using for the actual presentation on Thursday.  My presentation took about 11 minutes, so I will practice it a few more times to get it down to 10.  After lunch, I spent the rest of the day labelling data.  I think I labelled over 300 separate eye movements.

Monday, August 14, 2017

Day 28—8/14/17

Today I practiced what I would say for my presentation in my head and edited some of my slides.  We were also able to run the reinforcement learning sample code and find the frame-by-frame images showing the model's output for each action taken in the Pong-like game.  I had to leave early today since it was the first day of preseason sports.

Friday, August 11, 2017

Day 27—8/11/17

Today I fixed the code and created layered graphs comparing the filtered and unfiltered data.  I also took videos of the data labelling software and added them to my presentation.  In the afternoon, I saw how the PowerPoint I created looked on the projector in the auditorium where we will be presenting on Thursday.  After this, we looked at sample reinforcement learning code that trained a computer to play a simple Pong-like game.
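Here is a rough sketch of how a layered graph like this can be put together with matplotlib (the data, filter settings, and names below are made-up placeholders, not my actual code):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter1d

# Stand-in data: timestamps plus a noisy angular velocity trace.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
velocity_raw = np.abs(300 * np.sin(t)) + rng.normal(0, 40, t.size)

# Smooth the trace (here with a Gaussian filter) to get the filtered version.
velocity_filtered = gaussian_filter1d(velocity_raw, sigma=5)

# Layer the filtered curve on top of the unfiltered one.
plt.plot(t, velocity_raw, color="lightgray", label="unfiltered")
plt.plot(t, velocity_filtered, color="tab:blue", label="filtered")
plt.xlabel("Time (s)")
plt.ylabel("Angular velocity (deg/s)")
plt.legend()
plt.show()
```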

Here is one of the layered graphs I created:

Thursday, August 10, 2017

Day 26—8/10/17

Today I continued to work on my PowerPoint, and in the afternoon I presented it at the MVRL meeting.  The rest of the afternoon was spent adding information and editing my presentation based on the advice I got at the meeting.  Almost all of my presentation is finished, except for the graphs that I want to add.  Since the graphs involve layering the filtered data on top of the unfiltered data, I had to go into the older versions of my code to find the functions that created the graphs of the unfiltered data.  When I tried to create the layered graphs, I kept getting an error when the angular velocity of the unfiltered data was calculated.  I will work on fixing that error tomorrow and hopefully will have a final version of my presentation by the end of the day.

Wednesday, August 9, 2017

Day 25—8/9/17

Today I worked on my presentation and edited eye tracking gaze videos for it.  I was able to finish a draft of my PowerPoint with most of the content included and images for almost every slide.  I also edited some eye tracking videos so that they can be used to explain what saccades and fixations are.
While working on my presentation, I found a Google Q&A that I think does a good job of summarizing the basics of machine learning (it mainly describes supervised learning) in simple terms.  
Here is a link to that article:


Tuesday, August 8, 2017

Day 24—8/8/17

In the morning, I took notes on videos about reinforcement learning and the example program that we will work on.  After lunch I continued taking notes on the videos and also learnt about the relationship between the period of a function and its frequency (frequency = 1/period).  Then I added a mean filter to the angular velocity code.  I spent the rest of the day working on my presentation, since on Thursday I will be presenting a draft of my PowerPoint at the Multidisciplinary Vision Research Lab (MVRL) meeting.  In the afternoon, I also had the chance to see what Ronny and Henry were doing in the optics/laser-based manufacturing lab, which was very interesting.
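Here is a rough sketch of what a mean (moving average) filter like the one I added could look like in Python (the window size and variable names are placeholders, not my actual code):

```python
import numpy as np

def mean_filter(signal, window=5):
    """Smooth a 1-D signal by replacing each sample with the
    average of the `window` samples around it."""
    kernel = np.ones(window) / window
    # mode="same" keeps the output the same length as the input.
    return np.convolve(signal, kernel, mode="same")

# Example: spikes in a noisy velocity trace get pulled toward their neighbors.
velocity = np.array([100.0, 420.0, 110.0, 95.0, 400.0, 105.0])
print(mean_filter(velocity, window=3))
```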

Here is an image of the angular velocity graph that is cleaned and filtered with the Gaussian, mean, and biological filters:

Monday, August 7, 2017

Day 23—8/7/17

In the morning today, we finished watching and analyzing the BeGaze eye tracking videos.  One interesting observation we made was that when we navigate an environment, we tend to scan an area from right to left instead of left to right.  After we analyzed the videos, I worked on my presentation and added more content and images to it.  Then I finished the code for the Gaussian filter and tried many different values for sigma (in a Gaussian filter, sigma sets the width of the smoothing kernel, so it controls how strongly outliers are suppressed and the data is smoothed).  I found that a sigma value of 0.5 smoothed the angular velocity graph the best.
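Here is a sketch of the same kind of Gaussian smoothing using SciPy (my own code is structured differently, and the data here is made up):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Stand-in angular velocity samples (deg/s).
velocity = np.array([120.0, 460.0, 130.0, 110.0, 440.0, 125.0])

# sigma (in samples) sets the kernel width: larger sigma = heavier smoothing.
print(gaussian_filter1d(velocity, sigma=0.2))  # barely smooths
print(gaussian_filter1d(velocity, sigma=0.5))  # the value that worked best for me
```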

Here are graphs that are cleaned with the Gaussian filter (the first image on top has a sigma value of 0.2 and the image below has a sigma value of 0.5):

Friday, August 4, 2017

Day 22—8/4/17

Today we went over more outlines at the morning meeting and Matt gave me the idea of using eye tracking videos as a way to help explain various eye movements.  Then I went to my lab and worked on programming the Gaussian filter to smooth the data.  This took longer than I expected because for some reason the part of my code that normalized the gaze vectors (turned them into unit vectors) was not working.
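For reference, here is a sketch of what that normalization step should do (placeholder names, not my actual function):

```python
import numpy as np

def normalize(vectors):
    """Turn an (N, 3) array of gaze vectors into unit vectors."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # avoid dividing by zero on degenerate samples
    return vectors / norms

gaze = np.array([[1.0, 2.0, 2.0], [0.0, 3.0, 4.0]])
print(normalize(gaze))  # each row now has length 1
```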
In between my work on the code, I went to the undergraduate research symposium.  I went to a talk on the LFEPR project that is similar to what Anjana is doing.  I also saw many posters, including one on the "Computational Power of Quantum Turing Machines", one on "Developing Instrumentation and Analysis tools for Single-Walled Carbon Nanotubes", and one on "Laser Light Scattering of Ternary Mixtures to Derive the Gibbs Free Energy".  I found the posters and talks very interesting, and they gave me ideas on how to describe and present what I have been doing here.  After seeing the posters, I went back to the lab and discovered that quitting out of the PyCharm software and retyping the unit vector function made the code work again.  Once I fixed my code, I helped write down observations Titus and I made while watching the eye tracking data we had collected earlier.  I had to leave early today since I had a college interview.

Here is an image that illustrates the smoothing effect of different filters on raw data (and what my angular velocity versus time graphs should look like after the raw data passes through the various filters):


https://www.mathworks.com/help/signal/examples/practical-introduction-to-digital-filtering.html

Thursday, August 3, 2017

Day 21—8/3/17

In the morning today, Kamran taught us a little bit about the different types of machine learning.  I found it very interesting to learn about the general ideas behind supervised learning, unsupervised learning, and reinforcement learning (the three main types of machine learning).  Below I have included a graphic that I found useful in describing the different types of machine learning.  Then I read up on and watched a few videos describing Gaussian filters and some of the math behind them.  For the rest of the day, I worked on my presentation and outline.


https://upxacademy.com/introduction-machine-learning/

Wednesday, August 2, 2017

Day 20—8/2/17

Today I continued work on filtering the data by converting the Cartesian (x, y, z) vectors to spherical coordinates (radius, azimuth, elevation).  Then, since I had already converted the vectors to unit vectors and thus knew that their radius was one, I created graphs of the azimuth (in degrees) versus time and the elevation (in degrees) versus time.  Both of these graphs were noisy, which meant the vectors had to be filtered more: once you calculate the angular velocity from the vectors, the noise increases because you are differentiating.  Tomorrow, I will be working on creating functions that filter the data through a Gaussian filter and a mean filter.
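Here is a sketch of what the conversion step could look like (the axis convention, with z pointing forward and y up, is an assumption here, and the real code may assign azimuth and elevation differently):

```python
import numpy as np

def to_spherical(vectors):
    """Convert (N, 3) gaze vectors to radius, azimuth, and elevation.
    Assumes z points forward out of the head and y points up."""
    x, y, z = vectors[:, 0], vectors[:, 1], vectors[:, 2]
    radius = np.linalg.norm(vectors, axis=1)        # ~1 for unit vectors
    azimuth = np.degrees(np.arctan2(x, z))          # left/right angle
    elevation = np.degrees(np.arcsin(y / radius))   # up/down angle
    return radius, azimuth, elevation

print(to_spherical(np.array([[0.0, 0.0, 1.0], [0.1, 0.1, 0.99]])))
```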
I also worked on labelling some eye tracking data using software created in MATLAB.  To label the data, I would watch the eye tracking video, and whenever I saw a blink, fixation, or saccade, I would select the corresponding option in the sidebar and then highlight that eye movement's time interval on the graph next to the video.

Here is an image of the data labelling software I used.  The gaze point is marked with the red cross; based on the movement of the gaze point and on whether the person's head was also moving, I classified the various types of eye movements.

Tuesday, August 1, 2017

Day 19—8/1/17

Today I created two more filters to clean the graphs.  The first filter removed all the angular velocity values greater than 900 degrees per second (since it is not biologically possible for human gaze velocities to exceed that).  Then I worked on coding a filter that would interpolate values for the right and left eye vectors when they went outside the bounds of the screen capture image.  After this, I edited my code so that it runs faster and more efficiently.  Today I also learnt about the data labelling that I will be doing.
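Here is a sketch of the idea behind the first filter, with the removed samples filled in by linear interpolation (placeholder names and data, not my actual code):

```python
import numpy as np

def biological_filter(t, velocity, limit=900.0):
    """Drop velocities above the biological limit (deg/s) and
    linearly interpolate over them from the surrounding valid samples."""
    valid = velocity <= limit
    return np.interp(t, t[valid], velocity[valid])

t = np.arange(6.0)
velocity = np.array([100.0, 250.0, 1500.0, 300.0, 2000.0, 150.0])
print(biological_filter(t, velocity))  # the 1500 and 2000 spikes are replaced
```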

Monday, July 31, 2017

Day 18—7/31/17

Today I edited my code so that it is more readable and continued to work on cleaning the graphs.  For most of the day, I worked on making the function that I used to calculate the angular velocity more accurate.  I did this by setting the change in angle between two consecutive vectors (dθ) equal to the arctangent of the sine of the angle (found using the cross product) divided by the cosine of the angle (found using the dot product); see the equation below.  Coding this took some time because I ran into a small error in my program, which I found by the end of the day.  I also read a few articles about signal processing, which is what I will be using to reduce the "noise" in the angular velocity vs time graphs.

Here is the equation that I used to make the change in angle more accurate (for unit vectors, the magnitude of the cross product gives the sine of the angle and the dot product gives the cosine):

dθ = arctan( ‖v₁ × v₂‖ / (v₁ · v₂) )
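In code, the idea could look something like this sketch (I am using NumPy's arctan2, which computes the same ratio while keeping the sign well behaved; the function and variable names are placeholders, not my actual code):

```python
import numpy as np

def angular_velocity(vectors, timestamps):
    """Angle between consecutive unit gaze vectors, computed as
    arctan(|v1 x v2| / (v1 . v2)), divided by the time step."""
    v1, v2 = vectors[:-1], vectors[1:]
    cross = np.linalg.norm(np.cross(v1, v2), axis=1)  # |v1 x v2| = sin(dθ)
    dot = np.sum(v1 * v2, axis=1)                     # v1 . v2  = cos(dθ)
    dtheta = np.degrees(np.arctan2(cross, dot))       # dθ in degrees
    return dtheta / np.diff(timestamps)               # deg/s

# Example: three unit gaze vectors sampled 10 ms apart.
v = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 0.995], [0.2, 0.0, 0.98]])
v = v / np.linalg.norm(v, axis=1, keepdims=True)
t = np.array([0.00, 0.01, 0.02])
print(angular_velocity(v, t))
```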
Here is an image of the graph that I cleaned further with the code:

Day 17—7/28/17

Today (Friday, 7/28) I was not at the internship because I was out of town, so I have nothing to post.

Friday, July 28, 2017

Outline for Presentation Draft

I. Introduction/Background
   A. Background on Eye Movements
      1. Complexity of eye movements & efficiency of human gaze movements
         a. When walking or running, we do not stare at the ground all the time to avoid obstacles and ensure that we are not straying from our path.  Instead, we make occasional eye movements to the ground and our surroundings
         b. Another aspect of the efficiency of human gaze movements is how, when running, we continuously adjust our gaze as our body moves up and down, in synchrony with the movement of our feet
II. Rationale/Purpose
   A. The goal of this project is to analyze and pre-process human gaze data collected from mobile eye trackers so that it can help create a machine learning based system that can predict and mimic a human's gaze movements in specific situations (such as running or navigating an unfamiliar environment)
   B. The machine-learning based system created from this effort can be used in numerous scenarios
      1. Can assist in making robots more efficient in how they extract information from their surroundings
         a. When a robot is navigating an environment, it could mimic a human's gaze movements, which could help reduce the amount of sensory input needed
      2. Can improve the efficiency of Virtual Reality (VR) software
         a. Assist in the improvement of foveated rendering, which reduces the quality of the images in the viewer's peripheral vision.  Foveated rendering lowers the processing power required to create the graphics, so higher resolution images can be rendered in real time since the processing power can be focused on the detailed areas of the image.
III. Methods
   A. Collect human gaze movement data using an SMI (SensoMotoric Instruments) eye tracker (the company has since been acquired by Apple)
      1. Discuss specific scenarios in which data was collected
   B. Create programs in Python to analyze and pre-process the data
      1. Angular velocity vs time graphs & their significance
      2. Using interpolation functions to clean the data and remove any invalid data
   C. Pre-processing the data
      1. Cleaning the data by removing any invalid data values and interpolating over them
      2. Creating filters to further smooth the data and prepare it for data labelling
   D. Classify eye movements for data labelling (a simple velocity-threshold sketch follows this outline)
      1. Types of Eye Movements (Definitions for Data Labelling)
         a. Fixations
            i. When the gaze focuses at one specific location for an extended amount of time
         b. Saccades
            i. Occur when the eye moves from one fixation point to another at a very high rate
IV. Results

V. Conclusion/Recommendations/Future Work
   A. Create a model of human gaze movements using machine learning
      1. This model, as stated before, can be used to help optimize the graphics of VR software, make robots more efficient in navigating their surroundings, and assist in advertising research
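To make section III.D above more concrete, here is a sketch of a simple velocity-threshold classifier (the 100 deg/s threshold is a common textbook value, not necessarily the one we use for labelling):

```python
import numpy as np

def classify(velocity, saccade_threshold=100.0):
    """Label each sample by its angular velocity (deg/s): above the
    threshold counts as a saccade, below as a fixation.  The default
    threshold is a common textbook choice, not the lab's criterion."""
    return np.where(velocity > saccade_threshold, "saccade", "fixation")

print(classify(np.array([20.0, 350.0, 15.0, 500.0])))
```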