Monday, July 31, 2017

Day 18—7/31/17

Today I edited my code to make it more readable and continued cleaning up the graphs.  For most of the day, I worked on making the function that calculates the angular velocity more accurate.  I did this by computing the change in angle between two consecutive gaze vectors (dθ) as the arctangent of the sine of the angle, found using the cross product, divided by the cosine of the angle, found using the dot product (see the equation below).  Coding this took some time because I ran into a small error in my program, which I tracked down by the end of the day.  I also read a few articles about signal processing, which is what I will use to reduce the "noise" in the angular velocity vs. time graphs.

Here is the equation that I used to make the change in angle more accurate (v₁ and v₂ are consecutive gaze vectors):

dθ = arctan( ‖v₁ × v₂‖ / (v₁ · v₂) )
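A minimal sketch of this calculation in Python with NumPy, assuming the gaze directions are stored as an (N, 3) array of 3-D vectors sampled at a known rate; the function name, array layout, and sampling rate are my illustrative choices, not taken from the original code:

```python
import numpy as np

def angular_velocity(gaze, fs):
    """Angular speed (rad/s) between consecutive gaze direction vectors.

    gaze : (N, 3) array of gaze direction vectors
    fs   : sampling rate in Hz
    """
    v1, v2 = gaze[:-1], gaze[1:]
    # The magnitude of the cross product is proportional to sin(dtheta)
    # and the dot product to cos(dtheta); arctan2 of the two recovers
    # dtheta robustly, even for very small angles where arccos of the
    # dot product alone loses precision.
    cross = np.linalg.norm(np.cross(v1, v2), axis=1)
    dot = np.einsum('ij,ij->i', v1, v2)  # row-wise dot products
    dtheta = np.arctan2(cross, dot)
    return dtheta * fs  # radians per second

# Example: a 90-degree rotation over one sample period at 60 Hz
gaze = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
print(angular_velocity(gaze, fs=60))  # ≈ [94.2478], i.e. (pi/2) * 60
```

Using arctan2 this way avoids dividing by a dot product that may be zero (perpendicular vectors) and handles angles past 90 degrees correctly.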

Here is an image of the graph that I cleaned further with the code:
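The post doesn't say which signal-processing technique will be used for the noise reduction, so as a placeholder sketch, here is one of the simplest options: a centered moving-average filter applied to an angular velocity trace.

```python
import numpy as np

def moving_average(signal, window=5):
    """Smooth a 1-D signal with a centered moving average.

    mode='same' keeps the output the same length as the input; near the
    edges the kernel overhangs the array and the missing samples are
    treated as zero, so the first and last few values are biased low.
    """
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode='same')

# Demo: a constant signal of 1.0 plus noise; smoothing should reduce
# the sample-to-sample variation away from the edges.
rng = np.random.default_rng(0)
noisy = 1.0 + 0.1 * rng.standard_normal(500)
smooth = moving_average(noisy, window=11)
```

Averaging over a window of 11 samples shrinks independent noise by roughly a factor of sqrt(11) while leaving slow trends in the signal largely intact; the window length trades off noise reduction against blurring of fast eye movements.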

Day 17—7/28/17

Today (Friday, 7/28) I was not at the internship because I was out of town so I have nothing to post.

Friday, July 28, 2017

Outline for Presentation Draft

I.               Introduction/Background
A.   Background on Eye Movements        
1.     Complexity of eye movements & efficiency of human gaze movements
a.     When walking or running, we do not stare at the ground the whole time to avoid obstacles and ensure that we are not straying from our path.  Instead, we make occasional eye movements to the ground and our surroundings
b.     Another aspect of the efficiency of human gaze movements is that, when running, we continuously adjust our gaze as our body moves up and down, in synchrony with the movement of our feet
II.             Rationale/Purpose
A.   The goal of this project is to analyze and pre-process human gaze data collected from mobile eye trackers so that it can help create a machine learning based system that can predict and mimic a human’s gaze movements in specific situations (such as running or navigating an unfamiliar environment)
B.    The machine-learning based system created from this effort can be used in numerous scenarios
1.     Can assist in making robots more efficient in how they extract information from their surroundings
a.     When a robot is navigating an environment, it could mimic a human’s gaze movements which can help reduce the amount of sensory input needed
2.     Improving the efficiency of Virtual Reality (VR) software
a.     Assist in improving foveated rendering, which reduces the quality of the images in a viewer's peripheral vision.  Foveated rendering lowers the processing power required to create the graphics, so higher-resolution images can be rendered in real time by concentrating the processing power on the detailed areas of the image.
III.           Methods
A.   Collect human gaze movement data using an SMI (SensoMotoric Instruments) eye tracker (company now acquired by Apple)
1.     Discuss specific scenarios in which data was collected
B.    Create programs in Python to analyze and pre-process the data
1.     Angular Velocity vs Time graphs & their Significance
2.     Using interpolation functions to clean the data and remove any invalid data
C.    Pre-processing the data
1.     Cleaning the data by removing any invalid data values and interpolating over them
2.     Creating filters to further smooth the data and prepare it for data labeling
D.   Classify eye movements for data labelling
1.     Types of Eye Movements (Definitions for Data Labeling)
a.     Fixations
i.     When the gaze focuses on one specific location for an extended amount of time
b.     Saccades
i.     Occur when the eye moves from one fixation point to another at a very high rate
IV.          Results 

V.            Conclusion/Recommendations/Future
A.   Create a model of human gaze movements using machine learning
1.     This model, as stated before, can be used to help optimize the graphics of VR software, make robots more efficient at navigating their surroundings, and assist in advertising research
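The cleaning step in section III above (removing invalid data values and interpolating over them) can be sketched as follows. Marking invalid samples as NaN is my assumption for illustration; the actual SMI export may flag invalid samples differently.

```python
import numpy as np

def interpolate_invalid(values):
    """Replace NaN (invalid) samples by linear interpolation
    between the surrounding valid samples."""
    values = np.asarray(values, dtype=float)
    idx = np.arange(len(values))
    valid = ~np.isnan(values)
    # np.interp evaluates a piecewise-linear function defined by the
    # valid (index, value) pairs at every index, filling in the gaps.
    return np.interp(idx, idx[valid], values[valid])

print(interpolate_invalid([1.0, np.nan, 3.0, np.nan, np.nan, 6.0]))
# -> [1. 2. 3. 4. 5. 6.]
```

Linear interpolation is a reasonable default for short tracking dropouts; longer gaps may be better left marked invalid so they are not mistaken for real eye movements during labeling.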