Yosef Razin presented his paper, "Learning to Predict Intent from Gaze During Robotic Hand-Eye Coordination Tasks," at the 2017 AAAI Conference this week in San Francisco, CA. The work examined whether accounting for anticipatory eye movements, in addition to the movements of the robot, improves intent estimation. The research compares various machine learning methods for predicting intent from gaze-tracking data during robotic hand-eye coordination tasks. It found that, with proper feature selection, accuracies exceeding 94% and AUC greater than 91% are achievable with several classification algorithms, but that anticipatory gaze data did not improve intent prediction. The acceptance rate at AAAI this year was less than 25%.
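To illustrate the kind of evaluation the paper reports (classification accuracy and AUC on gaze-derived features), here is a minimal, self-contained sketch. Everything in it is hypothetical: the two "gaze" features, their synthetic distributions, and the nearest-centroid classifier are stand-ins, not the paper's actual features or algorithms.

```python
# Hypothetical sketch: binary intent classification from two synthetic
# "gaze" features, scored with accuracy and AUC as in the paper's metrics.
import random

random.seed(0)

def make_sample(intent):
    # Two made-up features (e.g., fixation duration, gaze-to-target distance);
    # the distributions are invented for illustration only.
    if intent:
        return [random.gauss(0.8, 0.2), random.gauss(0.3, 0.2)], 1
    return [random.gauss(0.4, 0.2), random.gauss(0.7, 0.2)], 0

data = [make_sample(i % 2 == 0) for i in range(400)]
train, test = data[:300], data[300:]

def centroid(rows):
    # Per-dimension mean of the feature vectors.
    n = len(rows)
    return [sum(r[d] for r in rows) / n for d in range(2)]

c1 = centroid([x for x, y in train if y == 1])
c0 = centroid([x for x, y in train if y == 0])

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def score(x):
    # Positive when the sample is closer to the "intent" centroid.
    return dist2(x, c0) - dist2(x, c1)

preds = [(score(x), y) for x, y in test]
acc = sum((s > 0) == (y == 1) for s, y in preds) / len(preds)

# AUC via the Mann-Whitney U statistic: the fraction of (positive, negative)
# pairs in which the positive example receives the higher score.
pos = [s for s, y in preds if y == 1]
neg = [s for s, y in preds if y == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

print(f"accuracy={acc:.2f} AUC={auc:.2f}")
```

On synthetic data this well separated, both metrics land well above chance, which is the shape of result the paper reports for its real gaze features after feature selection.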