Eye tracking
Due to potential drift and the variable relation between EOG signal amplitude and saccade size, it is challenging to use EOG to measure slow eye movements and to detect gaze direction. EOG is, however, a very robust technique for measuring [[saccade|saccadic eye movements]] associated with gaze shifts and for detecting [[blink]]s.
Unlike video-based eye trackers, EOG allows eye movements to be recorded even with the eyes closed, and can thus be used in sleep research. It is a very lightweight approach that, in contrast to current video-based eye trackers, requires little computational power, works under different lighting conditions and can be implemented as an embedded, self-contained [[wearable technology|wearable]] system.<ref>{{cite journal|last=Bulling|first=A. |author2=Roggen, D. |author3=Tröster, G.|year=2009|title=Wearable EOG goggles: Seamless sensing and context-awareness in everyday environments|journal=Journal of Ambient Intelligence and Smart Environments|volume=1|pages=157–171|issue=2|doi=10.3233/AIS-2009-0020|hdl=20.500.11850/352886 |s2cid=18423163 |hdl-access=free}}</ref><ref>{{cite conference |last1=Sopic |first1=D. |last2=Aminifar |first2=A. |last3=Atienza |first3=D. |year=2018 |title=e-glass: A wearable system for real-time detection of epileptic seizures |conference=IEEE International Symposium on Circuits and Systems (ISCAS)}}</ref> It is thus the method of choice for measuring eye movements in mobile daily-life situations and during [[rapid eye movement sleep|REM]] phases of sleep. The major disadvantage of EOG is its relatively poor gaze-direction accuracy compared with a video tracker: it is difficult to determine with good accuracy exactly where a subject is looking, although the timing of eye movements can be determined.
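Saccades and blinks are typically extracted from an EOG trace with a velocity-threshold detector. The sketch below illustrates the general idea in Python; the calibration to degrees, the threshold value and the function names are illustrative assumptions rather than parameters of the cited systems.

<syntaxhighlight lang="python">
import numpy as np

def detect_saccades(eog, fs, vel_thresh=50.0):
    """Velocity-threshold saccade detection on a 1-D EOG trace.

    eog        -- horizontal EOG signal, assumed amplitude-calibrated to
                  degrees (raw EOG is measured in microvolts)
    fs         -- sampling rate in Hz
    vel_thresh -- velocity threshold in deg/s (illustrative value)

    Returns a list of (start, end) sample indices of detected events.
    """
    velocity = np.gradient(eog) * fs          # approximate velocity in deg/s
    above = np.abs(velocity) > vel_thresh
    # Pad with False so events touching the trace boundaries are closed.
    edges = np.diff(np.r_[False, above, False].astype(int))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return list(zip(starts, ends))
</syntaxhighlight>

Blinks can be found with the same mechanism on the vertical EOG channel, where they appear as large, stereotyped spikes that are easy to distinguish from saccades.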
 
 
==Technologies and techniques==
 
Results are also available on human eye movements under natural conditions in which head movements are allowed.<ref>{{cite journal | last1 = Einhäuser | first1 = W | last2 = Schumann | first2 = F | last3 = Bardins | first3 = S | last4 = Bartl | first4 = K | last5 = Böning | first5 = G | last6 = Schneider | first6 = E | last7 = König | first7 = P | year = 2007 | title = Human eye-head co-ordination in natural exploration | journal = Network: Computation in Neural Systems | volume = 18 | issue = 3| pages = 267–297 | doi = 10.1080/09548980701671094 | pmid = 17926195 | s2cid = 1812177 }}</ref> The relative position of eye and head, even with constant gaze direction, influences neuronal activity in higher visual areas.<ref>{{cite journal | last1 = Andersen | first1 = R. A. | last2 = Bracewell | first2 = R. M. | last3 = Barash | first3 = S. | last4 = Gnadt | first4 = J. W. | last5 = Fogassi | first5 = L. | year = 1990 | title = Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque | journal = Journal of Neuroscience | volume = 10 | issue = 4| pages = 1176–1196 | doi = 10.1523/JNEUROSCI.10-04-01176.1990 | pmid = 2329374 | pmc = 6570201 | s2cid = 18817768 }}</ref>
 
===The Gaze Machine===
The Gaze Machine<ref>{{cite patent |country=Italy |number=IT1379233 |title=Gaze Machine |pubdate=October 10, 2007 |assignee=University of Rome Sapienza |inventor=F. Pirri, A. Carbone, A. Belardinelli}}</ref> was a wearable gaze-tracking device developed in the early 2000s and patented by F. Pirri, A. Carbone, and A. Belardinelli in 2007.
 
The Gaze Machine was a framework for collecting and analyzing human attentive behaviors in natural settings. Mounted on a pair of large eyeglasses, it comprised two scene cameras for the 3D reconstruction of the scene, two infrared-emitting diodes (IREDs) to highlight the pupils, and two eye cameras to track the eye motion in the reconstructed 3D scene<ref>{{cite conference |last1=Pirri |first1=F. |last2=Pizzoli |first2=M. |last3=Rudi |first3=A. |title=A general method for the point of regard estimation in 3D space |conference=CVPR |pages=921–928 |year=2011}}</ref><ref>{{cite conference |last1=Pirri |first1=F. |last2=Pizzoli |first2=M. |last3=Rigato |first3=D. |last4=Shabani |first4=R. |title=3D saliency maps |conference=CVPRW |pages=9–14 |year=2011}}</ref> (see the figure).
[[File:GazeMachine.gif|thumb|A version of the Gaze Machine showing the scene cameras, eye cameras and IRED LEDs]]
 
The device required only a quick calibration, during which the wearer fixated a target while panning and tilting the head; the 3D position of the target was recovered by stereo vision.
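The patent does not spell out the computation, but recovering a fixated point's 3D position from two calibrated cameras is standard stereo triangulation. A minimal sketch with OpenCV, assuming the 3×4 projection matrices of the two scene cameras are known from a prior stereo calibration (all names here are illustrative):

<syntaxhighlight lang="python">
import numpy as np
import cv2

def triangulate_target(P_left, P_right, px_left, px_right):
    """Recover the 3-D position of the fixated calibration target.

    P_left, P_right   -- 3x4 projection matrices of the two scene cameras
    px_left, px_right -- (x, y) pixel coordinates of the target in each view
    """
    pts_l = np.asarray(px_left, dtype=np.float64).reshape(2, 1)
    pts_r = np.asarray(px_right, dtype=np.float64).reshape(2, 1)
    # cv2.triangulatePoints returns homogeneous coordinates (4x1).
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)
    return (X_h[:3] / X_h[3]).ravel()  # Euclidean (x, y, z)
</syntaxhighlight>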
 
The Gaze Machine was able to simultaneously reconstruct the scene, localize the head of the wearer in space, and project the human point of regard (POR) into the 3D scene in real time. Gaze fixations were collected both on the reconstructed 3D scene and on a 2D video showing the gaze scan path. [[File:3D Projection of Point of Regard (POR).gif|thumb|Projection of the point of regard (POR) while simultaneously reconstructing the scene; the task is to search for a small object in the scene.]] [[File:Precise POR path projected in the scene.gif|thumb|Precise identification of the gaze path projected on video during a search task.]]
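The full estimation method is given in the cited CVPR paper; as a rough geometric illustration only, once the two gaze rays have been expressed in the scene frame, the POR can be approximated as the midpoint of the shortest segment between them, since the two rays are generally skew and never intersect exactly. The sketch below assumes unit direction vectors and hypothetical names; it is not the Gaze Machine's actual algorithm.

<syntaxhighlight lang="python">
import numpy as np

def point_of_regard(o_l, d_l, o_r, d_r):
    """Midpoint of the shortest segment between two gaze rays.

    o_l, o_r -- 3-D eye positions (ray origins) in the scene frame
    d_l, d_r -- unit gaze direction vectors estimated by the eye cameras
    """
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b          # close to 0 when the rays are parallel
    if abs(denom) < 1e-9:
        return None                # no well-defined intersection
    s = (b * e - c * d) / denom    # parameter along the left ray
    t = (a * e - b * d) / denom    # parameter along the right ray
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))
</syntaxhighlight>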
 
The advantage of the Gaze Machine over other eye trackers of its time was its effectiveness in measuring the region of the field of view (FOV) attended to by the wearer.
 
<gallery mode="packed" class="center">
File:GM_on_FireFighters_helmet.jpg|Gaze Machine mounted on a firefighter's helmet.
File:Gm tunnel.gif|Point of regard (POR) projection during a victim search in a simulated car accident inside a tunnel at a firefighters' training camp.
</gallery>
 
The Gaze Machine project lasted ten years, collecting a considerable amount of data on eye search behaviors and human attention.<ref>{{cite journal |last1=Belardinelli |first1=A. |last2=Pirri |first2=F. |last3=Carbone |first3=A. |title=Bottom-up gaze shifts and fixations learning by imitation |journal=IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) |volume=37 |year=2007}}</ref><ref>{{cite journal |last1=Belardinelli |first1=A. |last2=Pirri |first2=F. |last3=Carbone |first3=A. |title=Gaze motion clustering in scan-path estimation |journal=Cognitive Processing |volume=9 |pages=269–282}}</ref><ref>{{cite conference |last1=Belardinelli |first1=A. |last2=Pirri |first2=F. |last3=Carbone |first3=A. |title=Motion saliency maps from spatiotemporal filtering |conference=International Workshop on Attention in Cognitive Systems |pages=112–123 |year=2008}}</ref><ref>{{cite journal |last1=Ntouskos |first1=V. |last2=Pirri |first2=F. |last3=Pizzoli |first3=M. |last4=Sinha |first4=A. |last5=Cafaro |first5=B. |title=Saliency prediction in the coherence theory of attention |journal=Biologically Inspired Cognitive Architectures |volume=5 |pages=10–28 |year=2013}}</ref> It has also been used to study attention in robots.<ref>{{cite journal |last1=Belardinelli |first1=A. |last2=Pirri |first2=F. |title=A biologically plausible robot attention model, based on space and time |journal=Cognitive Processing |year=2006}}</ref><ref>{{cite journal |last1=Carbone |first1=A. |last2=Finzi |first2=A. |last3=Orlandini |first3=A. |last4=Pirri |first4=F. |title=Model-based control architecture for attentive robots in rescue scenarios |journal=Autonomous Robots |volume=24 |pages=87–120 |year=2008}}</ref><ref>{{cite conference |last1=Carbone |first1=A. |last2=Pirri |first2=F. |title=Analysis of the local statistics at the centre of fixation during visual scene exploration |conference=Proceedings of IARP International Workshop on Robotics for risky interventions and Environmental Surveillance, RISE |year=2010}}</ref><ref>{{cite conference |last1=Mancas |first1=M. |last2=Pirri |first2=F. |last3=Pizzoli |first3=M. |title=From saliency to eye gaze: Embodied visual selection for a pan-tilt-based robotic head |conference=International Symposium on Visual Computing |pages=135–146 |year=2011}}</ref><ref>{{cite conference |last1=Zillich |first1=M. |last2=Frintrop |first2=S. |last3=Pirri |first3=F. |last4=Potapova |first4=E. |last5=Vincze |first5=M. |title=Workshop on attention models in robotics: visual systems for better HRI |conference=Proceedings of the ACM/IEEE international conference on Human-robot interaction |year=2014}}</ref>
 
 
== Practice ==