It is late tonight, but I simply wanted to post a note of congratulations to Roman for successfully defending his dissertation today (which contributed to improving the methodology of eye-tracking studies, specifically in the domain of computer programming).
Not only was it a well-deserved Ph.D., but it was also a special pleasure to see. Ever since I came to Joensuu, Roman has gone out of his way to be a friend to me, helping me feel welcome and keeping me informed of ways to get involved locally, and I am continually impressed by how much he reaches out to help anyone who needs it.
So for these reasons and more, today I say to Dr. Bednarik, “Hyvin tehty!” (“Well done!”)
So… now I want to know more about the eye-tracking research. There’s a chance that I’ll get into that area of study. I’ve wondered whether any eye-tracking technology exists that can detect depth of field along with focal angle. And is there a good way to track irregular eyes? In the last study I heard of, it seems only about two-thirds of the data were usable because of irregularities in participants’ eyes.
Good questions, Joseph. Now that I’m sure Roman will have so much free time on his hands 🙂 I bet he can answer them for you. If he doesn’t see this message, then I’ll email him and ask him to get in touch with you.
Joseph, I am not sure what you mean by irregular eyes. The most common reasons an eye tracker fails to capture someone’s eyes are 1) eyeglasses and contact lenses, 2) head movements, and 3) an occluded iris (e.g., droopy eyelids).
As for depth of field, I have not seen such a system yet. There is some research on tracking eye movements in 3D environments, but not using depth of field. Eye tracking, as far as I know, is used in some VR systems to emulate a depth-of-field effect.
Hm. Thanks for addressing my questions, Roman. Perhaps I misremember, but I thought that an irregularly shaped pupil or retina could throw off eye tracking if the system relies on a low-power infrared light reflecting off the retina in a circular pattern.
It makes sense that one could infer depth of field by finding the point where the two eyes’ gaze directions intersect (i.e., from vergence). I wonder whether actually detecting things like aperture size and lens flex would provide useful information.
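To make that idea concrete: given a gaze ray from each eye (as a binocular tracker might report), the two rays rarely intersect exactly in 3D, so a common approach is to take the midpoint of the shortest segment between them as the estimated fixation point. This is only a sketch of that geometry; the function name and coordinate conventions here are my own, not from any particular eye-tracking system.

```python
import numpy as np

def vergence_depth(left_origin, left_dir, right_origin, right_dir):
    """Estimate the 3D fixation point from two gaze rays.

    Each ray is given by an eye position (origin) and a gaze direction.
    Returns the midpoint of the shortest segment between the two rays,
    or None if the rays are (nearly) parallel, i.e. gaze at infinity.
    """
    p1, d1 = np.asarray(left_origin, float), np.asarray(left_dir, float)
    p2, d2 = np.asarray(right_origin, float), np.asarray(right_dir, float)
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)

    # Standard closest-point-between-two-lines solution:
    # minimize |(p1 + t1*d1) - (p2 + t2*d2)| over t1, t2.
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b  # approaches 0 as the rays become parallel
    if abs(denom) < 1e-9:
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    closest_on_left = p1 + t1 * d1
    closest_on_right = p2 + t2 * d2
    return (closest_on_left + closest_on_right) / 2.0
```

For example, with eyes 6 cm apart both aimed at a point 1 m straight ahead, the function recovers a fixation point at roughly z = 1.0 in the same units; in practice, noise in the measured gaze angles would make the depth estimate increasingly uncertain at larger distances, since the vergence angle shrinks.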