Drowsiness Sensing System by Detecting Eye-blink on Android based Smartphones

  • Vununu, Caleb (Dept. of IT Convergence and Application Engineering, PuKyong National University) ;
  • Seung, Teak-Young (Dept. of IT Convergence and Application Engineering, PuKyong National University) ;
  • Moon, Kwang-Seok (Dept. of Electronics Eng., PuKyong Nat'l University) ;
  • Lee, Suk-Hwan (l University) ;
  • Kwon, Ki-Ryong (Dept. of Information Security, TongMyong University)
  • Received : 2016.01.01
  • Reviewed : 2016.02.01
  • Published : 2016.05.30

Abstract

This paper introduces an approach to detecting drowsiness on Android-based smartphones using the OpenCV platform tools. OpenCV for Android provides powerful tools for real-time tracking of body parts. We discuss how to maximize the accuracy of real-time eye tracking, then develop an approach for detecting eye blinks by analyzing the structure and color variations of the human eye. Finally, we introduce a time variable to capture drowsiness.

Keywords

1. INTRODUCTION AND RELATED WORKS

Drowsiness while driving is a serious problem and is believed to be a direct and important contributing cause of road accidents. Many studies seek ways to considerably reduce road accidents caused by drowsiness, and most of them are essentially based on analyzing the signals (waveforms) from electroencephalography (EEG) with different proposed methods [1-3]. Electroencephalography (see Fig. 1(a)) is an electrophysiological monitoring technique used to record the spontaneous electrical activity of the brain over time; it measures voltage fluctuations resulting from ionic currents within the neurons of the brain. The signals from the EEG (Fig. 1(b)) are then analyzed in order to detect dysfunctions in the brain's activity. But analyzing EEG signals is obviously not the only way to detect drowsiness: one of the most popular approaches for stating drowsiness is to capture eye blinks.

Fig. 1.(a) An electroencephalograph used on a subject to analyze neural activity. (b) An illustration of the spontaneous electrical activity of a human brain recorded as a signal.

The literature on eye blink detection is abundant; we cite here some of the most referenced works on the topic. Marc Lalonde et al. proposed real-time eye blink detection with GPU-based scale-invariant feature transform (SIFT) tracking [4]: an implementation of a GPU-based, real-time eye blink detector for very low-contrast images acquired under near-infrared illumination. Patrick Polatsek proposed an eye blink detector that can be used in a dry-eye prevention system, with an algorithm based on histogram back-projection and optical flow [5]. Michael Chau and Margrit Betke proposed real-time blink detection using USB cameras [6].

All the above-cited references discuss eye blink detection without any further consideration, but many other studies propose drowsiness detection systems based essentially on catching eye blinks. Danisman et al. proposed an automatic drowsy-driver monitoring and accident prevention system based on monitoring changes in eye blink duration; they detect visual changes in eye locations using a horizontal symmetry feature of the eyes [7]. That is an authentic approach that uses only eye blink detection and introduces a time variable for stating drowsiness, an approach we will also use in this discussion. Another widely referenced paper using this approach was proposed by Takehiro Ito et al. [8]: they measure the blinking of a driver in real time using motion-picture processing, then presume consciousness degradation from the change in blink duration, a method used to state drowsiness. The work of Hiroshi Ueno et al., who conducted an informative comparative study of different techniques for detecting drowsiness [9], was also useful to us. Many other methods for detecting drowsiness are listed in [10] and [11].

It is a fact that devices like the electroencephalograph cannot be used anytime and everywhere. We are trying to develop an accurate sensing system that is as accessible as possible to a great number of users around the world. In this work, we propose a drowsiness detection approach using the cameras of Android-based smartphones.

Android is everywhere nowadays: the OS proposed by Google is known to control over 80% of the smartphone market share. Android-based smartphones are far more accessible than many other devices used today for drowsiness detection.

Proposing an effective drowsiness detector on such a popular platform is, without a shadow of a doubt, an interesting initiative.

Although the study was not conducted with drivers in mind, drivers are the natural end users of such a system.

 

2. PROPOSED METHOD FOR DROWSINESS DETECTION

2.1 Steps of Eye Detection Process

We use three cascade classifiers provided by the OpenCV sources: the LBP cascade for the frontal face, and the left and right Haar cascade classifiers for the eyes. The process is shown in Fig. 2. This method of eye tracking stands out for its accuracy.

Fig. 2.The principal steps of the eye detection process: detect the face, compute the eye area inside the face by hand, split that area into two parts, and find each eye in the corresponding part.
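The region-splitting step of Fig. 2 reduces to plain rectangle arithmetic on the detected face. The following is a minimal sketch, not the paper's implementation; the band fractions (`band_top`, `band_height`) and the function name are our assumptions:

```python
# Sketch of the Fig. 2 splitting step: given a detected face rectangle,
# estimate the horizontal eye band and split it into left/right search
# areas, one per eye. The fractions below are illustrative assumptions.

def eye_search_areas(face, band_top=0.25, band_height=0.30):
    """face = (x, y, w, h). Returns (left_area, right_area) rectangles."""
    x, y, w, h = face
    top = y + int(h * band_top)       # eyes sit below the forehead
    height = int(h * band_height)     # a horizontal band across the face
    half = w // 2
    left_area = (x, top, half, height)              # left half of the band
    right_area = (x + half, top, w - half, height)  # right half of the band
    return left_area, right_area

left, right = eye_search_areas((100, 80, 200, 200))
```

Each returned rectangle would then be passed to the corresponding Haar eye cascade, so each classifier searches only its own half of the face.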

2.2 Color Structure of Human Eyes

Generally, except in cases of anomalies, the human eye has three areas according to the variations of color (see Fig. 3): the outer area, the sclera; the first inner area, the iris, the large circle occupying most of the center of the eye; and the second inner area, the pupil, the small circle right in the middle of the eye.

Fig. 3.The 3 different parts of the eye: the sclera (white), the iris (green in this example) and the pupil (black).

In our discussion, we are interested only in the points where the color changes significantly, as shown in Fig. 4. Since eye color differs from person to person, which color changes to which does not matter here. It is important to emphasize that it is not the particular color of the eye that matters, but the variation of color from one part to another; the distinction between the parts is based on those variations. So the approach developed in this discussion can be used for all human eyes. Anomalies can pose certain substantive problems, but the approach can be used in all cases where the colors of the different parts of the eye are not altered. The places where the color changes significantly are shown as small red stars in Fig. 4.

Fig. 4.The intersections between the 3 different parts of the eye. Those intersections represent the points where the color radically changes.

Let us take a look at Fig. 3. Let A1 denote the mean color intensity of the sclera, A2 that of the iris, and A3 that of the pupil. In all cases (except anomalies), A1 > A2 > A3. The sclera is generally white (in a fuzzy sense), which makes A1 higher than A2 and A3. The color of the iris (mean intensity A2) varies from person to person (and it is quite difficult to find people for whom this part of the eye is completely white), while the pupil tends to be black (low intensity) or completely black in almost all cases. Even though the colors differ from person to person, the variation between the three parts is always abrupt and radical. In our figures (Fig. 3 and 4), for example, the color changes from the white of the sclera to almost black (it does not reach black in most cases); then there is a small, slowly varying region between the sclera and the iris, the color changing slowly from almost black to green; then the green changes to black from the iris to the pupil. We use those color variations to find global minimum and/or global maximum points in the eye area. In our case, we only pay attention to the global minimum point, for a reason explained in the next subsection.
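The ordering A1 > A2 > A3 can be illustrated on a synthetic grayscale line profile crossing the eye. The intensity values below are invented for illustration, not measured data:

```python
# Synthetic grayscale line profile across an open eye:
# sclera -> iris -> pupil -> iris -> sclera. Whatever the iris color is,
# in grayscale it falls between the near-white sclera and near-black pupil.

profile = (
    [230] * 10 +   # sclera: near-white
    [90] * 8 +     # iris: mid intensity
    [15] * 6 +     # pupil: near-black
    [90] * 8 +     # iris again
    [230] * 10     # sclera again
)

def mean(xs):
    return sum(xs) / len(xs)

A1 = mean(profile[:10])    # sclera
A2 = mean(profile[10:18])  # iris
A3 = mean(profile[18:24])  # pupil
assert A1 > A2 > A3        # the ordering the method relies on
```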

2.3 Global Minimum Point and Global Maximum Point

Functions can have "hills and valleys" in their curves. Minimum and maximum points are places where a function reaches a minimum or maximum value along its curve. The global minimum point is the point where the function reaches its minimum value over the entire curve; conversely, the global maximum point is where it reaches its maximum value over the entire curve (see Fig. 5 for an illustration).

We take the rectangles obtained during the eye detection process as our spatial domains, and the function in which we find the global minimum and maximum points is the function describing the variations of the color of the eye (that is, over the set of pixels of the eye area). Now look at Fig. 8. As said before, the mean color intensity of the sclera is higher than that of the two other parts, the iris and the pupil. Assuming that the x-axis of our spatial domain (the rectangle obtained during eye detection) runs from left to right, and that white represents the greatest value on the intensity scale and black the smallest, the function representing an arbitrary line profile of the eye area (see Fig. 6) starts from the highest value (the white of the sclera), changes significantly from white to relatively dark, then to green, then to completely black. It then repeats the same process in reverse: from black to green, from green to relatively dark, then from relatively dark to white again.

Fig. 5.Global Maximum Point and Global Minimum Point shown on the curve of a given function.

Fig. 6.An arbitrary line profile on the eye.

As mentioned before, the color of the iris (green in our case) varies from person to person. The main point to keep in mind is that in almost all cases (except anomalies), the sclera is white and its mean intensity remains the highest compared to the other parts of the eye. Whatever color the iris has, its mean intensity remains smaller than that of the sclera. Thus the curve of our function will, in all cases, start from the highest value, then pass through a region whose intensity does not quite reach the smallest value: the intensity of the iris is always greater than that of the pupil (the black center in the middle), which is essentially black. From person to person, from one eye to another, no matter which color changes to which, we get the same ordering: A1 > A2 > A3, with A1 the intensity of the sclera, A2 that of the iris, and A3 that of the pupil, the small black circle right in the middle of the eye.

Fig. 7 represents the function describing the color variations along the arbitrarily chosen line profile of Fig. 6. As we can clearly see, the global minimum point of the function is found right in the middle of our spatial domain, while the global maximum point is caught right at the beginning or the end of the spatial domain. For that reason, analyzing the movements of the global maximum point would be risky: even when the eyes are relatively closed, the white parts at the two opposite sides of the eye do not really disappear. That is why we focus only on analyzing the movements of the global minimum point. In practice, as shown in Fig. 8, the global minimum point is found at the onset of the pupil.

Fig. 7.The function describing the color variations of the eye over the line profile chosen in Fig. 6.

Fig. 8.Global Minimum Point and Global Maximum Point found on eye over the chosen line profile. The first is found in the pupil while the second is found on the extremity of the sclera.
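Locating the global minimum point on a line profile reduces to an argmin over the pixel intensities. A minimal sketch on the same kind of synthetic open-eye profile (the values are assumed, not measured):

```python
# With the eye open, the darkest pixel (the pupil) sits near the middle
# of the spatial domain, so the argmin of the profile lands near the center.

profile_open = [230] * 10 + [90] * 8 + [15] * 6 + [90] * 8 + [230] * 10

def global_min_index(profile):
    """Index of the global minimum point along the line profile."""
    return min(range(len(profile)), key=lambda i: profile[i])

i_min = global_min_index(profile_open)
center = len(profile_open) // 2
assert abs(i_min - center) < len(profile_open) // 6  # near the middle
```

In a real implementation this would run per frame on the grayscale pixels of the eye rectangle (e.g. with `numpy.argmin` for speed); the pure-Python version above only shows the principle.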

2.4 Eyes Blink Detection

The global minimum point just found is the point whose movements we analyze to capture eye blinks. The color variations in the eye area described above do not remain the same when the eye is closed or almost closed; they change abruptly. The effect is that the global minimum point moves from the middle of the eye area to the left part of the eye area, because the curve described above changes completely. When the eye is relatively closed, the eyelashes make the global minimum point move (see Fig. 9).

Fig. 9.The global minimum point moves to the left when the eye is relatively closed.

When the eye is open, the black color, which is the smallest intensity value, is located at the center of the spatial domain, in the pupil. The eyelashes bring black points over the whole eye area, so the smallest value can now be captured at the beginning or even at the end of the eye: the location of the global minimum point changes, and we use those changes to capture eye blinks. One may think of the case where the eyelashes are absent, but the situation remains the same. Recall that the intensity of the sclera is greater than those of the iris and the pupil: A1 > A2 > A3. When the eyes are relatively closed, the central black part (the pupil) completely disappears, which makes the smallest value of our spatial domain leave the center. One might also suppose that the pupil can remain visible; that is why the chosen line profile must be in the upper part of the eye. We can divide the eye area in two, from top to bottom, and select a line profile located in the upper half. That forces the black part at the center of the eye to disappear when the eyes are relatively closed. And if the eyelashes are absent, once the central black circle disappears, the smallest intensity value of our spatial domain will be located in the iris, making the global minimum point move from the middle to the onset of the iris (A2).

Color variations also depend on the intensity of the light in which the experiment is conducted. A better way of stating drowsiness is to capture the first position of the global minimum point when the eyes are detected. We can divide the eye area into three parts horizontally, from left to right, and two parts vertically, upper and bottom. As recommended before, when the chosen line profile is placed in the upper part of the eye, the global minimum point appears in the center and upper parts of the eye area. In that case, when the eye is relatively closed, the point moves to the left. But in our experiments, if the light is not sufficient, the point does not move to the left when we blink; it simply goes down, to the bottom part of the eye area.

The point moves from the middle of the spatial domain to the extremity (the most common case), or from the upper part of the eye to the bottom part. The most important thing is to catch the movements of the global minimum point: when its location radically changes (to the left part or to the bottom part), a blink has been detected.

Our proposed method for detecting the eyes blink can be summarized as follows:

Fig. 10.The global minimum point in the center of the eye area when the eye is open.

Fig. 11.The global minimum point in the left part during the blinking.
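The blink test described above — record the cell in which the global minimum point first appears, then flag a blink whenever the point leaves that cell — can be sketched as follows. The 3x2 grid matches the horizontal and vertical division described in Section 2.4; the function names and frame coordinates are our assumptions:

```python
# Divide the eye rectangle into a 3x2 grid of cells, remember the cell
# where the global minimum point first appears, and flag frames where
# the point has left that cell (a radical change of position = a blink).

def cell_of(point, eye_w, eye_h):
    """Map an (x, y) minimum point to a (column, row) cell of a 3x2 grid."""
    x, y = point
    col = min(2, x * 3 // eye_w)   # 0 = left, 1 = center, 2 = right
    row = min(1, y * 2 // eye_h)   # 0 = upper, 1 = bottom
    return col, row

def detect_blinks(min_points, eye_w, eye_h):
    """One boolean per frame: True when the min point left its first cell."""
    home = cell_of(min_points[0], eye_w, eye_h)
    return [cell_of(p, eye_w, eye_h) != home for p in min_points]

# Open eye: point near the center; one blink: point jumps to the left part.
frames = [(45, 12), (44, 13), (5, 14), (46, 12)]
blinks = detect_blinks(frames, eye_w=90, eye_h=40)
```

Comparing cells rather than raw pixel coordinates makes the test robust to the small jitter of the minimum point between consecutive open-eye frames.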

2.5 Drowsiness Detection

We introduce here a time variable used to state drowsiness: we capture a perceptible behavior over a given amount of time during the real-time tracking process, in order to state something about the subject.

So we can postulate that when the global minimum point stays out of the center area of the spatial domain for X seconds, the subject was drowsing for X seconds. The proposed method for catching drowsiness can be summarized as follows:

The extreme accuracy of the eye detection process allows us to catch the eyes every time we detect the face. The eyes can be lost when they are completely closed. So we can also postulate that when the eyes are lost for X seconds while the face is still detected during that same amount of time, the subject was drowsing (the case of completely closed eyes).
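The time variable can be sketched as a streak counter over per-frame observations. The function names, the frame rate, and the threshold below are our assumptions; the same counter covers both the "point out of center" case and the "eyes lost while the face is detected" case:

```python
# Count consecutive frames where the eye is judged closed (min point out
# of its home cell, or eyes lost while the face is still detected), and
# state drowsiness once the streak spans X seconds.

def drowsy_frames(events, fps, x_seconds):
    """events: per-frame booleans, True = eye closed/lost this frame.
    Returns per-frame booleans, True once the closed streak reaches X s."""
    threshold = int(fps * x_seconds)
    streak, out = 0, []
    for closed in events:
        streak = streak + 1 if closed else 0  # reset when the eye reopens
        out.append(streak >= threshold)
    return out

# 10 fps, X = 0.5 s -> 5 consecutive closed frames mean drowsiness.
events = [False] + [True] * 7 + [False]
flags = drowsy_frames(events, fps=10, x_seconds=0.5)
```

A short blink (a streak of one or two frames) never reaches the threshold, so only sustained closures are reported as drowsiness.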

 

3. EXPERIMENTAL RESULTS AND ANALYSIS

We have used the latest version of the "OpenCV for Android" sources at the time of writing: OpenCV 2.4.12.0.

The devices used:

a. Samsung Galaxy S3:

b. Pantech Vega R3 IM-A850L:

The accuracy of eye detection may be problematic when the environment of the experiment is not well lit. As a reminder, the whole drowsiness detection process is based on the movements of the global minimum point found on the eyes, so the first thing to care about when performing drowsiness sensing with our approach is to maximize the accuracy of eye detection. The method proposed here stands out for its high accuracy: as we can see in Figs. 12-15, eye detection succeeds in all cases where the face is detected. The difference in resolution between the devices had little effect on accuracy. The Samsung Galaxy S3 (1.9-megapixel front camera) and all devices of similar resolution (less than 2 megapixels) tend to slightly smooth the frame, as we can see in Figs. 12 and 15; the image in Fig. 12 is visibly less clear than those in Figs. 13 and 14. These degradations, due to modest resolution, do not affect eye detection at all. The probability of detecting the face was almost the same with all the devices we used, even when the difference in resolution was large, and once face detection succeeds, as shown in the figures, eye detection also succeeds in all cases.

Fig. 12.(a) The global minimum point found in the center of the eye area when the eye is open, obtained during the experiments. (b) The global minimum point moves from the center to the right in the left eye and from the center to the left in the right eye, obtained during the experiments.

Fig. 13.(a) The global minimum point found in the center of the eye area when the eye is open, obtained during the experiments. (b) The global minimum point moves from the center to the right part of the eye, obtained during the experiments.

Fig. 14.(a) The eyes are detected when the face is detected. (b) The eyes detection fails while the face detection succeeds because the eyes are completely closed.

Fig. 15.(a) The global minimum point is detected right in the center of the eyes even in the case of tiny eyes. (b) The global minimum point moves to the left in the left eye and goes to the right part in the right eye.

The global minimum point is caught right in the middle of the eye. In Fig. 12, we placed the line profile in the middle-upper part of the eye; the location of the global minimum point is shown in Fig. 12(a). When the eye is relatively closed, the global minimum point of the left eye moves from the center to the right, and that of the right eye moves clearly from the center to the left, as shown in Fig. 12(b). Those movements remain the same for all devices of similar resolution when the light is sufficient. In most cases, when the environment is poorly lit, the global minimum point moves from the upper part to the bottom part of the eye area on devices with a front-camera resolution of 2 megapixels or less. As we said before, capturing this radical change of position is the basis of blink detection.

In Fig. 13, we placed the line profile right in the middle of the eye, and the global minimum point also appears right in the middle, as shown in Fig. 13(a). For all devices with a resolution similar to that of the Pantech Vega R3 IM-A850L (2.1 megapixels and beyond), when the environment is well lit, the global minimum point moves as on the Samsung Galaxy S3 in the same situation: from the center to the right for the left eye and from the center to the left for the right eye. But when the light is poor, the global minimum point sometimes moves from the middle line of the eye to the upper part, as shown in Fig. 13(b), where the point moves classically from the center to the left for the left eye. That is the most frequent situation when the light is insufficient and the front camera has a resolution of 2 megapixels or more. We can clearly see the difference in lighting between Figs. 12(b) and 13(b): in 12(b) the light is intense enough, while in 13(b) it is not. In all cases, the global minimum point radically changes its first captured position when the eye is relatively closed.

Fig. 15 shows the case of tiny eyes, where the eyelashes are absent, and we obtained the expected results. In 15(a), the global minimum point appears right in the pupil (the center of the eye area). With tiny eyes, the difference between open and closed eyes is really slight, and there is no difference at all between relatively closed eyes (blinking) and completely closed eyes. In both cases the eyes are detected, and when a blink is detected or the eyes are closed, the global minimum point radically changes its position, as shown in Fig. 15(b): the point moves from the center to the left part for the left eye, and to the right part for the right eye. In the example shown for the right eye in Fig. 15(b), the global minimum point is caught under the line separating the center and right parts; the point itself (one pixel) is entirely located in the right part. For illustration, we have drawn a circle of 3-pixel radius around the global minimum point in all the images of Figs. 12-15. In all cases, it is practically easy to capture the radical changes of position of the global minimum point. Comparatively, the closed eye is about 50% of the open eye with tiny eyes (Fig. 15) and nearly 20% with big eyes (Fig. 14), but we obtain the same result in both cases in terms of movements of the global minimum point: it appears right in the center of the eye when the eyes are open and radically changes its position when a blink is detected or the eyes are closed.

As said in the preceding paragraph, the "relatively closed eyes" and "completely closed eyes" situations are similar for tiny eyes. In that case, blinking and closed eyes are detected in the same way: by capturing the changes of position of the global minimum point.

For big eyes, as shown in Fig. 14, the closed-eyes situation can be detected as explained in Section 2. We deliberately closed the eyes firmly (see Fig. 14(b)) in order to capture a "long and stable" closed-eyes detection (a situation where the eyes are lost for a long time while the face is detected during the same amount of time). In the case where eye detection still succeeds when the eyes are completely closed, that is, the eyes are not lost, the global minimum point behaves as in the blinking situation, as with tiny eyes. The closed-eyes detection succeeds in all cases, even when the light is not sufficient. Eye detection can also fail when the eyes are relatively closed under extreme light deficiency, which means that the eyes can be lost when they blink if the light is extremely poor. But this is not a problem: as discussed before, the accuracy of eye detection allows us to state closed eyes whenever the eyes are lost while face detection still succeeds. We can also state drowsiness when the eyes are lost for a given amount of time while the face is still detected during that exact same amount of time, especially in the case of a lack of light.

 

4. CONCLUSION

During several tests, the accuracy of eye detection was considerably impressive. It is necessary to recall here that the eyes are detected within the face, so they are tracked only after the face has been detected; if face detection fails, eye detection cannot even start. With the accuracy-maximizing process explained above, once the face is detected, there is almost no chance that eye detection fails: accuracy is almost 100%.

The global minimum point appears right in the middle of the spatial domain for both big and tiny eyes. Every time we blink or close our eyes, the global minimum point radically changes its location, in both cases. Using those changes, together with the time variable explained above, is a way of stating drowsiness.

References

  1. A.G. Correa, L. Orosco, and E. Laciar, “Automatic Detection of Drowsiness in EEG Records Based on Multimodal Analysis,” Journal of Medical Engineering & Physics, Vol. 36, Issue 2, pp. 244-249, 2014. https://doi.org/10.1016/j.medengphy.2013.07.011
  2. C.T. Lin, R.C. Wu, S.F. Liang, W.H. Chao, Y.J. Chen, and T.P. Jung, “EEG-Based Drowsiness Estimation for Safety Driving Using Independent Component Analysis,” IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 52, Issue 12, pp. 2726-2738, 2005. https://doi.org/10.1109/TCSI.2005.857555
  3. Y. Yin, Y. Zhu, S. Xiong, and J. Zhang, "Drowsiness Detection from EEG Spectrum Analysis," Journal of Informatics in Control, Automation and Robotics, Vol. 2, LNEE 133, pp. 753-759, 2012.
  4. M. Lalonde, D. Byrns, L. Gagnon, N. Teasdale, and D. Laurendeau, "Real-Time Eye Blink Detection with GPU-based SIFT Tracking," Proceedings of the Fourth Canadian Conference on Computer and Robot Vision (CRV'07), pp. 481-487, 2007.
  5. P. Polatsek, "Eye Blink Detection," Proceedings of Informatics and Information Technologies (IIT.SRC 2013), pp. 1-8, 2013.
  6. M. Chau and M. Betke, "Real Time Eye Tracking and Blink Detection with USB Cameras," Boston University Computer Science Technical Report No. 2005-12, 2005.
  7. T. Danisman, I.M. Bilasco, C. Djeraba, and N. Ihaddadene, "Drowsy Driver Detection System Using Eye Blink Patterns," Proceedings of International Conference on Machine and Web Intelligence (ICMWI), pp. 230-233, 2010.
  8. T. Ito, S. Mita, K. Kozuka, T. Nakano, and S. Yamamoto, "Driver Blink Measurement by the Motion Picture Processing and its Application to Drowsiness Detection," Proceedings of the IEEE 5th International Conference on Intelligent Transportation Systems, pp. 168-173, 2002.
  9. H. Ueno, M. Kaneda, and M. Tsukino, "Development of Drowsiness Detection System," Proceedings of Vehicle Navigation and Information Systems Conference, pp. 15-20, 1994.
  10. A. Eskandarian and A. Mortazavi, "Evaluation of a Smart Algorithm for Commercial Vehicle Driver Drowsiness Detection," Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, pp. 553-559, 2007.
  11. Q. Wang, J. Yang, M. Ren, and Y. Zheng, "Driver Fatigue Detection: A Survey," Proceedings of the 6th World Congress on Intelligent Control and Automation, Vol. 2, pp. 8587-8591, 2006.