Acknowledgement
This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea under the 2020 Mid-Career Researcher Program in the Humanities and Social Sciences (NRF-2020S1A5A2A01040945).
References
- Yoon, Jung Won & Syn, Sue Yeon (2021). How do formats of health related Facebook posts effect on eye movements and cognitive outcomes?. Journal of the Korean Society for Library and Information Science, 55(3), 219-237. https://doi.org/10.4275/KSLIS.2021.55.3.219
- Allegretti, M., Moshfeghi, Y., Hadjigeorgieva, M., Pollick, F. E., Jose, J. M., & Pasi, G. (2015). When relevance judgement is happening? An EEG-based study. Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, 719-722. https://doi.org/10.1145/2766462.2767811
- Bhattacharya, N., Rakshit, S., Gwizdka, J., & Kogut, P. (2020). Relevance prediction from eye-movements using semi-interpretable convolutional neural networks. Proceedings of the 2020 Conference on Human Information Interaction and Retrieval, 223-233. https://doi.org/10.1145/3343413.3377960
- Borgalli, R. A. & Surve, S. (2022). Deep learning for facial emotion recognition using custom CNN architecture. Journal of Physics: Conference Series, 2236(1), 012004. https://doi.org/10.1088/1742-6596/2236/1/012004
- Davis, K. M., Spape, M., & Ruotsalo, T. (2022). Contradicted by the brain: Predicting individual and group preferences via brain-computer interfacing. IEEE Transactions on Affective Computing, 14(4), 3094-3105. https://doi.org/10.1109/TAFFC.2022.3225885
- DeLong, K. A., Quante, L., & Kutas, M. (2014). Predictability, plausibility, and two late ERP positivities during written sentence comprehension. Neuropsychologia, 61, 150-162. https://doi.org/10.1016/j.neuropsychologia.2014.06.016
- Eugster, M. J., Ruotsalo, T., Spape, M. M., Barral, O., Ravaja, N., Jacucci, G., & Kaski, S. (2016). Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals. Scientific Reports, 6(1), 38580. https://doi.org/10.1038/srep38580
- Eugster, M. J., Ruotsalo, T., Spape, M. M., Kosunen, I., Barral, O., Ravaja, N., Jacucci, G., & Kaski, S. (2014). Predicting term-relevance from brain signals. Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, 425-434. https://doi.org/10.1145/2600428.2609594
- Evans, W. J., Cui, L., & Starr, A. (1995). Olfactory event-related potentials in normal human subjects: Effects of age and gender. Electroencephalography and Clinical Neurophysiology, 95(4), 293-301. https://doi.org/10.1016/0013-4694(95)00055-4
- Foley, J. J. & Kwan, P. (2015). Feature extraction in content-based image retrieval. In M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and Technology (3rd ed.). Pennsylvania: IGI Global.
- Golenia, J. E., Wenzel, M. A., Bogojeski, M., & Blankertz, B. (2018). Implicit relevance feedback from electroencephalography and eye tracking in image search. Journal of Neural Engineering, 15(2), 026002. https://doi.org/10.1088/1741-2552/aa9999
- Gwizdka, J. & Zhang, Y. (2015). Differences in eye-tracking measures between visits and revisits to relevant and irrelevant web pages. Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, 811-814. https://doi.org/10.1145/2766462.2767795
- Gwizdka, J., Hosseini, R., Cole, M., & Wang, S. (2017). Temporal dynamics of eye-tracking and EEG during reading and relevance decisions. Journal of the Association for Information Science and Technology, 68(10), 2299-2312. https://doi.org/10.1002/asi.23904
- Jacucci, G., Barral, O., Daee, P., Wenzel, M., Serim, B., Ruotsalo, T., Pluchino, P., Freeman, J., Gamberini, L., Kaski, S., & Blankertz, B. (2019). Integrating neurophysiologic relevance feedback in intent modeling for information retrieval. Journal of the Association for Information Science and Technology, 70(9), 917-930. https://doi.org/10.1002/asi.24161
- Kim, H. H. & Kim, Y. H. (2019a). Video summarization using event-related potential responses to shot boundaries in real-time video watching. Journal of the Association for Information Science and Technology, 70(2), 164-175. https://doi.org/10.1002/asi.24103
- Kim, H. H. & Kim, Y. H. (2019b). ERP/MMR algorithm for classifying topic-relevant and topic-irrelevant visual shots of documentary videos. Journal of the Association for Information Science and Technology, 70(9), 931-941. https://doi.org/10.1002/asi.24179
- Mayer, R. E. (2005). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 134-146). New York: Cambridge University Press.
- Pold, T., Bachmann, M., Paeske, L., Kalev, K., Lass, J., & Hinrikus, H. (2018). EEG spectral asymmetry is dependent on education level of men. World Congress on Medical Physics and Biomedical Engineering 2018, 2, 405-408. https://doi.org/10.1007/978-981-10-9038-7_76
- Schindler, S. & Bublatzky, F. (2020). Attention and emotion: An integrative review of emotional face processing as a function of attention. Cortex, 130, 362-382. https://doi.org/10.1016/j.cortex.2020.06.010
- Shi, Z. F., Zhou, C., Zheng, W. L., & Lu, B. L. (2017). Attention evaluation with eye tracking glasses for EEG-based emotion recognition. 8th International IEEE/EMBS Conference on Neural Engineering (NER), 86-89. https://doi.org/10.1109/NER.2017.8008298
- Udurume, M., Valverde, E. C., Caliwag, A., Kim, S., & Lim, W. (2023). Real-time multimodal emotion recognition based on multithreaded weighted average fusion. Journal of the Ergonomics Society of Korea, 42(5), 417-433. https://doi.org/10.5143/JESK.2023.42.5.417
- Ul Haq, H. B., Asif, M., Ahmad, M. B., Ashraf, R., & Mahmood, T. (2022). An effective video summarization framework based on the object of interest using deep learning. Mathematical Problems in Engineering, 2022, 7453744. https://doi.org/10.1155/2022/7453744
- Wang, W., Song, H., Zhao, S., Shen, J., Zhao, S., Hoi, S. C., & Ling, H. (2019). Learning unsupervised video object segmentation through visual attention. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3064-3074.
- Ye, Z., Xie, X., Liu, Y., Wang, Z., Chen, X., Zhang, M., & Ma, S. (2022). Towards a better understanding of human reading comprehension with brain signals. Proceedings of the ACM Web Conference, 380-391. https://doi.org/10.1145/3485447.3511966
- Zheng, W. L., Liu, W., Lu, Y., Lu, B. L., & Cichocki, A. (2018). EmotionMeter: A multimodal framework for recognizing human emotions. IEEE Transactions on Cybernetics, 49(3), 1110-1122. https://doi.org/10.1109/TCYB.2018.2797176