• Title/Summary/Keyword: artificial intelligence-based model

IoT Open-Source and AI based Automatic Door Lock Access Control Solution

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Young, Ko Eun;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.2
    • /
    • pp.8-14
    • /
    • 2020
  • Recently, there has been increasing demand for an integrated access control system capable of user recognition, door control, and facility operations control for smart building automation. Market-available door lock access control solutions need to be improved beyond the current level of door lock security, where security is compromised when a password or digital key is exposed to strangers. At present, access control solution providers are focusing on developing automatic access control systems using radio frequency (RF)-based technologies such as Bluetooth and WiFi. All existing automatic door access control technologies require an additional hardware interface and are always vulnerable to security threats. This paper proposes a user identification and authentication solution for automatic door lock control using camera-based visible light communication (VLC) technology. The proposed approach uses the cameras installed in the building facility, user smart devices, and IoT open-source controller-based LED light sensors installed in the building infrastructure. The LED light sensors installed in the building facility transmit the authorized user and facility information as a color grid code; the smart device camera decodes the user information, verifies it against stored user information, indicates the authentication status to the user, and sends an authentication acknowledgement to the camera integrated with the facility door lock to control the door lock operation. The camera-based VLC receiver uses artificial intelligence (AI) methods to decode the VLC data and improve VLC performance. This paper implements a testbed model using IoT open-source-based LED light sensors with a CCTV camera and user smartphone devices. The experimental results are verified with custom-made convolutional neural network (CNN)-based AI techniques for the VLC decoding method on smart devices and PC-based CCTV monitoring solutions. The experimental results confirm that the proposed door access control solution is effective and robust for automatic door access control.
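The color-grid idea can be illustrated with a toy decoder. The 2-bit-per-cell color mapping below is a hypothetical example, not the paper's actual grid code, and it replaces the CNN-based decoding the paper uses with a plain lookup:

```python
# Toy color-grid VLC decoder: each grid cell's color encodes 2 bits.
# The color-to-bits mapping is a hypothetical example, not the paper's
# actual code design (which decodes camera frames with a CNN).
COLOR_TO_BITS = {"red": "00", "green": "01", "blue": "10", "white": "11"}

def decode_grid(cells):
    """Concatenate the bit pairs of a row-major list of cell colors."""
    return "".join(COLOR_TO_BITS[c] for c in cells)

def grid_to_user_id(cells):
    """Interpret the decoded bit string as an integer user ID."""
    return int(decode_grid(cells), 2)

# Four cells carry one byte of user information.
cells = ["green", "blue", "red", "white"]
```

In the paper's setting the lookup step is what the CNN learns, since real camera frames give noisy colors rather than clean labels.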

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology to extract answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology is divided into the following steps: 1) collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the proper documents; 2) determine whether each sentence is suitable for extracting information and derive a confidence score; 3) based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; compared with the baseline model, the proposed system shows higher performance. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and previous research suffers from poor performance when extracting information from document types different from the training data; the proposed methodology proved to extract information effectively from various document types compared to the baseline model. In addition, this study prevents unnecessary extraction attempts on documents that do not include the answer information, through a process that predicts the suitability of documents and sentences for information extraction before the extraction step, which is meaningful in that precision can be maintained even in a real web environment. Information extraction for knowledge base expansion cannot guarantee that a document includes the correct answer, because it targets unstructured documents on the real web; when question answering is performed on the real web, previous machine reading comprehension studies show a low level of precision because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy of predicting document and sentence suitability therefore contributes to maintaining extraction performance in a real web environment. The limitations of this study and future research directions are as follows. First, there is a problem related to data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and the extraction result can be degraded when morphological analysis is not performed properly, so an advanced morphological analyzer is needed to improve performance. Second, there is the problem of entity ambiguity: the information extraction system of this study cannot distinguish entities that share the same name, so if several people with the same name appear in the news, the system may not extract information about the intended query; future research needs measures to identify the intended person among those with the same name. Third, there is the problem of evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and developed an evaluation data set using 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news) by judging whether a correct answer is included. To ensure the external validity of the study, it is desirable to use more queries to evaluate the system; since this is a costly manual activity, future research needs to evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction over queries from multi-source web documents, to build an environment in which results can be evaluated more objectively.
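The suitability-gating step described above, attempting extraction only when a sentence is predicted suitable, can be sketched as follows; the suitability classifier and extractor are stand-in callables, not the paper's bi-LSTM-CRF models:

```python
def gated_extract(sentences, suitability, extract, threshold=0.5):
    """Run the extractor only on sentences whose predicted suitability
    score clears the threshold, skipping sentences unlikely to contain
    the answer. Both callables are stand-ins for trained models."""
    results = []
    for sentence in sentences:
        score = suitability(sentence)
        if score >= threshold:
            results.append((sentence, extract(sentence), score))
    return results

# Dummy models for illustration only.
suit = lambda s: 0.9 if "capital" in s else 0.1
extr = lambda s: s.split()[-1]
out = gated_extract(["Paris is the capital", "unrelated noise"], suit, extr)
```

The gate is what keeps precision up on real web documents: sentences below the threshold never reach the extractor, so no spurious answer can be produced from them.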

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.127-137
    • /
    • 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors also began in Korea. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of recidivism prediction, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not recidivate as one who will is lower than the cost of misclassifying a person who will recidivate as one who will not: the former only adds monitoring costs, while the latter incurs social and economic costs. Therefore, in this paper, we propose an XGBoost (eXtreme Gradient Boosting; XGB)-based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, is applied, and its results are compared with various prediction models such as logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). In the next step, the classification threshold is optimized to minimize the total misclassification cost, which is the weighted average of the false negative error (FNE) and the false positive error (FPE). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
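The threshold-optimization step can be sketched as a simple sweep that minimizes the weighted misclassification cost; the 5:1 cost ratio below is an illustrative assumption, not the paper's actual weighting:

```python
import numpy as np

def optimal_threshold(y_true, y_prob, cost_fn=5.0, cost_fp=1.0):
    """Sweep classification thresholds and return the one minimizing
    total cost = cost_fn * (false negatives) + cost_fp * (false positives).
    The asymmetric 5:1 ratio is an illustrative assumption."""
    best_t, best_cost = 0.5, float("inf")
    for t in np.linspace(0.01, 0.99, 99):
        pred = (y_prob >= t).astype(int)
        fn = int(np.sum((y_true == 1) & (pred == 0)))  # missed recidivists
        fp = int(np.sum((y_true == 0) & (pred == 1)))  # extra monitoring
        cost = cost_fn * fn + cost_fp * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost
```

Because false negatives are weighted more heavily, the chosen threshold tends to sit lower than the accuracy-maximizing 0.5, flagging more borderline cases as likely recidivists.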

Exploring automatic scoring of mathematical descriptive assessment using prompt engineering with the GPT-4 model: Focused on permutations and combinations (프롬프트 엔지니어링을 통한 GPT-4 모델의 수학 서술형 평가 자동 채점 탐색: 순열과 조합을 중심으로)

  • Byoungchul Shin;Junsu Lee;Yunjoo Yoo
    • The Mathematical Education
    • /
    • v.63 no.2
    • /
    • pp.187-207
    • /
    • 2024
  • In this study, we explored the feasibility of automatically scoring descriptive assessment items using GPT-4-based ChatGPT by comparing and analyzing the scoring results of teachers and GPT-4-based ChatGPT. For this purpose, three descriptive items from the permutation and combination unit for first-year high school students were selected from the KICE (Korea Institute for Curriculum and Evaluation) website. Items 1 and 2 had only one problem-solving strategy, while Item 3 had more than two strategies. Two teachers, each with over eight years of educational experience, graded answers from 204 students, and these were compared with the results from GPT-4-based ChatGPT. Various techniques such as Few-Shot-CoT, SC, structured, and iterative prompts were utilized to construct scoring prompts, which were then input into GPT-4-based ChatGPT for scoring. The scoring results for Items 1 and 2 showed a strong correlation between the teachers' and GPT-4's scoring. For Item 3, which involved multiple problem-solving strategies, the student answers were first classified according to their strategies using prompts input into GPT-4-based ChatGPT; following this classification, scoring prompts tailored to each type were applied and input into GPT-4-based ChatGPT for scoring, and these results also showed a strong correlation with the teachers' scoring. Through this, the potential for GPT-4 models with prompt engineering to assist teachers' scoring was confirmed, and the limitations of this study and directions for future research are presented.
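A few-shot scoring prompt of the kind described can be assembled programmatically; the rubric and example texts below are placeholders, not the study's actual prompts:

```python
def build_scoring_prompt(rubric, examples, answer):
    """Assemble a few-shot chain-of-thought scoring prompt: rubric,
    worked examples with reasoning and scores, then the target answer.
    All text here is placeholder content, not the study's prompts."""
    parts = ["You are a math grader. Score the answer step by step.",
             "Rubric:", rubric]
    for ex_answer, ex_reasoning, ex_score in examples:
        parts += ["Answer: " + ex_answer,
                  "Reasoning: " + ex_reasoning,
                  "Score: " + str(ex_score)]
    # End with the target answer and an open "Reasoning:" slot so the
    # model reasons before emitting its score.
    parts += ["Answer: " + answer, "Reasoning:"]
    return "\n".join(parts)
```

Self-consistency (SC) would then sample this prompt several times and take the majority score, and the iterative variant would feed disagreements back as follow-up prompts.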

The Design of Digital Human Content Creation System (디지털 휴먼 컨텐츠 생성 시스템의 설계)

  • Lee, Sang-Yoon;Lee, Dae-Sik;You, Young-Mo;Lee, Kye-Hun;You, Hyeon-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.4
    • /
    • pp.271-282
    • /
    • 2022
  • In this paper, we propose a digital human content creation system. The system builds a 3D AI model through whole-body scanning, followed by 3D modeling post-processing, texturing, and rigging. By combining this with virtual reality (VR) content information, natural motion of the virtual model can be achieved in virtual reality, and digital human content can be created efficiently within one system, enabling the creation of virtual reality-based digital human content with minimal resources. In addition, the system provides an automated pre-processing pipeline that removes the need for manual 3D modeling and texturing, along with technology for efficiently managing various digital human contents. In particular, since pre-processing steps such as the 3D modeling and texturing needed to construct a virtual model are performed automatically by artificial intelligence, the virtual model can be configured quickly and efficiently. The system also makes it easy to organize and manage digital human contents through signature motions.

A Study on Orthogonal Image Detection Precision Improvement Using Data of Dead Pine Trees Extracted by Period Based on U-Net model (U-Net 모델에 기반한 기간별 추출 소나무 고사목 데이터를 이용한 정사영상 탐지 정밀도 향상 연구)

  • Kim, Sung Hun;Kwon, Ki Wook;Kim, Jun Hyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.4
    • /
    • pp.251-260
    • /
    • 2022
  • Although the number of trees affected by pine wilt disease is decreasing, the affected area is expanding across the country. Recently, with the development of deep learning technology, it has been rapidly applied to the detection of pine wilt nematode damage and dead trees. The purpose of this study is to efficiently acquire deep learning training data and accurate ground truth in order to further improve the detection ability of U-Net models through learning. To achieve this, a filtering method applying a step-by-step deep learning algorithm minimizes the ambiguous analysis basis of the deep learning model, enabling efficient analysis and judgment. As a result of the analysis, in detecting dead pine trees caused by the wilt nematode with the U-Net algorithm, the U-Net model trained on ground truth analyzed by period showed a recall rate 0.5%p lower than the U-Net model trained on the previously provided ground truth, while precision was 7.6%p higher and the F1 score was 4.1%p higher. In the future, it is judged that the precision of wilt detection can be increased further by applying various filtering techniques, and that a drone surveillance method using drone orthoimages and artificial intelligence can be used in pine wilt nematode disaster prevention projects.
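The reported percentage-point differences are over the standard detection metrics, which follow directly from true positive, false positive, and false negative counts:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts. The %p figures
    in the abstract are differences between such values across models."""
    precision = tp / (tp + fp)   # fraction of detections that are real
    recall = tp / (tp + fn)      # fraction of real dead trees detected
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```

The reported trade-off (recall down 0.5%p, precision up 7.6%p) is typical of filtering ambiguous training labels: a few true cases are lost, but far fewer false alarms are raised.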

Integrating AI Generative Art and Gamification in an Art Education Model to Enhance Creative Thinking (AI 생성예술과 게임화 요소가 통합된 미술 교육 모델 개발 : 창의적 사고 향상)

  • Li Jun;Kim Yoojin
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.425-433
    • /
    • 2023
  • In this study, we developed a virtual artist play lesson model using gamification concepts and AI-generated art programs to foster creative thinking in freshman art majors. Targeting first-year students in the Digital Media Art Department at Sichuan Film & Television University in China, this course aims to alleviate fear of artistic creation and enhance problem-solving abilities. The educational model consists of four stages: persona creation, creative writing, text visualization, and virtual exhibitions. Through persona creation, students established their artist identities, and by introducing game-like elements into writing experiences, they discovered their latent creativity. Using AI-generated art programs for text visualization, students gained confidence in their creations, and in the virtual exhibitions, they were able to enhance their self-esteem as artists by appreciating and evaluating each other's works. This educational model offers a new approach to promoting creative thinking and problem-solving skills while increasing learner engagement and interest. Based on these research findings, we expect that by developing and implementing educational strategies that cultivate creative thinking, more students will grow their artistic capacities and creativity, benefiting not only art majors but also students from various fields.

Multi-Time Window Feature Extraction Technique for Anger Detection in Gait Data

  • Beom Kwon;Taegeun Oh
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.4
    • /
    • pp.41-51
    • /
    • 2023
  • In this paper, we propose a multi-time window feature extraction technique for anger detection in gait data. In previous gait-based emotion recognition methods, the pedestrian's stride, time taken for one stride, walking speed, and forward tilt angles of the neck and thorax are calculated, and the minimum, mean, and maximum values over the entire interval are used as features. However, each feature does not always change uniformly over the entire interval, but sometimes changes locally. Therefore, we propose a multi-time window feature extraction technique that can extract both global and local features, from long-term to short-term. In addition, we propose an ensemble model consisting of multiple classifiers, each trained with features extracted from a different multi-time window. To verify the effectiveness of the proposed feature extraction technique and ensemble model, a public three-dimensional gait dataset was used. The simulation results demonstrate that the proposed ensemble model achieves the best performance on four evaluation metrics compared to machine learning models trained with existing feature extraction techniques.
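The windowed min/mean/max extraction can be sketched as follows; the non-overlapping window layout and the window sizes are illustrative choices, not necessarily the paper's exact scheme:

```python
import numpy as np

def multi_window_features(series, window_sizes):
    """Collect min/mean/max over non-overlapping windows of several sizes,
    so both global (large-window) and local (small-window) behaviour of a
    gait signal ends up in one feature vector. Window sizes are
    illustrative, not the paper's settings."""
    feats = []
    for w in window_sizes:
        for start in range(0, len(series) - w + 1, w):
            seg = series[start:start + w]
            feats += [float(np.min(seg)), float(np.mean(seg)), float(np.max(seg))]
    return feats
```

In the ensemble, each classifier would be trained on the features from one window size, and their votes combined.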

Optimizing Clustering and Predictive Modelling for 3-D Road Network Analysis Using Explainable AI

  • Rotsnarani Sethy;Soumya Ranjan Mahanta;Mrutyunjaya Panda
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.9
    • /
    • pp.30-40
    • /
    • 2024
  • Building an accurate 3-D spatial road network model has become an active area of research nowadays, promising a new paradigm for developing smart roads and intelligent transportation systems (ITS) that will help public and private road operators achieve better road mobility and eco-routing, so that better traffic flow, lower carbon emissions, and road safety can be ensured. Dealing with such large-scale 3-D road network data poses challenges in obtaining accurate elevation information for a road network, which is needed to better estimate CO2 emissions and provide accurate routing for vehicles in an Internet of Vehicles (IoV) scenario. Clustering and regression techniques are suitable for recovering the missing elevation information at some points of a 3-D spatial road network dataset, which is envisaged to give the public a better eco-routing experience. Furthermore, explainable artificial intelligence (xAI) has recently drawn the attention of researchers seeking models that are more interpretable, transparent, and comprehensible, enabling the design of efficient models tailored to users' requirements. The 3-D road network dataset, comprising spatial attributes (longitude, latitude, altitude) of North Jutland, Denmark, collected from the publicly available UCI repository, is preprocessed through feature engineering and scaling to ensure optimal accuracy for the clustering and regression tasks. K-Means clustering and regression using a support vector machine (SVM) with a radial basis function (RBF) kernel are employed for the 3-D road network analysis. Silhouette scores and the number of clusters are used to measure cluster quality, whereas error metrics such as MAE (mean absolute error) and RMSE (root mean square error) are used to evaluate the regression method. For better interpretability of the clustering and regression models, SHAP (Shapley Additive Explanations), a powerful xAI technique, is employed in this research. From extensive experiments, it is observed that SHAP analysis validated the importance of latitude and altitude in predicting longitude, particularly in the four-cluster setup, providing critical insights into model behavior and feature contributions, with an accuracy of 97.22% and strong performance metrics across all classes, including an MAE of 0.0346 and an MSE of 0.0018. On the other hand, the ten-cluster setup, while faster in SHAP analysis, presented challenges in interpretability due to increased clustering complexity. Hence, K-Means clustering with K=4 combined with the SVM hybrid model demonstrated superior performance and interpretability, highlighting the importance of careful cluster selection to balance model complexity and predictive accuracy.
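The K-Means partitioning step can be sketched in plain NumPy (Lloyd's algorithm); this is a minimal illustration of the clustering, not the study's preprocessing or SVM pipeline:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain-NumPy K-Means (Lloyd's algorithm): alternate between
    assigning points to their nearest center and moving each center to
    the mean of its assigned points. A minimal sketch only."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct data points.
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Distance from every point to every center, then nearest-center labels.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each non-empty cluster's center.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

In the study's setting, each (longitude, latitude, altitude) point would be clustered this way with K=4, and a separate RBF-SVM regressor fitted per cluster.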

A Performance Comparison of Land-Based Floating Debris Detection Based on Deep Learning and Its Field Applications (딥러닝 기반 육상기인 부유쓰레기 탐지 모델 성능 비교 및 현장 적용성 평가)

  • Suho Bak;Seon Woong Jang;Heung-Min Kim;Tak-Young Kim;Geon Hui Ye
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.193-205
    • /
    • 2023
  • A large amount of floating debris from land-based sources during heavy rainfall has negative social, economic, and environmental impacts, but monitoring systems for floating debris accumulation areas and amounts are lacking. With the recent development of artificial intelligence technology, there is a need to survey large areas of water systems quickly and efficiently using drone imagery and deep learning-based object detection models. In this study, we acquired various images as well as drone images and trained You Only Look Once (YOLO)v5s and the more recently developed YOLOv7 and YOLOv8s to compare the performance of each model and propose an efficient detection technique for land-based floating debris. The qualitative performance evaluation showed that all three models detect floating debris well under normal circumstances, but the YOLOv8s model missed or duplicated objects when the image was overexposed or the water surface was highly reflective of sunlight. The quantitative performance evaluation showed that YOLOv7 had the best performance, with a mean average precision (at intersection over union, IoU 0.5) of 0.940, better than YOLOv5s (0.922) and YOLOv8s (0.922). When distortions were introduced into the color and high-frequency components to compare model performance according to data quality, the performance degradation of the YOLOv8s model was the most pronounced, while the YOLOv7 model showed the least degradation. This study confirms that the YOLOv7 model is more robust than YOLOv5s and YOLOv8s in detecting land-based floating debris. The proposed deep learning-based floating debris detection technique can identify the spatial distribution of floating debris by category, which can contribute to planning future cleanup work.
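The mAP@0.5 criterion used above counts a detection as correct when its intersection over union (IoU) with a ground-truth box is at least 0.5; a minimal IoU computation:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    Detections with IoU >= 0.5 against a ground-truth box count as
    matches in the mAP@0.5 metric reported above."""
    # Intersection rectangle (empty if the boxes do not overlap).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Averaging precision over recall levels for each class at this IoU cutoff, then over classes, gives the 0.940 vs 0.922 figures compared in the study.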