• Title/Summary/Keyword: artificial intelligence design


Model-Based Intelligent Framework Interface for UAV Autonomous Mission (무인기 자율임무를 위한 모델 기반 지능형 프레임워크 인터페이스)

  • Son Gun Joon;Lee Jaeho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.3
    • /
    • pp.111-121
    • /
    • 2024
  • Recently, thanks to advances in artificial intelligence technologies such as image recognition, research on unmanned aerial vehicles (UAVs) has been actively conducted. Related work is growing particularly in the military drone field, where training professional pilots is expensive; one such line of research is the study of intelligent frameworks for the autonomous mission performance of reconnaissance drones. In this study, we designed an intelligent framework for UAVs using a methodology originally developed for designing intelligent frameworks for service robots. For autonomous mission performance, the intelligent framework and the UAV module must be smoothly linked. However, the model-based interfaces of existing service-robot intelligent frameworks could not interwork with UAVs that use periodic message protocols, for two reasons: the message model lacked the expressive power to describe periodic messages, and no interoperability was provided between the periodic message protocol and the framework's asynchronous data exchange method. To solve these problems, this paper proposes a message-model extension for describing message periodicity, which secures the expressive power needed for periodic messages, and proposes periodic and asynchronous data exchange methods that use the extended model to provide interoperability between the two different data exchange styles.
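The interworking problem the abstract describes, a periodic message stream on the UAV side versus asynchronous, event-driven exchange in the framework, can be illustrated with a minimal sketch. The class and method names below are hypothetical illustrations, not the paper's actual model-based interface:

```python
from typing import Any, Callable, List


class PeriodicToAsyncBridge:
    """Minimal sketch: poll a periodic message source at a fixed period,
    but notify subscribers only when the payload changes, turning the
    periodic stream into asynchronous events."""

    def __init__(self, poll: Callable[[], Any], period_s: float) -> None:
        self.poll = poll              # reads the latest periodic message
        self.period_s = period_s      # declared message period (metadata)
        self._subscribers: List[Callable[[Any], None]] = []
        self._last: Any = object()    # sentinel differs from any payload

    def subscribe(self, callback: Callable[[Any], None]) -> None:
        self._subscribers.append(callback)

    def step(self) -> None:
        """One polling cycle; a scheduler would call this every period_s."""
        value = self.poll()
        if value != self._last:       # change detection gates the events
            self._last = value
            for cb in self._subscribers:
                cb(value)
```

For example, a source that repeats 1, 1, 2, 2, 3 over five cycles produces only three events (1, 2, 3) on the asynchronous side.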

Optimizing Clustering and Predictive Modelling for 3-D Road Network Analysis Using Explainable AI

  • Rotsnarani Sethy;Soumya Ranjan Mahanta;Mrutyunjaya Panda
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.9
    • /
    • pp.30-40
    • /
    • 2024
  • Building an accurate 3-D spatial road network model has become an active area of research, promising a new paradigm for smart roads and intelligent transportation systems (ITS) that can help public and private road operators improve road mobility and eco-routing, so that better traffic flow, lower carbon emissions, and improved road safety can be ensured. Dealing with such large-scale 3-D road network data poses challenges in obtaining accurate elevation information, which is needed to better estimate CO2 emissions and to route vehicles accurately in an Internet of Vehicles (IoV) scenario. Clustering and regression techniques are suitable for recovering the missing elevation information at some points of a 3-D spatial road network dataset, which is expected to give the public a better eco-routing experience. Furthermore, Explainable Artificial Intelligence (xAI) has recently drawn researchers' attention because it makes models more interpretable, transparent, and comprehensible, enabling the design of efficient models that can be chosen according to users' requirements. The 3-D road network dataset, comprising spatial attributes (longitude, latitude, altitude) of North Jutland, Denmark, and collected from the publicly available UCI repository, is preprocessed through feature engineering and scaling to ensure optimal accuracy for the clustering and regression tasks. K-Means clustering and regression using a Support Vector Machine (SVM) with a radial basis function (RBF) kernel are employed for the 3-D road network analysis. Silhouette scores and the number of clusters are used to measure cluster quality, while error metrics such as MAE (Mean Absolute Error) and RMSE (Root Mean Square Error) are used to evaluate the regression method. For better interpretability of the clustering and regression models, SHAP (Shapley Additive Explanations), a powerful xAI technique, is employed in this research.
From extensive experiments, it is observed that SHAP analysis validated the importance of latitude and altitude in predicting longitude, particularly in the four-cluster setup, providing critical insights into model behavior and feature contributions, with an accuracy of 97.22% and strong performance metrics across all classes (MAE of 0.0346 and MSE of 0.0018). On the other hand, the ten-cluster setup, while faster in SHAP analysis, presented challenges in interpretability due to increased clustering complexity. Hence, K-Means clustering with K=4 combined with the SVM hybrid model demonstrated superior performance and interpretability, highlighting the importance of careful cluster selection to balance model complexity and predictive accuracy.
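The cluster-then-regress pipeline described above can be sketched as follows, assuming scikit-learn is available. The function names are mine, the data here is synthetic rather than the North Jutland dataset, and the SHAP step (via the `shap` package's explainers) is noted in a comment but not executed:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR


def fit_cluster_svr(X, y, n_clusters=4, seed=0):
    """Partition the points with K-Means, then fit one RBF-kernel SVR
    per cluster (e.g. predicting longitude from latitude and altitude).
    A SHAP explainer could then be run on each fitted per-cluster SVR."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    models = {c: SVR(kernel="rbf").fit(X[km.labels_ == c],
                                       y[km.labels_ == c])
              for c in range(n_clusters)}
    return km, models


def predict_cluster_svr(km, models, X):
    """Route each point to its cluster's regressor."""
    labels = km.predict(X)
    return np.array([models[c].predict(x[None, :])[0]
                     for c, x in zip(labels, X)])
```

The split into per-cluster regressors is what makes the choice of K matter for both accuracy and the cost of the subsequent SHAP analysis.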

Designing A De-identified Municipal CCTV Of Live Video Service For Citizens (비식별 처리된 지방자치단체 CCTV 실시간 영상 대시민 제공 서비스의 설계)

  • Dong-Hyun Lim;Dea-Woo Park
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.5
    • /
    • pp.951-958
    • /
    • 2024
  • The public requests to view and receive copies of local government CCTV footage in order to deal with unauthorized dumping, lost property, and other inconveniences of daily life, and to confirm whether an incident or accident has occurred. However, because of the difficulty of identification and the privacy of other people appearing in the video, local governments cannot provide the original CCTV footage, so they release footage only after manual masking. As a result, the public demands that local governments resolve the resulting distrust and dissatisfaction, and secondary users of video information, such as the police, fire services, and courts, demand quick and efficient ways to improve video utilization. In this paper, we study how to make local government CCTV video information available to the public. To this end, we de-identify the entire video by mosaicking it into blocks of a fixed size, we automate and simplify the process of viewing and creating copies, and we apply artificial intelligence pattern technology to design countermeasures against video misuse. If the government reflects these results in its policies, the public's right to know will be met, public trust in the government will increase, and the government's administrative efficiency will improve.
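The whole-frame mosaic step can be sketched in a few lines of NumPy. This is an illustrative version under my own assumptions (the paper does not specify the block size or implementation); applying it to the entire frame means no face detection, and hence no identification, is needed:

```python
import numpy as np


def mosaic(frame: np.ndarray, block: int = 16) -> np.ndarray:
    """De-identify a frame by replacing every block x block tile with
    its mean colour -- a simple mosaic applied to the whole frame."""
    out = frame.copy()
    h, w = frame.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = frame[y:y + block, x:x + block]
            # the per-channel mean flattens everything inside the tile
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1))
    return out
```

In a real CCTV pipeline this would run per frame before the stream leaves the local government's network.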

A Study on Information Bias Perceived by Users of AI-driven News Recommendation Services: Focusing on the Establishment of Ethical Principles for AI Services (AI 자동 뉴스 추천 서비스 사용자가 인지하는 정보 편향성에 대한 연구: AI 서비스의 윤리 원칙 수립을 중심으로)

  • Minjung Park;Sangmi Chai
    • Knowledge Management Research
    • /
    • v.25 no.3
    • /
    • pp.47-71
    • /
    • 2024
  • AI-driven news recommendation systems are widely used today, providing personalized news consumption experiences. However, there are significant concerns that these systems may increase users' information bias by mainly showing information from limited perspectives. This lack of access to diverse information can prevent users from forming well-rounded viewpoints on specific issues, leading to social problems such as filter bubbles and echo chambers, which can deepen social divides and information inequality. This study explores how AI-based news recommendation services affect users' perceived information bias and lays a foundation for ethical principles in AI services. Specifically, it examines the impact of ethical principles such as accountability, the right to explanation, the right to choose, and privacy protection on users' perceptions of information bias in AI news systems. The findings emphasize the need for AI service providers to strengthen ethical standards in order to improve service quality and build the user trust needed for long-term use. By identifying which ethical principles should be prioritized in the design and implementation of AI services, this study aims to help develop corporate ethical frameworks, internal policies, and national AI ethics guidelines.

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into model-design studies for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease predominated; domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia is a chronic disease of high importance alongside hypertension and diabetes, this study selected hyperlipidemia as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are known to have excellent predictive power. To achieve the purpose of this study, we used the 2012 Korea Health Panel data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the inpatient, outpatient, emergency, and chronic-disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression: among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group, and these variables were analyzed.
Six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, the C4.5 decision tree algorithm was used; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used: for SVM, the genetic algorithm selected six input variables (age, marital status, education level, economic activity, smoking period, and physical activity status), while for the artificial neural network it selected three (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, and compared classification performance using TP rate and precision. The main results are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected by the stepwise method was slightly higher than that of models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as inputs. With the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network 87.9%. Finally, this study showed that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and MLP as input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases.
To do this, we used SVM and meta-learning algorithms, which are known to have high accuracy. As a result, the classification accuracy of stacking as a meta-learner was higher than that of the other meta-learning algorithms. However, the predictive performance of the proposed meta-learning algorithm is the same as that of the SVM with the best performance (88.6%) among the single models. The limitations of this study are as follows. First, although various variable selection methods were tried, most variables used in the study were categorical dummy variables; with a large number of categorical variables, the results may differ if continuous variables are used, because models such as decision trees can be better suited to categorical variables than general models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously. The result of improving model accuracy by applying various variable selection techniques is also meaningful. In addition, we expect the proposed model to be effective for the prevention and management of hyperlipidemia.
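The best-performing configuration reported above (SVM and MLP base learners whose predicted outputs feed an SVM meta-classifier) corresponds to a standard stacking setup. A sketch with scikit-learn, on a generic numeric feature matrix rather than the Korea Health Panel data, under my own parameter choices:

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Base learners: SVM and MLP; meta-learner: an SVM trained on the base
# models' cross-validated predicted outputs, as in stacking.
stack = StackingClassifier(
    estimators=[("svm", SVC()),
                ("mlp", MLPClassifier(max_iter=500, random_state=0))],
    final_estimator=SVC(),
    cv=5)
```

The paper's inputs were dummy-coded survey variables; any binary-labeled feature matrix works the same way, e.g. `stack.fit(X, y)` followed by `stack.score(X_test, y_test)`.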

The Need and Improvement Direction of New Computer Media Classes in Landscape Architectural Education in University (대학 내 조경전공 교육과정에 있어 새로운 컴퓨터 미디어 수업의 필요와 개선방향)

  • Na, Sungjin
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.1
    • /
    • pp.54-69
    • /
    • 2021
  • In 2020, the overall lifestyle of civilized society showed a distinct shift from consumable analog media, such as paper, to digital media, with the increased penetration of cloud computing, and from wired to wireless media. Based on these social changes, this work examines whether computer media are being appropriately applied in the field of landscape architecture, and gives directions for new computer media classes in landscape architectural education in the era of the 4th Industrial Revolution. Landscape architecture is a field that directly proposes the realization of a positive lifestyle and the creation of living environments, and it is closely connected with social change. However, there is no clear evidence that landscape architectural education is making any visible change, while the digital infrastructure of the 4th Industrial Revolution, such as Artificial Intelligence (AI), Big Data, autonomous vehicles, cloud networks, and the Internet of Things, is changing contemporary society technologically, culturally, and economically. Therefore, it is necessary to review the current use of computer technology and media in landscape architectural education, and to examine alternative directions for the curriculum in the new digital era. First, the basis for discussion was established by studying trends of computational design in modern landscape architecture. Next, the changes and current status of computer media classes in domestic and overseas landscape education were analyzed based on prior research and curricula. As a result, the number and types of computer media classes increased significantly in foreign landscape architecture departments between a 1994 study and the 2020 situation, whereas there were no obvious changes in domestic departments. This shows that domestic landscape education is responding passively to the changes of the digital era.
Lastly, based on these discussions, this study examined alternatives for the new curriculum that landscape architecture departments should pursue in the new digital world.

Characterizing Strategy of Emotional sympathetic Robots in Animation and Movie - Focused on Appearance and Behavior tendency Analysis - (애니메이션 및 영화에 등장하는 정서교감형 로봇의 캐릭터라이징 전략 - 외형과 행동 경향성 분석을 중심으로 -)

  • Ryu, Beom-Yeol;Yang, Se-Hyeok
    • Cartoon and Animation Studies
    • /
    • s.48
    • /
    • pp.85-116
    • /
    • 2017
  • The purpose of this study is to analyze the conditions under which robots depicted in cinematographic works such as animations and movies sympathize and form attachments with the main character, and to organize characterizing strategies for emotionally sympathetic robots. With the development of technology, artificial intelligence and robots are no longer considered to belong to science fiction but are realistic issues. Therefore, this author assumes that the expressive characteristics of emotionally sympathetic robots created in cinematographic works can serve as meaningful factors in the expressive embodiment of the human-friendly service robots to be distributed widely in the future, that is, in establishing the features of their characters; this research began in order to lay the grounds for that. As subjects of analysis, this researcher chose robot characters whose emotional intimacy with the main character is clearly observed, among those found in movies and animations produced after 1920, when the modern concept of the robot was introduced. Also, to understand the robots' appearance and behavioral tendencies, this study (1) classified the robots' external impressions into five types (human-like, cartoon, tool-like, artificial being, pet or creature) and (2) classified behavioral tendencies, considered to be the outer embodiment of personality, using DiSC, a tool for diagnosing behavioral patterns. It was observed that robots with high emotional intimacy are all strongly independent in their duties and show great emotional acceptance. The 'Influence' and 'Steadiness' types show great emotional acceptance, the 'Influence' type tends to be highly independent, and the 'Conscientiousness' type tends to show less emotional acceptance and independence in general. Yet, according to the analysis of external impressions, appearance factors hardly have any significant relationship with emotional sympathy.
This implies that, for robots to achieve strong emotional sympathy, sympathy grounded in communication matters more than first impressions, much as in the formation of interpersonal relationships in reality. Lastly, studying robot characters requires consilient competence that broadly embraces different fields. This author also felt that design factors or personality factors alone are not enough to fully characterize robot characters, nor to analyze the vast amount of information involved in sympathizing with humans. Nevertheless, this researcher concludes this thesis as a foundation for such work, expecting that the general artistic value of animation can later be put to good use in developing robots, which must be studied interdisciplinarily.

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Therefore, researchers have tried to forecast financial time-series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have been popularly applied to this research area because they do not require huge amounts of training data and have a low risk of overfitting. However, a user must determine several design factors by heuristics in order to use an SVM: the selection of an appropriate kernel function and its parameters, and proper feature subset selection, are major design factors. Beyond these, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously. We call this model ISVM (SVM with Instance Selection). Experiments on stock market data are implemented using ISVM: the GA searches for optimal or near-optimal values of the kernel parameters and the relevant instances for the SVM, so each chromosome encodes two sets of parameters, the codes for the kernel parameters and those for instance selection.
For the controlling parameters of the GA search, the population size is set to 50 organisms, the crossover rate to 0.7, and the mutation rate to 0.1; as the stopping condition, 50 generations are permitted. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), for a total of 2,218 trading days. We separate the data into training, test, and hold-out subsets of 1,056, 581, and 581 samples, respectively. This study compares ISVM to several comparative models, including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), a conventional SVM (SVM), and an SVM with kernel parameters optimized by the genetic algorithm (PSVM). The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1,056 original training instances are used to produce the result. In addition, a two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% level.
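The simultaneous GA search over kernel parameters and instance subsets can be sketched with a small genetic algorithm. This is a toy version under my own encoding choices (smaller population and fewer generations than the paper's 50/50, real-valued chromosomes, elitist selection), not the authors' implementation:

```python
import numpy as np
from sklearn.svm import SVC


def fitness(chrom, X_tr, y_tr, X_val, y_val):
    """Chromosome = [log2 C, log2 gamma, one soft bit per instance]."""
    C, gamma = 2.0 ** chrom[0], 2.0 ** chrom[1]
    mask = chrom[2:] > 0.5            # which training instances to keep
    if mask.sum() < 10 or len(set(y_tr[mask])) < 2:
        return 0.0                    # degenerate subset: worst fitness
    clf = SVC(C=C, gamma=gamma).fit(X_tr[mask], y_tr[mask])
    return clf.score(X_val, y_val)    # validation accuracy as fitness


def evolve(X_tr, y_tr, X_val, y_val, pop=20, gens=10, seed=0):
    """Tiny GA: keep the elite half, uniform crossover, 0.1 mutation."""
    rng = np.random.default_rng(seed)
    n = len(X_tr)
    P = np.c_[rng.uniform(-3, 5, (pop, 2)), rng.random((pop, n))]
    for _ in range(gens):
        scores = np.array([fitness(c, X_tr, y_tr, X_val, y_val) for c in P])
        elite = P[np.argsort(scores)[::-1][:pop // 2]]
        pairs = rng.integers(0, len(elite), (pop - len(elite), 2))
        coin = rng.random((pop - len(elite), P.shape[1])) < 0.5
        children = np.where(coin, elite[pairs[:, 0]], elite[pairs[:, 1]])
        mut = rng.random(children.shape) < 0.1
        children[mut] += rng.normal(0.0, 0.5, mut.sum())
        P = np.vstack([elite, children])
    scores = np.array([fitness(c, X_tr, y_tr, X_val, y_val) for c in P])
    best = int(np.argmax(scores))
    return P[best], float(scores[best])
```

The returned chromosome jointly encodes the tuned C and gamma and the retained instance subset, mirroring how ISVM ends up training on a reduced set (556 of 1,056 instances in the paper).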

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we needed a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning, and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we instead create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and on the structures of business processes.
We build our approaches on simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance, and also utilize a tree edit distance measure because semantic processes appear to have a graph structure. In addition, we design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are used to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF, and the method combining TF-IDF with Levenshtein edit distance, perform better than the other devised methods; these two measures focus on similarity between the names and descriptions of processes. In addition, we calculate the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
We generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository. We suggest imprecise query algorithms that expand the retrieval results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with datasets from other domains, and, since the diverse measures yield many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
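Two of the attribute-level building blocks named above, Levenshtein edit distance on process names/descriptions and Jaccard overlap on subcomponent sets, are compact enough to sketch directly. The function names and the normalization are mine, not the paper's (TF-IDF and tree edit distance are omitted):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the standard two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def jaccard(s: set, t: set) -> float:
    """Overlap of two subcomponent sets (part processes, goals, ...)."""
    return len(s & t) / len(s | t) if s | t else 1.0


def name_similarity(a: str, b: str) -> float:
    """Levenshtein distance normalized to a [0, 1] similarity."""
    m = max(len(a), len(b))
    return 1.0 - levenshtein(a, b) / m if m else 1.0
```

A combined measure in the spirit of Lev-TFIDF-JaccardAll would then weight `name_similarity` on attributes against `jaccard` on structure.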

A case study of elementary school mathematics-integrated classes based on AI Big Ideas for fostering AI thinking (인공지능 사고 함양을 위한 인공지능 빅 아이디어 기반 초등학교 수학 융합 수업 사례연구)

  • Chohee Kim;Hyewon Chang
    • The Mathematical Education
    • /
    • v.63 no.2
    • /
    • pp.255-272
    • /
    • 2024
  • This study aims to design mathematics-integrated classes that cultivate artificial intelligence (AI) thinking and to analyze students' AI thinking within these classes. To do this, four classes were designed by integrating the AI4K12 Initiative's AI Big Ideas with the 2015 revised elementary mathematics curriculum, and three of them were implemented with 5th and 6th grade elementary school students. Leveraging the computational thinking taxonomy and the components of AI thinking, a comprehensive framework for analyzing AI thinking was established. Using this framework, students' AI thinking during the classes was analyzed based on classroom discourse and supplementary worksheets, and the results of the analysis were peer-reviewed by two researchers. The findings affirm the potential of mathematics-integrated classes for nurturing students' AI thinking and underscore the viability of AI education for elementary school students. The classes based on AI Big Ideas facilitated elementary students' understanding of AI concepts and principles, enhanced their grasp of mathematical content elements, and reinforced mathematical process aspects. Furthermore, through activities that maintained structural consistency with previous problem-solving methods while applying them to new problems, the potential for transfer of AI thinking was evidenced.