• Title/Summary/Keyword: Open Source Software


VENTOS-Based Platoon Driving Simulations Considering Variability (가변성을 고려하는 VENTOS 기반 군집 자율주행 시뮬레이션)

  • Kim, Youngjae;Hong, Jang-Eui
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.2
    • /
    • pp.45-56
    • /
    • 2021
  • In platoon driving, several autonomous vehicles communicate to exchange information with each other and drive as a single cluster. Platooning technology has various advantages, such as increasing road capacity and reducing energy consumption and pollutant emissions, by driving with short inter-vehicle distances. However, the short distance makes it more difficult to cope with an emergency, and it is accordingly difficult to ensure the safety of platoon driving, which must be secured. In particular, unexpected situations, i.e., variability that may appear during driving, can adversely affect the safety of platoon driving. Because such variability is difficult to predict and reproduce, preparing safety guards to prevent the risks arising from it is a challenging task. In this paper, we study a simulation method to avoid the risks due to the variability that may occur during platoon driving. In order to simulate safe platoon driving, we develop diverse scenarios considering the variability, design and apply safety guards to handle it, and extend the detailed functions of VENTOS, an open-source platooning simulator. Based on the simulation results, we confirmed that the risks caused by the variability can be removed and that safe platoon driving is possible. We believe that our simulation approach will contribute to research and development to ensure safety in platoon driving.
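
A minimal sketch of the kind of safety guard described above, written against SUMO's TraCI Python API rather than the authors' VENTOS extension (VENTOS itself is an OMNeT++/SUMO-based C++ simulator); the configuration file name, gap threshold, and deceleration rule are assumptions for illustration.

```python
import traci

# Assumed SUMO configuration for a platooning scenario (not the authors' files).
traci.start(["sumo", "-c", "platoon.sumocfg"])
SAFE_GAP = 15.0  # meters; assumed emergency spacing threshold

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    for veh in traci.vehicle.getIDList():
        leader = traci.vehicle.getLeader(veh)  # (leader_id, gap) or None
        if leader is None:
            continue
        leader_id, gap = leader
        # Variability event: the leader slows sharply while the gap is already short.
        if gap < SAFE_GAP and traci.vehicle.getSpeed(leader_id) < 0.5 * traci.vehicle.getSpeed(veh):
            # Safety guard: fall back by reducing own speed until the gap recovers.
            traci.vehicle.setSpeed(veh, max(traci.vehicle.getSpeed(veh) - 3.0, 0.0))

traci.close()
```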

Deep Learning-Based Prediction of the Quality of Multiple Concurrent Beams in mmWave Band (밀리미터파 대역 딥러닝 기반 다중빔 전송링크 성능 예측기법)

  • Choi, Jun-Hyeok;Kim, Mun-Suk
    • Journal of Internet Computing and Services
    • /
    • v.23 no.3
    • /
    • pp.13-20
    • /
    • 2022
  • IEEE 802.11ay Wi-Fi is a next-generation wireless technology operating in the mmWave band. It supports MU-MIMO (Multiple User Multiple Input Multiple Output) transmission, in which an AP (Access Point) can transmit multiple data streams simultaneously to multiple STAs (Stations). To this end, the AP should perform MU-MIMO beamforming training with the STAs. For efficient MU-MIMO beamforming training, it is important for the AP to estimate the signal strength measured at each STA when multiple beams are used simultaneously. Therefore, in this paper, we propose a deep learning-based link quality estimation scheme. Our proposed scheme estimates the signal strength with high accuracy by utilizing a deep learning model pre-trained for a given indoor or outdoor propagation scenario. Specifically, to estimate the signal strength of the multiple concurrent beams, our scheme uses the signal strengths of the respective single beams, which can be obtained without additional signaling overhead, as the input of the deep learning model. For performance evaluation, we utilized the Q-D (Quasi-Deterministic) Channel Realization open-source software, together with extensive channel measurement campaigns conducted with NIST (National Institute of Standards and Technology), to model the millimeter wave (mmWave) channel. Our simulation results demonstrate that the proposed scheme outperforms comparison schemes in terms of the accuracy of signal strength estimation.
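
A small sketch of the core idea, estimating the concurrent-beam signal strength from the per-single-beam strengths with a neural network; the layer sizes, beam count, and synthetic training data below are assumptions, not the authors' trained model or the NIST Q-D channel data.

```python
import numpy as np
import tensorflow as tf

n_beams = 4        # number of concurrently transmitted beams (assumed)
n_samples = 2048   # synthetic training examples

rng = np.random.default_rng(0)
single_beam_rss = rng.uniform(-80.0, -40.0, (n_samples, n_beams))   # dBm per single beam
# Toy target: concurrent-beam strength as the per-beam average plus interference noise.
multi_beam_rss = single_beam_rss.mean(axis=1, keepdims=True) + rng.normal(0.0, 1.0, (n_samples, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_beams,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # estimated signal strength under concurrent transmission
])
model.compile(optimizer="adam", loss="mse")
model.fit(single_beam_rss, multi_beam_rss, epochs=5, batch_size=64, verbose=0)

print(model.predict(single_beam_rss[:3]))
```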

A research on the Construction and Sharing of Authority Record-focusing on the Case of Social Networks and Archival Context Project (전거레코드 구축 및 공유에 관한 연구 SNAC 프로젝트 사례를 중심으로)

  • Lee, Eun Yeong
    • The Korean Journal of Archival Studies
    • /
    • no.71
    • /
    • pp.49-89
    • /
    • 2022
  • This study suggests the necessity of, and a domestic application plan for, a national authority database that promotes integrated access to, richer search of, and better understanding of historical information sources and archival resources distributed among cultural heritage institutions, based on the case of the "Social Networks and Archival Context" (SNAC) project. As the SNAC project was transformed into an international cooperative organization led by NARA, it was possible to secure a sustainable operating system and realize cooperative authority control. In addition, SNAC authority records provide richer contextual information about life and history, as well as social and intellectual network information, compared to library authority records. Based on the case analysis, the following suggestions are made. First, as with SNAC, a cooperative body led by the National Archives of Korea and jointly owned with the National Library of Korea should lead the development and expand the scope of participating institutions. Second, as a cooperative method, a structure should be adopted in which work is divided by each institution's field of special strength, while the main decision-making is carried out by an administrative team in which both organizations participate. Third, when constructing authority data, it is necessary to develop scalable open-source software that can collect descriptive information in various formats, to design around the structure and elements of archival authority records, to design functions that control the quality of authority records, and to build user-friendly interfaces and a platform design that reflects content elements.

A Study on the 3D Precise Modeling of Old Structures Using Merged Point Cloud from Drone Images and LiDAR Scanning Data (드론 화상 및 LiDAR 스캐닝의 정합처리 자료를 활용한 노후 구조물 3차원 정밀 모델링에 관한 연구)

  • Chan-hwi, Shin;Gyeong-jo, Min;Gyeong-Gyu, Kim;PuReun, Jeon;Hoon, Park;Sang-Ho, Cho
    • Explosives and Blasting
    • /
    • v.40 no.4
    • /
    • pp.15-26
    • /
    • 2022
  • With the recent increase in old and dangerous buildings, demand for technology in the field of structure demolition is rapidly increasing. In particular, for structures with severe deformation or damage, there is a risk of deteriorating stability and of disaster due to changes in the load distribution characteristics within the structure, so rapid demolition technologies that can dismantle such structures efficiently in a short period are drawing attention. However, structural alterations such as unauthorized extensions or illegal remodeling occur frequently in many old structures; these are not reflected in structural information such as building drawings and act as obstacles in the demolition design process. In this study, as an effective way to overcome the discrepancy between the structural information of old structures and the actual structures, 3D modeling of the actual structures was considered. 3D point cloud data of the inside and outside of a building scheduled for blast demolition were obtained through LiDAR scanning and drone photography, and precise registration between the two spatial data groups was performed using an open-source based spatial information construction system. The 3D structure model was completed by importing the registered point cloud data into 3D modeling software, creating structural drawings for each story, and forming each member along the slab, column, beam, and ceiling boundaries of the structure. In addition, the modeling technique proposed in this study was verified by comparing it with actual measurements of selected structural members.
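
A brief sketch of the registration-and-merge step using the Open3D library as a stand-in for the unnamed open-source spatial information construction system mentioned above; the file names, voxel size, and ICP settings are assumed.

```python
import numpy as np
import open3d as o3d

# Assumed input files: a drone-photogrammetry cloud and a LiDAR scan of the same structure.
drone = o3d.io.read_point_cloud("drone_photogrammetry.ply")
lidar = o3d.io.read_point_cloud("lidar_scan.ply")

voxel = 0.05  # meters; assumed downsampling resolution
drone_down = drone.voxel_down_sample(voxel)
lidar_down = lidar.voxel_down_sample(voxel)
# Point-to-plane ICP needs normals on the target cloud.
lidar_down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=4 * voxel, max_nn=30))

# Register the drone cloud onto the LiDAR cloud with ICP, starting from identity.
result = o3d.pipelines.registration.registration_icp(
    drone_down, lidar_down, max_correspondence_distance=4 * voxel,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)

# Apply the estimated transform and merge the two clouds for downstream 3D modeling.
drone.transform(result.transformation)
merged = drone + lidar
o3d.io.write_point_cloud("merged_structure.ply", merged)
```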

Development of Intelligent OCR Technology to Utilize Document Image Data (문서 이미지 데이터 활용을 위한 지능형 OCR 기술 개발)

  • Kim, Sangjun;Yu, Donghui;Hwang, Soyoung;Kim, Minho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.212-215
    • /
    • 2022
  • In today's era of so-called digital transformation, the need to construct and utilize big data in various fields has increased. A great deal of data is now produced and stored in digital, media-friendly forms, but for a long time in the past the production and storage of data was dominated by printed books. Therefore, Optical Character Recognition (OCR) technology is required to utilize the vast amount of printed books accumulated over a long period as big data. In this study, a system for digitizing the structure and content of the document objects inside a scanned book image is proposed. The proposed system largely consists of the following three steps: 1) recognition of region information for each document object (table, equation, picture, text body) in the scanned book image; 2) OCR processing by the text body, table, and formula modules according to the recognized document object regions; 3) gathering the processed document information and returning it in JSON format. The model proposed in this study builds on open-source projects with additional training and improvement. The intelligent OCR proposed as a system in this study showed performance at the level of commercial OCR software in processing the four types of document objects (table, equation, image, text body).
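
A compact sketch of steps 2) and 3), running OCR per recognized region and returning JSON, using pytesseract as a generic open-source OCR engine; the region boxes below are placeholders for the output of the layout-recognition step in 1) and are not the authors' model.

```python
import json
import pytesseract
from PIL import Image

page = Image.open("scanned_page.png")  # assumed scanned book page

# Hypothetical output of the layout-recognition step: (object type, left, top, width, height).
regions = [
    ("text_body", 100, 150, 1200, 800),
    ("table", 100, 1000, 1200, 400),
]

results = []
for label, left, top, w, h in regions:
    crop = page.crop((left, top, left + w, top + h))        # cut out the recognized region
    text = pytesseract.image_to_string(crop, lang="kor+eng") # OCR the region (Korean + English)
    results.append({"type": label, "bbox": [left, top, w, h], "text": text.strip()})

# Step 3: gather the per-region results and return them as JSON.
print(json.dumps(results, ensure_ascii=False, indent=2))
```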


Analysis of a Compound-Target Network of Oryeong-san (오령산 구성성분-타겟 네트워크 분석)

  • Kim, Sang-Kyun
    • Journal of the Korea Knowledge Information Technology Society
    • /
    • v.13 no.5
    • /
    • pp.607-614
    • /
    • 2018
  • Oryeong-san is a prescription widely used for diseases in which water is stagnant, because it has the effect of circulating water in the body and releasing it into the urine. In order to investigate the mechanisms of oryeong-san, in this paper we construct and analyze the compound-target network of the medicinal materials constituting oryeong-san based on a systems pharmacology approach. First, the targets related to the 475 chemical compounds of oryeong-san were searched in the STITCH database, and the search results for the interactions between compounds and targets were downloaded as XML files. The compound-target network of oryeong-san was visualized and explored using Gephi 0.8.2, an open-source software tool for graphs and networks. In the network, nodes are compounds and targets, and edges are interactions between the nodes. Each edge is weighted according to the reliability of the interaction. In order to analyze the compound-target network, it was clustered using the MCL algorithm, which is able to cluster weighted networks. A total of 130 clusters were created, and the largest cluster contained 32 nodes. In the clustered network, it was revealed that the active compounds of the medicinal materials were associated with targets regulating blood pressure in the kidney. In the future, we will clarify the mechanisms of oryeong-san by linking information from disease databases with the network constructed in this research.
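
A toy sketch of constructing and clustering a weighted compound-target network in Python with networkx; the compounds, targets, and confidence scores are illustrative, and greedy modularity clustering is used here only as a stand-in for the MCL algorithm applied in the study.

```python
import networkx as nx
from networkx.algorithms import community

# Illustrative (compound, target, confidence) triples in the style of STITCH records.
interactions = [
    ("alisol A", "NR3C2", 0.8),
    ("alisol A", "REN", 0.6),
    ("pachymic acid", "NR3C2", 0.7),
    ("catechin", "ACE", 0.9),
]

# Build the weighted compound-target graph: nodes are compounds and targets,
# edge weights encode the reliability of each interaction.
G = nx.Graph()
for compound, target, confidence in interactions:
    G.add_edge(compound, target, weight=confidence)

# Cluster the weighted network (stand-in for MCL) and list the resulting groups.
clusters = community.greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {sorted(cluster)}")
```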

A School-tailored High School Integrated Science Q&A Chatbot with Sentence-BERT: Development and One-Year Usage Analysis (인공지능 문장 분류 모델 Sentence-BERT 기반 학교 맞춤형 고등학교 통합과학 질문-답변 챗봇 -개발 및 1년간 사용 분석-)

  • Gyeongmo Min;Junehee Yoo
    • Journal of The Korean Association For Science Education
    • /
    • v.44 no.3
    • /
    • pp.231-248
    • /
    • 2024
  • This study developed a chatbot for first-year high school students, employing open-source software and a Korean Sentence-BERT model for AI-powered document classification. The chatbot uses the Sentence-BERT model to find the six Q&A pairs most similar to a student's query and presents them in a carousel format. The initial dataset, built from online resources, was refined and expanded based on student feedback and usability throughout the operational period. By the end of the 2023 academic year, the chatbot had integrated a total of 30,819 data entries and recorded 3,457 student interactions. Analysis revealed students' inclination to use the chatbot when prompted by teachers during classes and primarily during self-study sessions after school, with an average of 2.1 to 2.2 inquiries per session, mostly via mobile phones. Text mining identified student input terms encompassing not only science-related queries but also aspects of school life such as assessment scope. Topic modeling using BERTopic, which is based on Sentence-BERT, categorized 88% of student questions into 35 topics, shedding light on common student interests. A year-end survey confirmed the efficacy of the carousel format and the chatbot's role in addressing curiosities beyond the integrated science learning objectives. This study underscores the importance of developing chatbots tailored for student use in public education and highlights their educational potential through long-term usage analysis.
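
A minimal sketch of the retrieval step, finding the stored questions most similar to a student query with the sentence-transformers library; the Q&A pairs are made up and the Korean Sentence-BERT checkpoint name is an assumption, not necessarily the model used in the study.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical Q&A bank standing in for the school's curated dataset.
qa_pairs = [
    ("What is the law of inertia?", "An object keeps its state of motion unless a net force acts on it."),
    ("Why is the sky blue?", "Shorter wavelengths of sunlight are scattered more strongly by the atmosphere."),
]

# Assumed Korean Sentence-BERT checkpoint for illustration.
model = SentenceTransformer("jhgan/ko-sroberta-multitask")
question_embeddings = model.encode([q for q, _ in qa_pairs], convert_to_tensor=True)

def top_k_answers(query, k=6):
    """Return up to k (question, answer, similarity) tuples, most similar first."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, question_embeddings, top_k=k)[0]
    return [(qa_pairs[h["corpus_id"]][0], qa_pairs[h["corpus_id"]][1], h["score"]) for h in hits]

for question, answer, score in top_k_answers("Explain Newton's first law"):
    print(f"{score:.3f}  {question} -> {answer}")
```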

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook appear to have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three frameworks that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. This criterion is simply based on code length; the learning curve and the ease of coding are not the main concern. According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN.
If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. Also, for those who are learning deep learning models, the availability of sufficient examples and references matters.
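
To make the automatic-differentiation point concrete, a short sketch using TensorFlow 2.x's GradientTape: the forward expressions form a computational graph, and gradients of the loss node with respect to the variables are obtained by applying the chain rule along its edges. The values are arbitrary and only illustrate the mechanism.

```python
import tensorflow as tf

w = tf.Variable([[0.5, -1.2]])    # weights (explicitly defined, as Tensorflow requires)
b = tf.Variable([0.1])            # bias
x = tf.constant([[2.0], [3.0]])   # input
y_true = tf.constant([[1.0]])     # target

with tf.GradientTape() as tape:
    y_pred = tf.matmul(w, x) + b                    # graph node: affine transform
    loss = tf.reduce_mean((y_pred - y_true) ** 2)   # graph node: squared error

# The tape walks the recorded graph backwards, multiplying edge partial
# derivatives via the chain rule to get d(loss)/dw and d(loss)/db.
dw, db = tape.gradient(loss, [w, b])
print(dw.numpy(), db.numpy())
```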

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • Since the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conducting opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires a different way for analysts to gain access: there are open APIs, search tools, DB2DB interfaces, content purchasing, and so on. The second phase is pre-processing to generate useful material for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, in which the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, reputation, etc. In these phases, we used free software such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, to provide vivid, full-colored examples using open-library software packages of the R project. Business actors can, at a glance, quickly detect areas that are weak, strong, positive, negative, quiet, or loud. The heat map can explain the movement of sentiment or volume in a category-by-time matrix, where color density shows intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" of the business situation with a hierarchical structure, since a tree map can present buzz volume and sentiment in a visualized result for a certain period. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how they can use these kinds of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
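
A rough Python analogue of the visualizing phase (the study itself used R packages such as tm, KoNLP, ggplot2, and plyr); the posts, categories, and polarity labels below are made up, and the plot is a simple buzz-volume heat map over categories and months.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical cleansed opinion-mining output: one row per post with a
# category label, a sentiment polarity, and a creation date.
posts = pd.DataFrame({
    "date": pd.to_datetime(["2014-01-05", "2014-01-12", "2014-02-03", "2014-02-20", "2014-02-25"]),
    "category": ["taste", "price", "taste", "marketing", "taste"],
    "sentiment": [1, -1, 1, -1, 1],   # +1 positive, -1 negative (toy labels)
})
posts["month"] = posts["date"].dt.to_period("M").astype(str)

# Buzz-volume heat map: categories on one axis, months on the other, post counts as color.
volume = posts.pivot_table(index="category", columns="month",
                           values="sentiment", aggfunc="count", fill_value=0)
plt.imshow(volume.values, aspect="auto", cmap="Reds")
plt.yticks(range(len(volume.index)), volume.index)
plt.xticks(range(len(volume.columns)), volume.columns)
plt.colorbar(label="post volume")
plt.title("Buzz volume by category and month")
plt.tight_layout()
plt.show()
```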

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine), is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results from studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that attempts to simulate the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. In particular, the mutation operator prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been popularly applied to search for optimal parameters or feature subsets of AI techniques including MSVM. For these reasons, we also adopt GA as an optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is in bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e., the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training, and the remaining 20 percent was used for validation. To overcome the small sample size, we applied five-fold cross validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source software library, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were experimented with using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competitive models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
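
A conceptual sketch of the GAMSVM idea, jointly evolving the feature subset and RBF kernel parameters of a multiclass SVM; scikit-learn's SVC (which wraps LIBSVM) and a hand-rolled GA stand in for the LIBSVM plus Evolver 5.5 setup, and the synthetic data replace the proprietary credit-rating dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for the 14-ratio, 4-class credit-rating data.
X, y = make_classification(n_samples=300, n_features=14, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

def fitness(ind):
    """Five-fold cross-validated accuracy of an RBF SVC on the selected features."""
    mask = ind[:14].astype(bool)          # 14 feature-selection bits
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", C=10 ** ind[14], gamma=10 ** ind[15])
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# Chromosome: 14 feature bits, then log10(C) in [-1, 3] and log10(gamma) in [-4, 1].
pop = np.hstack([rng.integers(0, 2, (20, 14)).astype(float),
                 rng.uniform(-1, 3, (20, 1)), rng.uniform(-4, 1, (20, 1))])

for _ in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                      # selection: keep the best half
    pairs = rng.integers(0, 10, (10, 2))
    cuts = rng.integers(1, 16, 10)
    children = np.array([np.concatenate([parents[a][:c], parents[b][c:]])
                         for (a, b), c in zip(pairs, cuts)])     # single-point crossover
    flips = rng.random((10, 14)) < 0.1
    children[:, :14] = np.where(flips, 1 - children[:, :14], children[:, :14])  # bit-flip mutation
    children[:, 14:] += rng.normal(0.0, 0.2, (10, 2))            # Gaussian mutation on kernel params
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("features:", np.flatnonzero(best[:14]), "C =", 10 ** best[14], "gamma =", 10 ** best[15])
```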