• Title/Summary/Keyword: software algorithms


Trends in quantum reinforcement learning: State-of-the-arts and the road ahead

  • Soohyun Park; Joongheon Kim
    • ETRI Journal, v.46 no.5, pp.748-758, 2024
  • This paper presents basic quantum reinforcement learning theory and its applications to various engineering problems. With advances in quantum computing and deep learning technologies, many research works have focused on quantum deep learning and quantum machine learning. In this paper, quantum neural network (QNN)-based reinforcement learning (RL) models are introduced and discussed. Moreover, the advantages of QNN-based RL algorithms and models, such as fast training, high scalability, and efficient utilization of learning parameters, are presented along with various research results. In addition, a well-known multi-agent extension of QNN-based RL models, the quantum centralized-critic and multiple-actor network, is discussed, and its applications to multi-agent cooperation and coordination are introduced. Finally, applications and future research directions are discussed in terms of federated learning, split learning, autonomous control, and quantum deep learning software testing.
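
The training primitive behind QNN-based RL models is gradient estimation for a parameterized quantum circuit. A minimal sketch, assuming a single-qubit RY rotation whose Z-expectation is cos(theta) (a closed-form stand-in for a real circuit; the learning rate and starting angle are illustrative), shows the parameter-shift rule such models commonly use:

```python
import math

def expect_z(theta):
    """Expectation <Z> after RY(theta) on |0>: a one-qubit 'circuit' in closed form."""
    return math.cos(theta)

def parameter_shift_grad(f, theta):
    """Exact gradient of a Pauli-rotation circuit via the parameter-shift rule."""
    s = math.pi / 2
    return 0.5 * (f(theta + s) - f(theta - s))

# One gradient-descent step on the toy objective, as a QNN policy update would do.
theta = 0.3
lr = 0.1
grad = parameter_shift_grad(expect_z, theta)
theta_new = theta - lr * grad
```

For this circuit the parameter-shift estimate equals the analytic derivative -sin(theta) exactly, which is why the rule is attractive for hardware where finite differences are noisy.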

Sentiment Analysis of Product Reviews to Identify Deceptive Rating Information in Social Media: A SentiDeceptive Approach

  • Marwat, M. Irfan; Khan, Javed Ali; Alshehri, Mohammad Dahman; Ali, Muhammad Asghar; Hizbullah; Ali, Haider; Assam, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.3, pp.830-860, 2022
  • [Introduction] Nowadays, many companies are moving their businesses online in response to the growing preference among customers for purchasing products online. [Problem] Users share a vast amount of information about products, making it difficult and challenging for end-users to reach purchase decisions. [Motivation] Therefore, a mechanism is needed to automatically analyze end-user opinions, thoughts, or feelings expressed on social media platforms about products, which might help customers make or change their purchasing decisions. [Proposed Solution] For this purpose, we propose an automated SentiDeceptive approach, which classifies end-user reviews into negative, positive, and neutral sentiments and identifies deceptive crowd-user rating information on social media platforms to help users in decision-making. [Methodology] We first collected 11,781 end-user comments from the Amazon store and the Flipkart web application, covering distinct products such as watches, mobile phones, shoes, clothes, and perfumes. Next, we developed a coding guideline used as a basis for the comment annotation process. We then applied the content analysis approach and the existing VADER library to annotate the end-user comments in the data set with the identified codes, resulting in a labelled data set used as input to the machine learning classifiers. Finally, we applied the sentiment analysis approach to identify end-user opinions and overcome deceptive rating information by (i) preprocessing the input data to remove irrelevant items (stop words, special characters, etc.), (ii) employing two standard resampling approaches (oversampling and under-sampling) to balance the data set, (iii) extracting different features (TF-IDF and BOW) from the textual data, and (iv) training and testing the machine learning algorithms with standard cross-validation approaches (KFold and Shuffle Split). [Results/Outcomes] To support our research study, we also developed an automated tool that analyzes each customer's feedback and displays the collective sentiment about a specific product in a graph, which helps customers make decisions. In a nutshell, our proposed sentiment approach produces good results when identifying customer sentiments from online user feedback, obtaining on average 94.01% precision, 93.69% recall, and a 93.81% F-measure for classifying positive sentiments.
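
The TF-IDF feature extraction mentioned above can be sketched without any library. This is a minimal pure-Python version of one common TF-IDF variant (tf = term frequency within a document, idf = log(N/df)); the toy documents are illustrative, and a real pipeline would typically use a library implementation such as scikit-learn's TfidfVectorizer:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF vectors for a list of tokenized documents.
    tf = count / doc length; idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # each document counts a term at most once
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

# hypothetical review tokens
docs = [["good", "watch"], ["bad", "watch"], ["good", "shoes"]]
vecs = tf_idf(docs)
```

Terms that appear in every document get idf = 0, which is the weighting's way of discarding uninformative words before the classifier ever sees them.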

A Study on the Field Data Applicability of Seismic Data Processing using Open-source Software (Madagascar) (오픈-소스 자료처리 기술개발 소프트웨어(Madagascar)를 이용한 탄성파 현장자료 전산처리 적용성 연구)

  • Son, Woohyun; Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration, v.21 no.3, pp.171-182, 2018
  • We performed seismic field-data processing using open-source software (Madagascar) to verify whether it is applicable to field data, which has a low signal-to-noise ratio and high uncertainties in velocities. Madagascar, which is based on Python, is generally considered well suited to the development of processing technologies because of its capabilities for multidimensional data analysis and reproducibility. However, this open-source software has not been widely used for field-data processing because of its complicated interfaces and data-structure system. To verify the effectiveness of Madagascar on field data, we applied it to a typical seismic-data processing flow: data loading, geometry build-up, F-K filtering, predictive deconvolution, velocity analysis, normal-moveout correction, stacking, and migration. The field data for the test were acquired in the Gunsan Basin, Yellow Sea, using a streamer consisting of 480 channels and 4 air-gun arrays. The results at each processing step were compared with those from Landmark's ProMAX (SeisSpace R5000), a commercial processing package. Madagascar showed relatively high efficiency in data I/O and management as well as reproducibility. It also showed quick and exact calculations in some automated procedures, such as stacking-velocity analysis. There were no remarkable differences in the results after applying the signal-enhancement flows of both packages. For the deeper part of the subsurface image, however, the commercial software showed better results, simply because it provides various de-multiple flows and interactive processing environments for delicate processing work, which Madagascar lacks. Considering that many researchers around the world are developing data-processing algorithms for Madagascar, we can expect open-source software such as Madagascar to be widely used for commercial-level processing, with the strengths of expandability, cost-effectiveness, and reproducibility.
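
Of the processing steps listed above, normal-moveout (NMO) correction is the simplest to illustrate in isolation. A minimal sketch of the hyperbolic moveout equation t(x) = sqrt(t0^2 + x^2/v^2) that any NMO implementation evaluates per sample (the reflector time, offset, and velocity below are illustrative, not values from the survey):

```python
import math

def nmo_traveltime(t0, offset, v):
    """Hyperbolic NMO traveltime: t(x) = sqrt(t0^2 + (x / v)^2)."""
    return math.sqrt(t0 ** 2 + (offset / v) ** 2)

def nmo_shift(t0, offset, v):
    """Moveout correction applied per sample: delta_t = t(x) - t0."""
    return nmo_traveltime(t0, offset, v) - t0

# e.g. a reflector at t0 = 2.0 s, offset 1500 m, stacking velocity 2000 m/s
dt = nmo_shift(2.0, 1500.0, 2000.0)
```

Velocity analysis amounts to scanning v until the corrected event flattens across offsets; at zero offset the shift is zero by construction.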

Performance Analysis and Comparison of Stream Ciphers for Secure Sensor Networks (안전한 센서 네트워크를 위한 스트림 암호의 성능 비교 분석)

  • Yun, Min; Na, Hyoung-Jun; Lee, Mun-Kyu; Park, Kun-Soo
    • Journal of the Korea Institute of Information Security & Cryptology, v.18 no.5, pp.3-16, 2008
  • A wireless sensor network (WSN) is a wireless network consisting of distributed small devices called sensor nodes or motes. Recently, there has been extensive research on WSNs and their security. For secure storage and secure transmission of the sensed information, sensor nodes should be equipped with cryptographic algorithms, and these algorithms should be implemented efficiently since sensor nodes are highly resource-constrained devices. Some algorithms applicable to sensor nodes already exist, including public-key ciphers such as TinyECC and standard block ciphers such as AES. Stream ciphers, however, are still to be analyzed, since they were only recently standardized in the eSTREAM project. In this paper, we implement on the MicaZ platform nine of the ten software-based stream ciphers in the second and final phases of the eSTREAM project, and we evaluate their performance. In particular, we apply several optimization techniques to six ciphers, including SOSEMANUK, Salsa20, and Rabbit, which survived the final phase of the eSTREAM project. We also present implementation results for hardware-oriented stream ciphers and AES-CFB for reference. According to our experiments, the encryption speeds of these software-based stream ciphers are in the range of 31-406 Kbps; thus most of these ciphers are fairly acceptable for sensor nodes. In particular, the survivors SOSEMANUK, Salsa20, and Rabbit show throughputs of 406 Kbps, 176 Kbps, and 121 Kbps using 70 KB, 14 KB, and 22 KB of ROM and 2811 B, 799 B, and 755 B of RAM, respectively. From the viewpoint of encryption speed, these ciphers perform much better than software-based AES, which shows a speed of 106 Kbps.
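
As an illustration of why software-based stream ciphers such as Salsa20 run fast on constrained nodes, here is the Salsa20 quarter-round, built only from 32-bit additions, XORs, and rotations (the operation defined in the Salsa20 specification; shown as a sketch of the core primitive, not the full cipher):

```python
def rotl32(x, n):
    """32-bit left rotation."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarterround(y0, y1, y2, y3):
    """The Salsa20 quarter-round: four add-rotate-xor steps on 32-bit words.
    The full cipher applies this to rows and columns of a 4x4 state."""
    y1 ^= rotl32((y0 + y3) & 0xFFFFFFFF, 7)
    y2 ^= rotl32((y1 + y0) & 0xFFFFFFFF, 9)
    y3 ^= rotl32((y2 + y1) & 0xFFFFFFFF, 13)
    y0 ^= rotl32((y3 + y2) & 0xFFFFFFFF, 18)
    return y0, y1, y2, y3
```

Every operation maps to a single cheap instruction on small MCUs, with no S-box tables to keep in scarce RAM, which is consistent with the throughput figures reported above.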

Comparisons of voice quality parameter values measured with MDVP, Praat, and TF32 (MDVP, Praat, TF32에 따른 음향학적 측정치에 대한 비교)

  • Ko, Hye-Ju; Woo, Mee-Ryung; Choi, Yaelin
    • Phonetics and Speech Sciences, v.12 no.3, pp.73-83, 2020
  • Measured values may differ between the Multi-Dimensional Voice Program (MDVP), Praat, and Time-Frequency Analysis software (TF32), all of which are widely used in voice-quality analysis, because of differences in the algorithms each analyzer uses. Therefore, this study aimed to compare the parameter values of normal voice measured with each analyzer. Tokens of the vowel /a/ were collected from 35 normal adult subjects (19 male and 16 female) and analyzed with MDVP, Praat, and TF32. The mean values obtained from Praat for jitter variables (J local, J abs, J rap, and J ppq), shimmer variables (S local, S dB, and S apq), and noise-to-harmonics ratio (NHR) were significantly lower than those from MDVP in both males and females (p<.01). The mean values of J local, J abs, and S local decreased significantly in the order MDVP, Praat, and TF32 in both genders. In conclusion, the measured values differed across voice analyzers because of the differences in the algorithms each analyzer uses, so it is important for clinicians to understand the normal criteria used by each analyzer before analyzing pathologic voice in clinical practice.
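
Part of the cross-analyzer discrepancy comes down to how each program defines and estimates its parameters. As a reference point, local jitter is commonly defined as the mean absolute difference of consecutive pitch periods divided by the mean period (this is, for example, how Praat's manual defines jitter (local)); a minimal sketch with illustrative period values:

```python
def jitter_local(periods):
    """Local jitter (%): mean absolute difference of consecutive pitch
    periods divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods[1:], periods[:-1])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

# hypothetical pitch periods (seconds) extracted from a sustained /a/
periods = [0.0050, 0.0051, 0.0049, 0.0050]
pct = jitter_local(periods)
```

Even with an identical formula, analyzers can disagree because they extract the period sequence itself differently, which is one reason the study urges analyzer-specific norms.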

A Study on the Improvement of Injection Molding Process Using CAE and Decision-tree (CAE와 Decision-tree를 이용한 사출성형 공정개선에 관한 연구)

  • Hwang, Soonhwan; Han, Seong-Ryeol; Lee, Hoojin
    • Journal of the Korea Academia-Industrial cooperation Society, v.22 no.4, pp.580-586, 2021
  • The CAE methodology is a numerical analysis technique based on computer simulation. Recently, methodologies applying artificial intelligence techniques to such simulations have been studied. A previous study compared deformation results across injection-molding process settings using machine learning techniques. Although the MLP has excellent prediction performance, it offers no explanation of its decision process and acts as a black box. In this study, data were generated using Autodesk Moldflow 2018, an injection-molding analysis software. Several machine learning models were developed using RapidMiner 9.5, a machine learning platform, and their root mean square errors (RMSE) were compared. The decision tree showed better prediction performance, in terms of RMSE, than the other machine learning techniques. The number of classification criteria can be increased via the maximal depth, which determines the size of the decision tree, but the complexity also increases. The simulation showed a 7.7% improvement over the previous simulation when an intermediate value satisfying the constraint was selected based on the changed position.
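
The comparison above rests on RMSE as the yardstick. A minimal sketch of RMSE together with a depth-1 regression tree (a stump) that picks the split threshold minimizing it, on hypothetical data rather than the Moldflow results:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error, the comparison metric used in the study."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

def best_stump(x, y):
    """Depth-1 regression tree: try each threshold on x, predict each side's
    mean, and keep the (threshold, rmse) pair with the lowest error."""
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue  # a split must leave samples on both sides
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        pred = [ml if xi <= t else mr for xi in x]
        err = rmse(y, pred)
        if best is None or err < best[1]:
            best = (t, err)
    return best
```

Growing the tree deeper repeats this split search inside each leaf, which is exactly the depth/complexity trade-off the abstract describes.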

Deep Learning Based Rescue Requesters Detection Algorithm for Physical Security in Disaster Sites (재난 현장 물리적 보안을 위한 딥러닝 기반 요구조자 탐지 알고리즘)

  • Kim, Da-hyeon; Park, Man-bok; Ahn, Jun-ho
    • Journal of Internet Computing and Services, v.23 no.4, pp.57-64, 2022
  • If the inside of a building collapses due to a disaster such as fire, structural failure, or a natural disaster, the physical security inside the building is likely to become ineffective. Physical security is then needed to minimize human casualties and physical damage in the collapsed building. Therefore, this paper proposes an algorithm that minimizes damage in a disaster situation by fusing existing research, which detects obstacles and collapsed areas in a building, with a deep learning-based object detection algorithm that minimizes human casualties. The existing research uses a single camera to determine whether the corridor environment in which the robot is currently located has collapsed, and detects obstacles that interfere with search and rescue operations. Objects inside a collapsed building have irregular shapes due to debris or structural collapse, and they are classified and detected as obstacles. We also propose a method to detect rescue requesters, the most important resource in a disaster situation, and thereby minimize human casualties. To this end, we collected open-source disaster images and image data of disaster situations and measured the accuracy of detecting rescue requesters with various deep learning-based object detection algorithms. Analyzing these algorithms, we found that YOLOv4 achieves an accuracy of 0.94, showing that it is the most suitable for use in actual disaster situations. This paper will be helpful for performing efficient search and rescue in disaster situations and achieving a high level of physical security, even in collapsed buildings.
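
Detection accuracies such as the 0.94 reported for YOLOv4 are typically computed by matching predicted boxes to ground-truth boxes with intersection-over-union (IoU). A minimal sketch of that criterion (the box coordinates below are illustrative assumptions, not data from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2);
    the standard match criterion when scoring detectors such as YOLOv4."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A detection usually counts as correct when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice), and accuracy-style metrics are aggregated from those matches.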

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon; Kim, Dongsung; Lee, Hong Joo; Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.1-19, 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze AI's development directions, because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open-source projects; as a result, technologies and services that utilize them have increased rapidly, which has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open-source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify practical trends in AI technology development by analyzing AI-related OSS projects developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects created on Github from 2000 to July 2018, and we examined the development trends of major technologies in detail by applying a text mining technique to topic information, which indicates the characteristics and technical fields of the collected projects. The results showed that the number of software development projects per year was below 100 until 2013; it then increased to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open-source projects increased especially rapidly in 2016 (2,559 OSS projects).
It was confirmed that 14,213 projects were initiated in 2017, almost four times the total number of projects created from 2009 to 2016 (3,555 projects); the number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, using the appearance frequency of topics as an indicator of the technology trends of AI-related OSS projects. The results showed that natural language processing remained at the top across all years, implying that such OSS was developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, all programming languages other than Python disappeared from the top ten. In their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical-imaging topics appeared at the top of the list, although they had not been there from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top ten by appearance frequency from 2013 to 2015, it was not in the top ten by degree centrality. The topics at the top of the degree-centrality list were similar to those at the top of the appearance-frequency list, with the ranks of convolutional neural networks and reinforcement learning changing slightly.
The trend of technology development was also examined using both the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. It is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its ranks rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have shown high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, reaching the top of the lists behind deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analyses of future technology trends.
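
The degree-centrality measure used in the topic network analysis can be sketched directly. Assuming a toy co-occurrence graph with hypothetical topic edges, normalized degree centrality is simply a node's degree divided by (n - 1):

```python
from collections import Counter

def degree_centrality(edges, n_nodes):
    """Normalized degree centrality: degree / (n_nodes - 1), the standard
    definition for an undirected graph such as a topic co-occurrence network."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {node: d / (n_nodes - 1) for node, d in deg.items()}

# hypothetical topic co-occurrence edges
edges = [("deep learning", "tensorflow"),
         ("deep learning", "python"),
         ("machine learning", "python")]
cent = degree_centrality(edges, n_nodes=4)
```

Appearance frequency counts how often a topic occurs at all, while degree centrality rewards topics that co-occur with many distinct others, which is why the two rankings in the study agree only approximately.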

A Survey of the Transmission-Power-Control Schemes in Wireless Body-Sensor Networks

  • Lee, Woosik; Kim, Heeyoul; Hong, Min; Kang, Min-Goo; Jeong, Seung Ryul; Kim, Namgi
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.4, pp.1854-1868, 2018
  • A wireless body-sensor network (WBSN) refers to a network-configured environment in which sensors are placed on both the inside and outside of the human body. The sensors are much smaller and the energy is more constrained when compared to traditional wireless sensor network (WSN) environments. The critical nature of the energy-constraint issue in WBSN environments has led to numerous studies on the reduction of energy consumption of WBSN sensors. The transmission-power-control (TPC) technique adjusts the transmission-power level (TPL) of sensors in the WBSN and reduces the energy consumption that occurs during communications. To elaborate, when transmission sensors and reception sensors are placed in various parts of the human body, the transmission sensors regularly send sensor data to the reception sensors. As the reception sensors receive data from the transmission sensors, real-time measurements of the received signal-strength indication (RSSI), which is the value that indicates the channel status, are taken to determine the TPL that suits the current-channel status. This TPL information is then sent back to the transmission sensors. The transmission sensors adjust their current TPL based on the TPL that they receive from the reception sensors. The initial TPC algorithm made linear or binary adjustments using only the information of the current-channel status. However, because various data in the WBSN environment can be utilized to create a more efficient TPC algorithm, many different types of TPC algorithms that combine human movements or fuse TPC with other algorithms have emerged. This paper defines and discusses the design and development process of an efficient TPC algorithm for WBSNs. We will describe the WBSN characteristics, model, and closed-loop mechanism, followed by an examination of recent TPC studies.
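
The closed-loop mechanism described above can be sketched as a single control step: the reception side compares the measured RSSI against a target window and feeds a transmission-power-level (TPL) adjustment back to the transmitter. A minimal sketch of the linear variant (the TPL range and RSSI window values are illustrative assumptions, not figures from the survey):

```python
def adjust_tpl(tpl, rssi, target_low, target_high, tpl_min=0, tpl_max=31):
    """One closed-loop TPC step (linear variant): raise the TPL by one when
    RSSI falls below the target window, lower it by one when above,
    and leave it unchanged inside the window."""
    if rssi < target_low:
        tpl = min(tpl + 1, tpl_max)
    elif rssi > target_high:
        tpl = max(tpl - 1, tpl_min)
    return tpl

# e.g. channel degraded by body movement: RSSI -95 dBm, window [-90, -80] dBm
new_tpl = adjust_tpl(10, -95, -90, -80)
```

Binary variants halve the step size instead of moving one level at a time, and the more elaborate WBSN algorithms surveyed fuse this loop with body-movement information.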

Experimental Verification of the Versatility of SPAM-based Image Steganalysis (SPAM 기반 영상 스테그아날리시스의 범용성에 대한 실험적 검증)

  • Kim, Jaeyoung; Park, Hanhoon; Park, Jong-Il
    • Journal of Broadcast Engineering, v.23 no.4, pp.526-535, 2018
  • Many steganography algorithms have been studied, and steganalysis, which detects the stego images to which steganography has been applied, has been studied in parallel. In image steganalysis in particular, features such as ALE, SPAM, and SRMQ are extracted from the statistical characteristics of an image, and stego images are classified by training a classifier with various machine learning algorithms. However, previous studies did not consider the effect of image size, aspect ratio, or message-embedding rate, so the features might not function properly for images whose conditions differ from those used in those studies. In this paper, we analyze the classification rate of SPAM-based image steganalysis for various image sizes, aspect ratios, and message-embedding rates, and experimentally verify its versatility.
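
SPAM features model transitions between neighboring pixel differences. A simplified sketch of the idea, restricted to the horizontal direction and first-order transitions (the full SPAM feature set averages several directions and typically uses second-order transitions; the tiny image below is illustrative):

```python
from collections import Counter

def spam_features_h(image, T=3):
    """First-order SPAM-style features along the horizontal direction:
    truncate pixel differences to [-T, T], then estimate the transition
    probabilities P(d2 | d1) over consecutive difference pairs."""
    diffs = []
    for row in image:
        d = [max(-T, min(T, row[i + 1] - row[i])) for i in range(len(row) - 1)]
        diffs.append(d)
    pair_counts = Counter()
    first_counts = Counter()
    for d in diffs:
        for i in range(len(d) - 1):
            pair_counts[(d[i], d[i + 1])] += 1
            first_counts[d[i]] += 1
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

# toy 2x4 grayscale image: a smooth ramp row and a flat row
img = [[0, 1, 2, 3], [5, 5, 5, 5]]
feats = spam_features_h(img)
```

Message embedding perturbs these difference transitions slightly, and the classifier learns to separate the cover and stego distributions of the resulting feature vectors; this dependence on difference statistics is also why image size and embedding rate can shift the features.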