• Title/Summary/Keyword: intelligent information-based (지능정보 기반)


Development of Yóukè Mining System with Yóukè's Travel Demand and Insight Based on Web Search Traffic Information (웹검색 트래픽 정보를 활용한 유커 인바운드 여행 수요 예측 모형 및 유커마이닝 시스템 개발)

  • Choi, Youji;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.155-175 / 2017
  • As social data have come into the spotlight, major web search engines now publish data on how many people searched for a given keyword: web search traffic. Web search traffic aggregates the searches of everyone who looked up a specific keyword, so in many domains it can serve as a useful variable representing the public's attention to a particular interest. Numerous studies have used web search traffic to nowcast or forecast social phenomena such as epidemics, consumer behavior, product life cycles, and financial investment models, and it has also begun to be applied to predicting inbound tourism. Proper demand prediction matters because tourism is a high value-added industry that increases employment and foreign-exchange earnings. Among inbound tourists, Chinese tourists (Youke) have been the largest group visiting Korea for many years, with the highest tourism revenue per visitor, and their numbers continue to grow, so both the public and private sectors need sound approaches to predicting Youke demand; accurate forecasts support efficient decision making under limited resources. This study proposes an improved model that reflects current social issues through the attention expressed by groups of individuals. Traveling abroad is generally a high-involvement activity, so potential tourists search intensively for information about their trip, and web search traffic captures their attention during trip preparation in an instantaneous and dynamic way. Accordingly, this study selected keywords that potential Chinese tourists are likely to search for online. Baidu, China's largest web search engine with more than an 80% market share, gives users access to web search traffic data through the Baidu Index. Qualitative interviews with potential tourists helped us understand pre-trip information-search behavior and identify the keywords for this study. The selected keywords were categorized into three levels according to how directly they relate to "Korean tourism"; this categorization shows how well each keyword explains Youke inbound demand as its distance from the core category increases. Web search traffic for each keyword was gathered by a web crawler developed to collect data from the Baidu Index. Using these automatically collected variables, a linear model was built by multiple regression analysis, chosen for operational decision and policy making because the relationships among variables are easy to explain. After the regression models were built, a model composed only of traditional variables was compared with a model that adds the web search traffic variables, using coefficient significance and R-squared, and the final model was selected from this comparison. The final regression model offers better explanatory power than the traditional model, along with the advantages of real-time immediacy and convenience. Furthermore, this study demonstrates an intuitively visualized system for general users, the Youke Mining solution, which embeds the final regression model and provides several functions for tourism decision making, built on data-science algorithms and a simple, well-designed interface. In conclusion, this research offers three contributions: theoretical, practical, and policy-related.
Theoretically, the Youke Mining system and the model in this research are a first step toward predicting Youke inbound demand using an interactive and instantaneous variable, web search traffic, which represents tourists' attention while they prepare their trips. Practically, because Baidu holds more than 80% of the Chinese web search engine market, its data can represent the attention of potential tourists preparing their own trips in real time. Finally, from a policy perspective, the proposed Chinese tourist demand prediction model based on web search traffic can support tourism decision making, enabling efficient resource management and better opportunities for successful policy.
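The comparison of a traditional-variable regression with one augmented by Baidu Index search-traffic terms can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the CSV file, the column names (e.g. `exchange_rate`, `baidu_korea_tourism`), and the use of adjusted R-squared as the comparison metric stand in for the study's actual variables and criteria.

```python
# Hedged sketch: compare a traditional-variable regression with one augmented by
# Baidu Index search-traffic terms, judging the gain by adjusted R-squared.
# Column names are hypothetical placeholders, not the study's real variables.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("youke_monthly.csv")          # hypothetical monthly panel

traditional = ["exchange_rate", "holiday_days", "prev_month_arrivals"]
search_traffic = ["baidu_korea_tourism", "baidu_seoul_hotel", "baidu_korea_visa"]

def fit_ols(columns):
    X = sm.add_constant(df[columns])           # add intercept term
    return sm.OLS(df["youke_arrivals"], X).fit()

base_model = fit_ols(traditional)
full_model = fit_ols(traditional + search_traffic)

print("baseline adj. R2:", round(base_model.rsquared_adj, 3))
print("augmented adj. R2:", round(full_model.rsquared_adj, 3))
print(full_model.summary())                    # inspect coefficient significance
```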

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The main advantage of a deep learning model in image classification is that it extracts features from each region of the image and can therefore consider the relationships among regions within the overall image. However, a CNN may not be well suited to emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotional images, researchers propose CNN-based architectures tailored to emotion images every year. Studies on the relationship between color and human emotion have also shown that different colors induce different emotions, and some deep learning studies have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves classification accuracy compared with training on the image alone. This study proposes two methods that increase accuracy by adjusting the output after the model has classified an image's emotion; both modify the output based on statistics of the picture's colors. In the first method, the two-color combinations most frequently occurring across the training data are found in advance; at test time, the two-color combination most prominent in each test image is identified, and the output is corrected according to the distribution of that combination over the classes. The second method weights the model's output through expressions based on logarithmic and exponential functions. For image data we used Emotion6, labeled with six emotions, and ArtPhoto, labeled with eight categories. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 were used as CNN architectures, and performance was compared before and after applying the two-stage learning to each. Inspired by color psychology, which studies the relationship between colors and emotions, we investigated how to improve the accuracy of an image sentiment classifier by modifying its outputs based on color. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn clustering, the seven colors most prevalent in each image were identified, and their RGB coordinates were compared with the RGB coordinates of the sixteen reference colors; that is, each was converted to the closest reference color. If three or more colors are combined, too many combinations arise, the distribution becomes scattered, and each combination has too little influence on the output, so two-color combinations were used and weighted into the model. Before training, the most frequent color combinations were found for all training images, and the distribution of combinations for each class was stored in a Python dictionary for use at test time. During testing, the two-color combination most prominent in each test image was found, its distribution in the training data was looked up, and the output was corrected accordingly. Several equations were devised to weight the model's output based on the extracted colors, as described above.
The data set was randomly split 80:20, and 20% was held out as a test set. The remaining 80% was divided into five folds for 5-fold cross-validation, so the model was trained five times with different validation sets, and performance was finally checked on the held-out test set. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and the experiment was stopped early if the validation loss did not decrease for five consecutive epochs; early stopping was configured to restore the model with the best validation loss. Classification accuracy was higher when the extracted color information was used together with the CNN than when the CNN architecture was used alone.
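The color-combination step described in the abstract above can be sketched as follows. The RGB coordinates assigned to the reference colors, the log-based weighting expression, and the structure of the per-class frequency dictionary are illustrative assumptions, not the paper's exact values; only the overall flow (seven-cluster KMeans, snapping to reference colors, two-color combination, output correction) follows the description.

```python
# Hedged sketch of the color step: cluster an image's pixels into seven dominant
# colors with scikit-learn KMeans, snap each to the nearest reference color, and
# keep the two most frequent as the image's color combination. Reference RGB
# values and the weighting form are assumptions, not the paper's exact choices.
import numpy as np
from sklearn.cluster import KMeans

REFERENCE_RGB = {                      # partial, hypothetical coordinates
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "white": (255, 255, 255),
    "black": (0, 0, 0), "gray": (128, 128, 128),
}

def two_color_combination(image_rgb: np.ndarray) -> tuple:
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=7)
    names, coords = list(REFERENCE_RGB), np.array(list(REFERENCE_RGB.values()))
    # snap each cluster centre to the closest reference color
    snapped = [names[np.argmin(np.linalg.norm(coords - c, axis=1))]
               for c in km.cluster_centers_]
    # aggregate pixel counts per reference color and take the top two
    totals = {}
    for name, n in zip(snapped, counts):
        totals[name] = totals.get(name, 0) + int(n)
    top2 = sorted(totals, key=totals.get, reverse=True)[:2]
    return tuple(sorted(top2))

def reweight(softmax_scores: np.ndarray, combo, combo_class_freq: dict) -> np.ndarray:
    # combo_class_freq[combo] holds the per-class frequency of this color pair in
    # the training data (the "Python dictionary" mentioned in the abstract).
    freq = np.asarray(combo_class_freq.get(combo, np.ones_like(softmax_scores)), float)
    weighted = softmax_scores * (1.0 + np.log1p(freq))   # log-based weighting (assumed form)
    return weighted / weighted.sum()
```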

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied, and the research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks that recognize interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data, such as the accelerometer, magnetic field, and gyroscope, is proposed. Accompanying status was defined as a subset of user interaction behavior, covering whether the user is accompanied by an acquaintance at close range and whether the user is actively conversing with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks is proposed for classifying accompanying and conversation. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation is introduced. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors, normalization was performed for each x, y, z axis value of the sensor data, and sequence data were generated with the sliding window method. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, in order to preserve the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (Adam) optimization algorithm, and the mini-batch size was set to 128. Dropout was applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decreased exponentially by 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected for a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that allow models trained on the training data to be transferred to evaluation data following a different distribution. We expect this to yield a model with robust recognition performance against changes in data that were not considered during training.
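A minimal Keras sketch of the described CNN-LSTM pipeline is given below. The window length, filter counts, kernel sizes, dropout rate, and steps-per-epoch value are assumptions; the abstract itself specifies three convolutional layers without pooling, two 128-cell LSTM layers, dropout on the LSTM input, a softmax output with cross-entropy loss, Adam with an initial learning rate of 0.001 decayed by 0.99 per epoch, and a mini-batch size of 128.

```python
# Hedged sketch of the described CNN-LSTM pipeline in Keras. Window length,
# filter counts, and kernel sizes are illustrative assumptions.
import tensorflow as tf

WINDOW, CHANNELS, NUM_CLASSES = 128, 9, 2   # 9 = accel + magnetometer + gyro (x, y, z)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
    tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
    tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),  # no pooling, per the abstract
    tf.keras.layers.Dropout(0.5),                  # dropout on the LSTM input
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

steps_per_epoch = 500                              # hypothetical; depends on dataset size
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    0.001, decay_steps=steps_per_epoch, decay_rate=0.99, staircase=True)

model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=128, epochs=..., validation_data=...)
```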

Research Trends on Estimation of Soil Moisture and Hydrological Components Using Synthetic Aperture Radar (SAR를 이용한 토양수분 및 수문인자 산출 연구동향)

  • CHUNG, Jee-Hun;LEE, Yong-Gwan;KIM, Seong-Joon
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.3 / pp.26-67 / 2020
  • Synthetic Aperture Radar (SAR) can image the Earth's surface day or night, regardless of weather conditions. Because it can be used to retrieve hydrological factors such as soil moisture and groundwater, its importance in the water resources field is steadily increasing. SAR was first mounted on satellites in the 1970s; about 15 or more SAR satellites have been launched as of 2020, and around 10 more will be launched within the next five years. Recently, various SAR technologies have been developed and applied, including wider observation swaths and higher resolution, multiple polarizations and frequencies, and diversified observation angles. This paper briefly reviews the history of SAR systems and surveys studies that estimate soil moisture and other hydrological components. The hydrological components that can currently be estimated with SAR satellites include soil moisture, subsurface groundwater discharge, precipitation, snow cover area, leaf area index (LAI), and normalized difference vegetation index (NDVI); among them, soil moisture has been studied in 17 countries across South Korea, North America, Europe, and India, using the physically based Integral Equation Model (IEM) and artificial-intelligence-based artificial neural networks (ANN). RADARSAT-1, ENVISAT ASAR, and ERS-1/2 were the most widely used satellites, but their operation has ended, and use of RADARSAT-2, Sentinel-1, and SMAP, which are currently in operation, is gradually increasing. Since Korea is developing a medium-sized satellite for water resources and water-related disasters equipped with a C-band SAR, with launch targeted for 2025, research on estimating various hydrological components with SAR is expected to become active.

Study on the Direction of Universal Big Data and Big Data Education-Based on the Survey of Big Data Experts (보편적 빅데이터와 빅데이터 교육의 방향성 연구 - 빅데이터 전문가의 인식 조사를 기반으로)

  • Park, Youn-Soo;Lee, Su-Jin
    • Journal of The Korean Association of Information Education / v.24 no.2 / pp.201-214 / 2020
  • Big data is gradually expanding into diverse fields as data-related legislation changes, and interest in big data education is growing with it. However, utilizing big data requires a high level of knowledge and skills, and training takes a long time and costs a great deal. This study seeks to define "universal big data", the big data actually used across a wide range of industrial fields, and on that basis to propose a paradigm for big data education for college students. We surveyed big data professionals about how they define big data and how they perceive it. According to the survey, big data professionals recognize a broader definition of big data than the one used in computer science, and they report that processing big data does not necessarily require big data processing frameworks or high-performance computers. This means that big data education should focus on analysis and application methods for universal big data rather than on computer science (engineering) knowledge and skills. Based on our research, we propose universal big data education under this new paradigm.

A Development of Ontology-Based Law Retrieval System: Focused on Railroad R&D Projects (온톨로지 기반 법령 검색시스템의 개발: 철도·교통 분야 연구개발사업을 중심으로)

  • Won, Min-Jae;Kim, Dong-He;Jung, Hae-Min;Lee, Sang Keun;Hong, June Seok;Kim, Wooju
    • The Journal of Society for e-Business Studies / v.20 no.4 / pp.209-225 / 2015
  • Research and development projects in the railroad domain differ from those in other domains in their close relationship with laws. Cases have been reported in which new technologies from R&D projects could not be commercialized because relevant laws restricted them. This problem arises because researchers do not know exactly which laws can affect the results of their R&D projects. To address it, we propose a model for a law retrieval system that researchers in railroad R&D projects can use to find related legislation. The input to the system is a research plan describing the main contents of a project; the system then returns the laws related to the project together with rankings assigned by scores we developed, where a law's ranking indicates its priority for review. Using this system, researchers can search for the laws that may affect their R&D project throughout every stage of the project cycle and obtain a list of laws to consider before the project ends. As a result, they can adjust the project's direction by checking the law list and avoid rendering their work unusable.
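The ranking idea (research plan in, scored list of laws out) can be illustrated with a very small sketch. The concept-to-law mapping, the weights, and the naive substring matching below are hypothetical placeholders, not the paper's ontology or scoring scheme.

```python
# Hedged sketch: map terms from a research plan to laws through a concept-to-law
# mapping and rank laws by accumulated score. Mapping and weights are invented
# placeholders for illustration only.
from collections import defaultdict

CONCEPT_TO_LAWS = {                                   # hypothetical ontology fragment
    "railway signal": [("Railroad Safety Act", 1.0)],
    "wireless communication": [("Radio Waves Act", 0.8), ("Railroad Safety Act", 0.4)],
    "platform screen door": [("Railroad Construction Act", 0.7)],
}

def rank_laws(research_plan: str):
    scores = defaultdict(float)
    text = research_plan.lower()
    for concept, laws in CONCEPT_TO_LAWS.items():
        if concept in text:                           # naive matching for illustration
            for law, weight in laws:
                scores[law] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_laws("This project develops a wireless communication based railway signal system."))
```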

Intelligent Railway Detection Algorithm Fusing Image Processing and Deep Learning for the Prevent of Unusual Events (철도 궤도의 이상상황 예방을 위한 영상처리와 딥러닝을 융합한 지능형 철도 레일 탐지 알고리즘)

  • Jung, Ju-ho;Kim, Da-hyeon;Kim, Chul-su;Oh, Ryum-duck;Ahn, Jun-ho
    • Journal of Internet Computing and Services / v.21 no.4 / pp.109-116 / 2020
  • With the advent of high-speed rail, railways are among the most frequently used means of transportation at home and abroad. They are also environmentally attractive, with lower carbon dioxide emissions and higher energy efficiency than other modes of transport. As interest in railways increases, railway safety has become an important concern. In particular, visual abnormalities occur when obstacles such as animals or people suddenly appear in front of a train, and detecting the rail track is a basic prerequisite for preventing such accidents. Images can be collected by cameras installed along the railway, and rails can be detected either by traditional image processing or by a deep learning algorithm. The traditional method has difficulty detecting rails accurately because of the various noise around them, whereas the deep learning algorithm can detect them accurately; the proposed approach combines the two algorithms to detect the exact rail. The accuracy of the proposed railway rail detection algorithm is evaluated on the collected data.
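One plausible reading of the image-processing/deep-learning fusion is sketched below: a classical Canny-plus-Hough stage proposes rail-like line segments, and a learned segmentation model confirms them. The thresholds, input resolution, and placeholder segmentation model are assumptions, not the paper's tuned pipeline.

```python
# Hedged sketch of a fusion of classical line detection and a learned rail
# segmentation model. Thresholds and the model are illustrative assumptions.
import cv2
import numpy as np
import tensorflow as tf

def classical_rail_lines(frame_bgr: np.ndarray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=120, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]

def fused_rail_mask(frame_bgr: np.ndarray, seg_model: tf.keras.Model):
    # deep-learning stage: pixel-wise rail probability from a trained model
    inp = cv2.resize(frame_bgr, (256, 256))[None].astype(np.float32) / 255.0
    prob = seg_model.predict(inp, verbose=0)[0, ..., 0]
    prob = cv2.resize(prob, frame_bgr.shape[1::-1])
    # classical stage: rasterize the detected line segments into a mask
    mask = np.zeros(frame_bgr.shape[:2], np.uint8)
    for x1, y1, x2, y2 in classical_rail_lines(frame_bgr):
        cv2.line(mask, (x1, y1), (x2, y2), 255, 5)
    # keep only classical lines that the network also considers rail pixels
    return ((mask > 0) & (prob > 0.5)).astype(np.uint8) * 255
```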

A Benchmark of AI Application based on Open Source for Data Mining Environmental Variables in Smart Farm (스마트 시설환경 환경변수 분석을 위한 Open source 기반 인공지능 활용법 분석)

  • Min, Jae-Ki;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2017.04a / pp.159-159 / 2017
  • A smart controlled environment is a facility-based production environment in which information-communication and data-analysis technologies are being introduced into various forms of agriculture, most notably horticulture and livestock farming. To use the vast amount of growth and environmental data produced by smart facilities, which have recently proliferated in hardware terms, correctly and appropriately, analysis techniques differentiated from those of general industrial sites are required. Mechanically applying big-data processing techniques developed in software engineering to agricultural big data can have limitations, because the various environmental variables inside and outside the facility are very difficult targets for predictive modeling owing to the complexity, irreversibility, unspecific nature, and irregular patterns of the time-series data. In this study, TensorFlow (www.tensorflow.org), neural-network research software that has recently attracted rapidly growing interest, and OpenNN (www.openn.net), a representative open-source library, were applied to analyzing the correlations among environmental variables of a smart facility environment. Regarding operating environments, TensorFlow can be used on Linux (Ubuntu 16.04.4), Mac OS X (El Capitan 10.11), and Windows (x86 compatible), whereas OpenNN provides the full source code rather than binaries for specific platforms, so it can be used after compiling a binary in the target environment. As for development languages, TensorFlow uses Python as its primary language, and development is carried out inside a Python (v2.7 or v3.N) virtual environment. A point to note is that, owing to these development-environment constraints, high-speed computation, one of TensorFlow's main advantages, is provided only in some operating environments: GPU (Graphics Processing Unit) hardware acceleration is available on the Linux operating system. Because it runs in a virtual development environment, real-time information processing is limited, and this must be taken into account. Meanwhile, the recently released (March 2017) TensorFlow API r1.0 newly supports the Go language alongside Python, C++, and Java, greatly widening the range of developers who can use it. OpenNN is based on C++ and can be used in any development environment that supports a C++ compiler; its distinctive feature is that it partly overcomes the lack of hardware acceleration by interoperating with clustering platforms. Using these two packages, a large-scale linear model was experimentally applied (with hourly, daily, and weekly segmentation) to temperature, humidity, illuminance, and CO2 data acquired inside a strawberry greenhouse in Eumseong-gun, Chungbuk, from February to May 2016, and predictive modeling of environmental variables in adjacent segments was performed. Under identical training conditions, TensorFlow was markedly superior in development time and training speed; to achieve comparable performance with OpenNN, parallel clustering technology would have to be employed. Further research is needed to find alternatives to neural-network modeling techniques limited to offline batch processing and to high-performance computing hardware that cannot be deployed in the field.
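The large-scale linear modeling step can be sketched with the current Keras API (rather than the r1.0 API discussed above): predict an environmental variable of the next time segment from the current segment's variables. The CSV layout and column names are assumptions for illustration.

```python
# Hedged sketch: a linear model predicting the next segment's temperature from
# the current segment's greenhouse variables. File and column names are assumed.
import pandas as pd
import tensorflow as tf

df = pd.read_csv("greenhouse_2016.csv")              # temp, humidity, light, co2 columns
features = df[["temp", "humidity", "light", "co2"]].values[:-1]
target = df["temp"].values[1:]                       # next-segment temperature

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])  # linear model
model.compile(optimizer="adam", loss="mse")
model.fit(features, target, epochs=50, batch_size=32, validation_split=0.2, verbose=0)
```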


Device Mutual Authentication and Key Management Techniques in a Smart Home Environment (스마트 홈 환경에서 디바이스 상호 인증 및 키 관리 기법)

  • Min, So-Yeon;Lee, Jae-Seung
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.10 / pp.661-667 / 2018
  • Recently, the smart home market has been growing with the development of wireless communication technology and sensor devices, and a wide variety of devices are being used. Such an IoT environment collects vast amounts of device information for intelligent services, delivers services based on user information, controls various devices, and provides communication between heterogeneous devices. With this growth, however, various security threats are emerging in the smart home environment. Proofpoint and HP have warned about damage cases and the severity of security vulnerabilities in smart home environments, and breaches in various environments have been reported. Therefore, this paper studies a secure mutual authentication method between the smart nodes used in a smart home, to address the security problems that can arise there. The proposed scheme uses random numbers and frequently updated session keys and secret keys, and its security and safety are evaluated against well-known vulnerabilities of IoT environments and sensor devices, such as sniffing and spoofing, as well as for device mutual authentication. In addition, a comparison with existing smart home security protocols confirms that it is superior in security and key management.
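The general idea of random numbers plus frequently updated session keys can be illustrated with a nonce-based challenge-response sketch between two devices sharing a pre-distributed secret. This is not the paper's protocol; the message layout, HMAC construction, and key-derivation step are assumptions for illustration.

```python
# Hedged sketch: nonce-based mutual authentication between two devices sharing a
# pre-distributed secret, with a fresh session key derived from both nonces.
import hmac, hashlib, secrets

SHARED_SECRET = secrets.token_bytes(32)      # pre-shared during device registration (assumed)

def tag(key: bytes, *parts: bytes) -> bytes:
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Device A -> B: nonce_a ; B -> A: nonce_b, proof_b ; A -> B: proof_a
nonce_a, nonce_b = secrets.token_bytes(16), secrets.token_bytes(16)
proof_b = tag(SHARED_SECRET, b"B", nonce_a, nonce_b)     # B proves knowledge of the secret
proof_a = tag(SHARED_SECRET, b"A", nonce_b, nonce_a)     # A proves knowledge of the secret

# Each side verifies the other's proof with a constant-time comparison
assert hmac.compare_digest(proof_b, tag(SHARED_SECRET, b"B", nonce_a, nonce_b))
assert hmac.compare_digest(proof_a, tag(SHARED_SECRET, b"A", nonce_b, nonce_a))

# Fresh session key for this pairing, updated on every authentication run
session_key = tag(SHARED_SECRET, b"session", nonce_a, nonce_b)
```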

T-Commerce Sale Prediction Using Deep Learning and Statistical Model (딥러닝과 통계 모델을 이용한 T-커머스 매출 예측)

  • Kim, Injung;Na, Kihyun;Yang, Sohee;Jang, Jaemin;Kim, Yunjong;Shin, Wonyoung;Kim, Deokjung
    • Journal of KIISE / v.44 no.8 / pp.803-812 / 2017
  • T-commerce is a technology-convergence service in which users make purchases through data broadcasting technology on bi-directional digital TVs. To achieve the best revenue under constraints on the number of channels and the variety of goods for sale, broadcast schedules must be organized to maximize expected sales, taking into account the selling power of each product in each time slot. To this end, this paper proposes a method to predict the sales of a product when it is assigned to a given time slot. The proposed method predicts the sales for a time slot given the week of the year and the weather on the target day, and it additionally combines a statistical prediction model based on SVD (Singular Value Decomposition) to mitigate the sparsity problem caused by bias in the sales records. In experiments on the sales data of W-shopping, a T-commerce company, the proposed method achieved an NMAE (Normalized Mean Absolute Error) of 0.12 between the predicted and actual sales, confirming its effectiveness. The proposed method has been applied in practice to W-shopping's T-commerce system and is used for broadcast scheduling.
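The SVD component can be sketched as a low-rank smoothing of a sparse product-by-time-slot sales matrix, blended with a separate deep-learning prediction. The rank, imputation, and blend weight below are assumptions, not the deployed system's parameters.

```python
# Hedged sketch: low-rank SVD smoothing of a sparse sales matrix, blended with a
# separate deep-learning forecast. Rank and blend weight are assumed values.
import numpy as np

def svd_smooth(sales: np.ndarray, rank: int = 10) -> np.ndarray:
    """Low-rank reconstruction of a (products x time slots) sales matrix."""
    filled = np.where(np.isnan(sales), np.nanmean(sales), sales)  # crude imputation of gaps
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def blended_forecast(dl_pred: np.ndarray, sales_history: np.ndarray, alpha: float = 0.5):
    # alpha weights the deep-learning prediction against the SVD-smoothed history
    return alpha * dl_pred + (1.0 - alpha) * svd_smooth(sales_history)
```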