• Title/Summary/Keyword: 채널활용도 (channel utilization)


Settlement Instrumentation of Greenhouse Foundation in Reclaimed Land (간척지 온실 기초의 침하량 검토)

  • Choi, Man Kwon;Yun, Sung Wook;Yu, In Ho;Lee, Jong-Won;Lee, Si Young;Yoon, Yong Cheol
    • Journal of Bio-Environment Control
    • /
    • v.24 no.2
    • /
    • pp.85-92
    • /
    • 2015
  • This study examined the settlement of a 1-2W type greenhouse with a timber pile foundation, which was recently established on Gyehwa-do reclaimed land, in order to obtain base data for the construction of greenhouses on reclaimed land. The results of this study are as follows. Upon investigation of the ground, it was confirmed that there was no soft rock stratum (bedrock), and that a sedimentary stratum existed under the fill deposit, which is estimated to have been reclaimed during the site renovation. It was also found that a weathered zone was located under the fill deposit and sedimentary stratum, and that the soil texture of the ground consisted of clay mixed with sand, silty clay, and granite gneiss, in that order, regardless of borehole. In addition, the underground water level was 0.3 m below ground, regardless of borehole. Despite slight differences, the settlement of the greenhouse foundation and timber piles increased over time, irrespective of the interior or exterior of the greenhouse or of the measurement sites (channels). With regard to the pillars inside the greenhouse, except in the case of CH-2, the data at a site located on the side wall of the greenhouse (wind barrier side) indicated vibrations of relatively larger amplitude. Moreover, the settlement showed a significant increase during a certain period, which was subsequently somewhat reversed. Based on these phenomena, it was verified that the settlement at each site in the interior and exterior of the greenhouse ranged between 1.0 and 7.5 mm during the measurement period, except in the case of CH-1. The results of the regression analysis indicated good correlation, with the coefficient of determination by site inside the greenhouse ranging between 0.6362 and 0.9340. The coefficient of determination on the exterior of the greenhouse ranged between 0.6046 and 0.8822, lower than inside the greenhouse but still indicating significant correlation.
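The regression analysis mentioned above can be illustrated with a minimal sketch: fit settlement against elapsed time for one channel and report the coefficient of determination. The channel, time points, and settlement values below are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch: fit settlement (mm) against elapsed time for one measurement
# channel and report the coefficient of determination (R^2). Values are hypothetical.
import numpy as np

def fit_settlement(days, settlement_mm):
    """Return slope, intercept, and R^2 of a linear fit settlement = a*days + b."""
    a, b = np.polyfit(days, settlement_mm, deg=1)
    predicted = a * days + b
    ss_res = np.sum((settlement_mm - predicted) ** 2)
    ss_tot = np.sum((settlement_mm - settlement_mm.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical readings for one channel (e.g. CH-2) over the monitoring period
days = np.array([0, 30, 60, 90, 120, 150], dtype=float)
settlement_mm = np.array([0.0, 1.1, 2.4, 3.9, 5.2, 6.8])
slope, intercept, r2 = fit_settlement(days, settlement_mm)
print(f"slope={slope:.3f} mm/day, intercept={intercept:.2f} mm, R^2={r2:.4f}")
```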

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and monetary damages occur more and more frequently. In this study, we propose a method to analyze which sentences and documents posted on SNS are related to financial fraud. First of all, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; among these, this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers, who decided whether each item was related to cybercriminality, particularly financial fraud. We then selected the vocabulary items related to nominals and symbols as keywords. With the selected keywords, we searched and collected data from web sources such as Twitter, news, and blogs, and more than 820,000 articles were collected. The collected articles were refined through preprocessing and made into learning data. The preprocessing consists of a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, each sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the valid part-of-speech selection step, only two parts of speech, nouns and symbols, are retained. Since nouns refer to things, they express the intent of a message better than other parts of speech; moreover, the more illegal a text is, the more frequently symbols are used. Each selected item is labeled 'legal' or 'illegal'; to turn the selected data into learning data through the preprocessing process, it is necessary to classify whether each item is legitimate or not. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set and a test data set. In this study, we set the learning data to 70% and the test data to 30%. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and the Collective Intelligence method, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal visit sales, which is clearly superior to that of Term Frequency, MLE, and the others. Hence, the results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for managing crises signaled by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying SVM-like discrimination algorithms to text analysis, and to practitioners in the fields of brand management and opinion mining.
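The classification step described above can be sketched roughly with scikit-learn: documents are vectorized into a document-term matrix, split 70/30, and classified with an SVM using gamma = 0.5 and cost (C) = 10. Korean morphological analysis, stop-word removal, and part-of-speech filtering are omitted here, and the example texts and labels are placeholders, not the study's data or its toolchain.

```python
# Sketch of the discrimination step: document-term matrix -> 70/30 split -> SVM
# with gamma=0.5 and C(cost)=10, scored by overall accuracy. Texts/labels are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

texts = [
    "same day daechul no credit check unlimited limit",   # illegal-style ad (placeholder)
    "private sachae quick cash wired immediately",        # illegal-style ad (placeholder)
    "lowest rate daechul anyone approved contact now",     # illegal-style ad (placeholder)
    "sachae loan no documents instant transfer",           # illegal-style ad (placeholder)
    "bank announces new household daechul policy",         # ordinary news (placeholder)
    "column on managing personal finance and savings",     # ordinary post (placeholder)
    "government briefing on interest rate outlook",        # ordinary news (placeholder)
    "review of a book about household economics",          # ordinary post (placeholder)
]
labels = ["illegal"] * 4 + ["legal"] * 4

X = CountVectorizer().fit_transform(texts)                # document-term matrix
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=42)
clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```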

A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.133-148
    • /
    • 2014
  • Recently, with the advent of various information channels, the amount of available information has continued to grow. The main cause of this phenomenon can be found in the significant increase of unstructured data, as the use of smart devices enables users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, users' opinions and a variety of information are clearly expressed in text data such as news, reports, papers, and various articles. Thus, active attempts have been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. These share certain important characteristics; for example, they not only use text documents as input data, but also use many natural language processing techniques such as filtering and parsing. Therefore, opinion mining is usually recognized as a sub-concept of text mining, or, in many cases, the two terms are used interchangeably in the literature. Suppose that the purpose of a certain classification analysis is to predict a positive or negative opinion contained in some documents. If we focus on the classification process, the analysis can be regarded as a traditional text mining case. However, if we observe that the target of the analysis is a positive or negative opinion, the analysis can be regarded as a typical example of opinion mining. In other words, two methods (i.e., text mining and opinion mining) are available for opinion classification. Thus, in order to distinguish between the two, a precise definition of each method is needed. In this paper, we found that it is very difficult to distinguish between the two methods clearly with respect to the purpose of analysis and the type of results. We conclude that the most definitive criterion to distinguish text mining from opinion mining is whether an analysis utilizes any kind of sentiment lexicon. We first established two prediction models, one based on opinion mining and the other on text mining. Next, we compared the main processes used by the two prediction models. Finally, we compared their prediction accuracy on 2,000 movie reviews. The results revealed that the prediction model based on opinion mining showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion mining based model, the prediction accuracy for documents with strong certainty was higher than that for documents with weak certainty. Most of all, opinion mining has a meaningful advantage in that it can reduce learning time dramatically, because a sentiment lexicon generated once can be reused in a similar application domain. Additionally, the classification results can be clearly explained by using a sentiment lexicon. This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small number of movie reviews. Second, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining analysis and opinion mining analysis for opinion classification. In future research, a more precise evaluation of the two methods should be made through intensive experiments.
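The distinguishing criterion discussed above, use of a sentiment lexicon, can be illustrated with a minimal sketch of lexicon-based opinion classification: each review is scored by counting hits against positive and negative lexicon entries and labeled by the sign of the score. The lexicon entries and reviews below are hypothetical, not the study's lexicon or its 2,000 movie reviews.

```python
# Minimal lexicon-based opinion classification: score = positive hits - negative hits.
# Lexicon entries and reviews are hypothetical placeholders.
POSITIVE = {"great", "moving", "masterpiece", "enjoyable"}
NEGATIVE = {"boring", "dull", "waste", "disappointing"}

def classify(review: str) -> str:
    tokens = review.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score >= 0 else "negative"

reviews = ["A moving masterpiece, truly enjoyable",
           "Boring plot and a waste of two hours"]
for r in reviews:
    print(classify(r), "<-", r)
```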

Estimation of Precipitable Water from the GMS-5 Split Window Data (GMS-5 Split Window 자료를 이용한 가강수량 산출)

  • 손승희;정효상;김금란;이정환
    • Korean Journal of Remote Sensing
    • /
    • v.14 no.1
    • /
    • pp.53-68
    • /
    • 1998
  • Observation of the behavior of hydrometeors in the atmosphere is important to understand weather and climate. With conventional observations, we can obtain the distribution of water vapor at only a limited number of points on the earth. In this study, precipitable water has been estimated from the split window channel data of GMS-5 based on the technique developed by Chesters et al. (1983). To retrieve the precipitable water, a water vapor absorption parameter depending on the filter function of the sensor was derived using regression analysis between the split window channel data and the radiosonde data observed at the Osan, Pohang, Kwangju and Cheju stations over 4 months. The air temperature at 700 hPa from the Global Spectral Model of the Korea Meteorological Administration (GSM/KMA) was used as the mean air temperature for the single-layer radiation model. The retrieved precipitable water for the period from August 1996 through December 1996 was compared with radiosonde data. It is shown that the root mean square differences between the radiosonde observations and the GMS-5 retrievals range from 0.65 g/cm² to 1.09 g/cm², with a correlation coefficient of 0.46 on an hourly basis. The monthly distribution of precipitable water from GMS-5 represents the large-scale pattern fairly well. Precipitable water is produced four times a day at the Korea Meteorological Administration in the form of grid-point data with 0.5-degree lat./lon. resolution. The data can be used in the objective analysis for numerical weather prediction and to increase the accuracy of humidity analysis, especially under clear-sky conditions. The data are also a useful complement to existing data sets for climatological research. However, a higher correlation between the radiosonde observations and the GMS-5 retrievals is necessary for operational applications.
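A rough sketch of a single-layer split-window retrieval in the spirit of Chesters et al. (1983) is given below: precipitable water is estimated from the two split window brightness temperatures, the 700 hPa mean air temperature, and channel water vapor absorption parameters that would be obtained by regression against radiosonde data. The functional form and coefficient values here are illustrative assumptions, not the operational KMA implementation.

```python
# Illustrative single-layer split-window retrieval of precipitable water (g/cm^2)
# from brightness temperatures T11, T12, mean air temperature Ta (700 hPa), and
# assumed absorption parameters k11, k12. Values and exact form are assumptions.
import math

def precipitable_water(t11_k, t12_k, ta_k, k11, k12, sat_zenith_deg=0.0):
    mu = math.cos(math.radians(sat_zenith_deg))       # slant-path correction
    ratio = (t11_k - ta_k) / (t12_k - ta_k)            # single-layer radiance ratio
    return mu / (k12 - k11) * math.log(ratio)

# Hypothetical values: T11=288 K, T12=286 K, Ta=278 K, k11=0.10, k12=0.18
print(precipitable_water(288.0, 286.0, 278.0, 0.10, 0.18, sat_zenith_deg=30.0))
```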

Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.493-500
    • /
    • 2021
  • In this paper, we propose the development of a deep learning structure to improve the quality of polygonal containers. The deep learning structure consists of a convolution layer, a bottleneck layer, a fully connected layer, and a softmax layer. The convolution layer obtains a feature image by performing 3x3 convolution operations on the input image or on the feature image of the previous layer with several feature filters. The bottleneck layer selects only the optimal features from the feature image extracted through the convolution layers, reducing the channels with a 1x1 convolution and ReLU and then performing a 3x3 convolution with ReLU. The global average pooling operation performed after the bottleneck layer reduces the size of the feature image while keeping only the optimal features. The fully connected stage outputs the result through six fully connected layers. The softmax layer takes the weighted values between the input-layer nodes and the target nodes and converts them into values between 0 and 1 through an activation function. After learning is completed, the recognition process classifies non-circular glass bottles by acquiring images with a camera, detecting their positions, and classifying them with deep learning, as in the learning process. In order to evaluate the performance of the proposed deep learning structure, an experiment at an authorized testing institute showed a good/defective discrimination accuracy of 99%, on a par with the world's highest level. Inspection time averaged 1.7 seconds, which is within the operating time standards of production processes that use machine vision systems for non-circular containers. Therefore, the effectiveness of the deep learning structure proposed in this paper to improve the quality of polygonal containers was proven.
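A minimal PyTorch sketch of the described structure is given below: 3x3 convolution layers, a bottleneck (1x1 convolution with ReLU followed by a 3x3 convolution with ReLU), global average pooling, six fully connected layers, and a softmax output. The channel counts, layer widths, and number of classes are assumptions for illustration, as the abstract does not specify them.

```python
# Sketch of the described structure with assumed layer sizes (not the paper's exact network).
import torch
import torch.nn as nn

class PolygonalContainerNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            # bottleneck: 1x1 conv reduces channels, then 3x3 conv
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.classifier = nn.Sequential(               # six fully connected layers
            nn.Linear(64, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 32), nn.ReLU(inplace=True),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.gap(self.features(x)).flatten(1)
        return torch.softmax(self.classifier(x), dim=1)   # softmax output layer

print(PolygonalContainerNet()(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2])
```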

Popularization of Marathon through Social Network Big Data Analysis : Focusing on JTBC Marathon (소셜 네트워크 빅데이터 분석을 통한 마라톤 대중화 : JTBC 마라톤대회를 중심으로)

  • Lee, Ji-Su;Kim, Chi-Young
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.3
    • /
    • pp.27-40
    • /
    • 2020
  • The marathon has long been established as a representative lifestyle sport for all ages. With the recent spread of the work-life balance trend across society, the marathon, with its relatively low barrier to entry, is gaining popularity among young people in their 20s and 30s. By analyzing the issues and related words of the marathon event, we examine through keywords the sportainment elements that make the event popular among young people and suggest a development plan for a differentiated event. In order to analyze keywords and related words, blogs, cafes, and news provided by Naver and Daum were selected as analysis channels, and 'JTBC Marathon' and 'Culture' were extracted as key words for the data search. The data analysis period was limited to the three months from August 13, 2019, when applications for participation in the 2019 JTBC Marathon opened, to November 13, 2019. For data collection and analysis, frequency and matrix data were extracted through the social matrix program Textom. In addition, the strength of the relationships was quantified by analyzing the connection structure and the degree centrality between words. Although the marathon is an individual sport, young people share the common denominator of 'running' and form new cultural groups called 'running crews' with other young people. Through this, it was found that a marathon culture has formed in which the competition serves as a festival venue where people train and participate together, moving away from the image of the marathon as a lonely fight with oneself.
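The word-network step described above can be illustrated with a small networkx sketch: build a co-occurrence network from keyword pairs and quantify the connection structure with degree centrality and density. The word pairs and weights are hypothetical stand-ins for the Textom matrix extracted in the study.

```python
# Hypothetical keyword co-occurrence network with degree centrality and density.
import networkx as nx

edges = [("JTBC마라톤", "러닝크루", 120), ("JTBC마라톤", "완주", 80),
         ("러닝크루", "축제", 45), ("완주", "기록", 30), ("러닝크루", "문화", 60)]

G = nx.Graph()
G.add_weighted_edges_from(edges)
print("degree centrality:", nx.degree_centrality(G))
print("network density:", nx.density(G))
```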

Development and Assessment of a Non-face-to-face Obesity-Management Program During the Pandemic (팬데믹 시기 비대면 비만관리 프로그램의 개발 및 평가)

  • Park, Eun Jin;Hwang, Tae-Yoon;Lee, Jung Jeung;Kim, Keonyeop
    • Journal of agricultural medicine and community health
    • /
    • v.47 no.3
    • /
    • pp.166-180
    • /
    • 2022
  • Objective: This study evaluated the effects of a non-face-to-face obesity management program implemented during the pandemic. Methods: The non-face-to-face obesity management program was developed using the intervention mapping protocol (IMP). The program was put into effect over the course of eight weeks, from September 14 to November 13, 2020, in 48 overweight and obese adults who applied to participate through the Daegu Citizen Health Support Center. Results: The IMP proceeded as follows: first, a needs assessment was conducted; second, goals for behavior change were set; third, evidence-based intervention methods and implementation strategies were selected; fourth, the program was designed and validated; fifth, the program was run; and sixth, the results were evaluated. After participation in the program, average weight was reduced by 1.2 kg, average waist circumference (WC) decreased by 3 cm, and average BMI decreased by 0.8 kg/m² (p<0.05). The health behavior survey showed positive, statistically significant improvements (p<0.05) in lifestyle factors before and after participation, including average daily calorie intake, fruit intake, and time spent in walking exercise. The satisfaction level in the program process evaluation was high, at 4.57±0.63 points. Conclusion: The non-face-to-face obesity management program was useful for obesity management for adults in communities, as it enables individual counseling by experts and active participation through self-measurement and recording of body data without restrictions of time and place. However, the program had some restrictions on participation that may relate to the age of the subjects, such as skill and comfort in using a mobile app.

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.207-221
    • /
    • 2023
  • This study suggests deep neural network models for estimating air temperature from the Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The temperature at 1.5 m above the ground affects not only daily life but also weather warnings such as cold and heat waves. There have been many studies estimating the air temperature from the land surface temperature (LST) retrieved from satellites, because the air temperature has a strong relationship with the LST. However, the LST algorithm, a Level 2 output of GK-2A, works only for clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates the air temperature from L1B data, which are radiometrically and geometrically calibrated from the raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the air temperature for the model outputs is used to evaluate the models. The in-situ air temperature data from 95 stations comprised 2,496,634 samples, and the proportions of these data paired with LST and with L1B were 42.1% and 98.4%, respectively. Data from 2020 and 2021 were used for training and data from 2022 for validation. The DNN model is designed with an input layer taking 16 channels and four hidden fully connected layers to estimate the air temperature. Using the 16 bands of L1B, the DNN achieved an RMSE of 2.22℃, better than the baseline model with an RMSE of 3.55℃ under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃. This suggests that the DNN is able to overcome cloud effects. However, it showed different characteristics in the seasonal and hourly analyses, and solar information needs to be appended as an input to make a general DNN model, because the summer and winter seasons showed low coefficients of determination with high standard deviations.
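A minimal PyTorch sketch of the DNN described above follows: 16 L1B channel values as input, four hidden fully connected layers, and a single output node for the 1.5 m air temperature, evaluated with RMSE. The hidden-layer widths and training details are assumptions, as the abstract does not specify them.

```python
# Sketch of a 16-channel -> 4 hidden fully connected layers -> air temperature DNN.
# Layer widths and the sample data are assumptions/placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # regressed 1.5 m air temperature
)

def rmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return torch.sqrt(torch.mean((pred - target) ** 2))

# Hypothetical mini-batch: 8 pixels x 16 calibrated channel values
x = torch.randn(8, 16)
y_true = torch.randn(8, 1) * 10 + 15       # placeholder in-situ temperatures
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = rmse(model(x), y_true)
loss.backward()
optimizer.step()
print("batch RMSE:", float(loss))
```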

Development of High-Resolution Fog Detection Algorithm for Daytime by Fusing GK2A/AMI and GK2B/GOCI-II Data (GK2A/AMI와 GK2B/GOCI-II 자료를 융합 활용한 주간 고해상도 안개 탐지 알고리즘 개발)

  • Ha-Yeong Yu;Myoung-Seok Suh
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_3
    • /
    • pp.1779-1790
    • /
    • 2023
  • Satellite-based fog detection algorithms are being developed to detect fog in real time over a wide area, with a focus on the Korean Peninsula (KorPen). The GEO-KOMPSAT-2A/Advanced Meteorological Imager (GK2A/AMI, GK2A) satellite offers excellent temporal resolution (10 min) and a spatial resolution of 500 m, while GEO-KOMPSAT-2B/Geostationary Ocean Color Imager-II (GK2B/GOCI-II, GK2B) provides excellent spatial resolution (250 m) but poor temporal resolution (1 h), with only visible channels. To enhance the fog detection capability (10 min, 250 m), we developed a GK2AB fog detection algorithm (FDA) that fuses GK2A and GK2B. The GK2AB FDA comprises three main steps. First, the Korea Meteorological Satellite Center's GK2A daytime fog detection algorithm is utilized to detect fog, considering various optical and physical characteristics. In the second step, GK2B data are extrapolated to 10-min intervals by matching them to GK2A pixels at the closest time and location when GK2B observes the KorPen. For reflectance, GK2B normalized visible (NVIS) is corrected using GK2A NVIS of the same time, considering the difference in wavelength range and observation geometry, and GK2B NVIS is extrapolated at 10-min intervals using the 10-min changes in GK2A NVIS. In the final step, the extrapolated GK2B NVIS, solar zenith angle, and outputs of GK2A FDA are utilized as input data for machine learning (decision tree) to develop the GK2AB FDA, which detects fog at a resolution of 250 m and a 10-min interval based on geographical locations. Six and four cases were used for the training and validation of GK2AB FDA, respectively. Quantitative verification of GK2AB FDA utilized ground observation data on visibility, wind speed, and relative humidity. Compared to GK2A FDA, GK2AB FDA exhibited a fourfold increase in spatial resolution, resulting in more detailed discrimination between fog and non-fog pixels. In general, irrespective of the validation method, the probability of detection (POD) and the Hanssen-Kuipers skill score (KSS) are higher than or similar to those of GK2A FDA, indicating that GK2AB FDA better detects previously undetected fog pixels. However, compared to GK2A FDA, GK2AB FDA tends to over-detect fog, with a higher false alarm ratio and bias.
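The final fusion step described above can be sketched with scikit-learn: a decision tree takes the extrapolated GK2B NVIS, the solar zenith angle, and the GK2A FDA output as features and classifies each 250 m pixel as fog or non-fog. The feature values and labels below are placeholders, not the six training cases used in the paper.

```python
# Sketch of the decision-tree fusion: features = [GK2B NVIS, solar zenith angle, GK2A fog flag].
# Training samples are hypothetical placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X_train = np.array([[0.62, 35.0, 1], [0.58, 40.0, 1], [0.20, 30.0, 0],
                    [0.15, 55.0, 0], [0.70, 25.0, 1], [0.25, 45.0, 0]])
y_train = np.array([1, 1, 0, 0, 1, 0])     # 1 = fog, 0 = non-fog

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(clf.predict(np.array([[0.65, 38.0, 1], [0.18, 50.0, 0]])))
```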

Structural features and Diffusion Patterns of Gartner Hype Cycle for Artificial Intelligence using Social Network analysis (인공지능 기술에 관한 가트너 하이프사이클의 네트워크 집단구조 특성 및 확산패턴에 관한 연구)

  • Shin, Sunah;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.107-129
    • /
    • 2022
  • It is important to preempt new technologies because technological competition is getting much tougher. Stakeholders continuously conduct exploration activities to preoccupy new technologies at the right time. Gartner's Hype Cycle has significant implications for these stakeholders. The Hype Cycle is an expectation graph for new technologies that combines the technology life cycle (S-curve) with the hype level. Stakeholders such as R&D investors, CTOs (Chief Technology Officers), and technical personnel are very interested in Gartner's Hype Cycle for new technologies, because high expectations for new technologies can bring opportunities to maintain investment by securing the legitimacy of R&D spending. However, contrary to the high interest of industry, preceding studies faced limitations in their empirical methods and source data (news, academic papers, search traffic, patents, etc.). In this study, we focused on two research questions. The first research question was 'Is there a difference in the characteristics of the network structure at each stage of the hype cycle?'. To answer the first research question, the structural characteristics of each stage were examined through the component cohesion size. The second research question was 'Is there a pattern of diffusion at each stage of the hype cycle?'. This question was addressed through the centralization index and network density. The centralization index is a variance-like concept, and a higher centralization index means that the network is centered on a small number of nodes. Concentration on a small number of nodes implies a star network structure, which is centralized and shows better diffusion performance than a decentralized (circle) structure, because the nodes at the center of information transfer can judge useful information and deliver it to other nodes fastest. We therefore computed the out-degree and in-degree centralization indices for each stage. For this purpose, we examined the structural features of the communities and the expectation diffusion patterns using social network service (SNS) data for the technologies in 'Gartner Hype Cycle for Artificial Intelligence, 2021'. Twitter data for 30 technologies (excluding four technologies) listed in 'Gartner Hype Cycle for Artificial Intelligence, 2021' were analyzed. Analysis was performed using R (version 4.1.1) and Cyram NetMiner. From October 31, 2021 to November 9, 2021, 6,766 tweets were collected through the Twitter API, and the relationships between a user's tweet (source) and users' retweets (target) were converted into edges; as a result, 4,124 edge lists were analyzed. As a result of the study, we confirmed the structural features and diffusion patterns by analyzing the component cohesion size, degree centralization, and density. We found that the number of components in each stage's group increased over time while the density decreased. Also, the 'Innovation Trigger' group, which is interested in new technologies like the early adopters in innovation diffusion theory, had a high out-degree centralization index, whereas the other groups had higher in-degree than out-degree centralization indices. It can be inferred that the 'Innovation Trigger' group has the biggest influence and that diffusion gradually slows down in the subsequent groups. In this study, network analysis was conducted using social network service data, unlike the methods of previous studies. This is significant in that it provides an idea for expanding the analysis methods used in future studies of Gartner's Hype Cycle. In addition, applying innovation diffusion theory to the stages of Gartner's Hype Cycle for artificial intelligence can be evaluated positively, because the Hype Cycle has repeatedly been criticized for its theoretical weakness. It is also expected that this study will provide stakeholders with a new perspective on decision-making for technology investment.
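The measures used above can be illustrated with a small networkx sketch: a directed retweet network (tweet author to retweeting user), its density, and Freeman-style out-degree and in-degree centralization indices. The edges are hypothetical, and the normalization used below ((n-1)^2, the value reached by a perfect star network) is one common formulation, not necessarily the exact index computed in the paper.

```python
# Hypothetical directed retweet network with density and degree centralization indices.
import networkx as nx

def degree_centralization(G: nx.DiGraph, mode: str = "out") -> float:
    """Freeman-style centralization: sum of (max degree - node degree) over (n-1)^2."""
    degrees = dict(G.out_degree()) if mode == "out" else dict(G.in_degree())
    n = G.number_of_nodes()
    d_max = max(degrees.values())
    return sum(d_max - d for d in degrees.values()) / ((n - 1) ** 2)

edges = [("userA", "userB"), ("userA", "userC"), ("userA", "userD"),
         ("userB", "userC"), ("userE", "userA")]
G = nx.DiGraph(edges)
print("density:", nx.density(G))
print("out-degree centralization:", degree_centralization(G, "out"))
print("in-degree centralization:", degree_centralization(G, "in"))
```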