• Title/Summary/Keyword: 자동판별 (automatic discrimination)

Search Results: 310

A Study on the Quality Monitoring and Prediction of OTT Traffic in ISP (ISP의 OTT 트래픽 품질모니터링과 예측에 관한 연구)

  • Nam, Chang-Sup
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.2
    • /
    • pp.115-121
    • /
    • 2021
  • This paper applies big data and artificial intelligence techniques to predict rapidly increasing Internet traffic. Previous traffic-prediction studies have not been able to reflect recent drivers of heavy Internet traffic such as smartphones and streaming, and event-like factors such as the release of large popular games or new content from OTT (Over the Top) operators are even harder to anticipate. Because of these characteristics, an ISP (Internet Service Provider) could not perform real-time service quality management or traffic forecasting in its network business environment with existing methods. To address this problem, this study built an Internet traffic collection system, separate from the existing NMS, that searches, discriminates, and collects traffic data in real time. The system provides the flexibility and elasticity to register collection targets automatically and enables real-time network quality monitoring. In addition, the large volume of traffic data collected by the system was analyzed with machine learning to predict the future traffic of OTT operators. This allowed more scientific and systematic prediction and, in addition, made it possible to optimize interconnection between ISP operators and to secure the quality of large-scale OTT services.
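
The abstract above describes forecasting OTT traffic from collected measurements with machine learning. As a rough illustration of that idea, the sketch below trains a gradient-boosting regressor on lagged values of a synthetic traffic series; the features, model choice, and data are assumptions for illustration, not the paper's actual system.

```python
# A minimal, illustrative sketch of lag-feature traffic forecasting.
# The synthetic data and model choice (gradient boosting) are assumptions;
# the paper does not disclose its exact features or algorithm.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic hourly traffic volume (Gbps) with daily seasonality and noise.
hours = np.arange(24 * 60)                       # 60 days of hourly samples
traffic = 40 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

def make_lag_features(series, n_lags=24):
    """Build (X, y) pairs where X holds the previous n_lags observations."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return np.array(X), np.array(y)

X, y = make_lag_features(traffic)
split = int(0.8 * len(X))                        # hold out the last 20% for testing
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"Hold-out MAE: {mae:.2f} Gbps")
```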

Development of Damage Evaluation Technology Considering Variability for Cable Damage Detection of Cable-Stayed Bridges (사장교의 케이블 손상 검출을 위한 변동성이 고려된 손상평가 기술 개발)

  • Ko, Byeong-Chan;Heo, Gwang-Hee;Park, Chae-Rin;Seo, Young-Deuk;Kim, Chung-Gil
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.24 no.6
    • /
    • pp.77-84
    • /
    • 2020
  • In this paper, we developed a damage evaluation technique that can locate damage in a long-span structure such as a cable-stayed bridge, and we verified its performance experimentally. The method aims to evaluate structural damage without baseline (undamaged) data and to determine the damage location from the structure's response data alone. To achieve this goal, we developed a damage assessment technique that accounts for variability, based on IMD theory, a statistical pattern-recognition technique, to identify the damage location. To evaluate the technique experimentally, cable damage tests were conducted on a model cable-stayed bridge. The results confirm that the proposed method automatically generates damage-free reference data according to the external force, and that analyzing these generated data together with the measured damage data yields information from which the damaged cable can be located.
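
As a rough illustration of the statistical pattern-recognition idea behind such damage indexing, the sketch below flags damage with Mahalanobis-distance novelty detection against reference response features. It is a generic stand-in, not the authors' IMD formulation; the feature choice and threshold are assumptions.

```python
# A minimal sketch of statistical-pattern-recognition damage indexing using
# Mahalanobis-distance novelty detection. This is a generic stand-in, not the
# authors' IMD formulation; feature choice and thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Feature vectors (e.g., natural frequencies or AR coefficients) per measurement.
baseline = rng.normal(0.0, 1.0, size=(200, 4))        # reference (undamaged-like) responses
mean = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis(x):
    """Distance of a feature vector from the baseline distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold taken from the baseline distances (e.g., the 99th percentile).
threshold = np.percentile([mahalanobis(x) for x in baseline], 99)

# A "damaged" observation shifted away from the baseline cloud.
damaged = rng.normal(2.5, 1.0, size=4)
print(f"distance={mahalanobis(damaged):.2f}, threshold={threshold:.2f}, "
      f"flagged={mahalanobis(damaged) > threshold}")
```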

A Study on Deep Learning based Aerial Vehicle Classification for Armament Selection (무장 선택을 위한 딥러닝 기반의 비행체 식별 기법 연구)

  • Eunyoung, Cha;Jeongchang, Kim
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.936-939
    • /
    • 2022
  • As air combat system technologies have developed in recent years, the development of air defense systems is also required. In the operating concept of an anti-aircraft defense system, selecting an appropriate armament for the target is one of the capabilities needed to respond efficiently to threats with limited anti-aircraft resources. Much of the identification of flying threats relies on the operator's visual identification, but there are many limitations in visually discriminating a flying object maneuvering at high speed from a distance. In addition, as the demand for unmanned and intelligent weapon systems on the modern battlefield increases, it is essential to develop technology that automatically identifies and classifies aircraft instead of relying on the operator's visual identification. Although some examples of weapon-system identification with deep-learning models trained on video data of tanks and warships have been presented, aerial vehicle identification is still lacking. Therefore, in this paper, we present a model that classifies fighters, helicopters, and drones using a convolutional neural network and analyze the performance of the presented model.
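
To make the classification setup concrete, here is a minimal convolutional-network sketch for the three classes mentioned (fighter, helicopter, drone). The architecture, input size, and training details are illustrative assumptions rather than the model presented in the paper.

```python
# A minimal CNN sketch for three-class aerial-vehicle classification
# (fighter / helicopter / drone). Architecture, image size, and hyperparameters
# are illustrative assumptions; the paper's exact model is not reproduced here.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
dummy = torch.randn(8, 3, 128, 128)              # a batch of 8 RGB frames
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,)))
loss.backward()                                  # backward pass of one illustrative step
print(logits.shape)                              # torch.Size([8, 3])
```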

Intelligent VOC Analyzing System Using Opinion Mining (오피니언 마이닝을 이용한 지능형 VOC 분석시스템)

  • Kim, Yoosin;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.113-125
    • /
    • 2013
  • Every company wants to know its customers' requirements and strives to meet them. As a result, communication between customers and companies has become a core element of business competitiveness, and its importance continues to grow. There are several ways to identify customer needs, but VOC (Voice of Customer) is one of the most powerful communication tools, and VOC gathered through channels such as telephone, post, e-mail, and websites is highly meaningful. Accordingly, most companies collect VOC and operate VOC systems. VOC matters not only to businesses but also to public organizations such as government agencies, educational institutes, and medical centers that need to improve public service quality and customer satisfaction; they likewise build VOC gathering and analysis systems and use the results to create and improve products and services. In recent years, innovations in the Internet and ICT have created diverse channels such as SNS, mobile, websites, and call centers for collecting VOC data. Although a great deal of VOC data is collected through these channels, proper utilization remains difficult, because VOC consists of highly emotional voice or text content in an informal style and its volume is very large. Such unstructured big data are difficult for people to store and analyze, so organizations need systems that automatically collect, store, classify, and analyze unstructured VOC data. This study proposes an intelligent VOC analysis system based on opinion mining that automatically classifies unstructured VOC data and determines both the polarity and the type of each VOC. The basis of the system, a domain-oriented sentiment dictionary, is constructed, and the corresponding stages are presented in detail. An experiment was conducted with 4,300 VOC records collected from a medical website to measure the effectiveness of the proposed system, and these data were used to build the sentiment dictionary by determining domain-specific sentiment vocabulary and its polarity values in the medical domain. The experiment showed that positive terms such as "칭찬, 친절함, 감사, 무사히, 잘해, 감동, 미소" have high positive opinion values, while negative terms such as "퉁명, 뭡니까, 말하더군요, 무시하는" carry strongly negative opinion. These terms are in common use, and the results suggest a high probability of correct opinion polarity. Furthermore, the accuracy of the proposed VOC classification model was compared across settings, and the highest classification accuracy, 77.8%, was obtained at an opinion-classification threshold of -0.50. With the proposed intelligent VOC analysis system, real-time opinion classification and VOC response priorities can be predicted. Ultimately, the expected benefit is catching customer complaints at an early stage and handling them quickly with fewer staff operating the VOC system, freeing human resources and time in the customer service department. Above all, this study is a new attempt to automatically analyze unstructured VOC data using opinion mining, and it shows that the system can be used to classify the positive or negative polarity of VOC opinions. It suggests a practical framework for VOC analysis for diverse uses, and the model could serve as a real VOC analysis system if implemented. Despite these results and expectations, this study has several limitations. First of all, the sample data were collected from a single hospital website, which means the sentiment dictionary built from them may be biased toward that hospital and website. Future research should therefore cover additional channels such as call centers and SNS, and other domains such as government, financial companies, and educational institutes.
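
As a toy illustration of dictionary-based polarity scoring like that described above, the sketch below scores a VOC token list against a small lexicon and applies a decision threshold. The lexicon entries reuse terms quoted in the abstract; the weights, tokenization, and the way the -0.50 threshold is applied are assumptions.

```python
# A minimal sketch of dictionary-based VOC polarity scoring with a decision
# threshold. The tiny lexicon below reuses terms listed in the abstract; the
# weights, tokenizer, and threshold handling are illustrative assumptions.
sentiment_dict = {
    "칭찬": 1.0, "친절함": 1.0, "감사": 0.9, "감동": 0.9, "미소": 0.8,
    "퉁명": -1.0, "무시하는": -0.9, "뭡니까": -0.7,
}

def score_voc(tokens, threshold=-0.50):
    """Average the lexicon scores of matched tokens and classify the VOC."""
    hits = [sentiment_dict[t] for t in tokens if t in sentiment_dict]
    score = sum(hits) / len(hits) if hits else 0.0
    label = "negative" if score <= threshold else "positive"
    return score, label

print(score_voc(["간호사분의", "친절함", "감사"]))   # leans positive
print(score_voc(["상담원이", "퉁명", "무시하는"]))   # leans negative
```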

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.39-70
    • /
    • 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to study the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and accurately determine customer preferences. In existing data-based survey methods, a sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the words frequently used in the collected text documents. To study the preference for a particular product, the existing approach (1) collects review posts related to the product from several product-review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. The existing approach also automatically finds important sentences (or phrases) carrying positive or negative meaning toward the product. As a motivating example, given a product such as the Sonata made by Hyundai Motors, customers often want a summary of the positive and negative points in the 'car design' aspect, and they want similar information on other aspects such as 'car quality', 'car performance', and 'car service'. Such information helps customers make good choices when purchasing new vehicles, and automobile makers can identify the preferences and the positive/negative points of new models on the market; in the near future, the weak points of those models can be improved through sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings that limit its application to real systems: (1) The main aspects of a product (e.g., car design, quality, performance, and service for a Hyundai Sonata) are not considered; without aspect-level analysis, only the overall positive/negative ratios and the top-k sentences (or phrases) over the entire corpus are reported to customers and car makers, which is not enough, and the main aspects of the target product need to be considered in the sentiment analysis. (2) Since the same word can have different meanings in different domains, a sentiment lexicon appropriate to each domain must be constructed, and an efficient way to do so is required because lexicon construction is labor-intensive and time-consuming.
To address these problems, this article proposes a product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using those aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly, and reinforcing topic semantics improves the accuracy of product reputation mining considerably compared with the existing approach. In the experiments, we collected large sets of review documents for domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; produced top-k positive and negative summaries per aspect; and conducted statistical analysis. The experimental results clearly show the effectiveness of the proposed method compared with the existing method.
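
The topic-extraction step (1) can be pictured with a small LDA example: latent topics are learned from review text and their top words serve as candidate aspects. The toy corpus, topic count, and aspect mapping below are illustrative assumptions, not the paper's corpus or algorithm.

```python
# A minimal sketch of the topic-extraction step: LDA over review documents,
# with the top words of each latent topic serving as candidate aspects.
# The toy corpus, topic count, and aspect mapping are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "the design of the car looks sleek and modern",
    "engine performance is strong but fuel economy suffers",
    "dealer service was slow and the service staff unhelpful",
    "exterior design and interior design both feel premium",
    "acceleration and performance exceeded my expectations",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]   # 4 highest-weight words
    print(f"topic {k}: {top}")

# Each topic's top words would then be mapped to an aspect (design, performance,
# service), and sentence-level sentiment tallied per aspect to obtain +/- ratios.
```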

Recognition of Resident Registration Card using ART2-based RBF Network and face Verification (ART2 기반 RBF 네트워크와 얼굴 인증을 이용한 주민등록증 인식)

  • Kim Kwang-Baek;Kim Young-Ju
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.1-15
    • /
    • 2006
  • In Korea, a resident registration card contains various personal information such as the holder's current address, resident registration number, face photograph, and fingerprint. The plastic resident card currently in use is easy to forge or alter, and forgery techniques are becoming more sophisticated over time, so whether a card is forged is difficult to judge by visual inspection alone. This paper proposes an automatic recognition method for resident cards that recognizes the resident registration number using a newly proposed refined ART2-based RBF network and authenticates the face photograph by template image matching. The proposed method first extracts the areas containing the resident registration number and the date of issue from the card image by applying Sobel masking, median filtering, and horizontal smearing operations in turn. To improve the extraction of individual codes from these areas, the original image is binarized using a high-frequency-pass filter, and CDM masking is applied to the binarized image to enhance the image information of the individual codes. Finally, the individual codes to be recognized are extracted by applying a 4-directional contour-tracking algorithm to the extracted areas of the binarized image. This paper also proposes a refined ART2-based RBF network to recognize the individual codes; it uses ART2 as the learning structure of the middle layer and dynamically adjusts the learning rate during training of the middle and output layers with a fuzzy control method to improve learning performance. In addition, for precise judgment of card forgery, the proposed method supports face authentication using a face template database and template image matching. For performance evaluation, altered versions of an original resident card image were created, including a forged face photograph, added noise, variations of contrast and intensity, and image blurring, and these images were used in the experiments together with the original images. The experimental results showed that the proposed method performs well in recognizing individual codes and in face authentication for the automatic recognition of resident cards.

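As a rough sketch of the area-extraction preprocessing described in the abstract above (Sobel masking, median filtering, horizontal smearing, contour tracking), the code below runs that pipeline on a synthetic image. Kernel sizes, thresholds, and the smearing gap are assumptions, and the ART2-based RBF recognizer itself is not reproduced.

```python
# A minimal sketch of the area-extraction preprocessing: Sobel edges, median
# filtering, and horizontal run-length smearing to merge digit regions.
# Parameters are illustrative assumptions; the input image is synthetic.
import cv2
import numpy as np

# Synthetic grayscale "card" image with a dummy registration-number string.
img = np.zeros((120, 400), dtype=np.uint8)
cv2.putText(img, "123456-1234567", (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 255, 2)

edges = cv2.Sobel(img, cv2.CV_8U, 1, 0, ksize=3)     # vertical-edge response
edges = cv2.medianBlur(edges, 3)                     # suppress salt-and-pepper noise
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

def horizontal_smearing(bw, max_gap=15):
    """Fill short horizontal gaps between foreground pixels (run-length smearing)."""
    out = bw.copy()
    for row in out:
        on = np.flatnonzero(row)
        for a, b in zip(on[:-1], on[1:]):
            if 0 < b - a <= max_gap:
                row[a:b] = 255
    return out

# Wide connected regions after smearing are the digit-code candidate areas.
smeared = horizontal_smearing(binary)
contours, _ = cv2.findContours(smeared, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"candidate regions: {len(contours)}")
```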

Salty-taste Activation of Human Brain Disclosed by Gustatory fMRI Study (뇌기능 자기공명영상 장치를 이용한 짠맛 자극에 따른 인간 뇌의 반응에 대한 기초 연구)

  • Kim S.H.;Choi K.S.;Lee H.Y.;Shin W.J.;Eun C.K.;Mun C.W.
    • Investigative Magnetic Resonance Imaging
    • /
    • v.9 no.1
    • /
    • pp.30-35
    • /
    • 2005
  • Purpose: The purpose of this study was to observe blood-oxygen-level-dependent (BOLD) contrast changes reflecting the human brain's gustatory response to a salty-taste stimulus. Materials and Methods: Twelve healthy, non-smoking, right-handed male subjects (mean age 25.6, range 23-28 years) participated in this salty-taste functional magnetic resonance imaging (fMRI) study. MRI scans were performed on a 1.5 T GE Signa scanner using a multi-slice GE-EPI sequence according to a BOLD experiment paradigm. Scan parameters were matrix size 128x128, FOV 250 mm, TR 5000 msec, TE 60 msec, TH/GAP 5/2 mm. Sequential data acquisitions were carried out for 42 measurements with a repetition time of 5 sec for each taste-stimulus experiment. The fMRI data were analyzed with SPM99 implemented in Matlab. A 3% NaCl solution was used as the salty stimulus. The task paradigm consisted of alternating rest-stimulus cycles (30-second rest, 15-second stimulus) for 210 seconds. During the stimulus period, the NaCl solution was delivered to the subject's mouth through plastic tubes as a bolus every 5 sec using a processor-controlled auto-syringe pump. Results: The insula, frontal opercular taste cortex, amygdala, and orbitofrontal cortex (OFC) were activated by the salty-taste stimulation (NaCl, 3%) in the fMRI experiments, and the dorsolateral prefrontal cortex (DLPFC) also responded significantly to the salty-taste stimuli. Activation was more prominent in the right hemisphere than in the left. Conclusion: These results correspond well to the finding from neuronal recordings in primates that the insula, amygdala, OFC, and DLPFC are taste cortical areas. The authors found that the laboratory-developed auto-syringe pump is suitable for gustatory fMRI studies. Further research in this field will help elucidate the mechanism of higher-order gustatory processing.

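To illustrate the block-design analysis idea described above (30 s rest / 15 s stimulus, TR = 5 s, 42 volumes) independently of SPM99, the sketch below builds a boxcar regressor, convolves it with a canonical double-gamma HRF, and fits a single synthetic voxel by least squares. The HRF parameters and the synthetic signal are assumptions.

```python
# A minimal sketch of the block-design idea: a rest/stimulus boxcar convolved
# with a canonical HRF and fitted voxel-wise by least squares. The HRF
# parameters and synthetic voxel signal are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

TR, n_vols = 5.0, 42
t = np.arange(n_vols) * TR                               # volume acquisition times (s)

# Boxcar: 1 during the 15 s stimulus part of each 45 s (30 s rest + 15 s) cycle.
boxcar = ((t % 45.0) >= 30.0).astype(float)

# Canonical double-gamma HRF sampled at the TR.
ht = np.arange(0, 30, TR)
hrf = gamma.pdf(ht, 6) - gamma.pdf(ht, 16) / 6.0
regressor = np.convolve(boxcar, hrf)[:n_vols]

# Synthetic voxel time series: scaled regressor plus noise, then a GLM fit.
rng = np.random.default_rng(0)
y = 2.0 * regressor + rng.normal(0, 0.5, n_vols)
X = np.column_stack([regressor, np.ones(n_vols)])        # design matrix [task, baseline]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated task effect: {beta[0]:.2f}")
```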

Screening of the Optimum Filter Media in the Constructed Wetland Systems through Phosphorus Adsorption Capacities (인의 흡착능 평가를 통한 인공습지 하수처리 시스템의 여재 선발)

  • Lee, Hong-Jae;Seo, Dong-Cheol;Cho, Ju-Sik;Heo, Jong-Soo
    • Korean Journal of Environmental Agriculture
    • /
    • v.22 no.2
    • /
    • pp.148-152
    • /
    • 2003
  • The phosphorus (P) adsorption capacities of various filter media were investigated in relation to media size and type in order to identify the optimum conditions. The objective of this study was to improve the longevity of constructed wetlands by enhancing P adsorption capacity. The maximum P adsorption capacities of filter media A (4-10 mm), B (2-4 mm), and C (0.1-2 mm) were 8, 10, and 22 mg/kg, respectively, increasing as the media size decreased. Among the tested media, the optimum filter media size was 0.1-2 mm. When the filter medium was supplemented with organic materials that had accumulated and decayed in the constructed wetland, the P adsorption capacity was significantly enhanced. Under the optimum media-size conditions, the maximum P adsorption capacities of filter medium C supplemented with Ca, Mg, Al, or Fe were higher than that of medium C alone; however, adding Ca, Mg, Al, or Fe to a constructed wetland is not recommended because of the possibility of secondary pollution. The maximum P adsorption capacity of filter medium C was 22 mg/kg, but it increased to 36 mg/kg when medium C was supplemented with 2% oyster shell.
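
Maximum adsorption capacities such as those reported above are commonly estimated by fitting an isotherm to batch equilibrium data. The sketch below fits a Langmuir model; note that the abstract does not state which isotherm was used, and the equilibrium data are invented for illustration.

```python
# A minimal sketch of estimating a maximum adsorption capacity (q_max) by
# fitting a Langmuir isotherm, q = q_max * K * C / (1 + K * C). The model
# choice and the synthetic batch-test data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

# Hypothetical equilibrium P concentrations (mg/L) and sorbed amounts (mg/kg).
C_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q_obs = np.array([4.1, 7.3, 11.8, 16.9, 19.8, 21.3])

(q_max, K), _ = curve_fit(langmuir, C_eq, q_obs, p0=(20.0, 0.5))
print(f"fitted q_max = {q_max:.1f} mg/kg, K = {K:.2f} L/mg")
```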

Development of Program for Renal Function Study with Quantification Analysis of Nuclear Medicine Image (핵의학 영상의 정량적 분석을 통한 신장기능 평가 프로그램 개발)

  • Song, Ju-Young;Lee, Hyoung-Koo;Suh, Tae-Suk;Choe, Bo-Young;Shinn, Kyung-Sub;Chung, Yong-An;Kim, Sung-Hoon;Chung, Soo-Kyo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.35 no.2
    • /
    • pp.89-99
    • /
    • 2001
  • Purpose: In this study, we developed a new software tool for the analysis of renal scintigraphy that can be modified easily by users who need to study new clinical applications, and we evaluated the appropriateness of the results produced by the program. Materials and Methods: The analysis tool was programmed in IDL 5.2 and designed for use on a personal computer running Windows. To test the developed tool and assess the appropriateness of the calculated glomerular filtration rate (GFR), 99mTc-DTPA was administered to 10 healthy adults. To assess the appropriateness of the calculated mean transit time (MTT), 99mTc-DTPA and 99mTc-MAG3 were administered to 11 healthy adults and 22 kidneys were analyzed. All images were acquired with an ORBITOR, the Siemens gamma camera. Results: With the developed tool, we could display dynamic renal images and the time-activity curve (TAC) of each ROI and calculate clinical parameters of renal function. The results calculated by the developed tool were not statistically different from those obtained with the Siemens application program (Tmax: p=0.68, relative renal function: p=1.0, GFR: p=0.25), so the developed program proved reasonable. The MTT calculation tool also proved reasonable in an evaluation of the influence of hydration status on MTT. Conclusion: We obtained reasonable clinical parameters for the evaluation of renal function with the software tool developed in this study. The developed tool could prove more practical than conventional commercial programs.

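As an illustration of the ROI quantification such a tool performs (time-activity curves, Tmax, relative renal function), the sketch below computes those quantities from a synthetic dynamic series. Frame timing, the uptake window, and the data are assumptions; the original tool was written in IDL 5.2, whereas this sketch uses Python.

```python
# A minimal sketch of ROI quantification for renal scintigraphy: TACs from
# left/right kidney masks over a dynamic series, then Tmax and relative renal
# function. Frame timing, the uptake window, and the data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.poisson(5, size=(40, 64, 64)).astype(float)   # 40 synthetic dynamic frames

# Hypothetical binary ROI masks for left and right kidneys.
left_mask = np.zeros((64, 64), bool);  left_mask[20:40, 10:25] = True
right_mask = np.zeros((64, 64), bool); right_mask[20:40, 39:54] = True

def tac(series, mask):
    """Mean counts inside the ROI for each dynamic frame (time-activity curve)."""
    return series[:, mask].mean(axis=1)

frame_time = 15.0                                          # seconds per frame (assumed)
tac_l, tac_r = tac(frames, left_mask), tac(frames, right_mask)

t_max_left = np.argmax(tac_l) * frame_time                 # time to peak activity
uptake = slice(4, 12)                                      # 1-3 min uptake window (assumed)
rrf_left = tac_l[uptake].sum() / (tac_l[uptake].sum() + tac_r[uptake].sum())
print(f"Tmax(left) = {t_max_left:.0f} s, relative renal function(left) = {rrf_left:.2f}")
```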

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.201-220
    • /
    • 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussion and research on how to solve the problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification, so document classification techniques have been widely used in this area, whereas document summarization techniques have received little attention. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the integration of document summarization technology needs to be studied in the domestic news data environment. To examine the effect of extractive summarization on a fake news detection model, we first summarized news articles through extractive summarization, then built a detection model based on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not show a large performance difference, while for DT (Decision Tree) the full-text-based model performed somewhat better. For LR (Logistic Regression), the summarization-based model showed superior performance, although the difference from the full-text-based model was not statistically significant. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model confirms the possibility of performance improvement. This study is an experimental application of extractive summarization in fake news detection research employing various machine-learning algorithms. Its limitations are, essentially, the relatively small amount of data and the lack of comparison among various summarization technologies; an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
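
To make the compared pipeline concrete, the sketch below forms an extractive summary by keeping the highest-weighted TF-IDF sentences and then trains a logistic-regression detector on the summaries. The toy corpus, sentence-scoring rule, and summary length are assumptions, not the paper's data or exact method.

```python
# A minimal sketch: extractive summarization by TF-IDF sentence scoring,
# followed by a logistic-regression fake-news detector on the summaries.
# The toy corpus, scoring rule, and summary length are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def extractive_summary(article, n_sentences=2):
    """Keep the n sentences with the highest mean TF-IDF weight."""
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    if len(sentences) <= n_sentences:
        return article
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    keep = sorted(np.argsort(scores)[-n_sentences:])        # preserve original order
    return ". ".join(sentences[i] for i in keep)

articles = [
    "officials confirmed the budget today. the report cites audited figures. details follow",
    "shocking miracle cure revealed. doctors hate this trick. no evidence is provided anywhere",
] * 10
labels = [0, 1] * 10                                         # 0 = real, 1 = fake (toy labels)

summaries = [extractive_summary(a) for a in articles]
X = TfidfVectorizer().fit_transform(summaries)
clf = LogisticRegression().fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")
```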