• Title/Summary/Keyword: time domain data

Search Results: 1,310 (Processing Time: 0.026 seconds)

Marine-Life-Detection and Density-Estimation Algorithms Based on Underwater Images and Scientific Sonar Systems (수중영상과 과학어탐 시스템 기반 해양생물 탐지 밀도추정 알고리즘 연구)

  • Young-Tae Son;Sang-yeup Jin;Jongchan Lee;Mookun Kim;Ju Young Byon;Hyung Tae Moo;Choong Hun Shin
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.5
    • /
    • pp.373-386
    • /
    • 2024
  • The aim of this study is to establish a system for the early detection of high-density harmful marine organisms. Considering its accuracy and processing speed, YOLOv8m (You Only Look Once version 8 medium) was selected as a suitable model for real-time underwater image-based object detection. Applying the detection algorithm makes it possible to detect numerous fish and occasional occurrences of jellyfish. The average precision, recall, and mAP (mean Average Precision) of the trained model on the validation data are 0.931, 0.881, and 0.948, respectively. The per-class mAP is 0.97 for fish, 0.97 for jellyfish, and 0.91 for salpa, all of which exceed 0.9 (90%), demonstrating the excellent performance of the model. A scientific sonar system is used to address the limited object-detection range and to validate the detection results. Additionally, echo integration and grid averaging of the echo strength allow the detection results to be smoothed in space and time. Mean volume backscattering strength values are obtained to reflect the detection variability within the analysis domain. Furthermore, an underwater image-based object (marine life) detection algorithm, an image-correction technique suited to underwater environmental conditions (including at night), and detection results quantified with a scientific sonar system are presented, demonstrating the utility of the detection system in various applications.
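The reported precision and recall figures follow directly from detection counts. A minimal sketch of the metric computation (the counts below are hypothetical, not the paper's data):

```python
def precision_recall(tp, fp, fn):
    """Compute detection precision and recall from true positive,
    false positive, and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical per-class detection counts (tp, fp, fn) for illustration
counts = {"fish": (97, 3, 5), "jellyfish": (97, 4, 3), "salpa": (91, 9, 10)}
for cls, (tp, fp, fn) in counts.items():
    p, r = precision_recall(tp, fp, fn)
    print(f"{cls}: precision={p:.3f}, recall={r:.3f}")
```

mAP additionally averages precision over recall thresholds and classes; the per-class figures quoted above are that averaged quantity.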

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided a very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly to support business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as open APIs, search tools, DB-to-DB interfaces, or purchased content. The second phase is pre-processing, which generates useful material for meaningful analysis.
If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, in which the cleansed social media content is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, reviews or replies, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is used for reputation analysis. There are also various other applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major focus of this phase is to explain the results and help users comprehend their meaning; therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the market leader, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, and reputation.
In this phase, we used free software such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-source R packages. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix through the density of color over time periods. A valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation, since its hierarchical structure can present buzz volume and sentiment in a single visualized result for a given period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with immediately useful visualized results, not just in the food industry but in other industries as well.
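The sentiment-polarity step of the analyzing phase can be sketched with a tiny lexicon-based classifier. The study used Korean language resources with the TM and KoNLP packages in R; this is a simplified Python illustration with a hypothetical English lexicon:

```python
# Minimal lexicon-based sentiment scoring sketch. The word lists are
# hypothetical stand-ins for the domain-specific lexicons described above.
POSITIVE = {"delicious", "tasty", "love", "best"}
NEGATIVE = {"salty", "bland", "worst", "expensive"}

def polarity(text):
    """Return 'positive', 'negative', or 'neutral' by counting lexicon hits."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("This ramen is delicious and the best"))  # positive
```

Aggregating these per-document polarities over time yields the volume and sentiment graphs mentioned above.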

The Effects of Health Exercise Program on Walking ability, Depression and WHOQOL-BREF in the Fall experienced Women Elderly (건강체조 프로그램이 낙상경험 여성노인의 보행능력, 우울 및 삶의 질에 미치는 효과)

  • Kim, Young-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.10
    • /
    • pp.3726-3732
    • /
    • 2010
  • The purpose of this study was to investigate the effects of a health exercise program on elderly women's walking ability, depression, and WHOQOL-BREF scores. Data were collected from April to June 2007 from 70 elderly women who had experienced falls. All subjects participated in a 12-week health exercise program designed to develop walking ability. The data were analyzed using frequencies, percentages, and paired t-tests. The results were as follows. First, there were significant differences between before and after the 12-week program in the average chair-stand time (t=2.291, p=.025), one-leg standing (right leg) (t=2.236, p=.029), and step length (t=4.015, p=.000). Second, there was no significant difference in depression (t=1.044, p=.300), but there was a significant difference in WHOQOL-BREF scores (t=3.528, p=.001). Within the WHOQOL-BREF, general quality of life (t=2.923, p=.005) and the physical (t=3.039, p=.003), psychological (t=2.481, p=.016), social (t=2.531, p=.014), and environment (t=4.259, p=.000) domains all showed significant differences. The results suggest that the 12-week health exercise program can improve muscle endurance, balance, and quality of life.
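The paired t-statistics reported above are the mean pre/post difference divided by its standard error. A minimal sketch of the computation (the data below are illustrative, not the study's measurements):

```python
import math

def paired_t(before, after):
    """Paired t-statistic: mean of the per-subject differences divided
    by the standard error of those differences."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Illustrative pre/post measurements for three hypothetical subjects
print(paired_t([1, 2, 3], [2, 3, 5]))
```

The p-values in the abstract would then come from the t-distribution with n-1 degrees of freedom.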

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin;Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.28-39
    • /
    • 2011
  • While broadband multimedia technologies have been developing, the commercial market for digital content has also been spreading widely. Above all, the digital cartoon market, such as Internet cartoons, has grown rapidly, so video cartooning has been continuously researched to address the shortage and limited variety of cartoons. Until now, video cartooning systems have focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is applied in a service. In this paper, we propose a new automatic frame extraction method for a video cartooning system. First, we separate the video and audio tracks of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparison with already-trained audio data using a GMM classifier, so that speech areas can be identified. For the video, we extract frames using a general scene-change detection method, such as the histogram method, and then extract frames that are meaningful for the cartoon by applying face detection to the extracted frames. After that, scene-transition frames containing faces within speech areas are extracted automatically. Finally, frames suitable for movie cartooning are automatically extracted from these scene-transition frames over continuous periods in the time domain.
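Two of the features the method relies on are easy to sketch: the zero-crossing rate (ZCR) used for audio classification, and a histogram difference used for scene-change detection. A simplified illustration (thresholds, MFCC extraction, and the GMM step are omitted):

```python
def zero_crossing_rate(frame):
    """Zero-crossing rate of one audio frame: the fraction of adjacent
    sample pairs whose signs differ. High ZCR often indicates unvoiced
    speech or noise; low ZCR, voiced speech or tonal music."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def hist_diff(h1, h2):
    """Normalized absolute difference between two grayscale histograms of
    equal total count; a value above a threshold flags a scene change."""
    total = sum(h1)
    return sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * total)

print(zero_crossing_rate([1, -1, 1, -1, 1]))  # 1.0 (sign flips every sample)
```

In the full pipeline, ZCR and MFCC feed the GMM audio classifier, while the histogram difference selects candidate scene-transition frames for face detection.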

Soil Water Monitoring in Below-Ground Ectomycorrhizal Colony of Tricholoma Matsutake

  • Koo, Chang-Duck;Kim, Je-Su;Lee, Sang-Hee;Park, Jae-In;Ahn, Kwang-Tae
    • The Korean Journal of Quaternary Research
    • /
    • v.17 no.2
    • /
    • pp.129-133
    • /
    • 2003
  • Water is critically important for Tricholoma matsutake (Tm) growth because it is the major component of the mushroom, at over 90%. The mushroom absorbs water through its below-ground hyphal colony. Therefore, the objective of our study was to investigate spatio-temporal water changes in Tm colonies. This study was carried out at Tm fruiting sites in Sogni Mt. National Park, where the below-ground mushroom colonies have been irrigated. To identify the spatial water status within a Tm soil colony, soil moisture and ergosterol content were measured at six positions, including a mushroom fruiting position, along the line of the colony radius. To investigate temporal soil moisture changes in the colony, Time Domain Reflectometry (TDR) sensors were installed at the non-colony and at the colony front edge, and water data were recorded with a CR10X data logger from late August to late October. Before irrigation, whereas the soil water content was 12.8% at the non-colony, within the Tm colony it was 8.0% at 0-5 cm from the colony front edge, 6.2% at 10-15 cm, and 6.5-7.5% at 20-40 cm. The content was 12.1% at 80 cm from the colony edge, similar to that at the non-colony. In contrast, ergosterol content, which is proportional to live hyphal biomass, was only 0.4 μg/g fresh soil in the uncolonized soil, versus 4.9 μg/g fresh soil at the front edge where the hyphae actively grow, 3.8 μg/g fresh soil at the fruiting position, 1.1 μg/g at 20 cm, and 0.4 μg/g in the 40 cm rear area. In general, the water content changes in the Tm fungal colony were inversely related to the ergosterol content changes. While the site was watered during August to October, the soil water content was 13.5-23.0% within the fungal colony versus 14.5-26.0% at the non-colony; that is, soil water content in the colony was lower by 1.0-3.0% than in the non-colonized soil. Our results show that the Tm colony consumes more soil water than other parts, and that the front 30 cm of the hyphal colony is the most critical zone for soil water absorption.


Implementation of a Static Analyzer for Detecting the PHP File Inclusion Vulnerabilities (PHP 파일 삽입 취약성 검사를 위한 정적 분석기의 구현)

  • Ahn, Joon-Seon;Lim, Seong-Chae
    • The KIPS Transactions:PartA
    • /
    • v.18A no.5
    • /
    • pp.193-204
    • /
    • 2011
  • Since web applications are accessed by anonymous users via the web, they are exposed to greater security risks. In particular, because security vulnerabilities caused by insecure source code cannot be properly handled by system-level security measures such as intrusion detection systems, it is necessary to eliminate such problems in advance. In this paper, to enhance the security of web applications, we develop a static analyzer for detecting the well-known PHP file inclusion vulnerability. Using semantics-based static analysis, our vulnerability analyzer guarantees the soundness of vulnerability detection and imposes no runtime overhead, unlike other approaches such as penetration testing and application firewalls. To this end, our analyzer adopts the abstract interpretation framework and uses an abstract analysis domain designed for detecting the target vulnerability in PHP programs. Thus, our analyzer can efficiently analyze the complicated data-flow relations in PHP programs caused by the extensive use of string data. The analysis results can be browsed with a Java GUI tool, and the memory states and variable values at vulnerable program points can also be inspected. To show the correctness and practicality of our analyzer, we analyzed the source code of open PHP applications. Our experimental results show that the analyzer achieves practical performance in both analysis capability and execution time.
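The kind of data-flow reasoning such an analyzer performs can be illustrated with a toy taint analysis over simplified PHP-like statements. This is only a sketch of the idea; the paper's analyzer uses a sound abstract-interpretation domain over real PHP, not this stub:

```python
# Toy taint analysis for file-inclusion vulnerabilities over simplified
# PHP-like statements, given as (op, args...) tuples.
def analyze(stmts):
    tainted = set()   # variables that may hold user-controlled strings
    vulns = []
    for i, (op, *args) in enumerate(stmts):
        if op == "input":            # e.g. $x = $_GET[...]
            tainted.add(args[0])
        elif op == "assign":         # $dst = $src
            dst, src = args
            tainted.add(dst) if src in tainted else tainted.discard(dst)
        elif op == "concat":         # $dst = $a . $b
            dst, a, b = args
            if a in tainted or b in tainted:
                tainted.add(dst)
            else:
                tainted.discard(dst)
        elif op == "include" and args[0] in tainted:
            vulns.append(i)          # include with a possibly tainted path
    return vulns

prog = [
    ("input", "page"),
    ("concat", "path", "base", "page"),
    ("include", "path"),
]
print(analyze(prog))  # [2]
```

A real file-inclusion analysis must additionally model string values abstractly (prefixes, sanitizers, etc.) rather than a single boolean taint bit.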

The study of security management for application of blockchain technology in the Internet of Things environment (Focusing on security cases in autonomous vehicles including driving environment sensing data and occupant data) (사물인터넷 환경에서 블록체인 기술을 이용한 보안 관리에 관한 소고(주행 환경 센싱 데이터 및 탑승자 데이터를 포함한 자율주행차량에서의 보안 사례를 중심으로))

  • Jang Mook KANG
    • Convergence Security Journal
    • /
    • v.22 no.4
    • /
    • pp.161-168
    • /
    • 2022
  • After the coronavirus pandemic, as non-face-to-face services have been activated, domain services that guarantee integrity by recording the sensing information of the Internet of Things (IoT) with blockchain technology are expanding. For example, in areas such as safety and security using CCTV, a process is required to safely update firmware in real time and to confirm that there has been no malicious intrusion. Under existing security procedures, in many cases the person in charge carried a USB device and updated the firmware directly. However, when a private blockchain technology such as Hyperledger is used, the convenience and work efficiency of the IoT environment can be expected to increase. This article describes scenarios for preventing vulnerabilities in the operating environments of various customers, such as firmware updates and device changes in a non-face-to-face environment. In particular, we introduce blockchain techniques suited to the IoT, which is easily exposed to malicious security risks such as hacking and information leakage. This article presents the necessity and implications of security management that guarantees integrity through operations applying blockchain technology in the ever-expanding IoT environment. We expect this work to provide insight into how blockchain techniques can inform guidelines for strengthening the security of the IoT environment in the future.
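The integrity guarantee at the core of this approach rests on hash chaining: each ledger entry commits to the firmware image's digest and to the previous entry's hash, so any tampering is detectable. A minimal sketch (field names and the flow are illustrative assumptions, not a Hyperledger API):

```python
import hashlib
import json

def make_block(prev_hash, firmware_bytes, device_id):
    """Build one ledger entry recording a firmware update, returning
    (record, block_hash). The record commits to the firmware digest
    and to the previous block's hash."""
    record = {
        "device": device_id,
        "fw_sha256": hashlib.sha256(firmware_bytes).hexdigest(),
        "prev": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return record, hashlib.sha256(payload).hexdigest()

def verify_chain(blocks):
    """Recompute each block hash and check that the prev links are intact."""
    prev = "0" * 64  # genesis sentinel
    for record, h in blocks:
        payload = json.dumps(record, sort_keys=True).encode()
        if record["prev"] != prev or hashlib.sha256(payload).hexdigest() != h:
            return False
        prev = h
    return True
```

Altering any recorded firmware digest changes that block's hash, which breaks every subsequent `prev` link and makes `verify_chain` fail.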

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies tend to give high returns to investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source regarding such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses multi-class SVM for the prediction of DEA-based efficiency ratings for venture businesses, derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas to classify which companies are more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear-programming-based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as Internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilize DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employ SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity on classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane with maximum separation between classes; the support vectors are the data points closest to this hyperplane. If the classes are not linearly separable, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employ SVM to develop a data-mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, obtaining the companies' 2005 financial information from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data-mining-based multi-class prediction model. Among the three multi-classification methods, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within a one-class error when it is difficult to determine the exact class in the actual market. We therefore also present accuracy within one-class errors, where the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification formulation. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to improve the variable selection process, the kernel parameter selection, the generalization, and the sample size for the multi-class setting.
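The one-against-one scheme mentioned above trains a binary classifier for every pair of classes and predicts by majority vote. A minimal sketch with a stubbed pairwise decision function (the study used SVMs with a Gaussian RBF kernel, not this stub):

```python
from collections import Counter

def one_vs_one_predict(x, classes, pairwise_decide):
    """Predict a class for x by majority vote over all class pairs.
    pairwise_decide(a, b, x) stands in for a trained binary classifier
    that returns whichever of a or b it prefers for x."""
    votes = Counter()
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            votes[pairwise_decide(a, b, x)] += 1
    return votes.most_common(1)[0][0]

# Stub: pretend every pairwise classifier involving "B" prefers "B"
winner = one_vs_one_predict(
    x=None,
    classes=["A", "B", "C"],
    pairwise_decide=lambda a, b, x: "B" if "B" in (a, b) else a,
)
print(winner)  # B
```

With k efficiency classes this requires k(k-1)/2 binary SVMs; the all-together methods of Weston-Watkins and Crammer-Singer instead solve a single optimization over all classes.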

Construction and estimation of soil moisture site with FDR and COSMIC-ray (SM-FC) sensors for calibration/validation of satellite-based and COSMIC-ray soil moisture products in Sungkyunkwan university, South Korea (위성 토양수분 데이터 및 COSMIC-ray 데이터 보정/검증을 위한 성균관대학교 내 FDR 센서 토양수분 측정 연구(SM-FC) 및 데이터 분석)

  • Kim, Hyunglok;Sunwoo, Wooyeon;Kim, Seongkyun;Choi, Minha
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.2
    • /
    • pp.133-144
    • /
    • 2016
  • In this study, Frequency Domain Reflectometry (FDR) and COSMIC-ray soil moisture (SM) stations were installed at Sungkyunkwan University in Suwon, South Korea. To provide reliable information about SM, a soil property test, a time series analysis of the measured soil moisture, and a comparison of the measured SM with satellite-based SM products were conducted. In 2014, six FDR stations were set up for obtaining SM. Each station had four FDR sensors at soil depths from 5 cm to 40 cm, at 5-10 cm intervals. The results showed that the study region had heterogeneous soil layer properties, such as sand and loamy sand. The measured SM data showed strong coupling with precipitation. Furthermore, they had a high correlation coefficient and a low root mean square deviation (RMSD) compared with the satellite-based SM products. After the accuracy of the 2014 data was verified, four FDR stations and one COSMIC-ray station were additionally installed to establish the Soil Moisture site with FDR and COSMIC-ray, called SM-FC. The COSMIC-ray-based SM had a high correlation coefficient of 0.95 with the mean SM of the FDR stations. These results indicate that SM-FC will give researchers valuable insight for investigating satellite- and model-based SM validation studies in South Korea.
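The correlation coefficient and RMSD used to compare the station data with satellite-based SM products can be computed as follows (a minimal sketch over plain Python lists; any data shown in tests are illustrative only):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two soil moisture series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmsd(x, y):
    """Root mean square deviation between two series."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```

In practice the two series (e.g. FDR mean SM and COSMIC-ray SM) must first be aligned to a common time step before computing these statistics.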

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.221-238
    • /
    • 2019
  • Recently, companies and public institutions have been actively introducing chatbot services in the field of customer counseling and response. The introduction of chatbot services not only brings labor cost savings to companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these chatbot services. Current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning. The advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-oriented chatbots can be inconsistent with what users inherently want, so chatbots need to be addressed in the area of user experience, not just technology. The Fourth Industrial Revolution highlights the importance of user experience as well as the advancement of artificial intelligence, big data, cloud, and IoT technologies. The development of IT technology and the growing importance of user experience have provided people with a variety of environments and changed lifestyles. This means that experiences in interactions with people, services (products), and the environment become very important. Therefore, it is time to develop user needs-based services (products) that can provide new experiences and value to people. This study proposes a user needs-based chatbot development process by applying the design thinking approach, a representative methodology in the field of user experience, to chatbot development. The process proposed in this study consists of four steps. The first step is 'setting up the knowledge domain,' which establishes the chatbot's area of expertise.
The second step, 'Knowledge accumulation and Insight identification,' consists of accumulating the information corresponding to the configured domain and deriving insights. The third step is 'Opportunity Development and Prototyping,' where full-scale development begins. Finally, the 'User Feedback' step collects feedback from users on the developed prototype. This yields a 'user needs-based service (product)' that meets the process's objectives. Beginning with fact gathering through user observation, the process of abstraction is performed to derive insights and explore opportunities. Next, through the process of concretization, the desired information is structured and functions fitting the user's mental model are provided, which is expected to yield a chatbot that meets the user's needs. In this study, we present an actual construction example for the domestic cosmetics market to confirm the effectiveness of the proposed process. We chose the domestic cosmetics market as the case because user experience appears strongly there, so responses from users can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology. This research differs from existing chatbot development research in that it focuses on user experience, not technology. It also has practical implications in that it proposes realistic methods that companies or institutions can apply immediately. In particular, the process proposed in this study can be accessed and utilized by anyone, since 'user needs-based chatbots' can be developed even by non-experts. Further studies are needed, because this study examined only one field.
In addition to the cosmetics market, further research should be conducted in various fields in which user experience matters, such as the smartphone and automotive markets. Through this, the proposal can grow into a general process for developing chatbots centered on user experience rather than on technology.