• Title/Summary/Keyword: Monitoring and Analysis Systems


The Effects of Ownership Structure on Analysts' Earnings Forecasts (기업지배구조가 재무분석가의 이익 예측오차와 정확성에 미치는 영향)

  • Park, Bum-Jin
    • The Korean Journal of Financial Management
    • /
    • v.27 no.1
    • /
    • pp.31-62
    • /
    • 2010
  • This paper empirically analyzes how analysts' earnings forecasts are affected by ownership structure, using samples of 1,037~1,629 analyst forecasts for firms listed on the Korean Stock Exchange over the period 2000 to 2006. The empirical results are summarized as follows. First, firms with higher major-shareholder holdings tend to show larger earnings forecast errors and higher forecast accuracy, whereas firms with higher institutional holdings tend to show smaller forecast errors and lower forecast accuracy (a rough sketch of such error and accuracy measures follows this entry). This is consistent with prior work arguing that analysts issue more optimistic forecasts for firms with concentrated major-shareholder ownership in order to maintain friendly relations with those shareholders; because analysts can draw on private information obtained from major shareholders, forecast accuracy is higher for firms with high major-shareholder holdings than for firms with high institutional holdings. Second, the paper examines whether the minimum requirements for outside directors, audit committee adoption, and audit quality affect the relation between ownership structure and analysts' forecasts; these corporate governance variables show no statistically significant effect on that relation. The contribution of this paper is to document positive relations between ownership structure and analysts' forecasts. If analysts come to cover more firms, the capital market should become more efficient and research in this area richer; monitoring systems will also be needed so that biased analyst forecasts do not distort market efficiency.

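The forecast error and accuracy measures referred to above are not defined in the abstract. As a rough illustration only, the sketch below uses a convention common in the analyst-forecast literature (signed error as a bias proxy, absolute error as an accuracy proxy, both scaled by share price); the function names, scaling, and numbers are assumptions, not the paper's actual specification.

```python
# Hedged sketch: common proxies for analyst forecast bias and accuracy.
# The price scaling and sign convention are assumptions, not necessarily
# the measures used in the paper above.

def forecast_error(forecast_eps: float, actual_eps: float, price: float) -> float:
    """Signed forecast error (positive = optimistic bias), scaled by share price."""
    return (forecast_eps - actual_eps) / price

def forecast_accuracy(forecast_eps: float, actual_eps: float, price: float) -> float:
    """Accuracy proxy: negative absolute scaled error (higher = more accurate)."""
    return -abs(forecast_eps - actual_eps) / price

if __name__ == "__main__":
    # Hypothetical firm-year: consensus EPS forecast 1,200 KRW, actual EPS 1,000 KRW,
    # share price 20,000 KRW.
    print(forecast_error(1200, 1000, 20000))     # 0.01  -> optimistic bias
    print(forecast_accuracy(1200, 1000, 20000))  # -0.01 -> accuracy proxy
```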

Estimation of irrigation return flow from paddy fields on agricultural watersheds (농업유역의 논 관개 회귀수량 추정)

  • Kim, Ha-Young;Nam, Won-Ho;Mun, Young-Sik;An, Hyun-Uk;Kim, Jonggun;Shin, Yongchul;Do, Jong-Won;Lee, Kwang-Ya
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.1
    • /
    • pp.1-10
    • /
    • 2022
  • Irrigation water supplied to a paddy field is consumed by evapotranspiration, underground infiltration, and natural and artificial drainage. Irrigation return flow is defined as the portion of irrigation water that is not consumed by evapotranspiration and crops and that returns to streams or aquifers by infiltration or drainage. Estimating return flow plays an important part in managing the water circulation of an agricultural watershed; however, case-specific calculations are needed because the estimated return flow differs with irrigation channel water losses, analysis methods, and local characteristics. In this study, the irrigation return flow rate of an agricultural watershed was estimated using field monitoring and SWMM (Storm Water Management Model) modeling from 2017 to 2020 for the Heungeop reservoir in Wonju, Gangwon-do. The SWMM model was built from weather and observation data, and supply and drainage volumes were estimated from the model results; its applicability was verified using RMSE and R-squared values. Over 2017 to 2020, the average annual quick return flow rate was 53.1%. Based on these results, water circulation characteristics can be analyzed, and the results can serve as basic data for integrated water management.
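
The abstract defines return flow as the portion of irrigation supply not consumed by evapotranspiration or crops that returns via drainage or infiltration. As a rough illustration of that water-balance idea only, and not of the SWMM-based procedure used in the study, the sketch below computes a quick return flow rate from hypothetical volumes; all names and numbers are assumptions.

```python
# Hedged sketch of a simple paddy-field water balance illustrating the idea of a
# quick return flow rate. This is NOT the SWMM-based method used in the study;
# the terms and example volumes are assumptions for illustration only.

def quick_return_flow_rate(irrigation_supply: float, surface_drainage: float) -> float:
    """Fraction of irrigation supply that returns quickly via surface drainage."""
    if irrigation_supply <= 0:
        raise ValueError("irrigation supply must be positive")
    return surface_drainage / irrigation_supply

if __name__ == "__main__":
    # Hypothetical seasonal volumes in 10^3 m^3.
    supply = 1000.0   # water delivered through the irrigation canal
    drainage = 530.0  # water leaving the paddies through surface drains
    rate = quick_return_flow_rate(supply, drainage)
    print(f"quick return flow rate: {rate:.1%}")  # 53.0% (compare ~53.1% reported above)
```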

Study on the Prediction of Motion Response of Fishing Vessels using Recurrent Neural Networks (순환 신경망 모델을 이용한 소형어선의 운동응답 예측 연구)

  • Janghoon Seo;Dong-Woo Park;Dong Nam
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.5
    • /
    • pp.505-511
    • /
    • 2023
  • In this study, a deep learning model was established to predict the motion response of small fishing vessels. The hydrodynamic performance of two small fishing vessels was evaluated to build the training dataset. A Long Short-Term Memory (LSTM) network, a type of recurrent neural network, was used. The input consisted of time series of the six degrees-of-freedom motions and the wave height, and the output label was the time series of the six degrees-of-freedom motions. Hyperparameter and input window length studies were performed to optimize the LSTM model, and the established model was used to predict the time series motion response for different wave directions. The predictions showed good overall agreement with the analysis results. As the length of the time series increased, the differences between predicted values and analysis results grew, which is attributed to the reduced influence of long-term data in the training process. More than 85% of the predicted data showed an error within 10%. The established LSTM model is expected to be used in monitoring and alarm systems for small fishing vessels.
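
The abstract does not state the exact network configuration, so the following is only a minimal PyTorch sketch of the kind of LSTM it describes: a window of the six degrees-of-freedom motions plus wave height (7 input features) mapped to the next 6-DOF motion state. The hidden size, number of layers, window length, and choice of library are assumptions.

```python
# Hedged sketch of an LSTM for ship-motion response prediction in the spirit of
# the abstract: inputs are a window of 6-DOF motions plus wave height (7 features),
# output is the 6-DOF motion at the next time step. Sizes are assumptions.
import torch
import torch.nn as nn

class MotionLSTM(nn.Module):
    def __init__(self, n_in: int = 7, n_out: int = 6, hidden: int = 64, layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_in, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, n_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, 7)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict the next 6-DOF motion state

if __name__ == "__main__":
    model = MotionLSTM()
    window = torch.randn(8, 50, 7)  # batch of 8 sequences, assumed 50-step window
    pred = model(window)            # (8, 6): surge, sway, heave, roll, pitch, yaw
    print(pred.shape)
```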

The Study of Volume Data Aggregation Method According to Lane Usage Ratio (차로이용률을 고려한 지점 교통량 자료의 집락화 방법에 관한 연구)

  • An Kwang-Hun;Baek Seung-Kirl;NamKoong Sung
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.4 no.3 s.8
    • /
    • pp.33-43
    • /
    • 2005
  • Traffic condition monitoring serves as the foundation for all intelligent transportation system operations. Loop detectors and video image processing are the most widely used monitoring technologies on Korean highways. Lane usage (LU) is defined as the proportion of total link volume served by each lane. In this research, the daily LU of a two-lane link was 56% : 44%, that of a three-lane link was 39% : 37% : 24%, and that of a four-lane link was 25% : 29% : 26% : 21%, showing that traffic is not distributed evenly across the lanes of a link. Using collected loop detector data, this research examines the general concept of lane usage and finds that the lane distribution differs by lane while remaining consistent by time of day (a trivial worked example of the LU definition follows this entry).

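Lane usage is defined above as each lane's share of the total link volume. The sketch below is a trivial worked example of that definition; the detector counts are hypothetical, chosen only so that the two-lane case reproduces the 56% : 44% split reported above.

```python
# Hedged sketch: lane usage ratio as defined in the abstract, i.e. each lane's
# share of the total link volume. The detector counts below are hypothetical.

def lane_usage(volumes_per_lane: list[float]) -> list[float]:
    total = sum(volumes_per_lane)
    return [v / total for v in volumes_per_lane]

if __name__ == "__main__":
    daily_counts = [11200, 8800]  # two-lane link, vehicles per day per lane
    shares = lane_usage(daily_counts)
    print([f"{s:.0%}" for s in shares])  # ['56%', '44%'], matching the two-lane case above
```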

Analysis of a Groundwater Flow System in Fractured Rock Mass Using the Concept of Hydraulic Compartment (수리영역 개념을 적용한 단열암반의 지하수유동체계 해석)

  • Cho Sung-Il;Kim Chun-Soo;Bae Dae-Seok;Kim Kyung-Su;Song Moo-Young
    • The Journal of Engineering Geology
    • /
    • v.16 no.1 s.47
    • /
    • pp.69-83
    • /
    • 2006
  • This study evaluates the complex groundwater flow system around underground oil storage caverns using the concept of hydraulic compartments. For the hydrogeological analysis, hydraulic testing data, the evolution of groundwater levels in 28 surface monitoring boreholes, and the pressure variation of 95 horizontal and 63 vertical water curtain holes in the caverns were used. At the cavern level, the Hydraulic Conductor Domains (fracture zones) are characterized by one local major fracture zone (NE-1) and two local fracture zones between the FZ-1 and FZ-2 fracture zones. The Hydraulic Rock Domain (rock mass) is divided into four compartments by these local fracture zones. The two Hydraulic Rock Domains (A, B) around the FZ-2 zone have relatively high initial groundwater pressures of up to $15 kg/cm^2$, and the differences between the upper and lower groundwater levels, measured in monitoring holes with double completion, range from 10 to 40 m throughout the construction stage, indicating relatively good hydraulic connection between the near-surface and bedrock groundwater systems. In the two Hydraulic Rock Domains (C, D) adjacent to FZ-1, on the other hand, the groundwater levels in the upper and lower zones differ by up to 120 m, and the high water levels in the upper groundwater system did not vary during the construction stage. This may be attributed to the very low hydraulic conductivity of that zone ($7.2\times10^{-10} m/sec$), about six times lower than that of Domains C and D. The groundwater recharge rate obtained from 20 years of numerical modeling is 2% of the annual mean precipitation (1,356 mm/year).
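
The abstract reports a recharge rate of about 2% of the annual mean precipitation (1,356 mm/year). As a back-of-the-envelope illustration of what that rate implies, the sketch below converts it into a recharge depth and volume; the catchment area used is hypothetical and the calculation is not part of the study's numerical modeling.

```python
# Hedged sketch: converting the reported recharge rate (about 2% of the annual
# mean precipitation, 1,356 mm/year) into a recharge depth and volume.
# The catchment area below is hypothetical, used only to show the arithmetic.

PRECIP_MM_PER_YEAR = 1356.0
RECHARGE_FRACTION = 0.02

def annual_recharge_volume(area_km2: float) -> float:
    """Annual recharge volume in m^3 for a given catchment area."""
    recharge_depth_m = PRECIP_MM_PER_YEAR * RECHARGE_FRACTION / 1000.0  # ~0.027 m/yr
    return recharge_depth_m * area_km2 * 1.0e6  # km^2 -> m^2

if __name__ == "__main__":
    print(f"recharge depth: {PRECIP_MM_PER_YEAR * RECHARGE_FRACTION:.1f} mm/yr")
    print(f"volume over a hypothetical 5 km^2 area: {annual_recharge_volume(5.0):,.0f} m^3/yr")
```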

Analysis of weighted usable area and estimation of optimum environmental flow based on growth stages of target species for improving fish habitat in regulated and non-regulated rivers (조절 및 비조절 하천의 어류 서식처 개선을 위한 성장 단계별 가중가용면적 분석 및 최적 환경생태유량 산정)

  • Jung, Sanghwa;Ji, Un;Kim, Kyu-ho;Jang, Eun-kyung
    • Journal of Korea Water Resources Association
    • /
    • v.52 no.spc2
    • /
    • pp.811-822
    • /
    • 2019
  • Environmental flows in the downstream sections of Yongdam Dam, Wonju Stream Dam, and Hongcheon River were estimated for target fish species selected with regard to endangered and native species: Nigra for the Yongdam Dam site, Splendidus for the Wonju Stream Dam site, and Signifer for the Hongcheon River site. Physical habitat analysis was performed for the study sites by applying PHABSIM (Physical Habitat Simulation) and RIVER2D, which combine hydraulic and habitat models. Based on monitored ecological data, the Habitat Suitability Index (HSI) for the target species was estimated following the Instream Flow and Aquatic Systems Group (IFASG) approach; in particular, based on the fish monitoring results, the HSI was analyzed for each growth stage of the target species. As a result, the Weighted Usable Area (WUA) at the downstream section of Yongdam Dam was maximized at a discharge of $4.9 m^3/s$ during spawning, $5.8 m^3/s$ during the juvenile stage, and $8.9 m^3/s$ during the adult stage. For Wonju Stream Dam, the optimal environmental flows were $0.4 m^3/s$, $1.0 m^3/s$, and $1.5 m^3/s$ for the spawning, juvenile, and adult periods, respectively. The habitat analysis for the Hongcheon River site, a non-regulated stream, produced optimum environmental flows of $5 m^3/s$ in the spawning period, $4 m^3/s$ in the juvenile stage, and $6 m^3/s$ in the adult stage.
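
Weighted Usable Area combines cell areas with habitat suitability values at a given discharge. The abstract does not give the composite suitability formula, so the product form below (depth × velocity × substrate suitability) and the cell values are assumptions in the spirit of PHABSIM-style analyses, not the study's configuration.

```python
# Hedged sketch of a Weighted Usable Area (WUA) calculation in the spirit of
# PHABSIM-style habitat analysis: WUA = sum over cells of (cell area x composite
# suitability). The product form of the composite suitability and the cell
# values are assumptions for illustration, not the study's configuration.

def wua(cells: list[dict]) -> float:
    total = 0.0
    for c in cells:
        csi = c["si_depth"] * c["si_velocity"] * c["si_substrate"]  # composite suitability
        total += c["area_m2"] * csi
    return total

if __name__ == "__main__":
    cells = [
        {"area_m2": 25.0, "si_depth": 0.9, "si_velocity": 0.8, "si_substrate": 1.0},
        {"area_m2": 25.0, "si_depth": 0.4, "si_velocity": 0.6, "si_substrate": 0.7},
    ]
    print(f"WUA = {wua(cells):.1f} m^2")  # evaluated separately for each simulated discharge
```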

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Preventing failures in ICT infrastructure through anomaly detection is becoming increasingly important. System monitoring data are multidimensional time series, and it is difficult to handle the characteristics of multidimensional data and of time series data at the same time. For multidimensional data, the correlations between variables must be considered, yet existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data are typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis, but these techniques further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field; statistical methods and regression analysis were used in its early days, and machine learning and artificial neural network techniques are now actively studied. Statistical methods are hard to apply to non-homogeneous data and do not detect local outliers well. Regression-based methods learn a regression model under parametric assumptions and detect anomalies by comparing predicted and actual values, but their performance drops when the model is weak or when the data contain noise or outliers, and they require training data free of such contamination. An autoencoder built on artificial neural networks is trained to reproduce its input as closely as possible. Compared with probability and linear models, cluster analysis, and supervised learning, it has many advantages: it can be applied to data that do not satisfy distributional or linearity assumptions, and it can be trained in an unsupervised manner without labeled data. However, autoencoders still struggle to identify local outliers in multidimensional data, and the characteristics of time series data greatly increase the dimensionality of the input. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering both local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) is applied to mitigate the limitations of local outlier identification in multidimensional data; multimodal architectures are commonly used to learn different types of inputs, such as voice and images, and the different modals share the autoencoder bottleneck and thereby learn their correlations. In addition, a Conditional Autoencoder (CAE) is used to learn the characteristics of time series data effectively without increasing the data dimension; conditional inputs are usually categorical variables, but here time is used as the condition so that periodicity can be learned. The proposed CMAE was verified by comparison with a Unimodal Autoencoder (UAE) and an MAE. The reconstruction performance over 41 variables was examined for the proposed and comparison models. Reconstruction quality differs by variable, but the loss values for the Memory, Disk, and Network modals are small in all three autoencoders, indicating that reconstruction works well for them.
The Process modal showed no significant difference across the three models, while the CPU modal performed best in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared; on all indicators the ranking was CMAE, MAE, then UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies; its accuracy improved to 87.12% and its F1-score was 0.8883, which is considered suitable for anomaly detection. From a practical standpoint, the proposed model has advantages beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and their dimensional increase can slow down inference, whereas the proposed model is easy to apply in practice in terms of inference speed and model management (a minimal sketch of the CMAE idea follows this entry).
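
The abstract describes the CMAE only at a high level, so the sketch below is a minimal PyTorch interpretation of its two key ideas: separate encoders per modal (e.g. CPU, memory, disk, and network metric groups) sharing one bottleneck, and a time-of-day condition appended to the latent code before decoding. The layer sizes, the modal split of the 41 variables, and the sine/cosine time encoding are assumptions, not the authors' architecture.

```python
# Hedged sketch of a Conditional Multimodal Autoencoder (CMAE) in the spirit of
# the abstract: per-modal encoders sharing a common bottleneck, with a time
# condition appended to the latent code before decoding. All dimensions, the
# modal split, and the time encoding are assumptions for illustration.
import math
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims: dict[str, int], latent: int = 16, cond_dim: int = 2):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, latent))
             for name, dim in modal_dims.items()})
        self.bottleneck = nn.Linear(latent * len(modal_dims), latent)
        self.decoders = nn.ModuleDict(
            {name: nn.Sequential(nn.Linear(latent + cond_dim, 32), nn.ReLU(), nn.Linear(32, dim))
             for name, dim in modal_dims.items()})

    def forward(self, inputs: dict[str, torch.Tensor], cond: torch.Tensor) -> dict[str, torch.Tensor]:
        # Each modal is encoded separately; the concatenated codes share one bottleneck.
        z = torch.cat([self.encoders[name](x) for name, x in inputs.items()], dim=-1)
        z = self.bottleneck(z)
        zc = torch.cat([z, cond], dim=-1)  # condition on time of day
        return {name: self.decoders[name](zc) for name in inputs}

if __name__ == "__main__":
    dims = {"cpu": 8, "memory": 10, "disk": 12, "network": 11}  # 41 metrics in total
    model = CMAE(dims)
    batch = {name: torch.randn(4, d) for name, d in dims.items()}
    hour = torch.rand(4, 1) * 24.0
    cond = torch.cat([torch.sin(2 * math.pi * hour / 24.0),
                      torch.cos(2 * math.pi * hour / 24.0)], dim=-1)
    recon = model(batch, cond)
    # Anomaly score: reconstruction error aggregated over all modals.
    score = sum(((recon[name] - batch[name]) ** 2).mean(dim=-1) for name in dims)
    print(score.shape)  # torch.Size([4])
```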

Analysis of News Agenda Using Text mining and Semantic Network Analysis: Focused on COVID-19 Emotions (텍스트 마이닝과 의미 네트워크 분석을 활용한 뉴스 의제 분석: 코로나 19 관련 감정을 중심으로)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.47-64
    • /
    • 2021
  • The global spread of COVID-19 has not only affected many parts of daily life but also had a huge impact on many areas, including the economy and society. As the number of confirmed cases and deaths increases, medical staff and the public are reported to experience psychological problems such as anxiety, depression, and stress. The collective tragedy accompanying the epidemic raises fear and anxiety, which are known to cause enormous disruptions to people's behavior and psychological well-being. Because long-term negative emotions can reduce immunity and upset physical balance, it is essential to understand the psychological state surrounding COVID-19. This study suggests a method of monitoring current media news in a prolonged COVID-19 situation that calls for not only physical but also psychological quarantine, and shows how a relatively simple semantic network analysis can be applied to such cases. The aim is to assist health policymakers in fast and complex decision-making. News plays a major role in setting the policy agenda, and among the major media, news headlines are considered important in communication science as a summary of the core content the media wants to convey to its audience. The news data used in this study were collected using "BigKinds", a service built on big data technology. From the collected data, keywords were extracted through text mining, and the relationships between words were visualized through semantic network analysis. Text mining was performed with KrKwic, a Korean semantic network analysis tool, and word frequencies were calculated to identify keywords. The frequency of keywords in articles related to COVID-19 emotions was examined and visualized as a word cloud: 'China', 'anxiety', 'situation', 'mind', 'social', and 'health' appeared frequently. In addition, UCINET, a specialized social network analysis program, was used for degree (connection) centrality and cluster analysis, and the network was visualized with NetDraw. The centrality analysis showed that the most central keywords in the keyword network were 'psychology', 'COVID-19', 'blue', and 'anxiety'. The co-occurrence network of keywords appearing in news headlines was visualized as a graph in which line thickness is proportional to co-occurrence frequency; the 'COVID-blue' pair is drawn boldest, and the 'COVID-emotion' and 'COVID-anxiety' pairs are drawn with relatively thick lines. 'Blue' here denotes depression, confirming that COVID-19 and depression are keywords deserving attention. The methodology used in this study can quickly measure social phenomena and changes at low cost. By analyzing news headlines, we identified people's feelings and perceptions on issues related to COVID-19 depression and derived the important keywords that define the main agendas (a generic sketch of the frequency and co-occurrence steps follows this entry).
By presenting and visualizing the topics and important keywords related to COVID-19 emotions at a glance, medical policy managers can be given a variety of perspectives when investigating the phenomenon, and the results are expected to serve as basic data for support, treatment, and service development for psychological quarantine issues related to COVID-19.
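
The study's pipeline is tool-based (BigKinds for collection, KrKwic for frequencies, UCINET/NetDraw for the network), so the sketch below is only a generic stand-in for the keyword-frequency and co-occurrence steps, using Python's collections and networkx; the toy headlines and tokenization are invented.

```python
# Hedged sketch of the keyword-frequency and co-occurrence-network steps, as a
# generic stand-in for the KrKwic/UCINET workflow described above. The toy
# headlines and tokenization are invented for illustration only.
from collections import Counter
from itertools import combinations
import networkx as nx

headlines = [
    ["covid", "blue", "depression", "anxiety"],
    ["covid", "anxiety", "health", "mind"],
    ["covid", "blue", "social", "situation"],
]

# Keyword frequency (word-cloud style counts).
freq = Counter(word for h in headlines for word in set(h))

# Co-occurrence network: edge weight = number of headlines containing both words.
G = nx.Graph()
for h in headlines:
    for a, b in combinations(sorted(set(h)), 2):
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

print(freq.most_common(3))  # e.g. [('covid', 3), ('blue', 2), ('anxiety', 2)]
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```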

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of the major job sites differ from site to site and also differ from the classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. A new job classification system that SW companies, SW job seekers, and job sites can all understand is therefore needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against job posting information from the major job sites and the NCS (National Competency Standards). To this end, association analysis is performed between the occupations of the major job sites, and association rules between the SQF and those occupations are derived. Using these rules, we propose an intelligent job classification system based on data that maps the job classification systems of the major job sites to the SQF. First, the major job sites are selected to obtain information on the job classification system of the SW market; we then identify how to collect job information from each site and gather the data through open APIs. Focusing on the relationships in the data, only job postings published on several job sites at the same time are retained, and the rest are removed. Next, the job classification systems of the sites are mapped to one another using the association rules derived from the association analysis. After completing this mapping between market classifications, discussing it with experts, and further mapping it onto the SQF, we finally propose a new job classification system. In practice, more than 30,000 job listings were collected in XML format through the open APIs of WORKNET, JOBKOREA, and saramin, the main job sites in Korea. After filtering down to about 900 job postings simultaneously published on multiple sites, 800 association rules were derived by applying the Apriori algorithm, a frequent pattern mining method (a minimal sketch of this step follows this entry). Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF classification were mapped and organized into first through fourth levels. In the new taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology, consists of three secondary and six tertiary classifications. In particular, the new system has a relatively flexible depth of classification, unlike existing systems: WORKNET divides jobs into three levels, JOBKOREA divides jobs into two levels with keyword-level subdivisions, and saramin likewise uses two levels subdivided by keywords. The newly proposed standard classification accepts some keyword-based jobs and treats some product names as jobs.
In the proposed system, some jobs stop at the second level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down to the same depth. We also combined the rules obtained from the collected market data and association analysis with experts' opinions. The newly proposed system can therefore be regarded as a data-based, intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system grounded in data, by mapping between occupations through association analysis rather than relying on the intuition of a few experts. However, it has the limitation that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demand varies with seasonal factors and the timing of major corporate recruitment rounds, continuous data monitoring and repeated experiments are needed for more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to be transferable to other industries based on its success in the SW field.
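
The mapping step rests on frequent-pattern mining with the Apriori algorithm over postings that appear on several sites at once. The sketch below illustrates only that step; the transactions (site-prefixed category labels), support and confidence thresholds are invented, and mlxtend is assumed as the mining library rather than being named in the paper.

```python
# Hedged sketch of the Apriori-based association step: each "transaction" is the
# set of site-specific job categories attached to one posting that appears on
# multiple job sites. Toy transactions and thresholds are invented; mlxtend is
# assumed as the mining library.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["worknet:web_dev", "jobkorea:web_programming", "saramin:backend"],
    ["worknet:web_dev", "jobkorea:web_programming", "saramin:frontend"],
    ["worknet:db_admin", "jobkorea:dba", "saramin:backend"],
    ["worknet:db_admin", "jobkorea:dba", "saramin:db_operation"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
# Rules such as {worknet:db_admin} -> {jobkorea:dba} suggest category mappings
# between sites, which are then reviewed by experts and mapped onto the SQF.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```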

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.89-106
    • /
    • 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, dashboard cameras, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Because many industries lack skilled manpower to analyze videos, machine learning and artificial intelligence are actively used to assist human analysts, and demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also grown rapidly. However, object detection and tracking suffer from conditions that degrade performance, such as re-appearance after an object leaves the recording location and occlusion, so action and emotion detection models built on top of them also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer performance degradation from bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and the AWS Rekognition emotion recognition service. The proposed system uses single-linkage hierarchical clustering for Re-ID together with processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model based on simple metrics, near real-time processing, and robustness against tracking failures caused by object departure and re-appearance, occlusion, and similar conditions. By continuously linking each object's action and facial emotion detection results to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box image of each object detected by the tracking model in each frame and applies single-linkage hierarchical clustering against the feature vectors of past frames to identify objects whose tracks were lost. Through this process, an object that re-appears after leaving the scene or being occluded can be re-tracked, so the action and facial emotion results of a newly detected object can be linked to the object that appeared earlier (a generic sketch of this clustering idea follows this entry). To improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue that reduce RAM requirements while maximizing GPU throughput, and we introduce the IoF (Intersection over Face) algorithm that links the facial emotions recognized by AWS Rekognition with the object tracking information. The academic significance of this study is that the two-stage re-identification model achieves real-time performance, through these processing techniques, even in the high-cost setting of simultaneous action and facial emotion detection, without sacrificing accuracy by falling back on simple metrics. The practical implication is that industrial fields which need action and facial emotion detection but struggle with object tracking failures can analyze videos effectively with the proposed model.
With its high re-tracking accuracy and processing performance, the proposed model can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In future work, the object tracking performance should be measured more precisely in experiments on the MOT Challenge dataset used by many international conferences. We will also investigate the cases that the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan to apply the model to datasets from various fields related to intelligent video analysis.
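
The re-identification step groups per-frame appearance feature vectors with single-linkage hierarchical clustering so that tracklets broken by occlusion or re-entry can be merged into one identity. The sketch below is a generic SciPy illustration of that grouping idea; the cosine distance, the cut threshold, and the toy feature vectors are assumptions, not the authors' Torchreid-based configuration.

```python
# Hedged sketch of single-linkage hierarchical clustering over appearance
# feature vectors, illustrating how broken tracklets could be merged into one
# identity. Cosine distance, the cut threshold, and the toy vectors are
# assumptions, not the authors' Torchreid-based configuration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
person_a  = rng.normal(0.0, 0.05, size=(5, 128)) + 1.0  # features from tracklet A
person_a2 = rng.normal(0.0, 0.05, size=(3, 128)) + 1.0  # same person after occlusion
person_b  = rng.normal(0.0, 0.05, size=(4, 128)) - 1.0  # a different person
features = np.vstack([person_a, person_a2, person_b])

dist = pdist(features, metric="cosine")            # pairwise cosine distances
Z = linkage(dist, method="single")                 # single-linkage dendrogram
labels = fcluster(Z, t=0.5, criterion="distance")  # cut at an assumed threshold
print(labels)  # tracklets A and A2 share a label; B gets its own
```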