• Title/Summary/Keyword: work classification structure

A study on Digital Agriculture Data Curation Service Plan for Digital Agriculture

  • Lee, Hyunjo;Cho, Han-Jin;Chae, Cheol-Joo
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.171-177 / 2022
  • In this paper, we propose a service method that provides insight into multi-source agricultural data, clusters environmental factors to support data analysis over time, and curates crop environmental factors. The proposed curation service consists of four steps: collection, preprocessing, storage, and analysis. First, in the collection step, the service system collects and organizes multi-source agricultural data using an OpenAPI-based web crawler. Second, in the preprocessing step, the system performs data smoothing to reduce measurement errors, adopting a smoothing method for each facility type in consideration of the error rates associated with facility characteristics such as greenhouses and open fields. Third, in the storage step, an agricultural data integration schema and a Hadoop HDFS-based storage structure are proposed for large-scale agricultural data. Finally, in the analysis step, the service system performs DTW-based time series classification (sketched below) in consideration of the characteristics of digital agricultural data. Through the DTW-based classification, the accuracy of prediction results is improved by reflecting the characteristics of the time series data without loss. As future work, we plan to implement the proposed service method and apply it to a smart farm greenhouse for testing and verification.
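
The DTW-based classification step can be illustrated with a short sketch. This is a minimal, assumed example rather than the paper's implementation: a dynamic-programming DTW distance drives a 1-nearest-neighbor classifier over environmental time series, so timing shifts between series are absorbed by the alignment instead of being lost to resampling.

```python
# Minimal, assumed example (not the paper's implementation): a dynamic-
# programming DTW distance driving a 1-nearest-neighbor classifier.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two 1-D series; the elastic alignment absorbs
    timing shifts instead of discarding them through resampling."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify_1nn(query: np.ndarray, train_series: list, train_labels: list):
    """Label a query series by its DTW-nearest training series."""
    distances = [dtw_distance(query, s) for s in train_series]
    return train_labels[int(np.argmin(distances))]

# Example: a time-shifted greenhouse temperature curve still matches its class.
base = np.sin(np.linspace(0, 6.28, 48))
label = classify_1nn(np.roll(base, 3), [base, -base], ["normal", "inverted"])
```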

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content in the text data about products, and it has proliferated along with text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and it has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available to others, so they are not only easy to collect but also directly affect a business. In marketing, real-world information from customers is gathered on websites rather than through surveys: depending on whether a website's posts are positive or negative, customer response is reflected in sales, and firms try to identify this information. However, many reviews on a website are not always good and can be difficult to identify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. Nevertheless, a lack of accuracy is recognized because sentiment calculations change according to the subject, the paragraph, the direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pre-labeled IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features of data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and controlled at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can capture the long-term dependencies that CNN alone cannot. Furthermore, when LSTM is attached to CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can improve on the weaknesses of each individual model, with the further advantage of layer-by-layer learning through the end-to-end structure. Based on these reasons, this study enhances the classification accuracy of movie reviews using the integrated CNN-LSTM model (a schematic sketch follows this entry).
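
As a rough illustration of the integrated architecture described above, the following Keras sketch stacks a convolution and pooling layer (local n-gram feature extraction) in front of an LSTM (ordering of the extracted features). The vocabulary size, sequence length, and layer widths are assumptions for illustration, not the paper's configuration.

```python
# A minimal sketch, assuming integer-encoded IMDB-style input;
# hyperparameters are illustrative, not the paper's.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000  # assumption
MAX_LEN = 200       # assumption

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),         # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),   # local (n-gram) feature extraction
    layers.MaxPooling1D(2),                    # downsample the feature sequence
    layers.LSTM(64),                           # model the order of pooled features
    layers.Dense(1, activation="sigmoid"),     # positive vs. negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Example data pipeline using the Keras IMDB reviews (1 = positive, 0 = negative)
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
# model.fit(x_train, y_train, epochs=3, batch_size=64)
```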

Quantitative Flood Forecasting Using Remotely-Sensed Data and Neural Networks

  • Kim, Gwangseob
    • Proceedings of the Korea Water Resources Association Conference / 2002.05a / pp.43-50 / 2002
  • Accurate quantitative forecasting of rainfall for basins with a short response time is essential to predict streamflow and flash floods. Previously, neural networks were used to develop a Quantitative Precipitation Forecasting (QPF) model that greatly improved forecasting skill at specific locations in Pennsylvania, using Numerical Weather Prediction (NWP) output together with rainfall and radiosonde data. The objective of this study was to improve the existing artificial neural network model and incorporate the evolving structure and frequency of intense weather systems in the mid-Atlantic region of the United States for improved flood forecasting. Besides radiosonde and rainfall data, the model also used as input the satellite-derived characteristics of storm systems such as tropical cyclones, mesoscale convective complexes, and convective cloud clusters. The convective classification and tracking system (CCATS) was used to identify and quantify storm properties such as lifetime, area, eccentricity, and track. As in standard expert prediction systems, the fundamental structure of the neural network model was learned from the hydroclimatology of the relationships between weather systems, rainfall production, and streamflow response in the study area (a schematic sketch follows this entry). The new Quantitative Flood Forecasting (QFF) model was applied to predict streamflow peaks with lead times of 18 and 24 hours over a five-year period in four watersheds on the leeward side of the Appalachian mountains in the mid-Atlantic region. Threat scores consistently above 0.6 and close to 0.8~0.9 were obtained for the 18-hour lead-time forecasts, and skill scores of at least 4% and up to 6% were attained for the 24-hour lead-time forecasts. This work demonstrates that multisensor data cast into an expert information system such as a neural network, if built upon scientific understanding of regional hydrometeorology, can lead to significant gains in the forecast skill of extreme rainfall and associated floods. In particular, this study validates our hypothesis that accurate and extended flood forecast lead times can be attained by taking into consideration the synoptic evolution of atmospheric conditions extracted from the analysis of large-area remotely sensed imagery. While physically based numerical weather prediction and river routing models cannot accurately depict complex natural non-linear processes, and thus have difficulty simulating extreme events such as heavy rainfall and floods, data-driven approaches should be viewed as a strong alternative in operational hydrology. This is all the more pertinent at a time when the diversity of sensors in satellites and ground-based operational weather monitoring systems provides large volumes of data on a real-time basis.
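
As a schematic of the data-driven approach the abstract argues for, the sketch below trains a small feedforward network to map storm and sounding predictors to a streamflow peak at a fixed lead time. The feature list, network size, and stand-in arrays are assumptions; the study's actual predictors and architecture are described only qualitatively above.

```python
# Schematic only: a small neural network mapping storm/sounding predictors
# to a streamflow peak. Feature names, network size, and the stand-in
# arrays are assumptions, not the study's configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed predictor columns: [storm_lifetime_h, storm_area_km2, eccentricity,
#                             track_speed_kmh, precipitable_water_mm, basin_rainfall_mm]
rng = np.random.default_rng(0)
X_train = rng.random((200, 6))        # stand-in for historical storm events
y_train = rng.random(200)             # stand-in for observed streamflow peaks

qff = make_pipeline(
    StandardScaler(),                 # predictors live on very different scales
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
qff.fit(X_train, y_train)
peak_forecast = qff.predict(X_train[:1])   # e.g., an 18-hour lead-time forecast
```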

A Study on the Retailer's Global Expansion Strategy and Supply Chain Management : Focus on the Metro Group (소매업체의 글로벌 확장전략과 공급사슬관리에 관한 연구: 메트로 그룹을 중심으로)

  • Kim, Dong-Yun;Moon, Mi-Jin;Lee, Sang-Youn
    • Journal of Distribution Science / v.11 no.12 / pp.25-37 / 2013
  • Purpose - The structure of retailing has changed as retailers develop markets in response to changes in the business environment. This study aims to analyze the general situation of retailers in order to predict future global strategy, using case studies of overseas expansion strategy and the Metro Group's global strategy. Research design, data, and methodology - The background to the new retail business model and retailer classification is analyzed as theoretical data. In addition, the key success factors of the Metro Group's "cash and carry" strategy are analyzed, as is the Metro Group's global CPFR (collaborative planning, forecasting, and replenishment) strategy. Finally, the plan for cooperation and precise forecasting under the Metro Group's supply chain management is analyzed from the viewpoint of the promotion environment. Materials analyzed included the 2012 annual report, the Metro Group's web page, and a video interview with the executive in charge of global strategy and the new market development department. Some data were revised to avoid disrupting essential aspects of the case studies. Results - The important finding was that the Metro Group became a world-class retail company through its successful global expansion strategy. The primary goal of the Metro Group's global strategy is to hold a leading business position in Eastern and Western Europe, and the "cash and carry" strategy has the highest priority in its overseas expansion. Moreover, the Metro Group has standardized product planning capacity that can be applied in countries with different structural and cultural backgrounds; this is the main reason the Metro Group could rapidly succeed in the Eastern European and Asian markets through its structured overseas expansion strategies. In addition, the Metro Group emphasizes the importance of supply chain management. Conclusions - First, retailers should create additional value by utilizing the domestic market, market power, and economies of scale when launching a global strategy, to maximize the benefits of diversification. Second, the political, economic, and cultural background of the target country must be understood to implement the overseas expansion strategy successfully. Third, the main factor in successful cooperation with a local partner is how quickly the company gains a full understanding of its partner's business resources and core competence; all organizations should focus on the achievement of shared goals in order to operate the partnership successfully. Fourth, retailers should improve their business, financial, and organizational structures, and also their work processes and company culture, in order to respond strongly in the competitive global market. Fifth, the essential point of a successful retail business is control over its branding and format: the retailer can avoid forecasting errors through supply chain management by accurately tracking actual inventory levels, and the risks along the supply chain are effectively shared between the supply chain partners. Finally, this market trend is gaining strength across all parts of the business.

A Case Study on the Interior design characteristics of Integrated CCTV Control Center - Focused at Human Factor Design aspect (CCTV 통합관제센터의 실내공간특성에 대한 사례분석연구 - 인간공학디자인(HFD)의 관점에서)

  • Han, Ji Eun;Kwon, Gyu Hyun
    • Design Convergence Study / v.16 no.3 / pp.103-118 / 2017
  • It is expected that integrated control services in the public sector will expand to ensure the safety of citizens. Therefore, in this study, we analyzed the classification of CCTV control centers and the characteristics of their interior design. The survey covered eight control centers in Seoul constructed since 2007 and analyzed them according to the criteria of general matters, services, basic spatial information, spatial structure, and internal structure. The results are summarized as follows. The integrated control center, although its share of the physical environment is not large, performs tasks important to the citizens of the city; it operates 24 hours a day and requires security. It is characterized by efficient space allocation, circulation design, and connections that follow the urgent work flow. The results of this study are expected to serve as basic data for other integrated control center environments.

Analysis of Church based parish nursing activities in Taegu city (목회간호사의 업무활동분석)

  • Kim, Chung-Nam;Park, Jeong-Sook;Kwon, Young-Sook
    • Research in Community and Public Health Nursing / v.7 no.2 / pp.384-399 / 1996
  • The concept of parish nursing began in the late 1960s in the United States, when increasing numbers of churches employed registered nurses (RNs) to provide holistic, preventive health care to the members of their congregations. The parish nursing role was developed in 1983 by Lutheran chaplain Granger Westberg, and it provides care to church congregations of various denominations. The parish nurse functions as health educator, counselor, group facilitator, client advocate, and liaison to community resources. Since these activities are complementary to the population-focused practice of community health CNSs, parish nurses either have a strong public health background or work directly with both baccalaureate-prepared public health nurses and CNSs. In a Midwest community in the U.S.A., the Healthy People 2000 (1991) objectives are being addressed in health ministries through a coalition between public health nurses and parish nurses. Parish nursing is in its beginning stage in Korea, and until now no research had been conducted on the concrete role of Korean parish nurses. The main purpose of this study was to identify, classify, and analyze the activities of parish nurses. The other important objective was to establish an effective approach and direction for parish nursing and to provide a database for a Korean parish nursing model through analysis and classification of the content of the nursing records, which included nursing activities. This study was a descriptive survey. The parish nurses were working in churches where the demonstration project on parish nursing was developed. The study covered all nursing records documented by parish nurses in three churches from March 1995 to February 1996; the Namsan, Taegu Jeei, and Nedang Presbyterian churches in Taegu cooperated with Keimyung Nursing College on the parish nursing demonstration project. The data analysis procedure was as follows: first, a record analysis tool was developed; second, the data were collected, coded, and analyzed. The classification of nursing activities was developed through a literature review, from which the basic analysis tool was produced, and a content validity review was also done. The classification of the activities of parish nurses showed seven activity categories: visitation nursing, health check-ups, health education, referring, attending staff meetings, attending inservices and seminars, and coordinating volunteers. The percentages of activities were as follows: visitation nursing (A: 51.6%, B: 55.0%, C: 42.6%); health check-ups (A: 13.5%, B: 12.1%, C: 22.3%); health education (A: 13.5%, B: 13.2%, C: 18.2%); referring (A: 1.4%, B: 4.2%, C: 2.4%); attending staff meetings (A: 18.8%, B: 13.0%, C: 12.2%); attending inservices and seminars (A: 1.5%, B: 2.2%, C: 2.1%); coordinating volunteers (A: 0.3%, B: 0.4%, C: 0.0%). To establish and develop a parish nursing delivery network in Korea, parish nurses' role, activities, and boundaries of practice should be continuously monitored and refined every two years, along with the arrangement of the working structure, continuing education, cooperation with community resources, and the structuring and organizing of the parish nursing delivery network. It is also necessary to develop an effective nursing recording system based on needs-assessment research data from the various congregation members.

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • At one time, the anomaly detection field was dominated by methods that determined whether there was an abnormality based on statistics derived from the data. This methodology worked because data used to be low-dimensional, so classical statistical methods were effective. However, as the characteristics of data have become complex in the era of big data, it has become more difficult to accurately analyze and predict the data generated throughout industry in the conventional way. Supervised learning algorithms based on SVM and decision trees were therefore used. However, a supervised model predicts test data accurately only when the class distribution is balanced, whereas most data generated in industry has imbalanced classes, so the predictions of a supervised model are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model composed of convolutional neural networks that performs anomaly detection on medical images. By contrast, anomaly detection for sequence data using generative adversarial networks has attracted far fewer research papers than image data. Li et al. (2018) used LSTM, a type of recurrent neural network, to propose a model that classifies anomalies in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that much remains to be studied in the classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-layer stacked LSTM with 32-dimensional and 64-dimensional hidden unit layers, and the discriminator is an LSTM with a 64-dimensional hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probabilities assigned to the actual data; in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique (a schematic sketch follows this entry). In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in all experiments in terms of precision, and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder: because it can learn the data distribution from real categorical sequence data, it is not swayed by a single normal pattern, whereas the autoencoder is. The robustness test showed that the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder achieved 40% and the generative adversarial network 51%. Experiments were also conducted to show how much performance changes with differences in the structure used to optimize the latent variables; as a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously received relatively little attention.
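
The components named above can be rendered as a brief sketch. This is an assumed illustration, not the authors' code: an LSTM generator with stacked 32- and 64-dimensional hidden layers, a 64-dimensional LSTM discriminator whose final hidden state serves as the feature vector, and a feature-matching anomaly score computed after fitting a latent sequence by plain gradient descent (the paper designs this latent optimization with an LSTM, which is omitted here for brevity).

```python
# Minimal sketch of an LSTM GAN with a feature-matching anomaly score.
# Input: one-hot encoded categorical sequences, shape (batch, seq_len, n_features).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent sequence to a synthetic action sequence
    (stacked 32-dim and 64-dim LSTM layers, as in the abstract)."""
    def __init__(self, latent_dim: int, n_features: int):
        super().__init__()
        self.lstm1 = nn.LSTM(latent_dim, 32, batch_first=True)
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)
        self.out = nn.Linear(64, n_features)

    def forward(self, z):
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return torch.softmax(self.out(h), dim=-1)

class Discriminator(nn.Module):
    """64-dim LSTM; its final hidden state doubles as the feature
    vector used for feature matching."""
    def __init__(self, n_features: int):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.clf = nn.Linear(64, 1)

    def features(self, x):
        _, (h, _) = self.lstm(x)
        return h[-1]                       # (batch, 64)

    def forward(self, x):
        return torch.sigmoid(self.clf(self.features(x)))

def anomaly_score(G, D, x, latent_dim: int, steps: int = 100, lr: float = 0.01):
    """Fit a latent sequence z to x by gradient descent, then score x by the
    feature-matching distance ||D.features(x) - D.features(G(z))||."""
    z = torch.randn(x.size(0), x.size(1), latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    target = D.features(x).detach()        # fixed features of the real sequence
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.norm(target - D.features(G(z)), dim=1).sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.norm(target - D.features(G(z)), dim=1)  # higher = more anomalous
```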

A Study on Establishment of the Levee GIS Database Using LiDAR Data and WAMIS Information (LiDAR 자료와 WAMIS 정보를 활용한 제방 GIS 데이터베이스 구축에 관한 연구)

  • Choing, Yun-Jae
    • Journal of the Korean Association of Geographic Information Studies / v.17 no.3 / pp.104-115 / 2014
  • A levee is defined as a man-made structure protecting areas from temporary flooding. This paper suggests a methodology for establishing a levee GIS database using airborne topographic LiDAR (Light Detection and Ranging) data taken in the Nakdong river basins and WAMIS (WAter Management Information System) information. First, the National Levee Database (NLD) established by the USACE (United States Army Corps of Engineers) and the levee information tables established by WAMIS are compared and analyzed. To extract the levee information from the LiDAR data, a DSM (Digital Surface Model) is generated from the LiDAR point clouds using an interpolation method. Then, the slope map is generated by calculating the maximum rate of elevation difference between each pixel of the DSM and its neighboring pixels (a sketch of this step follows this entry). A slope classification method is employed to extract the levee component polygons, such as the levee crown polygons and the levee slope polygons, from the slope map. The levee information database is then established by integrating the attributes extracted from the identified levee crown and slope polygons with the information provided by WAMIS. Finally, this paper discusses the advantages and limitations of a levee GIS database established using only the LiDAR data and suggests future work for improving the quality of the database.
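
The slope-map computation described above (maximum rate of elevation difference between a DSM pixel and its eight neighbors) can be sketched as follows; the classification thresholds are illustrative assumptions, not values from the paper.

```python
# Sketch of the slope-map step: the slope at each DSM pixel is the maximum
# elevation difference to its 8 neighbors divided by the horizontal distance.
# Classification thresholds are illustrative assumptions, not the paper's.
import numpy as np

def slope_map(dsm: np.ndarray, cell_size: float = 1.0) -> np.ndarray:
    """Per-pixel slope (degrees) from a 2-D DSM of elevations."""
    pad = np.pad(dsm, 1, mode="edge")
    max_rate = np.zeros_like(dsm, dtype=float)
    h, w = dsm.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbor = pad[1 + di:1 + di + h, 1 + dj:1 + dj + w]
            dist = cell_size * np.hypot(di, dj)   # 1 or sqrt(2) cells away
            max_rate = np.maximum(max_rate, np.abs(dsm - neighbor) / dist)
    return np.degrees(np.arctan(max_rate))

def classify_levee_pixels(slopes: np.ndarray,
                          crown_max: float = 10.0,
                          side_min: float = 20.0,
                          side_max: float = 60.0):
    """Near-flat pixels -> candidate levee crown; steep pixels -> candidate
    levee side slope. Polygons would then be vectorized from these masks."""
    crown = slopes < crown_max
    side = (slopes >= side_min) & (slopes <= side_max)
    return crown, side
```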

Collaboration and Node Migration Method of Multi-Agent Using Metadata of Naming-Agent (네이밍 에이전트의 메타데이터를 이용한 멀티 에이전트의 협력 및 노드 이주 기법)

  • Kim, Kwang-Jong;Lee, Yon-Sik
    • The KIPS Transactions:PartD / v.11D no.1 / pp.105-114 / 2004
  • In this paper, we propose a method by which diverse agents in a multi-agent model collaborate with one another, and we describe a node migration algorithm for the Mobile-Agent (MA) that uses the metadata of the Naming-Agent (NA). Collaboration among the agents assures the stability of the agent system and provides reliable information retrieval in a distributed environment. The NA, an important part of the multi-agent model, identifies each agent and assigns it a unique name, and each agent references a specified object by that name. The NA also integrates and manages the naming service by classifying agents according to their characteristics, such as Client-Push-Agent (CPA), Server-Push-Agent (SPA), and System-Monitoring-Agent (SMA), and it provides the location list of mobile nodes to a specified MA. When an MA moves through the nodes, the efficiency of node migration is improved by assigning priorities according to hit_count, hit_ratio, node processing time, and network traffic time. Therefore, for an integrated naming service, we design the Naming Agent and show the structure of its metadata, which consists of fields such as hit_count, hit_ratio, and the total_count of documents. This paper also presents the flow of creating and updating the metadata and the method of node migration based on hit_count through multi-agent collaboration (a hypothetical sketch of this metadata follows this entry).
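
The metadata fields named above (hit_count, hit_ratio, total_count, node processing time, network traffic time) suggest a simple structure. The sketch below is a hypothetical rendering of the NA's per-node metadata and a priority ordering for MA node migration; the priority formula is an illustrative assumption, not the paper's.

```python
# Hypothetical sketch of Naming-Agent metadata and a hit_count-based
# migration order for a Mobile-Agent; field names follow the abstract
# (hit_count, hit_ratio, total_count), everything else is assumed.
from dataclasses import dataclass

@dataclass
class NodeMetadata:
    address: str            # location of the node, as served by the NA
    hit_count: int          # successful retrievals recorded at this node
    total_count: int        # total documents (requests) seen at this node
    processing_time: float  # average node processing time (s)
    traffic_time: float     # average network traffic time to reach it (s)

    @property
    def hit_ratio(self) -> float:
        return self.hit_count / self.total_count if self.total_count else 0.0

def migration_order(nodes: list[NodeMetadata]) -> list[NodeMetadata]:
    """Visit nodes with high hit_count/hit_ratio and low time costs first.
    The weighting below is an illustrative assumption, not the paper's."""
    def priority(n: NodeMetadata) -> float:
        return n.hit_count * n.hit_ratio / (1.0 + n.processing_time + n.traffic_time)
    return sorted(nodes, key=priority, reverse=True)
```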

HFACS-K: A Method for Analyzing Human Error-Related Accidents in Manufacturing Systems: Development and Case Study (제조업의 인적오류 관련 사고분석을 위한 HFACS-K의 개발 및 사례연구)

  • Lim, Jae Geun;Choi, Joung Dock;Kang, Tae Won;Kim, Byung Chul;Ham, Dong-Han
    • Journal of the Korean Society of Safety / v.35 no.4 / pp.64-73 / 2020
  • As the Korean government and safety-related organizations make continuous efforts to reduce the number of industrial accidents, the accident rate has steadily declined since 2010, reaching 0.48% in 2017. However, the number of fatalities due to industrial accidents was 1,987 in 2017, which means that more effort is needed to reduce industrial accidents. As an essential activity for enhancing system safety, accident analysis can be used effectively to reduce the number of industrial accidents. Accident analysis aims to understand the process of an accident scenario and to identify the plausible causes of the accident, and it offers useful information for developing measures to prevent the recurrence of that accident or similar ones. However, the current practice of accident analysis in Korean manufacturing companies appears to take a simplistic accident model based on a linear, deterministic cause-effect relation. Considering the actual complexities underlying accidents, this is problematic, and even more so in the case of human error-related accidents. Accordingly, it is necessary to use a more elaborate accident model that addresses the complexity and nature of human error-related accidents more systematically. In this regard, HFACS (Human Factors Analysis and Classification System) is a viable accident analysis method. It is based on the Swiss cheese model and offers a range of causal factors for a human error-related accident, some of which can be judged to be the plausible causes of the accident. HFACS has been widely used in several work domains (e.g., aviation and rail) and can be effectively used in Korean industries. However, as HFACS was originally developed in the aviation industry, its taxonomy of causal factors may not be easily applied to accidents in Korean industries, particularly manufacturing companies, and the typical characteristics of Korean industries need to be reflected as well. With this issue in mind, we developed HFACS-K as a method for analyzing accidents in Korean industries. This paper reports the process of developing HFACS-K, its structure and contents, and a case study demonstrating its usefulness (an illustrative encoding of the HFACS taxonomy follows this entry).
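
For readers unfamiliar with HFACS, the following sketch encodes the four standard HFACS levels of the Swiss cheese model as a data structure for tagging causal factors. HFACS-K's Korea-specific categories are not reproduced here, so the category strings are purely illustrative.

```python
# Illustrative encoding of an HFACS-style taxonomy for tagging the causal
# factors of an accident. The four levels are standard HFACS (Swiss cheese
# model); category names below are examples, not HFACS-K's actual taxonomy.
from dataclasses import dataclass, field
from enum import Enum

class HFACSLevel(Enum):
    ORGANIZATIONAL_INFLUENCES = 1
    UNSAFE_SUPERVISION = 2
    PRECONDITIONS_FOR_UNSAFE_ACTS = 3
    UNSAFE_ACTS = 4

@dataclass
class CausalFactor:
    level: HFACSLevel
    category: str        # e.g. "skill-based error" (illustrative)
    description: str

@dataclass
class AccidentAnalysis:
    accident_id: str
    factors: list[CausalFactor] = field(default_factory=list)

    def factors_at(self, level: HFACSLevel) -> list[CausalFactor]:
        """All identified causal factors at one defensive layer."""
        return [f for f in self.factors if f.level == level]
```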