• Title/Summary/Keyword: Big data Processing


Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye;Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.2
    • /
    • pp.311-326
    • /
    • 2024
  • The rapid development of neural network technology has enabled big-data-driven models to reproduce the texture effects of complex objects. Because such models are limited in complex scenes, custom template matching must be established and applied across many fields of computer vision. Dependence on a small database of high-quality labeled samples is weak, and deep-feature machine learning systems perform texture-effect inference relatively poorly. A neural-network-based style transfer algorithm collects and preserves pattern data, then extracts and modernizes pattern features; through this algorithm model, the texture and color of patterns can be presented and displayed digitally more easily. In this paper, based on texture-effect reasoning with custom template matching, the 3D visualization of the target is transformed into a 3D model. The similarity between the scene to be inferred and a user-defined template of multi-dimensional external feature labels is calculated, and a convolutional neural network is adopted to optimize the external region of the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm accurately captures salient targets, ablates more noise, and improves the visualization results. The proposed deep convolutional neural network optimization algorithm shows good speed, data accuracy, and robustness. It adapts to the computation of more task scenes, displays the redundant vision-related information of image conversion, and further improves the computational efficiency and accuracy of convolutional networks, which is of high significance for research on image information conversion.

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.457-464
    • /
    • 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules from data by itself. In particular, CNN algorithms have reached the level of adjusting the source data themselves. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that not only the differing environmental information of each image but also the same information is represented by different features, which may degrade the performance of an image analysis model. Therefore, this paper proposes a method to improve the performance of an image color constancy model based on adversarial learning, using image data generated in heterogeneous environments simultaneously. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator', which predicts the environment in which an image was taken, and an 'Illumination Estimator', which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of Angular Error compared with existing methods.
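The abstract above evaluates color constancy by Angular Error. As a minimal sketch, that metric is the angle between the estimated and ground-truth illuminant vectors; the function name and plain-RGB-tuple representation here are illustrative assumptions, not the paper's code:

```python
import math

def angular_error(est, gt):
    """Angle in degrees between an estimated and a ground-truth
    illuminant vector -- the standard color constancy error metric."""
    dot = sum(a * b for a, b in zip(est, gt))
    norm = (math.sqrt(sum(a * a for a in est)) *
            math.sqrt(sum(b * b for b in gt)))
    # Clamp to avoid acos domain errors from floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

A perfect estimate scores 0 degrees; orthogonal vectors score 90, so lower is better regardless of the illuminant's overall brightness.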

A Study on the Tree Surgery Problem and Protection Measures in Monumental Old Trees (천연기념물 노거수 외과수술 문제점 및 보존 관리방안에 관한 연구)

  • Jung, Jong Soo
    • Korean Journal of Heritage: History & Science
    • /
    • v.42 no.1
    • /
    • pp.122-142
    • /
    • 2009
  • This study reviewed domestic and international theories for the maintenance and health enhancement of old and big trees, carried out an anatomical survey of the operated parts of trees to assess the current status of domestic tree surgery together with a perception survey of an expert group, and drew the following conclusions while suggesting a reform plan. First, analysis of the correlations of the 67 subject trees with their ages, growth status, and surroundings revealed that outcomes were closely related to positional characteristics and damage size, but little related to the filler materials used. Second, affected parts were most frequent at sheared boughs under $0.09m^2$, and the hollow size by position (part) was largest at 'root + stem', starting from behind the main root and stem. Correlation analysis elicited the same result in the group with low correlation. Third, serious problems arose when fillers (especially urethane) were charged into large hollows or exposed roots behind the root and stem part, or used for surface processing; the benefit of filling hollows was analyzed as small. Fourth, the surface processing of currently used fillers (artificial bark) is mainly 'epoxy + woven fabric + cork', but it is not flexible, which has produced frequent cracks and cracked surfaces at the joint with the tree-textured part. Fifth, the external status of the operated part correlated very highly with closeness, surface condition, formation of adhesive tissue, and the internal survey results. Sixth, the practice most responsible for damage through wrong management of old and big trees was banking, and wrong pruning was the main source of above-ground damage; a small bough cut by the standard method can easily recover from its damage through the formation of adhesive tissue. Seventh, the parameter affecting the handling of business related to old and big trees is 'the need for conscious reform of managers and related business'. Eighth, a reform plan in the institutional aspect can include arranging the law and organization for the management and preservation of old and big trees. This study, which prepared a reform plan through a status survey of designated old and big trees, has the limitation of inducing a reform plan from individual status surveys and the weakness of offering little statistical support; this can be complemented by subsequent studies.

Design of method to analyze UI structure of contents based on the Morphology (형태적 관점의 콘텐츠 UI구조 분석 방법 설계)

  • Yun, Bong Shik
    • Smart Media Journal
    • /
    • v.8 no.4
    • /
    • pp.58-63
    • /
    • 2019
  • The growth of the mobile device market has changed the education market and led to the quantitative growth of various media education. In particular, smart devices, which offer better interaction than existing PCs or consoles, allow the development of more user-friendly content, enabling various types of educational content and inducing changes in traditional education methods for consumers. Although many researchers have recently suggested viable development methods or marketing elements for content, development companies and developers have until now relied merely on human senses. Therefore, it is necessary to study actual users' smart-device-based usability and experience environments. This study proposes an intuitive statistical processing method for analyzing the usability of game-type educational content from a morphological perspective, applied to popular released games as a basis for analyzing the user experience environment. In particular, because the game industry has a sufficient number of similar cases, research can be conducted on the basis of big data, and the proposed analysis method can support immediate decision-making among multiple co-developers. The method is also expected to become an analytical model that can communicate with other industries, because it is effective in securing data sources.

Causes of Food Poisoning and HACCP Accreditation in September 2018 (2018년 하절기 식중독 사고 발생 현황과 HACCP인증제와의 관련성)

  • Kim, Yoon-Jeong;Kim, Ji-Yun;Kim, Hyeon-Jeong;Choi, A-Young;Lee, Sung-won
    • Journal of Industrial Convergence
    • /
    • v.17 no.3
    • /
    • pp.9-16
    • /
    • 2019
  • In this study, we analyze the causes of the major food poisoning outbreaks of September 2018 and their relationship to the HACCP certification system. Based on three years of food poisoning cases and causative-substance data, together with big data on HACCP-certified companies and food poisoning frequency, two hypotheses were tested. Hypothesis 1: 'Salmonella spread through the school food supply chain.' Hypothesis 2: 'As the number of HACCP-certified companies increases, the number of food poisoning cases over the last three years decreases.' The results show that the September 2018 outbreak was caused by Salmonella bacteria and that outsourced food provided through school meals was the source. It was also shown that the expansion of HACCP certification did not significantly contribute to reducing food poisoning. Therefore, management and operation measures are proposed to prevent Salmonella and to make HACCP certification capable of reducing food poisoning.

Study of Improved CNN Algorithm for Object Classification Machine Learning of Simple High Resolution Image (고해상도 단순 이미지의 객체 분류 학습모델 구현을 위한 개선된 CNN 알고리즘 연구)

  • Hyeopgeon Lee;Young-Woon Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.1
    • /
    • pp.41-49
    • /
    • 2023
  • A convolutional neural network (CNN) is a representative algorithm for implementing artificial neural networks. CNNs improved on the rapid increase in computation and the low object classification rates associated with a conventional multi-layered fully-connected neural network (FNN). However, because of the rapid development of IT devices, the maximum resolution of images captured by current smartphone and tablet cameras has reached 108 million pixels (MP), and a traditional CNN algorithm requires significant cost and time to learn and process such simple, high-resolution images. Therefore, this study proposes an improved CNN algorithm for implementing an object classification learning model for simple, high-resolution images. The proposed method alters the adjacency matrix value of the max pooling operation in the CNN's pooling layer to reduce the creation time of the high-resolution image learning model. This study implemented learning models capable of processing 4, 8, and 12 MP high-resolution images for each altered matrix value. The performance evaluation showed that the creation time of the learning model implemented with the proposed algorithm decreased by 36.26% for 12 MP images. Compared with the conventional model, the proposed learning model's object recognition accuracy and loss rate differed by less than 1%, which is within the acceptable error range. Practical verification through future studies is necessary, using learning models with more varied image types and larger amounts of image data than used in this study.
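The abstract above shrinks high-resolution inputs by altering the max pooling step. As a generic sketch of why that cuts learning time, non-overlapping max pooling with window size `k` reduces each spatial dimension by a factor of `k`; the paper's exact adjacency-matrix modification is not reproduced here, and `k` is an assumed parameter:

```python
def max_pool(image, k):
    """Non-overlapping k x k max pooling over a 2-D list-of-lists image.

    A larger window k shrinks the feature map faster, cutting the
    computation done by every later layer.
    """
    h, w = len(image), len(image[0])
    return [[max(image[r + dr][c + dc]
                 for dr in range(k)
                 for dc in range(k))
             for c in range(0, w - k + 1, k)]
            for r in range(0, h - k + 1, k)]
```

Pooling a 4x4 map with `k=2` yields a 2x2 map; at 12 MP, each increase of the effective window divides the downstream workload quadratically, which is the lever the proposed algorithm tunes.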

Energy-Efficient Subpaging for the MRAM-based SSD File System (MRAM 기반 SSD 파일 시스템의 에너지 효율적 서브페이징)

  • Lee, JaeYoul;Han, Jae-Il;Kim, Young-Man
    • Journal of Information Technology Services
    • /
    • v.12 no.4
    • /
    • pp.369-380
    • /
    • 2013
  • The advent of state-of-the-art technologies such as cloud computing and big data processing stimulates the provision of various new IT services, which implies that more servers are required to support them. However, the need for more servers leads to more energy consumption, and the efficient use of energy in the computing environment becomes correspondingly more important. Next-generation nonvolatile RAM has many desirable features such as byte addressability, low access latency, high density, and low energy consumption. Many approaches adopt it, especially in file systems involving storage devices, but their focus lies on improving system performance, not on energy reduction. This paper suggests a novel approach to energy reduction in which an MRAM-based SSD is utilized as the storage device instead of a hard disk, and a downsized page is adopted instead of the 4KB page used in ordinary file systems. The simulation results show that the energy efficiency of the new approach is very effective when accessing a small number of bytes, up to 128 times better than that of NAND Flash memory.
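The subpaging saving in the abstract above follows from simple arithmetic: if the device transfers a whole page per request, shrinking the page shrinks the transferred bytes, and hence the energy, proportionally. An illustrative cost model, assuming a uniform per-byte cost (the function and unit cost are assumptions, not the paper's simulator):

```python
def page_energy(bytes_requested, page_size, energy_per_byte=1.0):
    """Energy to service a request when storage transfers whole pages:
    pages touched, times page size, times per-byte transfer cost."""
    pages_touched = -(-bytes_requested // page_size)  # ceiling division
    return pages_touched * page_size * energy_per_byte
```

Reading 32 bytes costs 4096 units with a 4KB page but only 512 with a 512-byte subpage, an 8x saving under this toy model; the paper's 128x figure for MRAM versus NAND Flash additionally reflects device-level per-byte costs.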

Managerial Factors Influencing Dose Reduction of the Nozzle Dam Installation and Removal Tasks Inside a Steam Generator Water Chamber (증기발생기 수실 노즐댐 설치 및 제거작업의 피폭선량 저감에 영향을 주는 관리요인에 관한 연구)

  • Lee, Dhong Ha
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.5
    • /
    • pp.559-568
    • /
    • 2017
  • Objective: The aim of this study is to investigate the managerial factors influencing dose reduction for the nozzle dam installation and removal tasks, which rank within the top 3 nuclear power plant maintenance jobs in terms of average collective dose. Background: The International Commission on Radiological Protection (ICRP) recommends reducing unnecessary dose and minimizing necessary dose for participants in maintenance jobs in radiation fields. Method: Seven sessions of nozzle dam installation and removal task logs yielded a multiple regression model with collective dose as the dependent variable and work time, number of participants, and space doses before and after shielding as independent variables. From the sessions in which a significant reduction in collective dose occurred, the effective managerial factors were elicited. Results: Work time was the most important factor contributing to collective dose reduction in the nozzle dam installation and removal task, and the introduction of new technology in nozzle dam design or maintenance is the most important factor for work time reduction. Conclusion: With extended task logs and big data processing techniques, a more accurate prediction model relating collective dose reduction to the effective managerial factors could be developed. Application: The effective managerial factors will be useful for reducing the collective dose of decommissioning tasks as well as regular preventive maintenance tasks at a nuclear power plant.
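The study above fits a multiple regression with collective dose as the dependent variable. A one-predictor least-squares sketch on hypothetical numbers shows the shape of such a model; the data, function name, and single-predictor simplification are all illustrative, while the paper's actual model is multivariate:

```python
def fit_simple_regression(x, y):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope).

    Here x might stand for work time and y for collective dose.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Hypothetical log data: longer work time, higher collective dose.
intercept, slope = fit_simple_regression([1, 2, 3], [3, 5, 7])
```

A positive fitted slope is what makes work time the lever for dose reduction: any managerial factor that shortens the task shifts the predicted collective dose down the regression line.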

OLAP4R: A Top-K Recommendation System for OLAP Sessions

  • Yuan, Youwei;Chen, Weixin;Han, Guangjie;Jia, Gangyong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.6
    • /
    • pp.2963-2978
    • /
    • 2017
  • The Top-K query currently plays a key role in a wide range of road-network, decision-making, and quantitative financial research. In this paper, a Top-K recommendation algorithm is proposed to solve the cold-start problem, and a tag generation method is put forward to enhance semantic understanding of OLAP sessions. In addition, a recommendation system for OLAP sessions called "OLAP4R" is designed using collaborative filtering, aiming to guide the user toward their ultimate goals through interactive queries. OLAP4R utilizes a mixed system architecture consisting of multiple functional modules, which have a high extension capability to support additional functions. This system structure allows the user to configure multi-dimensional hierarchies and desirable measures to analyze specific requirements, and it gives recommendations with prompt responses. Experimental results show that the method raises recall of the recommendations by 20% compared with traditional collaborative filtering, and a visualization tag of the recommended sessions is provided, with modified changes, for the user to understand.
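The core of a collaborative-filtering Top-K recommender like the one described above is ranking stored items by similarity to the active one. A minimal sketch using cosine similarity over session feature vectors; the vector encoding of a session and the function names are assumptions for illustration, not OLAP4R's implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u)) *
           math.sqrt(sum(b * b for b in v)))
    return num / den if den else 0.0

def top_k(sessions, target, k):
    """Return indices of the k stored session vectors most similar
    to the target session vector, highest similarity first."""
    ranked = sorted(range(len(sessions)),
                    key=lambda i: cosine(sessions[i], target),
                    reverse=True)
    return ranked[:k]
```

For a cold-start user, the target vector would come from the proposed tag generation rather than from the user's own (empty) history, which is the gap the paper's algorithm addresses.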

Optimal Buffer Allocation in Multi-Product Repairable Production Lines Based on Multi-State Reliability and Structural Complexity

  • Duan, Jianguo;Xie, Nan;Li, Lianhui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.4
    • /
    • pp.1579-1602
    • /
    • 2020
  • In the design of a production system, buffer capacity allocation is a major step. Through polymorphism analysis of production capacity and capability, this paper investigates a buffer allocation optimization problem for a multi-stage production line containing unreliable machines, which simultaneously maximizes the system's theoretical production rate and minimizes the system state entropy for a given amount of buffers. Stochastic process analysis is employed to establish Markov models for repairable modular machines. Considering the complex structure, an improved vector UGF (Universal Generating Function) technique and composition operators are introduced to construct the system model, and measures to assess the system's multi-state reliability and structural complexity are given. Based on the system theoretical production rate and the system state entropy, a mathematical model for buffer capacity optimization is built and optimized by a specific genetic algorithm. The feasibility and effectiveness of the proposed method are verified by application to an engine head production line.
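The optimization step described above searches over integer buffer allocations with a genetic algorithm. A toy evolutionary search over allocations that sum to a fixed budget, with a pluggable fitness function standing in for the paper's production-rate/entropy objective; everything here (operators, parameters, function names) is an illustrative assumption, not the paper's specific GA:

```python
import random

def allocate_buffers(n_stages, total, fitness, gens=200, pop=20, seed=0):
    """Toy genetic search for a buffer allocation (one non-negative
    integer per stage, summing to `total`) that maximizes `fitness`."""
    rng = random.Random(seed)

    def random_alloc():
        # Cut [0, total] at n_stages-1 random points; the gaps are buffers.
        cuts = sorted(rng.randint(0, total) for _ in range(n_stages - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [total])]

    def mutate(alloc):
        # Move one unit of buffer capacity between two random stages.
        child = alloc[:]
        i, j = rng.randrange(n_stages), rng.randrange(n_stages)
        if child[i] > 0:
            child[i] -= 1
            child[j] += 1
        return child

    population = [random_alloc() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        population = population[:pop // 2]            # selection
        population += [mutate(rng.choice(population)) # reproduction
                       for _ in range(pop - len(population))]
    return max(population, key=fitness)

# Example objective: prefer balanced allocations (minimize the max buffer).
best = allocate_buffers(4, 12, fitness=lambda a: -max(a))
```

The mutation operator conserves the total, so every candidate stays feasible; in the paper's setting the fitness would instead evaluate the UGF-based production rate and state entropy of the candidate line.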