• Title/Summary/Keyword: data processing technique


Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.1 / pp.726-734 / 2015
  • Relational databases, which manage data by imposing structure on it, are currently the most widely used technology for data management. In a relational database, however, service slows down as the volume of data grows because of constraints on the read and write operations used to store and query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configuration of hardware, CPU, memory, and network, to keep running smoothly. In this paper, to improve web information services that are slowing down as data accumulates in relational databases, we implement a model that extracts large amounts of data quickly and safely for users by sending the data to the Hadoop Distributed File System (HDFS), unifying and reconstructing it, and then processing the HDFS files. We applied the model to a web-based civil affairs system that stores image files, an irregular form of data. The proposed system processed data 0.4 seconds faster than a relational database system, showing that a Hadoop-based big data processing technique can support web information services that must handle large volumes of data, just as conventional relational databases do. Moreover, since Hadoop is open source, the model has the added advantage of reducing software costs. The proposed system is expected to serve as a model for web services that provide fast information processing in organizations whose conventional relational databases have grown too large to process efficiently.
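
The abstract does not include code; the following is a minimal Java sketch of the kind of step the model implies, namely writing data exported from the relational store into HDFS so it can later be processed as HDFS files. The NameNode address and paths are assumptions, not values from the paper.

```java
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsIngestSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical NameNode address; in practice this comes from core-site.xml.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000");

        try (FileSystem fs = FileSystem.get(conf);
             BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                     fs.create(new Path("/civil-affairs/ingest/records.txt"), true),
                     StandardCharsets.UTF_8))) {
            // Records exported from the relational store would be written here,
            // one line per record, so later HDFS/MapReduce processing can read them.
            out.write("record-id\tpayload");
            out.newLine();
        }
    }
}
```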

The application of shallow seismic reflection method for Chechon limestone area (제천 석회석 지역의 탄성파 반사법의 적용)

  • Suh, Beak-Soo;Lee, Duk-Jae
    • Journal of Industrial Technology / v.20 no.A / pp.303-309 / 2000
  • The seismic reflection method is applied to detect shallow limestone in the Chechon area. Data acquired with a hammer source are compared with data acquired with a weight drop: a small hammer and a weight drop are used as energy sources, and 100 Hz geophones are used for data acquisition. Data processing is carried out with "Geobit", seismic data processing software developed by KIGAM. The processing shows that the velocity of the topsoil layer is 1,250 m/sec, higher than in other areas because loaded trucks passing through the area have compacted the layer. For limestone areas, the hammer is proposed as the energy source instead of the weight drop because its energy propagates well through the layer.


End-mill Manufacturing and Developing of Processing Verification via Cutting Simulation (Cutting Simulation을 이용한 End-milling Cutter의 제작 및 가공 검증 기술 개발)

  • Kim J.H.;Kim J.H.;Ko T.J.;Park J.W.;Kim H.S.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2006.05a / pp.453-454 / 2006
  • This paper describes a machining verification technique for the development of end-milling cutters. The developed software is a verification module for cutter manufacturing. Using a cutting simulation method, we obtain the center points of the grinding wheel through Boolean operations between the grinding wheel and a cylindrical workpiece. The resulting cutter location (CL) data can be used to calculate NC data, after which the process can be simulated with the designed grinding machine and the NC data. The work has been implemented on a commercial CAD system using its API functions, so that the operator can evaluate the cutting simulation process and reduce design and manufacturing time.
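
The paper's CAD-API implementation is not reproduced in the abstract; purely as an illustration of what cutter location (CL) data looks like, the following Java sketch generates hypothetical CL points for a grinding-wheel center following a helical flute path around a cylindrical workpiece. All radii, angles, and step counts are assumed values.

```java
public class ClPointSketch {
    public static void main(String[] args) {
        // Assumed geometry (not from the paper).
        double workpieceRadius = 5.0;    // cylindrical workpiece radius, mm
        double wheelCenterOffset = 20.0; // distance of wheel center from cutter axis, mm
        double helixAngleDeg = 30.0;     // flute helix angle
        double fluteLength = 40.0;       // mm
        int steps = 10;

        double helixAngleRad = Math.toRadians(helixAngleDeg);
        for (int i = 0; i <= steps; i++) {
            double z = fluteLength * i / steps;
            // Rotation of the flute about the cutter axis implied by the helix lead.
            double theta = z * Math.tan(helixAngleRad) / workpieceRadius;
            double x = wheelCenterOffset * Math.cos(theta);
            double y = wheelCenterOffset * Math.sin(theta);
            // Each (x, y, z) would be one CL point handed on to NC-data generation.
            System.out.printf("CL %2d: x=%8.3f y=%8.3f z=%8.3f%n", i, x, y, z);
        }
    }
}
```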


Real-time and Parallel Semantic Translation Technique for Large-Scale Streaming Sensor Data in an IoT Environment (사물인터넷 환경에서 대용량 스트리밍 센서데이터의 실시간·병렬 시맨틱 변환 기법)

  • Kwon, SoonHyun;Park, Dongwan;Bang, Hyochan;Park, Youngtack
    • Journal of KIISE / v.42 no.1 / pp.54-67 / 2015
  • Nowadays, studies on the fusion of Semantic Web technologies are being carried out to promote the interoperability and value of sensor data in an IoT environment. To accomplish this, semantic translation of sensor data is essential for convergence with service domain knowledge. The existing semantic translation technique, however, translates static metadata into semantic data (RDF) and cannot properly handle the real-time, large-scale characteristics of an IoT environment. In this paper, we therefore propose a technique for translating the large-scale streaming sensor data generated in an IoT environment into semantic data using real-time, parallel processing. In this technique, we define rules for semantic translation and store them in a semantic repository. The sensor data is translated in real time with parallel processing using these pre-defined rules and an ontology-based semantic model. To improve performance, we use Apache Storm, a real-time big data analysis framework for parallel processing. The proposed technique was subjected to performance testing with AWS (automatic weather station) observation data from the Korea Meteorological Administration, which serve as large-scale streaming sensor data for demonstration purposes.
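
The topology itself is not given in the abstract; the following is a minimal, hypothetical Apache Storm topology in the same spirit, assuming the Storm 2.x Java API: a spout emits raw sensor readings and a bolt applies a single stub rule to produce RDF-style triples. Component names, the hard-coded rule, and the parallelism hint are assumptions, not the paper's design.

```java
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class SemanticTranslationTopologySketch {

    // Emits fake sensor readings; a real spout would read from a streaming source.
    public static class SensorSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;

        @Override
        public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            Utils.sleep(1000);
            collector.emit(new Values("aws-001", "temperature", 21.3));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sensorId", "property", "value"));
        }
    }

    // Applies a stub translation rule: sensor reading -> RDF-style triple string.
    public static class RdfTranslationBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            String subject = "ex:sensor/" + input.getStringByField("sensorId");
            String predicate = "ex:observes_" + input.getStringByField("property");
            String object = input.getDoubleByField("value").toString();
            collector.emit(new Values(subject + " " + predicate + " " + object + " ."));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("triple"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sensor-spout", new SensorSpout(), 1);
        // Parallelism hint of 4 stands in for the paper's parallel translation.
        builder.setBolt("rdf-bolt", new RdfTranslationBolt(), 4).shuffleGrouping("sensor-spout");

        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("semantic-translation-sketch", new Config(), builder.createTopology());
            Thread.sleep(10_000); // let the local cluster run briefly, then shut down
        }
    }
}
```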

The Technique for Gathering, Analyzing and Processing of Field Information (현장정보 수집 및 분석.처리기술)

  • Kim, Yong-Soo
    • Proceedings of the Korean Geotechnical Society Conference / 2007.09a / pp.431-439 / 2007
  • A data logger is generally needed to gather field information automatically (or manually) from various sensors. However, the data loggers currently used in the country are produced and developed overseas, and upgrading the related programs is difficult because of how those systems were developed and the limited number of users in the environments where they are used. On the other hand, several domestic companies have developed and sold their own data loggers based on their technical expertise, but enhancing and putting the related technology to practical use faces many difficulties because its feasibility, marketability, and demand are uncertain. The main purpose of this study is to provide basic information for selecting equipment suited to field characteristics by comparing and analyzing the information collection methods, operating programs, and data processing techniques of the data loggers used primarily in geotechnical engineering, both domestic and international.


Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.538-549 / 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data by ORB feature point detection and texture transformation, compressing the panoramic video data, and transmitting it as an RTSP-based video stream. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated with CUDA to handle complex processing such as camera calibration, stitching, blending, and encoding. Our experiment evaluated the frame rate (fps) of the transmitted 360-degree panoramic video. The results verified that the technique achieves at least 30 fps at 4K output resolution, indicating that it can both generate and transmit 360-degree panoramic video data in real time.
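
The stitching stage starts from ORB feature points; as a rough illustration of that single step (not the paper's CUDA-accelerated pipeline), the following Java sketch uses the OpenCV Java bindings to detect and match ORB features between two overlapping frames and estimate a homography. The file names are placeholders, and the code assumes the OpenCV native library is installed.

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.DMatch;
import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;

public class OrbHomographySketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // requires the OpenCV native library

        // Placeholder frames from two neighboring (virtual) cameras.
        Mat left = Imgcodecs.imread("frame_left.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat right = Imgcodecs.imread("frame_right.png", Imgcodecs.IMREAD_GRAYSCALE);

        ORB orb = ORB.create();
        MatOfKeyPoint kpLeft = new MatOfKeyPoint(), kpRight = new MatOfKeyPoint();
        Mat descLeft = new Mat(), descRight = new Mat();
        orb.detectAndCompute(left, new Mat(), kpLeft, descLeft);
        orb.detectAndCompute(right, new Mat(), kpRight, descRight);

        // Hamming-distance matcher for ORB's binary descriptors.
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descLeft, descRight, matches);

        // Collect matched point pairs.
        List<Point> ptsLeft = new ArrayList<>(), ptsRight = new ArrayList<>();
        KeyPoint[] kpL = kpLeft.toArray(), kpR = kpRight.toArray();
        for (DMatch m : matches.toArray()) {
            ptsLeft.add(kpL[m.queryIdx].pt);
            ptsRight.add(kpR[m.trainIdx].pt);
        }

        // Homography mapping the right frame onto the left frame's plane (RANSAC).
        Mat homography = Calib3d.findHomography(
                new MatOfPoint2f(ptsRight.toArray(new Point[0])),
                new MatOfPoint2f(ptsLeft.toArray(new Point[0])),
                Calib3d.RANSAC, 3.0);
        System.out.println("Estimated homography:\n" + homography.dump());
    }
}
```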

Hadoop Based Wavelet Histogram for Big Data in Cloud

  • Kim, Jeong-Joon
    • Journal of Information Processing Systems / v.13 no.4 / pp.668-676 / 2017
  • Recently, the importance of big data has been emphasized with the spread of smartphones and web/SNS services. As a result, MapReduce, which can process big data efficiently, has received worldwide attention for its excellent scalability and stability. Since big data has large volume, is generated quickly, and has varied properties, it is often more efficient to process summary information of the data than the data itself. The wavelet histogram, a typical technique for generating data summaries, can produce optimal summary information without losing the information in the original data, so systems applying MapReduce-based wavelet histogram generation have been actively studied. Existing research, however, has the disadvantage that generation is slow because the wavelet histogram is built through multiple MapReduce jobs, and the error of the data restored from the wavelet histogram can become large. The MapReduce-based wavelet histogram generation system developed in this paper builds the wavelet histogram in a single MapReduce job, so generation speed can be greatly increased. In addition, since the wavelet histogram is generated to satisfy an error bound specified by the user, the error of the data restored from it can be controlled. Finally, we verified the efficiency of the developed system through a performance evaluation.
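
The paper's single-job MapReduce algorithm is not shown in the abstract; to illustrate the underlying synopsis idea only, the following Java sketch computes an unnormalized Haar wavelet decomposition of a small frequency histogram and keeps the largest-magnitude coefficients. This is the textbook wavelet-synopsis construction, not the paper's error-bounded MapReduce method; the bucket counts and the number of retained coefficients are made up.

```java
import java.util.Arrays;

public class HaarWaveletHistogramSketch {
    /** Unnormalized Haar decomposition: overall average followed by detail coefficients. */
    static double[] haarDecompose(double[] data) {
        double[] coeff = Arrays.copyOf(data, data.length);
        for (int len = data.length; len > 1; len /= 2) {
            double[] tmp = Arrays.copyOf(coeff, coeff.length);
            for (int i = 0; i < len / 2; i++) {
                tmp[i] = (coeff[2 * i] + coeff[2 * i + 1]) / 2.0;           // average
                tmp[len / 2 + i] = (coeff[2 * i] - coeff[2 * i + 1]) / 2.0; // detail
            }
            coeff = tmp;
        }
        return coeff;
    }

    /** Inverse of haarDecompose, used to restore data from a thresholded synopsis. */
    static double[] haarReconstruct(double[] coeff) {
        double[] data = Arrays.copyOf(coeff, coeff.length);
        for (int len = 2; len <= data.length; len *= 2) {
            double[] tmp = Arrays.copyOf(data, data.length);
            for (int i = 0; i < len / 2; i++) {
                tmp[2 * i] = data[i] + data[len / 2 + i];
                tmp[2 * i + 1] = data[i] - data[len / 2 + i];
            }
            data = tmp;
        }
        return data;
    }

    public static void main(String[] args) {
        // Made-up bucket counts; length must be a power of two for this simple version.
        double[] histogram = {12, 10, 9, 11, 40, 38, 2, 3};
        double[] coeff = haarDecompose(histogram);

        // Keep only the k largest-magnitude coefficients (a crude synopsis; the paper
        // instead enforces a user-specified error bound).
        int k = 4;
        double[] synopsis = Arrays.copyOf(coeff, coeff.length);
        Integer[] order = new Integer[coeff.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (x, y) -> Double.compare(Math.abs(coeff[y]), Math.abs(coeff[x])));
        for (int i = k; i < order.length; i++) synopsis[order[i]] = 0.0;

        System.out.println("restored = " + Arrays.toString(haarReconstruct(synopsis)));
    }
}
```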

Defining and Processing XML View of Relational Data with Publication Functions of SQL/XML (SQL/XML의 출판 함수를 이용한 관계 데이터의 XML 뷰 정의 및 처리)

  • Lee, Sang-Wook;Kim, Jin;Kang, Hyun-Chul
    • Journal of Information Technology Applications and Management / v.16 no.4 / pp.245-261 / 2009
  • Since XML emerged as a standard for data exchange on the web, it has been widely used in applications such as e-Commerce, CRM, and BI. However, most business data is stored in relational database systems, and business data management is expected to remain centered on them. As such, techniques for viewing relational data as XML and processing XML queries against that view are required. To meet this need, the SQL/XML standard provides functions for publishing relational data as XML. In this paper, we propose techniques for defining an XML view of relational data with an SQL/XML statement and a DTD (Document Type Definition), and for processing XPath queries against the XML view by translating them into SQL/XML statements. We also describe the validation of these techniques through implementation and testing.
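
As a small, hypothetical example of the SQL/XML publication functions the abstract refers to, the following Java/JDBC sketch runs a statement built from the standard XMLELEMENT, XMLATTRIBUTES, XMLFOREST, and XMLAGG functions against an assumed department/employee schema. The table and column names, JDBC URL, and credentials are placeholders, and support for these functions varies by DBMS.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlXmlPublishSketch {
    public static void main(String[] args) throws Exception {
        // Standard SQL/XML publication functions; the schema is hypothetical.
        String sql =
            "SELECT XMLELEMENT(NAME \"department\", " +
            "         XMLATTRIBUTES(d.dept_name AS \"name\"), " +
            "         XMLAGG(XMLELEMENT(NAME \"employee\", " +
            "                  XMLFOREST(e.emp_name AS \"name\", e.salary AS \"salary\")))) " +
            "FROM department d JOIN employee e ON e.dept_id = d.dept_id " +
            "GROUP BY d.dept_name";

        // Placeholder JDBC URL and credentials.
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/demo", "user", "pw");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // one <department> element per row
            }
        }
    }
}
```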


3D Seismic Data Processing Methodology using Public Domain Software System (공유 소프트웨어 시스템을 이용한 3차원 탄성파 자료처리 방법론)

  • Ji, Jun;Choi, Yun-Gyeong
    • Geophysics and Geophysical Exploration / v.13 no.2 / pp.159-168 / 2010
  • The recent trend in petroleum and gas exploration is the application of 3D seismic exploration techniques. Unlike conventional 2D seismic data processing, 3D seismic data processing is usually considered to require expensive commercial software systems and high-performance computers. This paper proposes a practical 3D seismic processing methodology for a personal computer using public domain software such as SU, SEPlib, and SEPlib3D. The applicability of the proposed method is demonstrated by its successful application to a well-known, realistic 3D synthetic dataset, the SEG/EAGE 3D salt model data.
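
The abstract names SU (Seismic Unix) among the public domain tools; as a rough, hypothetical illustration of driving such a processing flow from code, the following Java sketch shells out to an SU pipeline that windows a CDP range, applies NMO, and stacks. The program and parameter names (suwind, sunmo, sustack) are recalled from Seismic Unix and should be checked against a local installation; file names and velocities are placeholders, not values from the paper.

```java
import java.io.File;
import java.io.IOException;

public class SuPipelineSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Illustrative Seismic Unix pipeline: window a CDP range, apply NMO, stack.
        // Parameter values (CDP range, tnmo/vnmo pairs) are placeholders.
        String pipeline =
            "suwind key=cdp min=100 max=200 < shots.su " +
            "| sunmo tnmo=0.0,2.0 vnmo=1500,2500 " +
            "| sustack key=cdp > stack.su";

        ProcessBuilder pb = new ProcessBuilder("/bin/sh", "-c", pipeline);
        pb.directory(new File("."));  // run in the current data directory
        pb.inheritIO();               // show SU diagnostics on the console
        int exit = pb.start().waitFor();
        System.out.println("SU pipeline finished with exit code " + exit);
    }
}
```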

Extended Storage Management System for Spatial Data Processing (공간데이타 처리를 위한 확장된 저장관리시스템)

  • Kim, Jae-Hong;Bae, Hae-Young
    • Spatial Information Research / v.1 no.1 / pp.7-16 / 1993
  • As computer technology develops, our requirements are shifting from simple alphanumeric processing to graphic, image, and spatial data processing, which are easier for users to understand and use. A geographic information system is a kind of spatial database system that can not only print data in the form of maps but also manipulate, store, retrieve, and analyze geographic data; it is an efficient system for processing spatial data that have geographic features together with their related attribute data. Conventional relational database management systems are not suitable for spatial data processing, so a spatial database management system designed for efficient spatial data processing is needed. In this paper, we design an extended storage management system that supports a spatial index technique, allowing users to access data quickly and to store and manage enormous volumes of spatial data efficiently, as in a geographic information system.
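
The abstract names a spatial index as the mechanism for fast access without describing it; as a generic illustration only (not the paper's storage manager), the following Java sketch implements a uniform grid index over 2D points with a bounding-box range query, one of the simplest spatial indexing schemes. The cell size and sample points are arbitrary.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GridSpatialIndexSketch {
    record Point(double x, double y) {}

    private final double cellSize;
    private final Map<Long, List<Point>> cells = new HashMap<>();

    GridSpatialIndexSketch(double cellSize) { this.cellSize = cellSize; }

    private long cellKey(double x, double y) {
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        return cx * 1_000_003L + cy; // simple key combining the two cell coordinates
    }

    void insert(Point p) {
        cells.computeIfAbsent(cellKey(p.x(), p.y()), k -> new ArrayList<>()).add(p);
    }

    /** Returns the points inside the axis-aligned box [minX,maxX] x [minY,maxY]. */
    List<Point> rangeQuery(double minX, double minY, double maxX, double maxY) {
        List<Point> result = new ArrayList<>();
        for (long cx = (long) Math.floor(minX / cellSize); cx <= (long) Math.floor(maxX / cellSize); cx++) {
            for (long cy = (long) Math.floor(minY / cellSize); cy <= (long) Math.floor(maxY / cellSize); cy++) {
                for (Point p : cells.getOrDefault(cx * 1_000_003L + cy, List.of())) {
                    if (p.x() >= minX && p.x() <= maxX && p.y() >= minY && p.y() <= maxY) {
                        result.add(p); // candidate from the cell passes the exact check
                    }
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        GridSpatialIndexSketch index = new GridSpatialIndexSketch(10.0);
        index.insert(new Point(3.5, 7.2));
        index.insert(new Point(25.0, 14.1));
        index.insert(new Point(8.9, 9.9));
        System.out.println(index.rangeQuery(0, 0, 10, 10)); // only the points in the 10x10 box
    }
}
```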
