• Title/Summary/Keyword: parallel data collection

Search Results: 55

Optimization of Fuzzy Set Fuzzy Model by Means of Hierarchical Fair Competition-based Genetic Algorithm using UNDX operator (UNDX연산자를 이용한 계층적 공정 경쟁 유전자 알고리즘을 이용한 퍼지집합 퍼지 모델의 최적화)

  • Kim, Gil-Sung; Choi, Jeoung-Nae; Oh, Sung-Kwun
    • Proceedings of the KIEE Conference / 2007.04a / pp.204-206 / 2007
  • In this study, we introduce an optimization method for fuzzy inference systems based on a Hierarchical Fair Competition-based parallel Genetic Algorithm (HFCGA) and information data granulation. The granulation is realized with the aid of Hard C-means (HCM) clustering, while HFCGA, a multi-population variant of Parallel Genetic Algorithms (PGA), is used for structure optimization and parameter identification of the fuzzy model. The design concerns fuzzy model-related parameters such as the number of input variables to be used, the specific subset of input variables, the number of membership functions, the order of the polynomial, and the apexes of the membership functions. In the optimization process, two general mechanisms are explored: structural optimization is realized via HFCGA and the HCM method, whereas parametric optimization proceeds with a standard least-squares method as well as HFCGA. A comparative analysis demonstrates that the proposed algorithm is superior to conventional methods. In particular, for parameter identification we use the UNDX operator, which takes multiple parents and generates offspring around the center of mass of these parents.
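As a rough illustration of the UNDX idea described above (offspring sampled around the center of mass of several parents), the sketch below uses a simplified perturbation. The function name and the scaling by parent spread are illustrative assumptions; the full UNDX decomposes the perturbation into a primary search component and orthogonal components.

```python
import random

def undx_like_crossover(parents, sigma=0.5):
    """Simplified UNDX-style crossover: sample a child around the
    center of mass of multiple parent vectors. (Hypothetical
    simplification of the real UNDX operator.)"""
    dim = len(parents[0])
    # center of mass of the parent vectors
    centroid = [sum(p[i] for p in parents) / len(parents) for i in range(dim)]
    child = []
    for i in range(dim):
        # perturb each coordinate with noise scaled by the parents' spread
        spread = max(p[i] for p in parents) - min(p[i] for p in parents)
        child.append(centroid[i] + random.gauss(0.0, sigma) * spread)
    return child
```

With `sigma = 0` the child collapses onto the center of mass, which is the anchor point the operator perturbs around.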


Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis (도로 주행환경 분석을 위한 빅데이터 플랫폼 구축 정보기술 인프라 개발)

  • Jung, In-taek; Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.3 / pp.669-678 / 2018
  • This study developed information technology infrastructures for a driving-environment analysis platform using various big data, such as vehicle sensing data and public data. First, on the hardware side, a small platform server with a parallel structure for distributed big data processing was developed. Next, on the software side, programs for big data collection/storage, processing/analysis, and information visualization were developed. The collection software was implemented as a collection interface using Kafka, Flume, and Sqoop. The storage software was divided into the Hadoop distributed file system and Cassandra DB according to how the data are utilized. The processing software performs spatial-unit matching and time-interval interpolation/aggregation of the collected data by applying the grid-index method. The analysis software was developed as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization software was developed as a Web GIS engine program for providing and visualizing various driving-environment information. In the performance evaluation, the optimal number of executors, memory capacity, and number of cores for the development server were derived, and the computation performance was superior to that of other cloud computing services.
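The grid-index idea behind the spatial-unit matching and aggregation step can be sketched as follows; the origin, cell size, and point format are hypothetical, not the platform's actual parameters:

```python
from collections import defaultdict

def grid_cell(lat, lon, origin=(33.0, 124.0), cell_deg=0.001):
    """Assign a point to a (row, col) grid cell relative to an
    assumed origin; cell_deg is a hypothetical cell size."""
    row = int((lat - origin[0]) / cell_deg)
    col = int((lon - origin[1]) / cell_deg)
    return row, col

def aggregate_by_cell(points):
    """Average the sensed values of all points falling in each cell,
    mimicking the spatial-unit aggregation of collected data."""
    cells = defaultdict(list)
    for lat, lon, value in points:
        cells[grid_cell(lat, lon)].append(value)
    return {cell: sum(v) / len(v) for cell, v in cells.items()}
```

Indexing points by cell lets aggregation run independently per cell, which suits the platform's distributed processing structure.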

An Adaptive Workflow Scheduling Scheme Based on an Estimated Data Processing Rate for Next Generation Sequencing in Cloud Computing

  • Kim, Byungsang; Youn, Chan-Hyun; Park, Yong-Sung; Lee, Yonggyu; Choi, Wan
    • Journal of Information Processing Systems / v.8 no.4 / pp.555-566 / 2012
  • The cloud environment makes it possible to analyze large data sets on a scalable computing infrastructure. In the bioinformatics field, applications are composed of complex workflow tasks that require huge data storage as well as a compute-intensive parallel workload. Many distributed solutions have been introduced, but they focus on static resource provisioning with a batch-processing scheme in a local computing farm and data storage. For a large-scale workflow system, it is inevitable and valuable to outsource all or part of its tasks to public clouds to reduce resource costs. Problems arise, however, from the transfer time of huge datasets and from the unbalanced completion times of different problem sizes. In this paper, we propose an adaptive resource-provisioning scheme that includes run-time data distribution and collection services for hiding the data transfer time. The proposed scheme optimizes the allocation ratio of computing elements to the different datasets in order to minimize the total makespan under resource constraints. We conducted experiments with a well-known sequence alignment algorithm, and the results show that the proposed scheme is efficient for the cloud environment.
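A minimal sketch of the allocation-ratio idea, assuming completion time is proportional to dataset size divided by assigned workers; the paper's scheme additionally estimates data-processing rates at run time, which this sketch omits:

```python
def allocate(sizes, total_workers):
    """Allocate computing elements to datasets roughly in proportion
    to dataset size, so per-dataset completion times (and hence the
    makespan) are balanced. Illustrative sketch only."""
    total = sum(sizes)
    alloc = [max(1, round(total_workers * s / total)) for s in sizes]
    # repair rounding so the allocations sum to the available workers
    while sum(alloc) > total_workers:
        over = max((k for k in range(len(alloc)) if alloc[k] > 1),
                   key=lambda k: alloc[k] / sizes[k])
        alloc[over] -= 1
    while sum(alloc) < total_workers:
        under = max(range(len(alloc)), key=lambda k: sizes[k] / alloc[k])
        alloc[under] += 1
    return alloc
```

Balancing size/workers across datasets equalizes per-dataset finish times, which is what minimizes the overall makespan under a fixed worker budget.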

A Design of the OOPP(Optimized Online Portfolio Platform) using Enterprise Competency Information (기업 직무 정보를 활용한 OOPP(Optimized Online Portfolio Platform)설계)

  • Jung, Bogeun; Park, Jinuk; Lee, ByungKwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.493-506 / 2018
  • This paper proposes the OOPP (Optimized Online Portfolio Platform) design, which lets job seekers search for the job competencies necessary for employment and write and manage portfolios online efficiently. The OOPP consists of three modules. First, the JDCM (Job Data Collection Module) stores the help-wanted advertisements of job information sites in a spreadsheet. Second, the CSM (Competency Statistical Model) classifies core competencies for each job by text-mining the collected help-wanted ads. Third, the OBBM (Optimize Browser Behavior Module) enables users to look up data rapidly by improving the processing speed of the browser. The OBBM in turn consists of the PSES (Parallel Search Engine Sub-Module), which optimizes the computation of a search engine, and the OILS (Optimized Image Loading Sub-Module), which optimizes the loading of image text, etc. The performance analysis of the CSM shows little difference in accuracy between the CSM and the actual advertisements, as its data accuracy is 99.4~100%. When browser optimization is done using the OBBM, working time is reduced by about 68.37%. Therefore, the OOPP lets users look up the analyzed results in a web page rapidly by accurately analyzing the help-wanted ads of job information sites.
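A toy sketch of the CSM's text-mining step, assuming simple term-frequency counting over collected ad texts; the tokenizer and stop-word list are illustrative, not the module's actual method:

```python
import re
from collections import Counter

def core_competencies(ads, top_n=3):
    """Count skill terms across collected help-wanted ads and keep
    the most frequent ones as 'core competencies' for the job.
    (Toy version of the statistical classification in the CSM.)"""
    counts = Counter()
    for ad in ads:
        # crude tokenizer; '+' and '#' kept for terms like c++ / c#
        counts.update(re.findall(r"[a-zA-Z+#]+", ad.lower()))
    stop = {"and", "with", "of", "in", "for", "experience"}
    return [w for w, _ in counts.most_common() if w not in stop][:top_n]
```

Ranking by frequency across many ads is what lets recurring skill terms surface as the core competencies for a job category.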

Smart City Governance Logic Model Converging Hub-and-spoke Data Management and Blockchain Technology (허브 앤 스포크형 데이터 관리 및 블록체인 기술 융합 스마트도시 거버넌스 로직모델)

  • Choi, Sung-Jin
    • Journal of KIBIM / v.14 no.1 / pp.30-38 / 2024
  • This study proposes a smart city governance logic model that can accommodate more diverse information service systems by combining hub-and-spoke and blockchain technologies as a data management model. Specifically, the research focuses on deriving the logic of an operating system that can work across smart city planning based on the two data governance technologies. The first step of the logic is the generation and collection of information: information is divided into information that requires protection and information that can be shared with the public; the information requiring privacy is placed on a blockchain, while the shared information is integrated and aggregated in a data hub. The next step is the processing and use of the information, which can actively use blockchain technology; for the shareable information other than the protected information, the governance logic is built in parallel in the hub-and-spoke form. Next is the logic of the distribution stage, where the key is to establish a service contact point between service providers and beneficiaries. This study also proposes establishing one-to-one data exchange relationships among information providers, information consumers, and information processors. Finally, to expand and promote citizen participation through a reasonable compensation system in the operation of smart cities, we developed a virtual currency as a local currency and designed an open operation logic for the local virtual currency that can serve as compensation for information.
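The handling of protected information on a blockchain can be illustrated with a minimal hash chain; this is a conceptual sketch of tamper-evident record linking only, not the governance model's actual implementation:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a privacy-sensitive record to a minimal hash chain:
    each block stores the previous block's hash, so altering any
    earlier record breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"record": record, "prev": prev_hash}
    block = dict(payload)
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every block's hash and link to detect tampering."""
    for i, b in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        expect = hashlib.sha256(
            json.dumps({"record": b["record"], "prev": prev},
                       sort_keys=True).encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != expect:
            return False
    return True
```

The tamper-evidence is the property that motivates routing privacy-sensitive information through the blockchain side of the model while shareable information flows through the hub.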

Dilemma of Saudi Arabian Construction Industry

  • Albogamy, Abdullah; Scott, Darren; Dawood, Nashwan
    • Journal of Construction Engineering and Project Management / v.3 no.4 / pp.35-40 / 2013
  • Currently, the Kingdom of Saudi Arabia (KSA) is an epicentre of building services engineering and the construction industry. With the rise of technological advancements, engineers can thoroughly investigate engineering aspects; other stakeholders, tender-related personnel, and financial analysts work in parallel as well. However, some factors are stumbling blocks to progress, including delay factors in the construction industry. The paper provides deep insights into the delay factors affecting public building projects in the KSA. Primary data were collected through a survey comprising 63 chief delay factors. Construction-industry professionals were asked to rank the factors in terms of their frequency of occurrence and degree of impact. Seven groups of risk factors were categorized, and a correlation analysis was performed to identify the correlations among the variables. Finally, 31 leading delay factors were extracted and reported.
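One common way to combine the two survey dimensions (frequency of occurrence and degree of impact) into a single ranking is an importance index equal to their product; the sketch below assumes that formulation, which the abstract itself does not specify:

```python
def importance_index(frequency, impact):
    """Combine a factor's frequency-of-occurrence and degree-of-impact
    scores (0-1 scale) into one importance score. Assumed formulation."""
    return frequency * impact

def rank_factors(factors):
    """factors: {name: (mean_frequency, mean_impact)} from survey
    responses; returns factor names, most important first."""
    return sorted(factors,
                  key=lambda f: importance_index(*factors[f]),
                  reverse=True)
```

Multiplying the two scores penalizes factors that are frequent but mild, or severe but rare, relative to factors that are both.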

KOMPSAT EOC Grid Reference System

  • Kim, Youn-Soo; Kim, Yong-Seung; Benton, William
    • Proceedings of the KSRS Conference / 1998.09a / pp.349-354 / 1998
  • The grid reference system (GRS) has been useful for identifying the geographical location of satellite images. In this study we derive a GRS for the KOMPSAT Electro-Optical Camera (EOC) images. The derivation substantially follows the way SPOT defines its GRS, but incorporates the KOMPSAT orbital characteristics. The KOMPSAT EOC GRS (KEGRS) is designed as a (K, J) coordinate system. The K coordinate, parallel to the KOMPSAT ground track, denotes the relative longitudinal position, and the J coordinate represents the relative latitudinal position. The numbering of K begins at the prime meridian with K = 1 and increases eastward, while the numbering of J uses a fixed value of J = 500 at all center points on the equator and increases northward. The lateral and vertical grid intervals are set to about 12.5 km at 38° latitude to allow some margin for value-added processing. These design factors are being implemented in a satellite programming module of the KOMPSAT Receiving and Processing System (KRPS) to facilitate EOC data collection planning over the Korean peninsula.
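From the stated design (J = 500 at the equator, increasing northward in 12.5 km steps), the J index can be approximated as below; the spherical-earth conversion of about 111.32 km per degree of latitude is an illustrative assumption, not the operational KRPS computation:

```python
def j_coordinate(lat_deg, interval_km=12.5):
    """Approximate the KEGRS J index from latitude: J = 500 on the
    equator, increasing northward in fixed vertical intervals.
    Spherical-earth approximation for illustration only."""
    north_km = lat_deg * 111.32   # ~km per degree of latitude
    return 500 + round(north_km / interval_km)
```

Under this approximation the Korean peninsula around 38° latitude falls near J = 838, several hundred grid rows north of the equatorial J = 500.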


Fast 3D reconstruction method based on UAV photography

  • Wang, Jiang-An; Ma, Huang-Te; Wang, Chun-Mei; He, Yong-Jie
    • ETRI Journal / v.40 no.6 / pp.788-793 / 2018
  • 3D reconstruction of urban architecture, land, and roads is an important part of building a "digital city." Unmanned aerial vehicles (UAVs) are gradually replacing other platforms, such as satellites and aircraft, in geographical image collection, not only because of lower cost and higher efficiency, but also because of higher data accuracy and a larger amount of obtained information. Recent 3D reconstruction algorithms have a high degree of automation, but their computation time is long and the reconstructed models may have many voids. This paper decomposes the object into multiple regions that are reconstructed in parallel, using the clustering principle, to reduce computation time and improve model quality. It also proposes detecting planar areas at low resolution and then reducing the number of point clouds in complex areas.
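The clustering-based decomposition into regions that can be reconstructed in parallel might be sketched with plain k-means on 2-D point positions; the algorithm choice and parameters are assumptions, since the abstract does not name a specific clustering method:

```python
import random

def kmeans_partition(points, k, iters=20, seed=0):
    """Split a 2-D point set into k spatial regions, each of which
    could then be reconstructed independently (in parallel).
    Plain k-means as a stand-in for the paper's clustering step."""
    rng = random.Random(seed)
    centers = list(rng.sample(points, k))
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to the nearest center
        labels = [min(range(k), key=lambda c:
                      (p[0] - centers[c][0]) ** 2 +
                      (p[1] - centers[c][1]) ** 2)
                  for p in points]
        # move each center to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return labels
```

Each resulting region is compact, so per-region reconstructions overlap little and can run on separate workers.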

NREH: Upper Extremity Rehabilitation Robot for Various Exercises and Data Collection at Home (NREH: 다양한 운동과 데이터 수집이 가능한 가정용 상지재활로봇)

  • Jun-Yong Song; Seong-Hoon Lee; Won-Kyung Song
    • The Journal of Korea Robotics Society / v.18 no.4 / pp.376-384 / 2023
  • In this paper, we introduce an upper extremity rehabilitation robot, NREH (NRC End-effector based Rehabilitation arm at Home). Through NREH, stroke survivors can continuously exercise their upper extremities at home by holding the handle of the end-effector of the robot arm. NREH is an end-effector-based robot that moves the arm on a two-dimensional plane, but its tilt angle can be adjusted to mimic movement in three-dimensional space. Depending on the tilt angle, customized exercises can be performed that adjust the difficulty for each user. The user can sit facing the robot and perform exercises such as arm reaching; sitting 90 degrees sideways, the user can also exercise the arm on a plane parallel to the sagittal plane. NREH was designed to be as simple as possible considering its use at home. By applying error augmentation, the exercise effect can be increased, and assistance or resistance force can be applied as needed. Using encoders on the two actuators and a force/torque sensor on the end-effector, NREH can continuously collect and analyze the user's movement data.

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei; Zou, Shuanghui; Tian, Yifei; Sun, Su; Fong, Simon; Cho, Kyungeun; Qiu, Lvyang
    • Journal of Information Processing Systems / v.14 no.6 / pp.1445-1456 / 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks provide an unmanned ground vehicle (UGV) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and color particle models are used to reconstruct the ground surface and objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the applied computer graphics and image processing algorithms in parallel.
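The connected component labeling step that groups non-ground points into obstacle clusters can be sketched on an occupancy grid with 4-connected breadth-first search; the grid-cell input format is an assumption for illustration:

```python
from collections import deque

def label_components(cells):
    """Group occupied grid cells (after ground removal) into
    4-connected components, i.e. the obstacle clusters, using
    a standard BFS-based connected component labeling scheme."""
    cells = set(cells)
    labels, next_label = {}, 0
    for start in cells:
        if start in labels:
            continue
        # flood-fill one component from an unlabeled seed cell
        queue = deque([start])
        labels[start] = next_label
        while queue:
            x, y = queue.popleft()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (nx, ny) in cells and (nx, ny) not in labels:
                    labels[(nx, ny)] = next_label
                    queue.append((nx, ny))
        next_label += 1
    return labels
```

Each label then corresponds to one obstacle in the driving-awareness output, with the remaining cells forming the traversable ground surface.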