• Title/Summary/Keyword: Big data Processing


Design Thinking Methodology for Social Innovation using Big Data and Qualitative Research (사회혁신분야에서 근거이론 기반 질적연구와 빅데이터 분석을 활용한 디자인 씽킹 방법론)

  • Park, Sang Hyeok;Oh, Seung Hee;Park, Soon Hwa
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.13 no.4 / pp.169-181 / 2018
  • Under constantly intensifying global competition, many companies are exploring new business opportunities in the field of social innovation through creating shared value. In social innovation, the key starting point is to clarify the problem to be solved and to identify its cause. Among the many problem-solving methodologies, design thinking has recently attracted the most attention across various fields. Design thinking is a creative problem-solving method, used as a business innovation tool, that empathizes with human needs and uncovers latent desires the public is not aware of; it is also actively used as a tool for solving social problems. However, one difficulty many design thinking project participants experience is analyzing the observed data efficiently. When data are analyzed only offline, analyzing a large amount of data takes a long time, and processing unstructured data is limited. This makes it difficult to find the fundamental problem in the data collected through observation during design thinking. The purpose of this study is to integrate qualitative and quantitative data analysis methods so that analysis of the data collected at the observation stage of a design thinking project for social innovation becomes more scientific, complementing this limitation of the design thinking process. The integrated methodology presented in this study is expected to contribute to innovation performance by providing practical guidelines and implications for design thinking practitioners as a valuable tool for social innovation.

A Study on the Performance Measurement and Analysis on the Virtual Memory based FTL Policy through the Changing Map Data Resource (멥 데이터 자원 변화를 통한 가상 메모리 기반 FTL 정책의 성능 측정 및 분석 연구)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence / v.9 no.1 / pp.71-76 / 2023
  • Recently, to store and manage big data, research and development on high-performance storage systems capable of stable access to large data have been conducted actively. In particular, storage systems in data centers and enterprise environments use large numbers of SSDs (solid state drives) to manage large amounts of data. In general, an SSD uses an FTL (flash translation layer) to hide the characteristics of the underlying NAND flash memory and to manage data efficiently. However, FTL algorithms face a limitation: as SSD capacity grows, more DRAM is needed to manage the location information of the NAND pages where data are stored. Therefore, this paper introduces an FTL policy that applies virtual memory to reduce the DRAM resources used by the FTL. The virtual-memory-based FTL policy proposed in this paper manages the map data with an LRU (least recently used) policy, loading the mapping information of recently used data into the DRAM space and storing previously used information in NAND. Finally, experiments measure and analyze the performance and resource usage of the virtual-memory-based FTL and a conventional FTL during data write processing.
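The map-caching idea in the abstract can be sketched in a few lines: keep only recently used logical-to-physical map entries in DRAM, evict the least recently used entry back to NAND when the DRAM budget is exceeded. This is a minimal illustration of the described policy, not the paper's implementation; class and field names are assumptions.

```python
from collections import OrderedDict

class MapCache:
    """Minimal sketch of a virtual-memory-style FTL map cache: only recently
    used logical-to-physical (LPN -> PPN) map entries stay in DRAM; the rest
    are written back to NAND (simulated here by a plain dict)."""
    def __init__(self, capacity):
        self.capacity = capacity   # DRAM budget, in map entries
        self.dram = OrderedDict()  # cached map entries, ordered by recency
        self.nand_map = {}         # full map persisted in NAND (simulated)
        self.evictions = 0

    def translate(self, lpn):
        if lpn in self.dram:               # DRAM hit: refresh recency
            self.dram.move_to_end(lpn)
            return self.dram[lpn]
        ppn = self.nand_map.get(lpn)       # miss: fetch map entry from NAND
        self._insert(lpn, ppn)
        return ppn

    def update(self, lpn, ppn):
        self.nand_map[lpn] = ppn
        self._insert(lpn, ppn)

    def _insert(self, lpn, ppn):
        self.dram[lpn] = ppn
        self.dram.move_to_end(lpn)
        if len(self.dram) > self.capacity:        # evict LRU entry
            victim, victim_ppn = self.dram.popitem(last=False)
            self.nand_map[victim] = victim_ppn    # write back to NAND
            self.evictions += 1
```

The trade-off the paper measures follows directly from this structure: a smaller `capacity` saves DRAM but raises the miss rate, and every miss costs an extra NAND read for the map entry.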

Improving the I/O Performance of Disk-Based Graph Engine by Graph Ordering (디스크 기반 그래프 엔진의 입출력 성능 향상을 위한 그래프 오더링)

  • Lim, Keunhak;Kim, Junghyun;Lee, Eunjae;Seo, Jiwon
    • KIISE Transactions on Computing Practices / v.24 no.1 / pp.40-45 / 2018
  • With the advent of big data and social networks, large-scale graph processing has become a popular research topic. Recently, an optimization technique called Gorder was proposed to improve the performance of in-memory graph processing. It improves performance by optimizing the in-memory graph layout for better cache locality. However, since it is designed for in-memory graph processing systems, the technique is not suitable for disk-based graph engines, and the cost of applying it is significantly high. To solve this problem, we propose a new graph ordering called I/O Order. I/O Order considers the characteristics of I/O accesses on SSDs and HDDs to improve the performance of disk-based graph engines. In addition, the algorithmic complexity of I/O Order is low compared to Gorder, so it is cheaper to apply. I/O Order reduces the pre-processing cost by up to 9.6 times compared to Gorder, while its performance is 2 times higher than random ordering on low-locality graph algorithms.
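The general idea behind graph ordering is to relabel vertices so that vertices accessed together get nearby IDs, turning scattered accesses into sequential ones. The abstract does not give I/O Order's exact criterion, so the sketch below uses a simple BFS relabeling only to illustrate the relabeling concept, not the paper's algorithm.

```python
from collections import deque

def bfs_order(neighbors):
    """Relabel vertices in BFS order so that neighboring vertices receive
    nearby IDs, improving locality of edge-array accesses. A generic
    illustration of graph ordering, not the paper's I/O Order algorithm."""
    n = len(neighbors)
    new_id = [-1] * n
    nxt = 0
    for s in range(n):                 # cover disconnected components
        if new_id[s] != -1:
            continue
        new_id[s] = nxt; nxt += 1
        q = deque([s])
        while q:
            u = q.popleft()
            for v in neighbors[u]:
                if new_id[v] == -1:    # assign IDs in visit order
                    new_id[v] = nxt; nxt += 1
                    q.append(v)
    return new_id                      # new_id[old_label] = new_label
```

An I/O-aware ordering such as the paper's would additionally account for the block granularity of SSD/HDD reads, grouping vertices so that one disk block serves many edge accesses.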

A Query Preprocessing Tool for Performance Improvement in Complex Event Stream Query Processing (복합 이벤트 스트림 질의 처리 성능 개선을 위한 질의 전처리 도구)

  • Choi, Joong-Hyun;Cho, Eun-Sun;Lee, Kang-Woo
    • KIISE Transactions on Computing Practices / v.21 no.8 / pp.513-523 / 2015
  • Complex event processing systems, which are becoming useful in real-life domains, efficiently process streams of continuous events such as sensor data from IoT systems. However, these systems still handle some types of queries poorly, so programmers must be careful; for instance, they provide no detailed guidance for choosing the efficient form among queries with almost identical meaning. In this paper, we propose a query preprocessing tool for event stream processing systems that helps programmers by giving them hints to improve performance whenever their queries fall into formats known to be bad for performance. We expect the proposed module to significantly increase productivity when writing programs whose debugging, testing, and performance tuning are not straightforward.
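A preprocessing tool of this kind can be pictured as a rule set: each rule matches a query pattern known to perform badly and emits a hint. The rules below are purely illustrative assumptions (loosely modeled on EPL-style stream queries), not the paper's actual rule set.

```python
import re

# Hypothetical rule set: each entry maps a pattern over the query text to a
# performance hint. Real CEP engines have their own cost models; these
# patterns only illustrate the hint-generation idea.
RULES = [
    (re.compile(r"select\s+\*", re.I),
     "Project only the needed columns instead of SELECT *."),
    (re.compile(r"\.win:keepall\(\)", re.I),
     "Unbounded window: prefer a length or time window."),
    (re.compile(r"where\s+.*\bor\b", re.I),
     "OR in WHERE may prevent filter indexing; consider splitting the query."),
]

def preprocess(query):
    """Return performance hints for a stream query (empty list = no issues)."""
    return [hint for pattern, hint in RULES if pattern.search(query)]
```

Running the checker before query registration lets a programmer rewrite an inefficient form into an equivalent efficient one without profiling the running system first.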

Fashion attribute-based mixed reality visualization service (패션 속성기반 혼합현실 시각화 서비스)

  • Yoo, Yongmin;Lee, Kyounguk;Kim, Kyungsun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.2-5 / 2022
  • With the advent of deep learning and the rapid development of ICT (Information and Communication Technology), research using artificial intelligence is being actively conducted in various fields of society, such as politics, economy, and culture. Deep-learning-based artificial intelligence technology is subdivided into domains such as natural language processing, image processing, speech processing, and recommendation systems. In particular, as industries advance, recommendation systems that analyze market trends and individual characteristics and make recommendations to consumers are increasingly required. In line with these technological developments, this paper extracts and classifies attribute information from structured and unstructured text and image big data through deep-learning-based 'language processing intelligence' and 'image processing intelligence'. We propose an integrated artificial-intelligence-based 'customized fashion advisor' service system that analyzes trends and new materials, discovers 'market-consumer' insights through consumer taste analysis, and can recommend styles, virtual fitting, and design support.


A Research in Applying Big Data and Artificial Intelligence on Defense Metadata using Multi Repository Meta-Data Management (MRMM) (국방 빅데이터/인공지능 활성화를 위한 다중메타데이터 저장소 관리시스템(MRMM) 기술 연구)

  • Shin, Philip Wootaek;Lee, Jinhee;Kim, Jeongwoo;Shin, Dongsun;Lee, Youngsang;Hwang, Seung Ho
    • Journal of Internet Computing and Services / v.21 no.1 / pp.169-178 / 2020
  • Reductions in troops and human resources, together with the need to improve combat power, have led the Korean Department of Defense to actively adopt 4th Industrial Revolution technologies (artificial intelligence, big data). Defense information systems have been developed in various ways according to the tasks and particularities of each military branch. To take full advantage of 4th Industrial Revolution technologies, the closed defense data management system must be improved. However, establishing and using data standards across all information systems for defense big data and artificial intelligence is limited by security issues, the business characteristics of each branch, and the difficulty of standardizing large-scale systems. Based on the interworking requirements of each system, data sharing is limited to direct linkage through interoperability agreements between systems. To implement smart defense using 4th Industrial Revolution technologies, a system that can share defense data and make good use of it is urgently needed. To support this technically, it is critical to develop Multi Repository Meta-Data Management (MRMM), which supports systematic standard management of defense data, manages the enterprise standard and the standard mappings for each system, and promotes data interoperability through linkage between standards in compliance with the Defense Interoperability Management Development Guidelines. We introduce MRMM and implement it using vocabulary similarity based on machine learning and a statistical approach. Based on MRMM, we expect to simplify the standardization and integration of all military databases using artificial intelligence and big data, leading to a large reduction in the defense budget while increasing combat power for smart defense.
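The "vocabulary similarity using a statistical approach" step can be illustrated with a character-bigram Dice coefficient that suggests mappings between data-element names from two standards. This measure is one plausible stand-in; the abstract does not specify which similarity function MRMM actually uses, and the threshold below is an assumption.

```python
def ngrams(term, n=2):
    """Character n-grams of a normalized term."""
    term = term.lower().replace(" ", "")
    return {term[i:i + n] for i in range(len(term) - n + 1)}

def dice_similarity(a, b, n=2):
    """Statistical vocabulary similarity: Dice coefficient over character
    bigrams, tolerant of abbreviation-style differences (e.g. name vs nm)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def map_standards(source_terms, target_terms, threshold=0.5):
    """Suggest, for each source data element, its closest target-standard
    element when the similarity clears the (assumed) threshold."""
    mapping = {}
    for s in source_terms:
        best = max(target_terms, key=lambda t: dice_similarity(s, t))
        if dice_similarity(s, best) >= threshold:
            mapping[s] = best
    return mapping
```

In a metadata repository, such suggested mappings would still be reviewed by a standards officer before being recorded as authoritative standard-to-standard links.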

An Efficient Dual Queue Strategy for Improving Storage System Response Times (저장시스템의 응답 시간 개선을 위한 효율적인 이중 큐 전략)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence / v.10 no.3 / pp.19-24 / 2024
  • Recent advances in large-scale data processing technologies such as big data, cloud computing, and artificial intelligence have increased the demand for high-performance storage devices in data centers and enterprise environments. In particular, the fast data response speed of storage devices is a key factor determining overall system performance. Solid state drives (SSDs) based on the Non-Volatile Memory Express (NVMe) interface are gaining traction, but new bottlenecks emerge when handling large data input and output requests from multiple hosts simultaneously. SSDs typically process host requests by queuing them sequentially in an internal queue. When requests with long transfer lengths are processed first, shorter requests wait longer, increasing the average response time. Data transfer timeouts and data partitioning have been proposed to solve this problem, but they do not provide a fundamental solution. In this paper, we propose a dual-queue-based scheduling scheme (DQBS), which manages the data transfer order based on the request order in one queue and the transfer length in the other. The request time and transfer length are then considered together to determine an efficient data transfer order. This enables balanced processing of long and short requests, reducing the overall average response time. Simulation results show that the proposed method outperforms the existing sequential processing method. This study presents a scheduling technique that maximizes data transfer efficiency in a high-performance SSD environment and is expected to contribute to the development of next-generation high-performance storage systems.
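The dual-queue idea can be sketched by ranking each request twice, once by arrival order and once by transfer length, and combining the two ranks into a dispatch score. The linear combination and the weight `alpha` are assumptions for illustration; the abstract does not state DQBS's exact combining rule.

```python
def dqbs_schedule(requests, alpha=0.5):
    """Sketch of a dual-queue scheduler in the spirit of DQBS: one queue
    orders requests by arrival, the other by transfer length, and a weighted
    combination of the two ranks decides the dispatch order.

    requests: list of (request_id, arrival_seq, transfer_length)."""
    by_arrival = sorted(requests, key=lambda r: r[1])
    by_length = sorted(requests, key=lambda r: r[2])
    arrival_rank = {r[0]: i for i, r in enumerate(by_arrival)}
    length_rank = {r[0]: i for i, r in enumerate(by_length)}

    def score(r):  # lower score dispatches first
        return alpha * arrival_rank[r[0]] + (1 - alpha) * length_rank[r[0]]

    return [r[0] for r in sorted(requests, key=score)]
```

Compared with pure FIFO, short requests stuck behind a long transfer move forward; compared with pure shortest-first, an old long request cannot be starved indefinitely, which is the balance the abstract describes.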

A Study on Big Data Based Non-Face-to-Face Identity Proofing Technology (빅데이터 기반 비대면 본인확인 기술에 대한 연구)

  • Jung, Kwansoo;Yeom, Hee Gyun;Choi, Daeseon
    • KIPS Transactions on Computer and Communication Systems / v.6 no.10 / pp.421-428 / 2017
  • The growth of online financial services and the rapid development of financial technology call for various approaches to non-face-to-face identification technology for registering and authenticating users online. In general, non-face-to-face approaches are exposed to more threats than face-to-face approaches. Therefore, identification policies and technologies that verify users through various factors and channels are being studied to complement these risks and provide more reliable non-face-to-face identification. One of these new approaches is to collect and verify a large amount of the user's personal information. We therefore propose a big-data-based non-face-to-face identity proofing method that verifies identity online based on a large and varied amount of user information. The proposed method also provides an identity information management scheme that collects and verifies only the user information required for the identity assurance level demanded by the service. In addition, we propose an identity information sharing model that can provide the information to other service providers so that users can reuse verified identity information. Finally, by implementing a system, we demonstrate that only the identity assurance level required by the service is verified and managed through enhanced user verification in the non-face-to-face identity proofing process.
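The level-based idea of collecting only the information a service actually needs can be sketched as factor selection: each verification factor contributes to an assurance score, and factors are gathered only until the service's required level is met. The factor names and scores below are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical factor catalog: each identity factor contributes an assurance
# score. Names and weights are illustrative assumptions, not from the paper.
FACTORS = {
    "mobile_carrier_check": 1,
    "bank_account_deposit": 2,
    "id_card_ocr": 2,
    "behavioral_history": 1,
}

def select_factors(required_level, available=FACTORS):
    """Greedily collect the strongest factors until the service's required
    assurance level is met, so no more personal data is gathered than needed."""
    chosen, total = [], 0
    for name, score in sorted(available.items(), key=lambda kv: -kv[1]):
        if total >= required_level:
            break
        chosen.append(name)
        total += score
    return chosen, total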

The Improvement Plan for Indicator System of Personal Information Management Level Diagnosis in the Era of the 4th Industrial Revolution: Focusing on Application of Personal Information Protection Standards linked to specific IT technologies (제4차 산업시대의 개인정보 관리수준 진단지표체계 개선방안: 특정 IT기술연계 개인정보보호기준 적용을 중심으로)

  • Shin, Young-Jin
    • Journal of Convergence for Information Technology / v.11 no.12 / pp.1-13 / 2021
  • This study suggests ways to improve the indicator system for diagnosing the level of personal information management, in order to strengthen personal information protection. For this purpose, the components of the indicator system were derived from domestic and foreign literature, and the main diagnostic indicators were selected through FGI/Delphi analysis with personal information protection experts and a survey of personal information protection officers at public institutions. In this way, the study derives inspection standards that can be reflected as a separate indicator system for personal information protection, classified by the specific IT technologies of the 4th Industrial Revolution: big data, cloud, the Internet of Things, and artificial intelligence. As a result, check items for applying the Privacy by Design (PbD) principle, pseudonymous information processing, and de-identification measures from the planning and design stage of these technologies were selected as two common indicators, and the checklists consist of 2 items related to big data, 5 items related to cloud services, 5 items related to IoT, and 4 items related to AI. This study is thus expected to serve as an institutional device for responding to new technological changes and for the continuous development of the personal information management level diagnosis system.

A Selective Compression Strategy for Performance Improvement of Database Compression (데이터베이스 압축 성능 향상을 위한 선택적 압축 전략)

  • Lee, Ki-Hoon
    • KIPS Transactions on Software and Data Engineering / v.4 no.9 / pp.371-376 / 2015
  • The Internet of Things (IoT) significantly increases the amount of data. Database compression is important for big data because it can reduce storage costs and save I/O bandwidth. However, it can perform poorly for write-intensive workloads such as OLTP because of updates to compressed pages. In this paper, we present practical guidelines for improving the performance of database compression. In particular, we propose the SELECTIVE strategy, which compresses only tables whose space savings are close to the expected space savings calculated from the compressed page size. Experimental results using the TPC-C benchmark and MySQL show that the strategy achieves 1.1 times better performance than the uncompressed counterpart with 17.3% space savings.
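The selection rule in the abstract can be sketched directly: for each table, compare the measured savings against the savings implied by the compressed page size, and compress only when the two agree. The field names and the tolerance are illustrative assumptions; the paper's exact closeness criterion is not given in the abstract.

```python
def select_tables_to_compress(tables, tolerance=0.1):
    """Sketch of a SELECTIVE-style strategy: compress only tables whose
    measured space savings are close to the savings expected from the
    compressed page size (e.g. 8 KB pages compressed from 16 KB imply ~50%).

    tables: list of (name, uncompressed_bytes, compressed_bytes,
                     page_size, compressed_page_size)."""
    chosen = []
    for name, uncompressed, compressed, page, cpage in tables:
        actual_savings = 1 - compressed / uncompressed
        expected_savings = 1 - cpage / page
        # A large gap means compressed pages will overflow and be re-split
        # on updates, hurting write performance, so skip such tables.
        if abs(actual_savings - expected_savings) <= tolerance:
            chosen.append(name)
    return chosen
```

Intuitively, a table whose data barely fits the compressed page size triggers frequent page splits and recompression under OLTP updates, which is exactly the overhead the SELECTIVE strategy avoids.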