• Title/Summary/Keyword: Disk Drive

An Analysis on the Role of Enabling Technology in the Relationship between Core Technology and Business Model during the Process of Disruptive Innovation (와해성 혁신과정에서 핵심기술과 비즈니스 모델간의 관계와 보완기술의 중요성 분석: 인터넷 쇼핑몰 사례를 중심으로)

  • Lee, Su;Lee, Sang-Hyun;Kim, Kil-Sun
    • Journal of Technology Innovation
    • /
    • v.19 no.1
    • /
    • pp.79-109
    • /
    • 2011
  • In his highly cited book, The Innovator's Dilemma (1997), Christensen introduced the notion of disruptive technology based on observations from the disk-drive industry and used it as an explanatory variable through which new entrants outperform incumbents in the industry. In explaining his later observations of disruptive innovations in other industries, however, his early theory based on disruptive technology was applied to all cases without careful distinction between the notions of technology and business model (Markides, 2006). Furthermore, his model has been criticized for lacking sufficient explanatory power and for omitting other important factors needed to fully explain the observed phenomena in various cases (Danneels, 2004). Motivated by these critiques in the literature, the current study carefully distinguishes between innovation of technology and innovation of business model in the process of disruptive innovation, and applies this framework to the case of the internet shopping mall business. Our study yields two main results. First, the internet-related business model that Christensen cited as an example of disruptive innovation was accomplished through two distinct and separable growth phases: a period of technology growth and a period of business model growth. Second, in the process of disruptive innovation, the notion of enabling technology plays an important bridging role that connects core technology and business model. Furthermore, we confirm that the success of business model innovation depends on the degree of maturity of the enabling technologies. In conclusion, Christensen's notion of disruptive innovation can be further detailed in terms of technology innovation and business model innovation, and where enabling technologies exist, the business model has a higher chance of success when the enabling technology is mature than when the core technology is merely acknowledged as a disruptive technology.

  • PDF

SSD-based RAID-6 System Architecture for Reliability and Performance Enhancement (신뢰성 향상과 성능개선을 위해 다양한 Erasure 코드를 적용한 SSD 기반 RAID-6 시스템 구조)

  • Song, Jae-Seok;Huh, Joon-Moo;Yang, Yu-Seok;Kim, Deok-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.6
    • /
    • pp.47-56
    • /
    • 2010
  • HDD-based RAIDs have been used in high-capacity storage systems for traditional data servers. However, their data reliability is relatively low and they consume a great deal of power, since hard disk drives are vulnerable to shock and their power consumption is high due to frequent spindle motor operation. Therefore, this paper presents a new SSD-based RAID system architecture using various erasure codes. The proposed method applies the Reed-Solomon, EVENODD, and Liberation coding schemes at the file system level and the device driver level, respectively. In addition, it uses a data allocation method that minimizes the reduction of SSD lifespan. Detailed experimental results show that the Liberation code increases the wear-leveling rate of the SSD-based RAID-6 more than the other codes. The SSD-based RAID system applying erasure codes at the device driver level shows better performance than the one applying them at the file system level. The I/O performance of the RAID-6 system using SSDs is 4.5%~8.5% higher than that using HDDs, and the power consumption of the RAID system using SSDs is 18%~40% less than that using HDDs.
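
To make the parity arithmetic behind an erasure-coded RAID-6 stripe concrete, here is a minimal sketch that computes the usual P (XOR) and Q (Reed-Solomon over GF(2^8)) parities for one small stripe. It illustrates the general technique only; the stripe layout, generator, and field polynomial are common defaults assumed here, not the paper's file-system-level or device-driver-level implementations.

```python
# Minimal RAID-6 P/Q parity sketch over GF(2^8) (illustrative assumptions,
# not the paper's Reed-Solomon/EVENODD/Liberation implementations).

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) using the 0x11D reduction polynomial."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return result

def raid6_parity(data_chunks: list[bytes]) -> tuple[bytes, bytes]:
    """Compute P (plain XOR) and Q (Reed-Solomon) parity for one stripe."""
    length = len(data_chunks[0])
    p, q = bytearray(length), bytearray(length)
    for i, chunk in enumerate(data_chunks):
        g_i = 1
        for _ in range(i):              # generator 2 raised to the chunk index
            g_i = gf_mul(g_i, 2)
        for j, byte in enumerate(chunk):
            p[j] ^= byte
            q[j] ^= gf_mul(g_i, byte)
    return bytes(p), bytes(q)

if __name__ == "__main__":
    stripe = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
    p, q = raid6_parity(stripe)
    # any single lost chunk can be rebuilt from P alone; any two from P and Q
    print(p.hex(), q.hex())
```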

Analysis of Roundness Formation in the Centerless Grinding Process (무심연삭공정의 진원도 형성해석)

  • 주종남;김강
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2001.10a
    • /
    • pp.21-25
    • /
    • 2001
  • With the worldwide trend toward smaller and faster machine components and toward lower pollution and noise, precision machining technology has come to occupy an important position in the machinery and electronics industries. In particular, the centerless grinding process has been used as an important production process for machining cylindrical shapes because of its high productivity and its ability to form accurate dimensions. For example, small VCR shafts, computer disk drives, miniature motors, and fuel injectors obtain high precision through centerless grinding. However, because of the peculiarities of this process and the difficulty of measurement, the way these precise shapes are formed is still not well understood. In centerless grinding the part is not clamped to the machine; it rests on a work-rest blade and is pressed between the regulating wheel and the grinding wheel. The regulating wheel rotates the workpiece by friction, and material removal takes place at the grinding wheel. The distance between the regulating wheel and the grinding wheel changes continuously because of the elastic deformation of the machine itself, and this variation has a decisive influence on the formation of the workpiece's precise shape. In this study, the relative motion of the workpiece, the work-rest blade, the regulating wheel, and the grinding wheel during centerless grinding was analyzed geometrically. In particular, the actual motion of the workpiece was analyzed using interference conditions to obtain the instantaneous nominal depth of cut. The normal grinding force was then obtained from an empirical grinding characteristic equation, the elastic deformation of the grinder was determined, and the instantaneous actual depth of cut was calculated. From these, a basic equation for roundness formation was derived. Using this equation, computer simulations were performed under the same conditions as the experiments. The simulations confirmed that any irregularity on the circumference, that is, a groove or a lobe, is regenerated into another lobe or groove, and that the period of this regeneration is determined by the angles at which the workpiece is supported on the regulating wheel and the grinding wheel.
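
As a rough illustration of the kind of regenerative simulation the abstract describes, the sketch below iterates a workpiece profile over successive revolutions, letting the radial deviations at the two support contacts and a simple compliance factor modulate the depth of cut at the grinding point. The contact angles, influence coefficients, and update rule are illustrative assumptions, not the equation derived in the paper.

```python
# Simplified regenerative roundness-formation loop (all parameters assumed).

N = 360                        # angular resolution of the workpiece profile (1 deg steps)
BLADE_ANGLE = 80               # offset of the work-rest blade contact [deg] (assumed)
REG_ANGLE = 187                # offset of the regulating-wheel contact [deg] (assumed)
A_BLADE, A_REG = 0.45, 0.55    # geometric influence coefficients of the supports (assumed)
MU = 0.6                       # cutting/(cutting + machine) stiffness ratio (assumed)

# start with a single 1-micron lobe at angle 0
profile = [0.0] * N
profile[0] = 1.0

for revolution in range(30):
    updated = [0.0] * N
    for i in range(N):
        # deviations at the support contacts shift the workpiece center,
        # which changes the nominal depth of cut at the grinding contact
        center_shift = (A_BLADE * profile[(i + BLADE_ANGLE) % N]
                        + A_REG * profile[(i + REG_ANGLE) % N])
        nominal_depth = profile[i] - center_shift
        # machine compliance lets only a fraction of that depth be removed
        updated[i] = profile[i] - MU * nominal_depth
    profile = updated

print(f"out-of-roundness after 30 revolutions: {max(profile) - min(profile):.4f} (arbitrary units)")
```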

Shallow Marine Seismic Refraction Data Acquisition and Interpretation Using Digital Technique (디지털 技法을 이용한 淺海底 屈折法 彈性波 探査資料의 取得과 解析)

  • 이호영;김철민
    • 한국해양학회지
    • /
    • v.27 no.1
    • /
    • pp.19-34
    • /
    • 1992
  • Marine seismic refraction surveys have been carried out by the Korea Institute of Geology, Mining and Materials (KIGAM) since 1984. The recording of refraction data was based on analog instrumentation, so the resolution of the data was not good enough to distinguish many layers. The objective of interpreting seismic refraction data is the determination of the intervals (layer thicknesses) and the propagation velocities of critically refracted seismic waves through the layers beneath the sea floor. To determine intervals and velocities precisely, the resolution of the refraction data should be enhanced. The intent of this study is to improve the quality of shallow marine refraction data by a digital technique using a microcomputer-based acquisition and processing system. The system consists of an IBM AT microcomputer clone, an analog-to-digital (A/D) converter, a mass storage unit, and a parallel processing board. The A/D converter has 12 bits of precision and a 250 kHz conversion rate. A magneto-optical disk drive is used for mass storage of the seismic refraction data. Shallow marine seismic refraction surveys were carried out using the system at 6 locations off the Ulsan and Pusan areas. The refraction data were acquired with a radio sonobuoy. The refraction profiles were produced by a laser printer with 300 dpi resolution after basic computer processing. 5-9 layers were interpreted from the digital refraction profiles, whereas 2-4 layers were interpreted from the analog refraction profiles. The propagation velocities of the sediments were interpreted as 1.6-2.1 km/sec. The propagation velocities of the acoustic basement were interpreted as 2.4-2.7 km/sec off the Ulsan area and 4.8 km/sec off the Pusan area. (A minimal travel-time interpretation sketch follows this entry.)

  • PDF
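
As a minimal illustration of how refraction picks like these are turned into layer velocities and thicknesses, the sketch below applies the standard two-layer intercept-time method to made-up first-arrival picks; the numbers are chosen only to resemble the velocity ranges quoted above and are not data from the paper.

```python
# Two-layer intercept-time interpretation of refraction first arrivals
# (picks below are invented for illustration).

import math

def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# first-arrival picks: (offset [km], travel time [s])
direct = [(0.1, 0.059), (0.2, 0.118), (0.3, 0.176)]    # direct wave through layer 1
refracted = [(0.6, 0.31), (0.8, 0.39), (1.0, 0.47)]    # head wave along layer 2

s1, _ = fit_line(*zip(*direct))
s2, t_intercept = fit_line(*zip(*refracted))
v1, v2 = 1.0 / s1, 1.0 / s2                            # layer velocities [km/s]
# intercept-time formula for the thickness of layer 1
depth = t_intercept * v1 * v2 / (2.0 * math.sqrt(v2 ** 2 - v1 ** 2))
print(f"V1={v1:.2f} km/s, V2={v2:.2f} km/s, layer-1 thickness={depth:.3f} km")
```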

Adaptive Mapping Information Management Scheme for High Performance Large Scale Flash Memory Storages (고성능 대용량 플래시 메모리 저장장치의 효과적인 매핑정보 캐싱을 위한 적응적 매핑정보 관리기법)

  • Lee, Yongju;Kim, Hyunwoo;Kim, Huijeong;Huh, Taeyeong;Jung, Sanghyuk;Song, Yong Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.3
    • /
    • pp.78-87
    • /
    • 2013
  • NAND flash memory has been widely used as a storage medium in mobile devices, PCs, and workstations because of its advantages over hard disk drives, such as low power consumption, high performance, and random accessibility. However, NAND flash does not support in-place updates, so the entire block must be erased before the corresponding page can be overwritten. To overcome this drawback, flash storage requires a software layer named the Flash Translation Layer (FTL). However, as high-performance, large-capacity NAND flash memory comes into wide use, the size of the mapping tables grows beyond the limited DRAM capacity. In this paper, we propose an adaptive mapping information caching algorithm based on page mapping to solve this DRAM space shortage problem. Our algorithm uses a mapping information caching scheme that minimizes the flash memory access frequency, based on an analysis of several workloads. The experimental results show that the proposed algorithm improves performance by up to 70% compared with the previous mapping information caching algorithm.
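
To make the underlying idea concrete, here is a small sketch of demand-loaded page-mapping caching: only a bounded number of logical-to-physical entries live in DRAM, and a miss costs one read of the map stored in flash. The LRU policy and the dictionary standing in for the flash-resident table are illustrative assumptions, not the adaptive scheme proposed in the paper.

```python
# Demand-loaded page-mapping cache sketch (LRU policy assumed for illustration).

from collections import OrderedDict

class MappingCache:
    """Caches logical-page -> physical-page entries in limited DRAM."""

    def __init__(self, capacity: int, flash_mapping: dict[int, int]):
        self.capacity = capacity
        self.flash_mapping = flash_mapping    # full mapping table kept in flash
        self.cache: OrderedDict[int, int] = OrderedDict()
        self.flash_reads = 0

    def lookup(self, lpn: int) -> int:
        if lpn in self.cache:                 # hit: no flash access needed
            self.cache.move_to_end(lpn)
            return self.cache[lpn]
        self.flash_reads += 1                 # miss: fetch the entry from flash
        ppn = self.flash_mapping[lpn]
        self.cache[lpn] = ppn
        if len(self.cache) > self.capacity:   # evict least recently used entry
            self.cache.popitem(last=False)
        return ppn

# usage: a working set that fits in the cache touches flash only once per entry
table = {lpn: lpn + 1000 for lpn in range(10000)}
cache = MappingCache(capacity=128, flash_mapping=table)
for lpn in list(range(100)) * 5:
    cache.lookup(lpn)
print("flash map reads:", cache.flash_reads)   # 100 misses for 500 lookups
```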

Shadow Recovery for Column-based Databases (컬럼-기반 데이터베이스를 위한 그림자 복구)

  • Byun, Si-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.4
    • /
    • pp.2784-2790
    • /
    • 2015
  • The column-oriented database storage is a very advanced model for large-volume data transactions because of its superior I/O performance. Traditional data storage exploits row-oriented layouts, where the attributes of a record are placed contiguously on hard disk for fast write operations. However, for search-mostly data warehouse systems, column-oriented storage has become a more appropriate model because of its superior read performance. Recently, solid state drives using flash memory have been widely recognized as the preferred storage media for high-speed data analysis systems. In this research, we propose a new transaction recovery scheme for a column-oriented database environment based on a flash media file system. We improve traditional shadow paging schemes by reusing old data pages that would otherwise be invalidated in the course of writing a new data page in the flash file system environment. In order to reuse these data pages, we exploit a reused shadow list structure in our column-oriented shadow recovery (CoSR) scheme. The CoSR scheme minimizes the additional storage overhead for keeping shadow pages and minimizes the I/O performance degradation caused by the column data compression of traditional recovery schemes. Based on the results of the performance evaluation, we conclude that CoSR outperforms the traditional schemes by 17%.
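
The sketch below illustrates the general reused-shadow idea in miniature: because flash writes are out of place anyway, the superseded version of a page can double as its shadow copy, so recovery rolls back to it without allocating separate shadow pages. The structures and method names are assumptions for illustration, not the CoSR implementation.

```python
# Reused-shadow-list recovery sketch for a column store (illustrative only).

class ColumnStore:
    def __init__(self):
        self.page_table = {}      # page id -> current (committed or dirty) page data
        self.shadow_list = {}     # page id -> superseded page reused as shadow copy

    def write_page(self, page_id: str, data: bytes) -> None:
        """Out-of-place write: the old version becomes the shadow copy for free."""
        if page_id in self.page_table and page_id not in self.shadow_list:
            self.shadow_list[page_id] = self.page_table[page_id]
        self.page_table[page_id] = data

    def commit(self) -> None:
        """On commit the shadows are released (old pages may now be erased)."""
        self.shadow_list.clear()

    def recover(self) -> None:
        """On crash or abort, roll every touched page back to its shadow copy."""
        for page_id, old_data in self.shadow_list.items():
            self.page_table[page_id] = old_data
        self.shadow_list.clear()

store = ColumnStore()
store.write_page("col_A/page0", b"v1")
store.commit()
store.write_page("col_A/page0", b"v2")   # v1 is reused as the shadow copy
store.recover()                          # transaction aborts: page rolls back to v1
print(store.page_table["col_A/page0"])   # b'v1'
```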

Column-aware Transaction Management Scheme for Column-Oriented Databases (컬럼-지향 데이터베이스를 위한 컬럼-인지 트랜잭션 관리 기법)

  • Byun, Si-Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.4
    • /
    • pp.125-133
    • /
    • 2014
  • The column-oriented database storage is a very advanced model for large-volume data analysis systems because of its superior I/O performance. Traditional data storage exploits row-oriented layouts, where the attributes of a record are placed contiguously on hard disk for fast write operations. However, for search-mostly data warehouse systems, column-oriented storage has become a more appropriate model because of its superior read performance. Recently, solid state drives using MLC flash memory have been widely recognized as the preferred storage media for high-speed data analysis systems. The features of non-volatility, low power consumption, and fast read access are sufficient grounds to adopt flash memory as a major storage component of modern database servers. However, the traditional transaction management scheme needs to be improved because column compression and flash operations are relatively slow compared with RAM. In this research, we propose a new scheme called Column-aware Multi-Version Locking (CaMVL) for efficient transaction processing. CaMVL improves transaction performance by using compression locks and multi-version reads to efficiently handle slow flash write/erase operations in the lock management process. We also propose a simulation model to show the performance of CaMVL. Based on the results of the performance evaluation, we conclude that the CaMVL scheme outperforms the traditional scheme.
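
As a generic illustration of the multi-version read idea, the sketch below lets readers decompress the last committed version of a column segment while a writer holds a lock for the slow compress-and-write path; only the install of the new version is visible to readers. This is a plain MVCC-style sketch under assumptions, not the CaMVL protocol itself.

```python
# Multi-version read sketch: readers never wait for the slow compress/write path.

import threading, time, zlib

class VersionedSegment:
    def __init__(self, rows: list[int]):
        self.committed = zlib.compress(bytes(rows))   # last committed, compressed version
        self.write_lock = threading.Lock()            # held during slow compress + flash write

    def read(self) -> list[int]:
        # readers are served the committed version without taking the write lock
        return list(zlib.decompress(self.committed))

    def update(self, rows: list[int]) -> None:
        with self.write_lock:                         # "compression lock"
            new_version = zlib.compress(bytes(rows))
            time.sleep(0.05)                          # stand-in for a slow flash write/erase
            self.committed = new_version              # install the new version

segment = VersionedSegment([1, 2, 3])
writer = threading.Thread(target=segment.update, args=([4, 5, 6],))
writer.start()
print(segment.read())      # may still return [1, 2, 3] while the writer works
writer.join()
print(segment.read())      # [4, 5, 6] after the writer finishes
```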

Flash Translation Layer for Heterogeneous NAND Flash-based Storage Devices Based on Access Patterns of Logical Blocks (논리 블록의 접근경향을 활용한 이종 낸드 플래시 기반 저장장치를 위한 Flash Translation Layer)

  • Bang, Kwanhu;Park, Sang-Hoon;Lee, Hyuk-Jun;Chung, Eui-Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.5
    • /
    • pp.94-101
    • /
    • 2013
  • The market for NAND flash-based storage devices has grown significantly as they rapidly replace traditional disk-based storage devices. Heterogeneous NAND flash-based storage devices using both multi-level cell (MLC) and single-level cell (SLC) NAND flash memories are also actively researched, since the two types of memory complement each other. Heterogeneous NAND flash-based storage devices suffer from the overheads incurred by migration from SLC to MLC and by garbage collection of SLC. This paper proposes a new flash translation layer (FTL) for heterogeneous NAND flash-based storage devices that reduces these overheads by utilizing SLC efficiently. The proposed FTL analyzes the access patterns of logical blocks and selects and stores in SLC only those logical blocks that are expected to bring a performance improvement. The experimental results show that the total execution time of heterogeneous NAND flash-based storage devices with the proposed FTL scheme is 35% shorter than that with the best previously proposed FTL scheme.
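
The following sketch shows one simple way an FTL could act on logical-block access patterns: per-block write counters, periodically aged, route frequently rewritten blocks to SLC and the rest to MLC. The hotness metric, threshold, and aging window are illustrative assumptions rather than the selection policy proposed in the paper.

```python
# Access-pattern-driven SLC/MLC placement sketch (policy parameters assumed).

from collections import defaultdict

class HybridPlacer:
    def __init__(self, hot_threshold: int = 4, window: int = 1000):
        self.hot_threshold = hot_threshold
        self.window = window
        self.write_counts = defaultdict(int)
        self.seen = 0

    def record_write(self, logical_block: int) -> str:
        """Return the region ('SLC' or 'MLC') chosen for this write."""
        self.write_counts[logical_block] += 1
        self.seen += 1
        if self.seen % self.window == 0:       # age counters so hotness adapts over time
            for blk in list(self.write_counts):
                self.write_counts[blk] //= 2
        # frequently rewritten blocks go to SLC to avoid MLC write cost and to
        # reduce later SLC-to-MLC migration of data that will soon be rewritten
        return "SLC" if self.write_counts[logical_block] >= self.hot_threshold else "MLC"

placer = HybridPlacer()
workload = [7, 7, 7, 7, 42, 7, 99, 7]          # block 7 is hot, 42 and 99 are cold
print([placer.record_write(b) for b in workload])
# ['MLC', 'MLC', 'MLC', 'SLC', 'MLC', 'SLC', 'MLC', 'SLC']
```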

A Study for Hybrid Honeypot Systems (하이브리드 허니팟 시스템에 대한 연구)

  • Lee, Moon-Goo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.11
    • /
    • pp.127-133
    • /
    • 2014
  • Honeypot systems are deployed to protect information assets from various kinds of malicious code. A honeypot is designed either to draw attacks away so that internal systems are not attacked, or to collect information about malicious code. However, existing honeypot systems are designed primarily to collect information, so they actively induce attackers by establishing decoy servers or decoy client servers and by providing decoy contents. When a decoy server is established, the hardware must be reinstalled roughly once a year because of frequent disk input and output. When a decoy client server is established, operational problems arise, such as the need for skilled specialists, because the analysis of the acquired information can be automated only to a limited extent. To address these operational problems and the hardware problems of previous honeypots, this paper proposes a hybrid honeypot. The proposed hybrid honeypot consists of a honeywall, an analysis server, and an integrated console, and it processes traffic by categorizing attacks into two types. Decoying (inducement) and false response (emulation) are connected to a common switch area so that a high-interaction server (type 1) and a low-interaction server (type 2) can be operated. The hybrid honeypot thus operates both a low-interaction and a high-interaction honeypot. The analysis server converts attack types into hash values, feeds them into a correlation analysis algorithm, and sends the results to the honeywall. The integrated monitoring console performs continuous monitoring, so it is expected not only to provide analysis of recent hacking methods and attack tools but also to enable a proactive security response.
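
As a loose illustration of the "hash the attack type, correlate, and route" step described above, the sketch below hashes an attack signature, counts how often it recurs, and sends repeats to a cheap low-interaction tier while novel signatures go to the high-interaction tier. The signature format and routing rule are hypothetical and are not the system described in the paper.

```python
# Hypothetical attack-signature hashing and honeypot-tier routing sketch.

import hashlib
from collections import Counter

seen_signatures = Counter()

def classify(payload: bytes, dst_port: int) -> str:
    """Hash the attack signature and route it to a honeypot tier."""
    signature = hashlib.sha256(payload + dst_port.to_bytes(2, "big")).hexdigest()
    seen_signatures[signature] += 1
    # already-seen signatures can be answered by the cheap low-interaction
    # (emulation) tier; novel ones are worth full high-interaction analysis
    tier = "low-interaction" if seen_signatures[signature] > 1 else "high-interaction"
    return f"{tier} (sig {signature[:12]}..., seen {seen_signatures[signature]}x)"

print(classify(b"GET /phpmyadmin/ HTTP/1.1", 80))   # first sighting -> high interaction
print(classify(b"GET /phpmyadmin/ HTTP/1.1", 80))   # repeat -> low interaction
print(classify(b"\x00\x01\x86\xa0", 111))           # new signature -> high interaction
```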

Implementation of File-referring Octree for Huge 3D Point Clouds (대용량 3차원 포인트 클라우드를 위한 파일참조 옥트리의 구현)

  • Han, Soohee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.2
    • /
    • pp.109-115
    • /
    • 2014
  • The aim of this study is to present a method for building an octree and querying it for huge 3D point clouds whose volume equals or surpasses the main memory, based on the memory-efficient octree developed by Han (2013). To that end, the method directly refers to the 3D point cloud stored in a file on a hard disk drive instead of referring to a copy duplicated in the main memory. In addition, the method can save octree rebuilding time by storing the octree to a file and restoring it from that file. The memory-referring method and the present file-referring one are analyzed using a dataset composed of 18 million points surveyed in a tunnel. In the results, the memory-referring method greatly exceeded the speed of the file-referring one when generating the octree and querying points. Meanwhile, it is remarkable that a still bigger dataset composed of over 300 million points could be queried by the file-referring method, which would not be possible with the memory-referring one, although an optimal octree destination level could not be reached. Furthermore, the octree rebuilding method proved very efficient, reducing the restoration time to about 3% of the generation time.
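
To illustrate the file-referring idea in miniature, the sketch below builds a single-level spatial index whose leaves hold only byte offsets into the point file; a query then seeks to those offsets and reads back just the points of the requested cell. The binary layout, the one-level grid (standing in for a full octree), and the node structure are illustrative assumptions, not Han's implementation.

```python
# File-referring spatial index sketch: leaves store file offsets, not points.

import struct

POINT = struct.Struct("<ddd")            # x, y, z as little-endian doubles

class Leaf:
    def __init__(self):
        self.offsets: list[int] = []     # byte offsets of member points in the file

def build_leaf_grid(path: str, cell: float) -> dict[tuple[int, int, int], Leaf]:
    """One-level index (a uniform grid of leaves) built from file offsets only."""
    leaves: dict[tuple[int, int, int], Leaf] = {}
    with open(path, "rb") as f:
        offset = 0
        while chunk := f.read(POINT.size):
            x, y, z = POINT.unpack(chunk)
            key = (int(x // cell), int(y // cell), int(z // cell))
            leaves.setdefault(key, Leaf()).offsets.append(offset)
            offset += POINT.size
    return leaves

def query_cell(path: str, leaf: Leaf) -> list[tuple[float, float, float]]:
    """Read back only the points of one leaf by seeking to their offsets."""
    points = []
    with open(path, "rb") as f:
        for off in leaf.offsets:
            f.seek(off)
            points.append(POINT.unpack(f.read(POINT.size)))
    return points

# usage with a tiny synthetic point file
with open("points.bin", "wb") as f:
    for p in [(0.2, 0.3, 0.1), (5.1, 0.2, 0.4), (0.4, 0.5, 0.6)]:
        f.write(POINT.pack(*p))
leaves = build_leaf_grid("points.bin", cell=1.0)
print(query_cell("points.bin", leaves[(0, 0, 0)]))   # only the two points in that cell
```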