• Title/Summary/Keyword: hardware cost

Search Result 871, Processing Time 0.025 seconds

Verifying a Virtual Development Environment for Embedded Software (임베디드소프트웨어 가상 개발환경에 대한 검증)

  • Hidayat, Febiansyah;Satria, Hadipurnawan;Kwon, Jin B.
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2009.11a
    • /
    • pp.67-68
    • /
    • 2009
  • The increasing use of embedded systems has driven many improvements in hardware development for specific purposes. Hardware changes are more expensive and harder to implement than software changes, so developers need tools for designing and testing new hardware. Many simulation tools have been built to mimic hardware and allow developers to test programs on top of new hardware. The Virtual Development Environment for Embedded Software (VDEES) is one of the available alternatives. It provides an open-source-based platform and an Integrated Development Environment (IDE) that can be used to build and test newly made components faster and at low cost.

A novel hardware design for SIFT generation with reduced memory requirement

  • Kim, Eung Sup;Lee, Hyuk-Jae
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.13 no.2
    • /
    • pp.157-169
    • /
    • 2013
  • Scale Invariant Feature Transform (SIFT) generates image features widely used to match objects in different images. Previous work on hardware-based SIFT implementation requires excessive internal memory and hardware logic [1]. In this paper, a new hardware organization is proposed to implement SIFT with less memory and hardware cost than the previous work. To this end, a parallel Gaussian filter bank is adopted to eliminate the buffers that store intermediate results, because parallel operations make all intermediate results available at the same time. Furthermore, the processing order is changed from raster-scan order to block-by-block order so that the size of the line buffer storing the source image is also reduced. These techniques trade a reduction in memory size for a slight increase in execution time and external memory bandwidth. As a result, the memory size is reduced by 94.4%. The proposed hardware for SIFT implementation includes the descriptor generation block, which is omitted in the previous work [1]. The addition of the hardwired descriptor generation improves the computation speed by about 30 times compared with the previous work.
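  • The two ideas in this abstract, block-by-block processing and a bank of Gaussian filters applied to the same tile at once, can be sketched in software as below. This is an illustrative NumPy/SciPy sketch only, not the paper's design; the block size, padding, and sigma values are arbitrary assumptions.

    # Illustrative sketch (not the paper's circuit): difference-of-Gaussians for
    # SIFT computed block-by-block, with all scales of each tile filtered at
    # once so no full-frame intermediate image has to be buffered.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_block_by_block(image, sigmas=(1.6, 2.26, 3.2, 4.52), block=64, pad=16):
        """Return len(sigmas)-1 difference-of-Gaussians planes for one octave."""
        h, w = image.shape
        dog = np.zeros((len(sigmas) - 1, h, w), dtype=np.float32)
        for y in range(0, h, block):
            for x in range(0, w, block):
                # Read a padded tile so the Gaussian support does not clip.
                y0, y1 = max(0, y - pad), min(h, y + block + pad)
                x0, x1 = max(0, x - pad), min(w, x + block + pad)
                tile = image[y0:y1, x0:x1].astype(np.float32)
                # "Parallel filter bank": every scale of this tile at once.
                blurred = [gaussian_filter(tile, s) for s in sigmas]
                bh, bw = min(block, h - y), min(block, w - x)
                for k in range(len(sigmas) - 1):
                    d = blurred[k + 1] - blurred[k]
                    dog[k, y:y + bh, x:x + bw] = d[y - y0:y - y0 + bh,
                                                   x - x0:x - x0 + bw]
        return dog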

Design and Implementation of Low Cost Z-80 Emulator (저렴한 Z-80 Emulator의 설계 및 제작)

  • 마성원;임상조;정환익;이광형
    • Proceedings of the Korean Institute of Communication Sciences Conference
    • /
    • 1984.10a
    • /
    • pp.98-100
    • /
    • 1984
  • This paper presents the design of an emulator for the 8-bit Z-80 microprocessor. The system controls the hardware and software debugging relationship between the target system and the host system. The goal is to manufacture the emulator at low cost.
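  • The abstract does not describe the emulator's internals; for context, the core of any such emulator or monitor is a fetch-decode-execute loop for the target CPU. The toy Python sketch below is a generic illustration, not the 1984 design; only the opcode values (NOP, INC A, HALT) follow the real Z-80 encoding.

    # Generic illustration of a CPU emulation loop (not the paper's hardware).
    def run_z80_like(memory, pc=0):
        a = 0                              # accumulator
        while True:
            opcode = memory[pc]; pc += 1   # fetch
            if opcode == 0x00:             # NOP
                continue
            elif opcode == 0x3C:           # INC A
                a = (a + 1) & 0xFF
            elif opcode == 0x76:           # HALT
                return a, pc
            else:
                raise ValueError(f"unimplemented opcode {opcode:#04x}")

    # Example: INC A, INC A, HALT -> accumulator ends at 2
    print(run_z80_like([0x3C, 0x3C, 0x76]))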


Intents of Acquisitions in Information Technology Industries (정보기술 산업에서의 인수 유형별 인수 의도 분석)

  • Cho, Wooje;Chang, Young Bong;Kwon, Youngok
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.123-138
    • /
    • 2016
  • This study investigates intents of acquisitions in information technology industries. Mergers and acquisitions are a corporate-level strategic decision and have been an important tool for a firm to grow. Over the last decades, plenty of firms in information technology industries have acquired startups to increase production efficiency, expand their customer base, or improve quality. For example, Google has made about 200 acquisitions since 2001, Cisco has acquired about 210 firms since 1993, Oracle has made about 125 acquisitions since 1994, and Microsoft has acquired about 200 firms since 1987. Although many existing papers theoretically study the intents or motivations of acquisitions, few empirically investigate them, mainly because it is challenging to measure and quantify the intents of M&As. This study examines acquisition intent by measuring specific intents for M&A transactions. Using our measures of acquisition intents, we compare intents across four acquisition types: (1) a hardware firm acquires a hardware firm, (2) a hardware firm acquires a software/IT service firm, (3) a software/IT service firm acquires a hardware firm, and (4) a software/IT service firm acquires a software/IT service firm. We presume that the reasons for acquisition differ across these four cases. Using data on M&As in US IT industries, we identified the major intents of the M&As. The acquisition intents are identified from the press releases of M&A announcements and measured with four categories. First, an acquirer may intend cost saving in operations by sharing common resources between the acquirer and the target; the cost saving can accrue from economies of scope and scale. Second, an acquirer may intend product enhancement/development: knowledge and skills transferred from the target may enable the acquirer to enhance product quality or to expand product lines. Third, an acquirer may intend to gain an additional customer base to expand the market, penetrate the market, or enter a foreign market. Fourth, a firm may acquire a target intending to expand customer channels; by complementing its existing channels to the customer, the firm can increase its revenue. Our results show that acquirers have cost-saving intents more often in acquisitions between hardware companies than in acquisitions between software companies. Hardware firms are more likely than software firms to acquire with intents of product enhancement or development. Overall, product enhancement/development is the most frequent intent in all four acquisition types, and customer base expansion is the second. We also analyze our data with a classification into production-side and customer-side intents, based on the activities of a firm's value chain: intents of cost saving in operations and of product enhancement/development are production-side intents, while intents of customer base expansion and of expanding customer channels are customer-side intents.
Our analysis shows that the ratio of customer-side intents to production-side intents is higher in acquisitions where a software firm is the acquirer than in acquisitions where a hardware firm is the acquirer. This study can contribute to the IS literature. First, it provides insights for understanding M&As in IT industries by answering the question of why an IT firm intends to acquire another IT firm. Second, it also provides the distribution of acquisition intents across acquisition types.

Bus and Register Optimization in Datapath Synthesis (데이터패스 합성에서의 버스와 레지스터의 최적화 기법)

  • Sin, Gwan-Ho;Lee, Geun-Man
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.8
    • /
    • pp.2196-2203
    • /
    • 1999
  • This paper describes the bus scheduling problem and a register optimization method in datapath synthesis. Scheduling is the process of allocating operations to control steps in order to minimize a cost function under the given constraints. For that purpose, we propose formulations that minimize the cost function for bus assignment so as to obtain an optimal, minimal-cost hardware allocation. In particular, bus and register minimization techniques, which are essential topics in hardware allocation, are fully considered. Register scheduling is done after operation and bus scheduling. Experiments with the DFG model of a fifth-order digital wave filter demonstrate the method's effectiveness. Structural integer programming formulations are used to solve the scheduling problems and obtain optimal schedules in an integer linear programming environment.
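  • The abstract does not give the exact formulation; the sketch below is one plausible, heavily simplified integer-linear-programming model of this kind, assuming the PuLP package: binary variables assign each DFG operation to a control step inside its mobility window, precedence is enforced, and the objective minimizes the number of buses needed in the busiest step. The toy DFG and transfer counts are made-up illustration data.

    # Simplified ILP sketch (assumed model, not the paper's exact formulation).
    from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, LpInteger, lpSum

    ops = {"a": (1, 2), "b": (1, 2), "c": (2, 3)}   # op -> (ASAP, ALAP) window
    deps = [("a", "c"), ("b", "c")]                 # c consumes the results of a and b
    transfers = {"a": 1, "b": 1, "c": 1}            # bus transfers per operation
    steps = range(1, 4)

    prob = LpProblem("bus_minimization", LpMinimize)
    x = {(o, s): LpVariable(f"x_{o}_{s}", cat=LpBinary) for o in ops for s in steps}
    buses = LpVariable("buses", lowBound=0, cat=LpInteger)
    prob += lpSum([buses])                          # objective: fewest buses

    for o, (asap, alap) in ops.items():
        prob += lpSum(x[o, s] for s in steps) == 1  # schedule each op exactly once
        for s in steps:
            if not (asap <= s <= alap):
                prob += x[o, s] == 0                # respect the mobility window
    for a, b in deps:                               # precedence: a finishes before b
        prob += lpSum(s * x[b, s] for s in steps) >= lpSum(s * x[a, s] for s in steps) + 1
    for s in steps:                                 # bus capacity in every control step
        prob += lpSum(transfers[o] * x[o, s] for o in ops) <= buses

    prob.solve()
    schedule = {o: next(s for s in steps if x[o, s].value() > 0.5) for o in ops}
    print(schedule, buses.value())                  # e.g. {'a': 1, 'b': 2, 'c': 3} 1.0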


Analysis on Voltage and Cost of Substation with PWM Rectifier in DC Traction Power Supply System (PWM 정류기를 적용한 직류급전시스템의 전압강하 및 비용 평가)

  • Kim, Joorak;Park, Kijun;Park, Chang-Reung;Choo, Eun-Sang;Lee, Jun-Young
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.4
    • /
    • pp.640-645
    • /
    • 2015
  • A near-surface transit system should be constructed at a lower installation cost than light rail transit with elevated track, so the distance between two substations is longer than in a conventional system. The long feeding distance results in a severe voltage drop. This paper proposes a PWM rectifier instead of a diode rectifier. The PWM rectifier has several advantages: it can regulate its output voltage constantly to reduce the voltage drop, and it can use regenerative power without an additional inverter. This paper analyzes the improved voltage profile and the cost of a substation with a PWM rectifier. The voltage profile analysis uses PSIM, and the installation cost of a substation with a PWM rectifier is compared to that of a substation with a diode rectifier.
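  • As a back-of-the-envelope illustration of why the feeding distance matters (all numbers below are arbitrary assumptions, not values from the paper): the pantograph voltage is roughly the sending-end voltage minus the train current times the line resistance accumulated over the feeding distance, and a regulated PWM rectifier can hold the sending-end voltage up where a diode rectifier's output sags with load.

    # Rough illustration with assumed numbers (not from the paper).
    def train_voltage(v_substation, current_a, r_ohm_per_km, distance_km):
        """Pantograph voltage after the resistive drop along the feeder and rail."""
        return v_substation - current_a * r_ohm_per_km * distance_km

    I, R = 2000.0, 0.03          # train current [A], line resistance [ohm/km]
    for d in (1.0, 2.0, 3.0):    # feeding distance [km]
        diode = train_voltage(1500.0 - 0.05 * I, I, R, d)  # diode output sags with load
        pwm   = train_voltage(1500.0,            I, R, d)  # PWM output held at nominal
        print(f"{d:.0f} km: diode {diode:.0f} V, PWM {pwm:.0f} V")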

A study on the Cost-effective Architecture Design of High-speed Soft-decision Viterbi Decoder for Multi-band OFDM Systems (Multi-band OFDM 시스템용 고속 연판정 비터비 디코더의 효율적인 하드웨어 구조 설계에 관한 연구)

  • Lee, Seong-Joo
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.11 s.353
    • /
    • pp.90-97
    • /
    • 2006
  • In this paper, we present a cost-effective architecture for a high-speed soft-decision Viterbi decoder for Multi-band OFDM (MB-OFDM) systems. In modem designs for MB-OFDM systems, a parallel processing architecture is generally used for reliable hardware implementation, because the systems should support a very high data rate of up to 480 Mbps. A Viterbi decoder should likewise be designed with a parallel processing structure and support a very high data rate. Therefore, we present an optimized hardware architecture for a 4-way parallel processing Viterbi decoder. In order to optimize the hardware of the Viterbi decoder, we compare and analyze various ACS architectures and find the optimal one among them with respect to hardware complexity and operating frequency. The Viterbi decoder with the optimal hardware architecture is designed and verified using Verilog HDL and synthesized into gate-level circuits with a TSMC 0.13 um library. The synthesis results show that the Viterbi decoder contains about 280K gates and works properly at the speed required in MB-OFDM systems.
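  • For readers unfamiliar with the ACS kernel that such a decoder parallelizes, the Python sketch below shows one soft-decision add-compare-select step for a rate-1/2, constraint-length-7 convolutional code (generators 171/133 octal, a common choice). It is a generic textbook recursion for illustration, not the paper's 4-way circuit, and the generator choice is an assumption.

    # Generic soft-decision ACS recursion (illustrative; not the paper's architecture).
    def acs_step(path_metrics, soft_llrs, K=7, gens=(0o171, 0o133)):
        """One trellis stage: returns updated path metrics and survivor states."""
        n_states = 1 << (K - 1)
        new_pm = [float("-inf")] * n_states
        survivor = [0] * n_states
        for nxt in range(n_states):
            bit = nxt & 1                          # input bit that produces state nxt
            for hi in (0, 1):                      # the two possible predecessor states
                prev = (nxt >> 1) | (hi << (K - 2))
                reg = (prev << 1) | bit            # encoder register for this branch
                # Correlation branch metric: +LLR where the coded bit is 1, -LLR where 0.
                bm = sum(llr if (bin(reg & g).count("1") & 1) else -llr
                         for g, llr in zip(gens, soft_llrs))
                cand = path_metrics[prev] + bm     # add
                if cand > new_pm[nxt]:             # compare
                    new_pm[nxt] = cand             # select
                    survivor[nxt] = prev
        return new_pm, survivor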

High-Performance Hardware Architecture for Stereo Matching (스테레오 정합을 위한 고성능 하드웨어 구조)

  • Seo, Young-Ho;Kim, Woo-Youl;Lee, Yoon-Hyuk;Koo, Ja-Myung;Kim, Bo-Ra;Kim, Yoon-Ju;An, Ho-Myung;Choi, Hyun-Jun;Kim, Dong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.635-637
    • /
    • 2013
  • This paper proposes a new hardware architecture for real-time stereo matching. We minimized the amount of calculation and the number of memory accesses by analyzing the computation of stereo matching. From this, we propose a new stereo matching cell and a hardware architecture that expands it in parallel to concurrently calculate the cost function for all pixels in a search range. We then extend this architecture to calculate the cost function over a two-dimensional region. The implemented hardware operates at a minimum clock frequency of 250 MHz in an FPGA environment and achieves 813 fps with a search range of 64 pixels and an image size of 640×480.
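  • A software analogue of the cost computation that such hardware parallelizes is a window-based cost volume evaluated for every disparity in the search range. The NumPy/SciPy sketch below is illustrative only; the paper's actual cost function and window are not given in the abstract, so SAD over a 9x9 window is an assumption.

    # Illustrative SAD cost volume (assumed cost function; not the paper's exact one).
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sad_cost_volume(left, right, max_disp=64, win=9):
        """Cost for every pixel and every disparity in the search range,
        plus the winner-take-all disparity map."""
        h, w = left.shape
        cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
        for d in range(max_disp):
            diff = np.abs(left[:, d:].astype(np.float32) -
                          right[:, :w - d].astype(np.float32))
            # Windowed mean of absolute differences (proportional to SAD).
            cost[d, :, d:] = uniform_filter(diff, size=win)
        return cost, cost.argmin(axis=0)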


A Study on The Conversion Factor between Heterogeneous DBMS for Cloud Migration

  • Joonyoung Ahn;Kijung Ryu;Changik Oh;Taekryong Han;Heewon Kim;Dongho Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2450-2463
    • /
    • 2024
  • Many legacy information systems are currently being migrated to the cloud. This is due to the advantage of being able to respond flexibly to changes in user needs and the system environment while reducing the initial investment cost of IT infrastructure such as servers and storage. The infrastructure of an information system migrated to the cloud is integrated through API connections, while internally being subdivided using a microservice architecture (MSA). The DBMS (Database Management System) also tends to grow larger after cloud migration. In most layers of the application architecture, scale can be measured and calculated from an auto-scaling perspective, but no standardized methodology has been established for hardware scale calculation for the DBMS. If there is an error in the hardware scale calculation of the DBMS, problems such as poor performance of the information system or excessive auto-scaling may occur. In addition, evaluating hardware size is all the more crucial because it also affects the financial cost of the migration. The CPU is the factor with the greatest influence on the hardware scale calculation of a DBMS. Therefore, this paper aims to calculate a conversion factor for CPU scale calculation that will facilitate cloud migration between heterogeneous DBMSs. To do so, we utilize the concepts and definitions of hardware capacity planning and scale calculation in on-premise information systems. Methods to calculate the conversion factor using TPC-H tests are proposed and verified. In the future, further research and testing should be conducted on segmented CPU sizes and more heterogeneous DBMSs to demonstrate the effectiveness of the proposed test model.
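  • The abstract does not spell out the formula, but one plausible way a TPC-H-derived conversion factor could be applied is sketched below: normalize each DBMS's measured TPC-H throughput per core and size the target CPU from the source CPU accordingly. This is purely illustrative; the metric names, headroom factor, and numbers are assumptions, not the paper's results.

    # Hypothetical illustration of applying a CPU conversion factor (not the paper's data).
    import math

    def conversion_factor(source_qphh_per_core, target_qphh_per_core):
        """How many target-DBMS cores are needed per source-DBMS core."""
        return source_qphh_per_core / target_qphh_per_core

    def target_cores(source_cores, factor, headroom=1.2):
        """Size the migrated DBMS server, keeping some capacity headroom."""
        return math.ceil(source_cores * factor * headroom)

    # Assumed example figures for two unnamed DBMS engines:
    f = conversion_factor(source_qphh_per_core=1200.0, target_qphh_per_core=900.0)
    print(f, target_cores(source_cores=16, factor=f))   # ~1.33 -> 26 cores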

Enhancing GPU Performance by Efficient Hardware-Based and Hybrid L1 Data Cache Bypassing

  • Huangfu, Yijie;Zhang, Wei
    • Journal of Computing Science and Engineering
    • /
    • v.11 no.2
    • /
    • pp.69-77
    • /
    • 2017
  • Recent GPUs have adopted cache memory to benefit general-purpose GPU (GPGPU) programs. However, unlike CPU programs, GPGPU programs typically have considerably less temporal/spatial locality. Moreover, the L1 data cache is shared by many threads that access a data set typically much larger than the L1 cache, making it critical to bypass the L1 data cache intelligently to enhance GPU cache performance. In this paper, we examine GPU cache access behavior and propose a simple hardware-based GPU cache bypassing method that can be applied to GPU applications without recompiling programs. Moreover, we introduce a hybrid method that integrates static profiling information with hardware-based bypassing to further enhance performance. Our experimental results reveal that hardware-based cache bypassing can boost performance for most benchmarks, and the hybrid method can achieve performance comparable to state-of-the-art compiler-based bypassing with considerably less profiling cost.
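  • The abstract does not detail the bypass policy, so the sketch below is only a generic illustration of the kind of hardware-style heuristic such work builds on: a small table of saturating reuse counters, indexed by the load instruction's PC, that routes low-reuse loads around the L1. The threshold and counter width are arbitrary assumptions.

    # Generic illustration of a reuse-counter bypass heuristic (not the paper's mechanism).
    class BypassPredictor:
        def __init__(self, threshold=1, max_count=3):
            self.counters = {}            # load PC -> saturating reuse counter
            self.threshold = threshold
            self.max_count = max_count

        def should_bypass(self, pc):
            """Bypass the L1 for loads whose past lines showed little reuse."""
            return self.counters.get(pc, self.max_count) < self.threshold

        def on_line_evicted(self, pc, reused):
            """Train on eviction: was the line touched again after the fill?"""
            c = self.counters.get(pc, self.max_count)
            c = min(c + 1, self.max_count) if reused else max(c - 1, 0)
            self.counters[pc] = c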