• Title/Summary/Keyword: hardware cost

A Study on The Marketing Strategy of IoT (Internet of Things)-based Smart Home Service Companies Focusing on The Case of Xiaomi

  • Liang, Jinle;Kang, Min Jung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.20-25
    • /
    • 2021
  • Against the background of the rapid development of the IoT, the smart home business is becoming increasingly important to technology companies. Compared with conventional homes, a smart home provides a safe, comfortable, high-quality, high-performance living space while also responding to the global low-carbon, eco-friendly trend. The growth drivers of the smart home market are the increasing number of Internet users, rising disposable income in developing countries, the growing importance of remote home monitoring, and the increasing need for energy saving and low carbon emissions. In 2013-2014, Xiaomi launched a series of smart routers and smart home hardware devices. In 2015, it announced the latest product of the Xiaomi Ecological Chain, the "Smart Home Package," and in 2016 it launched the MIJIA brand to invest in various smart product companies. In 2017, Xiaomi announced a plan to build an open smart hardware MIOT platform. We investigated the management strategy of Xiaomi's IoT-based smart home service. The strategy is analyzed in three parts: the cost leadership strategy, the differentiation strategy of the Xiaomi home service, and the AIoT strategy of the Xiaomi smart home.

Research Trends in Quantum Error Decoders for Fault-Tolerant Quantum Computing (결함허용 양자 컴퓨팅을 위한 양자 오류 복호기 연구 동향)

  • E.Y. Cho;J.H. On;C.Y. Kim;G. Cha
    • Electronics and Telecommunications Trends
    • /
    • v.38 no.5
    • /
    • pp.34-50
    • /
    • 2023
  • Quantum error correction is a key technology for achieving fault-tolerant quantum computation. Finding the best decoding solution to a single error syndrome pattern counteracting multiple errors is an NP-hard problem. Consequently, error decoding is one of the most expensive processes to protect the information in a logical qubit. Recent research on quantum error decoding has been focused on developing conventional and neural-network-based decoding algorithms to satisfy accuracy, speed, and scalability requirements. Although conventional decoding methods have notably improved accuracy in short codes, they face many challenges regarding speed and scalability in long codes. To overcome such problems, machine learning has been extensively applied to neural-network-based error decoding with meaningful results. Nevertheless, when using neural-network-based decoders alone, the learning cost grows exponentially with the code size. To prevent this problem, hierarchical error decoding has been devised by combining conventional and neural-network-based decoders. In addition, research on quantum error decoding is aimed at reducing the spacetime decoding cost and solving the backlog problem caused by decoding delays when using hardware-implemented decoders in cryogenic environments. We review the latest research trends in decoders for quantum error correction with high accuracy, neural-network-based quantum error decoders with high speed and scalability, and hardware-based quantum error decoders implemented in real qubit operating environments.
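
To make the idea of syndrome decoding that the survey builds on concrete, here is a toy, illustrative lookup-table decoder for the 3-qubit bit-flip repetition code; it is not one of the surveyed decoders (MWPM, union-find, neural, or hierarchical decoders for surface codes are far more involved), only a minimal sketch of mapping a measured syndrome to a lowest-weight correction.

```python
# Hypothetical toy example (not from the survey): syndrome decoding for the
# 3-qubit bit-flip repetition code via a lookup table.
import numpy as np

# Parity-check matrix: syndrome bits s0 = q0 XOR q1, s1 = q1 XOR q2.
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=int)

# Syndrome -> lowest-weight bit-flip error consistent with it.
SYNDROME_TABLE = {
    (0, 0): np.array([0, 0, 0]),  # no error
    (1, 0): np.array([1, 0, 0]),  # flip on qubit 0
    (1, 1): np.array([0, 1, 0]),  # flip on qubit 1
    (0, 1): np.array([0, 0, 1]),  # flip on qubit 2
}

def decode(error):
    """Measure the syndrome of a bit-flip pattern and return a correction."""
    syndrome = tuple(int(s) for s in H @ error % 2)
    return SYNDROME_TABLE[syndrome]

error = np.array([0, 1, 0])            # a single bit flip on qubit 1
correction = decode(error)
print((error + correction) % 2)        # -> [0 0 0]: the error is cancelled
```

Real decoders face exactly the scaling problem the abstract describes: the syndrome table grows exponentially with code size, which is why algorithmic, neural, and hierarchical decoders are needed.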

Development of Single Channel ECG Signal Based Biometrics System (단채널 심전도 기반 바이오인식 시스템 개발)

  • Gang, Gyeong-Woo;Min, Chul-Hong;Kim, Tae-Seon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.1
    • /
    • pp.1-7
    • /
    • 2012
  • Currently developed ECG (electrocardiogram)-based biometric approaches are generally unsuitable for real market applications because they require high-cost ECG monitoring devices and their measurement methods show poor usability. In this paper, we developed a lead-I-signal-based biometric system using special-purpose ECG measurement hardware. To guarantee signal quality for biometrics across the various measurement environments of everyday life, several filters are applied. In addition, to enhance usability, only two skin-contact electrodes, without a reference electrode, are used for measurement. Lead-I signals of seventeen candidates were measured with the developed hardware and features were extracted. The extracted features were applied to a support vector machine (SVM) pattern classifier for biometric identification, and the experimental results showed a sensitivity (SN) of 98.59% and an accuracy (ACC) of 97.21%. Compared with conventional ECG biometric approaches, the proposed system offers enhanced usability with low-cost measurement hardware.
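
A minimal sketch of the classification stage only, under the assumption that per-beat feature vectors have already been extracted from lead-I signals; the random placeholder data, subject counts per beat, and SVM hyperparameters below are illustrative and do not reproduce the paper's filtering or feature extraction.

```python
# Classification-stage sketch: placeholder features -> SVM pattern classifier.
# The data are random, so the printed accuracy is meaningless; only the
# pipeline shape is illustrated.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 17 subjects, 20 beats each, 12 features per beat.
n_subjects, beats_per_subject, n_features = 17, 20, 12
X = rng.normal(size=(n_subjects * beats_per_subject, n_features))
y = np.repeat(np.arange(n_subjects), beats_per_subject)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # SVM pattern classifier
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```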

Drowsy Driving Detection Algorithm Using a Steering Angle Sensor And State of the Vehicle (조향각센서와 차량상태를 이용한 졸음운전 판단 알고리즘)

  • Moon, Byoung-Joon;Yeon, Kyu-Bong;Lee, Sun-Geol;Hong, Seung-Pyo;Nam, Sang-Yep;Kim, Dong-Han
    • Journal of the Institute of Electronics Engineers of Korea IE
    • /
    • v.49 no.2
    • /
    • pp.30-39
    • /
    • 2012
  • An effective drowsy driver detection system is needed because drowsy driving carries a high probability of accident and high severity when an accident occurs. However, drowsy driver detection systems that use bio-signals or vision are difficult to deploy due to their high cost. This paper therefore proposes a drowsy driver detection algorithm that uses the steering angle sensor, which is already fitted to most vehicles at no additional cost, together with vehicle information such as the brake switch, throttle position signal, and vehicle speed. The proposed algorithm is based on a jerk criterion, one of the steering patterns characteristic of drowsy drivers. Threshold values for each variable are presented, and the proposed algorithm is evaluated using vehicle data acquired from hardware-in-the-loop simulation (HILS) over CAN communication and processed in MATLAB.
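
A rough sketch of what a jerk-criterion check gated by vehicle state could look like; the jerk definition (taken here as the third time derivative of the steering angle), the gating rules, and every threshold value are illustrative assumptions, not the thresholds reported in the paper.

```python
# Illustrative jerk-based check on the steering angle signal, gated by vehicle
# state (speed, brake, throttle). All thresholds are placeholders.
import numpy as np

def drowsy_flags(steer_deg, speed_kmh, brake_on, throttle, dt=0.01,
                 jerk_thresh=500.0, min_speed=60.0):
    """Flag samples whose steering 'jerk' (third time derivative of the
    steering angle) exceeds a threshold while the vehicle is cruising."""
    jerk = np.gradient(np.gradient(np.gradient(steer_deg, dt), dt), dt)
    cruising = ((speed_kmh > min_speed) & (~brake_on)
                & (np.abs(np.gradient(throttle, dt)) < 1.0))
    return (np.abs(jerk) > jerk_thresh) & cruising

# Example: 10 s of synthetic signals at 100 Hz, as if logged over CAN.
t = np.arange(0, 10, 0.01)
steer = 2.0 * np.sin(0.5 * t)                 # smooth, alert steering
steer[500:505] += 8.0                         # abrupt corrective input at t = 5 s
flags = drowsy_flags(steer,
                     speed_kmh=np.full_like(t, 90.0),
                     brake_on=np.zeros_like(t, dtype=bool),
                     throttle=np.full_like(t, 0.2))
print("suspicious samples:", int(flags.sum()))
```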

A hardware architecture based on the NCC algorithm for fast disparity estimation in 3D shape measurement systems (고밀도 3D 형상 계측 시스템에서의 고속 시차 추정을 위한 NCC 알고리즘 기반 하드웨어 구조)

  • Bae, Kyeong-Ryeol;Kwon, Soon;Lee, Yong-Hwan;Lee, Jong-Hun;Moon, Byung-In
    • Journal of Sensor Science and Technology
    • /
    • v.19 no.2
    • /
    • pp.99-111
    • /
    • 2010
  • This paper proposes an efficient hardware architecture for estimating disparities between 2D images to generate 3D depth images in a stereo vision system. Stereo matching methods are classified into global and local methods. Local matching methods use cost functions computed over pixel windows, such as SAD (sum of absolute differences), SSD (sum of squared differences), and NCC (normalized cross correlation). The NCC-based cost function is less susceptible than subtraction-based functions such as SAD and SSD to differences in noise and lighting conditions between the left and right images, and for this reason NCC is preferred. However, software-based implementations are not adequate for NCC-based real-time stereo matching because of its numerous complex operations. Therefore, we propose a fast pipelined hardware architecture suitable for real-time evaluation of the NCC function. By adopting a block-based box-filtering scheme to perform NCC operations in parallel, the proposed architecture improves processing speed compared with previous work. In this architecture, processing all pixels takes almost the same number of cycles irrespective of the window size. Simulation results also show that its disparity estimation has a low error rate.
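
The following is a naive software sketch of the window-based NCC cost function that the hardware accelerates; the per-pixel loops are exactly the kind of computation that is too slow in software, and the paper's block-based box-filtering pipeline is not reproduced here. Window size and disparity range are illustrative.

```python
# Winner-takes-all disparity estimation using windowed NCC scores (naive loops).
import numpy as np

def ncc(a, b, eps=1e-6):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def disparity_map(left, right, max_disp=16, half_win=3):
    """Per-pixel disparity that maximizes the NCC score over a square window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half_win, h - half_win):
        for x in range(half_win + max_disp, w - half_win):
            ref = left[y - half_win:y + half_win + 1,
                       x - half_win:x + half_win + 1]
            best_d, best_score = 0, -2.0
            for d in range(max_disp):
                cand = right[y - half_win:y + half_win + 1,
                             x - d - half_win:x - d + half_win + 1]
                score = ncc(ref, cand)
                if score > best_score:
                    best_d, best_score = d, score
            disp[y, x] = best_d
    return disp

rng = np.random.default_rng(1)
left = rng.random((40, 60))
right = np.roll(left, -3, axis=1)              # ground-truth disparity of 3 px
print(disparity_map(left, right)[20, 30:34])   # expected: [3 3 3 3]
```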

The Study on matrix based high performance pattern matching by independence partial match (독립 부분 매칭에 의한 행렬 기반 고성능 패턴 매칭 방법에 관한 연구)

  • Jung, Woo-Sug;Kwon, Taeck-Geun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.9B
    • /
    • pp.914-922
    • /
    • 2009
  • In this paper, we propose a matrix-based real-time pattern matching method, called MDPI, for real-time intrusion detection on multi-gigabit-per-second network traffic. In particular, to minimize the overhead of buffering, reordering, and reassembly when the incoming packet sequence is disrupted, MDPI adopts independent partial matching on the pattern matching matrix. In experiments on the Snort rule set, whose average pattern length was 9 bytes, MDPI improved TCAM memory efficiency by 61% and 50% for w=4 bytes and w=8 bytes, respectively. Moreover, MDPI achieved a pattern scan speed of 10.941 Gbps with a hardware resource consumption of 5.79 LC/char for pattern classification, which represents strong performance relative to hardware complexity. By lowering hardware cost through this increased TCAM memory efficiency, MDPI proves to be a cost-effective, high-performance intrusion detection technique.
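
As a heavily simplified software analogy of independent partial matching (not the MDPI matrix/TCAM hardware design), the sketch below splits a signature into w-byte sub-patterns, scans each packet independently using its absolute stream offset, and assembles full matches from the recorded partial matches, so out-of-order packets need no reassembly. The signature, packets, and alignment of sub-pattern boundaries to packet boundaries are all illustrative assumptions.

```python
# Independent partial matching, illustrated in plain Python.

def split_pattern(pattern: bytes, w: int = 4):
    return [pattern[i:i + w] for i in range(0, len(pattern), w)]

def scan_packet(payload: bytes, offset: int, subpatterns):
    """Record (sub-pattern index, absolute position) hits for one packet."""
    hits = []
    for idx, sub in enumerate(subpatterns):
        pos = payload.find(sub)
        while pos != -1:
            hits.append((idx, offset + pos))
            pos = payload.find(sub, pos + 1)
    return hits

def has_full_match(hits, subpatterns):
    """True if consecutive sub-patterns were seen at consecutive offsets."""
    positions = {}
    for idx, pos in hits:
        positions.setdefault(idx, set()).add(pos)
    for start in positions.get(0, set()):
        pos, ok = start, True
        for idx, sub in enumerate(subpatterns):
            if pos not in positions.get(idx, set()):
                ok = False
                break
            pos += len(sub)
        if ok:
            return True
    return False

subs = split_pattern(b"/etc/passwd", w=4)                 # placeholder signature
packets = [(22, b"swd HTTP/1.1"), (10, b"GET /etc/pas")]  # arrives out of order
hits = [h for off, data in packets for h in scan_packet(data, off, subs)]
print(has_full_match(hits, subs))                          # -> True
```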

A Study on the Hardware Cost Estimation Equation of Professional Service Robot (전문서비스 로봇 하드웨어 비용추정 관계식 개발에 관한 연구)

  • Lee, Jungsoo;Min, Jeongtack;Choi, Yeon-Seo;Park, Myeongjun;Sohn, Dongseop
    • Journal of Digital Convergence
    • /
    • v.16 no.7
    • /
    • pp.89-96
    • /
    • 2018
  • In this paper, we propose a parametric method for estimating the hardware (H/W) cost of professional service robots using development data collected in Korea. In addition, we derive factors and weights that allow the estimated cost to be adjusted for the robot's application environment. For the analysis, we developed a cost estimation equation for professional service robots using the parametric method and derived the environmental adjustment factors and their weights through focus group interviews (FGI) and the Delphi method. The resulting cost estimation equation reflects weight, volume, and manufacturing difficulty, and the relational equation further reflects environmental factors (dust/water, heat/cold, safety, test, technology innovation). This provides an objective basis for estimating the cost of professional service robots and will support ongoing research on estimating their H/W development cost. In future work, we will improve reliability by collecting more data and strengthen the model by identifying additional functional factors.
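
A hedged sketch of a parametric cost estimating relationship (CER) of the general kind described above: a baseline regression on weight, volume, and manufacturing difficulty, scaled by environment adjustment factors. All records, coefficients, functional form, and factor values below are fictitious placeholders, not the paper's equation.

```python
# Fictitious parametric H/W cost model: log-linear baseline plus adjustment factors.
import numpy as np

# Placeholder development records: [weight_kg, volume_m3, difficulty 1-5] -> cost (KRW)
X = np.array([[30, 0.20, 2], [55, 0.35, 3], [80, 0.60, 4], [120, 0.90, 5]], float)
y = np.array([40e6, 75e6, 130e6, 220e6])

# Fit log(cost) = b0 + b1*log(weight) + b2*log(volume) + b3*difficulty
A = np.column_stack([np.ones(len(X)), np.log(X[:, 0]), np.log(X[:, 1]), X[:, 2]])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)

def estimate_cost(weight, volume, difficulty, env_factors=()):
    """Baseline CER estimate multiplied by environment adjustment factors
    (e.g., dust/water, heat/cold, safety, test, technology innovation)."""
    base = np.exp(coef @ np.array([1.0, np.log(weight), np.log(volume), difficulty]))
    for factor in env_factors:
        base *= factor
    return base

# Example: a 60 kg, 0.4 m^3 robot of medium difficulty in a dusty, hot environment.
print(f"{estimate_cost(60, 0.4, 3, env_factors=(1.10, 1.05)):,.0f} KRW")
```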

A Hardware Implementation of Image Scaler Based on Area Coverage Ratio (면적 점유비를 이용한 영상 스케일러의 설계)

  • 성시문;이진언;김춘호;김이섭
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.3
    • /
    • pp.43-53
    • /
    • 2003
  • Unlike analog display devices, digital display devices have a physical screen resolution that is fixed at manufacturing time, which is a weakness because the resolution of the images to be displayed varies. Interpolation or decimation is therefore needed to match the input pixels to the screen resolution; this process is called image scaling. Much research has been devoted to reducing the hardware cost and image distortion of image scaling algorithms. In this paper, we propose the Winscale algorithm, which recasts continuous-domain scale-up/down as scale-up/down in the discrete domain and is therefore well suited to digital display devices. The image scaler was implemented in hardware using Verilog-XL, and a chip was fabricated in a 0.5 µm Samsung SOG technology. Hardware cost and scalability are compared with conventional image scaling algorithms used in existing software. Winscale proves more scalable than other image scaling algorithms of similar H/W cost and can be used in various digital display devices that require image scaling.
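
The following is a minimal sketch of area-coverage averaging in the spirit of the area coverage ratio described above: each output pixel averages the input pixels it overlaps, weighted by the overlapped area. It is a plain software illustration under the assumption of downscaling only, not the paper's discrete-domain Winscale formulation or its hardware implementation.

```python
# Area-coverage downscaling: output pixels as area-weighted averages of inputs.
import numpy as np

def area_downscale(img, out_h, out_w):
    in_h, in_w = img.shape
    sy, sx = in_h / out_h, in_w / out_w       # output-pixel footprint in input pixels
    out = np.zeros((out_h, out_w), dtype=np.float64)
    for oy in range(out_h):
        for ox in range(out_w):
            y0, y1 = oy * sy, (oy + 1) * sy
            x0, x1 = ox * sx, (ox + 1) * sx
            acc, area = 0.0, 0.0
            for iy in range(int(np.floor(y0)), int(np.ceil(y1))):
                for ix in range(int(np.floor(x0)), int(np.ceil(x1))):
                    # Overlap area between input pixel (iy, ix) and the footprint.
                    wy = min(y1, iy + 1) - max(y0, iy)
                    wx = min(x1, ix + 1) - max(x0, ix)
                    acc += img[iy, ix] * wy * wx
                    area += wy * wx
            out[oy, ox] = acc / area
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
print(area_downscale(img, 2, 2))   # each output pixel averages one 2x2 block
```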

Purposes, Results, and Types of Software Post Life Cycle Changes

  • Koh, Seokha;Han, Man Pil
    • Journal of Information Technology Applications and Management
    • /
    • v.22 no.3
    • /
    • pp.143-167
    • /
    • 2015
  • This paper addresses how the total life cycle cost of software may be minimized and how that cost should be allocated between the acquirer and the developer. It differentiates post life cycle change (PLCC) endeavors from PLCC activities, rigorously classifies PLCC endeavors according to their results, and rigorously defines the life cycle cost of a software product. The paper reviews classical definitions of software 'maintenance' types and proposes a new typology of PLCC activities. The proposed classification schemes are exhaustive and mutually exclusive, and provide a new paradigm for reviewing the existing literature on software cost estimation, software 'maintenance,' software evolution, and software architecture from a new perspective. The paper argues that the long-term interest of the acquirer is not properly protected because the warranty period is typically too short and because warranty service focuses mainly on removing easily detected defects. Based on the observation that software defects are caused solely by errors committed by the developer, whereas hardware defects are often induced by use (for this reason the paper cautiously proposes not to use the term 'maintenance' at all for software), it argues that the cost of removing software defects should not be borne by the acquirer.

A SOC Design Methodology using SystemC (SystemC를 이용한 SOC 설계 방법)

  • 홍진석;김주선;배점한
    • Proceedings of the IEEK Conference
    • /
    • 2000.06b
    • /
    • pp.153-156
    • /
    • 2000
  • This paper presents an SoC design methodology using the newly emerging SystemC. The suggested methodology first uses SystemC to define blocks from a previously developed system-level algorithm, keeping internal behavior and interfaces separate, and to validate the functionality of the described blocks when they are integrated. Next, the partitioning between software and hardware is considered. For software, the interface to hardware is described cycle-accurately and the remaining internal behavior in conventional ways; for hardware, I/O transactions are refined gradually through several abstraction levels and internal behavior is described on a function basis. Once hardware and software are functionally complete, system performance analysis is performed on the resulting model with assumed performance factors, and the results feed back into decisions such as optimal algorithm selection and partitioning. The analysis also provides constraint information when the hardware description undergoes scheduling and fixed-point transformation, either with automatic translation tools or manually. The methodology enables C/C++ program developers and VHDL/Verilog users to migrate quickly to a co-design and co-verification environment and is suitable for low-cost SoC development.
