• Title/Abstract/Keyword: embedded computing

An exploratory study of stress wave communication in concrete structures

  • Ji, Qing;Ho, Michael;Zheng, Rong;Ding, Zhi;Song, Gangbing
    • Smart Structures and Systems / Vol. 15, No. 1 / pp.135-150 / 2015
  • Large concrete structures are prone to cracks and damage over time from human usage, weather, and other environmental attacks such as floods, earthquakes, and hurricanes. The health of concrete structures should be monitored regularly to ensure safety. A reliable method of real-time communication can facilitate more frequent structural health monitoring (SHM) updates from hard-to-reach positions, enabling detection of cracks in embedded concrete structures as they occur, so that catastrophic failures can be avoided. By implementing an unconventional mode of communication that uses guided stress waves traveling along the concrete structure itself, we may be able to free structural health monitoring from the costly (re-)installation of communication wires. In stress-wave communication, piezoelectric transducers can act as actuators and sensors to send and receive modulated signals carrying concrete status information. The new generation of lead zirconate titanate (PZT) based smart aggregates causes multipath propagation in the homogeneous concrete channel, which presents both an opportunity and a challenge for multi-sensor communication. We propose time reversal based pulse position modulation (TR-PPM) for stress wave communication within the concrete structure to combat multipath channel dispersion. Experimental results demonstrate successful transmission and recovery of TR-PPM using stress waves. Compared with PPM, TR-PPM achieves a higher data rate and a longer link distance. Furthermore, TR-PPM remains effective under a low signal-to-noise ratio (SNR). This work also lays the foundation for implementing multiple-input multiple-output (MIMO) stress wave communication networks in concrete channels.
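
To make the time-reversal pre-filtering idea concrete, the following Python sketch (a simplified illustration; the slot sizes, pulse shape, and channel impulse response are assumptions, not the paper's parameters) modulates bits with binary PPM and convolves the pulse train with the time-reversed channel impulse response, so that the multipath channel itself re-compresses the pulse at the receiver.

```python
import numpy as np

def tr_ppm_transmit(bits, channel_ir, slots_per_symbol=4, slot_len=32):
    """Time-reversal pulse position modulation (conceptual sketch).

    Each bit selects a pulse position inside its symbol frame; the frame is
    then convolved with the time-reversed channel impulse response so that
    the physical multipath channel compresses the pulse back at the receiver.
    """
    frame_len = slots_per_symbol * slot_len
    tx = np.zeros(len(bits) * frame_len)
    for i, b in enumerate(bits):
        # bit 0 -> first slot, bit 1 -> middle slot (binary PPM)
        slot = 0 if b == 0 else slots_per_symbol // 2
        tx[i * frame_len + slot * slot_len] = 1.0
    # Time-reversal pre-filter: convolve with the reversed impulse response
    return np.convolve(tx, channel_ir[::-1])

# Toy multipath impulse response (assumed, for illustration only)
h = np.array([1.0, 0.0, 0.6, 0.0, 0.0, 0.3])
signal = tr_ppm_transmit([1, 0, 1, 1], h)
# Passing the pre-filtered signal through the same channel ideally yields
# sharp, well-separated peaks at the modulated pulse positions.
received = np.convolve(signal, h)
```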

스몰 딥러닝을 이용한 아스팔트 도로 포장의 균열 탐지에 관한 연구 (A Study on Crack Detection in Asphalt Road Pavement Using Small Deep Learning)

  • 지봉준
    • 한국지반환경공학회 논문집 / Vol. 22, No. 10 / pp.13-19 / 2021
  • Cracks in asphalt pavement arise from weather changes and impacts from vehicles; if left unattended, they shorten the pavement's service life and can cause various accidents. Research has therefore continued on automatically detecting cracks from images so that cracks in asphalt road pavement can be found quickly and repaired. In particular, many recent studies use Convolutional Neural Networks to detect cracks in asphalt road pavement, but their practical use is limited because they require high-performance computing power. This paper therefore proposes a framework for developing a crack detection model for asphalt road pavement using a small deep learning model that can be deployed on mobile devices. In a case study, the proposed small deep learning model was compared with general deep learning models and, despite having relatively few parameters, showed performance similar to theirs. The developed model is expected to be embedded in mobile devices or IoT devices.
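
As an illustration of what such a "small" model can look like, here is a minimal Keras sketch that uses depthwise-separable convolutions to keep the parameter count low; the layer sizes and input resolution are illustrative assumptions, not the architecture developed in the paper.

```python
import tensorflow as tf

def build_small_crack_cnn(input_shape=(224, 224, 3)):
    """Small binary crack/no-crack classifier using depthwise-separable convs."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.SeparableConv2D(32, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.SeparableConv2D(64, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # crack probability
    ])

model = build_small_crack_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(pavement_images, labels, ...)  # trained on labeled pavement patches
```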

Design and Evaluation of the Internet-Of-Small-Things Prototype Powered by a Solar Panel Integrated with a Supercapacitor

  • Park, Sangsoo
    • 한국컴퓨터정보학회논문지 / Vol. 26, No. 11 / pp.11-19 / 2021
  • This paper proposes a prototype platform that integrates a supercapacitor into the power management system as an auxiliary power storage device; the supercapacitor complements the drawbacks of rechargeable batteries, supporting rapid charging and discharging, high power efficiency, and a semi-permanent charge/discharge cycle life. For this platform, we developed a technique that detects, through an interrupt connected to the microcontroller, whether the power supplied by the solar panel has been cut off or restored as the physical environment changes. To prevent data loss in a computing environment where a continuous power supply is not guaranteed, we implemented low-level system software on the microcontroller that migrates the program context and data in volatile memory to non-volatile memory when power is cut off. Experiments verified that the supercapacitor effectively provides temporary power as an auxiliary power storage device, and various benchmarks confirmed that the power-state detection and the migration of program context and data from volatile to non-volatile memory incur low overhead.
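
The checkpointing idea can be sketched in plain Python as follows (a conceptual stand-in: on the actual microcontroller the power-loss event would arrive as a hardware interrupt from the supply-monitoring circuit, and the "non-volatile memory" here is just a file; all function and variable names are hypothetical).

```python
import json

NONVOLATILE_PATH = "checkpoint.json"  # stands in for FRAM/flash on the MCU

# Volatile program context (loop counters, sensor buffers, ...)
context = {"sample_index": 0, "readings": []}

def on_power_loss():
    """Called when the supply interrupt fires: persist the volatile context."""
    with open(NONVOLATILE_PATH, "w") as f:
        json.dump(context, f)

def on_power_restore():
    """Called when power returns: restore the context and resume."""
    global context
    try:
        with open(NONVOLATILE_PATH) as f:
            context = json.load(f)
    except FileNotFoundError:
        pass  # cold boot: keep the fresh context

# Main-loop sketch: sample, then simulate a power-loss/restore cycle.
for i in range(10):
    context["sample_index"] = i
    context["readings"].append(i * 0.1)
on_power_loss()
context = {"sample_index": 0, "readings": []}   # power lost: RAM contents gone
on_power_restore()                               # context recovered from "NVM"
```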

산업용 사물인터넷의 글로벌 기업 동향 연구 (A Study on the Global Companies Trend of Industrial Internet of Things)

  • 김홍한;송성일
    • 한국정보전자통신기술학회논문지 / Vol. 12, No. 4 / pp.387-394 / 2019
  • The purpose of this study is to determine whether the most influential companies in the industrial Internet of Things secure top competitiveness by exercising their capabilities in each field. Based on data published by IoTONE, we surveyed the operational status, IoT platforms, and characteristics of the companies selected in each field, and analyzed and confirmed what strengths they have. The industrial IoT is emerging not only as a means of turning manufacturing plants into competitive smart factories but also as an essential element across the entire range of business activities. To apply the industrial IoT throughout an enterprise, an appropriate platform must be adopted, and many companies around the world are proposing industrial IoT platforms. We analyzed the strengths of the most influential companies in each industrial IoT field, namely connected machines, cybersecurity, analytics platforms, embedded computing, platform connectivity, and hardware connectivity, and examined trends such as their operational status and platform characteristics.

A Worker-Driven Approach for Opening Detection by Integrating Computer Vision and Built-in Inertia Sensors on Embedded Devices

  • Anjum, Sharjeel;Sibtain, Muhammad;Khalid, Rabia;Khan, Muhammad;Lee, Doyeop;Park, Chansik
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp.353-360 / 2022
  • Due to the dense and complicated working environment, the construction industry is susceptible to many accidents. Worker falls are a severe problem at construction sites, including falls into holes or openings that lack the adequate coverings required by safety rules. During the construction or demolition of a building, openings and holes are formed in floors and roofs. Many workers neglect to cover openings for ease of work, even while being aware of the risks of holes, openings, and gaps at heights. Safety rules nevertheless require that holes and openings be covered to prevent falls. A safety inspector typically checks this by visiting the construction site, which is time-consuming and demands considerable effort from safety managers. Therefore, this study presents a worker-driven approach (the worker is involved in the reporting process) to assist safety managers, in which a mobile application integrating computer vision and built-in inertia sensors is developed to identify openings. The TensorFlow framework is used to design a Convolutional Neural Network (CNN); the designed CNN is trained on a custom dataset with the binary classes opening and covered, and deployed on an Android smartphone. When the application captures an image, the device also extracts the accelerometer values to determine the inclination, in parallel with the classification task, so that the final output is predicted as floor (opening/covered), wall (opening/covered), or roof (opening/covered). The proposed worker-driven approach will be extended to other case scenarios at the construction site.
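
A rough sketch of how an accelerometer reading could be fused with the CNN's opening/covered decision is shown below; the axis convention, thresholds, and function names are illustrative assumptions, not the application's actual logic.

```python
import math

def surface_from_accelerometer(ax, ay, az, tol_deg=30.0):
    """Infer whether the phone is aimed at a floor, wall, or roof surface.

    Uses the gravity direction in the accelerometer reading: camera facing
    down -> floor, facing up -> roof/ceiling, roughly horizontal -> wall.
    Axis convention and thresholds are assumptions for illustration.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, az / g))))
    if pitch < -90 + tol_deg:
        return "floor"
    if pitch > 90 - tol_deg:
        return "roof"
    return "wall"

def classify_capture(cnn_is_opening, accel):
    """Fuse the CNN opening/covered decision with the inclination estimate."""
    surface = surface_from_accelerometer(*accel)
    state = "opening" if cnn_is_opening else "covered"
    return f"{surface} ({state})"

print(classify_capture(True, (0.1, 0.2, -9.7)))   # e.g. "floor (opening)"
```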

사물인터넷 기반의 헬스케어 시스템의 종단간 보안성 분석 (Analyses of Security into End-to-End Point Healthcare System based on Internet of Things)

  • 김정태
    • 예술인문사회 융합 멀티미디어 논문지 / Vol. 7, No. 6 / pp.871-880 / 2017
  • Recently, services using the Internet have been evolving by combining and converging into hyper-connected structures. Such an Internet of Things network consists of heterogeneous devices, such as conventional sensor nodes, devices, and end-point terminals, and is realized by converting between different kinds of protocols. A representative example is the healthcare system: by using the Internet of Things, medical information can be delivered very quickly among medical devices, patients, and doctors, with advantages in mobility and ease of management. However, when such an IoT network is used, hardware constraints at the sensor nodes, such as small memory capacity, low computing power, and low power budgets, make it impossible to embed a conventional cryptographic engine. Implementing existing standard algorithms is difficult with current technology because of these hardware constraints, and these limitations leave security vulnerabilities. Many researchers are currently focusing on lightweight algorithms and low-power circuit design. This paper therefore analyzes the structure of a typical healthcare system and examines the security issues and problems in an end-to-end healthcare system based on the Internet of Things.

Addressing Inter-floor Noise Issues in Apartment Buildings using On-Sensor AI Embedded with TinyML on Ultra-Low-Power Systems

  • Jae-Won Kwak;In-Yeop Choi
    • 한국컴퓨터정보학회논문지 / Vol. 29, No. 3 / pp.75-81 / 2024
  • This paper presents a method for handling the inter-floor noise problem in real time by deploying TinyML (Tiny Machine Learning), which includes a deep learning model, on an ultra-low-power system. This is possible because model light-weighting techniques allow even systems with small computing resources to perform inference on their own. Previous approaches to the inter-floor noise problem sent the data collected by sensors to a server, where the data were analyzed and then acted upon. However, such centralized processing is expensive and complicated to build, and real-time processing is difficult. This paper resolves these limitations with On-Sensor AI (Artificial Intelligence) using TinyML. The proposed method is simple to install, low-cost, and able to process the problem in real time.
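
One common light-weighting path consistent with the TinyML approach described above is post-training full-integer quantization with the TensorFlow Lite converter, sketched below; the model, feature size, and class count are placeholders, not the paper's actual network.

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for the trained inter-floor-noise classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),             # e.g. vibration/audio features
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # noise-event classes (assumed)
])

def representative_data():
    # A few calibration samples for full-integer quantization (random here).
    for _ in range(100):
        yield [np.random.rand(1, 128).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
open("noise_model_int8.tflite", "wb").write(tflite_model)  # flashed to the MCU
```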

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems / Vol. 20, No. 2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by the sensors usually contain privacy information. Various technical characteristics of context-aware applications have more troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing. Existing studies have focused only on a small subset of the technical characteristics of context-aware computing; therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Consequently, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from the experts and to produce a rank-order list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in the privacy and context-aware system fields was involved in our research. The Delphi rounds faithfully followed the procedure for a Delphi study proposed by Okoli and Pawlowski. This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Following Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to do so, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidates; the sub-factors were found through a literature survey, and the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics that had a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to the extent to which users have privacy concerns.
A traditional questionnaire method was not selected because, for context-aware personalized services, users were considered to have an absolute lack of understanding of and experience with the new technology. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensory network as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, along with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. To examine the results of the sub-factor evaluation, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identical data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked at the next highest level of importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services moving toward the anywhere-anytime-any-device concept, have been regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
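
A concordance analysis of Delphi rankings is often reported with Kendall's coefficient of concordance W; the sketch below computes W for a matrix of expert rankings (the rank data are made up for illustration and are not the study's responses).

```python
import numpy as np

def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for an (experts x items) rank matrix."""
    rankings = np.asarray(rankings, dtype=float)
    m, n = rankings.shape              # m experts ranking n factors
    rank_sums = rankings.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical ranks from 4 experts over 5 privacy-concern factors (1 = most important)
ranks = [
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
]
print(round(kendalls_w(ranks), 3))  # closer to 1.0 means stronger consensus
```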

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 1993년도 Fifth International Fuzzy Systems Association World Congress 93 / pp.975-976 / 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D, Then Do E and Then Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B, Then Do E. With this format, the chip takes two inputs and produces one output. We have built two VMEbus board systems based on this chip for Oak Ridge National Laboratory (ORNL). A board is now installed in a robot at ORNL, and researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer: the programmer treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality; many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach of the kind developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so tailoring an embedded processor for fuzzy control in this way is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.
Table I. Inference time for 51 rules:
  • 6000 inferences: 125 s (MIPS R3000, regular); 49 s (MIPS R3000 with min/max); 0.0038 s (ASIC)
  • 1 inference: 20.8 ms; 8.2 ms; 6.4 µs
  • FLIPS: 48; 122; 156,250
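
For reference, the max-min (Mamdani) inference with centroid defuzzification that the chips implement in hardware can be expressed in a few lines of Python; the fuzzy sets below are discretized to 64-element arrays as in the timing experiment, but the rule base is a made-up example, not the chip's rule memory contents.

```python
import numpy as np

U = np.linspace(0.0, 1.0, 64)  # discretized universe, 64 elements per fuzzy set

def tri(a, b, c):
    """Triangular membership function sampled on U."""
    return np.maximum(np.minimum((U - a) / (b - a + 1e-9), (c - U) / (c - b + 1e-9)), 0.0)

# Example rule base: IF A_i AND B_i THEN E_i (antecedents/consequents are made up)
rules = [
    (tri(0.0, 0.2, 0.4), tri(0.0, 0.3, 0.5), tri(0.0, 0.2, 0.4)),
    (tri(0.3, 0.5, 0.7), tri(0.4, 0.6, 0.8), tri(0.3, 0.5, 0.7)),
    (tri(0.6, 0.8, 1.0), tri(0.5, 0.7, 1.0), tri(0.6, 0.8, 1.0)),
]

def mamdani_infer(x, y):
    """Max-min compositional inference with centroid defuzzification."""
    agg = np.zeros_like(U)
    for A, B, E in rules:
        # Firing strength: min of the antecedent memberships at the crisp inputs
        w = min(np.interp(x, U, A), np.interp(y, U, B))
        agg = np.maximum(agg, np.minimum(w, E))   # clip consequent, aggregate by max
    return (agg * U).sum() / (agg.sum() + 1e-9)   # centroid defuzzification

print(mamdani_infer(0.25, 0.35))
```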

NAND 플래시 메모리 저장 장치에서 블록 재활용 기법의 비용 기반 최적화 (Cost-based Optimization of Block Recycling Scheme in NAND Flash Memory Based Storage System)

  • 이종민;김성훈;안성준;이동희;노삼혁
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol. 13, No. 7 / pp.508-519 / 2007
  • Flash memory, used as the storage device of mobile devices, is now expanding its range of application to notebook computers in the form of SSDs (Solid State Disks). Flash memory has advantages in weight, shock resistance, and power consumption, but it also has drawbacks such as the erase-before-write property. To overcome these drawbacks, flash-memory-based storage devices require special address-mapping software called an FTL (Flash-memory Translation Layer), and the FTL often has to perform merge operations to recycle blocks. To reduce the cost of block recycling in NAND-flash-based storage, this paper introduces another block recycling scheme called the migration operation, and the FTL chooses whichever of the migration and merge operations costs less when recycling a block. Experimental results using the Postmark benchmark and an embedded-system workload show that this cost-based selection can improve the performance of flash-memory-based storage. In addition, we found a macroscopic optimization that determines the migration/merge sequence minimizing the block recycling cost in each period in which migration and merge operations are combined, and the experimental results show that this macroscopic optimization can further improve the performance of flash-memory-based storage compared with simple cost-based selection.
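
A toy sketch of the cost-based choice described above: estimate the cost of recycling a block by merge versus by migration and pick the cheaper one. The cost model (valid-page copies plus erase counts, with assumed unit costs) and the assumption that the log block is the migration victim are illustrative, not the paper's exact formulation.

```python
# Assumed per-operation costs (illustrative units, not the paper's numbers)
COST_PAGE_COPY = 1.0   # copy one valid page
COST_ERASE = 20.0      # erase one block

def merge_cost(valid_pages_data, valid_pages_log):
    """Merge: copy valid pages from both data and log blocks, then erase both."""
    return (valid_pages_data + valid_pages_log) * COST_PAGE_COPY + 2 * COST_ERASE

def migration_cost(valid_pages_victim):
    """Migration: copy only the victim block's valid pages elsewhere, erase one block."""
    return valid_pages_victim * COST_PAGE_COPY + COST_ERASE

def recycle_block(valid_pages_data, valid_pages_log):
    """Pick whichever recycling operation is estimated to be cheaper."""
    c_merge = merge_cost(valid_pages_data, valid_pages_log)
    c_migr = migration_cost(valid_pages_log)  # assume the log block is the victim
    return ("migration", c_migr) if c_migr < c_merge else ("merge", c_merge)

print(recycle_block(valid_pages_data=40, valid_pages_log=5))
```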