• Title/Summary/Keyword: Heterogeneous computing


Robust Data, Event, and Privacy Services in Real-Time Embedded Sensor Network Systems (실시간 임베디드 센서 네트워크 시스템에서 강건한 데이터, 이벤트 및 프라이버시 서비스 기술)

  • Jung, Kang-Soo;Kapitanova, Krasimira;Son, Sang-H.;Park, Seog
    • Journal of KIISE:Databases / v.37 no.6 / pp.324-332 / 2010
  • The majority of event detection in real-time embedded sensor network systems is based on data fusion that uses noisy sensor data collected from complicated real-world environments. Current research has produced several excellent low-level mechanisms to collect sensor data and perform aggregation. However, solutions that enable these systems to process readings from heterogeneous sensors in real time and subsequently detect complex events of interest in a real-time fashion still need further research. We are developing real-time event detection approaches that allow lightweight data fusion and do not require significant computing resources. Underlying the event detection framework is a collection of real-time monitoring and fusion mechanisms that are invoked upon the arrival of sensor data. The combination of these mechanisms and the framework has the potential to significantly improve the timeliness and reduce the resource requirements of embedded sensor networks. In addition, we discuss privacy, a foundational requirement for trusted embedded sensor network systems, and explain an anonymization technique to ensure it.
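
A minimal sketch of how such a lightweight fusion mechanism could be invoked on the arrival of sensor data, assuming a simple windowed-average fusion step; the names (SensorReading, EventDetector, on_reading) and the threshold are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a lightweight, callback-driven event detector.
# Names and thresholds are illustrative assumptions, not the paper's code.
from dataclasses import dataclass
from collections import deque
from statistics import mean

@dataclass
class SensorReading:
    sensor_id: str      # heterogeneous sensors share one reading type
    value: float
    timestamp: float

class EventDetector:
    def __init__(self, window_size=10, threshold=0.8):
        self.window = deque(maxlen=window_size)  # bounded memory keeps resource use low
        self.threshold = threshold

    def on_reading(self, reading: SensorReading) -> bool:
        """Invoked upon arrival of sensor data; fuses the recent window
        and reports whether an event of interest is detected."""
        self.window.append(reading.value)
        fused = mean(self.window)          # simple averaging as the fusion step
        return fused > self.threshold

detector = EventDetector()
if detector.on_reading(SensorReading("temp-01", 0.93, 1234.5)):
    print("event detected")
```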

Web Service based Recommendation System using Inference Engine (추론엔진을 활용한 웹서비스 기반 추천 시스템)

  • Kim SungTae;Park SooMin;Yang JungJin
    • Journal of Intelligence and Information Systems / v.10 no.3 / pp.59-72 / 2004
  • The range of Internet usage has broadened and diversified drastically, from information retrieval and collection to many other functions. In contrast to this increase in Internet use, the efficiency of finding necessary information has decreased. Therefore, the need for information systems that provide customized information has emerged. Our research proposes a Web Service based recommendation system that employs an inference engine to find and recommend the most appropriate products for users. Present-day Web applications provide useful information for users, yet they still face the problem of bridging different platforms and distributed computing environments. A standardized and systematic approach is necessary for easier communication and coherent system development across heterogeneous environments. Web Service is programming-language independent and improves interoperability by describing, deploying, and executing modularized applications over the network. This paper focuses on developing a Web Service based recommendation system that can serve as a benchmark of Web Service realization. This is done by integrating an inference engine in which the dynamics of information and user preferences are taken into account.
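
A hedged sketch of how a platform-neutral recommendation service could delegate to a rule-based inference engine; the rule set, function names, and data shapes below are assumptions for illustration, not the paper's design:

```python
# Hypothetical sketch: a language-neutral recommendation endpoint that
# delegates to a simple forward-chaining rule engine.
RULES = [
    # (condition over user preferences, recommended product)
    (lambda prefs: prefs.get("budget", 0) < 500 and "laptop" in prefs.get("interest", ""), "entry-level laptop"),
    (lambda prefs: prefs.get("budget", 0) >= 500 and "laptop" in prefs.get("interest", ""), "premium laptop"),
]

def infer(prefs: dict) -> list:
    """Fire every rule whose condition holds for the given preferences."""
    return [product for cond, product in RULES if cond(prefs)]

def recommend(request: dict) -> dict:
    """Service endpoint body: plain dict in, dict out, so it can be exposed
    over SOAP/REST regardless of the client platform."""
    return {"user": request["user"], "recommendations": infer(request["preferences"])}

print(recommend({"user": "alice", "preferences": {"budget": 300, "interest": "laptop"}}))
```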


A Feasibility Study of Goal-based Testing with a Task-based Test Model for Collective Adaptive Systems (군집 적응형 시스템의 목표 기반 테스트를 위한 태스크 기반 테스트 모델 적용 타당성 연구)

  • Lee, Cheonghyun;Jee, Eunkyoung;Lim, Yoo Jin;Bae, Doo-Hwan
    • KIISE Transactions on Computing Practices / v.22 no.8 / pp.393-398 / 2016
  • A Collective Adaptive System is an adaptive multi-agent system that accomplishes its goal through the collaboration of various agents. Because this collective property rests on collaboration toward the system goal, it is important to test both goal accomplishment and the interactions among heterogeneous agents. This paper presents a feasibility study of applying a model-based testing approach using a task-based test model to a Collective Adaptive System. It also describes additional information that should be incorporated for Collective Adaptive Systems in future studies. To analyze our approach, we applied it to a smart home system as a case study; the results indicate that, by modifying and extending the existing task model, we can systematically derive test cases to check whether the Collective Adaptive System successfully achieves its goals.
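
A hedged sketch of the general idea of deriving test cases from a task decomposition toward a goal; the structure and names (Task, derive_test_cases) are illustrative assumptions and not the paper's test model:

```python
# Hypothetical sketch: a goal decomposes into tasks assigned to heterogeneous
# agents, and each leaf task yields a concrete test case.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    agent: str                      # which agent type performs the task
    subtasks: list = field(default_factory=list)

def derive_test_cases(task: Task) -> list:
    """Leaf tasks become concrete test cases; inner tasks become goal-level
    checks over the collaboration of their subtasks."""
    if not task.subtasks:
        return [f"verify that agent '{task.agent}' completes '{task.name}'"]
    cases = []
    for sub in task.subtasks:
        cases.extend(derive_test_cases(sub))
    cases.append(f"verify goal '{task.name}' is achieved by the collaborating agents")
    return cases

goal = Task("keep home temperature comfortable", "coordinator", [
    Task("read room temperature", "sensor agent"),
    Task("adjust thermostat", "actuator agent"),
])
for case in derive_test_cases(goal):
    print(case)
```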

CTIS: Cross-platform Tester Interface Software for Memory Semiconductor (메모리 반도체 검사 장비 인터페이스를 위한 크로스플랫폼 소프트웨어 기술)

  • Kim, Dong Su;Kang, Dong Hyun;Lee, Eun Seok;Lee, Kyu Sung;Eom, Young Ik
    • KIISE Transactions on Computing Practices / v.21 no.10 / pp.645-650 / 2015
  • Tester Interface Software (TIS) provides all of the software functions that a testing device needs in order to perform the test process on a memory semiconductor package, from the time the device is put into the test equipment until it is discharged from the equipment. TIS should perform the same work on all types of equipment regardless of the tester model. However, because various equipment and computer models are used in the test process, TIS has been developed and managed independently for each tester model. As a result, more maintenance time and cost are required for development, which adversely affects software quality, and the problem becomes more serious whenever a new tester model is introduced. In this paper, we propose the Cross-platform Tester Interface Software (CTIS) framework, which can be integrated and operated on heterogeneous equipment and operating systems.
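
A minimal sketch of the cross-platform idea described above: tester-model-specific drivers hidden behind a single interface so the test flow is written once. The class and method names are illustrative assumptions, not the CTIS API:

```python
# Hypothetical sketch of an equipment-independent test flow over
# model-specific drivers. Names are assumptions for illustration only.
from abc import ABC, abstractmethod

class TesterDriver(ABC):
    @abstractmethod
    def load_package(self, package_id: str) -> None: ...
    @abstractmethod
    def run_test(self) -> bool: ...
    @abstractmethod
    def discharge(self) -> None: ...

class ModelADriver(TesterDriver):
    def load_package(self, package_id): print(f"[model A] loading {package_id}")
    def run_test(self): print("[model A] running test"); return True
    def discharge(self): print("[model A] discharging")

def test_flow(driver: TesterDriver, package_id: str) -> bool:
    """Same flow for every tester model: insert, test, discharge."""
    driver.load_package(package_id)
    result = driver.run_test()
    driver.discharge()
    return result

print(test_flow(ModelADriver(), "PKG-0001"))
```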

FPGA Prototype Design of Dynamic Frequency Scaling System for Low Power SoC (저전력 SoC을 위한 동적 주파수 제어 시스템의 FPGA 프로토타입 설계)

  • Jung, Eun-Gu;Marculescu, Diana;Lee, Jeong-Gun
    • Journal of KIISE:Computing Practices and Letters / v.15 no.11 / pp.801-805 / 2009
  • Hardware-based dynamic voltage and frequency scaling is a promising technique for reducing power consumption in globally asynchronous, locally synchronous systems such as homogeneous or heterogeneous multi-core systems. In this paper, an FPGA prototype design of hardware-based dynamic frequency scaling is proposed. The proposed techniques are applied to a FIFO-based multi-core system for software-defined radio and to a Network-on-Chip based hardware MPEG2 encoder. Compared with a reference system using a single global clock, the first prototype design reduces power consumption by 78% while decreasing performance by 5.9%. The second prototype design reduces power consumption by 29.1% while decreasing performance by 0.36%.
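
A back-of-the-envelope reading of these figures, under the assumption (not stated in the abstract) that the reported power and performance changes apply to the same workload: energy per unit of work scales as power divided by throughput, so for the first prototype

```latex
\frac{E_{\text{new}}}{E_{\text{ref}}}
  = \frac{P_{\text{new}}/T_{\text{new}}}{P_{\text{ref}}/T_{\text{ref}}}
  = \frac{1-0.78}{1-0.059} \approx 0.234,
```

i.e., roughly a 77% reduction in energy per unit of work; the same calculation for the second prototype gives (1-0.291)/(1-0.0036) ≈ 0.71, about a 29% reduction.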

A review on deep learning-based structural health monitoring of civil infrastructures

  • Ye, X.W.;Jin, T.;Yun, C.B.
    • Smart Structures and Systems / v.24 no.5 / pp.567-585 / 2019
  • In the past two decades, structural health monitoring (SHM) systems have been widely installed on various civil infrastructures to track the state of their structural health and to detect structural damage or abnormality, through long-term monitoring of environmental conditions as well as structural loadings and responses. In an SHM system, numerous sensors acquire a huge volume of monitoring data, which faithfully reflects the in-service condition of the target structure. In order to bridge the gap between SHM and structural maintenance and management (SMM), it is necessary to employ advanced data processing methods to convert the original multi-source heterogeneous field monitoring data into different types of specific physical indicators, so that effective decisions can be made regarding inspection, maintenance and management. Conventional approaches to data analysis are confronted with challenges from environmental noise, the volume of measurement data, the complexity of computation, etc., and these severely constrain the pervasive application of SHM technology. In recent years, with the rapid progress of computing hardware and image acquisition equipment, deep learning-based data processing approaches offer a new channel for mining the massive data acquired by an SHM system, towards autonomous, accurate and robust processing of the monitoring data. Many researchers in the SHM community have explored the applications of deep learning-based approaches for structural damage detection and structural condition assessment. This paper reviews deep learning-based SHM of civil infrastructures; its main content includes a brief summary of the history of the development of deep learning, the applications of deep learning-based data processing approaches in the SHM of many kinds of civil infrastructures, and the key challenges and future trends of deep learning-based SHM strategies.

Implementation of Autonomous IoT Integrated Development Environment based on AI Component Abstract Model (AI 컴포넌트 추상화 모델 기반 자율형 IoT 통합개발환경 구현)

  • Kim, Seoyeon;Yun, Young-Sun;Eun, Seong-Bae;Cha, Sin;Jung, Jinman
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.71-77 / 2021
  • Recently, there has been demand for efficient program development in IoT application support frameworks that takes heterogeneous hardware characteristics into account. In addition, the scope of hardware support is expanding with the development of neuromorphic architectures that mimic the human brain, learn on their own, and enable autonomous computing. However, most existing IoT IDEs (Integrated Development Environments) have difficulty supporting AI (Artificial Intelligence) or services combined with diverse hardware such as neuromorphic architectures. In this paper, we design an AI component abstract model that supports second-generation ANNs (Artificial Neural Networks) and third-generation SNNs (Spiking Neural Networks), and we implement an autonomous IoT IDE based on the proposed model. With the proposed technique, IoT developers can automatically create AI components without knowledge of AI or SNNs. Because the technique allows flexible code conversion according to the target runtime, development productivity is high. Experiments with the proposed method confirmed that conversion delay due to the VCL (Virtual Component Layer) may occur, but the difference is not significant.
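
A hedged sketch of what an AI component abstraction with runtime-dependent ANN/SNN backends could look like; the interfaces, backend logic, and the create_component factory are illustrative assumptions, not the paper's model or the behavior of any real neuromorphic runtime:

```python
# Hypothetical sketch: one abstract AI component, with the concrete backend
# (ANN vs. SNN) selected by a virtual layer according to the target runtime.
from abc import ABC, abstractmethod

class AIComponent(ABC):
    @abstractmethod
    def predict(self, features: list) -> int: ...

class ANNComponent(AIComponent):
    def predict(self, features):        # stand-in for a conventional ANN runtime
        return int(sum(features) > 1.0)

class SNNComponent(AIComponent):
    def predict(self, features):        # stand-in for a spiking/neuromorphic backend
        spikes = [1 for f in features if f > 0.5]
        return int(len(spikes) >= 2)

def create_component(runtime: str) -> AIComponent:
    """Virtual-component-layer role: same abstract component, backend chosen
    from the target runtime description."""
    return SNNComponent() if runtime == "neuromorphic" else ANNComponent()

clf = create_component("neuromorphic")
print(clf.predict([0.7, 0.9, 0.1]))
```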

Open Cloud Platform Ecosystem Strategy Using the Container Orchestration Platform (컨테이너 자동편성 플랫폼을 활용한 개방형 클라우드 플랫폼 생태계 전략)

  • Jung, Ki-Bong;Hyun, Jae-Uk;Yoon, Hee-Geun;Kim, Eun-Ju
    • Informatization Policy / v.26 no.3 / pp.90-106 / 2019
  • The cloud services market is growing rapidly as workloads move from on-premises environments to cloud computing environments, and the domestic cloud software market in Korea is expected to grow at a CAGR of around 15%. In Korea, under a government-led initiative, research teams are providing open cloud platforms built on open source software. The initiative intends to enhance the reliability and functionality of open cloud platforms and to provide users with a world-class, developer-friendly environment that runs on heterogeneous cloud infrastructure and supports full-lifecycle management of application software. In this paper, we propose a method to provide CaaS in the open cloud platform by integrating it with a container orchestration platform. Finally, by offering users both an application runtime and a container runtime, we show how the two platforms can coexist and cooperate in the same ecosystem.

History of the Photon Beam Dose Calculation Algorithm in Radiation Treatment Planning System

  • Kim, Dong Wook;Park, Kwangwoo;Kim, Hojin;Kim, Jinsung
    • Progress in Medical Physics / v.31 no.3 / pp.54-62 / 2020
  • Dose calculation algorithms play an important role in radiation therapy and are the basis for optimizing treatment plans, an important feature in the development of complex treatment technologies such as intensity-modulated radiation therapy. We reviewed the past and current status of dose calculation algorithms used in treatment planning systems for radiation therapy. Dose calculation algorithms can be broadly classified into three main groups based on the mechanisms used: (1) factor-based, (2) model-based, and (3) principle-based. Factor-based algorithms are a type of empirical dose calculation that interpolates or extrapolates the dose from a set of basic measurements. Model-based algorithms, represented by the pencil beam convolution, analytical anisotropic, and collapsed cone convolution algorithms, use a simplified physical process by means of a convolution equation that convolves the primary photon energy fluence with a kernel. Model-based algorithms that account for side scattering when beams traverse heterogeneous media provide more precise dose calculation results than correction-based (factor-based) algorithms. Principle-based algorithms, represented by Monte Carlo dose calculation, simulate all of the real physical processes involving beam particles during transport; the dose calculations are therefore accurate but time consuming. After approximately 70 years of development of dose calculation algorithms and computing technology, the accuracy of dose calculation appears close to clinical needs. Next-generation dose calculation algorithms are expected to include biologically equivalent or biologically effective doses, and clinicians expect to be able to use them to improve the quality of treatment in the near future.
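
The convolution step that these model-based algorithms share can be written in its standard textbook form (not quoted from this review) as the primary photon energy fluence Ψ convolved with a dose deposition kernel K:

```latex
D(\mathbf{r}) \;=\; \int \Psi(\mathbf{r}')\, K(\mathbf{r}-\mathbf{r}')\, \mathrm{d}^3\mathbf{r}'
```

Heterogeneity handling, as in the collapsed cone approach, typically amounts to scaling the kernel along radiological path length through the local density.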

Nationally-Funded R&D Projects Data Based Dynamic Convergence Index Development: Focused On Life Science & Public Health Area (국가 연구개발(R&D) 과제 데이터 기반 동적 융합지표에 관한 연구: 생명·보건의료 분야를 중심으로)

  • Lee, Doyeon;Kim, Keunhwan
    • Journal of the Korean Society of Industry Convergence / v.25 no.2_2 / pp.219-232 / 2022
  • The aim of this study is to provide a dynamic convergence index that reflects the inherent characteristics of the convergence phenomenon and utilizes data on nationally-funded R&D projects, thereby offering useful information about the direction of the national convergence R&D strategy. The proposed dynamic convergence index is composed of two indicators: persistency and diversity. From a time-series perspective, the persistency index measures the degree of continuous convergence of multidisciplinary nationally-funded R&D projects, while the diversity index measures the degree of binding with heterogeneous research areas. We conducted an empirical experiment on 151,248 convergence R&D projects from the 2015-2021 period. The results showed that convergence R&D projects in both public health and life sciences exhibited the highest degree of persistency; it is presumed that persistency increased again due to the COVID-19 pandemic. Meanwhile, diversity has risen through combination with disciplines such as materials, chemical engineering, and brain science to address social problems including mental health, depression, and aging. This study not only provides implications for refining the concept and definition of dynamic convergence in terms of persistency and diversity for the national convergence R&D strategy, but also presents a dynamic convergence index and analysis methods that can be practically applied to directing public R&D programs.
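
A hedged illustration of persistency- and diversity-style indicators computed over project records. The formulas below (share of years containing multidisciplinary projects, Shannon diversity over research areas) are stand-in assumptions for illustration only and are not the index definitions used in the paper:

```python
# Hypothetical sketch of time-series persistency and research-area diversity
# over (year, research-area set) project records.
import math
from collections import Counter

projects = [  # (year, set of research areas assigned to the project)
    (2019, {"life science", "materials"}),
    (2020, {"public health", "brain science"}),
    (2020, {"life science", "chemical engineering"}),
    (2021, {"public health", "materials"}),
]

def persistency(records, start=2019, end=2021):
    """Fraction of years in the window with at least one multidisciplinary
    (two or more areas) project."""
    years = {y for y, areas in records if len(areas) >= 2}
    return len(years & set(range(start, end + 1))) / (end - start + 1)

def diversity(records):
    """Shannon entropy of the research-area distribution: higher values mean
    binding with a wider mix of heterogeneous fields."""
    counts = Counter(a for _, areas in records for a in areas)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

print(persistency(projects), round(diversity(projects), 3))
```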