• Title/Summary/Keyword: Partitioning Technique


Adaptive Medical Image Compression Based on Lossy and Lossless Embedded Zerotree Methods

  • Elhannachi, Sid Ahmed;Benamrane, Nacera;Abdelmalik, Taleb-Ahmed
    • Journal of Information Processing Systems
    • /
    • v.13 no.1
    • /
    • pp.40-56
    • /
    • 2017
  • With the progress of digital medical imaging techniques, compressing the wide variety of medical images has become necessary. In medical imaging, reversible compression of an image's region of interest (ROI), which is diagnostically relevant, is considered essential. The overall compression rate can then be improved by coding the ROI and the remaining image (called the background) separately. For this purpose, the present work proposes an efficient reversible discrete cosine transform (RDCT) based embedded image coder designed for lossless ROI coding at very high compression ratios. Motivated by the wavelet-like structure of the DCT, the proposed rearranged structure couples well with a lossless embedded zerotree wavelet (LEZW) coder, while the background is highly compressed using the set partitioning in hierarchical trees (SPIHT) technique. Coding results show that the proposed coder performs much better than various state-of-the-art still-image compression methods.
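The ROI/background separation described above can be illustrated with a minimal sketch (plain NumPy, not the paper's RDCT/LEZW/SPIHT coders): ROI pixels are kept exactly, while a crude quantizer stands in for the high-ratio lossy background path. All names and the quantization step are hypothetical.

```python
import numpy as np

def split_roi(image, mask):
    """Separate the diagnostically relevant ROI (to be coded losslessly)
    from the background (to be coded lossily)."""
    roi = np.where(mask, image, 0)
    background = np.where(mask, 0, image)
    return roi, background

def lossy(block, step=16):
    """Crude quantizer standing in for the high-ratio background coder."""
    return (block // step) * step

img = np.arange(64, dtype=np.int64).reshape(8, 8)  # toy "image"
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True  # hypothetical ROI
roi, bg = split_roi(img, mask)
recon = roi + lossy(bg)  # ROI pixels survive exactly; background degrades
```

The point of the split is visible in the reconstruction: every pixel inside the mask is bit-exact, while background pixels lose precision.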

Prediction of compressive strength of bacteria incorporated geopolymer concrete by using ANN and MARS

  • X., John Britto;Muthuraj, M.P.
    • Structural Engineering and Mechanics
    • /
    • v.70 no.6
    • /
    • pp.671-681
    • /
    • 2019
  • This paper examines the applicability of artificial neural networks (ANN) and multivariate adaptive regression splines (MARS) for predicting the compressive strength of bacteria-incorporated geopolymer concrete (GPC). The mix is composed of a new bacterial strain, manufactured sand, ground granulated blast furnace slag, silica fume, metakaolin, and fly ash. The concentration of sodium hydroxide (NaOH) is maintained at 8 molar, the sodium silicate ($Na_2SiO_3$) to NaOH weight ratio is 2.33, the alkaline-liquid-to-binder ratio is 0.35, and an ambient curing temperature ($28^{\circ}C$) is maintained for all mixtures. In the ANN, the back-propagation training technique was employed to update the weights of each layer based on the error in the network output, with the Levenberg-Marquardt algorithm used for feed-forward back-propagation. The MARS model was developed by establishing a relationship between a set of predictors and the dependent variables. MARS is based on a divide-and-conquer strategy that partitions the training data into separate regions, each of which gets its own regression line. Six models based on ANN and MARS were developed to predict the compressive strength of bacteria-incorporated GPC at 1, 3, 7, 28, 56, and 90 days. About 70% of the 84 data sets obtained from experiments were used for model development and the remaining 30% for testing. The study shows that the predicted values are in good agreement with the corresponding experimental values and that the developed models are robust and reliable.
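The divide-and-conquer idea behind MARS can be sketched with hinge basis functions: each knot splits the predictor range into regions, and a least-squares fit gives each region its own linear trend. This is a minimal single-knot illustration on synthetic data, not the paper's fitted model.

```python
import numpy as np

def hinge_basis(x, knots):
    """MARS-style basis: a constant column plus the hinge pair
    max(0, x - t) and max(0, t - x) for each knot t."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

def fit_predict(x, y, knots):
    """Least-squares fit over the hinge basis; each knot partitions the
    predictor range into regions with their own regression line."""
    B = hinge_basis(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef

# Synthetic "strength vs. predictor" curve with a slope break at x = 5
x = np.linspace(0.0, 10.0, 200)
y = np.where(x < 5.0, 2.0 * x, 10.0 + 0.5 * (x - 5.0))
y_hat = fit_predict(x, y, knots=[5.0])
max_err = float(np.max(np.abs(y_hat - y)))
```

Because the knot sits exactly at the slope break, the hinge basis represents the piecewise-linear curve exactly, so the fit error is at numerical precision.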

Design and Implementation of Kernel-Level Split and Merge Operations for Efficient File Transfer in Cyber-Physical System (사이버 물리 시스템에서 효율적인 파일 전송을 위한 커널 레벨 분할 및 결합 연산의 설계와 구현)

  • Park, Hyunchan;Jang, Jun-Hee;Lee, Junseok
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.14 no.5
    • /
    • pp.249-258
    • /
    • 2019
  • In a cyber-physical system, big data collected from numerous sensors and IoT devices is transferred to the Cloud for processing and analysis. When transferring data to the Cloud, merging the data into one file is more efficient than transferring it as split files. However, current merge and split operations are performed at the user level and require many I/O requests to memory and storage devices, which is inefficient and time-consuming. To solve this problem, this paper proposes kernel-level split and merge operations. At the kernel level, splitting and merging files can be done with very little overhead by modifying only the file system metadata. We designed the proposed algorithm in detail and implemented it in the Linux Ext4 file system. In experiments with a real Cloud storage system, our technique reduced the transfer time to as little as 17% of that for transferring split files. We also confirmed that the time required can be reduced to as little as 0.5% of that of the existing user-level method.
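For contrast with the kernel-level approach, the costly user-level baseline the paper improves on can be sketched as follows: merging copies every byte through user memory, and splitting reads each region back out. File names and payloads are hypothetical.

```python
import os
import tempfile

def merge_files(parts, out_path):
    """User-level merge: every byte is copied through user memory, which
    is exactly the I/O cost a metadata-only kernel approach avoids."""
    offsets = []
    with open(out_path, "wb") as out:
        for part in parts:
            offsets.append(out.tell())
            with open(part, "rb") as f:
                out.write(f.read())
    return offsets  # needed later to split the merged file back apart

def split_file(path, offsets, sizes, out_paths):
    """User-level split: read each region back out of the merged file."""
    with open(path, "rb") as f:
        for off, size, out_path in zip(offsets, sizes, out_paths):
            f.seek(off)
            with open(out_path, "wb") as g:
                g.write(f.read(size))

tmp = tempfile.mkdtemp()
payloads = [b"sensor-a", b"sensor-bb", b"sensor-ccc"]
parts = []
for i, data in enumerate(payloads):
    p = os.path.join(tmp, f"part{i}")
    with open(p, "wb") as f:
        f.write(data)
    parts.append(p)

merged = os.path.join(tmp, "merged")
offsets = merge_files(parts, merged)
sizes = [os.path.getsize(p) for p in parts]
outs = [p + ".restored" for p in parts]
split_file(merged, offsets, sizes, outs)
restored = []
for p in outs:
    with open(p, "rb") as f:
        restored.append(f.read())
```

A kernel-level implementation would instead re-link the files' extents in the file system metadata, so no payload bytes are copied at all.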

A Study on the Mold System of Bicycles Gear for Driving Safety (주행 안전을 위한 자전거 기어의 프레스금형에 관한 연구)

  • Jeong, Youn-Seung
    • Journal of the Korea Safety Management & Science
    • /
    • v.20 no.4
    • /
    • pp.1-6
    • /
    • 2018
  • Recently, the bicycle has become an effective and healthy means of transportation, and riding is popular as a recreational and sporting activity. The saddle, steering system, driving device, and braking device are accordingly being researched actively to meet consumer demand for driving performance and comfort. In particular, the cassette, which performs the transmission function by delivering power to the drive shaft through the chain, has drawn much attention. The author conducted a structural analysis of the sprocket at each level using ANSYS, a tool widely used for such analysis. Speed-shifting performance was enhanced by minimizing and simplifying the shifting points through sorting of the cassette tooth profile. By partitioning clear value types and other shifting points, the design was modified to enable smooth speed shifting. In addition, as a titanium precision-forming process, this study examined a molding technique based on blanking and die forging for mass production of the cassette, which the entire drive train could be expected to utilize in the future. The stamping process capability for thin materials developed for mass production of the sprockets is also applicable to producing automobile parts, so lightweight component production for driving safety is likely to be possible.

PartitionTuner: An operator scheduler for deep-learning compilers supporting multiple heterogeneous processing units

  • Misun Yu;Yongin Kwon;Jemin Lee;Jeman Park;Junmo Park;Taeho Kim
    • ETRI Journal
    • /
    • v.45 no.2
    • /
    • pp.318-328
    • /
    • 2023
  • Recently, embedded systems such as mobile platforms have multiple processing units (PUs) that can operate in parallel, such as central processing units (CPUs) and neural processing units (NPUs). Deep-learning compilers can generate machine code optimized for these embedded systems from a deep neural network (DNN). However, the deep-learning compilers proposed so far generate code that either executes DNN operators sequentially on a single processing unit or runs in parallel only on graphics processing units (GPUs). In this study, we propose PartitionTuner, an operator scheduler for deep-learning compilers that supports multiple heterogeneous PUs, including CPUs and NPUs. PartitionTuner can generate an operator-scheduling plan that uses all available PUs simultaneously to minimize overall DNN inference time. Operator scheduling is based on an analysis of the DNN architecture and on the performance profiles of individual and grouped operators measured on the heterogeneous processing units. In experiments on seven DNNs, PartitionTuner generated scheduling plans that perform 5.03% better than a static type-based operator-scheduling technique for SqueezeNet. In addition, PartitionTuner outperforms recent profiling-based operator-scheduling techniques for ResNet50, ResNet18, and SqueezeNet by 7.18%, 5.36%, and 2.73%, respectively.
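The profiling-based idea can be sketched with a greedy earliest-finish scheduler over per-PU latency profiles. This is not PartitionTuner's actual algorithm (which also analyzes the DNN graph and profiles operator groups); the operators, PUs, and latencies below are hypothetical.

```python
def schedule(operators, profiles):
    """Greedy earliest-finish scheduling: each operator is assigned to the
    PU that would complete it soonest given that PU's current load."""
    ready = {pu: 0.0 for pu in profiles}  # time at which each PU frees up
    plan = {}
    for op in operators:
        pu = min(profiles, key=lambda p: ready[p] + profiles[p][op])
        plan[op] = pu
        ready[pu] += profiles[pu][op]
    return plan, max(ready.values())  # assignment and overall makespan

# Hypothetical per-PU latencies (ms) for three independent operators
profiles = {
    "cpu": {"conv1": 8.0, "conv2": 8.0, "fc": 2.0},
    "npu": {"conv1": 2.0, "conv2": 2.0, "fc": 6.0},
}
plan, makespan = schedule(["conv1", "conv2", "fc"], profiles)
```

Here the convolutions land on the NPU and the fully-connected layer on the CPU, so both PUs run simultaneously and the makespan is shorter than any single-PU schedule.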

Improving streamflow prediction with assimilating the SMAP soil moisture data in WRF-Hydro

  • Kim, Yeri;Kim, Yeonjoo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.205-205
    • /
    • 2021
  • Surface soil moisture, which governs the partitioning of precipitation into infiltration and runoff, plays an important role in the hydrological cycle. Assimilating satellite soil moisture retrievals into a land surface or hydrological model has been shown to improve the predictive skill of hydrological variables. This study aims to improve streamflow prediction with the Weather Research and Forecasting model Hydrological modeling system (WRF-Hydro) by assimilating Soil Moisture Active Passive (SMAP) data at 3 km resolution and to analyze the impacts on hydrological components. We applied the Cumulative Distribution Function (CDF) matching technique to remove the bias of the SMAP data and assimilated the data (April to July, 2015-2019) into WRF-Hydro using an Ensemble Kalman Filter (EnKF) with 12 ensemble members. Daily inflow and soil moisture estimates for major dams (Soyanggang, Chungju, and Sumjin) in South Korea were evaluated. We investigated how hydrologic variables such as runoff, evaporation, and soil moisture were simulated with and without data assimilation. The results show that the correlation coefficient of topsoil moisture can be improved, but the change in dam inflow was not substantial. This may be attributed to the fact that soil moisture memory and runoff memory operate on different time scales. These findings demonstrate that assimilating satellite soil moisture retrievals can improve the predictive skill of hydrological variables for a better understanding of the water cycle.
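The CDF matching step used to debias the satellite retrievals can be sketched as quantile mapping: each satellite value is replaced by the model value at the same empirical CDF position. The series below are synthetic stand-ins for SMAP retrievals and model soil moisture, not the study's data.

```python
import numpy as np

def cdf_match(sat, model):
    """Quantile mapping: replace each satellite value with the model value
    at the same empirical CDF position, removing systematic bias."""
    sat_sorted = np.sort(sat)
    ranks = np.searchsorted(sat_sorted, sat, side="right") / len(sat)
    return np.quantile(np.sort(model), np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(0)
model = rng.normal(0.25, 0.05, 1000)  # model soil moisture climatology
sat = rng.normal(0.35, 0.08, 1000)    # biased satellite-like retrievals
corrected = cdf_match(sat, model)
```

After matching, the corrected series keeps the temporal ranking of the satellite retrievals but takes on the model's mean and spread, which is what makes the innovations in the EnKF unbiased.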


Microservice Identification by Partitioning Monolithic Web Applications Based on Use-Cases

  • Si-Hyun Kim;Daeil Jung;Norhayati Mohd Ali;Abu Bakar Md Sultan;Jaewon Oh
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.4
    • /
    • pp.268-280
    • /
    • 2023
  • Several companies have migrated their existing monolithic web applications to microservice architectures. Consequently, research on the identification of microservices from monolithic web applications has been conducted. Meanwhile, the use-case model plays a crucial role in outlining the system's functionalities at a high level of abstraction, and studies have been conducted to identify microservices by utilizing this model. However, previous studies on microservice identification utilizing use-cases did not consider the components executed in the presentation layer. Unlike existing approaches, this paper proposes a technique that considers all three layers of web applications (presentation, business logic, and data access layers). Initially, the components used in the three layers of a web application are extracted by executing all the scenarios that constitute its use-cases. Thereafter, the usage rate of each component is determined for each use-case and the component is allocated to the use-case with the highest rate. Then, each use-case is realized as a microservice. To verify the proposed approach, microservice identification is performed using open-source web applications.
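The allocation rule described above — determine each component's usage rate per use-case, give the component to the use-case with the highest rate, then realize each use-case as a microservice — can be sketched as follows. The component and use-case names, and the counts, are hypothetical.

```python
from collections import defaultdict

def identify_microservices(usage):
    """Allocate each component to the use-case in which its usage rate is
    highest; each use-case then becomes one candidate microservice."""
    services = defaultdict(set)
    for component, counts in usage.items():
        best_use_case = max(counts, key=counts.get)
        services[best_use_case].add(component)
    return dict(services)

# Hypothetical usage counts: component -> {use-case: times executed}
usage = {
    "CartDAO":    {"checkout": 9, "browse": 1},
    "ItemView":   {"browse": 7, "checkout": 2},
    "PayService": {"checkout": 5},
}
services = identify_microservices(usage)
```

In a full pipeline the counts would come from executing every scenario of each use-case and tracking components across the presentation, business logic, and data access layers, as the abstract describes.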

A Study on Reconfigurable Network Protocol Stack using Task-based Component Design on a SoC Platform (SoC 플랫폼에서 태스크 기반의 조립형 재구성이 가능한 네트워크 프로토콜 스택에 관한 연구)

  • Kim, Young-Mann;Tak, Sung-Woo
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.5
    • /
    • pp.617-632
    • /
    • 2009
  • In this paper, we propose a technique for implementing a reconfigurable network protocol stack that allows network protocol functions to be partitioned into software and hardware tasks on a SoC (System on Chip) platform. Additionally, we present a method that guarantees the deadlines of both individual tasks and the messages exchanged among tasks, in order to meet the deadlines of real-time multimedia and networking services. The proposed real-time message exchange method guarantees the deadlines of messages generated by multimedia services, which is required to meet the real-time properties of multimedia applications. After implementing the networking functions of the TCP/IP protocol suite as hardware and software tasks, we verify and validate their performance on the SoC platform. Experimental results indicate that the proposed technique improves the performance of the TCP/IP protocol suite as well as the service satisfaction of application-specific real-time services.


Preprocessing Stage of Timing Simulator, TSIM1.0 : Partitioning and Dynamic Waveform Storage Management (Timing Simulator인 TSIM1.0에서의 전처리 과정 : 회로분할과 파형정보처리)

  • Kwon, Oh-Bong;Yoon, Hyun-Ro;Lee, Ki-Jun
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.3
    • /
    • pp.153-159
    • /
    • 1989
  • This paper describes the algorithms employed in the preprocessing stage of the timing simulator TSIM1.0, which is based on the Waveform Relaxation Method (WRM) at the CELL level. The preprocessing stage in TSIM1.0 (1) partitions a given circuit into DC-connected blocks (DCBs), (2) forms strongly connected circuits (SCCs), and (3) orders CELLs. An efficient waveform management technique for the WRM is also described, which allows overwriting of waveform information to save storage. With TSIM1.0, circuits containing up to 5000 MOSFETs can be analyzed within 1 hour of computation time on an IBM PC/AT. Simulation results for several types of MOS digital circuits are given to verify the performance of TSIM1.0.
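The strongly-connected-component step of the preprocessing stage can be sketched with Kosaraju's two-pass DFS (a standard SCC algorithm, not necessarily the one TSIM1.0 uses); feedback blocks such as the B-C loop below must be iterated together by a waveform-relaxation scheduler. The block graph is hypothetical.

```python
def sccs(graph):
    """Kosaraju's two-pass DFS: the strongly connected components are the
    feedback blocks that must be iterated together in waveform relaxation."""
    seen = set()

    def dfs(v, g, out):
        # Iterative DFS that appends vertices in postorder.
        stack = [(v, iter(g.get(v, ())))]
        seen.add(v)
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g.get(w, ()))))
                    break
            else:
                stack.pop()
                out.append(node)

    order = []
    for v in graph:
        if v not in seen:
            dfs(v, graph, order)

    rev = {}  # reversed graph for the second pass
    for v, succs in graph.items():
        for w in succs:
            rev.setdefault(w, []).append(v)

    seen.clear()
    components = []
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs(v, rev, comp)
            components.append(sorted(comp))
    return components

# Hypothetical DC-connected block graph with one feedback loop (B <-> C)
graph = {"A": ["B"], "B": ["C"], "C": ["B", "D"], "D": []}
components = sccs(graph)
```

The resulting component list also yields a valid evaluation order (reverse topological over the condensed graph), which is what step (3), CELL ordering, needs.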


A Novel Test Scheduling Algorithm Considering Variations of Power Consumption in Embedded Cores of SoCs (시스템 온 칩(system-on-a-chip) 내부 코어들의 전력소모 변화를 고려한 새로운 테스트 스케쥴링 알고리듬 설계)

  • Lee, Jae-Min;Lee, Ho-Jin;Park, Jin-Sung
    • Journal of Digital Contents Society
    • /
    • v.9 no.3
    • /
    • pp.471-481
    • /
    • 2008
  • Test scheduling that considers power dissipation is an effective technique for reducing the testing time of complex SoCs and enhancing fault coverage under a maximum allowed power dissipation. In this paper, a modeling technique for test resources and a test scheduling algorithm for efficient test procedures are proposed and confirmed. Two methods are described for test resource modeling: one uses the maximum and next-maximum points of power dissipation in the test resources, and the other models the test resources by partitioning them. A novel heuristic test scheduling algorithm is presented, which uses an extended tree-growing graph to generate the maximum set of embedded cores usable simultaneously from the relations between test resources and cores, together with a power-dissipation-changing graph for power optimization; it is compared with conventional algorithms to verify its efficiency.
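A heavily simplified version of power-constrained test scheduling can be sketched as first-fit session packing under a power cap (not the paper's extended-tree-growing algorithm): cores whose combined test power stays under the cap run concurrently, and total test time is the sum of each session's longest test. Core names and values are hypothetical.

```python
def schedule_tests(cores, p_max):
    """First-fit session packing: a core joins a session if the session's
    running power plus the core's test power stays under the cap p_max."""
    # cores: (name, test_time, test_power); longest tests are placed first
    # so each session's first core fixes that session's duration.
    cores = sorted(cores, key=lambda c: -c[1])
    sessions = []  # each session: [power_used, session_time, [core names]]
    for name, t, p in cores:
        for s in sessions:
            if s[0] + p <= p_max:
                s[0] += p
                s[2].append(name)
                break
        else:
            sessions.append([p, t, [name]])
    total_time = sum(s[1] for s in sessions)
    return sessions, total_time

cores = [("A", 10, 4), ("B", 8, 3), ("C", 6, 5)]
sessions, total_time = schedule_tests(cores, p_max=8)
```

With the cap at 8, cores A and B (powers 4 + 3) share one session while C runs alone, giving a total test time of 10 + 6 = 16 instead of 24 for fully serial testing.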
