• Title/Summary/Keyword: Algorithm save


An Efficient Clustering Protocol with Mode Selection (모드 선택을 이용한 효율적 클러스터링 프로토콜)

  • Aries, Kusdaryono;Lee, Young Han;Lee, Kyoung Oh
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.11a
    • /
    • pp.925-928
    • /
    • 2010
  • Wireless sensor networks are composed of a large number of sensor nodes with limited energy resources. One critical issue in wireless sensor networks is how to gather sensed information in an energy-efficient way, since energy is limited. Clustering is a technique used to reduce energy consumption; it can improve the scalability and lifetime of a wireless sensor network. In this paper, we introduce a clustering protocol with mode selection (CPMS) for wireless sensor networks. Our scheme improves on the BCDCP (Base Station Controlled Dynamic Clustering Protocol) and BIDRP (Base Station Initiated Dynamic Routing Protocol) routing protocols. In CPMS, the base station constructs the clusters and makes the head node with the highest residual energy send data to the base station. Furthermore, the energy of head nodes can be saved using the mode selection method. The simulation results show that CPMS achieves a longer lifetime and more data message transmissions than current representative clustering protocols in wireless sensor networks.
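As a rough illustration of the head-selection and mode-selection ideas summarized above (a minimal sketch, not CPMS's actual implementation; the node fields and the sleep rule are assumptions):

```python
# Hypothetical sketch: the base station picks, in each cluster, the node with
# the highest residual energy as head; a head with nothing to forward can stay
# in a low-power mode. Field names are illustrative, not CPMS's.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    residual_energy: float  # joules remaining
    has_data: bool          # whether the node sensed anything this round

def select_cluster_head(cluster):
    """Return the node with the highest residual energy in the cluster."""
    return max(cluster, key=lambda n: n.residual_energy)

def head_mode(head, cluster):
    """Mode selection: a head with nothing to forward can stay in sleep mode."""
    return "active" if any(n.has_data for n in cluster) else "sleep"

cluster = [Node(1, 0.42, False), Node(2, 0.91, True), Node(3, 0.77, False)]
head = select_cluster_head(cluster)
print(head.node_id, head_mode(head, cluster))   # -> 2 active
```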

A Node Grouping Method for Transmission Power Saving in Underwater Acoustic Sensor Network (수중 센서 네트워크에서 노드 그룹화를 통한 전송전력 절약 방안)

  • Hwang, Sung-Ho;Cho, Ho-Shin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.8
    • /
    • pp.774-780
    • /
    • 2009
  • This paper proposes a transmit-power saving method for underwater acoustic sensor networks, exploiting the acoustic propagation characteristic that propagation loss increases more rapidly in higher frequency bands. In the proposed scheme, sensor nodes are divided into a few groups based on the distance between the sink node and the sensor node, and each group uses its own frequency band: groups farther from the sink use lower frequencies and nearer groups use higher frequencies. By means of such distance-dependent frequency allocation, all sensor nodes are able to maintain a given target signal-to-noise ratio (SNR) while also saving transmitted power. In addition, the optimum node-group size is obtained, and a frequency allocation algorithm is proposed accordingly. Numerical results show that the proposed scheme saves more than 10 dB of transmitted power compared with non-grouping methods.
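A toy sketch of the distance-dependent frequency allocation idea follows; the group count and band edges are invented for illustration and are not the paper's optimized values.

```python
# Nodes are grouped by distance to the sink and farther groups get lower bands,
# since acoustic absorption grows with frequency. Bands in kHz are made up.
def group_nodes(distances_m, n_groups=3):
    """Assign each node to a distance group (0 = nearest the sink)."""
    d_max = max(distances_m)
    return [min(int(n_groups * d / d_max), n_groups - 1) for d in distances_m]

def allocate_band(group, bands_khz=((20, 30), (12, 20), (6, 12))):
    """Nearest group -> highest band, farthest group -> lowest band."""
    return bands_khz[group]

distances = [150.0, 480.0, 900.0, 1400.0]
for d, g in zip(distances, group_nodes(distances)):
    print(f"{d:6.0f} m -> group {g}, band {allocate_band(g)} kHz")
```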

Numerical Simulation of Quasi-Spherical, Supersonic Accretion Flows - Code and Tests

  • Siek Hyung;Seong-Jae Lee
    • Journal of the Korean earth science society
    • /
    • v.45 no.4
    • /
    • pp.292-303
    • /
    • 2024
  • We study quasi-spherical, supersonic accretion flows around black holes using high-accuracy numerical simulations. We describe a code, the Lagrangian Total Variation Diminishing (TVD) scheme with a remap routine, that addresses a specific issue in the Advection Dominated Accretion Flow (ADAF): appropriately handling the angular momentum even near the inner boundary. The Lagrangian TVD code is based on an explicit finite-difference scheme on mass-volume grids to track fluid particles in time. The results are remapped onto fixed grids using an explicit Eulerian finite-difference algorithm with third-order accuracy. Test results show that the code successfully handles flows and resolves shocks within two to three computational cells. In particular, the calculation of an inviscid hydrodynamical accretion disk around a black hole shows that nearly 100% of the specific angular momentum is conserved in one- and two-dimensional cylindrical coordinates. We therefore apply this code to obtain a numerically similar ADAF solution. We perform simulations including viscosity terms in one-dimensional spherical geometry on non-uniform grids, to obtain more quantitative results and to save computational time. The error in specific angular momentum in the Newtonian potential is less than 1% between r ~ 10 r_s and r ~ 10^4 r_s, where r_s is the sink size. As Narayan et al. (1997) suggested, ADAFs in a pseudo-Newtonian potential become supersonic near the black hole, and the sonic point is r_sonic ~ 5.3 r_g for flow with α = 0.3 and γ = 1.5. Such simulations indicate that even the ADAF with γ = 5/3 is differentially rotating, as Ogilvie (1999) indicated. Hence, we conclude that the Lagrangian TVD and remap code treats the role of viscosity more precisely than other schemes, even near the inner boundary in a rotating accretion flow around a nonrotating black hole.
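For context, a minimal sketch of the pseudo-Newtonian potential commonly used for such flows (the Paczyński-Wiita form; the abstract does not state which form the authors adopt, so treat this as an assumption) and the corresponding Keplerian specific angular momentum, which diverges at r_g and makes angular momentum handling near the inner boundary delicate:

```python
# Paczynski-Wiita pseudo-Newtonian potential Phi = -GM/(r - r_g), r_g = 2GM/c^2.
# Values below are a toy 10-solar-mass black hole, not the paper's setup.
import numpy as np

G, M_sun, c = 6.674e-11, 1.989e30, 2.998e8

def paczynski_wiita(r, M=10 * M_sun):
    r_g = 2 * G * M / c**2
    return -G * M / (r - r_g)

def keplerian_l(r, M=10 * M_sun):
    """Specific angular momentum of circular orbits in the PW potential."""
    r_g = 2 * G * M / c**2
    return np.sqrt(G * M) * r**1.5 / (r - r_g)

r = np.geomspace(3, 1e4, 5) * (2 * G * 10 * M_sun / c**2)  # radii in units of r_g
print(paczynski_wiita(r))
print(keplerian_l(r))
```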

Application of ML algorithms to predict the effective fracture toughness of several types of concrete

  • Ibrahim Albaijan;Hanan Samadi;Arsalan Mahmoodzadeh;Hawkar Hashim Ibrahim;Nejib Ghazouani
    • Computers and Concrete
    • /
    • v.34 no.2
    • /
    • pp.247-265
    • /
    • 2024
  • Measuring the fracture toughness of concrete in laboratory settings is challenging due to various factors, such as complex sample preparation procedures, the requirement for precise instruments, potential sample failure, and the brittleness of the samples. Therefore, there is an urgent need to develop innovative and more effective tools to overcome these limitations. Supervised learning methods offer promising solutions. This study introduces seven machine learning algorithms for predicting concrete's effective fracture toughness (K-eff). The models were trained using 560 datasets obtained from the central straight notched Brazilian disc (CSNBD) test. The concrete samples used in the experiments contained micro silica and powdered stone, which are commonly used additives in the construction industry. The study considered six input parameters that affect concrete's K-eff, including concrete type, sample diameter, sample thickness, crack length, force, and angle of initial crack. All the algorithms demonstrated high accuracy on both the training and testing datasets, with R2 values ranging from 0.9456 to 0.9999 and root mean squared error (RMSE) values ranging from 0.000004 to 0.009287. After evaluating their performance, the gated recurrent unit (GRU) algorithm showed the highest predictive accuracy. The ranking of the applied models, from highest to lowest performance in predicting the K-eff of concrete, was as follows: GRU, LSTM, RNN, SFL, ELM, LSSVM, and GEP. In conclusion, it is recommended to use supervised learning models, specifically GRU, for precise estimation of concrete's K-eff. This approach allows engineers to save significant time and costs associated with the CSNBD test. This research contributes to the field by introducing a reliable tool for accurately predicting the K-eff of concrete, enabling efficient decision-making in various engineering applications.
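As a hedged sketch of the kind of GRU regressor described (layer sizes, training setup, and data layout are assumptions, not the authors' configuration), treating the six tabular inputs as a single time step:

```python
# Six inputs (concrete type, diameter, thickness, crack length, force, crack
# angle) mapped to K_eff. Random arrays stand in for the 560 CSNBD records.
import numpy as np
import tensorflow as tf

X = np.random.rand(560, 6).astype("float32")     # placeholder features
y = np.random.rand(560, 1).astype("float32")     # placeholder measured K_eff

model = tf.keras.Sequential([
    tf.keras.layers.Reshape((1, 6), input_shape=(6,)),  # 6 features as one time step
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),                           # regression output: K_eff
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print("RMSE:", float(np.sqrt(model.evaluate(X, y, verbose=0))))
```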

Fast HEVC Encoding based on CU-Depth First Decision (CU 깊이 우선 결정 기반의 HEVC 고속 부호화 방법)

  • Yoo, Sung-Eun;Ahn, Yong-Jo;Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.3
    • /
    • pp.40-50
    • /
    • 2012
  • In this paper, we propose a fast CU (Coding Unit) mode decision method. To reduce the computational complexity and save encoding time of HEVC, we divide the CU, PU (Prediction Unit), and TU (Transform Unit) decision process into two stages. In the first stage, because the 2N×2N PU mode is selected most often among the 2N×2N, N×2N, 2N×N, and N×N PU modes, the proposed algorithm uses only the 2N×2N PU mode when deciding the depth of each CU in the LCU (Largest CU). In the second stage, the proposed method decides the exact PU and TU modes at the depth level decided in the first stage. In addition, an early-skip decision rule is applied to obtain a further reduction in computational complexity. The proposed method reduces the computational complexity of the HEVC encoder by simplifying the CU depth decision. We obtained about a 50% reduction in computational complexity compared with the HM 3.3 HEVC reference software, while the bitrate increases by only about 2%.
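A pseudocode-style sketch of the two-stage decision follows; rd_cost() stands in for the encoder's rate-distortion evaluation, and the threshold and helper names are illustrative rather than the paper's exact rules.

```python
import random

EARLY_SKIP_THRESHOLD = 100.0          # assumed tuning constant, not from the paper

def rd_cost(cu, depth, pu_mode):
    """Placeholder for the encoder's RD-cost evaluation of one (depth, PU) choice."""
    return random.random() * 1000     # dummy value so the sketch runs

def decide_cu(cu, max_depth=3):
    # Early SKIP: if the SKIP cost is already small, stop the search here.
    if rd_cost(cu, 0, "SKIP") < EARLY_SKIP_THRESHOLD:
        return 0, "SKIP"

    # Stage 1: pick the CU depth by trying only the 2Nx2N PU at each depth,
    # since 2Nx2N is the mode selected most of the time.
    best_depth = min(range(max_depth + 1), key=lambda d: rd_cost(cu, d, "2Nx2N"))

    # Stage 2: only at that depth, search the remaining PU partitions (and TUs).
    pu_modes = ("2Nx2N", "2NxN", "Nx2N", "NxN")
    best_pu = min(pu_modes, key=lambda m: rd_cost(cu, best_depth, m))
    return best_depth, best_pu

print(decide_cu(cu="LCU 0"))
```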

New VLSI Architecture of Parallel Multiplier-Accumulator Based on Radix-2 Modified Booth Algorithm (Radix-2 MBA 기반 병렬 MAC의 VLSI 구조)

  • Seo, Young-Ho;Kim, Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.4
    • /
    • pp.94-104
    • /
    • 2008
  • In this paper, we propose a new architecture of a multiplier-and-accumulator (MAC) for high-speed multiplication and accumulation arithmetic. Performance is improved by combining multiplication with accumulation and devising a hybrid type of carry-save adder (CSA). Since the accumulator, which has the largest delay in the MAC, is removed and its function is absorbed into the CSA, the overall performance is elevated. The proposed CSA tree uses the 1's-complement-based radix-2 modified Booth algorithm (MBA) and has a modified array for sign extension in order to increase the bit density of the operands. The CSA propagates the carries of the least significant bits of the partial products and generates the least significant bits in advance, decreasing the number of input bits of the final adder. Also, the proposed MAC accumulates the intermediate results as sum and carry bits rather than as the output of the final adder, improving performance by optimizing the efficiency of the pipeline scheme. The proposed architecture was designed and then synthesized with 250 μm, 180 μm, 130 μm, and 90 nm standard CMOS libraries. We analyzed results such as hardware resources, delay, and pipelining based on both theoretical and experimental estimation, using Sakurai's alpha-power law for delay modeling. The proposed MAC is superior to the standard design in many respects, and its performance is roughly twice that of previous work at a similar clock frequency.
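A minimal sketch of the carry-save idea behind such a MAC (Booth recoding and the sign-extension array from the paper are omitted; this only shows why the final carry-propagate addition can be deferred):

```python
# 3:2 carry-save compression: sum = a^b^c, carry = majority(a,b,c) << 1.
# The running result stays in (sum, carry) form; only one final add is needed.
def csa(a, b, c):
    """3:2 carry-save adder on integers treated as bit vectors."""
    s = a ^ b ^ c
    carry = ((a & b) | (b & c) | (a & c)) << 1
    return s, carry

def mac_carry_save(pairs):
    """Accumulate sum(x*y) while keeping the result in (sum, carry) form."""
    s, c = 0, 0
    for x, y in pairs:
        s, c = csa(s, c, x * y)   # in hardware, partial products feed a CSA tree
    return s + c                  # single final carry-propagate addition

print(mac_carry_save([(3, 5), (7, 2), (4, 4)]))   # -> 45
```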

A Cost-Efficient Job Scheduling Algorithm in Cloud Resource Broker with Scalable VM Allocation Scheme (클라우드 자원 브로커에서 확장성 있는 가상 머신 할당 기법을 이용한 비용 적응형 작업 스케쥴링 알고리즘)

  • Ren, Ye;Kim, Seong-Hwan;Kang, Dong-Ki;Kim, Byung-Sang;Youn, Chan-Hyun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.3
    • /
    • pp.137-148
    • /
    • 2012
  • Cloud service users request dedicated virtual computing resources from a cloud service provider in order to process their jobs in an environment independent of other users. To automate this process, in this paper we propose a framework for workflow scheduling in the cloud environment, whose core component is a middleware broker that mediates the interaction between users and cloud service providers. To process jobs on on-demand, virtualized resources from cloud service providers, many papers propose scheduling algorithms that allocate jobs to virtual machines on a one-job-per-machine basis. With this method, the isolation of the jobs being processed is guaranteed, but each resource cannot be used to its full computing capacity, so resource utilization is low. This paper therefore proposes a cost-efficient job scheduling algorithm that maximizes the utilization of the managed resources by increasing the degree of multiprogramming, thereby reducing the number of virtual machines needed; consequently, the cost of processing requests can be saved. We also consider the performance degradation of the proposed scheme due to thrashing and context switching. The experimental results show that the proposed scheme has a better cost-performance trade-off than an existing scheme.
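A rough sketch of the cost-saving idea, packing jobs onto already-leased virtual machines up to a multiprogramming limit and leasing a new one only when no running VM has room; the limit and data layout are illustrative, not the paper's model:

```python
MAX_JOBS_PER_VM = 3   # degree of multiprogramming (assumed; too high causes thrashing)

def schedule(jobs, vms=None):
    """Greedily place each job on a VM with spare capacity, leasing VMs as needed."""
    vms = vms if vms is not None else []
    for job in jobs:
        vm = next((v for v in vms if len(v) < MAX_JOBS_PER_VM), None)
        if vm is None:              # no capacity left -> lease one more VM
            vm = []
            vms.append(vm)
        vm.append(job)
    return vms

print(schedule([f"job{i}" for i in range(7)]))   # 7 jobs fit on 3 VMs instead of 7
```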

Minimizing non-optimal paths in multi-hop ad hoc network adopted IEEE 802.11 PSM (IEEE 802.11 PSM을 적용한 다중 홉애드 혹 네트워크에서 우회경로의 최소화)

  • Whang, Do-Hyeon;Lee, Jang-Su;Kim, Sung-Chun
    • The KIPS Transactions:PartC
    • /
    • v.14C no.7
    • /
    • pp.583-588
    • /
    • 2007
  • A mobile ad-hoc network makes it easy to deploy a temporary network of mobile nodes without relying on an infrastructure network. Such nodes depend on their limited battery power, so saving energy in a mobile ad-hoc network has recently become a hot issue. IEEE 802.11 PSM was originally proposed under a single-hop ad-hoc assumption. If IEEE 802.11 PSM is applied to a multi-hop ad-hoc network, non-optimal paths will be generated by mobile nodes that did not receive the route request message. Non-optimal paths increase not only network latency but also the energy consumption of the mobile nodes. In this paper, we propose an algorithm for reconfiguring the non-optimal paths caused by nodes that missed the route request message. A mobile node can overhear data within its radio range, since the wireless medium is shared by all mobile nodes using the same bandwidth. Using these properties of the medium, all mobile nodes watch for non-optimal paths; when a non-optimal path is detected, it is reconfigured into an optimal one by modifying the node's own routing table or by sending a route update request to nearby nodes. By reconfiguring the non-optimal paths into optimized ones, network latency and energy consumption are decreased. The simulation results confirm that the overhead introduced by the proposed algorithm is negligible.
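A hypothetical sketch of the overhearing idea: if a node overhears a packet whose next hop routes straight back to the node itself, the intermediate hop is redundant and a route update can be requested. The message format and table layout are invented for illustration, not taken from the paper.

```python
routing_table = {"S": "A", "A": "X", "X": "D"}   # S -> A -> X -> D is non-optimal if S can hear X

def on_overhear(me, packet, routing_table):
    """packet = (sender, next_hop, destination); return a route-update request or None."""
    sender, next_hop, dest = packet
    if next_hop != me and routing_table.get(next_hop) == me:
        # The sender relays through next_hop only to reach us: request a shortcut.
        return {"type": "ROUTE_UPDATE", "to": sender, "new_next_hop": me, "dest": dest}
    return None

print(on_overhear("X", ("S", "A", "D"), routing_table))
```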

Time- and Frequency-Domain Block LMS Adaptive Digital Filters: Part Ⅰ- Realization Structures (시간영역 및 주파수영역 블럭적응 여파기에 관한 연구 : 제1부- 구현방법)

  • Lee, Jae-Chon;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.7 no.4
    • /
    • pp.31-53
    • /
    • 1988
  • In this work we study extensively the structures and performance characteristics of the block least mean-square (BLMS) adaptive digital filters (ADF's) that can be realized efficiently using the fast Fourier transform (FFT). The weights of a BLMS ADF realized using the FFT can be adjusted either in the time domain or in the frequency domain, leading to the time-domain BLMS (TBLMS) algorithm or the frequency-domain BLMS (FBLMS) algorithm, respectively. In Part Ⅰ of the paper, we first present new results on the overlap-add realization and the number-theoretic transform realization of the FBLMS ADF's. Then, we study how we can incorporate the concept of different frequency-weighting on the error signals and the self-orthogonalization of weight adjustment in the FBLMS ADF's, and also in the TBLMS ADF's. As a result, we show that the TBLMS ADF can also be made to have the same fast convergence speed as that of the self-orthogonalizing FBLMS ADF. Next, based on the properties of the sectioning operations in weight adjustment, we discuss unconstrained FBLMS algorithms that can reduce two FFT operations both for the overlap-save and overlap-add realizations. Finally, we investigate by computer simulation the effects of different parameter values and different algorithms on the convergence behaviors of the FBLMS and TBLMS ADF's. In Part Ⅱ of the paper, we will analyze the convergence characteristics of the TBLMS and FBLMS ADF's.
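For reference, a compact overlap-save frequency-domain block LMS (FBLMS) realization of the kind analyzed in the paper; the block length equals the filter length N, the FFT size is 2N, and the gradient-constraint step is the one whose two extra FFTs the unconstrained variant drops. The step size and test signals are illustrative.

```python
import numpy as np

def fblms(x, d, N=32, mu=0.01):
    W = np.zeros(2 * N, dtype=complex)          # frequency-domain weights of [w, 0]
    y_out = np.zeros(len(x))
    for k in range(N, len(x) - N + 1, N):
        X = np.fft.fft(x[k - N:k + N])          # overlap-save: last 2N input samples
        y = np.real(np.fft.ifft(X * W))[N:]     # keep only the last N (valid) outputs
        e = d[k:k + N] - y
        E = np.fft.fft(np.concatenate([np.zeros(N), e]))
        grad = np.real(np.fft.ifft(np.conj(X) * E))[:N]   # gradient constraint (2 extra FFTs)
        W += mu * np.fft.fft(np.concatenate([grad, np.zeros(N)]))
        y_out[k:k + N] = y
    return y_out, W

# Toy usage: identify a short FIR channel from noisy data.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
y, W = fblms(x, d)
print("residual power:", np.mean((d[-512:] - y[-512:])**2))
```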

Spatio-temporal Mode Selection Methods of Fast H.264 Using Multiple Reference Frames (다중 참조 영상을 이용한 고속 H.264의 움직임 예측 모드 선택 기법)

  • Kwon, Jae-Hyun;Kang, Min-Jung;Ryu, Chul
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.3C
    • /
    • pp.247-254
    • /
    • 2008
  • H.264 provides good coding efficiency compared with existing video coding standards such as H.263 and MPEG-4, based on the use of multiple reference frames for variable-block-size motion estimation, quarter-pixel motion estimation and compensation, the 4×4 integer DCT, rate-distortion optimization, and so on. However, many of the modules that increase its performance also increase its complexity, so fast algorithms are needed as a practical approach. In this paper, a fast mode decision algorithm is proposed that skips variable-block-size motion estimation and spatial predictive coding, which account for most of the encoder complexity. The approach takes advantage of the temporal and spatial properties exploited by fast mode selection techniques. Experimental results demonstrate that the proposed approach can save up to 65% of the encoding time compared with the H.264 standard while maintaining visual quality.
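A hedged sketch of the spatio-temporal idea: rather than exhaustively searching all variable-block-size modes, only the modes of the left and upper neighbours and of the co-located macroblock in the reference frame are tested. The helper names and candidate policy are assumptions, not the paper's exact rules.

```python
ALL_MODES = ["SKIP", "16x16", "16x8", "8x16", "8x8", "8x4", "4x8", "4x4"]

def candidate_modes(left_mode, upper_mode, colocated_mode):
    """Restrict the RD search to spatio-temporally predicted candidates."""
    cands = {m for m in (left_mode, upper_mode, colocated_mode) if m}
    cands.add("SKIP")                # SKIP is cheap, always worth checking
    return sorted(cands)

def decide_mode(rd_cost, left_mode, upper_mode, colocated_mode):
    """Pick the cheapest candidate mode according to the supplied RD-cost function."""
    return min(candidate_modes(left_mode, upper_mode, colocated_mode), key=rd_cost)

# Toy usage with a dummy RD-cost function standing in for the encoder's evaluation.
print(decide_mode(lambda m: len(m), "16x16", "16x8", "SKIP"))
```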