• Title/Summary/Keyword: computational algorithm

Search Results: 4,381

Analysis of array invariant-based source-range estimation using a horizontal array (수평 배열을 이용한 배열 불변성 기반의 음원 거리 추정 성능 분석)

  • Gu, Hongju;Byun, Gihoon;Byun, Sung-Hoon;Kim, J.S.
    • The Journal of the Acoustical Society of Korea / v.38 no.2 / pp.231-239 / 2019
  • In sonar systems, the passive ranging of a target is an active research area. This paper analyzed the performance of passive ranging based on the array invariant method for different environmental and sonar parameters. The array invariant was developed for source range estimation in shallow water. The advantages of this method are that detailed environmental information is not required and that real-time ranging is possible since the computational burden is very small. Simulations were performed to verify the algorithm, and the method was applied to sea-going experimental data collected near Jinhae port in 2013. This study shows the ranging performance with respect to source orientation, transmitted signal length, and receiver array length through numerical simulations. The results obtained with a nested array and with uniform line arrays are also compared.

FEA based optimization of semi-submersible floater considering buckling and yield strength

  • Jang, Beom-Seon;Kim, Jae Dong;Park, Tae-Yoon;Jeon, Sang Bae
    • International Journal of Naval Architecture and Ocean Engineering / v.11 no.1 / pp.82-96 / 2019
  • A semi-submersible structure has been widely used for offshore drilling and production of oil and gas. The small water-plane area makes the structure very sensitive to weight increase in terms of payload and stability. Therefore, it is necessary to lighten the substructure from the early design stage. This study aims at an optimization of the hull structure based on sophisticated yield and buckling strength assessments in accordance with classification rules. An in-house strength assessment system is developed to automate the procedure, including the generation of buckling panels, the collection of required panel information, and automatic buckling and yield checks. The developed system enables an automatic yield and buckling strength check of all panels composing the hull structure at each iteration of the optimization. Design variables are plate thickness and stiffener section profiles. To overcome the difficulty posed by the large number of design variables and the computational burden of FE analysis, various methods are proposed. The steepest descent method is selected as the optimization algorithm for an efficient search. To reduce the number of design variables and allow direct application to practical design, the stiffener section variable is determined by selecting one profile from a pre-defined standard library. Plate thickness is also discretized at 0.5t intervals. The number of FE analyses is reduced by using equations to analytically estimate the stress changes in the gradient calculation and line search steps. As an endeavor toward robust optimization, the number of design variables optimized simultaneously is reduced by grouping the scantling variables by plane, and a sequential optimization is performed group by group. As a verification example, a central column of a semi-submersible structure is optimized and compared with a conventional optimization of all design variables at once.
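The discretized descent described above can be sketched in a few lines. This is a hypothetical toy, not the paper's system: the `stress` function stands in for an FE analysis of a single plate, the 0.5 step mirrors the 0.5t thickness discretization, and the descent simply thins the plate while the yield check still passes.

```python
def stress(t):
    # Toy stand-in for an FE stress response: stress grows as the plate
    # gets thinner. A real run would call the FE solver (or the paper's
    # analytic stress-change estimate) here.
    return 240.0 / t

def optimize_thickness(t=20.0, allowable=16.0, step=0.5):
    # Steepest-descent-style sizing of one plate: keep removing 0.5-unit
    # increments of thickness while the yield check still passes.
    while t - step > 0 and stress(t - step) <= allowable:
        t -= step
    return t
```

With these toy numbers the search stops at the thinnest plate whose stress stays within the allowable value; the paper's system performs the analogous walk over all panels of the hull, group by group.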

A study on the working mechanism of internal pressure of super-large cooling towers based on two-way coupling between wind and rain

  • Ke, Shitang;Yu, Wenlin;Ge, Yaojun
    • Structural Engineering and Mechanics / v.70 no.4 / pp.479-497 / 2019
  • In current design codes, the use of a uniform internal pressure coefficient of cooling towers as internal suction cannot reflect the 3D characteristics of the flow field inside the tower body under different shutter ventilation rates. Moreover, extreme weather such as heavy rain also has a direct impact on the aerodynamic force on the internal surface and changes the turbulence effect of the fluctuating wind. In this study, the world's tallest cooling tower under construction, which stands 210 m, is taken as the research object. An algorithm for two-way coupling between wind and rain is adopted. The wind field and raindrops are simulated iteratively using continuous-phase and discrete-phase models, respectively, under the general principles of computational fluid dynamics (CFD). First, the influence of nine combinations of wind speed and rainfall intensity on the volume of wind-driven rain, the additional force of raindrops, and the equivalent internal pressure coefficient of the tower body is analyzed. The combination of wind velocity and rainfall intensity most unfavorable to the cooling tower, in terms of the distribution of the internal pressure coefficient, is identified. On this basis, the wind/rain loads, the distribution of aerodynamic force, and the working mechanism of internal pressures of the cooling tower under the most unfavorable working condition are compared across four shutter ventilation rates (0%, 15%, 30%, and 100%). The results show that the amount of raindrops captured by the internal surface of the tower decreases as the wind velocity increases, and increases with the rainfall intensity and the ventilation rate of the shutters. The maximum rain-induced pressure coefficient is 0.013. These findings lay the foundation for determining precise internal surface loads of cooling towers under extreme weather conditions.

Research on the Main Memory Access Count According to the On-Chip Memory Size of an Artificial Neural Network (인공 신경망 가속기 온칩 메모리 크기에 따른 주메모리 접근 횟수 추정에 대한 연구)

  • Cho, Seok-Jae;Park, Sungkyung;Park, Chester Sungchung
    • Journal of IKEEE / v.25 no.1 / pp.180-192 / 2021
  • One widely used algorithm for image recognition and pattern detection is the convolutional neural network (CNN). To efficiently handle convolution operations, which account for the majority of computations in a CNN, hardware accelerators are used to improve the performance of CNN applications. With these accelerators, the CNN fetches data from off-chip DRAM, since the massive volume of data makes it difficult to obtain performance improvements using only the memory inside the accelerator. In other words, data communication between off-chip DRAM and the memory inside the accelerator has a significant impact on the performance of CNN applications. In this paper, a simulator for CNNs is developed to analyze main memory (DRAM) accesses with respect to the size of the on-chip memory, or global buffer, inside the CNN accelerator. For AlexNet, one of the CNN architectures, simulations with increasing global buffer sizes show that a global buffer larger than 100 KB incurs about 0.8 times the DRAM access count of one smaller than 100 KB.
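A first-order version of such a simulator can be sketched as below. This is an illustrative model of my own, not the paper's simulator: it assumes that when the input feature map, weights, and output all fit in the global buffer, each tensor crosses the DRAM interface once, and that otherwise the input is tiled and the weights are re-fetched once per tile.

```python
def dram_traffic(ifmap, weights, ofmap, buffer_size):
    """Estimate DRAM traffic (bytes) for one conv layer, given a global
    buffer of `buffer_size` bytes. Purely a first-order sketch."""
    if ifmap + weights + ofmap <= buffer_size:
        # Everything resident at once: each tensor moves over DRAM once.
        return ifmap + weights + ofmap
    # Otherwise tile the input, reloading the weights for every tile.
    tile = max(1, buffer_size // 2)
    tiles = -(-ifmap // tile)          # ceiling division
    return ifmap + tiles * weights + ofmap
```

Even this crude model reproduces the qualitative trend in the abstract: once the buffer is large enough to hold a layer's working set, the weight re-fetch term disappears and the DRAM access count drops.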

A Public-Key Crypto-Core supporting Edwards Curves of Edwards25519 and Edwards448 (에드워즈 곡선 Edwards25519와 Edwards448을 지원하는 공개키 암호 코어)

  • Yang, Hyeon-Jun;Shin, Kyung-Wook
    • Journal of IKEEE / v.25 no.1 / pp.174-179 / 2021
  • An Edwards curve cryptography (EdCC) core supporting point scalar multiplication (PSM) on the Edwards curves Edwards25519 and Edwards448 was designed. For an area-efficient implementation, a finite-field multiplier based on the word-based Montgomery multiplication algorithm was designed, and the extended twisted Edwards coordinate system was adopted to implement point operations without division. Synthesized with a 100 MHz clock, the EdCC core was implemented with 24,073 equivalent gates and 11 kbits of RAM, and the maximum operating frequency was estimated to be 285 MHz. The evaluation results show that the EdCC core can compute 299 and 66 PSMs per second on the Edwards25519 and Edwards448 curves, respectively. Compared to an ECC core with a similar structure, the number of clock cycles required for a 256-bit PSM was reduced by about 60%, resulting in a 7.3-times improvement in computational performance.
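The word-based Montgomery multiplication underlying the finite-field multiplier can be sketched in software. The word size W = 32 and the interleaved (CIOS-style) loop below are illustrative choices, not the core's exact datapath.

```python
W = 32                    # word size in bits (illustrative choice)
MASK = (1 << W) - 1

def mont_mul(a, b, n, s):
    """Word-serial Montgomery multiplication: returns a*b*2^(-W*s) mod n
    for an odd modulus n of s words, with a, b < n."""
    n_inv = pow(-n, -1, 1 << W)       # -n^(-1) mod 2^W, precomputed once
    t = 0
    for i in range(s):
        ai = (a >> (W * i)) & MASK    # next word of a
        t += ai * b
        # Choose m so that t + m*n is divisible by 2^W, then shift.
        m = ((t & MASK) * n_inv) & MASK
        t = (t + m * n) >> W
    return t - n if t >= n else t     # single conditional subtraction
```

For Edwards25519 the modulus is p = 2^255 - 19, i.e. s = 8 words at W = 32; operands must first be mapped into the Montgomery domain (multiply by R^2 mod p with R = 2^256) and mapped back at the end.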

Efficient graph-based two-stage superpixel generation method (효율적인 그래프 기반 2단계 슈퍼픽셀 생성 방법)

  • Park, Sanghyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.12 / pp.1520-1527 / 2019
  • Superpixel methods are widely used in the preprocessing stage of computer vision as a way to reduce computational complexity by simplifying images while maintaining their characteristics. It is common to generate superpixels of regular size and form based on pixel values rather than considering the characteristics of the image. In this paper, we propose a method to generate superpixels that considers the characteristics of an image according to the application. The proposed method consists of two steps. The first step oversegments an image so that its boundary information is well preserved. In the second step, superpixels are merged based on similarity to produce the desired number of superpixels, where the form of the superpixels is controlled by limiting their maximum size. Experimental results show that the proposed method preserves the boundaries of an image more accurately than the existing method.
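The merge stage can be illustrated with a small union-find sketch over an over-segmentation. The data layout (region mean values, an adjacency set, a similarity heap) is my own assumption about how such a stage might be organized, not the paper's implementation; the size cap is what shapes the superpixels, as described in the abstract.

```python
import heapq

def merge_regions(means, sizes, edges, target, max_size):
    # means: {region id: mean intensity}, sizes: {region id: pixel count},
    # edges: set of (i, j) pairs of adjacent regions from the first stage.
    parent = {i: i for i in means}
    def find(i):                        # union-find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    heap = [(abs(means[i] - means[j]), i, j) for i, j in edges]
    heapq.heapify(heap)
    count = len(means)
    while count > target and heap:
        _, i, j = heapq.heappop(heap)
        ri, rj = find(i), find(j)
        if ri == rj or sizes[ri] + sizes[rj] > max_size:
            continue                    # skip: already merged, or too big
        total = sizes[ri] + sizes[rj]   # merge rj into ri, update the mean
        means[ri] = (means[ri] * sizes[ri] + means[rj] * sizes[rj]) / total
        sizes[ri] = total
        parent[rj] = ri
        count -= 1
    return parent, count
```

Similarities are computed once here for simplicity; a production version would refresh the affected heap entries after each merge.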

Improvement of Power Consumption of Canny Edge Detection Using Reduction in Number of Calculations at Square Root (제곱근 연산 횟수 감소를 이용한 Canny Edge 검출에서의 전력 소모개선)

  • Hong, Seokhee;Lee, Juseong;An, Ho-Myoung;Koo, Jihun;Kim, Byuncheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.6 / pp.568-574 / 2020
  • In this paper, we propose a method to reduce the square root computation, which has high computational complexity, in the Canny edge detection algorithm used in image processing. The proposed method reduces the number of gradient-magnitude calculations by exploiting pixel continuity, using a specific hole pattern instead of computing the square root at every pixel. Across various test images and numbers of hole pixels, the match rate is about 97% for one hole, falling to 94%, 90%, and 88% as the number of holes increases, while the computation time decreases by about 0.2 ms for one hole and by 0.398 ms, 0.6 ms, and 0.8 ms as the number of holes increases. With two-hole pixels, this method is expected to enable low-power embedded vision systems with high accuracy and a reduced operation count.
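The idea of skipping the square root at patterned "hole" pixels can be sketched as follows. The every-other-column pattern and the neighbor-averaging fill are assumptions for illustration; the paper's exact pattern may differ.

```python
import math

def magnitude_with_holes(gx, gy, hole_every=2):
    """Gradient magnitude sqrt(gx^2 + gy^2), but the sqrt is computed only
    at non-hole columns; hole columns are filled by averaging their exact
    neighbours, trading a little accuracy for fewer square roots."""
    h, w = len(gx), len(gx[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x % hole_every:          # hole column: no sqrt here
                continue
            mag[y][x] = math.hypot(gx[y][x], gy[y][x])
        for x in range(w):
            if x % hole_every:          # fill holes from exact neighbours
                left = mag[y][x - 1]
                right = mag[y][x + 1] if x + 1 < w else left
                mag[y][x] = 0.5 * (left + right)
    return mag
```

On smooth gradient fields the interpolated values match the exact ones closely, which is the continuity assumption the abstract relies on.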

An Analysis Study of SW·AI elements of Primary Textbooks based on the 2015 Revised National Curriculum (2015 개정교육과정에 따른 초등학교 교과서의 SW·AI 요소 분석 연구)

  • Park, SunJu
    • Journal of The Korean Association of Information Education / v.25 no.2 / pp.317-325 / 2021
  • In this paper, the degree to which SW·AI elements and CT elements are reflected was investigated and analyzed for a total of 44 Korean, social studies, moral education, mathematics, and science textbooks based on the 2015 revised curriculum. The analysis showed that most of the ICT activities of data collection, data analysis, and data presentation were not reflected; among the SW·AI content elements, algorithm and programming elements were not reflected; and among the CT elements, there were no abstraction, automation, or generalization elements. Therefore, to effectively implement SW·AI convergence education in elementary school subjects, ICT utilization activities should be expanded into SW·AI utilization activities. Teachers need training on understanding SW·AI convergence education and on improving teaching and learning methods that use SW·AI. In addition, it is necessary to establish an information curriculum and secure separate class hours for substantive SW·AI education.

Efficient and Secure User Authentication and Key Agreement In SIP Networks (효율적이고 안전한 SIP 사용자 인증 및 키 교환)

  • Choi, Jae-Duck;Jung, Sou-Hwan
    • Journal of the Korea Institute of Information Security & Cryptology / v.19 no.3 / pp.73-82 / 2009
  • This paper proposes an efficient and secure user authentication and key agreement scheme to replace HTTP Digest and TLS between the SIP UA and server. Although a number of security schemes for authentication and key exchange in SIP networks have been proposed, they still suffer from heavy computation overhead on the UA's side. The proposed scheme uses HTTP Digest authentication and employs the Diffie-Hellman algorithm to protect the user password against dictionary attacks. For a resource-constrained SIP UA, the proposed scheme delegates cryptographically expensive operations, such as exponentiation, to the SIP server, so it is more efficient than existing schemes in terms of energy consumption on the UA. Furthermore, the proposed scheme can be easily applied to deployed SIP networks since it does not require major modification of the signaling path associated with the current SIP standard.
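The dictionary-attack defence can be illustrated with a toy Diffie-Hellman exchange: because the digest mixes in a fresh DH shared secret, an eavesdropper cannot test password guesses offline without solving the DH problem. The tiny textbook group (p = 23, g = 5) and the SHA-256 construction are for illustration only, not the paper's exact protocol.

```python
import hashlib
import secrets

P, G = 23, 5  # textbook toy group; real deployments use large MODP groups

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1    # private exponent
    return x, pow(G, x, P)              # (private, public = g^x mod p)

def session_key(password, my_priv, peer_pub):
    # The shared secret g^(ab) mod p is folded into the hash alongside the
    # password, so a captured digest cannot be brute-forced against a
    # dictionary without also breaking Diffie-Hellman.
    shared = pow(peer_pub, my_priv, P)
    return hashlib.sha256(f"{password}:{shared}".encode()).hexdigest()
```

Both sides derive the same key from their own private value and the peer's public value; in the paper's setting the UA additionally offloads the exponentiations to the server.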

A Heuristic for Service-Parts Lot-Sizing with Disassembly Option (분해옵션 포함 서비스부품 로트사이징 휴리스틱)

  • Jang, Jin-Myeong;Kim, Hwa-Joong;Son, Dong-Hoon;Lee, Dong-Ho
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.2 / pp.24-35 / 2021
  • Due to increasing awareness of the treatment of end-of-use/life products, disassembly has been a fast-growing research area over recent decades. This paper introduces a novel lot-sizing problem that has not been studied in the literature: service-parts lot-sizing with a disassembly option. The disassembly option implies that the demands for service parts can be fulfilled not only by newly manufactured parts but also by disassembled parts, i.e., parts recovered from the disassembly of end-of-use/life products. The objective of the considered problem is to maximize the total profit, i.e., the revenue from selling the service parts minus the total cost of fixed setups, production, disassembly, inventory holding, and disposal over a planning horizon. This paper proves that the single-period version of the problem is NP-hard and suggests a heuristic combining a simulated annealing algorithm with a linear-programming relaxation. Computational experiments show that the heuristic generates near-optimal solutions within reasonable computation time, which implies that it is a viable optimization tool for service-parts inventory management. In addition, sensitivity analyses indicate that setting an appropriate price for disassembled parts and an appropriate collection amount of end-of-use/life products is very important for sustainable service-parts systems.
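The simulated annealing half of such a hybrid can be sketched generically. This skeleton is illustrative, not the paper's heuristic: in the paper's setting, `neighbor` would perturb the discrete setup/disassembly decisions and `profit` would solve the remaining linear program for the continuous quantities; here they are plain callables supplied by the caller.

```python
import math
import random

def simulated_annealing(init, neighbor, profit,
                        T0=100.0, cooling=0.95, iters=500):
    """Maximize profit() by simulated annealing over states reachable
    through neighbor(), with geometric cooling."""
    random.seed(0)                      # deterministic for illustration
    cur = best = init
    cur_val = best_val = profit(init)
    T = T0
    for _ in range(iters):
        cand = neighbor(cur)
        val = profit(cand)
        # Always accept improvements; accept worsenings with
        # Boltzmann probability exp(delta / T).
        if val > cur_val or random.random() < math.exp((val - cur_val) / T):
            cur, cur_val = cand, val
            if val > best_val:
                best, best_val = cand, val
        T *= cooling
    return best, best_val
```

On a toy objective such as maximizing -(x - 3)^2 over the integers, the skeleton quickly settles near x = 3; the early high-temperature phase is what lets it escape poor setup configurations before the LP-evaluated landscape freezes.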