• Title/Summary/Keyword: Processing Speed

A study on the manufacture of humidity sensors using layered silicate nanocomposite materials (층상 실리케이트계 나노복합 소재 적용 습도센서 제조에 관한 연구)

  • Park, Byoung-Ki
    • Industry Promotion Research / v.9 no.1 / pp.31-38 / 2024
  • In this study, we evaluated the properties of humidity-sensitive films made from layered silicate-based nanocomposites. For the fabrication of the nanocomposite materials, we selected organically modified layered silicates, specifically Cloisite® and Bentone®, which were treated with quaternary ammonium salts. The impedance of the humidity sensors containing organically modified montmorillonite/hectorite clay decreased with increasing relative humidity (RH%). The Cloisite® humidity sensor exhibited slightly better impedance linearity and hysteresis than the Bentone® 38 humidity sensor. However, the impedance of the sensor with Bentone® 38 was the lowest among the sensors compared, including the Cloisite®-modified ones. Comparing the Cloisite®-modified sensors with one another, we observed different moisture absorption characteristics depending on the hydrophilicity of the organic treatment. The response speed of Cloisite® 93A tended to be slower because of differences in moisture evaporation rates caused by its hydrophilic organic components. Based on these results, moisture barriers using organically modified layered silicates may exhibit slightly lower moisture absorption than conventional polymer-based moisture barriers; however, their excellent stability, simple processing, and cost-effectiveness make them suitable for humidity sensor applications.
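For reference, a minimal Python sketch of how the impedance linearity and hysteresis of such a sensor are commonly quantified from an RH sweep; the log-scale treatment is standard practice for impedance-type humidity sensors, and all data values below are hypothetical, not taken from the paper.

    import numpy as np

    # Hypothetical impedance readings (kOhm) over an RH sweep: humidification
    # (adsorption) branch and dehumidification (desorption) branch.
    rh    = np.array([20, 40, 60, 80, 90], dtype=float)   # relative humidity, %RH
    z_ads = np.array([980, 310, 95, 28, 12], dtype=float)
    z_des = np.array([900, 260, 78, 24, 12], dtype=float)

    # Impedance-type sensors are usually assessed on a log scale.
    log_ads, log_des = np.log10(z_ads), np.log10(z_des)

    # Linearity: correlation of log-impedance with RH (negative, since impedance
    # falls as RH rises); |r| close to 1 means a near-linear response.
    r = np.corrcoef(rh, log_ads)[0, 1]

    # Hysteresis: maximum separation of the two branches, as a percentage of the
    # full log-impedance span.
    hyst = np.max(np.abs(log_ads - log_des)) / (log_ads.max() - log_ads.min()) * 100

    print(f"linearity r = {r:.3f}, hysteresis = {hyst:.1f}% of span")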

Simulation analysis and evaluation of decontamination effect of different abrasive jet process parameters on radioactively contaminated metal

  • Lin Zhong;Jian Deng;Zhe-wen Zuo;Can-yu Huang;Bo Chen;Lin Lei;Ze-yong Lei;Jie-heng Lei;Mu Zhao;Yun-fei Hua
    • Nuclear Engineering and Technology / v.55 no.11 / pp.3940-3955 / 2023
  • A new method is proposed for numerically predicting and evaluating the decontamination effect of abrasive jet decontamination on radioactively contaminated metal. Based on a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) simulation, the motion patterns and distribution of the abrasives can be predicted, and the decontamination effect can be evaluated by image processing and recognition technology. The impact of three key parameters (impact distance, inlet pressure, and abrasive mass flow rate) on the decontamination effect is revealed. Experiments were also conducted to verify the reliability of the decontamination effect and the numerical simulation method. The results show that 60Co and other homogeneous solid-solution radioactive pollutants can be removed by the abrasive jet, with an average Co removal rate exceeding 80%. The proposed numerical simulation and evaluation method is reliable because of the good agreement between predicted and actual values: the predicted and actual abrasive distribution diameters are Ф57 and Ф55, the total coverage rates 26.42% and 23.50%, and the average impact velocities 81.73 m/s and 78.00 m/s. Further analysis shows that the impact distance significantly affects the distribution of abrasive particles on the target surface: the coverage rate of the core area first increases and then decreases as the nozzle impact distance grows, reaching a maximum of 14.44% at 300 mm. It is recommended to set the impact distance around 300 mm, where the core-area coverage of the abrasive is largest and the impact velocity is stable at its highest value of 81.94 m/s. The nozzle inlet pressure mainly affects the impact kinetic energy of the abrasive and has little effect on the distribution. The greater the inlet pressure, the greater the impact kinetic energy and the stronger the decontamination ability of the abrasive, but the energy consumption is higher as well. For the decontamination of radioactively contaminated metals, an inlet pressure of around 0.6 MPa is recommended, since most of the Co can be removed at this pressure. Appropriately increasing the abrasive mass and flow can enhance decontamination effectiveness. A total abrasive mass of 50 g per unit decontamination area is suggested, because the core-area coverage rate of the abrasive is relatively large under this condition and the nozzle wear is acceptable.
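As an illustration of the image-based evaluation step, a small Python sketch that computes coverage rates from a binarized photograph of the target surface; the random impact map and the core-area radius are assumptions for demonstration only.

    import numpy as np

    # Stand-in for a binarized post-test image of the target (1 = impacted pixel);
    # in practice this would come from thresholding a photograph.
    rng = np.random.default_rng(0)
    impact_map = (rng.random((400, 400)) < 0.25).astype(np.uint8)

    # Coverage rate of a circular "core area" centered on the jet axis
    # (the 60-pixel radius is an arbitrary choice for this sketch).
    h, w = impact_map.shape
    yy, xx = np.mgrid[:h, :w]
    core = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= 60 ** 2

    core_coverage  = impact_map[core].mean() * 100
    total_coverage = impact_map.mean() * 100
    print(f"core coverage {core_coverage:.2f}%, total coverage {total_coverage:.2f}%")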

A Study on the Capacity Review of One-lane Hi-pass Lanes on Highways : Focusing on Using Bootstrapping Techniques (고속도로 단차로 하이패스차로 용량 검토에 관한 연구 : 부트스트랩 기법 활용 중심으로)

  • Bosung Kim;Donghee Han
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.23 no.3 / pp.1-16 / 2024
  • The present highway design guidelines suggest that the capacity of a one-lane hi-pass lane is 2,000 veh/h for a mainline toll plaza and 1,700 veh/h for an interchange toll plaza. However, a study conducted in early 2010 presented the capacity of the mainline toll plaza as 1,476 to 1,665 veh/h/ln and that of the interchange toll plaza as 1,443 veh/h/ln. Accordingly, this study examined the feasibility of the currently proposed capacity of one-lane hi-pass lanes. Based on individual vehicle passage data collected from one-lane hi-pass gantries in 2021, capacities were calculated and compared using speed-flow relationship graphs and headways. In addition, the bootstrapping technique was introduced to utilize the headways, and new processing methods for the collected data were reviewed. As a result of the analysis, the one-lane hi-pass capacity could be estimated at 1,700 veh/h/ln for the interchange toll plaza and at least 1,700 veh/h/ln for the mainline toll plaza. Furthermore, by applying the bootstrap technique to the headway data, it was possible to present an estimated capacity similar to the observed capacity.
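A minimal Python sketch of the bootstrap idea applied to headways, assuming the usual relation capacity = 3600 / mean headway (in seconds); the headway sample below is synthetic, not the paper's data.

    import numpy as np

    rng = np.random.default_rng(42)
    # Synthetic saturation headways (seconds) at a one-lane hi-pass gantry.
    headways = rng.gamma(shape=9.0, scale=0.24, size=500)   # mean around 2.16 s

    def capacity_vph(h):
        """Capacity (veh/h/ln) implied by the mean headway in seconds."""
        return 3600.0 / np.mean(h)

    # Bootstrap: resample the headways with replacement, recompute capacity each time.
    boot = np.array([
        capacity_vph(rng.choice(headways, size=headways.size, replace=True))
        for _ in range(2000)
    ])

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"point estimate {capacity_vph(headways):.0f} veh/h/ln, "
          f"95% bootstrap interval [{lo:.0f}, {hi:.0f}]")

Resampling yields an interval for capacity rather than a single value, which is useful when observed saturation-flow periods are scarce.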

Comparative Study of Fish Detection and Classification Performance Using the YOLOv8-Seg Model (YOLOv8-Seg 모델을 이용한 어류 탐지 및 분류 성능 비교연구)

  • Sang-Yeup Jin;Heung-Bae Choi;Myeong-Soo Han;Hyo-tae Lee;Young-Tae Son
    • Journal of the Korean Society of Marine Environment & Safety / v.30 no.2 / pp.147-156 / 2024
  • The sustainable management and enhancement of marine resources are becoming increasingly important issues worldwide. This study addresses these challenges, focusing on the development and performance comparison of fish detection and classification models as part of a deep learning-based technique for assessing the effectiveness of marine resource enhancement projects initiated by the Korea Fisheries Resources Agency. The aim was to select the optimal model by training YOLOv8-Seg models of various sizes on a fish image dataset and comparing their performance metrics. The dataset used for model construction consisted of 36,749 images and label files covering 12 fish species, with data diversity enhanced through augmentation techniques applied during training. When the five YOLOv8-Seg model sizes were trained and validated under identical conditions, the medium-sized YOLOv8m-Seg model showed high learning efficiency and excellent detection and classification performance, with the shortest training time of 13 h 12 min, a mAP of 0.933, and an inference speed of 9.6 ms. Considering the balance between the performance metrics, it was deemed the most efficient model for meeting real-time processing requirements. Such real-time fish detection and classification models could enable effective surveys of marine resource enhancement projects, suggesting the need for ongoing performance improvement and further research.
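A sketch of the model-size comparison using the Ultralytics YOLO API; the dataset file fish_dataset.yaml and the training settings are assumptions, not the paper's exact configuration.

    # Train and validate the five YOLOv8-Seg variants under identical settings,
    # then compare their mask mAP. Requires the ultralytics package.
    from ultralytics import YOLO

    results = {}
    for size in ("n", "s", "m", "l", "x"):
        model = YOLO(f"yolov8{size}-seg.pt")                  # pretrained seg weights
        model.train(data="fish_dataset.yaml", epochs=100, imgsz=640)
        metrics = model.val()                                 # validation-split metrics
        results[size] = metrics.seg.map                       # mask mAP50-95
    print(results)   # pick the size with the best accuracy/speed balance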

Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.4 / pp.144-159 / 2014
  • Techniques for making a single panoramic image from multiple pictures are widely studied in many areas such as computer vision and computer graphics. A panoramic image can be applied in fields like virtual reality and robot vision that require wide-angle shots, as a useful way to overcome limitations such as the picture angle, resolution, and internal information of an image taken from a single camera. It is particularly meaningful in that a panoramic image usually provides a better sense of immersion than a plain image. Although there are many ways to build a panoramic image, most of them extract feature points and matching points from each image to compose the single panoramic image. In addition, these methods apply the RANSAC (RANdom SAmple Consensus) algorithm to the matching points and use a homography matrix to transform the images. The SURF (Speeded Up Robust Features) algorithm, used in this paper to extract feature points, relies on an image's gray-scale and local spatial information. SURF is widely used because it is robust to changes in image scale and viewpoint and is, additionally, faster than the SIFT (Scale-Invariant Feature Transform) algorithm. A shortcoming of SURF is that errors in the extracted feature points reduce the processing speed of the RANSAC algorithm, which may in turn increase the CPU usage rate. Erroneous matching points can be a critical factor degrading a panoramic image's accuracy and clarity. In this paper, to minimize matching-point errors, we used the RGB pixel values of the 3×3 region around each matching point's coordinates in an intermediate filtering step that removes wrong matching points. We also present analysis and evaluation results on the improved processing speed for producing a panoramic image, the CPU usage rate, the reduction rate of extracted matching points, and the accuracy.
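A condensed Python sketch of this pipeline, assuming OpenCV with the contrib modules (SURF lives in cv2.xfeatures2d); the 3×3 RGB neighborhood check is a simplified stand-in for the paper's filtering step, and the file names are placeholders.

    import cv2
    import numpy as np

    img1 = cv2.imread("right.jpg")   # placeholder input pair (img1 is warped onto img2)
    img2 = cv2.imread("left.jpg")

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = surf.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

    def patch_ok(m, thresh=30.0):
        """Keep a match only if the mean RGB of the 3x3 patches around both points agree."""
        (x1, y1), (x2, y2) = kp1[m.queryIdx].pt, kp2[m.trainIdx].pt
        p1 = img1[int(y1) - 1:int(y1) + 2, int(x1) - 1:int(x1) + 2]
        p2 = img2[int(y2) - 1:int(y2) + 2, int(x2) - 1:int(x2) + 2]
        if p1.size == 0 or p2.size == 0:      # keypoint too close to the image border
            return False
        d = p1.reshape(-1, 3).mean(axis=0) - p2.reshape(-1, 3).mean(axis=0)
        return np.linalg.norm(d) < thresh

    good = [m for m in matches if patch_ok(m)]   # pre-filter before RANSAC

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC rejects outliers

    pano = cv2.warpPerspective(img1, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
    pano[:img2.shape[0], :img2.shape[1]] = img2

Filtering matches before RANSAC lowers the outlier ratio, which is what reduces the RANSAC iteration cost and CPU occupancy discussed above.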

A Fluid Analysis Study on Centrifugal Pump Performance Improvement by Impeller Modification (원심펌프 회전차 Modification시 성능개선에 관한 유동해석 연구)

  • Lee, A-Yeong;Jang, Hyun-Jun;Lee, Jin-Woo;Cho, Won-Jeong
    • Journal of the Korean Institute of Gas / v.24 no.2 / pp.1-8 / 2020
  • A centrifugal pump transfers energy to a fluid through the centrifugal force generated by rotating an impeller at high speed; it is a major process facility at LNG production bases, used for example as a vaporization seawater pump or as an industrial water and firefighting pump using seawater. Currently, pumps at LNG plant sites are subject to operating conditions that vary over long periods depending on the amount of supply required by customers. Pumps in particular account for a large share of energy consumption at a plant site, and if they are not run at the optimum operating point, they can incur enormous energy losses over long-term plant operation. To address this, it is necessary to identify the causes of performance degradation through flow analysis under varying pump operating conditions and to determine the optimal operating efficiency. Evaluating operating efficiency experimentally incurs considerable time and cost, for example in reproducing on-site operating conditions and manufacturing test equipment. If the pump's performance does not suit the site and needs to be reduced, one can change the rotational speed or handle a special liquid of high viscosity or solids content. In particular, to avoid disruptions in the operation of LNG production bases, a technology is required that satisfies the required performance conditions by machining the pump's existing impeller within a short time. Therefore, in this study, the modified 3D model of the pump impeller was analyzed with the ANSYS CFX program. In addition, the flow analysis results were processed numerically with MATLAB's Curve Fitting Toolbox to verify the outer diameter correction theory.
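For context, the outer diameter correction rests on the standard impeller trimming relations, which approximate how flow rate Q, head H, and shaft power P scale with the impeller outer diameter D before (subscript 1) and after (subscript 2) machining; they hold best for modest trims:

    \[
    \frac{Q_2}{Q_1} = \frac{D_2}{D_1}, \qquad
    \frac{H_2}{H_1} = \left(\frac{D_2}{D_1}\right)^{2}, \qquad
    \frac{P_2}{P_1} = \left(\frac{D_2}{D_1}\right)^{3}
    \]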

Calculation of Dry Matter Yield Damage of Whole Crop Maize in Accordance with Abnormal Climate Using Machine Learning Model (기계학습 모델을 이용한 이상기상에 따른 사일리지용 옥수수 생산량 피해량)

  • Jo, Hyun Wook;Kim, Min Kyu;Kim, Ji Yung;Jo, Mu Hwan;Kim, Moonju;Lee, Su An;Kim, Kyeong Dae;Kim, Byong Wan;Sung, Kyung Il
    • Journal of The Korean Society of Grassland and Forage Science / v.41 no.4 / pp.287-294 / 2021
  • This study was conducted to calculate the damage to whole crop maize under abnormal climate using a forage yield prediction model built through machine learning. The forage yield prediction model was developed by processing collected whole crop maize and climate data with eight machine learning techniques, and Gyeonggi-do was selected as the study area. The model was built using the DeepCrossing technique (R²=0.5442, RMSE=0.1769), which had the highest accuracy among the machine learning techniques. The damage was calculated as the difference between the dry matter yields predicted under normal and abnormal climate. Under normal climate, the predicted dry matter yield varied by region, ranging from 15,003 to 17,517 kg/ha. Under abnormal temperature, precipitation, and wind speed, the predicted dry matter yield differed by region and abnormal climate level, ranging from 14,947 to 17,571, 14,986 to 17,525, and 14,920 to 17,557 kg/ha, respectively. The corresponding damage ranged from -68 to 89 kg/ha, -17 to 17 kg/ha, and -112 to 121 kg/ha, respectively, which could not be judged as damage. To calculate the damage to whole crop maize accurately, the amount of abnormal climate data used in the forage yield prediction model needs to be increased.
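A minimal Python sketch of the damage calculation, with a random forest standing in for the paper's DeepCrossing model; the features (temperature, precipitation, wind speed) and every value below are synthetic.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    # Synthetic training data: [temperature C, precipitation mm, wind speed m/s].
    X = rng.uniform([10, 500, 1], [30, 1500, 8], size=(300, 3))
    y = 16000 + 80 * X[:, 0] - 2 * np.abs(X[:, 1] - 1000) + rng.normal(0, 300, 300)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    normal   = np.array([[22.0, 1000.0, 3.0]])   # normal-climate inputs
    abnormal = np.array([[22.0, 1000.0, 7.5]])   # abnormal wind speed, rest unchanged

    # Damage = predicted yield under normal climate minus under abnormal climate.
    damage = model.predict(normal)[0] - model.predict(abnormal)[0]
    print(f"estimated dry matter yield damage: {damage:.0f} kg/ha")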

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE: Computing Practices and Letters / v.14 no.1 / pp.1-12 / 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used in the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output is categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a successful color recognition rate of 94.67% by using a large number of sample images captured in various environments, generating feature vectors that distinguish different colors, and utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time for generating a feature vector is 0.509 ms for a 150×113 resolution image. After the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to input images of general objects.
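A simplified CPU-side Python sketch of the feature construction: a saturation-weighted hue-saturation histogram matched against per-color reference vectors. The bin counts, weighting, and distance measure are assumptions; the paper additionally uses hue-intensity pairs and runs the comparison in parallel on the GPU.

    import cv2
    import numpy as np

    def feature_vector(bgr, h_bins=30, s_bins=8):
        """Normalized 2-D hue-saturation histogram, weighted by saturation."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        h, s = hsv[..., 0].ravel(), hsv[..., 1].ravel()
        # Saturation weighting lets chromatic pixels dominate the histogram.
        hist, _, _ = np.histogram2d(h, s, bins=[h_bins, s_bins],
                                    range=[[0, 180], [0, 256]], weights=s / 255.0)
        v = hist.ravel()
        return v / (v.sum() + 1e-9)

    def classify(bgr, references):
        """references: {'red': vec, 'black': vec, ...} built offline from samples."""
        v = feature_vector(bgr)
        return min(references, key=lambda c: np.linalg.norm(v - references[c]))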

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.101-107 / 2014
  • With the development of online services, databases have shifted from static structures to dynamic stream structures. Traditional data mining techniques have served as decision-making tools, for example in establishing marketing strategies and in DNA analysis. However, the ability to analyze real-time data quickly is essential in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations over parts of the database or over each transaction, instead of over all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction arrives, we can obtain the latest mining results reflecting real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for the performance analysis, we first consider the algorithms' total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is evaluated. Lastly, we show how stably the two algorithms mine databases with gradually increasing numbers of items. In terms of mining time and transaction processing, hMiner is faster than Lossy counting: since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage. hMiner must keep all of the information for each candidate frequent pattern in its hash buckets, while Lossy counting reduces this information using the lattice method: because the lattice can share items concurrently included in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner shows better scalability for the following reasons: as the number of items increases, the number of shared items decreases, weakening Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect deteriorates. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems although they require a significant amount of memory; their data structures therefore need to be made more efficient so that they can also be used in resource-constrained environments such as WSNs (wireless sensor networks).
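A compact Python sketch of Lossy counting over a stream of single items (the paper applies it to patterns rather than single items); the error parameter epsilon and the example stream are illustrative.

    import math

    class LossyCounter:
        def __init__(self, epsilon=0.01):
            self.w = math.ceil(1 / epsilon)   # bucket width
            self.n = 0                        # stream length so far
            self.entries = {}                 # item -> [count, max_error]

        def add(self, item):
            self.n += 1
            bucket = math.ceil(self.n / self.w)
            if item in self.entries:
                self.entries[item][0] += 1
            else:
                self.entries[item] = [1, bucket - 1]
            if self.n % self.w == 0:          # bucket boundary: prune rare entries
                self.entries = {k: v for k, v in self.entries.items()
                                if v[0] + v[1] > bucket}

        def frequent(self, support):
            """Items whose true frequency may reach support * n (no false negatives)."""
            thresh = (support - 1.0 / self.w) * self.n
            return [k for k, (c, _) in self.entries.items() if c >= thresh]

    lc = LossyCounter(epsilon=0.01)
    for x in ["a", "b", "a", "c", "a", "b"] * 500:
        lc.add(x)
    print(lc.frequent(0.2))   # items covering at least ~20% of the stream

The periodic pruning bounds memory while guaranteeing every true count is underestimated by at most epsilon times the stream length, which is the trade-off against hMiner's hash storage discussed above.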

A Study on Brand Identity of TV Programs in the Digital Culture - Focusing on the comparative research of current issue programs, and development - (디지털 문화에서 TV 방송의 브랜드 아이덴티티 연구 -시사 교양 프로그램의 사례비교 및 개발을 중심으로-)

  • Jeong, Bong-Keum;Chang, Dong-Ryun
    • Archives of Design Research / v.18 no.4 s.62 / pp.53-64 / 2005
  • The emergence of digital media as a new form of communication is something of a wonder, as well as a source of cultural tension. Industrial technologies that dramatically expand human abilities are being developed much faster than humans can adapt to them. Without exception, they create new cultural contents and forms by shaking the very foundation of our notions about human beings. The Korean broadcasting environment has entered the era of multi-media and multi-channel, as digital technology divided the media into network, cable, satellite, and internet. In this digital culture, broadcasting, as a medium of information delivery and communication, has more influence than ever. These changes in the broadcasting environment have turned TV viewers into new consumers who participate in, and play the main role in, active communication by choosing and using the media. As consumers with the power to select channels now stand at the center of broadcasting, this study attempts to systematize the question of the core identity of broadcasting through the lens of brand. Story schema theory can be applied as a cognitive-psychological tool for approaching these active consumers, since it explains the cognitive processes related to information processing; design with stories arises here as a case of a brand's storytelling. The scope of this study covers current affairs and educational programs on network TV from May to August 2005, and Korean and foreign programs were compared by broadcasting station. This study concludes that it is important to take channel identity into consideration in the brand strategy of each program. In particular, the leading programs of a station must not be treated as separate programs that have nothing to do with the station's identity; they must include the contents and forms that build the channel's identity. The study also reconfirmed that branding the anchorperson can be an important factor in a program's brand identity.
