• Title/Summary/Keyword: computational experiment


A Study on the Thermal Flow Analysis for Heat Performance Improvement of a Wireless Power Charger (열 유동해석을 통한 무선충전기 발열 성능 향상에 관한 연구)

  • Kim, Pyeong-Jun;Park, Dong-Kyou
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.7
    • /
    • pp.310-316
    • /
    • 2019
  • In automotive applications, customers demand high efficiency and a variety of convenience functions, and this demand is increasing steadily. This study analyzes heat flow to improve the heat dissipation performance of the PCB (printed circuit board) in a recently developed WPC (wireless power charger). The charging performance of the wireless charger is reduced by the power dissipation and thermal resistance of the PCB. Therefore, an optimal PCB design, layout, and placement of electronic parts is proposed through heat flow simulation, and the PCB design was analyzed and decided at each design stage. An experimental test was then performed to verify the consistency of the analysis results under actual environmental conditions. In this paper, the PCB modeling and transient heat flow simulation were performed using HyperLynx Thermal and FloTHERM. In addition, measurements were taken with an infrared thermal imaging camera and used to verify the analysis results. In the final comparison, the error between analysis and experiment was less than 10%, and the heat dissipation performance of the PCB was also improved.
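The heating problem this study addresses ultimately comes down to power dissipation working against thermal resistance. As a minimal illustration (not the paper's simulation; all numbers are hypothetical), a steady-state component-temperature estimate can be sketched as:

```python
# Rough steady-state junction-temperature estimate for a power device on a PCB:
# T_junction = T_ambient + P_dissipated * R_theta_ja.
# All values below are illustrative, not from the paper.

def junction_temp(t_ambient_c, power_w, r_theta_ja_c_per_w):
    """Steady-state junction temperature from ambient temperature,
    dissipated power, and junction-to-ambient thermal resistance."""
    return t_ambient_c + power_w * r_theta_ja_c_per_w

# Example: 1.2 W dissipated, 40 degC/W junction-to-ambient, 25 degC ambient
print(junction_temp(25.0, 1.2, 40.0))  # 73.0
```

Lowering either the dissipated power or the thermal resistance (e.g., by component placement and copper layout, as the paper optimizes through simulation) lowers the operating temperature.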

Unsupervised Non-rigid Registration Network for 3D Brain MR images (3차원 뇌 자기공명 영상의 비지도 학습 기반 비강체 정합 네트워크)

  • Oh, Donggeon;Kim, Bohyoung;Lee, Jeongjin;Shin, Yeong-Gil
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.15 no.5
    • /
    • pp.64-74
    • /
    • 2019
  • Although non-rigid registration is in high demand in clinical practice, it has high computational complexity, and ensuring its accuracy and robustness is difficult. This study proposes a method of applying non-rigid registration to 3D magnetic resonance images of the brain in an unsupervised learning environment using a deep-learning network. The network receives images from two different patients as inputs, produces a feature vector between them, and transforms the target image to match the source image by creating a displacement vector field. The network is designed based on a U-Net shape so that feature vectors capturing both global and local differences between the two images can be constructed during registration. Because a regularization term is added to the loss function, a transformation similar to real brain movement is obtained after trilinear interpolation is applied. This method enables non-rigid registration with a single-pass deformation, receiving only two arbitrary images as inputs, through unsupervised learning. It can therefore run faster than non-learning-based registration methods that require iterative optimization. Our experiment was performed with 3D magnetic resonance images of 50 human brains, and measurement of the Dice similarity coefficient confirmed an improvement in similarity of approximately 16% after registration with our method. The method also showed performance similar to that of a non-learning-based method, with an approximately 10,000-fold speed increase. The proposed method can be used for non-rigid registration of various kinds of medical image data.
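The Dice similarity coefficient used to evaluate the registration has a standard definition; a minimal sketch follows, with toy 3D masks rather than the paper's data:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary volumes:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Two toy 4x4x4 masks overlapping on one slice of 16 voxels
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True   # 32 voxels
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True  # 32 voxels
print(dice_coefficient(a, b))  # 0.5
```

A Dice value of 1.0 means perfect overlap; the ~16% improvement reported above is measured on this scale after registration.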

Single Image Super Resolution Based on Residual Dense Channel Attention Block-RecursiveSRNet (잔여 밀집 및 채널 집중 기법을 갖는 재귀적 경량 네트워크 기반의 단일 이미지 초해상도 기법)

  • Woo, Hee-Jo;Sim, Ji-Woo;Kim, Eung-Tae
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.429-440
    • /
    • 2021
  • With the recent development of deep convolutional neural networks, deep-learning techniques applied to single-image super-resolution are showing good results. One existing deep-learning-based super-resolution technique is RDN (Residual Dense Network), in which initial feature information is transmitted to the last layer through residual dense blocks, and subsequent layers are restored using the input information of previous layers. However, when all hierarchical features are connected and learned and a large number of residual dense blocks are stacked, the network requires a large number of parameters and a huge computational load despite its good performance; training takes a long time, processing is slow, and the model is not applicable to mobile systems. In this paper, we use the residual dense structure, a continuous memory structure that reuses previous information, together with a residual dense channel attention block that applies the channel attention method, which weights channels by importance according to the feature map of the image. We propose a method that can increase the depth to obtain a large receptive field while keeping the model compact. In our experiments, the proposed network obtained a PSNR on average only 0.205 dB lower than RDN at 4x magnification, with about 1.8 times faster processing, about 10 times fewer parameters, and about 1.74 times less computation.
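The channel attention step can be sketched in squeeze-and-excitation style: global average pooling, a bottleneck transform, and sigmoid gating that rescales each channel. This is a minimal NumPy illustration with random weights, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feature_map, w1, w2):
    """Channel attention on a (C, H, W) feature map:
    global average pooling -> ReLU bottleneck -> sigmoid -> per-channel rescale."""
    squeeze = feature_map.mean(axis=(1, 2))          # (C,) channel descriptors
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck, (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return feature_map * weights[:, None, None]      # reweight channels

c, r = 8, 4                                          # channels, reduction ratio
x = rng.standard_normal((c, 16, 16))
w1 = rng.standard_normal((c // r, c)) * 0.1          # illustrative weights
w2 = rng.standard_normal((c, c // r)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Because each gate lies in (0, 1), attention only attenuates channels deemed less important; the feature-map shape is unchanged, so the block drops into a residual dense structure without altering tensor sizes.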

MF sampler: Sampling method for improving the performance of a video based fashion retrieval model (MF sampler: 동영상 기반 패션 검색 모델의 성능 향상을 위한 샘플링 방법)

  • Baek, Sanghun;Park, Jonghyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.329-346
    • /
    • 2022
  • Recently, as the market for short-form videos on social media (Instagram, TikTok, YouTube) has grown, research using them is active in the artificial intelligence field. A representative research area is Video-to-Shop, which detects fashion products in videos and searches for product images. In such video-based artificial intelligence models, product features are extracted using convolution operations. However, due to limited computational resources, extracting features from every frame in a video is practically impossible. For this reason, existing studies have improved model performance by sampling only part of the frames or by developing sampling methods that exploit the subject's characteristics. In existing Video-to-Shop studies, frames are sampled randomly or at even intervals. However, this sampling strategy degrades the performance of a fashion-product search model because it also samples noise frames in which no product appears. This paper therefore proposes MF (Missing Fashion items on frame) sampler, a sampling method that removes noise frames and improves the performance of the search model. MF sampler alleviates resource limitations through a keyframe mechanism, and improves search performance by removing noise frames with a noise-detection model. Experiments confirmed that the proposed method improves the model's performance and makes model training more effective.
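The core idea of filtering out noise frames before even-interval sampling can be sketched as follows. The per-frame scores and threshold are assumptions for illustration; the paper's actual noise-detection model is a learned classifier:

```python
def filtered_even_sample(frame_scores, n_samples, threshold=0.5):
    """Sample n_samples frame indices at even intervals, but only from frames
    whose (hypothetical) fashion-item detection score reaches `threshold`;
    noise frames, where no product is visible, are excluded first."""
    valid = [i for i, s in enumerate(frame_scores) if s >= threshold]
    if not valid:
        return []
    if len(valid) <= n_samples:
        return valid
    step = len(valid) / n_samples
    return [valid[int(k * step)] for k in range(n_samples)]

# 10 frames; frames 0-2 and 9 carry no product (low scores)
scores = [0.1, 0.2, 0.1, 0.9, 0.8, 0.95, 0.7, 0.85, 0.9, 0.3]
print(filtered_even_sample(scores, 3))  # [3, 5, 7]
```

Plain even-interval sampling over all 10 frames would have picked up at least one product-free frame; filtering first keeps the sample budget on frames that actually contain the item.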

Comparative study of laminar and turbulent models for three-dimensional simulation of dam-break flow interacting with multiarray block obstacles (다층 블록 장애물과 상호작용하는 3차원 댐붕괴흐름 모의를 위한 층류 및 난류 모델 비교 연구)

  • Chrysanti, Asrini;Song, Yangheon;Son, Sangyoung
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.spc1
    • /
    • pp.1059-1069
    • /
    • 2023
  • Dam-break flow occurs when an elevated dam suddenly collapses, resulting in the catastrophic release of rapid, uncontrolled impounded water. This study compares laminar and turbulent closure models for simulating three-dimensional dam-break flows using OpenFOAM. The Reynolds-Averaged Navier-Stokes (RANS) approach, specifically the k-ε model, is employed to capture turbulent dissipation. Two scenarios are evaluated, based on a laboratory experiment and a modified multi-layered block obstacle configuration. Both models represent dam-break flows effectively, with the turbulence closure reducing oscillations; however, excessive dissipation in turbulent models can underestimate water surface profiles. Improving the numerical schemes and grid resolution enhances reproduction of the flow, particularly near structures and in turbulent regions. Model stability is influenced more by the numerical schemes and grid refinement than by the use of a turbulence closure. The k-ε model's reliance on time-averaging poses challenges in representing dam-break profiles with pronounced discontinuities and unsteadiness. While turbulence models require extensive computational effort, their performance improvement over laminar models is marginal. For better representation, more advanced approaches such as Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) are recommended, which necessitate small spatial and temporal scales. This research provides insight into the applicability of different modeling approaches for simulating dam-break flows, emphasizing the importance of accurate representation near structures and in turbulent regions.
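For reference, the turbulent dissipation captured by the standard k-ε closure enters the momentum equations through an algebraic eddy-viscosity relation; the values below are illustrative, not from the study:

```python
def eddy_viscosity_k_epsilon(k, epsilon, c_mu=0.09):
    """Turbulent (eddy) viscosity in the standard k-epsilon closure:
    nu_t = C_mu * k^2 / epsilon  [m^2/s], with the usual C_mu = 0.09.
    k is turbulent kinetic energy [m^2/s^2], epsilon its dissipation rate [m^2/s^3]."""
    return c_mu * k * k / epsilon

# Illustrative values: k = 0.05 m^2/s^2, epsilon = 0.1 m^2/s^3
print(eddy_viscosity_k_epsilon(0.05, 0.1))  # 0.00225
```

The extra viscosity nu_t is what damps oscillations in the RANS runs; when it is overestimated, it also produces the excessive dissipation and underestimated surface profiles noted above.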

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous-City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT), and it includes many video cameras networked together. Together with sensors, these networked cameras provide the main input data for many U-City services, constantly generating a huge amount of video information: genuinely big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often necessary to analyze the accumulated video data to detect an event or find a person, which requires a great deal of computational power and usually takes a long time. Current research seeks to reduce the processing time of big video data, and cloud computing is a good solution. Among the many cloud-computing methodologies that can address this problem, MapReduce is interesting and attractive: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth of the data produced by networked cameras; we face real big data when dealing with images from high-quality video cameras. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities thanks to these methodologies. Video data are unstructured, so good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance: a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable.
It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming IN component. The "video monitor" for the video images consists of the "video translator" and the "protocol manager". The "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked cameras and delivers them to the "storage client"; it also manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage, and it helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol. The "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video image. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage: Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology for analyzing the video images using MapReduce: the workflow of video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and the proposed system worked well; the evaluation results are presented with analysis. With our cluster system, we used compressed 1920×1080 (FHD) video data, the H.264 codec, and HDFS as video storage, and we measured the processing time according to the number of frames per mapper.
Tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
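The map-and-reduce workflow over frames described above can be sketched as follows. The per-frame analyzer and the data are stand-ins for illustration, not the paper's Hadoop implementation:

```python
from collections import defaultdict

# Toy MapReduce over video frames: each mapper receives a chunk of frames and
# emits (event_type, 1) pairs; the reducer sums the counts per event type.
# "detect_event" stands in for a real per-frame analyzer (e.g., figure detection);
# here a frame is simply represented by a brightness value.

def detect_event(frame):
    return "bright" if frame > 128 else "dark"

def map_phase(frame_chunk):
    return [(detect_event(f), 1) for f in frame_chunk]

def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

frames = [200, 50, 130, 90, 250, 10]                          # stand-in frames
chunks = [frames[i:i + 2] for i in range(0, len(frames), 2)]  # frames per mapper
pairs = [p for chunk in chunks for p in map_phase(chunk)]
print(reduce_phase(pairs))  # {'bright': 3, 'dark': 3}
```

The "number of frames per mapper" measured in the paper corresponds to the chunk size here: larger chunks mean fewer mappers with more work each, which is exactly the split-size trade-off the evaluation traces.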

A study on the use of a Business Intelligence system : the role of explanations (비즈니스 인텔리전스 시스템의 활용 방안에 관한 연구: 설명 기능을 중심으로)

  • Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.155-169
    • /
    • 2014
  • With rapid advances in technology, organizations increasingly depend on information systems in their decision-making processes. Business Intelligence (BI) systems, in particular, have become a mainstay in dealing with complex problems in an organization, partly because a variety of advanced computational methods from statistics, machine learning, and artificial intelligence can be applied to solve business problems such as demand forecasting. In addition to the ability to analyze past and present trends, these predictive analytics capabilities provide enormous value to an organization's ability to respond to changes in markets, business risks, and customer trends. While the performance effects of BI system use in organizational settings have been studied, little has been discussed about the use of the predictive analytics technologies embedded in BI systems for forecasting tasks. This study therefore aims to identify factors that can help organizations take advantage of the advanced technologies of a BI system. More generally, a BI system can be viewed as an advisor, defined as one that formulates judgments or recommends alternatives and communicates them to the person in the role of judge, with the information generated by the BI system serving as advice that a decision maker (the judge) can follow. We therefore draw on findings from the advice-giving and advice-taking literature, focusing on the role of the system's explanations in users' advice-taking. It has been shown that advice discounting can occur when an advisor's reasoning, or the evidence justifying the advisor's decision, is not available. However, the majority of current BI systems merely provide a number, which may influence decision makers in accepting the advice and inferring its quality.
In this study, we explore the following key factors that can influence users' advice-taking within the setting of a BI system: explanations of how box-office grosses are predicted; the type of advisor, i.e., a system (data mining technique) or a human-based advice mechanism such as a prediction market (aggregated human advice) or a human advisor (individual human expert advice); users' evaluations of the provided advice; and individual differences among decision makers. Each subject performs four tasks through a series of display screens on a computer. First, given information about a movie such as its director and genre, subjects are asked to predict the movie's opening-weekend box office. Second, in light of the information generated by an advisor, subjects may adjust their original predictions if they wish. Third, they are asked to evaluate the value of the given information (e.g., perceived usefulness, trust, satisfaction). Lastly, a short survey is conducted to identify individual differences that may affect advice-taking. The results of the experiment show that subjects are more likely to follow system-generated advice than human advice when the advice is provided with an explanation. When subjects, as system users, consider the information provided by the system useful, they are also more likely to take the advice. In addition, individual differences affect advice-taking: subjects with more expertise regarding advisors, or who tend to agree with others, adjust their predictions to follow the advice, whereas subjects with more knowledge about movies are less affected by the advice, and their final decisions remain close to their original predictions. The advances in the predictive analytics of BI systems show great potential to support increasingly complex business decisions.
This study shows how the design of a BI system can influence users' acceptance of system-generated advice, and the findings provide valuable insights on how to leverage the advanced predictive analytics of a BI system in an organization's forecasting practices.

Characteristics of Flow and Sedimentation around the Embankment (방조제 부근에서의 흐름과 퇴적환경의 특성)

  • Lee Moon Ock;Park Il Heum;Lee Yeon Gyu
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.3 no.4
    • /
    • pp.37-55
    • /
    • 2000
  • Two-dimensional numerical experiments and field surveys were conducted to clarify environmental variations in flow and sedimentation in the adjacent seas after the construction of a tidal embankment. Flow velocities and water levels in the bay decreased after the construction of the barrage. When freshwater was instantly released into the bay, the flow conditions were unaltered, with the exception of a minor variation in velocities and tidal levels around the sluices at ebb flow. The computational results showed that freshwater released at low water reached the outside of the bay and then returned inside with the tidal currents at high water. The sea regions in front of the embankment had a variety of sedimentary phases, such as clayish silt, silty clay, and sandy clayish silt, whereas clayish silt was prevalent in the middle of the bay. The skewness, which reflects the behaviour of the sediments, was ±0.1 in the regions in front of the embankment, while it was more than ±0.3 in the middle of the bay. Analysis of drilling samples acquired from the front of the sluice gates showed that the lower part of the sediments consists of very fine silty or clayish grains, while the upper surface layer consisted of shellfish, such as oyster or barnacle, with a thickness of 40~50 cm. It therefore appears that the lower sediments were part of an intertidal zone prior to the embankment construction, while the upper shellfish layer is debris from shellfish farms formed in the adjacent seas after the construction. This difference in sedimentary phases reflects the influence of the tidal embankment construction.

Study on the Heat Transfer Phenomenon around Underground Concrete Digesters for Biogas Production Systems (생물개스 발생시스템을 위한 지하매설콘크리트 다이제스터의 열전달에 관한 연구)

  • 김윤기;고재균
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.22 no.1
    • /
    • pp.53-66
    • /
    • 1980
  • This work is concerned with analytical and experimental studies of the heat transfer phenomenon around underground concrete digesters used in biogas production systems. A mathematical and computational method was developed to estimate heat losses from an underground cylindrical concrete digester. To test its feasibility and to evaluate the thermal parameters of the materials involved, the method was applied to six physical model digesters. A cylindrical concrete digester was taken as the physical model, to which the mathematical model of heat balance can be applied. The mathematical model was transformed by means of the finite element method and used to analyze the temperature distribution with respect to several boundary conditions and design parameters. The design parameters of the experimental digesters were: three sizes (40 cm by 80 cm, 80 cm by 160 cm, and 100 cm by 200 cm in diameter and height); two levels of insulation material (plain concrete, and vermiculite mixed into the concrete); and two types of installation (underground and half-exposed). For the purposes of this study, the liquid within the digester was substituted with water, and its temperature was controlled at five levels (35°C, 30°C, 25°C, 20°C, and 15°C); the ambient air temperature and ground temperature were measured outside the system under natural winter climate conditions. The following results were drawn from the study. 1. The analytical method, by which estimated values of the temperature distribution around a cylindrical digester were obtained, was found generally acceptable from a comparison of the estimated values with the measured ones. However, the difference between estimated and measured temperatures tended to increase considerably when the ambient temperature was relatively low.
This was mainly related to variations in the input parameters applied in the numerical analysis, including the thermal conductivity of the soil. Consequently, improving these input data is expected to yield more refined estimates. 2. The difference between estimated and measured heat losses showed a trend similar to that of the temperature distribution discussed above. 3. A map of isothermal lines drawn from the estimated temperature distribution was found to be very useful for observing the direction and rate of heat transfer within the boundary. From this analysis, it was interpreted that most of the heat loss passes through the triangular section bounded within 45 degrees toward the wall at the bottom edge of the digester; therefore, any effective insulation should be concentrated in this region. 4. It was verified by experiment that the heat loss per unit volume of liquid decreases as the size of the digester increases. For instance, at a liquid temperature of 35°C, the heat loss per unit volume from the 0.1 m³ digester was 1,050 kcal/hr·m³, while that from the 1.57 m³ digester was 150 kcal/hr·m³. 5. As insulation, the vermiculite concrete was consistently superior to the plain concrete. At liquid temperatures ranging from 15°C to 35°C, the reduction in heat loss ranged from 5% to 25% for the half-exposed digester, and from 10% to 28% for the fully underground digester. 6. Comparing the half-exposed and underground digesters, the heat loss from the former was 1.6 to 2.6 times that from the latter, evidence that the underground digester conserves heat better during winter.
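In its simplest steady-state form, the heat-loss estimate at the heart of such an analysis reduces to radial conduction through a cylindrical wall (Fourier's law). The geometry and conductivity below are illustrative assumptions, not the paper's measured values:

```python
import math

def cylinder_wall_heat_loss(t_inner, t_outer, k, r_inner, r_outer, height):
    """Steady radial conduction through a cylindrical wall (Fourier's law):
    Q = 2*pi*k*H*(T_in - T_out) / ln(r_out / r_in), in watts."""
    return (2.0 * math.pi * k * height * (t_inner - t_outer)
            / math.log(r_outer / r_inner))

# Illustrative digester: 0.5 m inner radius, 0.1 m concrete wall
# (k ~ 1.4 W/m.K), 2 m tall, 35 degC liquid against 10 degC surrounding soil
q = cylinder_wall_heat_loss(35.0, 10.0, 1.4, 0.5, 0.6, 2.0)
print(round(q, 1))  # heat loss in watts
```

The full finite-element model in the paper extends this one-dimensional balance to two dimensions with soil boundary conditions, which is what reveals the concentration of losses near the bottom edge.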

Dehumidification and Temperature Control for Green Houses using Lithium Bromide Solution and Cooling Coil (리튬브로마이드(LiBr) 용액의 흡습성질과 냉각코일을 이용한 온실 습도 및 온도 제어)

  • Lee, Sang Yeol;Lee, Chung Geon;Euh, Seung Hee;Oh, Kwang Cheol;Oh, Jae Heun;Kim, Dea Hyun
    • Journal of Bio-Environment Control
    • /
    • v.23 no.4
    • /
    • pp.337-341
    • /
    • 2014
  • Due to the high ambient air temperatures in summer in Korea, growing crops in a greenhouse normally requires cooling and dehumidification. Although various cooling and dehumidification methods have been presented, many practical obstacles remain, such as excessive energy use, cost, and performance. To overcome these problems, lab-scale experiments on dehumidification and cooling for greenhouses were performed using lithium bromide (LiBr) solution and a cooling coil. In this study, preliminary dehumidification and cooling experiments were done using LiBr solution as the dehumidifying material and the cooling coil separately, and then the combined system was tested. Hot, humid air was dehumidified from 85% to 70% relative humidity by passing through a pad soaked with LiBr, and cooled from 308 K to 299 K through the cooling coil. Computational Fluid Dynamics (CFD) analysis and an analytical solution were applied to the change of air temperature by heat transfer. The simulations gave final air temperatures of 299.7 K and 299.9 K, respectively, within 0.7 K of the experimental value, showing good agreement. These results suggest that an LiBr solution with a cooling coil system could be applicable in greenhouses.
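The cooling-coil step can be illustrated with a simple effectiveness model. The coil surface temperature and effectiveness below are assumed values chosen to reproduce the 308 K to 299 K drop reported above, not measurements from the study:

```python
def coil_outlet_temp(t_air_in, t_coil_surface, effectiveness):
    """Outlet air temperature of a cooling coil via the effectiveness model:
    T_out = T_in - eps * (T_in - T_surface), temperatures in kelvin."""
    return t_air_in - effectiveness * (t_air_in - t_coil_surface)

# Illustrative: 308 K inlet air, 296 K coil surface, effectiveness 0.75
print(coil_outlet_temp(308.0, 296.0, 0.75))  # 299.0
```

The CFD and analytical solutions in the study predict this outlet temperature from first principles; the effectiveness form above is just a compact way to express the same heat-exchange outcome.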