• Title/Summary/Keyword: Algorithm Time Efficiency (알고리즘 시간효율성)


The Improvement of Computational Efficiency in KIM by an Adaptive Time-step Algorithm (적응시간 간격 알고리즘을 이용한 KIM의 계산 효율성 개선)

  • Hyun Nam;Suk-Jin Choi
    • Atmosphere
    • /
    • v.33 no.4
    • /
    • pp.331-341
    • /
    • 2023
  • Numerical forecasting models usually predict future states by performing time integration with a fixed, static time-step. A time-step that is too long can cause model instability and failure of the forecast simulation, while a time-step that is too short incurs unnecessary time-integration work. In numerical models, the time-step size can therefore be bounded by the CFL (Courant-Friedrichs-Lewy) condition, which acts as a necessary condition for finding a numerical solution. A static time-step uses the same fixed time-step for every integration step. By contrast, applying a different time-step to each integration step, while guaranteeing the stability of the solution as time advances, is called an adaptive time-step. An adaptive time-step algorithm presents the maximum usable time-step for each integration step based on the CFL condition. In this paper, the adaptive time-step algorithm is applied to the Korean Integrated Model (KIM), and suitable parameters for the algorithm are determined through monthly verifications of 10-day simulations (during January and July 2017) at about 12 km resolution. Compared with numerical results obtained by applying a 25-second static time-step to KIM on Supercomputer 5 (Nurion), the adaptive scheme shows similar forecast quality, provides the maximum available time-step for each integration step, and improves computational efficiency by reducing the total number of time integrations by 19%.
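
The CFL-bounded step selection described above can be sketched in a few lines of Python. This is a minimal 1-D advection sketch: the function names, grid spacing, wave speed, and CFL limit are illustrative choices, not values taken from KIM.

```python
def cfl_max_dt(max_speed, dx, cfl_limit=0.9):
    """Largest stable time-step allowed by the CFL condition,
    dt <= C * dx / u_max (1-D advection form)."""
    return cfl_limit * dx / max_speed

def adaptive_steps(total_time, dx, speeds, cfl_limit=0.9):
    """Advance to total_time, choosing at each step the maximum
    usable dt from the current fastest signal speed.
    `speeds` is a callable returning the current max wave speed."""
    t, n_steps = 0.0, 0
    while t < total_time:
        dt = min(cfl_max_dt(speeds(t), dx, cfl_limit), total_time - t)
        t += dt
        n_steps += 1
    return n_steps

# With a static 25 s step, one hour of integration takes 144 steps;
# if the flow permits a larger stable step, the adaptive scheme
# covers the same hour in fewer integrations.
static_steps = int(3600 / 25)
adaptive = adaptive_steps(3600.0, dx=12000.0, speeds=lambda t: 300.0)
```

The saving depends entirely on how often the flow allows a step larger than the static one, which is what the paper's monthly verifications quantify.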

Classification of Characteristics in Two-Wheeler Accidents Using Clustering Techniques (클러스터링 기법을 이용한 이륜차 사고의 특징 분류)

  • Heo, Won-Jin;Kang, Jin-ho;Lee, So-hyun
    • Knowledge Management Research
    • /
    • v.25 no.1
    • /
    • pp.217-233
    • /
    • 2024
  • The demand for two-wheelers has increased in recent years, driven by the growing delivery culture, and the number of two-wheelers has risen accordingly. Although two-wheelers are economically efficient in congested traffic conditions, reckless driving and ambiguous traffic laws have turned two-wheeler accidents into a significant social issue. Given the high fatality rate associated with two-wheelers, the severity and risk of these accidents are considerable. It is therefore crucial to thoroughly understand the characteristics of two-wheeler accidents by analyzing their attributes. In this study, the characteristics of two-wheeler accidents were categorized by applying the K-prototypes algorithm to two-wheeler accident data. As a result, the accidents were divided into four clusters, each showing distinct traits in terms of the roads where accidents occurred, the major laws violated, the types of accidents, and the times of accident occurrence. By tailoring enforcement methods and regulations to the specific characteristics of each type of accident, the incidence of two-wheeler accidents in metropolitan areas can be reduced, thereby enhancing road safety. Furthermore, by applying machine learning techniques to urban transportation and safety, this study adds to the body of related literature.
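
The K-prototypes dissimilarity that drives this clustering mixes numeric and categorical attributes; it can be sketched as below. The weight gamma, the prototypes, and the accident attributes are illustrative assumptions, not values from the paper's data.

```python
def kproto_distance(x_num, x_cat, c_num, c_cat, gamma=1.0):
    """K-prototypes dissimilarity: squared Euclidean distance on
    numeric attributes plus gamma times the number of mismatched
    categorical attributes."""
    num = sum((a - b) ** 2 for a, b in zip(x_num, c_num))
    cat = sum(a != b for a, b in zip(x_cat, c_cat))
    return num + gamma * cat

def assign_cluster(x_num, x_cat, prototypes, gamma=1.0):
    """Index of the nearest prototype; each prototype is a
    (numeric_part, categorical_part) pair."""
    return min(range(len(prototypes)),
               key=lambda k: kproto_distance(x_num, x_cat, *prototypes[k], gamma))

# Hypothetical accident record: (hour, speed) numeric;
# (road type, violation) categorical.
protos = [((3.0, 60.0), ("highway", "speeding")),
          ((14.0, 20.0), ("intersection", "signal"))]
cluster = assign_cluster((2.0, 55.0), ("highway", "speeding"), protos)
```

In the full algorithm, prototypes are then re-estimated (numeric means, categorical modes) and the assignment repeats until convergence.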

Development of a Prestack Generalized-Screen Migration Module for Vertical Transversely Isotropic Media (횡적등방성 매질에 적용 가능한 겹쌓기 전 Generalized-Screen 참반사 보정 모듈 개발)

  • Shin, Sungil;Byun, Joongmoo
    • Geophysics and Geophysical Exploration
    • /
    • v.16 no.2
    • /
    • pp.71-78
    • /
    • 2013
  • One-way wave-equation migration is much more computationally efficient than reverse-time migration, and it can provide a better image than migration algorithms based on ray theory. We have developed a prestack depth-migration module adopting the Generalized-Screen (GS) propagator designed for vertically transversely isotropic (VTI) media. Since the GS propagator retains higher-order terms in the Taylor expansion of the vertical slowness within the thin slab of the phase-screen propagator, GS migration can offer a more accurate image of complex subsurface structures with large lateral velocity variation or steep dip. To verify the validity of the developed GS migration module, we analyzed its accuracy as a function of the order of the GS propagator for VTI media (GSVTI propagator) and confirmed that the accuracy of wide-angle wavefield propagation increases as the order of the GS propagator increases. Using synthetic seismic data, we compared the migration results obtained from the isotropic GS migration module with those from the anisotropic GS migration module. The results show that anisotropic GS migration provides better images, and the improvement is more evident for steeply dipping structures and in strongly anisotropic media.
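
The higher-order expansion of the vertical slowness that distinguishes the GS propagator from the phase-screen propagator can be written, in one common isotropic form (the notation here is assumed; the paper's VTI expressions additionally involve the anisotropy parameters):

```latex
q(p, x) \;=\; \sqrt{u(x)^2 - p^2}
\;\approx\; q_0 \;+\; \frac{u^2 - u_0^2}{2\,q_0}
\;-\; \frac{\left(u^2 - u_0^2\right)^2}{8\,q_0^{3}} \;+\; \cdots,
\qquad q_0 = \sqrt{u_0^2 - p^2},
```

where u = 1/v is the local slowness, u0 a reference slowness, and p the horizontal slowness. The phase-screen propagator keeps only the first-order term; retaining higher orders is what improves accuracy at wide propagation angles.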

Techniques of XML Query Caching on the Web (웹에서의 XML 질의 캐쉬 기법)

  • Park, Dae-Sung;Kang, Hyun-Chul
    • The Journal of Society for e-Business Studies
    • /
    • v.11 no.1
    • /
    • pp.1-23
    • /
    • 2006
  • As data on the Web is increasingly in XML due to the proliferation of Web applications such as e-Commerce, rapid processing of XML queries is strongly required. One such technique is XML query caching: for frequently submitted queries, the results can be cached to guarantee fast responses to the same queries. In this paper, we propose techniques for XML query performance improvement whereby the set of node identifiers (NIS) for an XML query is cached. An NIS is the most commonly employed format for an XML query result, consisting of the identifiers of the XML elements that comprise the result. An NIS is well suited to the data-retrieval requirements of Web applications because reconstruction and/or modification of query results and integration of multiple query results can be done efficiently, and an NIS can be incrementally refreshed against updates to its source. When the query result is requested in XML, however, the NIS must be materialized by retrieving the source XML elements through their identifiers. In this paper, we consider three different types of NIS, proposing algorithms for their creation, materialization, and incremental refresh. All of them were implemented using an RDBMS. Through a detailed set of performance experiments, we show the efficiency of the proposed XML query caching techniques.
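
The core of identifier-set caching can be sketched as below. This is a minimal in-memory sketch: the class and method names are illustrative, and the paper's RDBMS-backed implementation and its three NIS variants are not modeled.

```python
class NISCache:
    """Query-result cache keyed by query string, storing node
    identifier sets (NIS) rather than serialized XML."""
    def __init__(self, elements):
        self.elements = elements      # id -> XML fragment (source store)
        self.cache = {}               # query -> set of node ids

    def put(self, query, node_ids):
        self.cache[query] = set(node_ids)

    def materialize(self, query):
        """Rebuild the XML result by fetching source elements by id."""
        ids = self.cache.get(query)
        if ids is None:
            return None               # cache miss
        return [self.elements[i] for i in sorted(ids)]

    def refresh_on_delete(self, node_id):
        """Incremental refresh: drop a deleted source node from every
        cached NIS instead of recomputing whole query results."""
        del self.elements[node_id]
        for ids in self.cache.values():
            ids.discard(node_id)

elems = {1: "<item>a</item>", 2: "<item>b</item>", 3: "<item>c</item>"}
c = NISCache(elems)
c.put("//item", [1, 2, 3])
c.refresh_on_delete(2)
result = c.materialize("//item")
```

The trade-off the abstract describes is visible here: refresh and result integration touch only identifier sets, while serving an XML answer pays the materialization cost of fetching elements by id.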


A Personal Digital Library on a Distributed Mobile Multiagents Platform (분산 모바일 멀티에이전트 플랫폼을 이용한 사용자 기반 디지털 라이브러리 구축)

  • Cho Young Im
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.12
    • /
    • pp.1637-1648
    • /
    • 2004
  • When digital libraries are developed as a traditional client/server system using a single agent in a distributed environment, several problems occur. First, as the search method is one-dimensional, the search results have little relationship to each other. Second, the results do not reflect the user's preferences. Third, whenever a client connects to the server, the user has to be re-certified. As a result, document retrieval is less efficient, causing dissatisfaction with the system. I propose a new mobile multiagent platform for a personal digital library to overcome these problems. To develop this platform, I combine the existing DECAF multiagent platform with the Voyager mobile ORB, and propose a new negotiation algorithm and a scheduling algorithm. Although there has been some research on personal digital libraries, there have been few studies on their integration and systemization. For searches of related information, the proposed platform can increase the relatedness of search results by subdividing the related documents, which are classified by a supervised neural network. For the user's preferences, as modular clients are applied to a neural network, the search results are optimized. By combining a mobile platform and a multiagent platform, a new mobile multiagent platform is developed to decrease the network burden. Furthermore, the new negotiation and scheduling algorithms are activated for the effectiveness of the PDS. The simulation results demonstrate that as the number of servers and agents increases, the search time for the PDS decreases, while the degree of user satisfaction is four times greater than with the client/server model.

Collaboration and Node Migration Method of Multi-Agent Using Metadata of Naming-Agent (네이밍 에이전트의 메타데이터를 이용한 멀티 에이전트의 협력 및 노드 이주 기법)

  • Kim, Kwang-Jong;Lee, Yon-Sik
    • The KIPS Transactions:PartD
    • /
    • v.11D no.1
    • /
    • pp.105-114
    • /
    • 2004
  • In this paper, we propose a method for collaboration among the diverse agents of a multi-agent model and describe a node-migration algorithm for a Mobile-Agent (MA) that uses the metadata of a Naming-Agent (NA). Collaboration among the agents assures the stability of the agent system and provides reliable information retrieval in a distributed environment. The NA, an important part of the multi-agent model, identifies each agent and assigns it a unique name, and each agent references a specified object by that name. The NA also integrates and manages the naming service by classifying agents according to their characteristics, such as Client-Push-Agent (CPA), Server-Push-Agent (SPA), and System-Monitoring-Agent (SMA), and it provides the location list of mobile nodes to a specified MA. When an MA migrates through the nodes, the efficiency of node migration can be improved by assigning priorities according to hit_count, hit_ratio, node processing time, and network traffic time. Therefore, for an integrated naming service, we design the Naming Agent and show the structure of its metadata, which is constructed with fields such as hit_count, hit_ratio, total_count of documents, and so on. This paper also presents the flow of creating and updating the metadata, and a method of node migration based on hit_count through the collaboration of the multi-agent system.
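
The priority-based ordering of candidate nodes can be sketched as below. The field names and the tie-breaking order among hit_count, hit_ratio, and the two time terms are illustrative assumptions, not the paper's exact rule.

```python
def migration_order(nodes):
    """Order candidate nodes for mobile-agent migration: higher
    hit_count first, then higher hit_ratio, then shorter combined
    processing and network-traffic time."""
    return sorted(nodes,
                  key=lambda n: (-n["hit_count"], -n["hit_ratio"],
                                 n["proc_time"] + n["net_time"]))

nodes = [
    {"name": "n1", "hit_count": 10, "hit_ratio": 0.5, "proc_time": 3, "net_time": 2},
    {"name": "n2", "hit_count": 25, "hit_ratio": 0.8, "proc_time": 4, "net_time": 6},
    {"name": "n3", "hit_count": 25, "hit_ratio": 0.8, "proc_time": 1, "net_time": 2},
]
order = [n["name"] for n in migration_order(nodes)]
```

Under this rule the MA visits the most frequently hit, cheapest-to-reach nodes first, which is the efficiency gain the abstract attributes to metadata-driven migration.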

A selective sparse coding based fast super-resolution method for a side-scan sonar image (선택적 sparse coding 기반 측면주사 소나 영상의 고속 초해상도 복원 알고리즘)

  • Park, Jaihyun;Yang, Cheoljong;Ku, Bonwha;Lee, Seungho;Kim, Seongil;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.37 no.1
    • /
    • pp.12-20
    • /
    • 2018
  • Efforts have been made to reconstruct low-resolution underwater images into high-resolution ones using image SR (Super-Resolution) methods, in order to improve efficiency when acquiring side-scan sonar images. As side-scan sonar images are similar to optical images in that both are 2-dimensional signals, conventional image-restoration methods for optical images can be considered as a solution. One of the most typical super-resolution methods for optical images is sparse coding, and there are studies verifying the applicability of sparse coding to underwater images by analyzing their sparsity. Sparse coding is a method that recovers a signal from an input signal as a linear combination of a dictionary and sparse coefficients. However, accurately estimating the sparse coefficients requires a huge computational load. In this study, a sparse-coding-based underwater image super-resolution method is applied, and a selective reconstruction method for the object region is suggested to reduce the processing time. To this end, this paper proposes an edge-detection and object/non-object region classification method for underwater images and combines it with sparse-coding-based image super-resolution. The effectiveness of the proposed method is verified by reducing the processing time for image reconstruction by over 32% while preserving the same level of PSNR (Peak Signal-to-Noise Ratio) compared with the conventional method.
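
The selective-reconstruction idea, running the costly sparse-coding step only on edge-rich object blocks, can be sketched as below. The block-energy detector and threshold are simple stand-ins for the paper's edge-detection and classification method.

```python
def block_energy(img, r0, c0, bs):
    """Mean absolute horizontal gradient in a bs x bs block — a crude
    edge/activity measure standing in for the paper's detector."""
    total, count = 0, 0
    for r in range(r0, r0 + bs):
        for c in range(c0, c0 + bs - 1):
            total += abs(img[r][c + 1] - img[r][c])
            count += 1
    return total / count

def classify_blocks(img, bs, threshold):
    """Mark each block as object (True) if its edge energy exceeds
    the threshold; only those blocks would go through the costly
    sparse-coding reconstruction, the rest through cheap interpolation."""
    rows, cols = len(img), len(img[0])
    return [[block_energy(img, r, c, bs) > threshold
             for c in range(0, cols, bs)]
            for r in range(0, rows, bs)]

# 4x8 toy "sonar" image: flat background on the left, a bright
# object with strong edges on the right.
img = [[0, 0, 0, 0, 0, 9, 0, 9],
       [0, 0, 0, 0, 9, 0, 9, 0],
       [0, 0, 0, 0, 0, 9, 0, 9],
       [0, 0, 0, 0, 9, 0, 9, 0]]
mask = classify_blocks(img, bs=4, threshold=1.0)
```

The reported >32% speed-up comes precisely from skipping the expensive coefficient estimation on blocks the mask marks as background.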

An Approach Using LSTM Model to Forecasting Customer Congestion Based on Indoor Human Tracking (실내 사람 위치 추적 기반 LSTM 모델을 이용한 고객 혼잡 예측 연구)

  • Hee-ju Chae;Kyeong-heon Kwak;Da-yeon Lee;Eunkyung Kim
    • Journal of the Korea Society for Simulation
    • /
    • v.32 no.3
    • /
    • pp.43-53
    • /
    • 2023
  • In this study, our primary focus is accurately gauging the number of visitors and their real-time locations in commercial spaces. In a real cafe, using security cameras, we developed a system that offers live updates on available seating and predicts future congestion levels. By employing YOLO, a real-time object detection and tracking algorithm, the number of visitors and their respective locations are monitored in real time. This information is used to update the cafe's indoor map, enabling users to easily identify available seating. Moreover, we developed a model that predicts the congestion of the cafe in real time. The model, designed to learn visitor counts and movement patterns over diverse time intervals, is based on Long Short-Term Memory (LSTM), to address the vanishing-gradient problem, and Sequence-to-Sequence (Seq2Seq), for processing data with temporal relationships. This system has the potential to significantly improve cafe-management efficiency and customer satisfaction by delivering reliable predictions of cafe congestion to all users. Our research not only demonstrates the effectiveness and utility of indoor location tracking implemented through security cameras but also suggests potential applications in other commercial spaces.
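
The LSTM gating the model relies on can be sketched for a single scalar unit in plain Python. The weights and visitor counts below are made up, and the paper's actual network is a multi-dimensional LSTM/Seq2Seq, not this toy cell.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step. w holds (input, hidden, bias) weights
    for the forget/input/candidate/output gates; this gating is what
    lets the model retain long-range visitor-count patterns while
    avoiding the vanishing-gradient problem."""
    f = sigmoid(w["fx"] * x + w["fh"] * h_prev + w["fb"])   # forget gate
    i = sigmoid(w["ix"] * x + w["ih"] * h_prev + w["ib"])   # input gate
    g = math.tanh(w["gx"] * x + w["gh"] * h_prev + w["gb"]) # candidate
    o = sigmoid(w["ox"] * x + w["oh"] * h_prev + w["ob"])   # output gate
    c = f * c_prev + i * g      # cell state: gated memory
    h = o * math.tanh(c)        # hidden state: exposed output
    return h, c

w = {k: 0.5 for k in ("fx", "fh", "fb", "ix", "ih", "ib",
                      "gx", "gh", "gb", "ox", "oh", "ob")}
h, c = 0.0, 0.0
for count in (3, 5, 8, 13):     # visitor counts per time interval
    h, c = lstm_step(count, h, c, w)
```

In a Seq2Seq arrangement, an encoder of such cells consumes the observed count sequence and a decoder unrolls to emit the predicted congestion for future intervals.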

A Stochastic User Equilibrium Transit Assignment Algorithm for Multiple User Classes (다계층을 고려한 대중교통 확률적사용자균형 알고리즘 개발)

  • Yu, Soon-Kyoung;Lim, Kang-Won;Lee, Young-Ihn;Lim, Yong-Taek
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.7 s.85
    • /
    • pp.165-179
    • /
    • 2005
  • The objective of this study is the development of a stochastic user-equilibrium transit assignment algorithm for multiple user classes, considering the stochastic characteristics and heterogeneous attributes of passengers. Existing transit assignment algorithms are limited in attaining realistic results because they assume the characteristics of passengers to be identical. Although a group with transit information and a group without it have different trip patterns, past studies could not explain the differences. To overcome these problems, we use the following methods. First, we apply a stochastic transit assignment model to capture differences in perceived travel cost between passengers, and a multiple-user-class assignment model to capture the heterogeneity of the groups, in order to obtain realistic results. Second, we assume that person trips influence the travel cost function in the development of the model. Third, we use a C-logit model to resolve IIA (independence of irrelevant alternatives) problems. Over the iterations, the assigned trips and equivalent path costs differ by group and by path; the result converges to a stochastic user equilibrium, and the convergence speed is very fast. The algorithm of this study is expected to serve as an evaluation tool for transit policies by incorporating heterogeneous attributes and OD data.
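
The C-logit choice rule mentioned above can be sketched as below. This is a minimal sketch: the commonality-factor form with parameters theta and beta, and the path data, are illustrative assumptions, not taken from the paper.

```python
import math

def c_logit_probs(costs, overlaps, lengths, theta=1.0, beta=1.0):
    """C-logit path choice: the commonality factor CF_k penalizes
    paths that overlap with others, easing the IIA problem of the
    plain logit. overlaps[k][j] is the length shared by paths k, j."""
    n = len(costs)
    cf = [beta * math.log(sum(overlaps[k][j] / math.sqrt(lengths[k] * lengths[j])
                              for j in range(n)))
          for k in range(n)]
    util = [math.exp(-theta * (costs[k] + cf[k])) for k in range(n)]
    z = sum(util)
    return [u / z for u in util]

# Three hypothetical transit paths of equal cost and length:
# paths 0 and 1 share most of their length, path 2 is independent.
lengths = [10.0, 10.0, 10.0]
overlaps = [[10.0, 8.0, 0.0],
            [8.0, 10.0, 0.0],
            [0.0, 0.0, 10.0]]
probs = c_logit_probs([30.0, 30.0, 30.0], overlaps, lengths)
```

A plain logit would split the flow equally; the commonality factor correctly shifts probability toward the independent path, which is what resolves the IIA problem.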

Development of Regularized Expectation Maximization Algorithms for Fan-Beam SPECT Data (부채살 SPECT 데이터를 위한 정칙화된 기댓값 최대화 재구성기법 개발)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Soo-Jin;Kim, Kyeong-Min;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.464-472
    • /
    • 2005
  • Purpose: SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods that do not transform the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. Materials and Methods: The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered-subsets EM), and MAP-EM OSL (maximum a posteriori EM using the one-step-late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into parallel data using various interpolation methods, such as nearest-neighbor, bilinear, and bicubic interpolation, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp-Logan phantoms were reconstructed using the above algorithms, and the reconstructed images were compared in terms of a percent-error metric. Results: For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best results in both percent error and stability. Bilinear interpolation was the most effective rebinning method from the fan-beam to the parallel geometry when accuracy and computational load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Conclusion: Direct fan-beam reconstruction algorithms were implemented and provided significantly improved reconstructions.
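
The ML-EM update underlying the EM-family algorithms compared above can be sketched on a toy linear system. The 2x2 system matrix and activities below are made up, and no fan-beam geometry, noise model, or OSL priors are modeled.

```python
def mlem(system, counts, n_iter=50):
    """ML-EM reconstruction for a tiny linear model: each pixel x_j
    is scaled by the sensitivity-normalized backprojection of the
    ratios of measured to predicted counts.
    system[i][j] = contribution of pixel j to detector bin i."""
    n_bins, n_pix = len(system), len(system[0])
    x = [1.0] * n_pix                               # uniform start
    sens = [sum(system[i][j] for i in range(n_bins)) for j in range(n_pix)]
    for _ in range(n_iter):
        # forward-project current estimate
        proj = [sum(system[i][j] * x[j] for j in range(n_pix))
                for i in range(n_bins)]
        ratio = [counts[i] / proj[i] if proj[i] > 0 else 0.0
                 for i in range(n_bins)]
        # multiplicative EM update
        x = [x[j] / sens[j] * sum(system[i][j] * ratio[i] for i in range(n_bins))
             for j in range(n_pix)]
    return x

# 2-pixel phantom with true activities (4, 2) seen through a 2-bin
# system; counts are noiseless, so EM should recover the truth.
A = [[1.0, 0.0],
     [0.5, 1.0]]
true_x = [4.0, 2.0]
counts = [sum(A[i][j] * true_x[j] for j in range(2)) for i in range(2)]
x = mlem(A, counts, n_iter=200)
```

OS-EM applies the same update over subsets of bins per pass, and MAP-EM OSL adds a prior-gradient term to the denominator; both are elaborations of this multiplicative step.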