• Title/Summary/Keyword: Time-Efficiency of Algorithm


Survey of coastal topography using images from a single UAV (단일 UAV를 이용한 해안 지형 측량)

  • Noh, Hyoseob; Kim, Byunguk; Lee, Minjae; Park, Yong Sung; Bang, Ki Young; Yoo, Hojun
    • Journal of Korea Water Resources Association / v.56 no.spc1 / pp.1027-1036 / 2023
  • Coastal topographic information is crucial for coastal management, but point-measurement-based approaches, which are labor intensive, are generally applied to land and underwater areas separately. This study introduces an efficient method enabling both land and underwater surveys using an unmanned aerial vehicle (UAV). The method applies two different algorithms to measure the topography on land and the water depth, respectively, from UAV imagery, and merges them to reconstruct the whole coastal digital elevation model. The landside terrain is acquired using the Structure-from-Motion Multi-View Stereo technique with spatial scan imagery. Independently, underwater bathymetry is retrieved by applying a depth inversion technique to a drone-acquired wave field video. After merging the two digital elevation models into a local coordinate system, interpolation is performed for areas where terrain measurement is not feasible, ultimately yielding a continuous nearshore terrain. We applied the proposed survey technique to Jangsa Beach, South Korea, and verified that detailed terrain characteristics, such as berms, can be measured. The proposed UAV-based survey method offers significant gains in time, cost, and safety compared to existing methods.
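A minimal sketch of the merge-and-interpolate step described above, assuming the two DEMs are already gridded in a shared local coordinate system (function names and the toy grids are hypothetical, not the authors' code):

```python
import numpy as np
from scipy.interpolate import griddata

def merge_dems(land_dem, bathy_dem):
    """Combine land and underwater DEMs; NaN marks unmeasured cells."""
    return np.where(np.isnan(land_dem), bathy_dem, land_dem)

def fill_gaps(dem, method="linear"):
    """Interpolate NaN cells from the surrounding measured elevations."""
    rows, cols = np.indices(dem.shape)
    known = ~np.isnan(dem)
    return griddata(
        (rows[known], cols[known]),  # coordinates of measured cells
        dem[known],                  # measured elevations
        (rows, cols),                # full grid to evaluate
        method=method,               # cells outside the hull stay NaN
    )

# Toy grids: land (SfM-MVS) covers the top row, bathymetry the bottom.
land = np.array([[2.1, 1.8, 1.5], [np.nan, np.nan, np.nan]])
bathy = np.array([[np.nan] * 3, [-0.5, np.nan, -1.2]])
print(fill_gaps(merge_dems(land, bathy)))
```

Linear interpolation only fills cells inside the convex hull of measured points; a nearest-neighbor pass could fill any remainder.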

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.35-52 / 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Although continuous measurement and improvement are needed, traditional surveys are costly and time-consuming and thus have limitations. There is therefore a need for an analytical technique that can measure the quality of public services quickly and accurately at any time, based on the data the services themselves generate. In this study, we analyzed the quality of public services using process mining techniques, focusing on the building licensing complaint service of N city, which was chosen because it can secure the data necessary for analysis and the approach can spread to other institutions through public service quality management. We conducted process mining on a total of 3,678 building license complaints filed in N city over the two years from January 2014, and identified process maps and the departments with high frequency and long processing times. The analysis showed that some departments were overloaded at certain points in time while others were relatively idle, and it raised the reasonable suspicion that an increase in the number of complaints increases the time required to complete them. Completion times varied from the same day to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and that of the top nine departments exceeded 70%; the load was concentrated in a few departments and was highly unbalanced across them. Most complaints followed a variety of different process patterns. The analysis shows that the number of 'complement' decisions has the greatest impact on the length of a complaint: a 'complement' decision requires a physical period in which the complainant supplements and resubmits documents, which lengthens the time until the entire complaint is completed. Preparing documents thoroughly before filing, or learning from the 'complement' decisions of other complaints, could therefore drastically reduce the overall processing time. If the system clarifies and discloses the causes of 'complement' decisions and their remedies, complainants can prepare in advance and be confident that documents prepared from the disclosed information will pass, making complaint processing transparent and predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating reconsultation and duplicated work. The results of this study can be used to find departments with heavy complaint loads at certain points in time and to manage workforce allocation between departments flexibly. In addition, the analysis of which departments participate in consultations for each type of complaint can be used for automation or recommendation when a consultation department is requested.
In addition, by using the various data generated during the complaint process together with machine learning techniques, the patterns of the complaint process can be found and, implemented as algorithms in the system, used for the automation and intelligence of complaint processing. This study is expected to inform future public service quality improvement through process mining analysis of civil services.
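A sketch of the kind of event-log analysis the study describes, assuming a minimal complaint log with hypothetical column names (the actual N city records are not available here):

```python
import pandas as pd

log = pd.DataFrame({
    "case_id":    ["c1", "c1", "c1", "c2", "c2"],
    "department": ["Sewage Treatment", "Waterworks", "Urban Design",
                   "Sewage Treatment", "Green Growth"],
    "timestamp":  pd.to_datetime(["2014-01-02", "2014-01-10",
                                  "2014-02-01", "2014-03-05", "2014-03-06"]),
    "decision":   ["pass", "complement", "pass", "pass", "pass"],
})

# Department workload: cumulative share of events per department.
freq = log["department"].value_counts(normalize=True).cumsum()

# Case duration: time from the first to the last event of each complaint.
duration = log.groupby("case_id")["timestamp"].agg(lambda t: t.max() - t.min())

# Relation hinted at in the text: cases with a 'complement' decision
# versus their completion time.
has_complement = log.groupby("case_id")["decision"].apply(
    lambda d: (d == "complement").any())
print(duration.groupby(has_complement).mean())
```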

Flood Runoff Simulation Model by Using API (선행강우지수를 고려한 홍수유출 시뮬레이션 모형)

  • Heo, Chang-Hwan; Im, Gi-Seok; An, Gyeong-Su; Ji, Hong-Gi
    • Journal of Korea Water Resources Association / v.35 no.3 / pp.331-344 / 2002
  • This study aims at the development of a deterministic runoff model that can be used for flood runoff. The model is formulated as a watershed runoff model. Based on the assumption that the runoff system is nonlinear, the proposed watershed runoff model is a conceptual model. In its structure, the conceptual model divides the runoff system into a surface structure and a subsurface structure, corresponding to surface flow and to interflow and groundwater flow, respectively. The lag-time effect of the surface can be represented by the sub-tank of the surface structure. The parameters of interflow and groundwater flow in the subsurface structure are calibrated by separating the flow components with a numeric filter. The runoff coefficient ($\alpha_2$) is expressed as a function of the antecedent precipitation index (API). The parameters of the surface flow can be calibrated with the runoff coefficients ($\alpha_1$ and $\alpha_{11}$) in the conceptual model. An algorithm is developed to calibrate the parameters automatically based on efficiency criteria. A comparative study shows that values simulated by the conceptual model agree well with observed values.
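The paper's exact API formulation is not reproduced in the abstract; the sketch below uses the common recursive definition $API_t = k \cdot API_{t-1} + P_t$, with a purely illustrative linear mapping standing in for the calibrated $\alpha_2$(API) relation:

```python
def antecedent_precipitation_index(rainfall, k=0.9, api0=0.0):
    """Return the API series for a daily rainfall series (mm)."""
    api, series = api0, []
    for p in rainfall:
        api = k * api + p   # yesterday's API decays by k, today's rain adds
        series.append(api)
    return series

def runoff_coefficient(api, a=0.001, b=0.1):
    """Hypothetical alpha_2 as an increasing function of API, capped at 1."""
    return min(1.0, a * api + b)

daily_rain = [0, 12, 0, 0, 35, 8]
for api in antecedent_precipitation_index(daily_rain):
    print(f"API={api:6.2f}  alpha_2={runoff_coefficient(api):.3f}")
```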

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin; Seo, Wanhyuk; Seo, Dong-Hee; Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society / v.40 no.4 / pp.69-79 / 2024
  • Field geotechnical data are obtained from various field and laboratory tests and are documented in geotechnical investigation reports. For efficient design and construction, digitizing these geotechnical parameters is essential. However, current practices involve manual data entry, which is time-consuming, labor-intensive, and prone to errors. Thus, this study proposes an automatic data extraction method from geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model and a text-searching algorithm were employed to classify geotechnical investigation report pages with 100% accuracy. Computer vision algorithms were utilized to identify valid data regions within report pages, and text analysis was used to match and extract the corresponding geotechnical data. The proposed model was validated using a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to enhance the practical application of the extraction model. It allowed users to upload PDF files of geotechnical investigation reports, automatically analyze these reports, and extract and edit data. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
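A sketch of the text-searching half of such a pipeline, assuming keyword matching on extracted page text (the deep-learning page classifier and the computer-vision region detection are not reproduced; keywords and file names are hypothetical):

```python
import re
from pypdf import PdfReader   # pip install pypdf

# Hypothetical stand-ins for terms that mark boring-log pages in a report.
KEYWORDS = re.compile(r"boring log|SPT|N-value|soil classification", re.I)

def find_candidate_pages(pdf_path):
    """Return indices of pages whose text matches boring-log keywords."""
    reader = PdfReader(pdf_path)
    hits = []
    for i, page in enumerate(reader.pages):
        text = page.extract_text() or ""   # some pages yield no text
        if KEYWORDS.search(text):
            hits.append(i)
    return hits

# Usage: pages = find_candidate_pages("geotech_report.pdf")
```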

An Energy-Efficient Clustering Using Load-Balancing of Cluster Head in Wireless Sensor Network (센서 네트워크에서 클러스터 헤드의 load-balancing을 통한 에너지 효율적인 클러스터링)

  • Nam, Do-Hyun; Min, Hong-Ki
    • The KIPS Transactions: Part C / v.14C no.3 s.113 / pp.277-284 / 2007
  • The routing algorithms commonly used in wireless sensor networks feature clustering to reduce the amount of data transmission from the energy-efficiency perspective. However, clustering results in high energy consumption at the cluster head node. Dynamic clustering resolves this problem by distributing energy consumption through re-selection of the cluster head node. Still, dynamic clustering modifies the cluster structure every time the cluster head node is re-selected, which itself consumes energy. In other words, the dynamic clustering approaches examined in previous studies involve repetitive cluster head selection, which consumes a large amount of energy during the set-up process of cluster generation. To resolve the energy consumption problem associated with this repetitive set-up, this paper proposes the Round-Robin Cluster Header (RRCH) method, which fixes the clusters and selects the head node in a round-robin manner. RRCH is an energy-efficient method that achieves consistent and balanced energy consumption at each node of a generated cluster while avoiding repeated set-up processes such as those of the LEACH method. The validity of the proposed method is substantiated by a simulation experiment.
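A minimal sketch of the round-robin rotation idea behind RRCH, assuming clusters are fixed after one initial set-up (a toy model, not the paper's simulation):

```python
class Cluster:
    def __init__(self, node_ids):
        self.nodes = list(node_ids)  # membership fixed after initial set-up
        self.turn = 0

    def head(self):
        return self.nodes[self.turn]

    def next_round(self):
        """Rotate the head role to the next member (round-robin),
        with no re-clustering set-up phase between rounds."""
        self.turn = (self.turn + 1) % len(self.nodes)

cluster = Cluster(["n1", "n2", "n3"])
for rnd in range(5):
    print(f"round {rnd}: head = {cluster.head()}")
    cluster.next_round()
```

Because membership never changes, the energy cost of each round is just the head's extra transmission load, which this rotation spreads evenly across members.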

An Efficient Secure Routing Protocol Based on Token Escrow Tree for Wireless Ad Hoc Networks (무선 애드 혹 네트워크에서 보안성을 고려한 Token Escrow 트리 기반의 효율적인 라우팅 프로토콜)

  • Lee, Jae Sik; Kim, Sung Chun
    • KIPS Transactions on Computer and Communication Systems / v.2 no.4 / pp.155-162 / 2013
  • Routing protocols for mobile ad hoc networks have been an active research area in recent years. However, ad hoc network environments tend to be vulnerable to attacks, because a mobile ad hoc network is a wireless network without centralized authentication or fixed infrastructure such as base stations. Moreover, existing routing protocols that are effective in wired networks are inapplicable to mobile ad hoc networks. To address these issues, several secure routing protocols such as SAODV and SRPTES have been proposed. Although these protocols strengthen network security beyond earlier protocols, they cannot deal fluidly with the frequent changes of the wireless environment, and they show shortcomings in energy efficiency because they concentrate only on routing safety. In this paper, we propose an energy-efficient secure routing protocol for various mobile ad hoc environments. First, nodes distribute security information to reliable nodes for secure routing; the nodes form a tree structure with their neighboring nodes for token escrow, which protects against the intrusion of malicious nodes by hiding the security information. Next, we propose security-level-based multi-path routing to protect against the packet-dropping attacks of malicious nodes, so that the network is protected from unexpected packet loss. As a result, the algorithm enhances the packet delivery ratio in network environments containing malicious nodes, and the lifetime of the entire network is extended by consuming energy evenly.
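An illustrative sketch, not the paper's algorithm, of how security-level-based multi-path selection might look: each candidate route is scored by its weakest node, and the top-k routes are used in parallel to survive packet-dropping nodes (all names and scores are hypothetical):

```python
def select_routes(candidate_paths, node_security, k=2):
    """Pick the k paths with the highest minimum node security level."""
    scored = sorted(
        candidate_paths,
        key=lambda path: min(node_security[n] for n in path),
        reverse=True,
    )
    return scored[:k]

security = {"a": 0.9, "b": 0.4, "c": 0.8, "d": 0.7}   # hypothetical levels
paths = [["a", "b"], ["a", "c"], ["a", "d", "c"]]
print(select_routes(paths, security))   # routes avoiding the weak node 'b'
```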

Improving A Stealth Game Level Design Tool (스텔스 게임 레벨 디자인 툴의 개선)

  • Na, Hyeon-Suk; Jeong, Sanghyeok; Jeong, Juhong
    • Journal of Korea Game Society / v.15 no.4 / pp.29-38 / 2015
  • In stealth game design, level designers need to develop many interesting game environments with a variety of difficulties. J. Tremblay and his co-authors developed a Unity-based level design tool to help and automate this process. Given a map, if the designer inputs several game factors such as guard paths and velocities, the guards' vision, and the player's initial and goal positions, the tool visualizes simulation results, including (clustered) possible paths a player could take to avoid detection. With the help of this tool, the designer can check in real time whether the current game factors produce the intended difficulties and player paths, and adjust the factors if necessary. In this note, we present improvements to this tool in two respects. First, we integrate a function whereby, if the designer inputs some vertices in the map, the tool systematically generates and suggests interesting guard paths of various difficulties containing those vertices, which enhances its convenience and usefulness as a tool. Second, we replace the collision-detection function and the RRT-based (player) path generation function with our new collision-check function and a Delaunay roadmap-based path generation function, which markedly improves the time-efficiency of the simulation process.
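A sketch of the Delaunay-roadmap idea behind the new path generator, assuming free-space sample points and omitting the tool's obstacle and guard-vision checks: the samples are triangulated, triangulation edges become roadmap edges weighted by length, and a player path is a shortest path on that graph.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((30, 2)) * 10          # hypothetical walkable sample points

tri = Delaunay(pts)
G = nx.Graph()
for simplex in tri.simplices:           # each triangle contributes 3 edges
    for i in range(3):
        u, v = simplex[i], simplex[(i + 1) % 3]
        G.add_edge(u, v, weight=float(np.linalg.norm(pts[u] - pts[v])))

start, goal = 0, 29                     # indices of start/goal samples
path = nx.shortest_path(G, start, goal, weight="weight")
print([tuple(np.round(pts[i], 2)) for i in path])
```

Unlike RRT, the roadmap is built once per map and each path query is a cheap graph search, which is one plausible source of the reported speed-up.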

Evaluating efficiency of Vertical MLC VMAT plan for naso-pharyngeal carcinoma (비인두암 Vertical MLC VMAT plan 유용성 평가)

  • Chae, Seung Hoon; Son, Sang Jun; Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy / v.33 / pp.127-135 / 2021
  • Purpose: The purpose of this study is to evaluate the efficiency of the Vertical MLC VMAT plan (VMV plan), which uses 273° and 350° collimator angles, compared to the Complemental MLC VMAT plan (CMV plan), which uses 20° and 340° collimator angles, for nasopharyngeal carcinoma. Materials & Methods: Thirty patients treated for nasopharyngeal carcinoma with the VMAT technique were retrospectively selected. These cases were planned in Eclipse with the PO and AcurosXB algorithms using two 6 MV 360° arcs, with collimator angles of 273° and 350°. The Complemental MLC VMAT plans were based on the existing treatment plans and shared all their parameters except the collimator angles. For dosimetric evaluation, the dose-volumetric (DV) parameters of the planning target volume (PTV) and organs at risk (OARs) were calculated for all VMAT plans. The MCSv (modulation complexity score for VMAT), MU, and treatment time were also compared. In addition, Pearson's correlation analysis was performed to test whether the difference in MCSv between the two plans correlates with the difference in each evaluation index. Results: For the PTV evaluation indices, the CI of PTV_67.5 improved by 3.76% in the VMV plan; for the OARs, the dose reduction in the spinal cord (-14.05%) and brain stem (-9.34%) was remarkable. Dose reductions were also confirmed for the parotid glands (left: -5.38%, right: -5.97%), the visual organs (left optic nerve: -4.88%, right optic nerve: -5.80%, optic chiasm: -6.12%, left lens: -6.12%, right lens: -5.26%), the auditory organs (left: -11.74%, right: -12.31%), and the thyroid gland (-2.02%). The difference in MCSv between the two plans showed a significant negative correlation with the difference in CI of PTV_54 (r=-0.55) and of PTV_48 (r=-0.43); the spinal cord (r=0.40), brain stem (r=0.34), and both salivary glands (left: r=0.36, right: r=0.37) showed positive correlations (for all values, p<.05). Conclusion: Compared to the CMV plan, the VMV plan is considered helpful in improving the quality of the treatment plan by allowing the MLC to be modulated more efficiently.
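A sketch of the reported correlation test, pairing per-patient differences in MCSv with differences in an evaluation index (the arrays below are hypothetical, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-patient differences between the two plans.
delta_mcsv = np.array([0.02, -0.01, 0.03, 0.00, 0.04, -0.02])
delta_ci   = np.array([-0.5, 0.2, -0.9, 0.1, -1.1, 0.4])  # e.g. CI of PTV_54

r, p = pearsonr(delta_mcsv, delta_ci)
print(f"r = {r:.2f}, p = {p:.3f}")   # a negative r would match the paper
```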

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon; Choi, Heung Sik; Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.177-192 / 2016
  • Machine learning is a field of artificial intelligence that refers to giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks; others include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is designed for the kind of classification and regression analysis our problem requires. The core principle of the SVM is to find a reasonable hyperplane that separates groups in the data space: given data from two groups, the SVM judges which group new data belongs to based on the hyperplane obtained from the training set, so the more meaningful data there is, the better the learning. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices with machine learning algorithms. Recently, financial companies have begun to provide Robo-Advisor services (a compound of Robot and Advisor) that perform various financial tasks through advanced algorithms on rapidly changing, huge amounts of data; a Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, and of applying the forecasts to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 option prices, similar to the VIX index based on S&P 500 option prices in the United States; the Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices: VKOSPI and option prices move in the same direction regardless of option type (call and put options at various strike prices), because rising volatility increases the probability of exercise and thus raises all call and put option premiums. Through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility, an investor can know in real time how much the option price rises for a given rise in volatility. Accurate forecasting of VKOSPI movements is therefore one of the important factors that can generate profit in option trading. In this study, we verified with real option data that accurate forecasts of VKOSPI can produce large profits in real option trading. To the best of our knowledge, there have been no studies that predict the direction of VKOSPI with machine learning and apply the idea to actual option trading.
In this study, we predicted daily VKOSPI changes with the SVM model and then took an intraday option strangle position, which profits as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading on the SVM's predictions is applicable to real option trading. The prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half of the benchmark (100 entries); a small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
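A sketch of the classification set-up on synthetic data: an SVC predicts the direction of the daily VKOSPI change from lagged changes, which are hypothetical stand-ins for whatever inputs the study actually used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
vkospi = np.cumsum(rng.normal(0, 0.5, 500)) + 15.0   # synthetic index level
change = np.diff(vkospi)

# Features: the previous two daily changes; label: sign of today's change.
X = np.column_stack([change[1:-1], change[:-2]])
y = (change[2:] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)
model = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # hyperplane in RBF space
print("direction accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

In the trading rule described above, a position would be opened only on days the model predicts a decline (label 0).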

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul; Shin, Jongmin; Yang, Dongmin
    • Journal of Internet Computing and Services / v.14 no.5 / pp.1-10 / 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags: active tags carry a power source that allows them to operate on their own, while passive tags are small and low-cost, which makes passive tags more suitable for the distribution industry. The reader processes the information received from the tags, and the system achieves fast identification of multiple tags over radio frequency. RFID systems have been applied to a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the introduction of RFID systems, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by the simultaneous responses of multiple tags. Anti-collision schemes for RFID systems fall into three classes: probabilistic, deterministic, and hybrid; ALOHA-based protocols are a probabilistic method and Tree-based protocols a deterministic one. In ALOHA-based protocols, time is divided into multiple slots and tags randomly select slots in which to transmit their IDs; being probabilistic, these protocols cannot guarantee that all tags are identified. In contrast, Tree-based protocols guarantee that a reader identifies all tags within its transmission range: the reader sends a query and tags respond with their IDs, and when two or more tags respond at once a collision occurs, so the reader composes and sends a new query. Frequent collisions degrade identification performance, so identifying tags quickly requires reducing collisions efficiently. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID, and the tags of one company or manufacturer share similar IDs with the same prefix, so unnecessary collisions occur when identifying multiple tags with the Query Tree protocol; the number of query-responses and the idle time grow, and the identification time increases significantly. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, the Collision Tree and Query Tree protocols identify only one bit per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose the Adaptive M-ary Query Tree protocol, which improves identification performance using m-bit recognition, the collision information of tag IDs, and a prediction technique. We compare the proposed scheme with other Tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
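For reference, a sketch of the basic Query Tree protocol the proposal builds on (the adaptive M-ary variant with m-bit recognition is not reproduced here): on a collision the reader extends the query prefix by one bit, which is why similar tag IDs cause long runs of queries.

```python
def query_tree(tag_ids):
    """Identify all binary-string tag IDs; return (identified, query count)."""
    identified, queries, stack = [], 0, [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:
            identified.append(responders[0])       # single reply: success
        elif len(responders) > 1:
            stack += [prefix + "0", prefix + "1"]  # collision: split query
        # zero responders: an idle query, wasted time
    return identified, queries

tags = ["0010", "0011", "0110", "1100"]            # similar prefixes collide
ids, n = query_tree(tags)
print(f"identified {ids} with {n} queries")
```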