• Title/Summary/Keyword: 중복 데이터 (duplicate data)


A Study on the Selection of Types of Social Disasters by Region (시·도별 사회재난 중점유형 선정에 관한 연구)

  • Lee, Hyo Jin;Yun, Hong Sic;Han, Hak
    • Journal of the Society of Disaster Information / v.17 no.2 / pp.206-217 / 2021
  • Purpose: A recent series of large-scale social disasters has prompted extensive research on preventing social as well as natural disasters and on reducing their damage. This paper aims to select the types of social disasters on which each local government should focus and to build basic data for effective countermeasures and mitigation efforts. Method: Of the 43 disaster types announced by the Ministry of Public Administration and Security, 11 types were selected and their data collected, and risk types were derived by region using risk maps. To derive the risk maps, each detailed index was rescaled to the 0-1 range and weights were determined using the entropy technique. Result: About 41% of the derived priority types were consistent with the major disaster types announced by the Ministry of Public Administration and Security; the remaining types were disasters for which data could not be obtained or which have not occurred in the past 20 years. Conclusion: The study therefore presents priority social disaster types for each local government as a basis for establishing effective prevention and recovery plans.
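A minimal sketch of the rescaling and entropy-weighting step described in the abstract above, assuming a simple regions-by-indicators matrix; the function name and the numbers are illustrative, not the paper's data:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting sketch: X is a (regions x indicators) array.
    Each indicator is min-max rescaled to 0-1, then weighted by 1 - entropy."""
    X = np.asarray(X, dtype=float)
    rng = X.max(axis=0) - X.min(axis=0)
    rng[rng == 0] = 1.0                       # guard against constant indicators
    scaled = (X - X.min(axis=0)) / rng        # rescale each indicator to 0-1
    p = (scaled + 1e-12) / (scaled + 1e-12).sum(axis=0)    # column proportions
    e = -(p * np.log(p)).sum(axis=0) / np.log(X.shape[0])  # normalized entropy
    d = 1.0 - e                               # degree of diversification
    return scaled, d / d.sum()                # rescaled indicators and weights

# Illustrative only: 4 regions x 3 disaster indicators
scaled, w = entropy_weights([[3, 10, 0.2], [5, 7, 0.9], [2, 12, 0.4], [8, 6, 0.1]])
risk = scaled @ w                             # composite risk score per region
```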

A study on the selection of candidates for public bases according to the spatial distribution characteristics of Automated External Defibrillators in Daegu City (대구시 자동심장충격기 공간분포 특성에 따른 공공 거점후보지 선정 연구)

  • Beak, Seong Ryul;Kim, Jun Hyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.599-610 / 2020
  • AEDs (Automated External Defibrillators) installed inside buildings or within specific areas are seldom evaluated for spatial accuracy and temporal availability, which makes it necessary to partition the service area through spatial analysis and location-allocation analysis. In this study, spatial analysis was performed on the existing public AED data using GIS location-analysis methods. Public institutions (119 safety centers and police boxes) that can operate 24 hours a day, 365 days a year were selected as candidate public AED bases according to the characteristics of each residential area, and Thiessen polygons were created for each candidate site to divide the study area into service regions. Regional accessibility to emergency medical services was then analyzed in light of AED characteristics: emergency vehicles could arrive within the 4 minutes required for emergency medical treatment in most of the study area, but not in areas outside the city center. The results show that operating vehicle-based AED base service centers at public institutions is effective for responding to patients who need an AED at night and on weekends. For the 119 safety centers and police boxes under the jurisdiction of Daegu City, establishing public AED service centers with a vehicle-based approach, combining location-based distance analysis, attribute analysis, and minimization of overlapping areas, proved more efficient than relying on the existing walking-access AEDs.
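A minimal sketch of partitioning service areas with Thiessen (Voronoi) polygons and assigning demand points to their nearest candidate base; the coordinates are illustrative, and scipy stands in for whatever GIS tooling the study actually used:

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

# Illustrative (x, y) coordinates of candidate public AED bases
# (e.g., 119 safety centers and police boxes); real data would use projected coordinates.
bases = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 6.0], [3.0, 3.0]])

# Thiessen (Voronoi) polygons partition the plane so that every location
# belongs to the region of its nearest base.
vor = Voronoi(bases)

# Assigning demand points (e.g., residential grid cells) to their nearest base
# is equivalent to locating them inside a Thiessen polygon.
demand = np.array([[0.5, 1.0], [5.5, 5.0], [2.0, 4.0]])
_, nearest_base = cKDTree(bases).query(demand)
print(nearest_base)   # index of the serving base for each demand point
```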

Analysis of Feature Map Compression Efficiency and Machine Task Performance According to Feature Frame Configuration Method (피처 프레임 구성 방안에 따른 피처 맵 압축 효율 및 머신 태스크 성능 분석)

  • Rhee, Seongbae;Lee, Minseok;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.27 no.3 / pp.318-331 / 2022
  • With the recent development of hardware computing devices and software-based frameworks, machine tasks using deep learning networks are expected to be utilized in various industrial fields and in personal IoT devices. However, running a deep learning network requires costly devices, and when only the machine-task results are transmitted from a server, users may not receive the results they requested; to overcome both limitations, Collaborative Intelligence (CI) proposes transmitting feature maps instead. In this paper, an efficient compression method for feature maps, whose data sizes are vast, is analyzed and presented through experiments to support the CI paradigm. The method increases redundancy by applying feature map reordering to improve compression efficiency in traditional video codecs, and a feature frame configuration is proposed that improves compression efficiency while maintaining machine-task performance by using image and video compression formats together. Experimental results show that the proposed method achieves a 14.29% BD-rate gain (BPP versus mAP) compared to the MPEG-VCM feature compression anchor.
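As a rough illustration of how feature-map channels can be packed into a single frame for a conventional codec (the paper's specific reordering strategy is not reproduced here; the function name, grid size, and tensor shape are illustrative):

```python
import numpy as np

def tile_feature_map(fmap, cols):
    """Rearrange a (C, H, W) feature map into a single 2-D frame by tiling
    the C channels into a grid, so that a conventional image or video codec
    can exploit the spatial redundancy between neighbouring channels."""
    c, h, w = fmap.shape
    rows = int(np.ceil(c / cols))
    frame = np.zeros((rows * h, cols * w), dtype=fmap.dtype)
    for i in range(c):
        r, q = divmod(i, cols)               # grid position of channel i
        frame[r * h:(r + 1) * h, q * w:(q + 1) * w] = fmap[i]
    return frame

# Illustrative only: 256 channels of 16x16 features tiled into a 16x16 grid
frame = tile_feature_map(np.random.rand(256, 16, 16).astype(np.float32), cols=16)
print(frame.shape)   # (256, 256) single-channel frame ready for a codec
```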

Quality Evaluation of Drone Image using Siemens star (Siemens star를 이용한 드론 영상의 품질 평가)

  • Lee, Jae One;Sung, Sang Min;Back, Ki Suk;Yun, Bu Yeol
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.217-226 / 2022
  • From the viewpoint of producing high-precision spatial information, UAV (Unmanned Aerial Vehicle) photogrammetry lacks specific procedures and detailed regulations for quantitatively verifying or certifying the quality of captured images. In addition, test tools for UAV image quality assessment use only the GSD (Ground Sample Distance), not the MTF (Modulation Transfer Function), which reflects image resolution and contrast at the same time; this often leaves the quality of UAV images inferior to that of manned aerial images. We performed MTF and GSD analysis simultaneously using a Siemens star to confirm the necessity of MTF analysis in UAV image quality assessment. The results for UAV images taken with different payloads and sensors show large differences in σMTF values, which represent image resolution and degree of contrast, but only slight differences in GSD. We conclude that MTF analysis is a more objective and reliable method than GSD analysis alone, and that high-quality drone images can be obtained only when the operator captures images after properly judging sensor performance, image overlap, and payload type. However, the results of this study are derived from images acquired with a limited set of sensors and imaging conditions; more objective and reliable results are expected as further research accumulates experimental data in related fields.
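For reference, a minimal sketch of the commonly used GSD relation mentioned above; the sensor pitch, focal length, and flying height are illustrative values, not the paper's configuration:

```python
# Common GSD relation: GSD = pixel_pitch * flying_height / focal_length,
# with all lengths in consistent units. Values below are illustrative only.
pixel_pitch_m   = 2.4e-6     # 2.4 um sensor pixel
focal_length_m  = 8.8e-3     # 8.8 mm lens
flying_height_m = 100.0      # flight altitude above ground

gsd_m = pixel_pitch_m * flying_height_m / focal_length_m
print(f"GSD = {gsd_m * 100:.1f} cm/pixel")   # about 2.7 cm at these values
```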

A Study on the Development of IoT Inspection System for Gas Leakage Inspection in Kitchen Gas Range Built-in Method (주방 가스레인지 빌트인 방식에서 가스 누출검사를 위한 IoT 검사 시스템 개발에 관한 연구)

  • Kang, Dae Guk;Choi, Young Gyu
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.4 / pp.283-290 / 2022
  • In this study, an IoT inspection system that can be linked with a server was developed using a gas timer and an ESP-01 Wi-Fi module installed on the household gas valve. The server environment of the gas leak IoT inspection system was built with APM (Apache, PHP, MySQL) to collect gas pressure data per household so that leakage checks can be performed. To control the system, a dedicated app manages the gas leak check values in real time, and user convenience was enhanced so that membership management, Wi-Fi settings, and leakage check values can all be handled through the mobile app. To manage subscribers by region, an administrator mode allows the operator to log in and view the user list, including whether each leak test was conducted and its results. When the user presses the gas leak check button, the pressure is checked automatically and the measured value is stored on the server; if a gas leak is detected, an alarm is issued, and after repair the leakage check is repeated so that the system can be used again once it is normal. In addition, to prevent duplicate registrations, membership is managed based on MAC addresses.
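A minimal client-side sketch of how such a pressure reading might be reported to the server and how a pressure-hold check might be judged; the endpoint, field names, and threshold are hypothetical and are not the paper's actual API:

```python
import requests  # client-side sketch; the server endpoint and fields are hypothetical

SERVER_URL = "http://example.com/api/pressure"   # placeholder, not the paper's server

def report_pressure(device_mac: str, pressure_kpa: float) -> bool:
    """Send one pressure reading to the inspection server and return whether
    the server judged the reading to be a leak (hypothetical API)."""
    resp = requests.post(SERVER_URL,
                         json={"mac": device_mac, "pressure": pressure_kpa},
                         timeout=5)
    resp.raise_for_status()
    return resp.json().get("leak_detected", False)

def leak_check(p_start_kpa: float, p_end_kpa: float, threshold_kpa: float = 0.1) -> bool:
    """Pressure-hold style check: a drop larger than the threshold over the
    hold period is treated as a leak (threshold is illustrative)."""
    return (p_start_kpa - p_end_kpa) > threshold_kpa
```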

Development of an Algorithm for Automatic Quantity Take-off of Slab Rebar (슬래브 철근 물량 산출 자동화 알고리즘 개발)

  • Kim, Suhwan;Kim, Sunkuk;Suh, Sangwook;Kim, Sangchul
    • Korean Journal of Construction Engineering and Management / v.24 no.5 / pp.52-62 / 2023
  • The objective of this study is to propose an automated algorithm for the precise cutting length of slab rebar that complies with regulations on anchorage length, standard hooks, and lapping length. The algorithm aims to improve the traditional manual quantity take-off process, which is typically outsourced to external contractors. By providing accurate rebar quantity data at the BBS (Bar Bending Schedule) level from the bidding phase, uncertainty in quantity take-off can be eliminated and reliance on outsourcing reduced. In addition, the algorithm allows precise quantities to be determined early, enabling construction firms to prepare competitive, optimized bids and to increase profit margins during contract negotiations. The proposed algorithm not only streamlines redundant tasks across estimating, budgeting, and BBS generation but also offers flexibility in handling post-contract changes to structural drawings. In particular, when combined with BIM, the algorithm can address the technical problems of using BIM in the early phases of construction, and its formulas and shape codes, built as Revit-based family files, can help save time and manpower.
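A minimal sketch of what a cutting-length formula of this kind can look like; the terms and values are illustrative and do not reproduce the paper's code-compliant rules:

```python
def slab_bar_cutting_length(clear_span_mm: float, anchorage_mm: float,
                            hook_mm: float, n_splices: int, lap_mm: float) -> float:
    """Illustrative cutting-length formula for a slab bar: clear span plus
    anchorage into both supports, standard hooks at both ends, and lap length
    for each splice. The exact code provisions used in the paper are not
    reproduced here."""
    return (clear_span_mm
            + 2 * anchorage_mm      # anchorage at both supports
            + 2 * hook_mm           # standard hooks at both ends
            + n_splices * lap_mm)   # lapping length per splice

# Example: 6 m clear span, 300 mm anchorage, 180 mm hooks, one 600 mm splice
print(slab_bar_cutting_length(6000, 300, 180, 1, 600))   # 7560.0 mm
```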

Positional Accuracy Analysis According to the Exterior Orientation Parameters of a Low-Cost Drone (저가형 드론의 외부표정요소에 따른 위치결정 정확도 분석)

  • Kim, Doo Pyo;Lee, Jae One
    • KSCE Journal of Civil and Environmental Engineering Research / v.42 no.2 / pp.291-298 / 2022
  • Recently developed drones are inexpensive and very convenient to operate; as a result, the production and utilization of spatial information using drones are increasing. However, most drones acquire images with a low-cost GNSS (Global Navigation Satellite System) receiver and IMU (Inertial Measurement Unit), so the accuracy of the initial position and rotation-angle elements of each image is low. In addition, because these drones are small and light, they are strongly affected by wind, which makes it difficult to maintain a constant overlap and degrades positioning accuracy. In this study, images were therefore taken at different times to analyze positioning accuracy according to changes in the exterior orientation parameters. Image processing was performed with Pix4D Mapper and the accuracy of the results was analyzed. To examine in detail how accuracy varies with the exterior orientation parameters, the exterior orientation parameters of the first processing result were used as metadata for the second processing, and the changes in the parameters were then analyzed strip by strip. The results showed that changes in the Omega and Phi rotation angles were related to a decrease in height accuracy, while changes in Kappa were linked to horizontal accuracy.
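For reference, a sketch of how the exterior orientation angles Omega, Phi, and Kappa define an image's rotation matrix under one common photogrammetric convention; software such as Pix4D may use a different sign or composition order:

```python
import numpy as np

def rotation_from_opk(omega: float, phi: float, kappa: float) -> np.ndarray:
    """Rotation matrix from the exterior orientation angles omega, phi, kappa
    (radians), composed as R = Rz(kappa) @ Ry(phi) @ Rx(omega); this is one
    common photogrammetric convention, not necessarily the software default."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi),   np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Small omega/phi tilts mainly perturb the viewing direction (height-sensitive),
# while kappa rotates the image about the optical axis (planimetry-sensitive).
R = rotation_from_opk(np.radians(1.0), np.radians(0.5), np.radians(90.0))
```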

A Study on the Frequency of Traffic Accidents by Traffic Signal Timing: Focused on Daejeon (『신호현시 표출 방법』에 따른 교통사고 발생빈도 분석 연구: 대전광역시 관내 중심으로)

  • So-sig Yoon;Min-ho Lee;Choul-ki Lee
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.3 / pp.20-37 / 2023
  • Although traffic signal installations continue to expand, their effect in preventing traffic accidents remains unverified. In total, 7,045 traffic accident records (such as signal violations) registered in the TCS were manually reviewed for 1,602 traffic signals in Daejeon Metropolitan City over the 7-year period from 2013 to 2019. The top 20 accident intersections were identified, traffic accident investigation records and field maps were examined to compare the driving direction and signal phase of the violating vehicle, and the cause of each accident was classified as insufficient signal operation design (operational) or driver negligence (intentional). The analysis revealed that 75% of the accidents occurred under thru-left-turn and overlap signal phases; moreover, extending the yellow time or operating an all-red interval as a countermeasure against accidents occurring during yellow signals reduced traffic accidents. The data also indicated that permissive left-turn operation requires improvement. In addition, since the Korean National Police Agency has not computerized traffic accident site and signal-related data, the lack of manpower makes it necessary to improve and utilize the TCS when establishing traffic accident prevention measures. Analyzing the vast amounts of data collected in the field and presenting improvement measures is expected to contribute to better signal operation.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • Predicting the success of customer campaigns has long been studied in academia, and prediction models applying various techniques are still being developed. Recently, as campaign channels have expanded with the rapid growth of online business, companies carry out far more diverse campaigns than in the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows, and from the corporate standpoint campaign effectiveness is declining: investment costs rise while the actual success rate stays low. Accordingly, various studies are under way to improve campaign effectiveness in practice. A campaign system ultimately aims to raise the success rate of campaigns by collecting and analyzing diverse customer-related data and applying it to campaigns, and recent work attempts to predict campaign responses with machine learning. Because campaign data have many features, selecting appropriate features is very important. If all input features are used when classifying a large amount of data, learning time grows as the classification task expands, so a minimal input feature set must be extracted from the entire data. Moreover, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. To improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step when analyzing high-dimensional data sets. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used traditional feature selection techniques, but when there are many features they suffer from poor classification prediction performance and long learning times. In this study, we therefore propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The goal is to improve the existing sequential SFFS method in the search for the feature subsets that underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied to increase search efficiency and enable generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm, and campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), or recursive feature elimination (RFE). In addition, the improved feature selection algorithm helped in analyzing and interpreting the prediction results by providing the importance of the derived features. The important features include attributes such as age, customer rating, and sales, which were already known to be statistically important. Unexpectedly, features that campaign planners had rarely used to select targets, such as the combined product name, the average 3-month data consumption rate, and the last 3 months of wireless data usage, were also selected as important for campaign response. This confirms that basic attributes can be very important features depending on the campaign type, making it possible to analyze and understand the important characteristics of each campaign type.
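A minimal sketch of the general idea of pre-ranking features with a simple statistic and then growing the subset greedily, SFS-style; this is not the authors' exact algorithm, and the data set, classifier, and ranking statistic below are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative data in place of real campaign-response data.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# Pre-rank features by a simple statistic, then add them greedily (SFS-style),
# keeping a feature only if it improves cross-validated performance.
ranked = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

model, selected, best = LogisticRegression(max_iter=1000), [], 0.0
for f in ranked:
    candidate = selected + [int(f)]
    score = cross_val_score(model, X[:, candidate], y, cv=5).mean()
    if score > best:                 # keep the feature only if it helps
        selected, best = candidate, score

print(selected, round(best, 3))      # chosen feature indices and CV accuracy
```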

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decisions. Customers believe that reviews written by others who have already experienced a product offer more reliable information than that provided by sellers. However, because there are so many products and reviews, the advantages of e-commerce can be outweighed by increasing search costs, and reading every review to find the pros and cons of a product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and different methods have been developed to classify and recommend useful reviews, primarily using feedback from customers about the helpfulness of reviews. Most shopping websites provide customer reviews together with the average preference for a product, the number of customers who participated in preference voting, and the preference distribution; most information on the helpfulness of reviews is collected through such voting systems. Amazon.com asks customers whether a review of a product is helpful and places the most helpful favorable review and the most helpful critical review at the top of the review list. Some companies also predict the usefulness of a review from attributes such as length, author(s), and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, a term-document matrix must be built by extracting all words from the reviews and recording the number of occurrences of each term in each review. Because there are many reviews, the term-document matrix is very large, which makes it difficult to apply text mining algorithms; researchers therefore delete sparse terms, since rarely occurring words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting useless terms for review classification. We propose a neutrality index for selecting the words to be deleted: many words appear similarly in both classes, useful and not useful, and have little or even a negative effect on classification performance. We define such words as neutral terms and, after deleting sparse words, delete the terms that appear similarly in both classes. We tested our approach on Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of useful votes to total votes as the threshold for classifying useful and not-useful reviews, and randomly selected 1,500 useful and 1,500 not-useful reviews for each category. We then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared classification performance in terms of precision, recall, and F-measure. Although performance varies across product categories and data sets, deleting terms by both sparsity and neutrality showed the best F-measure for both classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM; term-deletion methods and classification algorithms therefore need to be chosen carefully for each data set.
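A minimal sketch of one way a neutrality score could be computed and used to drop terms that appear similarly in useful and not-useful reviews; the scoring formula, counts, and threshold are illustrative rather than the paper's exact neutrality index:

```python
from collections import Counter

def neutrality(counts_useful: Counter, counts_not_useful: Counter) -> dict:
    """Neutrality sketch (the paper's exact index may differ): a term whose
    relative frequency is similar in useful and not-useful reviews scores
    close to 1.0 and becomes a candidate for removal."""
    total_u = sum(counts_useful.values()) or 1
    total_n = sum(counts_not_useful.values()) or 1
    scores = {}
    for term in set(counts_useful) | set(counts_not_useful):
        p_u = counts_useful[term] / total_u
        p_n = counts_not_useful[term] / total_n
        scores[term] = min(p_u, p_n) / max(p_u, p_n) if max(p_u, p_n) else 0.0
    return scores

# Illustrative term counts from the two review classes
useful = Counter({"battery": 12, "great": 30, "the": 200})
not_useful = Counter({"battery": 3, "bad": 25, "the": 190})
neutral_terms = [t for t, s in neutrality(useful, not_useful).items() if s > 0.8]
print(neutral_terms)   # e.g., ['the'] would be removed before building the term-document matrix
```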