• Title/Summary/Keyword: automation derivation (자동화 도출)

Search Results: 366, Processing Time: 0.026 seconds

Development of Automatic Shear-wave Source for Downhole Seismic Method (다운홀 탄성파 기법용 전단파 자동 가진원의 개발)

  • Bang, Eun-Seok;Sung, Nak-Hoon;Kim, Jung-Ho;Kim, Dong-Soo
    • Journal of the Korean Geotechnical Society / v.23 no.11 / pp.27-37 / 2007
  • The downhole seismic method is economical and easy to operate because it uses only one borehole and a simple surface source to obtain the shear-wave velocity profile of a site. In this study, an automatic shear-wave source was developed for efficient downhole seismic testing. The source is a motor-spring type and is easy to control. It reduces operator labor and working time, and it provides better, more repeatable signals for data interpretation. By combining the developed automatic source with an automatic receiver system, a PC-based data acquisition system, and an advanced management program, a semi-automatic downhole testing system was constructed. Through comparison tests with a manual source, the advantages of the automatic source were verified. The semi-automatic downhole testing system, including the automatic shear-wave source, was applied to a soft-soil site; its applicability and reliability were verified, and the importance of automating the testing system for obtaining reliable results was emphasized.

An Intelligent Approach for Reorganization Record Classification Schemes in Public Institutions: Case Study on L Institution (공공기관 기록물 분류체계 재정비를 위한 지능화 방안: L 기관 사례를 중심으로)

  • Jinsol Lim;Hui-Jeong Han;Hyo-Jung Oh
    • Journal of the Korean Society for Information Management / v.40 no.2 / pp.137-156 / 2023
  • As social and political paradigms change, public institution tasks and structures are constantly created, integrated, or abolished. From an effective record management perspective, it is necessary to review whether previously established record classification schemes reflect these changes and remain relevant to current tasks. However, in most institutions the restructuring process relies on manual labor and the experiential judgment of practitioners or institutional record managers, making it difficult to reflect changes in a timely manner or to comprehensively understand the overall context. To address these issues and improve the efficiency of record management, this study proposes an approach that uses automation and intelligence technologies to restructure classification schemes, ensuring records are filed within an appropriate context. The proposed approach was applied to the target institution, and its results were used as the basis for interviews with practitioners to verify the effectiveness and limitations of the approach. The approach aims to enhance the accuracy and reliability of the restructured record classification schemes and to promote the standardization of record management.

A Study on Predicting the Potential for Bid-Rigging among Bidders using FTT in Public Construction Projects (FTT를 활용한 입찰실무자의 입찰담합 가능성 예측 연구)

  • Cho, Jin-Ho;Shin, Young-Su;Kim, Byung-Soo
    • Korean Journal of Construction Engineering and Management / v.24 no.6 / pp.36-44 / 2023
  • The aim of this study is to predict the potential for bid-rigging among public construction bidders and to develop preventive and responsive measures. By employing the Fraud Triangle Theory (FTT), which considers the factors of opportunity, pressure, and rationalization, we analyze and explore the bid-rigging potential among construction industry professionals. A survey was conducted among general and specialized construction firms to validate the FTT research model and predict individual bid-rigging tendencies. The results indicate that higher levels of pressure, greater opportunities, and stronger rationalization increase the likelihood of individual bid-rigging. Moreover, more opportunities and stronger rationalization also raise the potential for bid-rigging by others. Understanding these dynamics enhances our knowledge and contributes to the development of policies aimed at preventing bid-rigging.

Detection Models and Response Techniques of Fake Advertising Phishing Websites (가짜 광고성 피싱 사이트 탐지 모델 및 대응 기술)

  • Eunbeen Lee;Jeongeun Cho;Wonhyung Park
    • Convergence Security Journal / v.23 no.3 / pp.29-36 / 2023
  • With the recent surge in exposure to fake advertising phishing sites in search engines, the damage caused by poor search quality and personal information leakage is increasing. In particular, the problem is worsening as tools such as ChatGPT make it easier to automate the creation of advertising phishing sites. In this paper, the source code of fake advertising phishing sites was statically analyzed to derive structural commonalities, and a detection crawler that filters sites step by step based on foreign domains and redirection was developed, confirming that fake advertising posts can be detected. In addition, we demonstrate the need for new guidelines by verifying that the redirection pages of fake advertising sites fall into three types and return different sites depending on the situation. Furthermore, we propose new detection guidelines for fake advertising phishing sites that cannot be detected by existing methods.
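The stepwise filter described above (foreign-domain screening, then redirection detection) can be sketched as follows. This is a hypothetical reimplementation: the TLD list, the regex patterns, and the two-stage order are illustrative assumptions, since the paper's actual crawler rules are not given in the abstract.

```python
import re
from urllib.parse import urlparse

# Assumed example list of suspicious foreign TLDs; not from the paper.
FOREIGN_TLDS = {".cn", ".ru", ".top", ".xyz"}

def is_foreign_domain(url: str) -> bool:
    """Step 1: flag sites hosted under suspicious foreign TLDs."""
    host = urlparse(url).hostname or ""
    return any(host.endswith(tld) for tld in FOREIGN_TLDS)

def has_redirection(html: str) -> bool:
    """Step 2: flag pages that immediately forward the visitor via a
    meta-refresh tag or a JavaScript location assignment."""
    meta = re.search(r'<meta[^>]+http-equiv=["\']refresh["\']', html, re.I)
    js = re.search(r'(window\.)?location(\.href)?\s*=', html)
    return bool(meta or js)

def filter_candidates(pages):
    """Stepwise filter over crawled (url, html) pairs: keep only
    foreign-domain pages that also redirect."""
    return [url for url, html in pages
            if is_foreign_domain(url) and has_redirection(html)]
```

Running the filter over crawled (url, html) pairs keeps only pages that trip both heuristics, mirroring the step-by-step narrowing the abstract describes.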

Computer Vision-Based Measurement Method for Wire Harness Defect Classification

  • Yun Jung Hong;Geon Lee;Jiyoung Woo
    • Journal of the Korea Society of Computer and Information / v.29 no.1 / pp.77-84 / 2024
  • In this paper, we propose a method for accurately and rapidly detecting defects in wire harnesses by utilizing computer vision to calculate six crucial measurement values: the length of crimped terminals, the dimensions (width) of terminal ends, and the width of crimped sections (wire and core portions). We employ Harris corner detection to locate object positions from two types of data. Additionally, we generate reference points for extracting measurement values by utilizing features specific to each measurement area and exploiting the contrast in shading between the background and objects, thus reflecting the slope of each sample. Subsequently, we introduce a method using the Euclidean distance and correction coefficients to predict values, allowing measurements to be predicted regardless of changes in the wire's position. We achieve high accuracy for each measurement type (99.1%, 98.7%, 92.6%, 92.5%, 99.9%, and 99.7%), for an outstanding overall average accuracy of 97% across all measurements. This inspection method not only addresses the limitations of conventional visual inspection but also yields excellent results with a small amount of data. Moreover, relying solely on image processing, it is expected to be more cost-effective and applicable with less data than deep learning methods.
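The core measurement step (a Euclidean distance between detected reference points, scaled by a correction coefficient) can be sketched as below; the coefficient value and point coordinates are made-up examples, as the paper's calibration details are not given in the abstract.

```python
from math import hypot

def measure(p1, p2, correction=1.0):
    """Corrected measurement from two reference points, e.g. corners
    located by Harris corner detection. `correction` is a hypothetical
    per-measurement calibration coefficient (pixels -> physical units)."""
    return hypot(p2[0] - p1[0], p2[1] - p1[1]) * correction

def translate(p, dx, dy):
    """Shift a point; used to show the measurement is unaffected by
    changes in the wire's position within the image."""
    return (p[0] + dx, p[1] + dy)
```

Because the distance depends only on the two endpoints, the same value is obtained wherever the wire sits in the frame, which is the position-invariance the authors claim.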

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that the use of news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. In order to examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization. Second, we created a summarized-news-based detection model. Finally, we compared our model with the full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance; however, for DT (Decision Tree), the full-text-based model demonstrated somewhat better performance. In the case of LR (Logistic Regression), our model exhibited superior performance. Nonetheless, the results did not show a statistically significant difference between our model and the full-text-based model. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model shows potential for performance improvement.
This study features an experimental application of extractive summarization in fake news detection research by employing various machine-learning algorithms. The study's limitations are, essentially, the relatively small amount of data and the lack of comparison between various summarization technologies. Therefore, an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
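As a first step of the pipeline the study examines, a minimal frequency-based extractive summarizer might look like the sketch below; the actual summarization method and Korean-news preprocessing used in the study are not specified in the abstract, so every detail here is a generic assumption.

```python
import re
from collections import Counter

def split_sentences(text):
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def extractive_summary(text, k=2):
    """Keep the k highest-scoring sentences, preserving original order.
    Each sentence is scored by the document-level frequency of its words,
    normalized by sentence length."""
    sents = split_sentences(text)
    freq = Counter(w.lower() for w in re.findall(r'\w+', text))
    scores = [sum(freq[w.lower()] for w in re.findall(r'\w+', s))
              / max(len(s.split()), 1) for s in sents]
    top = sorted(range(len(sents)), key=lambda i: scores[i], reverse=True)[:k]
    return ' '.join(sents[i] for i in sorted(top))
```

The summary would then replace the full text as input to the BPN, SVM, DT, or LR detection model.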

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.259-278 / 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services. At the same time, problems related to reliability and unexpected social controversy are also occurring due to the nature of data-based machine learning. Based on this background, this study aims to contribute to improving trust in AI-based financial services by proposing a checklist to secure fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI (transparency, safety, accountability, and fairness), fairness was selected as the subject of the study so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through literature research, we divided the entire fairness-related operation process into three areas: data, algorithms, and users. For each area, we constructed four detailed considerations for evaluation, resulting in 12 checklist items. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP), using three groups that represent the full range of financial stakeholders: financial field workers, artificial intelligence field workers, and general users. The three groups were classified and analyzed according to the importance each stakeholder assigned, and from a practical perspective, specific checks were identified, such as feasibility verification for using learning data and non-financial information and monitoring of newly inflowing data. Moreover, financial consumers in general were found to be highly attentive to the accuracy of result analysis and bias checks. We expect these results to contribute to the design and operation of fair AI-based financial services.
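The AHP evaluation mentioned above can be sketched with the geometric-mean prioritization method; the 3x3 judgment matrix below is a fabricated example for the three areas (data, algorithms, users), not the study's survey data.

```python
def ahp_weights(matrix):
    """Derive priority weights from a pairwise-comparison matrix using
    the geometric mean of each row, normalized to sum to 1."""
    n = len(matrix)
    gmeans = []
    for row in matrix:
        prod = 1.0
        for v in row:
            prod *= v
        gmeans.append(prod ** (1.0 / n))
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical judgments: data is twice as important as algorithms and
# four times as important as users (reciprocals fill the lower triangle).
example = [[1, 2, 4],
           [0.5, 1, 2],
           [0.25, 0.5, 1]]
```

For this example matrix, the method yields weights of roughly 0.57, 0.29, and 0.14, a simple way to rank the three areas before drilling into the 12 checklist items.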

A Study on System Requirements for Integrated Electronic Document Management System (IEDMS) (통합전자문서체계구현을 위한 요구기능 분석 연구 -A사의 전자문서관리 사례를 중심으로-)

  • 권택문
    • Journal of Information Technology Application / v.2 no.1 / pp.55-81 / 2000
  • An Electronic Document Management System (EDMS) is an electronic system solution used to create, capture, distribute, edit, store, and manage documents and related structured data repositories throughout an organization. Recently, documents of any type, such as text, images, and video files, as well as structured databases, can be controlled and managed by an office automation system and an EDMS. Thus, many organizations are already using these information technologies to reduce process cycle-times. But what organizations are missing is an integrated system that connects the current workflow or office automation system and provides immediate access to, and automatic routing of, the organization's mission-critical information. This study tried to identify user requirements for integrating the current information system with a relatively new technology, the electronic document management system, in order to improve business operations, productivity, and quality, and to reduce waste. Integration of the EDMS and the office automation system, together with proper use of these technologies, will improve the organization's processes and compress process cycle-times. For this study, a case study was done by a project team in cooperation with a government organization (company A). Through this case study, valuable electronic document management and office automation system requirements have been identified and reported, providing a system model for designing an Integrated EDMS (IEDMS).

Green Port Strategies for Reducing Air Pollution in Port of Incheon (대기오염 저감을 통한 인천항의 Green Port 전략)

  • Han, Chul-Hwan
    • Journal of Korea Port Economic Association / v.27 no.1 / pp.281-304 / 2011
  • In the energy-climate era, pollution emissions from port activities have become a significant issue in the international shipping and port community. Thus, international organizations such as the IMO, along with developed countries, are seeking to develop various reduction strategies against air pollution, whereas Korea has only recently conducted a few studies concerning air pollution in the port industry. The main purpose of this paper is to suggest emission reduction strategies for the bulk terminal in the Port of Incheon, which handles large amounts of bulk cargo as a gateway to the metropolitan area. To this end, the clean air strategies of the world's major ports were reviewed and air pollution reduction strategies were suggested. The main findings are as follows. First, the emission reduction strategies for the container terminal should be integrated, based on technology changes, operational changes, and market-based measures. Second, the emission reduction strategies for the bulk terminal can be effective when innovative measures are used during the loading, unloading, and storage processes, such as a telescopic cascade trimming chute, snake sandwich equipment, a dry fog system, and a dome structure. Finally, an investigation of the actual conditions of air pollution in Korean ports and the development of an environmental evaluation scheme for persistent monitoring should be conducted.

Assessment of Parallel Computing Performance of Agisoft Metashape for Orthomosaic Generation (정사모자이크 제작을 위한 Agisoft Metashape의 병렬처리 성능 평가)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.6 / pp.427-434 / 2019
  • In the present study, we assessed the parallel computing performance of Agisoft Metashape for orthomosaic generation; based on SfM (Structure from Motion) technology, it can implement aerial triangulation, generate a three-dimensional point cloud, and produce an orthomosaic. Due to the nature of SfM, most of the time is spent on Align photos, which performs relative orientation, and Build dense cloud, which generates the three-dimensional point cloud. Metashape can parallelize the two processes by using multiple CPU (Central Processing Unit) cores and the GPU (Graphics Processing Unit). An orthomosaic was created from large UAV (Unmanned Aerial Vehicle) images under six conditions combining three parallel methods (CPU only, GPU only, and CPU + GPU) and two operating systems (Windows and Linux). To assess the consistency of the results across conditions, the RMSE (Root Mean Square Error) of aerial triangulation was measured using ground control points that were automatically detected on the images without human intervention. The results of orthomosaic generation from 521 UAV images of 42.2 million pixels showed that the combination of CPU and GPU performed best on the present system, and that Linux outperformed Windows under all conditions. However, the RMSE values of aerial triangulation revealed slight differences, within the error range, among the combinations. Therefore, Metashape still leaves something to be desired in guaranteeing consistent results regardless of parallel methods and operating systems.
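The consistency check described above reduces to computing an RMSE over ground control points for each processing condition; a minimal sketch follows, with illustrative coordinates rather than the study's GCP data.

```python
from math import sqrt

def rmse(estimated, reference):
    """3D RMSE between triangulated GCP positions and their surveyed
    reference coordinates; one value per processing condition enables
    the cross-condition comparison described above."""
    assert len(estimated) == len(reference) and estimated
    se = sum((xe - xr) ** 2 + (ye - yr) ** 2 + (ze - zr) ** 2
             for (xe, ye, ze), (xr, yr, zr) in zip(estimated, reference))
    return sqrt(se / len(estimated))
```

Comparing the RMSE values produced under each of the six condition combinations is then a direct way to decide whether the parallel method or operating system changed the triangulation result beyond the error range.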