• Title/Summary/Keyword: AI-based System and Technology

467 search results

Development of a deep-learning based tunnel incident detection system on CCTVs (딥러닝 기반 터널 영상유고감지 시스템 개발 연구)

  • Shin, Hyu-Soung;Lee, Kyu-Beom;Yim, Min-Jin;Kim, Dong-Gyou
    • Journal of Korean Tunnelling and Underground Space Association / v.19 no.6 / pp.915-936 / 2017
  • In this study, the current status of the Korean hazard mitigation guideline for tunnel operation is summarized. It shows that the requirements for CCTV installation have gradually become stricter and that the need for a tunnel incident detection system working in conjunction with the CCTVs has increased substantially. Despite this, the mathematical-algorithm-based incident detection systems commonly applied in current tunnel operation show very low detection rates of less than 50%. The putative major reasons are (1) very weak illumination, (2) dust in the tunnel, and (3) the low installation height of the CCTVs, about 3.5 m. Therefore, this study attempts to develop a deep-learning-based tunnel incident detection system that is relatively insensitive to very poor visibility conditions. Its theoretical background is given, and a validation investigation is undertaken focused on moving vehicles and persons outside vehicles in the tunnel, which are the official major objects to be detected. Two scenarios are set up: (1) training and prediction in the same tunnel, and (2) training in one tunnel and prediction in another tunnel. In both cases, the targeted objects are detected in prediction mode at rates higher than 80% when the training and prediction periods are similar, but the detection rate drops to about 40% when the prediction time is far from the training time and no further training takes place. However, it is believed that the predictability of the AI-based system would improve automatically as further training proceeds on accumulated CCTV big data, without any revision or calibration of the incident detection system.
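
The detection rates reported above are, in essence, the share of labelled objects that the trained detector finds in CCTV frames. The sketch below illustrates that kind of evaluation under assumed conventions (frame-keyed ground-truth and predicted boxes, an IoU threshold of 0.5); it is not the authors' implementation.

```python
# Hypothetical sketch: frame-level detection-rate (recall) evaluation for a
# CCTV object detector. Boxes are assumed to be (x1, y1, x2, y2) tuples per
# frame; the 0.5 IoU match threshold is an assumption, not from the paper.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def detection_rate(ground_truth, predictions, iou_thr=0.5):
    """Fraction of labelled objects matched by at least one prediction."""
    matched, total = 0, 0
    for frame_id, gt_boxes in ground_truth.items():
        preds = predictions.get(frame_id, [])
        for gt in gt_boxes:
            total += 1
            if any(iou(gt, p) >= iou_thr for p in preds):
                matched += 1
    return matched / total if total else 0.0

# Example: one frame with two labelled objects, one of them detected -> 0.5
gt = {"frame_001": [(10, 10, 50, 80), (200, 40, 260, 120)]}
pred = {"frame_001": [(12, 12, 52, 78)]}
print(detection_rate(gt, pred))
```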

A Study on the Real-time Recognition Methodology for IoT-based Traffic Accidents (IoT 기반 교통사고 실시간 인지방법론 연구)

  • Oh, Sung Hoon;Jeon, Young Jun;Kwon, Young Woo;Jeong, Seok Chan
    • The Journal of Bigdata / v.7 no.1 / pp.15-27 / 2022
  • In the past five years, the fatality rate of single-vehicle accidents has been 4.7 times higher than that of all accidents, so it is necessary to establish a system that can detect and respond to single-vehicle accidents immediately. The IoT (Internet of Things)-based real-time traffic accident recognition system proposed in this study is as follows. An IoT sensor that detects impacts and vehicle ingress is attached to the guardrail; when an impact on the guardrail occurs, images of the accident site are analyzed with artificial intelligence technology and transmitted to a rescue organization so that rescue operations can be carried out quickly and damage minimized. An IoT sensor module that recognizes vehicles entering the monitoring area and detects impacts on the guardrail, and an AI-based object detection module trained on vehicle image data, were implemented. In addition, a monitoring and operation module that manages sensor information and image data in an integrated manner was also implemented. For validation of the system, it was confirmed that the target values were all met by measuring the impact-detection transmission speed, the object detection accuracy for vehicles and people, and the sensor failure detection accuracy. In the future, we plan to apply the system to actual roads to verify its validity using real data and to commercialize it. This system will contribute to improving road safety.
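
As a rough illustration of the pipeline described above (guardrail impact sensor triggering AI image analysis and an alert to a rescue organization), the sketch below wires three placeholder components together. The function names, threshold, and payload fields are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of the sensor -> AI -> alert pipeline. The sensor read,
# camera capture, detector, and alert endpoint are placeholders, not real APIs.
import json
import time

ACCEL_THRESHOLD_G = 3.0          # assumed impact threshold on the guardrail sensor

def read_accelerometer():        # placeholder: simulated impact reading for the demo
    return 5.2

def capture_site_image():        # placeholder for the roadside camera
    return b"...jpeg bytes..."

def detect_objects(image_bytes): # placeholder for the AI object-detection module
    return [{"label": "vehicle", "score": 0.97}, {"label": "person", "score": 0.88}]

def send_alert(payload):         # placeholder for transmission to a rescue organization
    print("ALERT:", json.dumps(payload))

def monitor_once():
    g = read_accelerometer()
    if g >= ACCEL_THRESHOLD_G:
        detections = detect_objects(capture_site_image())
        send_alert({
            "time": time.time(),
            "impact_g": g,
            "objects": detections,   # vehicles / persons found at the accident site
        })

monitor_once()
```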

Development of Autonomous Vehicle Learning Data Generation System (자율주행 차량의 학습 데이터 자동 생성 시스템 개발)

  • Yoon, Seungje;Jung, Jiwon;Hong, June;Lim, Kyungil;Kim, Jaehwan;Kim, Hyungjoo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.19 no.5 / pp.162-177 / 2020
  • The perception of the traffic environment based on various sensors in an autonomous driving system is directly related to driving safety. Recently, as perception models based on deep neural networks have come into use with the development of machine learning and deep neural network technology, training of the perception model and a high-quality training dataset are required. However, there are several practical difficulties in collecting data on all situations that may occur in self-driving. The performance of the perception model may deteriorate due to differences between overseas and domestic traffic environments, and the quality of data on bad weather, in which the sensors cannot operate normally, cannot be guaranteed. Therefore, it is necessary to build a virtual road environment in a simulator, rather than on the actual road, to collect the training data. In this paper, a training dataset collection process is suggested that diversifies the weather, illumination, sensor position, and type and count of vehicles in a simulator environment reproducing the domestic road situation. In order to achieve better performance, the authors shifted the image domain closer to real road imagery and diversified it. Performance evaluation was then conducted on test data collected in the actual road environment, and the performance was similar to that of a model trained only on real-environment data.
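
The collection process above varies weather, illumination, sensor position, and vehicle type/count inside a simulator. The sketch below shows one way such scenario randomization could be expressed; the parameter names and value ranges are assumptions, and the resulting configs would be handed to whatever simulator API is actually in use.

```python
# Hypothetical sketch: sampling randomized simulator scenarios for training-data
# generation. Parameter names and ranges are illustrative assumptions only.
import random

WEATHER = ["clear", "rain", "snow", "fog"]
VEHICLE_TYPES = ["sedan", "suv", "bus", "truck", "motorcycle"]

def sample_scenario(seed=None):
    rng = random.Random(seed)
    return {
        "weather": rng.choice(WEATHER),
        "sun_altitude_deg": rng.uniform(-10, 70),    # illumination (night to midday)
        "camera_height_m": rng.uniform(1.2, 2.0),    # sensor position on the ego vehicle
        "camera_pitch_deg": rng.uniform(-5, 5),
        "vehicle_count": rng.randint(5, 60),
        "vehicle_types": [rng.choice(VEHICLE_TYPES)
                          for _ in range(rng.randint(1, len(VEHICLE_TYPES)))],
    }

# Generate a batch of scenario configs to drive simulator runs.
scenarios = [sample_scenario(seed=i) for i in range(1000)]
print(scenarios[0])
```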

Financial Products Recommendation System Using Customer Behavior Information (고객의 투자상품 선호도를 활용한 금융상품 추천시스템 개발)

  • Hyojoong Kim;SeongBeom Kim;Hee-Woong Kim
    • Information Systems Review / v.25 no.1 / pp.111-128 / 2023
  • With the development of artificial intelligence technology, interest in data-based estimation of product preferences and in personalized recommender systems is increasing. However, if a recommendation is not suitable, it may reduce the customer's purchase intention and, given the characteristics of financial products, even lead to large financial losses. Therefore, developing a recommender system that comprehensively reflects customer characteristics and product preferences is very important for business performance and for responding to compliance issues. In the case of financial products, product preference is clearly divided according to individual investment propensity and risk aversion, so it is necessary to provide a customized recommendation service by utilizing accumulated customer data. In addition to these customer behavioral characteristics and transaction history data, we use customer demographic information, asset information, and stock holding information to address the cold-start problem of the recommender system. This study proposes a deep-learning-based collaborative filtering model that derives customers' latent preferences from characteristic information such as investment propensity, transaction history, and financial product information based on customer transaction log records, and found that the proposed model performed best. Based on customers' financial investment behavior, this study is meaningful in that it develops a service that recommends high-priority product groups by establishing a recommendation model that derives expected preferences for untraded financial products from financial product transaction data.
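
A minimal sketch of the kind of deep-learning-based collaborative filtering described above is given below, assuming PyTorch: customer and product embeddings are concatenated with side features (e.g. investment propensity, asset information) and passed through an MLP to score preference. Layer sizes, feature dimensions, and names are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of deep-learning collaborative filtering with customer
# side features; dimensions and layers are illustrative, not the paper's model.
import torch
import torch.nn as nn

class DeepCF(nn.Module):
    def __init__(self, n_customers, n_products, n_side_features, emb_dim=32):
        super().__init__()
        self.cust_emb = nn.Embedding(n_customers, emb_dim)
        self.prod_emb = nn.Embedding(n_products, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim + n_side_features, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, customer_id, product_id, side_features):
        x = torch.cat([self.cust_emb(customer_id),
                       self.prod_emb(product_id),
                       side_features], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # expected preference in [0, 1]

# Example: score an untraded product for one customer (random side features).
model = DeepCF(n_customers=10_000, n_products=500, n_side_features=8)
score = model(torch.tensor([42]), torch.tensor([7]), torch.randn(1, 8))
print(float(score))
```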

Approaches to Applying Social Network Analysis to the Army's Information Sharing System: A Case Study (육군 정보공유체계에 사회관계망 분석을 적용하기 위한방안: 사례 연구)

  • GunWoo Park
    • The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.597-603 / 2023
  • The paradigm of military operations has evolved from platform-centric warfare to network-centric warfare and further to information-centric warfare, driven by advancements in information technology. In recent years, with the development of cutting-edge technologies such as big data, artificial intelligence, and the Internet of Things (IoT), military operations are transitioning towards knowledge-centric warfare (KCW), based on artificial intelligence. Consequently, the military places significant emphasis on integrating advanced information and communication technologies (ICT) to establish reliable C4I (Command, Control, Communication, Computer, Intelligence) systems. This research emphasizes the need to apply data mining techniques to analyze and evaluate various aspects of C4I systems, including enhancing combat capabilities, optimizing utilization in network-based environments, efficiently distributing information flow, facilitating smooth communication, and effectively implementing knowledge sharing. Data mining serves as a fundamental technology in modern big data analysis, and this study utilizes it to analyze real-world cases and propose practical strategies to maximize the efficiency of military command and control systems. The research outcomes are expected to provide valuable insights into the performance of C4I systems and reinforce knowledge-centric warfare in contemporary military operations.
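
The study above applies social network analysis to information-sharing relationships in a C4I environment, for instance to find bottlenecks in information flow. The sketch below shows centrality analysis on a toy "shares information with" graph using the networkx library; the nodes and edges are invented for illustration, not data from the study.

```python
# Hypothetical sketch: centrality analysis of an information-sharing network.
# Nodes and edges are a toy example; in practice they would come from C4I logs.
import networkx as nx

# Directed edge (a, b): unit "a" shares information with unit "b".
edges = [
    ("HQ", "Bde1"), ("HQ", "Bde2"),
    ("Bde1", "Bn11"), ("Bde1", "Bn12"),
    ("Bde2", "Bn21"), ("Bn11", "Bn21"), ("Bn12", "HQ"),
]
G = nx.DiGraph(edges)

# Degree centrality: how broadly a unit shares/receives information.
# Betweenness centrality: how often a unit sits on shortest sharing paths
# (a proxy for bottlenecks in the information flow).
for name, cent in [("degree", nx.degree_centrality(G)),
                   ("betweenness", nx.betweenness_centrality(G))]:
    top = sorted(cent.items(), key=lambda kv: kv[1], reverse=True)[:3]
    print(name, top)
```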

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their in-house AI technologies public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help develop or use deep learning open source software in industry. This study therefore attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, along with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must be provided to support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the stage of using the deep learning framework, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Design of an Integrated University Information Service Model Based on Block Chain (블록체인 기반의 대학 통합 정보서비스 실증 모델 설계)

  • Moon, Sang Guk;Kim, Min Sun;Kim, Hyun Joo
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.2 / pp.43-50 / 2019
  • Block-chain offers technical advantages such as robust security, owing to the structural characteristic that makes forgery impossible, decentralization through sharing the ledger among participants, and hyper-connectivity linking the Internet of Things, robots, and artificial intelligence. As a result, public organizations have highly positive attitudes toward the adoption of block-chain technology, and the design of university information services is no exception. Universities are also considering applying block-chain technology to the foundations on which various information services within a university are implemented. Through case studies of block-chain applications across various industries, this study designs an empirical model of an integrated information service platform that integrates the information systems in a university. A basic road map for university information services is constructed on the basis of block-chain technology, from planning to the actual service design stage. Furthermore, an actual empirical model of an integrated university information service is designed on block-chain by applying this framework.
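
As a rough illustration of the "forgery is impossible" property mentioned above, the toy sketch below hash-links ledger records so that altering any earlier record invalidates every later hash. It is only a didactic example of the chaining principle, not the service model designed in the paper.

```python
# Hypothetical toy sketch of a hash-linked ledger: each block stores the hash
# of the previous block, so tampering with one record breaks the whole chain.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "record": record, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)   # hash covers index, record, prev_hash
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"student": "A", "event": "course registration"})
append_block(chain, {"student": "A", "event": "certificate issued"})
print(verify(chain))                       # True
chain[0]["record"]["event"] = "forged"     # tamper with an earlier record
print(verify(chain))                       # False
```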

Application of Gamma Ray Densitometry in Powder Metallurgy

  • Schileper, Georg
    • Proceedings of the Korean Powder Metallurgy Institute Conference / 2002.07a / pp.25-37 / 2002
  • The most important industrial application of gamma radiation in characterizing green compacts is the determination of the density. Examples are given where this method is applied in manufacturing technical components in powder metallurgy. The requirements imposed by modern quality management systems and by operation by the workforce in industrial production are described. The accuracy of measurement achieved with this method is demonstrated, and a comparison is given with other test methods for measuring the density. The advantages and limitations of gamma ray densitometry are outlined. The gamma ray densitometer measures the attenuation of gamma radiation penetrating the test parts (Fig. 1). As the capability of compacts to absorb this type of radiation depends on their density, the attenuation of gamma radiation can serve as a measure of the density. The volume of the part being tested is defined by the size of the aperture screening out the radiation. It is a channel with the cross section of the aperture whose length is the height of the test part. The intensity of the radiation registered by the detector is the quantity used to determine the material density. Gamma ray densitometry can be performed on green compacts as well as on sintered components. Neither special preparation of test parts nor skilled personnel is required to perform the measurement; neither liquids nor other harmful substances are involved. When parts exhibit local density variations, which is normally the case in powder compaction, sectional densities can be determined in different parts of the sample without cutting it into pieces. The test is non-destructive, i.e. the parts can still be used after the measurement and do not have to be scrapped. The measurement is controlled by special PC-based software. All results are available for further processing by in-house quality documentation and supervision of measurements. Tool setting for multi-level components can be much improved by using this test method. When a densitometer is installed on the press shop floor, it can be operated by the tool setter himself. He can then return to the press and immediately implement the corrections. Transfer of sample parts to the lab for density testing can be eliminated, and results for the correction of tool settings are more readily available. This helps to reduce the time required for tool setting and clearly improves the productivity of powder presses. The range of materials where this method can be successfully applied covers almost the entire periodic system of the elements. It reaches from light elements such as graphite via light metals (Al, Mg, Li, Ti) and their alloys, ceramics ($Al_2O_3$, SiC, $Si_3N_4$, $ZrO_2$, ...), magnetic materials (hard and soft ferrites, AlNiCo, Nd-Fe-B, ...), and metals including iron and alloy steels, Cu, Ni and Co based alloys, to refractory and heavy metals (W, Mo, ...) as well as hardmetals. The gamma radiation required for the measurement is generated by radioactive sources which are produced by nuclear technology. These nuclear materials are safely encapsulated in stainless steel capsules so that no radioactive material can escape from the protective shielding container. The gamma ray densitometer is subject to the strict regulations for the use of radioactive materials. The radiation shield is so effective that there is no elevation of the natural radiation level outside the instrument. Personal dosimetry by the operating personnel is not required.
Even in case of malfunction, loss of power and incorrect operation, the escape of gamma radiation from the instrument is positively prevented.
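
The density measurement described above rests on the exponential attenuation of a narrow gamma beam in matter. As a compact reference (under the usual narrow-beam assumption, with the mass attenuation coefficient of the material and the part height taken as known), the underlying relation can be written as:

```latex
% Narrow-beam gamma attenuation (Beer-Lambert law):
%   I_0  incident intensity,  I  transmitted intensity,
%   \mu_m  mass attenuation coefficient,  \rho  density,  x  part height.
I = I_0 \, e^{-\mu_m \rho x}
\qquad\Longrightarrow\qquad
\rho = \frac{1}{\mu_m x}\,\ln\frac{I_0}{I}
```

so the measured intensity ratio gives the sectional density directly once $\mu_m$ and $x$ are known.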

Development of a Simulator for Optimizing Semiconductor Manufacturing Incorporating Internet of Things (사물인터넷을 접목한 반도체 소자 공정 최적화 시뮬레이터 개발)

  • Dang, Hyun Shik;Jo, Dong Hee;Kim, Jong Seo;Jung, Taeho
    • Journal of the Korea Society for Simulation / v.26 no.4 / pp.35-41 / 2017
  • With the advances in the Internet of Things, the demand for diverse electronic devices such as mobile phones and sensors has been rapidly increasing, boosting research on those products. Semiconductor materials, devices, and fabrication processes are becoming more diverse and complicated, which entails finding the parameters for an optimal fabrication process. In order to find these parameters, a process simulation before fabrication or a real-time process control system during fabrication can be used, but they lack feedback from post-fabrication data and compatibility with older equipment. In this research, we have developed an artificial-intelligence-based simulator that finds parameters for an optimal process and controls process equipment. In order to apply this control concept to all the equipment in a fabrication sequence, we have developed a prototype manipulator that can be installed over the existing buttons and knobs of the equipment and controls the equipment by communicating with the AI over the Internet. The AI is based on deep learning and finds process parameters that will produce a device having the target electrical characteristics. The proposed simulator can control existing equipment via the Internet to fabricate devices with the desired performance and will therefore help engineers develop new devices efficiently and effectively.
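
The system above uses deep learning to find process parameters that yield target electrical characteristics. One simple way to frame that search is shown below: a surrogate model predicts a characteristic from candidate parameters and a random search minimizes the error to the target. The parameter names, ranges, target value, and the stand-in surrogate formula are all illustrative assumptions; in the paper's setting the surrogate would be a deep-learning model trained on fabrication data.

```python
# Hypothetical sketch: random search over process parameters using a surrogate
# model that predicts an electrical characteristic (e.g. a threshold voltage).
# The surrogate here is a stand-in formula, not a trained model.
import random

PARAM_RANGES = {                 # assumed process-parameter ranges
    "anneal_temp_C": (850.0, 1050.0),
    "implant_dose": (1e12, 1e14),
    "oxide_time_min": (5.0, 60.0),
}
TARGET_VTH = 0.35                # desired characteristic (illustrative)

def surrogate_predict(p):        # placeholder for a trained DL surrogate
    return 0.2 + 2.5e-4 * (p["anneal_temp_C"] - 850) \
           + 5e-16 * p["implant_dose"] + 0.002 * p["oxide_time_min"]

def random_search(n_trials=5000, seed=0):
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_trials):
        p = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}
        err = abs(surrogate_predict(p) - TARGET_VTH)
        if err < best_err:
            best, best_err = p, err
    return best, best_err

params, err = random_search()
print(params, err)
```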

Development and Application of a Scenario Analysis System for CBRN Hazard Prediction (화생방 오염확산 시나리오 분석 시스템 구축 및 활용)

  • Byungheon Lee;Jiyun Seo;Hyunwoo Nam
    • Journal of the Korea Society for Simulation / v.33 no.3 / pp.13-26 / 2024
  • The CBRN (Chemical, Biological, Radiological, and Nuclear) hazard prediction model is a system that supports commanders in making better decisions by creating contamination distributions and damage prediction areas based on the weapons used, the terrain, and weather information in the event of chemical, biological, or radiological accidents. NBC_RAMS (Nuclear, Biological and Chemical Reporting And Modeling S/W System), developed by ADD (Agency for Defense Development), is used not only to support decision-making for various military operations and exercises but also for post-event analysis of CBRN-related events. Built on the NBC_RAMS core engine, we introduce a CBR hazard assessment scenario analysis system that can generate contaminant distribution predictions reflecting various CBR scenarios, and we describe how to apply it for specific purposes in terms of input information, meteorological data, land data with land cover and DEM, and building data in polygon form. As practical use cases, we address a technology that tracks the origin of a contaminant source with artificial intelligence and a technology that selects the optimal locations of CBR detection sensors using score data, both developed by analyzing large amounts of data generated with the CBRN scenario analysis system. Through this system, it is possible to generate CBRN training and analysis data specialized for AI and to support the planning of operations and exercises by predicting the battlefield.
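
One of the use cases above scores candidate detection-sensor locations against large numbers of simulated dispersion scenarios. The sketch below illustrates that idea with synthetic data: each scenario is a boolean contamination grid, each grid cell is scored by how many not-yet-covered scenarios it would detect, and sensors are picked greedily. The random grids are placeholders, not NBC_RAMS output, and the greedy rule is one simple choice among many.

```python
# Hypothetical sketch: score candidate sensor cells by how many simulated
# dispersion scenarios they would detect, then pick sensor locations greedily.
# Scenarios here are random boolean grids standing in for model output.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, grid = 500, (40, 40)
# scenarios[s, i, j] == True -> cell (i, j) is contaminated in scenario s
scenarios = rng.random((n_scenarios, *grid)) < 0.05

def greedy_sensor_placement(scenarios, n_sensors=5):
    covered = np.zeros(scenarios.shape[0], dtype=bool)
    chosen = []
    for _ in range(n_sensors):
        # For each cell, count the not-yet-covered scenarios it would detect.
        gain = scenarios[~covered].sum(axis=0)          # shape == grid
        i, j = np.unravel_index(np.argmax(gain), gain.shape)
        chosen.append((int(i), int(j)))
        covered |= scenarios[:, i, j]
    return chosen, covered.mean()

sensors, coverage = greedy_sensor_placement(scenarios)
print(sensors, f"{coverage:.1%} of scenarios detected")
```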