• Title/Summary/Keyword: Generated AI


Enhancing Occlusion Robustness for Vision-based Construction Worker Detection Using Data Augmentation

  • Kim, Yoojun;Kim, Hyunjun;Sim, Sunghan;Ham, Youngjib
    • International conference on construction engineering and project management / 2022.06a / pp.904-911 / 2022
  • Occlusion is one of the most challenging problems for computer vision-based construction monitoring. Due to the intrinsic dynamics of construction scenes, vision-based technologies inevitably suffer from occlusions. Previous studies have proposed occlusion handling methods that leverage prior information from sequential images. However, these methods cannot be employed for construction object detection in non-sequential images. As an alternative, this study proposes a data augmentation-based framework that enhances detection performance under occlusions. The proposed approach is specially designed for rebar occlusions, a distinctive type of occlusion that frequently occurs during construction worker detection. In the proposed method, artificial rebars are synthetically generated to emulate possible rebar occlusions on construction sites. The method thus enables the model to train on a variety of occluded images, thereby improving detection performance without requiring sequential information. The effectiveness of the proposed method is validated by showing that it outperforms a baseline model trained without augmentation. The outcomes demonstrate the great potential of data augmentation techniques for occlusion handling, which can be readily applied to typical object detectors without changing their model architecture.

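The abstract describes the augmentation only at a high level; a minimal, stdlib-only sketch of the core step, overlaying synthetic vertical bars on a grayscale training image to emulate rebar occlusions, might look like the following (bar count, width, and pixel intensity are illustrative assumptions, not the paper's parameters):

```python
import random

def add_synthetic_rebar(image, n_bars=3, bar_width=2, intensity=40):
    """Overlay vertical rebar-like occlusions (dark bars) on a grayscale image.

    `image` is a list of rows (lists of ints 0-255). Bar positions are drawn
    at random so each training pass sees different occlusion patterns.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]          # copy; keep the original intact
    for _ in range(n_bars):
        x = random.randrange(0, w - bar_width)
        for row in out:
            for dx in range(bar_width):
                row[x + dx] = intensity      # darken the pixels under the bar
    return out

# usage: augment a dummy 8x8 "image" of mid-gray pixels
img = [[128] * 8 for _ in range(8)]
occluded = add_synthetic_rebar(img, n_bars=2, bar_width=1)
```

In a real pipeline the same idea would run inside the detector's data loader, so the occlusions vary across epochs while the labels stay unchanged.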

AI-Based Project Similarity Evaluation Model Using Project Scope Statements

  • Ko, Taewoo;Jeong, H. David;Lee, JeeHee
    • International conference on construction engineering and project management / 2022.06a / pp.284-291 / 2022
  • Historical data from comparable projects can serve as benchmarking data for an ongoing project's planning during the project scoping phase. As project owners typically store substantial amounts of data generated throughout project life cycles in digitized databases, they can capture appropriate data to support various project planning activities by accessing those databases. One of the most important tasks in this process is identifying one or more past projects comparable to a new project. The uniqueness and complexity of construction projects, along with unorganized data, impede the reliable identification of comparable past projects. A project scope document provides a preliminary overview of a project in terms of its extent and requirements. However, narrative, free-formatted descriptions of project scopes are a significant and time-consuming barrier when a human needs to review them to determine similar projects. This study proposes an Artificial Intelligence-driven model for analyzing project scope descriptions and evaluating project similarity using natural language processing (NLP) techniques. The proposed algorithm can intelligently a) extract major work activities from unstructured descriptions held in a database and b) quantify similarities by considering the semantic features of the texts representing work activities. The proposed model enhances comparable historical project identification by systematically analyzing project scopes.

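The similarity-scoring step can be illustrated with a bag-of-words cosine similarity. The paper's actual pipeline uses semantic NLP features, so this stdlib-only version is only a structural stand-in, and the example scope statements below are invented:

```python
import math
import re
from collections import Counter

def similarity(scope_a, scope_b):
    """Cosine similarity between two scope statements over word counts."""
    def tokenize(s):
        return Counter(re.findall(r"[a-z]+", s.lower()))
    a, b = tokenize(scope_a), tokenize(scope_b)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# hypothetical scope statements for two drainage projects and one unrelated one
s1 = "install storm drainage pipe and excavate trench"
s2 = "trench excavation and drainage pipe installation"
s3 = "repaint bridge steel girders"
```

Note that `similarity(s1, s2)` already ranks the related projects above the unrelated one, but surface-form mismatches such as "excavate" vs. "excavation" are missed; this is exactly where the paper's semantic-feature approach improves on raw token matching.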

The Management of Smart Safety Houses Using The Deep Learning (딥러닝을 이용한 스마트 안전 축사 관리 방안)

  • Hong, Sung-Hwa
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.505-507 / 2021
  • Image recognition technology recognizes an image object using generated feature descriptors. Based on artificial intelligence technology, it generates object feature points and feature descriptors that can compensate for the shape of the object to be recognized, environmental changes around the object, and the degradation of recognition performance caused by object rotation. The purpose of this work is to implement a power management framework that increases profits and minimizes damage to livestock farmers by improving the efficiency of livestock house power use and preventing accidents that may occur from electrical overload. This is achieved by integrating and managing a power and fire management device installed to analyze the complex environment of power consumption and fire occurrence in a smart safety livestock house, with the ultimate goal of developing and disseminating a safe, optimized, intelligent smart safety livestock house.


Estimation of the mechanical properties of oil palm shell aggregate concrete by novel AO-XGB model

  • Yipeng Feng;Jiang Jie;Amir Toulabi
    • Steel and Composite Structures / v.49 no.6 / pp.645-666 / 2023
  • Due to the steadily declining supply of natural coarse aggregates, the concrete industry has shifted to substituting coarse aggregates generated from byproducts and industrial waste. Oil palm shell (OPS) is a substantial waste product created during the production of palm oil. When considering the use of oil palm shell concrete (OPSC), building engineers must consider its uniaxial compressive strength (UCS). Obtaining UCS experimentally is expensive and time-consuming; machine learning may help. This research established five innovative hybrid AI algorithms to predict UCS. The Aquila optimizer (AO) is combined with each method to discover optimum model parameters. The considered models are an artificial neural network (AO-ANN), adaptive neuro-fuzzy inference system (AO-ANFIS), support vector regression (AO-SVR), random forest (AO-RF), and extreme gradient boosting (AO-XGB). To achieve this goal, a dataset of OPS-based concrete specimens was compiled. The results show that all five developed models have justifiable accuracy in the UCS estimation process, with a remarkable correlation between measured and estimated UCS demonstrating the models' usefulness. Overall, the findings show that the proposed AO-XGB model performed better than the others in predicting the UCS of OPSC (with R2, RMSE, MAE, VAF, and A15-index of 0.9678, 1.4595, 1.1527, 97.6469, and 0.9077, respectively). The proposed model could be utilized in construction engineering to ensure sufficient mechanical workability of lightweight concrete and permit its safe use for construction purposes.
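The reported fit metrics can be computed for any model's predictions. The sketch below assumes the standard definitions of R2, RMSE, and MAE, VAF as explained variance in percent, and the A15-index as the share of predictions within ±15% of the measured value; the abstract does not spell these definitions out, so they are assumptions:

```python
import math
from statistics import mean, pvariance

def regression_metrics(y_true, y_pred):
    """Fit metrics of the kind used to compare the hybrid AO models.

    Returns R2, RMSE, MAE, VAF (percent), and A15 (fraction of predictions
    within +/-15% of the measured value) under assumed standard definitions.
    """
    errors = [t - p for t, p in zip(y_true, y_pred)]
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean(y_true)) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / len(errors))
    mae = mean(abs(e) for e in errors)
    vaf = (1 - pvariance(errors) / pvariance(y_true)) * 100
    a15 = mean(1.0 if abs(e) <= 0.15 * abs(t) else 0.0
               for t, e in zip(y_true, errors))
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "VAF": vaf, "A15": a15}
```

Computing all five on a held-out set, as the paper does, lets accuracy (R2, RMSE, MAE) and reliability (VAF, A15) be compared across candidate models in one pass.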

Increasing Accuracy of Stock Price Pattern Prediction through Data Augmentation for Deep Learning (데이터 증강을 통한 딥러닝 기반 주가 패턴 예측 정확도 향상 방안)

  • Kim, Youngjun;Kim, Yeojeong;Lee, Insun;Lee, Hong Joo
    • The Journal of Bigdata / v.4 no.2 / pp.1-12 / 2019
  • As Artificial Intelligence (AI) technology develops, it is applied to various fields such as image, voice, and text, and has shown fine results in certain areas. Researchers have also tried to predict the stock market by utilizing artificial intelligence. Predicting the stock market is known to be a difficult problem, since the stock market is affected by various factors such as the economy and politics. In the field of AI, there are attempts to predict the ups and downs of stock prices by studying stock price patterns using various machine learning techniques. This study suggests a way of predicting stock price patterns based on the Convolutional Neural Network (CNN). A CNN uses neural networks to classify images by extracting features from them through convolutional layers; this study therefore classifies candlestick images made from stock data in order to predict patterns. The study has two objectives. The first, referred to as Case 1, is to predict patterns from images made from same-day stock price data. The second, referred to as Case 2, is to predict next-day stock price patterns from images produced from daily stock price data. In Case 1, two data augmentation methods, random modification and Gaussian noise, are applied to generate more training data, and the generated images are used to fit the model. Given that deep learning requires a large amount of data, this study suggests a data augmentation method for candlestick images, and compares the accuracies of images with Gaussian noise across different classification problems. All data in this study were collected through the OpenAPI provided by DaiShin Securities. Case 1 has five labels depending on the pattern: up with up closing, up with down closing, down with up closing, down with down closing, and staying.
The images in Case 1 are created by removing the last candle (-1candle), the last two candles (-2candles), or the last three candles (-3candles) from 60-minute, 30-minute, 10-minute, and 5-minute candle charts. In a 60-minute candle chart, one candle in the image carries 60 minutes of information: an open price, high price, low price, and close price. Case 2 has two labels, up and down; its images were generated from 60-minute, 30-minute, 10-minute, and 5-minute candle charts without removing any candle. Considering the nature of stock data, moving the candles in the images is suggested instead of existing data augmentation techniques. How far the candles are moved is defined by the modified value. The average difference of closing prices between candles was 0.0029; therefore, modified values of 0.003, 0.002, 0.001, and 0.00025 were used. The number of images was doubled after data augmentation. For the Gaussian noise, the mean was 0 and the variance was 0.01. For both Case 1 and Case 2, the model is based on VGG-Net16, which has 16 layers. As a result, 10-minute -1candle showed the best accuracy among the 60-minute, 30-minute, 10-minute, and 5-minute candle charts, so 10-minute images were utilized for the rest of the Case 1 experiments. The -3candle images were selected for data augmentation and the application of Gaussian noise: 10-minute -3candle resulted in 79.72% accuracy, the accuracy with a 0.00025 modified value and 100% of candles changed was 79.92%, and applying Gaussian noise raised the accuracy to 80.98%. According to the Case 2 outcomes, 60-minute candle charts could predict the next day's patterns with 82.60% accuracy. In sum, this study is expected to contribute to further studies on predicting stock price patterns using images, and provides a possible data augmentation method for stock data.

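The candle-moving augmentation described above might be sketched as follows. The abstract does not specify the distribution of the shift, so drawing it uniformly within ± the modified value is an assumption, as is applying the mean-0, variance-0.01 Gaussian noise independently to each OHLC value:

```python
import random

def augment_candles(candles, modified_value=0.003, noise_std=0.1, use_noise=False):
    """Shift each candle's OHLC by a small random offset (the study's
    'modified value'), optionally adding Gaussian noise (mean 0, var 0.01,
    i.e. std 0.1). `candles` is a list of (open, high, low, close) tuples.
    """
    out = []
    for o, h, l, c in candles:
        shift = random.uniform(-modified_value, modified_value)
        candle = [v + shift for v in (o, h, l, c)]   # move the whole candle
        if use_noise:
            candle = [v + random.gauss(0, noise_std) for v in candle]
        out.append(tuple(candle))
    return out

random.seed(0)
src = [(100.0, 101.0, 99.0, 100.5), (100.5, 102.0, 100.0, 101.5)]
aug = augment_candles(src, modified_value=0.003)
```

Rendering the shifted series back into chart images would then double the training set, as in the study.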

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service (사용자 건강 상태알림 서비스의 상황인지를 위한 기계학습 모델의 학습 데이터 생성 방법)

  • Mun, Jong Hyeok;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.25-32 / 2020
  • In context-aware systems, rule-based AI technology has been used in the abstraction process for obtaining context information. However, the rules become complicated as user requirements for the service diversify, and data usage increases, so there are technical limitations to maintaining rule-based models and to processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. To utilize a machine learning-based model in a context-aware system, a management process that periodically injects training data is required. A previous study on machine learning-based context-aware systems considered a series of management processes, such as the generation and provision of learning data for operating several machine learning models, but the method was limited to the system it was applied to. In this paper, we propose a training data generation method for machine learning models that extends machine learning-based context-aware systems. The proposed method defines a training data generation model that can reflect the requirements of the machine learning models, and generates the training data for each of them. In the experiment, the training data generation model is defined based on the training data generation schema of a cardiac status analysis model for older adults in a health status notification service, and the training data are generated by applying the model in a real software environment. We also compare the accuracy obtained by training machine learning models on the generated data, verifying the validity of the generated training data.
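A schema-driven generator of the kind described could be sketched as below. The cardiac-status field names, value ranges, and labeling rule are purely hypothetical, invented to show the shape of the idea: the schema declares what each machine learning model needs, and the generator produces labeled records from it:

```python
import random

def generate_training_data(schema, n=100):
    """Generate labeled records from a declarative schema.

    `schema["features"]` maps feature name -> (low, high) range;
    `schema["labeler"]` assigns a label to each generated record.
    """
    rows = []
    for _ in range(n):
        row = {name: random.uniform(lo, hi)
               for name, (lo, hi) in schema["features"].items()}
        row["label"] = schema["labeler"](row)
        rows.append(row)
    return rows

# hypothetical cardiac-status schema: abnormal if heart rate or SpO2 is off
schema = {
    "features": {"heart_rate": (40, 180), "spo2": (85, 100)},
    "labeler": lambda r: "abnormal" if r["heart_rate"] > 120 or r["spo2"] < 90
               else "normal",
}
data = generate_training_data(schema, n=10)
```

Swapping in a different schema yields training data for a different model without touching the generator, which is the extensibility the paper argues for.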

Real-Time Scheduling Scheme based on Reinforcement Learning Considering Minimizing Setup Cost (작업 준비비용 최소화를 고려한 강화학습 기반의 실시간 일정계획 수립기법)

  • Yoo, Woosik;Kim, Sungjae;Kim, Kwanho
    • The Journal of Society for e-Business Studies / v.25 no.2 / pp.15-27 / 2020
  • This study starts from the idea that creating a Gantt chart for schedule planning resembles a Tetris game with only straight-line blocks. In this Tetris-like game, the X axis represents the M machines and the Y axis represents time. It is assumed that all order types can be processed without separation on all machines, but that a setup cost is incurred whenever consecutive orders are of different types. In this study, the game described above was named Gantris and its game environment was implemented. The real-time schedule produced by deep reinforcement learning is compared with a human-made schedule of the game. In the comparative study, two learning environments were examined: a single order-list environment and a random order-list environment. The two systems compared are a four-machine, two-order-type system (4M2T) and a ten-machine, six-order-type system (10M6T). As the performance indicator of a generated schedule, a weighted sum of setup cost, makespan, and idle time in processing 100 orders was used. In the 4M2T system, regardless of the learning environment, the trained system generated schedules with a better performance index than the human experimenter. In the 10M6T system, the AI system generated schedules with better performance indicators than the experimenter in the single-list environment, but showed a worse performance index in the random-list environment. However, in terms of the number of job changes, the learning system showed better results than the experimenter in both 4M2T and 10M6T, demonstrating excellent scheduling performance.
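The performance indicator, a weighted sum of setup cost, makespan, and idle time, can be computed from a finished schedule as follows. Equal weights and a unit setup cost are assumed purely for illustration; the study's actual weights are not given in the abstract:

```python
def performance_index(schedule, setup_cost=1.0, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of setup cost, makespan, and idle time for a schedule.

    `schedule` maps machine name -> ordered list of (order_type, duration).
    A setup cost is charged whenever consecutive orders on a machine differ
    in type, matching the Gantris rule described in the abstract.
    """
    w_setup, w_make, w_idle = weights
    total_setup, finish_times = 0, []
    for jobs in schedule.values():
        t, prev = 0, None
        for order_type, duration in jobs:
            if prev is not None and order_type != prev:
                total_setup += setup_cost        # type change incurs a setup
            t += duration
            prev = order_type
        finish_times.append(t)
    makespan = max(finish_times)
    idle = sum(makespan - t for t in finish_times)  # wait until makespan
    return w_setup * total_setup + w_make * makespan + w_idle * idle
```

In a reinforcement learning setup, the negative of this index (or its per-step change) would serve as the reward signal.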

Analysis of Defective Causes in Real Time and Prediction of Facility Replacement Cycle based on Big Data (빅데이터 기반 실시간 불량품 발생 원인 분석 및 설비 교체주기 예측)

  • Hwang, Seung-Yeon;Kwak, Kyung-Min;Shin, Dong-Jin;Kwak, Kwang-Jin;Rho, Young-J;Park, Kyung-won;Park, Jeong-Min;Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.6 / pp.203-212 / 2019
  • Along with the recent fourth industrial revolution, the world's manufacturing powerhouses are pushing national strategies to revive the sluggish manufacturing industry. In line with this trend, the Moon Jae-in government announced a strategy under which advances in science and technology would lead the fourth industrial revolution. Intelligent information technologies such as IoT, Cloud, Big Data, Mobile, and AI, the key technologies of the fourth industrial revolution, are promoting the emergence of new industries such as robotics and 3D printing and making existing major manufacturing industries smarter. Advances in technologies such as smart factories have enabled IoT-based sensing to measure data that could not be collected before, and the data generated by each process has exploded. This paper therefore uses data generators to produce virtual data of the kind that can occur in smart factories, and uses it to analyze the causes of defects in real time and to predict facility replacement cycles.
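A virtual data generator of the kind the paper relies on might be sketched as follows. The sensor fields, their distributions, and the rule tying defect probability to temperature drift are all illustrative assumptions, not the paper's actual generator:

```python
import random

def generate_sensor_stream(n=1000, defect_rate=0.02, seed=None):
    """Emit virtual process records for a smart-factory simulation.

    Each record carries hypothetical sensor readings plus a defect flag;
    the defect probability rises once temperature drifts above 85 deg C
    (an invented rule standing in for a real defect-cause relationship).
    """
    rng = random.Random(seed)
    records = []
    for i in range(n):
        temp = rng.gauss(75.0, 5.0)           # process temperature, deg C
        vibration = rng.gauss(0.3, 0.05)      # vibration amplitude, mm
        p_defect = defect_rate + max(0.0, temp - 85.0) * 0.05
        records.append({
            "id": i,
            "temp": temp,
            "vibration": vibration,
            "defect": rng.random() < p_defect,
        })
    return records
```

Because the generator embeds a known cause (temperature drift), a downstream analysis can be validated by checking whether it rediscovers that relationship from the stream.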

The Effect of Inulin Supplementation on Blood Lipid Levels, and Fecal Excretion of Bile Acid and Neutral Sterol in Korean Postmenopausal Women (폐경 후 한국 여성에서 이눌린 보충이 혈중 지질 농도와 변 담즙산 및 중성 스테롤 배설에 미치는 영향)

  • 이은영;김윤영;장기효;강순아;조여원
    • Journal of Nutrition and Health / v.37 no.5 / pp.352-363 / 2004
  • Lipid-lowering effects of inulin have been demonstrated in animals, yet attempts to reproduce similar effects in humans have generated conflicting results. In this study, the lipid-lowering potential of inulin, and especially its effect on bile acid and neutral sterol excretion, was investigated in Korean postmenopausal women. Nineteen postmenopausal women were randomly divided into two groups in a double-blind parallel design and consumed one of two supplements for 12 weeks: a placebo of 8 g maltodextrin/sucrose mixture (placebo group) or 8 g inulin (inulin group). There were no significant changes in body weight during the supplementation period in either group. Dietary consumption of animal fat in both groups tended to decrease after 12 weeks. Cholesterol intake was lower in the placebo group, whereas the decrease of cholesterol intake in the inulin group did not reach statistical significance after 12 weeks. The levels of serum total cholesterol (TC) and LDL-cholesterol (LDL-C) were significantly decreased in both the placebo (p<0.05) and inulin groups (p<0.01) after 12 weeks of supplementation compared with baseline. Serum triglyceride (TG) and HDL-cholesterol (HDL-C) levels were not significantly affected by inulin supplementation, but the atherogenic index (AI) and LDL-C/HDL-C ratio (LHR), predictors of coronary heart disease, improved significantly (p<0.01) after inulin supplementation. Therefore, inulin supplementation may decrease the risk of cardiovascular disease by improving blood cholesterol levels. Fecal weight and pH were not changed after 12 weeks of supplementation, and there were no statistically significant changes in fecal short-chain fatty acids (SCFAs). In the inulin group, fecal deoxycholic acid (DCA) was significantly lowered compared with baseline (p<0.05), whereas other bile acids were not changed.
During the 12 weeks of intervention, no differences were found in the fecal excretion of neutral sterols between the two groups. In summary, dietary inulin decreases serum TC, LDL-C, AI, and LHR and lowers the excretion of fecal DCA in Korean postmenopausal women. These results support the use of inulin to reduce risk factors in hyperlipidemic postmenopausal women. However, the exact mechanism(s) responsible for the blood lipid-lowering action of inulin, including altered fecal bile acid excretion, remain to be elucidated.
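The two risk predictors reported in the study can be computed directly from a lipid panel. The sketch assumes the common definitions AI = (TC − HDL-C)/HDL-C and LHR = LDL-C/HDL-C, which the abstract itself does not state:

```python
def atherogenic_index(tc, hdl):
    """Atherogenic index, commonly defined as (TC - HDL-C) / HDL-C.
    Inputs in the same units (e.g. mg/dL)."""
    return (tc - hdl) / hdl

def ldl_hdl_ratio(ldl, hdl):
    """LDL-C / HDL-C ratio (LHR), a coronary heart disease predictor."""
    return ldl / hdl
```

Both predictors fall when HDL-C rises or when TC/LDL-C fall, which is why they can improve significantly even when individual lipid changes look modest.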

Application of Gamma Ray Densitometry in Powder Metallurgy

  • Schileper, Georg
    • Proceedings of the Korean Powder Metallurgy Institute Conference / 2002.07a / pp.25-37 / 2002
  • The most important industrial application of gamma radiation in characterizing green compacts is the determination of density. Examples are given where this method is applied in manufacturing technical components in powder metallurgy. The requirements imposed by modern quality management systems and by operation by the workforce in industrial production are described. The accuracy of measurement achieved with this method is demonstrated, and a comparison with other density test methods is given. The advantages and limitations of gamma ray densitometry are outlined. The gamma ray densitometer measures the attenuation of gamma radiation penetrating the test parts (Fig. 1). As the capability of compacts to absorb this type of radiation depends on their density, the attenuation of gamma radiation can serve as a measure of the density. The volume of the part being tested is defined by the size of the aperture screening out the radiation: it is a channel with the cross section of the aperture whose length is the height of the test part. The intensity of the radiation registered by the detector is the quantity used to determine the material density. Gamma ray densitometry can be performed on green compacts as well as on sintered components. Neither special preparation of test parts nor skilled personnel is required to perform the measurement; neither liquids nor other harmful substances are involved. When parts exhibit local density variations, which is normally the case in powder compaction, sectional densities can be determined in different parts of the sample without cutting it into pieces. The test is non-destructive, i.e. the parts can still be used after the measurement and do not have to be scrapped. The measurement is controlled by special PC-based software, and all results are available for further processing by in-house quality documentation and supervision of measurements.
Tool setting for multi-level components can be much improved by using this test method. When a densitometer is installed on the press shop floor, it can be operated by the tool setter himself, who can then return to the press and immediately implement corrections. Transfer of sample parts to the lab for density testing can be eliminated, and results for the correction of tool settings are more readily available. This helps to reduce the time required for tool setting and clearly improves the productivity of powder presses. The range of materials where this method can be successfully applied covers almost the entire periodic table. It reaches from light elements such as graphite, via light metals (Al, Mg, Li, Ti) and their alloys, ceramics (Al2O3, SiC, Si3N4, ZrO2, ...), magnetic materials (hard and soft ferrites, AlNiCo, Nd-Fe-B, ...), and metals including iron and alloy steels and Cu, Ni, and Co based alloys, to refractory and heavy metals (W, Mo, ...) as well as hardmetals. The gamma radiation required for the measurement is generated by radioactive sources produced by nuclear technology. These nuclear materials are safely encapsulated in stainless steel capsules so that no radioactive material can escape from the protective shielding container. The gamma ray densitometer is subject to the strict regulations for the use of radioactive materials. The radiation shield is so effective that there is no elevation of the natural radiation level outside the instrument, and personal dosimetry by the operating personnel is not required. Even in case of malfunction, loss of power, or incorrect operation, the escape of gamma radiation from the instrument is positively prevented.

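The density determination described above follows Beer-Lambert attenuation, I = I0 * exp(-mu_m * rho * x), so density can be recovered from the measured intensity once the mass attenuation coefficient and path length are known from calibration. A minimal sketch of that inversion (with invented numerical values):

```python
import math

def density_from_attenuation(i0, i, mu_mass, path_length):
    """Infer material density from gamma attenuation via Beer-Lambert.

    I = I0 * exp(-mu_mass * rho * x)  =>  rho = -ln(I / I0) / (mu_mass * x)
    mu_mass in cm^2/g, path_length in cm; result in g/cm^3.
    """
    return -math.log(i / i0) / (mu_mass * path_length)

# illustrative example: a green compact of assumed density 7.0 g/cm^3
i0 = 1e5                                   # source-side count rate
i = i0 * math.exp(-0.06 * 7.0 * 1.0)       # simulated detector count rate
rho = density_from_attenuation(i0, i, mu_mass=0.06, path_length=1.0)
```

Measuring through different sections of the part with the same aperture gives the sectional densities the text mentions, without cutting the compact.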