• Title/Summary/Keyword: Parametric Information

Search results: 740

Analysis of soil coarse pore fraction by major factors for evaluation of water conservation function potential in forest soil (산림토양의 수원함양기능 잠재력 평가를 위한 주요 인자별 토양 조공극률 분석)

  • Li, Qiwen; Lim, Hong-Geun; Moon, Hae-Won; Nam, Soo-Youn; Kim, Jae-Hoon; Choi, Hyung-Tae
    • Journal of the Korean Society of Environmental Restoration Technology, v.25 no.6, pp.35-50, 2022
  • As water shortage has become a noticeable issue due to climate change, forests play an important role as providers of water supply services. There is, however, little information about the relationships between the factors used in the estimation of the water supply service and the coarse pore fraction of forest soil, which determines the potential of water supply. To find out whether the scoring system for water supply service estimation could be improved, we examined all factors except the meteorological one and additionally analyzed 4 extra factors that might be related to the coarse pore fraction of soil. A total of 2,214 soil samples were collected throughout South Korea from 2015 to 2020 to measure coarse pore fractions. First, the average coarse pore fraction of all samples was 32.98±6.59%, which was consistent with previous studies. The results of the non-parametric analysis of variance indicated that only two of the eleven factors used in the scoring system matched the coarse pore fraction results of forest soils. Tree canopy coverage showed no difference among categories, and slope also showed no significance at the 0.05 level in the linear regression analysis. Additionally, the applicability of the 4 extra factors was confirmed, as the coarse pore fractions of the soil samples differed across the categories of each factor. Therefore, the scoring system of the forest water supply service should be revised to improve accuracy.
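
A minimal sketch of the kind of non-parametric comparison described above: a Kruskal-Wallis test of coarse pore fraction across the categories of one candidate factor. The column names ("coarse_pore_pct", "parent_rock") and the CSV file are hypothetical stand-ins, since the paper does not publish its data schema.

```python
# Hedged sketch: Kruskal-Wallis test of coarse pore fraction across the
# categories of one factor. Column names and file name are hypothetical.
import pandas as pd
from scipy import stats

def test_factor(df: pd.DataFrame, factor: str, response: str = "coarse_pore_pct"):
    """Return the Kruskal-Wallis H statistic and p-value for one categorical factor."""
    groups = [g[response].dropna().values for _, g in df.groupby(factor)]
    h_stat, p_value = stats.kruskal(*groups)
    return h_stat, p_value

# Example usage with a hypothetical survey table:
# df = pd.read_csv("forest_soil_samples.csv")
# h, p = test_factor(df, factor="parent_rock")
# print(f"H={h:.2f}, p={p:.4f} -> significant at 0.05: {p < 0.05}")
```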

AI-Based Object Recognition Research for Augmented Reality Character Implementation (증강현실 캐릭터 구현을 위한 AI기반 객체인식 연구)

  • Seok-Hwan Lee; Jung-Keum Lee; Hyun Sim
    • The Journal of the Korea institute of electronic communication sciences, v.18 no.6, pp.1321-1330, 2023
  • This study addresses the problem of 3D pose estimation for multiple human objects from a single image, generated during a character development process that can be used in augmented reality. In the existing top-down method, all objects in the image are first detected and then each is reconstructed independently. The problem is that inconsistent results may occur due to overlap or depth-order mismatch between the reconstructed objects. The goal of this study is to solve these problems and develop a single network that provides consistent 3D reconstruction of all humans in a scene. Integrating a human body model based on the SMPL parametric system into a top-down framework was a key design choice. Through this, two losses were introduced: a collision loss based on a distance field and a loss that considers depth order. The first loss prevents overlap between reconstructed people, and the second adjusts the depth ordering of people so that rendered occlusions are consistent with the annotated instance segmentation. This method allows depth information to be provided to the network without explicit 3D annotation of the image. Experimental results show that this study's methodology performs better than existing methods on standard 3D pose benchmarks, and the proposed losses enable more consistent reconstruction from natural images.
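
A simplified illustration of an inter-person collision penalty in this spirit: each person is approximated by spheres around their 3D joints and sphere interpenetration between different people is penalised. The paper's actual loss operates on a distance field over the SMPL surface, so this is only a toy stand-in; the sphere radius is a hypothetical value.

```python
# Hedged sketch of a simplified inter-person collision penalty (not the paper's
# distance-field loss). Joints of different people whose spheres overlap add to
# the penalty; non-overlapping configurations contribute zero.
import torch

def collision_penalty(joints: torch.Tensor, radius: float = 0.08) -> torch.Tensor:
    """
    joints: (P, J, 3) tensor of 3D joint positions for P people with J joints each.
    radius: assumed per-joint sphere radius in metres (hypothetical).
    """
    P, J, _ = joints.shape
    penalty = joints.new_zeros(())
    for a in range(P):
        for b in range(a + 1, P):
            d = torch.cdist(joints[a], joints[b])           # (J, J) pairwise distances
            overlap = torch.clamp(2 * radius - d, min=0.0)  # positive where spheres intersect
            penalty = penalty + overlap.sum()
    return penalty
```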

Experimental and numerical study on the structural behavior of Multi-Cell Beams reinforced with metallic and non-metallic materials

  • Yousry B.I. Shaheen; Ghada M. Hekal; Ahmed K. Fadel; Ashraf M. Mahmoud
    • Structural Engineering and Mechanics, v.90 no.6, pp.611-633, 2024
  • This study investigates the response of multi-cell (MC) beams to flexural loads in which the primary reinforcement is composed of both metallic and non-metallic materials. "Multi-cell" describes beam sections with multiple longitudinal voids separated by thin webs. Seven reinforced concrete MC beams measuring 300×200×1800 mm were tested under flexural loading until failure. Two series of beams were formed, depending on the type of main reinforcement used; a control RC beam with no openings and six MC beams make up these two series. Series one and two are reinforced with metallic and non-metallic main reinforcement, respectively, in order to maintain a constant reinforcement ratio. The first crack, ultimate load, deflection, ductility index, energy absorption, strain characteristics, crack pattern, and failure mode were among the structural parameters documented for the beams under investigation. The primary variables are the kind of reinforcing material used and the kind and quantity of mesh layers. This article presents the outcomes of the experimental and numerical study of the performance of ferrocement reinforced concrete MC beams. Nonlinear finite element analysis (NLFEA) was performed with ANSYS-16.0 software to model the behavior of the composite MC beams with holes. A parametric study was also carried out to investigate the factors, such as opening size, that most strongly affect the mechanical behavior of the suggested model. The experimental and numerical results demonstrate that the FE simulations estimated the experimental values to an acceptable degree. It is also noteworthy that, compared to the control beam, the MC beam reinforced with geogrid mesh (MCGB) loses up to 73.33% of its strength capacity, whereas the minimum strength reduction of 16.71% is observed in the MC beams reinforced with carbon reinforcing bars (MCCR). The experiments on MC beams with openings demonstrate that the presence of openings has a significant impact on beam behavior, as both the ultimate load and the maximum deflection decrease.
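
The strength-reduction percentages quoted for MCGB and MCCR presumably follow the usual definition relative to the control beam's ultimate load; stated explicitly (the symbols below are introduced here for illustration and do not appear in the abstract):

```latex
% Strength reduction of an MC beam relative to the control beam,
% as a percentage of the control beam's ultimate load.
\Delta P\,(\%) = \frac{P_{u,\mathrm{control}} - P_{u,\mathrm{MC}}}{P_{u,\mathrm{control}}} \times 100
```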

Probabilistic Anatomical Labeling of Brain Structures Using Statistical Probabilistic Anatomical Maps (확률 뇌 지도를 이용한 뇌 영역의 위치 정보 추출)

  • Kim, Jin-Su; Lee, Dong-Soo; Lee, Byung-Il; Lee, Jae-Sung; Shin, Hee-Won; Chung, June-Key; Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine, v.36 no.6, pp.317-324, 2002
  • Purpose: The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers look up the Talairach atlas to report the localization of activations detected in the SPM program, there is a significant disparity between the MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time-consuming, subjective, and inaccurate. The purpose of this study was to develop a program to provide objective anatomical information for each x-y-z position in the ICBM coordinate system. Materials and Methods: The program was designed to provide anatomical information for a given x-y-z position in MNI coordinates based on the Statistical Probabilistic Anatomical Map (SPAM) images of ICBM. When an x-y-z position is given to the program, the names of the anatomical structures with non-zero probability and the probabilities that the given position belongs to those structures are tabulated. The program was coded using the IDL and JAVA languages for easy transplantation to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Results: Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the programs. The validation study showed the relevance of this program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. Conclusion: These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
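
A minimal sketch of the underlying lookup: given an MNI (x, y, z) position in millimetres, read the probability stored at that location in a probabilistic anatomical volume. The file name "spam_hippocampus.nii" is a hypothetical stand-in for the ICBM SPAM images, and the paper's own tool was implemented in IDL/Java rather than Python.

```python
# Hedged sketch: probability lookup at an MNI coordinate in a probabilistic map.
import numpy as np
import nibabel as nib

def probability_at_mni(map_path: str, xyz_mm) -> float:
    img = nib.load(map_path)
    # Convert world (MNI mm) coordinates to voxel indices via the inverse affine.
    vox = np.linalg.inv(img.affine) @ np.append(xyz_mm, 1.0)
    i, j, k = np.round(vox[:3]).astype(int)
    return float(img.get_fdata()[i, j, k])

# Example: probability that MNI (-24, -14, -20) lies in a hippocampal map.
# p = probability_at_mni("spam_hippocampus.nii", (-24, -14, -20))
# print(f"P(hippocampal formation) = {p:.2f}")
```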

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions: Part B, v.14B no.4, pp.311-320, 2007
  • This paper presents a vision-based 3D facial expression animation technique and system which provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that includes 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, with a non-parametric HT skin color model and template matching, the facial region is detected efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters which describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed by use of Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
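
A small sketch of the RBF step described above: control-point displacements are propagated to surrounding non-feature vertices with a radial basis function interpolant. SciPy's RBFInterpolator is used as a stand-in for the paper's own implementation, and the arrays are hypothetical.

```python
# Hedged sketch: RBF-based propagation of control-point displacements to mesh vertices.
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_mesh(control_pts, control_disp, vertices, kernel="thin_plate_spline"):
    """
    control_pts:  (C, 3) rest positions of facial control points
    control_disp: (C, 3) displacements driven by the animation parameters
    vertices:     (V, 3) non-feature mesh vertices to deform
    Returns the deformed (V, 3) vertex positions.
    """
    rbf = RBFInterpolator(control_pts, control_disp, kernel=kernel)
    return vertices + rbf(vertices)

# Example with random stand-in data:
# ctrl = np.random.rand(20, 3); disp = 0.01 * np.random.randn(20, 3)
# verts = np.random.rand(500, 3)
# new_verts = deform_mesh(ctrl, disp, verts)
```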

A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung; Kim, Dong-Gyou; Yim, Min-Jin; Lee, Kyu-Beom; Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association, v.19 no.1, pp.95-107, 2017
  • In this study, a preliminary study was undertaken for the development of a tunnel incident automatic detection system based on a machine learning algorithm, which is to detect a number of incidents taking place in a tunnel in real time and also to identify the type of incident. Two road sites where CCTVs are operating were selected, and part of the CCTV footage was processed to produce sets of training data. The data sets are composed of the position and time information of moving objects on the CCTV screen, which are extracted by initially detecting and tracking objects entering the CCTV screen using a conventional image processing technique available in this study. The data sets are matched with 6 categories of events, such as lane change, stopping, etc., which are also included in the training data sets. The training data are learned by a resilient neural network with two hidden layers, and 9 architectural models are set up for parametric studies, from which the 300 (first hidden layer)-150 (second hidden layer) architecture is found to be optimal, giving the highest accuracy on the training data as well as on testing data not used for training. This study showed that highly variable and complex traffic and incident features could be identified well without defining any explicit feature rules, by using the concept of machine learning. In addition, the detection capability and accuracy of the machine learning based system will automatically improve as the volume of tunnel CCTV image data grows.
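
A minimal sketch of a two-hidden-layer network of the 300-150 shape reported above, classifying object trajectories into 6 event categories. scikit-learn's MLPClassifier is used as a generic stand-in for the paper's resilient neural network, and the feature and label arrays are hypothetical placeholders.

```python
# Hedged sketch: 300-150 two-hidden-layer classifier over trajectory features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: per-object trajectory features (e.g. positions over time), y: event label 0..5
X = np.random.rand(1000, 40)            # hypothetical feature vectors
y = np.random.randint(0, 6, size=1000)  # hypothetical event categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(300, 150), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```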

Analysis of 99mTc-ECD Brain SPECT Images in Boys and Girls with ADHD using Statistical Parametric Mapping (SPM) (통계적 파라미터지도 작성법(SPM)을 이용한 남여별 ADHD환자의 뇌 SPECT 영상비교분석)

  • Park, Soung-Ock; Kwon, Soo-Il
    • Journal of radiological science and technology, v.27 no.3, pp.31-41, 2004
  • Attention deficit hyperactivity disorder (ADHD) is one of the most common psychiatric disorders in childhood, especially in school-age children, and it persists into adulthood. ADHD affects 7.6% of children in Korea and persists into adulthood in 15~20% of cases. It is characterized by hyperactivity, inattention and impulsivity. Brain imaging is one way to aid the diagnosis of ADHD. Brain imaging studies may provide two types of information: structural and functional. Structural and functional images of the brain play an important role in the management of neurologic and psychiatric disorders. Brain SPECT with perfusion imaging radiopharmaceuticals is one of the appropriate tests for diagnosing neurologic and psychiatric diseases. There are few studies that analyze boys' and girls' ADHD brain SPECT images separately. The selection of the probability level (p-value) is very important for determining abnormalities when analyzing data with SPM. SPM is a statistical method used for image analysis and for determining statistical differences between two groups, here normal and ADHD. The commonly used p-value in statistical analysis is p<0.05. The purpose of this study is to evaluate the distribution of blood flow clusters between boys and girls with ADHD. The subjects were 8 normal boys (6-7 y, average 9.6±3.9 y) and 51 ADHD boys (4-11 y, average 9.0±2.4 y), and 4 normal girls (6-12 y, average 9±2.4 y) and 13 ADHD girls (2-13 y, average 10±3.5 y). The blood flow tracer 99mTc-ethylcysteinate dimer (ECD) was injected as the rCBF agent, and blood flow images were acquired with a SPECT camera 30 min later during sleep. The anatomical region of rCBF hyperperfusion in the boys' ADHD group was the posterior cingulate gyrus, with a hyperperfusion rate of 15.39-15.77% depending on the p-value. In the girls' ADHD group, hyperperfusion appeared in the posterior cerebellum, Lt. cerebral limbic lobe and Lt./Rt. cerebral temporal lobes, with hyperperfusion rates of 24.68-31.25%. Hypoperfusion areas in the boys' ADHD brains were the Lt. cerebral insular gyrus, Lt./Rt. frontal lobes and mid-prefrontal lobe, with decreased blood flow of 15.21-15.64%. In girls with ADHD, the regions of decreased blood flow were the Lt. cerebral insular gyrus, Lt. cerebral frontal and temporal lobes, Lt./Rt. lentiform nucleus and Lt. parietal lobe, with a hypoperfusion rate of 30.57-30.85%. The girls' ADHD group's perfusion rate was more variable than the boys'. Studies of rCBF in ADHD should analyze boys and girls separately.
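
A toy illustration of the group comparison at the heart of an SPM analysis: a voxel-wise two-sample t-test between a control group and a patient group, thresholded at an uncorrected p<0.05. A real SPM pipeline also involves spatial normalisation, smoothing, a GLM and multiple-comparison handling; the arrays here are random placeholders.

```python
# Hedged sketch: voxel-wise two-sample t-test as a simplified stand-in for SPM.
import numpy as np
from scipy import stats

def voxelwise_ttest(group_a: np.ndarray, group_b: np.ndarray, alpha: float = 0.05):
    """
    group_a: (Na, X, Y, Z) perfusion images of group A
    group_b: (Nb, X, Y, Z) perfusion images of group B
    Returns a boolean map of voxels with uncorrected p < alpha.
    """
    t_map, p_map = stats.ttest_ind(group_a, group_b, axis=0)
    return p_map < alpha

# Example with placeholder volumes (8 controls vs 51 patients, 32^3 voxels):
# controls = np.random.rand(8, 32, 32, 32); patients = np.random.rand(51, 32, 32, 32)
# sig = voxelwise_ttest(controls, patients)
# print("significant voxels:", int(sig.sum()))
```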

Facial Expression Control of 3D Avatar using Motion Data (모션 데이터를 이용한 3차원 아바타 얼굴 표정 제어)

  • Kim Sung-Ho; Jung Moon-Ryul
    • The KIPS Transactions: Part A, v.11A no.5, pp.383-390, 2004
  • This paper proposes a method that controls the facial expression of a 3D avatar by having the user select a sequence of facial expressions in a facial expression space, and describes the system built around it. The expression space is created from about 2,400 frames of motion-captured facial expression data. To represent the state of each expression, we use the distance matrix that represents the distances between pairs of feature points on the face. The set of distance matrices is used as the space of expressions. However, this space is not one in which one state can move to another along the straight trajectory between them. We therefore derive trajectories between two states from the captured set of expressions in an approximate manner. First, two states are regarded as adjacent if the distance between their distance matrices is below a given threshold. Any two states are considered to have a trajectory between them if there is a sequence of adjacent states connecting them. It is assumed that one state goes to another via the shortest trajectory between them, and the shortest trajectories are found by dynamic programming. The space of facial expressions, as the set of distance matrices, is multidimensional. The facial expression of the 3D avatar is controlled in real time as the user navigates this space. To help this process, we visualize the expression space in 2D by using multidimensional scaling (MDS). To see how effective the system is, we had users control the facial expressions of a 3D avatar with it; the users judged the system to be very useful for controlling facial expressions of a 3D avatar in real time.
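
A compact sketch of the expression-space construction described above: states are (flattened) distance matrices, adjacency is a thresholded distance between states, shortest trajectories come from a shortest-path computation over that graph, and MDS gives a 2D view for navigation. The data, threshold choice, and Dijkstra routine are stand-ins for the paper's dynamic programming formulation.

```python
# Hedged sketch: adjacency graph over expression states, shortest trajectories,
# and a 2D MDS embedding for user navigation. All data are placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

# states: (N, D) where each row is a flattened feature-point distance matrix
states = np.random.rand(200, 45)            # placeholder for ~2,400 captured frames

D = squareform(pdist(states))               # distances between expression states
threshold = np.percentile(D[D > 0], 5)      # placeholder adjacency threshold
graph = np.where(D < threshold, D, 0.0)     # keep only "adjacent" edges (0 = no edge)

# Shortest trajectories between all pairs of states (Dijkstra over the adjacency graph).
traj_len, predecessors = shortest_path(graph, method="D", return_predecessors=True)

# 2D visualisation of the expression space.
embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
```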

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.135-149, 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model and the risk parity model. The risk parity model is a typical risk-based asset allocation model which focuses on the volatility of assets and avoids investment risk structurally, so it is stable in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can handle billions of examples in limited-memory environments and is also very fast to train compared to traditional boosting methods; it is frequently used in many fields of data analysis and has many advantages. In this study, we therefore propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. There are estimation errors between the estimation period and the actual investment period because an optimized asset allocation model estimates the investment proportions from historical data, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care and staple sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of the cumulative rate of return and obtained a large sample because of the long test period. Compared with the traditional risk parity model, this experiment recorded improvements in both cumulative yield and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment showed an improvement of portfolio performance by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will continue into the future in a changing financial market. However, this study not only takes advantage of traditional asset allocation models, but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. There are various studies on parametric estimation methods to reduce the estimation errors in portfolio optimization; here we suggest an additional method to reduce estimation errors in an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
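
A minimal sketch of the idea: predict each asset's next-period volatility with XGBoost, then form risk-parity-style weights from the predicted risks. Inverse-volatility weighting is used here as a simple proxy for a full risk parity optimisation, and the data and feature choices are hypothetical rather than the paper's.

```python
# Hedged sketch: XGBoost volatility prediction feeding a risk-parity-style weighting.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def predict_next_vol(features: pd.DataFrame, realised_vol: pd.Series) -> float:
    """Train on past (features -> realised volatility) pairs, predict the next value."""
    model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(features.iloc[:-1], realised_vol.iloc[:-1])
    return float(model.predict(features.iloc[[-1]])[0])

def inverse_vol_weights(predicted_vols: np.ndarray) -> np.ndarray:
    """Simple risk-parity proxy: weight each asset inversely to its predicted volatility."""
    inv = 1.0 / predicted_vols
    return inv / inv.sum()

# Example: three sectors with hypothetical predicted volatilities.
# w = inverse_vol_weights(np.array([0.12, 0.20, 0.30]))
# print(w)  # higher weight on the lower-volatility sector
```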

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin; Lee, Jonghoon; Han, Sangjin; Park, Choong-Shik
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.57-73, 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time-series data, and when dealing with such data it is difficult to consider both its multidimensional characteristics and its time-series characteristics at once. When dealing with multidimensional data, the correlation between variables should be considered; existing methods, such as probability- or linear-model-based and distance-based approaches, degrade due to the limitation known as the curse of dimensionality. In addition, time-series data is usually preprocessed by applying sliding windows and time-series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when the data is non-homogeneous and do not detect local outliers well. The regression analysis method learns a regression formula based on parametric statistics, compares predicted and actual values, and detects abnormalities. Anomaly detection using regression analysis has the disadvantage that performance drops when the model is not solid or when the data contains noise or outliers, and it places restrictions on using training data that contains noise or outliers. The autoencoder, based on artificial neural networks, is trained to produce output as similar as possible to its input data. It has many advantages compared to existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or linearity assumption, and it can learn without labelled training data. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the data is greatly increased by the characteristics of time-series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time-series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and learn their correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time-series data effectively without increasing the dimensionality of the data. In general, the conditional input is a categorical variable, but in this study time was used as the condition to learn periodicity. The CMAE model proposed in this paper was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance of the autoencoders for 41 variables was measured for the proposed model and the comparison models. The restoration performance differs by variable; restoration works well for the Memory, Disk, and Network modalities, whose loss values are small in all three autoencoder models. The Process modality did not show a significant difference across the three models, and the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed model and the comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. In all indicators, performance decreased in the order of CMAE, MAE, and UAE. In particular, the recall was 0.9828 for CMAE, which confirms that it detects almost all of the abnormalities. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time-series decomposition and sliding windows require managing extra procedures, and the resulting dimensional increase can slow down inference. The proposed model is easy to apply to practical tasks in terms of inference speed and model management.
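
A minimal sketch of a conditional multimodal autoencoder in the spirit of the CMAE described above: each modality (e.g. CPU, memory, disk, network metrics) gets its own encoder/decoder, all modalities share one bottleneck, and a time condition (a sinusoidal hour-of-day encoding) is concatenated at the bottleneck and decoders. Layer sizes, modality dimensions, and the time encoding are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a conditional multimodal autoencoder (CMAE-style), in PyTorch.
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, latent_dim=16, cond_dim=2):
        super().__init__()
        # One encoder per modality.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, latent_dim)) for d in modal_dims]
        )
        # Shared bottleneck fuses all modalities plus the time condition.
        self.bottleneck = nn.Sequential(
            nn.Linear(latent_dim * len(modal_dims) + cond_dim, latent_dim), nn.ReLU()
        )
        # One decoder per modality, also conditioned on time.
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent_dim + cond_dim, 32), nn.ReLU(), nn.Linear(32, d)) for d in modal_dims]
        )

    def forward(self, modal_inputs, condition):
        encoded = [enc(x) for enc, x in zip(self.encoders, modal_inputs)]
        z = self.bottleneck(torch.cat(encoded + [condition], dim=1))
        return [dec(torch.cat([z, condition], dim=1)) for dec in self.decoders]

# Example: four modalities, a sinusoidal hour-of-day condition, reconstruction loss.
model = CMAE(modal_dims=[8, 6, 5, 7])
xs = [torch.randn(64, d) for d in [8, 6, 5, 7]]
hour = torch.rand(64, 1) * 24
cond = torch.cat([torch.sin(2 * torch.pi * hour / 24), torch.cos(2 * torch.pi * hour / 24)], dim=1)
recons = model(xs, cond)
loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recons, xs))
# At test time, the per-sample reconstruction error serves as the anomaly score.
```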