• Title/Summary/Keyword: sensing characteristics

Search Results: 1,806

A Study on Expressing 3D Animation by Visual Direction: Focused on 〈How to Train Your Dragon〉 (시각적 연출에 의한 3D 입체 애니메이션 표현 연구: 〈드래곤 길들이기〉를 중심으로)

  • Kim, Jung-Hyun
    • Cartoon and Animation Studies
    • /
    • s.26
    • /
    • pp.1-30
    • /
    • 2012
  • The purpose of animation is to deliver interesting stories to an audience through motion. To achieve this purpose, animation has adopted many technologies over the century since its inception and has thereby developed diverse narrative methods and visual expression techniques. With the advancement of these techniques, the elements that make up animation have gradually been systematized and have come to express worlds beyond reality, so that audiences can now watch everything an animation director imagines on the big screen. These days, more effort is being made so that the audience feels far more than the enjoyment of pictures moving in a frame; in other words, the purpose of animation is shifting from passive viewing to feeling and sensing through the work. At the center of this change is 3D stereoscopic technology, which offers new interest to audiences. Some time ago, a 3D stereoscopic animation film was produced in Korea, but it failed at the box office because it could not satisfy an audience that expected the polish and beauty of international 3D animation films. The failure is attributable to the fact that the domestic 3D animation production industry is still in its early stage and lacks the human resources, technology, and experience needed to produce 3D animation films. Moreover, most domestic studies on 3D focus on the technologies of stereoscopic reproduction, while few studies examine the images that the audience actually faces. Under these circumstances, a study of the stereoscopic imagery of 〈How to Train Your Dragon〉, a 3D stereoscopic animation film released in 2010 and regarded as the most successful of its kind, is worthwhile. This thesis therefore conducted a theoretical review and case analysis focused on the visual direction that creates images with a rich three-dimensional effect, so that the results can serve as reference material for producing high-quality domestic 3D animation and for training professionals. As a result, it was found that 3D stereoscopic animation is not a new field but one that has expanded and changed by applying the characteristics of 3D imagery to the principles of existing media aesthetics. This study may help establish the theoretical foundation needed to produce 3D animation content that realizes a sense of reality.

Overview and Prospective of Satellite Chlorophyll-a Concentration Retrieval Algorithms Suitable for Coastal Turbid Sea Waters (연안 혼탁 해수에 적합한 위성 클로로필-a 농도 산출 알고리즘 개관과 전망)

  • Park, Ji-Eun;Park, Kyung-Ae;Lee, Ji-Hyun
    • Journal of the Korean earth science society
    • /
    • v.42 no.3
    • /
    • pp.247-263
    • /
    • 2021
  • Climate change has been accelerating in coastal waters, and the importance of coastal environmental monitoring is therefore increasing. Chlorophyll-a concentration, an important marine variable, has been retrieved in the surface layer of the global ocean for decades by various ocean color satellites and used in many research fields. However, the commonly used chlorophyll-a algorithms are suitable only for clear water and cannot be applied to turbid waters, because the distinct constituents and optical properties of turbid waters introduce significant errors. In addition, designing a single standard algorithm for coastal waters is difficult because optical characteristics differ from one coastal area to another. To overcome this problem, various algorithms have been developed that account for the constituents and the variation of optical properties in highly turbid coastal waters. Chlorophyll-a retrieval algorithms can be categorized into empirical, semi-analytic, and machine learning algorithms, and their basic form is mainly a blue-green band ratio of the sea-water reflectance spectrum. In contrast, algorithms developed for turbid water use the green-red band ratio, the red-near-infrared band ratio, or inherent optical properties to compensate for the effects of dissolved organic matter and suspended sediments in coastal areas. Reliable retrieval of satellite chlorophyll-a concentration in turbid waters is essential for monitoring the coastal environment and understanding changes in the marine ecosystem. Therefore, this study summarizes the pre-existing algorithms used for monitoring turbid Case 2 waters and presents the problems associated with monitoring and studying the seas around the Korean Peninsula. We also summarize the prospects for future ocean color satellites, which, with the development of multi-spectral and hyperspectral sensors, can yield more accurate and diverse results on the ecological environment.
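
As a rough illustration of the band-ratio forms described above, the sketch below shows an OCx-style blue-green ratio polynomial and a red/near-infrared variant for turbid water. The coefficients and function names are placeholders for illustration only, not the tuned values of any algorithm reviewed in the paper.

```python
import numpy as np

def chl_blue_green(rrs_blue, rrs_green, coeffs=(0.30, -2.40, 1.00, 0.50, -1.20)):
    """OCx-style empirical chlorophyll-a estimate from a blue-green band ratio.

    rrs_blue, rrs_green: remote-sensing reflectances (sr^-1) of the blue and
    green bands. The polynomial coefficients are placeholders, not the tuned
    values of any operational algorithm.
    """
    r = np.log10(np.maximum(rrs_blue, 1e-6) / np.maximum(rrs_green, 1e-6))
    a0, a1, a2, a3, a4 = coeffs
    return 10.0 ** (a0 + a1 * r + a2 * r**2 + a3 * r**3 + a4 * r**4)

def chl_red_nir(rrs_nir, rrs_red, a=60.0, b=-25.0):
    """Turbid-water variant using a red/near-infrared band ratio, which is
    less affected by dissolved organic matter and suspended sediment;
    a and b are hypothetical coefficients fitted to regional matchup data."""
    return a * (np.asarray(rrs_nir) / np.maximum(rrs_red, 1e-6)) + b
```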

Label Embedding for Improving Classification Accuracy Using an Autoencoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been active and has shown remarkable results in fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification, which assigns one label from two classes; multi-class classification, which assigns one label from several classes; and multi-label classification, which assigns multiple labels from several classes. Multi-label classification in particular requires a different training approach because each instance can carry several labels, and as the number of labels and classes grows, the prediction task becomes harder and performance gains become difficult. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress labels by random transformation, they cannot capture non-linear relationships between labels and thus cannot construct a latent label space that sufficiently preserves the information of the original labels. Recently, attempts to improve performance by applying deep learning to label embedding have increased; label embedding with an autoencoder, a deep learning model effective for data compression and reconstruction, is representative. However, traditional autoencoder-based label embedding suffers considerable information loss when compressing a high-dimensional label space with a very large number of classes into a low-dimensional latent label space, which manifests as the vanishing gradient problem during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, they prevent gradients from vanishing during backpropagation and enable efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies that use them in autoencoders or in the label embedding process are still scarce. Therefore, in this study we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder so that the low-dimensional latent label space reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates multi-label classification after restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score of the proposed methodology far exceeded those of traditional multi-label classification methods, indicating that the derived low-dimensional latent label space preserved the information of the high-dimensional label space well and thereby improved multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across different numbers of latent label space dimensions.
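
The core architectural idea, an autoencoder whose encoder and decoder each carry a skip connection, could be sketched roughly as follows in PyTorch. The layer sizes, label dimensionality, and training snippet are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class SkipLabelAutoencoder(nn.Module):
    """Autoencoder for label embedding with one skip connection in the
    encoder and one in the decoder (dimensions are illustrative)."""

    def __init__(self, n_labels=1000, latent_dim=64, hidden_dim=256):
        super().__init__()
        self.enc1 = nn.Linear(n_labels, hidden_dim)
        self.enc2 = nn.Linear(hidden_dim, hidden_dim)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, hidden_dim)
        self.to_labels = nn.Linear(hidden_dim, n_labels)
        self.act = nn.ReLU()

    def encode(self, y):
        h1 = self.act(self.enc1(y))
        h2 = self.act(self.enc2(h1)) + h1   # skip connection in the encoder
        return self.to_latent(h2)

    def decode(self, z):
        h1 = self.act(self.dec1(z))
        h2 = self.act(self.dec2(h1)) + h1   # skip connection in the decoder
        return torch.sigmoid(self.to_labels(h2))

    def forward(self, y):
        return self.decode(self.encode(y))

# Training sketch: reconstruct multi-hot label vectors y with BCE loss;
# a separate text model would later be trained to predict encode(y)
# from the paper abstract.
model = SkipLabelAutoencoder()
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
y = torch.randint(0, 2, (32, 1000)).float()   # dummy multi-hot labels
loss = criterion(model(y), y)
loss.backward()
optimizer.step()
```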

Automatic Target Recognition Study using Knowledge Graph and Deep Learning Models for Text and Image data (지식 그래프와 딥러닝 모델 기반 텍스트와 이미지 데이터를 활용한 자동 표적 인식 방법 연구)

  • Kim, Jongmo;Lee, Jeongbin;Jeon, Hocheol;Sohn, Mye
    • Journal of Internet Computing and Services
    • /
    • v.23 no.5
    • /
    • pp.145-154
    • /
    • 2022
  • Automatic Target Recognition (ATR) technology is emerging as a core technology of Future Combat Systems (FCS). Conventional ATR is performed on IMINT (imagery intelligence) collected from SAR sensors, using various image-based deep learning models. However, although advances in IT and sensing technology have expanded ATR-related data and information to HUMINT (human intelligence) and SIGINT (signals intelligence), ATR still relies only on image-oriented IMINT data. In complex and diversified battlefield situations, it is difficult to guarantee high ATR accuracy and generalization performance with image data alone. Therefore, in this paper we propose a knowledge graph-based ATR method that can use image and text data simultaneously. The main idea is to convert the ATR image and text into graphs according to the characteristics of each data type, align them to a knowledge graph, and thereby connect the heterogeneous ATR data through the knowledge graph. To convert the ATR image into a graph, an object-tag graph whose nodes are object tags is generated from the image using a pre-trained image object recognition model and the vocabulary of the knowledge graph. The ATR text, in turn, is converted into a word graph whose nodes are the key vocabulary for ATR, using a pre-trained language model, TF-IDF, a co-occurrence word graph, and the vocabulary of the knowledge graph. The two generated graphs are connected to the knowledge graph using an entity alignment model to improve ATR performance from images and texts. To demonstrate the superiority of the proposed method, 227 web documents and 61,714 RDF triples from DBpedia were collected, and comparison experiments were performed on precision, recall, and F1-score from the perspective of entity alignment.
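
As a simplified illustration of the text-side graph construction (TF-IDF keyword selection plus word co-occurrence), the sketch below builds a co-occurrence word graph from raw documents. It omits the pre-trained language model and the knowledge-graph vocabulary filtering used in the paper, and the function and parameters here are assumptions for illustration.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

def build_word_graph(documents, top_k=10, window=5):
    """Build a co-occurrence word graph whose nodes are TF-IDF keywords.

    Simplified stand-in for the paper's text pipeline: it uses plain
    whitespace tokenization and no knowledge-graph vocabulary alignment.
    """
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(documents)
    vocab = vectorizer.get_feature_names_out()

    graph = nx.Graph()
    for doc_idx, doc in enumerate(documents):
        # keep this document's top-k TF-IDF terms as candidate nodes
        row = tfidf[doc_idx].toarray().ravel()
        keywords = {vocab[i] for i in row.argsort()[::-1][:top_k] if row[i] > 0}
        tokens = [t for t in doc.lower().split() if t in keywords]
        # connect keywords that co-occur within a sliding window
        for i, w1 in enumerate(tokens):
            for w2 in tokens[i + 1 : i + window]:
                if w1 != w2:
                    graph.add_edge(w1, w2)
    return graph
```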

Estimation of Rice Canopy Height Using Terrestrial Laser Scanner (레이저 스캐너를 이용한 벼 군락 초장 추정)

  • Dongwon Kwon;Wan-Gyu Sang;Sungyul Chang;Woo-jin Im;Hyeok-jin Bak;Ji-hyeon Lee;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.387-397
    • /
    • 2023
  • Plant height is a growth parameter that provides visible insight into a plant's growth status and correlates strongly with yield, so it is widely used in crop breeding and cultivation research. The growth characteristics of crops, such as plant height, have generally been measured manually with a ruler, but with recent developments in sensing and image analysis technology, research is under way to digitize growth measurement so that crop growth can be investigated efficiently. In this study, the canopy height of rice grown at various nitrogen fertilization levels was measured with a terrestrial laser scanner capable of precise measurement over a wide area, and the results were compared with the actual plant height. Comparing the point cloud data collected by the laser scanner with the measured plant height showed that the estimated height based on the average height of the top 1% of points had the highest correlation with the actual plant height (R2 = 0.93, RMSE = 2.73). Based on this, a linear regression equation was derived and used to convert the canopy height measured by the laser scanner into actual plant height. The rice growth curves obtained by combining actual and estimated plant heights across nitrogen fertilization conditions and growth periods show that laser scanner-based canopy height measurement can be used effectively to assess rice plant height and growth. In the future, 3D images derived from laser scanners are expected to be applicable to crop biomass estimation, plant shape analysis, and similar tasks, and can serve as a technology for digitizing conventional crop growth assessment methods.
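
A minimal sketch of the two steps described above, taking the mean of the top 1% of point heights as the canopy height and fitting a linear calibration against ruler measurements, might look like the following. The variable names and dummy usage lines are assumptions, not the study's data.

```python
import numpy as np

def estimate_canopy_height(point_cloud_z, ground_z=0.0, top_fraction=0.01):
    """Estimate canopy height as the mean of the top 1% of point heights.

    point_cloud_z: 1-D array of point elevations from the laser scan;
    ground_z would normally come from a bare-soil scan or terrain model.
    """
    heights = np.asarray(point_cloud_z) - ground_z
    n_top = max(1, int(len(heights) * top_fraction))
    return np.sort(heights)[-n_top:].mean()

def fit_height_calibration(estimated, measured):
    """Fit the linear regression that converts scanner canopy height to
    ruler-measured plant height; slope and intercept are data-dependent."""
    slope, intercept = np.polyfit(estimated, measured, deg=1)
    return slope, intercept

# Usage sketch with hypothetical inputs, not the study's measurements:
# h_est = [estimate_canopy_height(z) for z in plot_point_clouds]
# slope, intercept = fit_height_calibration(h_est, ruler_heights)
# predicted = slope * np.array(h_est) + intercept
```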

Review of applicability of Turbidity-SS relationship in hyperspectral imaging-based turbid water monitoring (초분광영상 기반 탁수 모니터링에서의 탁도-SS 관계식 적용성 검토)

  • Kim, Jongmin;Kim, Gwang Soo;Kwon, Siyoon;Kim, Young Do
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.12
    • /
    • pp.919-928
    • /
    • 2023
  • Rainfall in Korea is concentrated during the summer flood season. In particular, when a large amount of turbid water flows into a dam reservoir because of increasingly concentrated rainfall under abnormal weather conditions, the turbidity persists for long periods owing to reservoir overturn. Much research on turbid water prediction is being conducted to solve these problems. Predicting turbid water requires data on the turbid inflow from upstream, but the spatial and temporal resolution of such data is currently insufficient. Improving temporal resolution requires developing a Turbidity-SS conversion equation, and improving spatial resolution requires multi-parameter water quality instruments (YSI), Laser In-Situ Scattering and Transmissometry (LISST), and hyperspectral sensors. Sensor-based measurement can improve the spatial resolution of turbid water data by collecting observations along lines and over surfaces. In addition, the LISST-200X can collect particle size data, so Turbidity-SS conversion equations can be developed per fraction (clay:silt:sand). Moreover, among recent remote sensing methods, the spatial distribution of turbid water can be mapped using UAVs, which offer higher spatial and temporal resolution than other platforms, together with hyperspectral sensors of high spectral and radiometric resolution. Therefore, in this study, Turbidity-SS conversion equations were calculated for each fraction through laboratory analysis using the LISST-200X and YSI-EXO, and sensor-based field measurements were made with a UAV (Matrice 600) and a hyperspectral sensor (microHSI 410 SHARK). From these, we present the spatial distributions of turbidity and suspended sediment concentration, as well as the turbidity calculated from the measured suspended sediment concentration using the Turbidity-SS conversion equation, and thereby review the applicability of the Turbidity-SS conversion equation and assess the current status of turbid water occurrence.
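
As a simple illustration of how a Turbidity-SS conversion might be fitted and applied (a bulk linear fit only; the paper develops fraction-specific relations from LISST-200X data, and the function names here are assumptions):

```python
import numpy as np

def fit_turbidity_ss(turbidity_ntu, ss_mg_l):
    """Fit a simple linear Turbidity-SS conversion, SS = a * turbidity + b,
    from paired laboratory measurements. A fraction-specific workflow would
    repeat this fit separately for clay, silt, and sand."""
    a, b = np.polyfit(np.asarray(turbidity_ntu), np.asarray(ss_mg_l), deg=1)
    return a, b

def ss_from_turbidity(turbidity_ntu, a, b):
    # Apply the fitted conversion to a turbidity field (e.g., one retrieved
    # from UAV hyperspectral imagery) to obtain an SS concentration field.
    return a * np.asarray(turbidity_ntu) + b
```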