• Title/Summary/Keyword: 영상정보추출 (image information extraction)


Corneal Ulcer Region Detection With Semantic Segmentation Using Deep Learning

  • Im, Jinhyuk;Kim, Daewon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.9
    • /
    • pp.1-12
    • /
    • 2022
  • Traditional methods of assessing corneal ulcers rely on the subjective judgment of medical staff examining photographs taken with special equipment, making it difficult to present an objective basis for diagnosis. In this paper, we propose a method to detect the ulcer area at the pixel level in corneal ulcer images using a semantic segmentation model. We performed experiments based on the DeepLab model, which shows the highest performance among semantic segmentation models. For the experiments, training and test data were selected, and DeepLab models with Xception and ResNet backbone networks were evaluated and their performances compared. The Dice similarity coefficient and IoU were used as evaluation metrics. Experimental results show that when 'crop & resized' images are added to the dataset, the DeepLab model with a ResNet101 backbone segments the ulcer area with an average Dice similarity coefficient of about 93%. This study shows that semantic segmentation models can also produce meaningful results when delineating objects with irregular shapes such as corneal ulcers. In future studies, we will extend the datasets and experiment with adaptive learning methods so that the approach can be deployed in real medical diagnosis environments.
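
The Dice similarity coefficient and IoU mentioned above are standard segmentation metrics; a minimal sketch (not the authors' code) of computing both for binary masks might look like this:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice similarity coefficient and IoU for binary segmentation masks.

    pred, target: {0,1} or boolean arrays of the same shape.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```

For experimentation, a DeepLabV3 model with a ResNet101 backbone is available off the shelf, for instance via torchvision's `deeplabv3_resnet101`, although the exact training setup used in the paper is not specified here.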

Real-Time 3D Volume Deformation and Visualization by Integrating NeRF, PBD, and Parallel Resampling (NeRF, PBD 및 병렬 리샘플링을 결합한 실시간 3D 볼륨 변형체 시각화)

  • Sangmin Kwon;Sojin Jeon;Juni Park;Dasol Kim;Heewon Kye
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.189-198
    • /
    • 2024
  • Research combining deep-learning-based models and physical simulation is making important advances in the medical field. Such approaches extract the necessary information from medical image data and enable fast, accurate prediction of the deformation of the skeleton and soft tissue based on physical laws. This study proposes a system that integrates Neural Radiance Fields (NeRF), Position-Based Dynamics (PBD), and parallel resampling to generate 3D volume data and to deform and visualize them in real time. NeRF uses 2D images and camera coordinates to produce high-resolution 3D volume data, while PBD enables real-time deformation and interaction through physics-based simulation. Parallel resampling improves rendering efficiency by dividing the volume into tetrahedral meshes and utilizing GPU parallel processing. The system renders the deformed volume data using ray casting, leveraging GPU parallel processing for fast real-time visualization. Experimental results show that the system can generate and deform 3D data without expensive equipment, demonstrating potential applications in engineering, education, and medicine.
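
As background for the PBD component, the following is a minimal sketch of the core PBD loop (predict positions, then iteratively project distance constraints); it is a generic textbook formulation on a CPU, not the authors' GPU implementation, and the time step, iteration count, and mesh representation are assumptions:

```python
import numpy as np

def pbd_step(pos, prev_pos, edges, rest_len, inv_mass,
             dt=1/60, gravity=np.array([0.0, -9.81, 0.0]), iters=10):
    """One Position-Based Dynamics step over a set of distance constraints.

    pos, prev_pos: (N, 3) current and previous vertex positions.
    edges: list of (i, j) vertex index pairs; rest_len: their rest lengths.
    inv_mass: (N,) inverse masses (0 for fixed vertices).
    """
    vel = (pos - prev_pos) / dt + gravity * dt   # explicit velocity update
    pred = pos + vel * dt                        # predicted positions
    for _ in range(iters):                       # constraint projection
        for (i, j), l0 in zip(edges, rest_len):
            d = pred[i] - pred[j]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w == 0.0:
                continue
            corr = (dist - l0) / (dist * w) * d
            pred[i] -= inv_mass[i] * corr
            pred[j] += inv_mass[j] * corr
    return pred, pos  # new positions, new "previous" positions
```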

Mapping and estimating forest carbon absorption using time-series MODIS imagery in South Korea (시계열 MODIS 영상자료를 이용한 산림의 연간 탄소 흡수량 지도 작성)

  • Cha, Su-Young;Pi, Ung-Hwan;Park, Chong-Hwa
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.5
    • /
    • pp.517-525
    • /
    • 2013
  • Time-series data of the Normalized Difference Vegetation Index (NDVI) obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery give a waveform that reveals the characteristics of the phenology. The waveform can be decomposed by the Fourier transform into harmonics of various periods; the resulting n-th harmonic represents the amount of NDVI change with a period of one year divided by n. The values of each harmonic, or their relative relations, have been used to classify vegetation species and to build vegetation maps. Here, we propose a method to estimate the annual amount of carbon absorbed by the forest from the 1st-harmonic NDVI value. The 1st harmonic represents the amount of leaf growth, and by the allometric equation of trees, leaf growth can be considered proportional to the total amount of carbon absorption. We compared the 1st-harmonic NDVI values of 6,220 sample points with reference carbon-absorption data obtained by field survey in the forests of South Korea. The 1st-harmonic values were roughly proportional to the amount of carbon absorption irrespective of the species and ages of the vegetation, and the resulting proportionality constant between carbon absorption and the 1st-harmonic value was 236 tCO2/5.29ha/year. The total amount of carbon dioxide absorbed by the forests of South Korea over the last ten years was estimated to be about 56 million tons, which coincides with previous reports obtained by other methods. Considering that the amount of carbon absorption is becoming a kind of currency, like carbon credits, our method is very useful due to its generality.
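
A minimal sketch of extracting the 1st-harmonic amplitude from an annual NDVI series via the Fourier transform, assuming the series is evenly sampled over one year (e.g., 23 values for 16-day MODIS composites); this is illustrative and not the authors' processing chain:

```python
import numpy as np

def first_harmonic_amplitude(ndvi_series):
    """Amplitude of the 1st harmonic (period = one year) of an annual NDVI series."""
    x = np.asarray(ndvi_series, dtype=float)
    coeffs = np.fft.rfft(x - x.mean())
    # Index 1 corresponds to one full cycle over the series (one year);
    # amplitude of a real signal's k-th harmonic is 2|c_k| / N.
    return 2.0 * np.abs(coeffs[1]) / len(x)
```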

A Design and Implementation of Multimedia Retrieval System based on MAF(Multimedia Application File Format) (MAF(Multimedia Application File Format) 기반 멀티미디어 검색 시스템의 설계 및 구현)

  • Gang Young-Mo;Park Joo-Hyoun;Bang Hyung-Gin;Nang Jong-Ho;Kim Hyung-Chul
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.9
    • /
    • pp.574-584
    • /
    • 2006
  • Recently, ISO/IEC 23000 (also known as 'MPEG-A') has proposed a new file format called 'MAF (Multimedia Application File Format)'[1], which provides the capability of integrating and storing widely used audio and video compression standards together with metadata in MPEG-7 form in a single file. However, it is still very hard to verify the usefulness of MPEG-A in real applications because no real system fully implements the standard. In this paper, a design and implementation of a multimedia retrieval system based on the MPEG-A standard on PCs and mobile devices is presented. Furthermore, an extension of MPEG-A for describing video metadata is proposed; it is selected and defined as a subset of MPEG-7 MDS[4] and TV-Anytime[5] for video that is useful and manageable in mobile environments. To design the multimedia retrieval system based on MPEG-A, we define the system requirements in terms of portability, extensibility, compatibility, adaptability, and efficiency. Based on these requirements, we design a system composed of three layers: an Application Layer, a Middleware Layer, and a Platform Layer. The proposed system consists of two sub-parts, a client part and a server part. The client part consists of a MAF authoring tool, a MAF player tool, and a MAF searching tool, which allow users to create, play, and search MAF files, respectively. The server part is composed of modules that store and manage MAF files and the metadata extracted from them. We show the usefulness of the proposed system by implementing the client system both on the MS-Windows platform on a desktop computer and on the WIPI platform on a mobile phone, and by validating that it satisfies all the system requirements. The proposed system can be used to verify the MPEG-A specification and proves the usefulness of MPEG-A in real applications.
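
Purely as an illustration of the server-part role described above (storing MAF files and searching their extracted metadata), the sketch below uses hypothetical field names, not the actual MPEG-7 MDS or TV-Anytime descriptors or the paper's data model:

```python
from dataclasses import dataclass, field

@dataclass
class VideoMetadata:
    # Hypothetical subset of descriptive fields; real MAF files carry MPEG-7 XML.
    title: str
    creators: list[str] = field(default_factory=list)
    keywords: list[str] = field(default_factory=list)
    duration_sec: float = 0.0

@dataclass
class MafRecord:
    file_path: str            # location of the stored MAF file
    metadata: VideoMetadata   # metadata extracted from the file

def search(records: list[MafRecord], query: str) -> list[MafRecord]:
    """Naive keyword search over extracted metadata (server-part sketch)."""
    q = query.lower()
    return [r for r in records
            if q in r.metadata.title.lower()
            or any(q in k.lower() for k in r.metadata.keywords)]
```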

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that create value from information is growing day by day. With the development of IT technology it has become easy to collect and use information, and many companies actively use customer information for marketing in a variety of industries. Into the 21st century, companies have been actively using culture and the arts to manage their corporate image and for marketing closely linked to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so cultural activities have become a common tool of differentiation among firms, and many firms have used the customer's experience as a new marketing strategy in order to respond effectively to a competitive market. Accordingly, the need for personalized services that provide a new experience based on personal profile information reflecting the characteristics of the individual is emerging rapidly. Personalized service using a customer's individual profile information such as language, symbols, behavior, and emotions is therefore very important today; through it, we can assess the interaction between people and content and maximize customer experience and satisfaction. Various related works provide customer-centered services, and emotion recognition research in particular has emerged recently. Existing research has mostly recognized emotion using bio-signals, and most studies focus on voice and facial expressions, which show large emotional changes. However, there are several difficulties in predicting people's emotions caused by limitations of equipment and service environments. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and posture has been studied by several researchers. This paper develops a model that recognizes people's emotional states through body gesture and posture using the difference image method, and finds an optimized, validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in the KOCCA lobby and suitable stimulative movies were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference image method, and the data were refined to build the proposed model with a neural network. The proposed emotion prediction model used three time-frame sets (20 frames, 30 frames, and 40 frames), and the model with the best performance was adopted. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The proposed emotion prediction model was constructed as an artificial neural network trained with the back-propagation algorithm, with the learning rate set to 10% and the momentum rate to 10%. The sigmoid function was used as the transfer function, and the network was designed as a three-layer perceptron with one hidden layer and four output nodes. Based on the test data set, learning was stopped at 50,000 iterations after reaching the minimum error, in order to explore the stopping point of learning.
We finally evaluated each model's accuracy and selected the best model for predicting each emotion. The results showed a prediction accuracy of 100% for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. The findings of this research are expected to provide an effective algorithm for personalized services in various industries such as advertisement, exhibition, and performance.
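
A minimal sketch of a network with the hyperparameters described above (sigmoid transfer function, back-propagation with a 10% learning rate and 10% momentum, one hidden layer, four output classes); the hidden-layer size, feature dimensionality, and data here are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical data: movement features extracted from difference images.
rng = np.random.default_rng(0)
X = rng.random((97, 20))            # 97 samples, 20 movement features (assumed)
y = rng.integers(0, 4, size=97)     # 0=sadness, 1=surprise, 2=joy, 3=disgust

model = MLPClassifier(
    hidden_layer_sizes=(10,),       # one hidden layer; size is an assumption
    activation="logistic",          # sigmoid transfer function
    solver="sgd",                   # back-propagation via stochastic gradient descent
    learning_rate_init=0.1,         # 10% learning rate
    momentum=0.1,                   # 10% momentum
    max_iter=50000,                 # upper bound on training iterations
)
model.fit(X, y)
```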

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high value-added business has been growing steadily in the culture and arts area. To generate high value from a performance, audience satisfaction is necessary; flow is a critical factor of satisfaction, and it should be induced in the audience and measured. To evaluate audience interest in and emotion about content, producers or investors need an index for measuring flow, but it is neither easy to define flow quantitatively nor to collect the audience's reactions immediately. Previous studies evaluated group flow as the sum or average of each person's reaction: the flow, or "good feeling", of each audience member was extracted from the face, especially changes of expression, and from body movement. However, it was not easy to handle the large amount of real-time data from each sensor, and it was also difficult to set up the experimental devices, in terms of cost and environment, because every participant needed a personal sensor to record physical signals and a camera placed in front of the head to capture facial expressions. A simpler system is therefore needed to analyze group flow. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using the difference image method and developed a Group Emotion Analysis (GEA) program for the flow judgment model. The difference image is obtained from the camera by subtracting the previous frame from the present frame, which yields the movement variation of the audience's reaction. After measuring the audience's reaction, synchronization is divided into dynamic state synchronization and static state synchronization. Dynamic state synchronization accompanies the audience's active reactions, while static state synchronization corresponds to little or no movement of the audience. Dynamic state synchronization can be caused by surprising content such as scary, creepy, or reversal scenes, while static state synchronization is triggered by impressive or sad scenes. We therefore showed participants several short movies containing the kinds of scenes mentioned above, which made them sad, clap, feel creeped out, and so on. To check the movement of the audience, we defined two critical points, α and β: dynamic state synchronization is meaningful when the movement value is over the critical point β, while static state synchronization is effective under the critical point α. β was derived from the clapping movement of 10 teams instead of the average amount of movement. After checking the reactive movement of the audience, the percentage ratio was calculated by dividing the number of people having a reaction by the total number of people. In total, 37 teams took part in the experiments at the "2012 Seoul DMC Culture Open". First, they were induced to clap by staff; second, a basic scene was shown to neutralize the audience's emotion; third, a flow scene was displayed; and fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. The audience clapped and laughed at the amusing scene, shook their heads or hid by closing their eyes at the creepy scene, and fell silent at the sad or touching scene.
If the ratio was over about 80%, the group could be judged as synchronized and flow as achieved. As a result, the audience showed similar reactions to similar stimulation at the same time and place. With additional normalization and experiments, the flow factor can be found through synchronization in much bigger groups, which should be useful for planning contents.
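
A minimal sketch of the difference-image measurement and the synchronization ratio described above, assuming one audience member or team per equal-width image region; the region layout and threshold handling are assumptions, not the GEA implementation:

```python
import cv2
import numpy as np

def movement_and_sync_ratio(prev_frame, frame, n_regions, beta):
    """Per-region movement from a difference image and the % of people reacting.

    prev_frame, frame: consecutive grayscale frames (np.uint8 arrays).
    n_regions: number of equal-width vertical regions (assumed one per person/team).
    beta: movement threshold for counting a reaction (the critical point).
    """
    diff = cv2.absdiff(frame, prev_frame)              # difference image
    regions = np.array_split(diff, n_regions, axis=1)  # split across the width
    movement = np.array([r.mean() for r in regions])   # mean change per region
    reacting = movement > beta
    sync_ratio = 100.0 * reacting.sum() / n_regions    # % of people reacting
    return movement, sync_ratio
```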

Study of the UAV for Application Plans and Landscape Analysis (UAV를 이용한 경관분석 및 활용방안에 관한 기초연구)

  • Kim, Seung-Min
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.32 no.3
    • /
    • pp.213-220
    • /
    • 2014
  • This study conducted topographical analysis using orthophotographic data from waypoint flights with a UAV, and constructed the system required for automatic waypoint flight using a multicopter. The results of the waypoint photographing are as follows. First, for the waypoint flight over an area of 9.3 ha, the photogrammetry took 40 minutes in total; the multicopter maintained a constant flight altitude and speed, so accurate photographing was conducted over the waypoints determined by the ground station, and the effectiveness of the photogrammetry was confirmed. Second, a digital camera was attached to the multicopter, which is lightweight and low in cost compared to a general photogrammetric unmanned airplane, and its mobility and economy were checked. In addition, matching of the photo data and production of DEM and DXF files made it possible to analyze the topography. Third, a high-resolution (2 cm) orthophoto was produced for the inside of a river, and it was found that changes in vegetation and topography around the river can be analyzed. Fourth, the approach can be used for more in-depth landscape analysis research such as terrain analysis and visibility analysis. This method may be widely used to analyze various terrains in cities and rivers, and can also be used for landscape control of cultural remains and tourist sites as well as the management of cultural and historical resources, such as visibility analysis based on the constructed DSM.
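
As an illustration of a basic terrain-analysis step on the DEM products mentioned above (not code from the study), the following sketch computes slope from a DEM given as a 2-D elevation array with a known cell size:

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope (in degrees) of a DEM given as a 2-D elevation array in metres.

    cell_size: ground distance between adjacent cells, e.g. 0.02 for a 2 cm grid.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)          # elevation gradients
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```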

A Topographical Classifier Development Support System Cooperating with Data Mining Tool WEKA from Airborne LiDAR Data (항공 라이다 데이터로부터 데이터마이닝 도구 WEKA를 이용한 지형 분류기 제작 지원 시스템)

  • Lee, Sung-Gyu;Lee, Ho-Jun;Sung, Chul-Woong;Park, Chang-Hoo;Cho, Woo-Sug;Kim, Yoo-Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.1
    • /
    • pp.133-142
    • /
    • 2010
  • To monitor the composition and change of the national land, an intelligent topographical classifier that enables accurate classification of land-cover types from airborne LiDAR data is highly required. We developed a topographical classifier development support system that cooperates with the data mining tool WEKA to help users construct accurate topographical classification systems. The support system provides the following functions: superposing LiDAR data upon corresponding aerial images, dividing LiDAR data into tiles for efficient processing, 3D visualization of partial LiDAR data, feature extraction from tiles, automatic WEKA input generation, and automatic C++ program generation from the classification rule set. In addition, with the data mining tool WEKA, users can choose highly distinguishable features through its attribute selection function and choose the best classification model as the resulting topographical classifier. Therefore, users can easily develop an intelligent topographical classifier that is well fitted to their development objectives by using the proposed support system.
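
The "automatic WEKA input generation" step above amounts to writing tile features in WEKA's ARFF format; a minimal sketch follows, where the feature names are illustrative rather than those used in the paper:

```python
def write_arff(path, rows, class_labels):
    """Write per-tile LiDAR features as a WEKA ARFF file.

    rows: iterable of (mean_height, height_std, mean_intensity, label) tuples;
    class_labels: list of land-cover class names.
    """
    with open(path, "w") as f:
        f.write("@RELATION lidar_tiles\n\n")
        f.write("@ATTRIBUTE mean_height NUMERIC\n")
        f.write("@ATTRIBUTE height_std NUMERIC\n")
        f.write("@ATTRIBUTE mean_intensity NUMERIC\n")
        f.write("@ATTRIBUTE class {" + ",".join(class_labels) + "}\n\n")
        f.write("@DATA\n")
        for mean_h, std_h, mean_i, label in rows:
            f.write(f"{mean_h},{std_h},{mean_i},{label}\n")

# Example: write_arff("tiles.arff", rows, ["ground", "building", "vegetation"])
```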

Utilization of Ground Control Points using LiDAR Intensity and DSM (LiDAR 반사강도와 DSM을 이용한 지상기준점 활용방안)

  • Lim, Sae-Bom;Kim, Jong-Mun;Shin, Sang-Cheol;Kwon, Chan-O
    • Spatial Information Research
    • /
    • v.18 no.5
    • /
    • pp.37-45
    • /
    • 2010
  • AT (Aerial Triangulation) is the essential procedure for creating orthophotos and for transforming coordinates on photographs into real-world coordinates, using GCPs (Ground Control Points) obtained by field survey and the exterior orientation parameters from GPS/INS as reference coordinates. In this procedure, GCPs can be collected by field survey using GPS and Total Station, or obtained from digital maps. Collecting GCPs by field survey is more accurate than taking GCPs from digital maps; however, a great deal of manpower, time, and cost must go into the collection procedure. On the other hand, when obtaining GCPs from digital maps, it is very difficult to secure the required accuracy because almost every stage of the collection procedure relies on the subjective judgement of the operator. In this study, the results from three methods have been compared to assess whether each case falls within the allowable error for perceivable objects such as road borders, speed bumps, and constructions: 1) GCP selection utilizing the distinctive LiDAR intensity values reflected from such objects, 2) GCP selection using the LiDAR DSM, and 3) GCPs from field survey. AT and error analysis were then carried out with the GCPs obtained in each case.
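
A minimal sketch of the accuracy comparison described above, computing per-axis RMSE between a field-surveyed GCP set and GCPs picked from LiDAR intensity or the DSM; this is illustrative, not the study's error-analysis procedure:

```python
import numpy as np

def rmse_per_axis(reference_xyz: np.ndarray, test_xyz: np.ndarray) -> np.ndarray:
    """RMSE of X, Y, Z between two GCP sets (N x 3 arrays, same point order).

    reference_xyz: field-survey coordinates;
    test_xyz: e.g. coordinates derived from LiDAR intensity images or the DSM.
    """
    diff = test_xyz - reference_xyz
    return np.sqrt((diff ** 2).mean(axis=0))
```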

Systematic Approach to The Extraction of Effective Region for Tongue Diagnosis (설진 유효 영역 추출의 시스템적 접근 방법)

  • Kim, Keun-Ho;Do, Jun-Hyeong;Ryu, Hyun-Hee;Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.6
    • /
    • pp.123-131
    • /
    • 2008
  • In Oriental medicine, the state of the tongue is an important indicator for diagnosing the condition of one's health, such as the physiological and clinicopathological changes of the internal organs. Tongue diagnosis is not only convenient but also non-invasive, and is therefore widely used in Oriental medicine. However, tongue diagnosis is strongly affected by examination circumstances such as the light source, the patient's posture, and the doctor's condition. To develop an automatic tongue diagnosis system for objective and standardized diagnosis, segmenting the tongue region from a captured facial image and classifying the tongue coating are inevitable but difficult tasks, since the colors of the tongue, lips, and skin around the mouth are similar. The proposed method includes preprocessing, over-segmentation, detecting the edge as a local minimum over the shading area given by the structure of the tongue, correcting local minima or detecting the edge with the greatest color difference, selecting the edge that corresponds to the tongue shape, and smoothing the edges; the preprocessing consists of down-sampling to reduce computation time, histogram equalization, and edge enhancement, and the procedure produces the segmented tongue region. Finally, the systematic procedure separated only the tongue region from a facial image containing the tongue, obtained from a digital tongue diagnosis system. Evaluation of the results by doctors of Oriental medicine showed that the segmented region, excluding non-tongue areas, provides important information for accurate diagnosis. The proposed method can be used for objective and standardized diagnosis and for a u-Healthcare system.
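
A minimal sketch of the preprocessing stage described above (down-sampling, histogram equalization, edge enhancement) using OpenCV; the scale factor and unsharp-masking kernel are assumptions, not the parameters used in the paper:

```python
import cv2

def preprocess(image_bgr, scale=0.5):
    """Down-sample, equalize the histogram, and enhance edges of a tongue image.

    image_bgr: facial image containing the tongue, as read by cv2.imread().
    """
    small = cv2.resize(image_bgr, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)       # down-sampling
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)                      # histogram equalization
    blurred = cv2.GaussianBlur(equalized, (5, 5), 0)
    sharpened = cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)  # unsharp masking
    return sharpened
```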