• Title/Summary/Keyword: Image annotation

Comparison of transcriptome between high- and low-marbling fineness in longissimus thoracis muscle of Korean cattle

  • Beak, Seok-Hyeon;Baik, Myunggi
    • Animal Bioscience
    • /
    • v.35 no.2
    • /
    • pp.196-203
    • /
    • 2022
  • Objective: This study compared differentially expressed genes (DEGs) between groups with high and low numbers of fine marbling particles (NFMP) in the longissimus thoracis muscle (LT) of Korean cattle to understand the molecular events associated with fine marbling particle formation. Methods: The size and distribution of marbling particles in the LT were assessed with a computer image analysis method. Based on the NFMP, 10 LT samples were selected and assigned to either high- (n = 5) or low- (n = 5) NFMP groups. Using RNA sequencing, LT transcriptomic profiles were compared between the high- and low-NFMP groups. DEGs were selected at p<0.05 and |fold change| >2 and subjected to functional annotation. Results: In total, 328 DEGs were identified, with 207 up-regulated and 121 down-regulated genes in the high-NFMP group. Pathway analysis of these DEGs revealed five significant (p<0.05) Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways; the significant terms included endocytosis (p = 0.023), protein processing in endoplasmic reticulum (p = 0.019), and adipocytokine signaling pathway (p = 0.024), which are thought to regulate adipocyte hypertrophy and hyperplasia. The expression of sirtuin 4 (p<0.001) and insulin receptor substrate 2 (p = 0.043), which are associated with glucose uptake and adipocyte differentiation, was higher in the high-NFMP group than in the low-NFMP group. Conclusion: Transcriptome differences between the high- and low-NFMP groups suggest that pathways regulating adipocyte hyperplasia and hypertrophy are involved in the marbling fineness of the LT.
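
To make the DEG selection criterion above concrete, the short sketch below filters a differential-expression table at p < 0.05 and |fold change| > 2 (i.e. |log2 fold change| > 1); the data frame contents and column names are hypothetical, not the study's data.

```python
import pandas as pd

# Hypothetical differential-expression table: one row per gene with a
# log2 fold change (high- vs. low-NFMP) and a p-value.
results = pd.DataFrame({
    "gene": ["SIRT4", "IRS2", "GENE_A", "GENE_B"],
    "log2_fold_change": [1.4, 1.1, 0.3, -2.2],
    "p_value": [0.0005, 0.043, 0.40, 0.012],
})

# |fold change| > 2 corresponds to |log2 fold change| > 1.
degs = results[(results["p_value"] < 0.05) &
               (results["log2_fold_change"].abs() > 1)]

up_regulated = degs[degs["log2_fold_change"] > 0]
down_regulated = degs[degs["log2_fold_change"] < 0]
print(len(up_regulated), "up-regulated;", len(down_regulated), "down-regulated")
```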

Big Data using Artificial Intelligence CNN on Unstructured Financial Data (비정형 금융 데이터에 관한 인공지능 CNN 활용 빅데이터 연구)

  • Ko, Young-Bong;Park, Dea-Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.232-234
    • /
    • 2022
  • Big data is widely used in customer relationship management, relationship marketing, financial business improvement, credit information, and risk management. Moreover, as non-face-to-face financial transactions have recently become more active due to COVID-19, financial big data is in greater demand for managing relationships with customers. In terms of customer relationships, financial big data has reached a point where an emotional rather than a purely technical approach is required, and relationship marketing needs to emphasize the emotional aspect rather than the cognitive and rational aspects. Traditional financial data has been collected and utilized through text-based customer transaction data, corporate financial information, and questionnaires. In this study, the customer's emotional image data, that is, unstructured data based on the customer's cultural and leisure activities, is acquired through SNS, and the customer's activity images are analyzed with an artificial intelligence CNN algorithm. The activity analysis is then applied to the annotated AI, and an AI big data model is designed to analyze the behavior patterns represented in the annotations.
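
As a rough, illustrative sketch of the kind of CNN-based image analysis described above (not the model used in the paper), the following defines a minimal convolutional classifier for activity images; the layer sizes, input resolution, and three-class output are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

# Minimal CNN sketch for classifying activity images into a few
# hypothetical categories (e.g. sports, travel, dining); the layer
# sizes and 3-class output are illustrative assumptions only.
class ActivityCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = ActivityCNN()(torch.randn(1, 3, 224, 224))  # one dummy RGB image
```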

Detection Algorithm of Road Damage and Obstacle Based on Joint Deep Learning for Driving Safety (주행 안전을 위한 joint deep learning 기반의 도로 노면 파손 및 장애물 탐지 알고리즘)

  • Shim, Seungbo;Jeong, Jae-Jin
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.20 no.2
    • /
    • pp.95-111
    • /
    • 2021
  • As the population decreases in an aging society, the average age of drivers increases. Accordingly, elderly drivers, who are at high risk of being in an accident, need autonomous-driving vehicles. To secure driving safety on the road, such vehicles require several technologies for responding to various obstacles, including technology to recognize static obstacles, such as poor road conditions, as well as dynamic obstacles, such as vehicles, bicycles, and people, that may be encountered while driving. In this study, we propose a deep neural network algorithm capable of simultaneously detecting these two types of obstacles. For this algorithm, we used 1,418 road images and produced annotation data that marks seven categories of dynamic obstacles and labels images to indicate road damage. As a result of training, dynamic obstacles were detected with an average accuracy of 46.22%, and road surface damage was detected with a mean intersection over union of 74.71%. In addition, the average elapsed time required to process a single image is 89 ms, making this algorithm suitable for personal mobility vehicles, which are slower than ordinary vehicles. In the future, driving safety with personal mobility vehicles is expected to improve by utilizing technology that detects road obstacles.
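
Road-surface damage is evaluated above with mean intersection over union; a minimal, generic sketch of how per-class IoU and its mean can be computed from integer label masks is shown below (this is not the paper's evaluation code, and the two-class toy masks are invented).

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection over union across classes for integer label masks."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 2-class example (0 = background, 1 = road damage).
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))
```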

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has evolved extremely quickly. It is now equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, along with additional sensors such as GPS and a digital compass. This evolution helps application developers use the power of smartphones to create rich environments that offer a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude; these systems have a major limitation in that the AR contents are hardly ever overlaid accurately on the real target. Another line of research is context-based AR using image recognition and tracking, where the AR contents are precisely overlaid on the real target, but real-time performance is restricted by retrieval time and such systems are hard to deploy over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system mainly consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smartphone viewfinder when they point the smartphone at a certain building or landmark. For this, a landmark recognition technique is applied, and SURF point-based features are used in the matching process because of their robustness. To ensure that image retrieval and matching are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the landmarks that are nearest and in the pointed orientation; the query image is matched only against this selected data, so matching speed increases significantly. The second part is the annotation module. Instead of viewing only the augmented information media, users can create virtual annotations based on Linked Data. Full knowledge about the landmark is not required: users can simply look for an appropriate topic by searching Linked Data with a keyword, which helps the system find the target URI and generate correct AR contents. To recognize target landmarks, images of the selected building or landmark are captured from different angles and distances, a procedure similar to building a connection between the real building and the virtual information that exists in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and user location information are used to restrict the retrieval range. Compared with existing research using clusters and GPS information, where the retrieval time is around 70~80 ms, experimental results show that our approach reduces the retrieval time to around 18~20 ms on average. The total processing time is therefore reduced from 490~540 ms to 438~480 ms, and the performance improvement becomes more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
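
The contextual filtering idea described above, matching the query image only against landmarks that are both nearby and in the pointed direction, could look roughly like the sketch below; the landmark records, distance and angle thresholds, and helper functions are hypothetical, not the authors' implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing from the user's position to a landmark."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def candidate_landmarks(user, heading_deg, landmarks,
                        max_dist_m=300, max_angle_deg=30):
    """Keep only landmarks that are close and roughly in the pointed direction."""
    selected = []
    for lm in landmarks:
        d = haversine_m(user[0], user[1], lm["lat"], lm["lon"])
        diff = abs((bearing_deg(user[0], user[1], lm["lat"], lm["lon"])
                    - heading_deg + 180) % 360 - 180)
        if d <= max_dist_m and diff <= max_angle_deg:
            selected.append(lm)
    return selected  # only these candidates would go on to SURF matching
```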

AI-Based Object Recognition Research for Augmented Reality Character Implementation (증강현실 캐릭터 구현을 위한 AI기반 객체인식 연구)

  • Seok-Hwan Lee;Jung-Keum Lee;Hyun Sim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1321-1330
    • /
    • 2023
  • This study addresses the problem of 3D pose estimation for multiple human objects from a single image generated during the character development process, for use in augmented reality. In the existing top-down method, all objects in the image are first detected and then each is reconstructed independently; the problem is that inconsistent results may occur due to overlap or depth-order mismatch between the reconstructed objects. The goal of this study is to solve these problems and develop a single network that provides a consistent 3D reconstruction of all humans in a scene. Integrating a human body model based on the SMPL parametric system into a top-down framework was an important design choice. Through this, two losses were introduced: a collision loss based on a distance field and a loss that considers depth order. The first loss prevents overlap between reconstructed people, and the second loss adjusts the depth ordering of people so that occlusion reasoning and the annotated instance segmentation are rendered consistently. This method allows depth information to be provided to the network without explicit 3D annotation of the images. Experimental results show that this study's methodology performs better than existing methods on standard 3D pose benchmarks, and the proposed losses enable more consistent reconstruction from natural images.
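
As one way to picture a depth-ordering penalty of the kind described above (not the paper's exact loss), the sketch below applies a hinge penalty whenever a person annotated as being in front is reconstructed behind the person they occlude; the tensors, pair list, and margin are assumptions for illustration.

```python
import torch

def depth_order_loss(depths: torch.Tensor, pairs, margin: float = 0.1) -> torch.Tensor:
    """Hinge penalty when a person annotated as being in front is reconstructed
    behind the person they occlude.

    depths: (N,) estimated depth of each reconstructed person (smaller = closer)
    pairs:  list of (front_idx, back_idx) relations, e.g. derived from occlusion
            annotations in the instance segmentation
    """
    loss = depths.new_zeros(())
    for front, back in pairs:
        # want depths[front] + margin <= depths[back]
        loss = loss + torch.relu(depths[front] - depths[back] + margin)
    return loss

depths = torch.tensor([2.0, 1.5, 3.0], requires_grad=True)
print(depth_order_loss(depths, pairs=[(0, 1), (1, 2)]))
```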

Video Retrieval System supporting Content-based Retrieval and Scene-Query-By-Example Retrieval (비디오의 의미검색과 예제기반 장면검색을 위한 비디오 검색시스템)

  • Yoon, Mi-Hee;Cho, Dong-Uk
    • The KIPS Transactions:PartB
    • /
    • v.9B no.1
    • /
    • pp.105-112
    • /
    • 2002
  • In order to process video data effectively, we need to store its content in a database, and a content-based retrieval method that can process the various queries of all users is required. In this paper, we present VRS (Video Retrieval System), which provides similarity queries, SQBE (Scene Query By Example) queries, and content-based retrieval by combining feature-based retrieval and annotation-based retrieval. The SQBE query makes it possible for a user to retrieve scenes more exactly by inserting and deleting objects based on a retrieved scene. We propose a query language and a query processing algorithm for the SQBE query and carry out a performance evaluation of similarity retrieval. The proposed system is implemented with Visual C++ and Oracle.
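
A minimal sketch of the SQBE idea, editing a retrieved example scene's object set and then re-ranking scenes by similarity, is given below; the scene descriptions and the Jaccard ranking are illustrative assumptions, not the system's actual query language or algorithm.

```python
# Hypothetical scene descriptions: scene id -> set of annotated objects.
scenes = {
    "s1": {"car", "road", "tree"},
    "s2": {"car", "person", "road"},
    "s3": {"boat", "sea"},
}

def sqbe_query(example_objects, insert=(), delete=()):
    """Edit the example scene's object set, then rank scenes by Jaccard similarity."""
    query = (set(example_objects) | set(insert)) - set(delete)
    def jaccard(objs):
        return len(query & objs) / len(query | objs) if query | objs else 0.0
    return sorted(scenes.items(), key=lambda kv: jaccard(kv[1]), reverse=True)

# Start from a retrieved scene, add "person", drop "tree", and re-query.
print(sqbe_query(scenes["s1"], insert=["person"], delete=["tree"]))
```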

Design and Implementation of the Query Processor and Browser for Content-based Retrieval in Video Database (내용기반 검색을 위한 비디오 데이터베이스 질의처리기 및 브라우저의 설계 및 구현)

  • Lee, Hun-Sun;Kim, Yong-Geol;Bae, Yeong-Rae;Jin, Seong-Il
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.8
    • /
    • pp.2008-2019
    • /
    • 1999
  • As computing technologies rapidly progress and become widely used, the need for high-quality information has increased. To satisfy this need, it is essential to develop a system that provides efficient mechanisms for storing, managing, and retrieving complex multimedia data, especially video data. In this paper, we propose a metadata model that supports content-based retrieval of video data, and we design and implement an integrated user interface for querying and a browser for content-based retrieval in a video database, allowing users to efficiently access and browse the video clips they want to see. The proposed query processor and browser support various user queries by integrating image features, spatio-temporal features, and annotations. Our system supports structured browsing of retrieved results, so users can access relevant video clips more exactly and efficiently. Without browsing a whole video clip, users can grasp its content by viewing the storyboard; this storyboard facility lets users understand the content of a video clip more quickly.
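
A metadata model that integrates image features, spatio-temporal features, and annotations for a video clip might be sketched as below; the classes and field names are hypothetical and are not the schema implemented in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KeyFrame:
    timestamp_s: float
    color_histogram: List[float]          # image feature of the frame

@dataclass
class ObjectTrack:
    label: str                            # e.g. "person", "car"
    start_s: float
    end_s: float
    trajectory: List[tuple]               # (t, x, y) spatio-temporal feature

@dataclass
class VideoClip:
    clip_id: str
    start_s: float
    end_s: float
    keyframes: List[KeyFrame] = field(default_factory=list)
    objects: List[ObjectTrack] = field(default_factory=list)
    annotations: List[str] = field(default_factory=list)   # free-text descriptions

clip = VideoClip("news_001", 0.0, 12.5,
                 annotations=["anchor introduces weather report"])
```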

Designing Dataset for Artificial Intelligence Learning for Cold Sea Fish Farming

  • Sung-Hyun KIM;Seongtak OH;Sangwon LEE
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.208-216
    • /
    • 2023
  • The purpose of our study is to design datasets for artificial intelligence learning for cold-sea fish farming. Salmon is one of the most popular fish species among men and women of all ages, but most of the supply depends on imports. Recently, salmon farming, which is rapidly emerging as a specialized industry in Gangwon-do, has attracted attention; to develop salmon farming successfully, data related to salmon and salmon farming must be built systematically and used to develop aquaculture techniques. Meanwhile, the catch of pollack continues to decrease. Beyond increasing the release volume for resource recovery, efforts should be made to improve, based on data, the major factors affecting pollack survival; to this end, data related to pollack catch and ecology must be systematically collected and analyzed to prepare a sustainable resource-management strategy. Image data were obtained using CCTV and underwater cameras to establish an intelligent aquaculture strategy for salmon and pollack, which are considered representative fish species of Gangwon-do. Using these data, we built learning data suitable for AI analysis and prediction. Such data construction can be used to develop models for predicting the growth of salmon and pollack and to develop algorithms for AI services that predict water temperature, one of the key variables determining the survival rate of pollack. This in turn will enable intelligent aquaculture and resource management that takes into account the ecological characteristics of the fish species. We expect this work to contribute meaningfully to sustainable fisheries and fisheries resource management.
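
As one possible illustration of the image-annotation records such a dataset could use, the snippet below shows a COCO-style entry for a single underwater image; the schema and all field values are assumptions for illustration, not the project's actual format.

```python
import json

# Hypothetical COCO-style annotation record for a single underwater image;
# file names and values are invented for illustration only.
annotation = {
    "images": [{"id": 1, "file_name": "tank03_cam1_000123.jpg",
                "width": 1920, "height": 1080,
                "water_temperature_c": 11.2}],
    "categories": [{"id": 1, "name": "salmon"}, {"id": 2, "name": "pollack"}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                     "bbox": [412, 230, 180, 64],   # x, y, width, height
                     "estimated_length_cm": 38.5}],
}
print(json.dumps(annotation, indent=2))
```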

Development of wound segmentation deep learning algorithm (딥러닝을 이용한 창상 분할 알고리즘 )

  • Hyunyoung Kang;Yeon-Woo Heo;Jae Joon Jeon;Seung-Won Jung;Jiye Kim;Sung Bin Park
    • Journal of Biomedical Engineering Research
    • /
    • v.45 no.2
    • /
    • pp.90-94
    • /
    • 2024
  • Diagnosing wounds presents a significant challenge in clinical settings due to their complexity and the subjective assessments of clinicians. Deep learning algorithms assess wounds quantitatively, overcoming these challenges. However, a limitation of existing research is its reliance on specific datasets. To address this limitation, we created a comprehensive dataset by combining an open dataset with a self-produced dataset to enhance clinical applicability. In the annotation process, machine learning based on Gradient Vector Flow (GVF) was utilized to improve objectivity and efficiency over time. Furthermore, the deep learning model was a U-Net equipped with residual blocks. Significant improvements were observed when using an input dataset whose images were cropped to contain only the wound region of interest (ROI), as opposed to the original-sized dataset: the Dice score increased remarkably from 0.80 with the original dataset to 0.89 with the wound ROI crop dataset. This study highlights the need for diverse research using comprehensive datasets. In future studies, we aim to further enhance and diversify our dataset to encompass different environments and ethnicities.
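
The Dice score used above to report segmentation quality can be computed as in the generic sketch below (this is not the study's evaluation code, and the toy masks are invented).

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary wound masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred   = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 1], [1, 1, 0]])
print(round(dice_score(pred, target), 3))  # 2*3 / (3+4) ≈ 0.857
```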

Echocardiography Core Laboratory Validation of a Novel Vendor-Independent Web-Based Software for the Assessment of Left Ventricular Global Longitudinal Strain

  • Ernest Spitzer;Benjamin Camacho;Blaz Mrevlje;Hans-Jelle Brandendburg;Claire B. Ren
    • Journal of Cardiovascular Imaging
    • /
    • v.31 no.3
    • /
    • pp.135-141
    • /
    • 2023
  • BACKGROUND: Global longitudinal strain (GLS) is an accurate and reproducible parameter of left ventricular (LV) systolic function that has shown meaningful prognostic value. Fast, user-friendly, and accurate tools are required for its widespread implementation. We aimed to compare a novel web-based tool with two established algorithms for strain analysis and to test its reproducibility. METHODS: Thirty echocardiographic datasets with focused LV acquisitions were analyzed using three different semi-automated endocardial GLS algorithms by two readers. Analyses were repeated by one reader for the purpose of intra-observer variability. CAAS Qardia (Pie Medical Imaging) was compared with 2DCPA and AutoLV (TomTec). RESULTS: Mean GLS values were -15.0 ± 3.5% from Qardia, -15.3 ± 4.0% from 2DCPA, and -15.2 ± 3.8% from AutoLV. Mean GLS between Qardia and 2DCPA was not statistically different (p = 0.359), with a bias of -0.3%, limits of agreement (LOA) of 3.7%, and an intraclass correlation coefficient (ICC) of 0.88. Mean GLS between Qardia and AutoLV was not statistically different (p = 0.637), with a bias of -0.2%, LOA of 3.4%, and an ICC of 0.89. The coefficient of variation (CV) for intra-observer variability was 4.4% for Qardia, 8.4% for 2DCPA, and 7.7% for AutoLV. The CV for inter-observer variability was 4.5%, 8.1%, and 8.0%, respectively. CONCLUSIONS: In echocardiographic datasets of good image quality analyzed at an independent core laboratory using a standardized annotation method, a novel web-based tool for GLS analysis showed consistent results compared with two algorithms of an established platform. Moreover, inter- and intra-observer reproducibility results were excellent.
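
The agreement statistics reported here can be illustrated with a short sketch that computes a Bland-Altman bias, 95% limits of agreement, and one common within-subject coefficient of variation from paired measurements; the values below are dummy numbers, not study data, and the ICC is left to a dedicated statistics package.

```python
import numpy as np

# Dummy paired GLS measurements (%); not study data.
gls_a = np.array([-15.1, -14.2, -16.8, -13.5, -15.9])
gls_b = np.array([-15.4, -14.0, -16.2, -13.9, -15.5])

# Bland-Altman bias and 95% limits of agreement between the two sets.
diff = gls_a - gls_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)

# One common within-subject coefficient of variation for repeated readings:
# within-subject SD, sqrt(mean(diff^2)/2), divided by the overall mean magnitude.
within_sd = np.sqrt(np.mean(diff ** 2) / 2)
cv = 100 * within_sd / np.abs(np.concatenate([gls_a, gls_b]).mean())

print(f"bias={bias:.2f}%, LOA=±{loa:.2f}%, CV={cv:.1f}%")
```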