Title/Summary/Keyword: automatically

Development and Validation of AI Image Segmentation Model for CT Image-Based Sarcopenia Diagnosis (CT 영상 기반 근감소증 진단을 위한 AI 영상분할 모델 개발 및 검증)

  • Lee Chung-Sub;Lim Dong-Wook;Noh Si-Hyeong;Kim Tae-Hoon;Ko Yousun;Kim Kyung Won;Jeong Chang-Won
    • KIPS Transactions on Computer and Communication Systems / v.12 no.3 / pp.119-126 / 2023
  • As of 2021, sarcopenia was not yet formally classified as a disease in Korea, but it is recognized as a social problem in developed countries with aging populations. The diagnosis of sarcopenia follows the international guidelines presented by the European Working Group on Sarcopenia in Older People (EWGSOP) and the Asian Working Group for Sarcopenia (AWGS). In addition to absolute muscle mass, recent recommendations evaluate muscle function through physical performance evaluation, walking-speed measurement, and standing tests. As a representative method for measuring muscle mass, body composition analysis using DEXA has been formally adopted in clinical practice, and various studies measuring muscle mass from abdominal MRI or CT images are being actively conducted. In this paper, we develop an AI image segmentation model for sarcopenia diagnosis based on abdominal CT images, which require a relatively short imaging time, and describe its multicenter validation. We developed a U-Net model that selects the L3 region from a CT image and automatically segments muscle, subcutaneous fat, and visceral fat. To evaluate the model, internal validation was performed by calculating the intersection over union (IoU) of the segmented regions, and results of external validation using data from other hospitals are also presented. Based on the validation results, we review remaining problems and possible solutions.
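
As a rough illustration of the internal-validation metric, the sketch below computes per-class IoU between a predicted and a reference label mask. The class coding (0 = background, 1 = muscle, 2 = subcutaneous fat, 3 = visceral fat) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def class_iou(pred: np.ndarray, ref: np.ndarray, label: int) -> float:
    """Intersection over union for one tissue class in two label masks."""
    p, r = (pred == label), (ref == label)
    union = np.logical_or(p, r).sum()
    return np.logical_and(p, r).sum() / union if union else 1.0

# Hypothetical L3-slice label masks (class coding assumed above).
pred = np.random.randint(0, 4, (512, 512))
ref = np.random.randint(0, 4, (512, 512))
for label, name in enumerate(["background", "muscle", "subcut. fat", "visceral fat"]):
    print(f"{name}: IoU = {class_iou(pred, ref, label):.3f}")
```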

Development of Time-Cost Trade-Off Algorithm for JIT System of Prefabricated Girder Bridges (Nodular Girder) (프리팹 교량 거더 (노듈러 거더)의 적시 시공을 위한 공기-비용 알고리즘 개발)

  • Kim, Dae-Young;Chung, Taewon;Kim, Rang-Gyun
    • Korean Journal of Construction Engineering and Management / v.24 no.3 / pp.12-19 / 2023
  • In the construction industry, process and cost must be balanced appropriately so that the finished product can be delivered at minimum cost within the construction period. This requires considering the size of the bridge, the construction method, the environment and production capacity of the factory, and the transport distance. However, for various reasons arising during construction, problems such as schedule delays, cost increases, and degraded quality and reliability occur, so systematic and scientific construction techniques and process management technologies are needed to move beyond conventional methods. Prefab (prefabrication) is a representative OSC (Off-Site Construction) method in which components are manufactured in a factory and installed onsite. This study develops a resource and process plan optimization system for the process management of the Nodular girder, a prefab bridge girder. A simulation algorithm was developed to automatically test various variables in the personnel and equipment mobilization plan and derive optimal values. The algorithm was applied to the Dohwa 4 Bridge of the Paju-Pocheon Expressway Construction (Section 3), currently under construction, and the results were compared. Based on standard construction work estimates, the actual input manpower, equipment types, and quantities were applied to the Activity Card, and the work volumes from quantity take-off, resource planning, and resource requirements were reflected. In the future, we plan to improve the accuracy of the program by applying forecasting techniques to various field data.
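
The abstract does not give the algorithm itself; as a minimal sketch of the classic time-cost trade-off ("crashing") idea it builds on, the code below shortens the activity with the cheapest cost-per-day slope until a target duration is met, assuming a serial chain of activities. The activity names, durations, and cost slopes are invented, not figures from the Dohwa 4 Bridge case.

```python
# Illustrative activities: [normal_days, crash_limit_days, cost_per_day_saved]
activities = {
    "girder fabrication": [20, 14, 1.5],
    "transport":          [5, 4, 3.0],
    "erection":           [10, 7, 2.0],
}

def crash_to_deadline(acts: dict, deadline: int) -> float:
    """Reduce total duration to the deadline at minimum extra cost."""
    total_extra = 0.0
    while sum(a[0] for a in acts.values()) > deadline:
        # Pick the cheapest activity that can still be shortened.
        name = min((n for n, a in acts.items() if a[0] > a[1]),
                   key=lambda n: acts[n][2], default=None)
        if name is None:
            raise ValueError("deadline unreachable")
        acts[name][0] -= 1
        total_extra += acts[name][2]
    return total_extra

print("extra cost to meet deadline:", crash_to_deadline(activities, deadline=30))
```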

An Automatic ROI Extraction and Its Mask Generation based on Wavelet of Low DOF Image (피사계 심도가 낮은 이미지에서 웨이블릿 기반의 자동 ROI 추출 및 마스크 생성)

  • Park, Sun-Hwa;Seo, Yeong-Geon;Lee, Bu-Kweon;Kang, Ki-Jun;Kim, Ho-Yong;Kim, Hyung-Jun;Kim, Sang-Bok
    • Journal of the Korea Society of Computer and Information / v.14 no.3 / pp.93-101 / 2009
  • This paper proposes a new algorithm that automatically and rapidly searches for the region of interest (ROI) using the edge information of the high-frequency subbands of the wavelet transform. The proposed method runs a block-wise four-direction object-boundary search using the edge information and thereby detects ROIs. The whole image is split into 64×64 or 32×32 blocks, and each block is labeled an ROI block or a background block according to whether it contains edges. The four-direction search scans the image from the outside toward the center, exploiting the fact that a low-DOF image concentrates its edges toward the center. Once all edges are found, the method regards the blocks inside the edges as the ROI, generates the ROI masks, and sends them to the server; this is a form of dynamic ROI method. Existing methods suffered from complicated filtering and region merging, problems this method considerably alleviates. Because it processes the image block by block, it can also be applied to applications requiring real-time processing.
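
A minimal sketch of the block-labeling step, assuming PyWavelets for the transform and a simple mean-energy threshold to decide whether a 32×32 block contains edges; the wavelet basis ("haar") and the threshold value are illustrative choices, not the paper's.

```python
import numpy as np
import pywt

def roi_blocks(img: np.ndarray, block: int = 32, thresh: float = 10.0) -> np.ndarray:
    """Label each block True (ROI candidate) when the high-frequency
    wavelet subbands show enough edge energy inside it."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    edges = np.abs(cH) + np.abs(cV) + np.abs(cD)   # half-resolution edge map
    b = block // 2                                  # block size in subband coordinates
    h, w = edges.shape[0] // b, edges.shape[1] // b
    labels = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            labels[i, j] = edges[i*b:(i+1)*b, j*b:(j+1)*b].mean() > thresh
    return labels

img = np.random.rand(256, 256) * 255               # stand-in for a low-DOF image
print(roi_blocks(img).astype(int))
```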

Deep-learning-based GPR Data Interpretation Technique for Detecting Cavities in Urban Roads (도심지 도로 지하공동 탐지를 위한 딥러닝 기반 GPR 자료 해석 기법)

  • Byunghoon, Choi;Sukjoon, Pyun;Woochang, Choi;Churl-hyun, Jo;Jinsung, Yoon
    • Geophysics and Geophysical Exploration / v.25 no.4 / pp.189-200 / 2022
  • Ground subsidence on urban roads is a social issue that can lead to human casualties and property damage; therefore, it is crucial to detect underground cavities in advance and repair them. Cavity detection is mainly performed using ground penetrating radar (GPR) surveys, but interpreting the massive amount of GPR data is time-consuming, and the results vary with the skill and subjectivity of experts. To address these problems, researchers have studied automation and quantification techniques for GPR data interpretation, with recent work focusing on deep-learning-based techniques. In this study, we describe a deep-learning-based hyperbolic event detection process for GPR data interpretation, implementing the algorithms introduced in the preexisting research step by step. First, a YOLOv3 object detection model was applied to automatically detect hyperbolic signals. Subsequently, only the hyperbolic signals were extracted using the column-connection clustering (C3) algorithm. Finally, the horizontal locations of the underground cavities were determined using regression analysis. The hyperbolic event detection achieved a precision of 84% and a recall of 92% based on AP50. The predicted horizontal locations of the four underground cavities were approximately 0.12 to 0.36 m away from their actual locations. Thus, we confirmed that the existing deep-learning-based interpretation technique is reliable for detecting the hyperbolic patterns that indicate underground cavities.
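
The final regression step can be sketched as follows: for a point diffractor, two-way traveltime satisfies t(x)² = (4/v²)((x − x0)² + d²), which is quadratic in x, so a least-squares quadratic fit of t² against x puts the cavity's horizontal location at the apex x0 = −c1/(2·c2). The velocity, depth, and picks below are synthetic; the paper's exact regression setup is not given in the abstract.

```python
import numpy as np

# Synthetic hyperbola picks (e.g., from C3-clustered detections).
v, x0, d = 0.1, 5.0, 1.0                      # m/ns, m, m (illustrative)
x = np.linspace(3.0, 7.0, 21)
t = (2.0 / v) * np.sqrt((x - x0) ** 2 + d ** 2)

c2, c1, c0 = np.polyfit(x, t ** 2, 2)          # least-squares quadratic fit of t^2
x0_est = -c1 / (2.0 * c2)                      # apex = horizontal cavity location
print(f"estimated horizontal location: {x0_est:.2f} m")
```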

CKFont2: An Improved Few-Shot Hangul Font Generation Model Based on Hangul Composability (CKFont2: 한글 구성요소를 이용한 개선된 퓨샷 한글 폰트 생성 모델)

  • Jangkyoung, Park;Ammar, Ul Hassan;Jaeyoung, Choi
    • KIPS Transactions on Software and Data Engineering / v.11 no.12 / pp.499-508 / 2022
  • Much research has been carried out on Hangul generation models using deep learning, and recent work has investigated how to minimize the number of input characters needed to generate one complete set of Hangul (few-shot learning). In this paper, we analyze and improve the CKFont model (hereafter CKFont1), which uses 28 characters, and propose the CKFont2 model, which uses only 14. Whereas CKFont1 generates all Hangul by extracting 51 components from 28 characters, CKFont2 generates all Hangul from only 14 characters containing the 24 basic components (14 consonants and 10 vowels), the minimum number of characters among currently known models. From the basic consonants and vowels, the remaining 27 components (5 double consonants, 11 compound consonants, and 11 compound vowels) are learned and generated by deep learning, and the 27 generated components are combined with the 24 basic components; all Hangul characters are then generated automatically from the resulting 51 components. The model's superiority was verified by comparative analysis against the zi2zi, CKFont1, and MX-Font models. It is an efficient and effective model with a simple structure that saves time and resources, and it can be extended to Chinese, Thai, and Japanese.
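
The composability the model exploits mirrors the Unicode Hangul syllable arithmetic: a precomposed syllable's code point is 0xAC00 + (initial × 21 + medial) × 28 + final. A minimal sketch of that composition step:

```python
# Unicode Hangul composition: 19 initials x 21 medials x 28 finals (incl. none).
def compose(cho: int, jung: int, jong: int = 0) -> str:
    """Combine component indices into one precomposed Hangul syllable."""
    return chr(0xAC00 + (cho * 21 + jung) * 28 + jong)

# ㄱ(0) + ㅏ(0) -> 가 ; ㅎ(18) + ㅏ(0) + ㄴ(4) -> 한
print(compose(0, 0), compose(18, 0, 4))
```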

A Comparison of Image Classification System for Building Waste Data based on Deep Learning (딥러닝기반 건축폐기물 이미지 분류 시스템 비교)

  • Jae-Kyung Sung;Mincheol Yang;Kyungnam Moon;Yong-Guk Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.199-206 / 2023
  • This study uses deep learning algorithms to automatically classify construction waste into three categories: wood waste, plastic waste, and concrete waste. Two models were compared: VGG-16, a convolutional neural network image classification model, and ViT (Vision Transformer), which applies the NLP-style Transformer to images treated as sequences of patches. Image data was collected by crawling search engines worldwide; after excluding duplicate images and images too difficult to distinguish with the naked eye, 3,000 images were obtained, 1,000 per category. In addition, to improve the accuracy of the models, data augmentation during training expanded the set to a total of 30,000 images. Despite the unstructured nature of the collected data, VGG-16 achieved an accuracy of 91.5% and ViT an accuracy of 92.7%, suggesting practical applicability to actual construction waste management. Building on this study, object detection or semantic segmentation techniques would enable more precise classification even within a single image, resulting in more accurate waste classification.
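
A minimal sketch of how the two backbones might be adapted to the three waste classes using torchvision, assuming ImageNet-pretrained weights and standard augmentation transforms; the specific transforms and hyperparameters are illustrative, not the study's exact configuration.

```python
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 3   # wood, plastic, concrete waste

# Augmentations of the kind used to grow the training set.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# VGG-16: replace the final classifier layer.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

# ViT-B/16: replace the classification head.
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads.head = nn.Linear(vit.heads.head.in_features, NUM_CLASSES)
```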

Development of Global Fishing Application to Build Big Data on Fish Resources (어자원 빅데이터 구축을 위한 글로벌 낚시 앱 개발)

  • Pi, Su-Young;Lee, Jung-A;Yang, Jae-Hyuck
    • Journal of Digital Convergence / v.20 no.3 / pp.333-341 / 2022
  • Despite rapidly increasing demand for fishing, related studies and information are scarce, and there are limits to obtaining data on the global distribution of fish resources. Existing surveys of fish resource distribution collect information by visiting a survey area and casting nets, so it is nearly impossible to gather nationwide data covering streams, rivers, and seas. Likewise, fish length has conventionally been measured with a tape measure; in this study, the FishingTAG smart measure was developed instead. When a picture is taken with the smart measure, the length of the fish and the environmental data at the time of the catch are collected automatically, and since no tape measure is needed, user convenience increases. With a global fishing application built on the smart measure, it becomes possible, first, to continuously collect fish resource samples over wide areas around the world in real time; second, to reduce the enormous cost of collecting fish resource data and to monitor the distribution and spread of ecosystem-disturbing alien fish species; and third, by visualizing global fish resource information on Google Maps, to give users information on fish resources at their location. Since the collected data is provided in real time, it is expected to be of great help to various studies and to policy making.
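
The abstract does not describe the smart measure's internals; as a hedged sketch of the basic idea, a reference marker of known physical size in the same photo yields a pixel-to-centimeter scale from which fish length follows. The function name and all numbers below are hypothetical.

```python
def fish_length_cm(fish_px: float, marker_px: float, marker_cm: float) -> float:
    """Scale the fish's pixel length by a reference marker of known size,
    assuming fish and marker lie at roughly the same distance from the camera."""
    return fish_px * (marker_cm / marker_px)

# Hypothetical measurements from one photo: the marker tag is 10 cm wide.
print(f"{fish_length_cm(fish_px=830, marker_px=210, marker_cm=10.0):.1f} cm")
```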

A Study on the Detection of COVID-19 Fake News Using Graph Embedding - Comparison of Detection Performance According to the Use of Social Engagement Networks (그래프 임베딩을 활용한 코로나19 가짜뉴스 탐지 연구 - 사회적 참여 네트워크의 이용 여부에 따른 탐지 성능 비교)

  • Jeong, Iitae;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.197-216 / 2022
  • With the development of Internet and mobile technology and the spread of social media, a large amount of information is being generated and distributed online. Some of it is useful to the public, but some is misleading. Such misleading information, so-called 'fake news', has caused great harm to society in recent years. Since the global spread of COVID-19 in 2020, a large amount of fake news has circulated online, and unlike other fake news, fake news related to COVID-19 can threaten people's health and even their lives. Therefore, intelligent technology that automatically detects and prevents COVID-19-related fake news is a meaningful research topic for improving social health. Although such fake news has spread rapidly through social media, few studies in Korea have proposed intelligent fake news detection that uses information about how the news spreads through social media. Against this background, we propose a novel model that uses Graph2vec, a graph embedding method, to effectively detect fake news related to COVID-19. Mainstream approaches to fake news detection have focused on news content, i.e., characteristics of the text, whereas the proposed model can also exploit information-transmission relationships in social engagement networks. Experiments on a real-world data set show that the proposed model outperforms traditional models in prediction accuracy.
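
A minimal sketch of the embedding-plus-classifier pipeline, assuming the karateclub implementation of Graph2Vec (the abstract does not name a library) and toy propagation graphs standing in for real social-engagement networks:

```python
import networkx as nx
from karateclub import Graph2Vec
from sklearn.linear_model import LogisticRegression

# Toy engagement (propagation) graphs; karateclub expects nodes labeled 0..n-1.
graphs = [nx.gnp_random_graph(20, p) for p in [0.1, 0.3] * 10]
labels = [0, 1] * 10                       # 0 = real news, 1 = fake news (toy)

embedder = Graph2Vec(dimensions=64, min_count=1)   # whole-graph embeddings
embedder.fit(graphs)
X = embedder.get_embedding()

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```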

Data Augmentation for Tomato Detection and Pose Estimation (토마토 위치 및 자세 추정을 위한 데이터 증대기법)

  • Jang, Minho;Hwang, Youngbae
    • Journal of Broadcast Engineering / v.27 no.1 / pp.44-55 / 2022
  • To automatically provide information on fruit in agriculture-related broadcast content, instance segmentation of the target fruit is required, and information on the fruit's 3D pose can also be used meaningfully. This paper presents research that provides such information about tomatoes in video content. Learning instance segmentation requires a large amount of data, but sufficient training data is difficult to obtain, so training data was generated through data augmentation based on a small number of real images. Compared with using only the real images, detection performance improved when training on synthesized images created by separating foreground and background; training on images augmented with conventional image pre-processing techniques yielded higher performance still. To estimate the pose from the detection result, a point cloud was obtained with an RGB-D camera, cylinder fitting based on least-squares minimization was performed, and the tomato pose was estimated from the axial direction of the cylinder. Various experiments show that detection, instance segmentation, and cylinder fitting of the target object work effectively.
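
The abstract states least-squares cylinder fitting without details; one common sketch estimates the axis direction as the principal component of the point cloud (a typical initialization for a full least-squares fit), assuming the cloud is elongated along the axis, and then reads the pose from that axis. The point cloud below is synthetic.

```python
import numpy as np

# Synthetic point cloud: an elongated cylinder (12 cm tall, 3 cm radius) plus noise.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
z = rng.uniform(-0.06, 0.06, 2000)
pts = np.c_[0.03 * np.cos(theta), 0.03 * np.sin(theta), z]
pts += rng.normal(scale=0.002, size=pts.shape)

centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axis = vt[0]                                   # principal direction ~ cylinder axis
radius = np.linalg.norm(centered - np.outer(centered @ axis, axis), axis=1).mean()
print("axis direction:", np.round(axis, 3), " radius ~", round(radius, 3), "m")
```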

Quality Evaluation of Automatically Generated Metadata Using ChatGPT: Focusing on Dublin Core for Korean Monographs (ChatGPT가 자동 생성한 더블린 코어 메타데이터의 품질 평가: 국내 도서를 대상으로)

  • SeonWook Kim;HyeKyung Lee;Yong-Gu Lee
    • Journal of the Korean Society for Information Management / v.40 no.2 / pp.183-209 / 2023
  • The purpose of this study is to evaluate Dublin Core metadata generated by ChatGPT from book covers, title pages, and colophons. We collected covers, title pages, and colophons from 90 books, input them into ChatGPT to generate Dublin Core metadata, and evaluated the results in terms of completeness and accuracy. Overall performance was satisfactory, with completeness of 0.87 and accuracy of 0.71. Among the individual elements, Title, Creator, Publisher, Date, Identifier, Rights, and Language performed well. The Subject and Description elements showed relatively lower completeness and accuracy but confirmed the generative capability regarded as ChatGPT's inherent strength. On the other hand, books in the social sciences and technology sections of DDC showed slightly lower accuracy for the Contributor element; this was attributed to ChatGPT's attribute-extraction errors, omissions in the original bibliographic descriptions, and the language composition of ChatGPT's training data.
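
A minimal sketch of how such generation might be driven through the OpenAI API, assuming OCR text of a title page as input; the model name, prompt wording, and helper function are illustrative assumptions, not the study's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def dublin_core_from_text(page_text: str) -> str:
    """Ask the model to emit Dublin Core elements for one book (sketch)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Extract Dublin Core metadata (Title, Creator, Subject, "
                        "Description, Publisher, Contributor, Date, Identifier, "
                        "Rights, Language) as 'element: value' lines."},
            {"role": "user", "content": page_text},
        ],
    )
    return resp.choices[0].message.content

print(dublin_core_from_text("제목: 데이터 과학 입문 / 지은이: 홍길동 / 출판사: ..."))
```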