• Title/Summary/Keyword: Image-based analysis


Manufacturing Techniques of Bronze Medium Mortars (Jungwangu, 中碗口) in Joseon Dynasty (조선시대 중완구의 제작 기술)

  • Huh, Ilkwon;Kim, Haesol
    • Conservation Science in Museum
    • /
    • v.26
    • /
    • pp.161-182
    • /
    • 2021
  • A jungwangu, a type of medium-sized mortar, is a firearm with a barrel and a bowl-shaped projectile-loading component. A bigyeokjincheonroe (bombshell) or a danseok (stone ball) could be used as a projectile. According to the Hwaposik eonhae (Korean Translation of the Method of Production and Use of Artillery, 1635) by Yi Seo, mortars were classified into four types according to their size: large, medium, small, and extra-small. A total of three mortars from the Joseon period have survived: one large mortar (Treasure No. 857) and two medium mortars (Treasure Nos. 858 and 859). In this study, the production method for medium mortars was investigated through scientific analysis of the two extant medium mortars, housed respectively in the Jinju National Museum (Treasure No. 858) and the Korea Naval Academy Museum (Treasure No. 859). Since only two medium mortars remain in Korea, their detailed specifications were compared using precise 3D scanning data, and the measurements were compared with the figures in relevant records from the period. According to the investigation, the two mortars showed only a minute difference in overall size, but their weights differed by 5,507 grams; in particular, the location of the wick hole and the length of the handle were distinct. The extant medium mortars closely match the specifications listed in the Hwaposik eonhae. The composition of the medium mortars was analyzed and compared with other bronze gunpowder weapons. Surface composition analysis showed that the medium mortars were made of a ternary Cu-Sn-Pb alloy with average proportions of 85.24, 10.16, and 2.98 wt%, respectively. The material composition of the medium mortars was very similar to the average composition of the Joseon-period small guns analyzed in previous research, and it also resembled that of bronze gun-metal from medieval Europe. The casting technique was investigated based on casting defects on the surface and CT images. Judging by the mold line on the side, the mortars appear to have been cast in a two-part piece-mold with a vertical design, in which molten metal was poured through the end of the chamber while the muzzle sat at the bottom. Chaplets, auxiliary devices that fixed the mold and the core relative to the barrel wall, were identified; they may have been applied to maintain a uniform barrel wall. While the two medium mortars (Treasure Nos. 858 and 859) are highly similar in appearance, the difference in the arrangement of the chaplets suggests that a different mold design was used for each item.

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than humans in many fields, including image and speech recognition. Because AI technologies can be applied across medicine, finance, manufacturing, services, and education, many efforts have been made to identify current technology trends and analyze the directions of development. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and technologies and services that utilize them have increased rapidly; this is regarded as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects created on GitHub from 2000 to July 2018, and applied text mining to the topic information that characterizes the collected projects and their technical fields, thereby examining the development trends of major technologies in detail. The analysis showed that fewer than 100 projects were started per year until 2013, increasing to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects then grew rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, using the appearance frequency of topics to indicate the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that its OSS had been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequently appearing topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras, which showed high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging topics rose to the top of the list even though they were not at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top ten by appearance frequency from 2013 to 2015, it was not in the top ten by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. Examining the trend of technology development through appearance frequency and degree centrality showed that machine learning had the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have shown high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the topics above. Based on these results, it is possible to identify the fields in which AI technologies are actively being developed, and the findings can be used as a baseline dataset for more empirical analysis of future technology trends.
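The two measures in this abstract, topic appearance frequency and degree centrality, can be illustrated with a small sketch. The snippet below builds a toy topic co-occurrence network; the use of Python with networkx and the sample topic lists are assumptions for illustration, not the paper's actual pipeline.

    from collections import Counter
    from itertools import combinations
    import networkx as nx

    # Hypothetical project-topic lists standing in for the GitHub project metadata
    projects = [
        ["machine-learning", "tensorflow", "python"],
        ["deep-learning", "tensorflow", "keras"],
        ["machine-learning", "nlp", "python"],
    ]

    # Appearance frequency: how often each topic occurs across projects
    frequency = Counter(topic for topics in projects for topic in topics)

    # Degree centrality on the topic co-occurrence network
    G = nx.Graph()
    for topics in projects:
        G.add_edges_from(combinations(sorted(set(topics)), 2))
    centrality = nx.degree_centrality(G)

    print(frequency.most_common(5))
    print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])

Ranking topics by these two measures, as in the study, highlights which technologies are both common and well connected to others.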

Application of False Discovery Rate Control in the Assessment of Decrease of FDG Uptake in Early Alzheimer Dementia (조기 알츠하이머 치매의 뇌포도당 대사 감소 평가에서 오류발견률 조절법의 적용)

  • Lee, Dong-Soo;Kang, Hye-Jin;Jang, Myung-Jin;Cho, Sang-Soo;Kang, Won-Jun;Lee, Jae-Sung;Kang, Eun-Joo;Lee, Kang-Uk;Woo, Jong-In;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.6
    • /
    • pp.374-381
    • /
    • 2003
  • Purpose: Determining an appropriate threshold is crucial for FDG PET analysis, since strong control of Type I error could fail to find pathological differences between early Alzheimer's disease (AD) patients and healthy normal controls. We compared SPM results on FDG PET imaging of early AD using an uncorrected p-value, a random-field-based corrected p-value, and false discovery rate (FDR) control. Materials and Methods: Twenty-eight patients with early AD (66±7 years old) and 18 age-matched normal controls (68±6 years old) underwent FDG brain PET. To identify brain regions with hypo-metabolism in the patient group or in individual patients compared to normal controls, the group images or each patient's image were compared with those of normal controls using the same fixed p-value of 0.001 for uncorrected thresholding, random-field-based corrected thresholding, and FDR control. Results: In the group analysis, the number of hypo-metabolic voxels was smallest with the corrected p-value method, largest with the uncorrected p-value method, and intermediate with FDR thresholding. Three patterns of results were found in the individual analyses. In the first, the corrected p-value yielded no positive voxels but FDR gave a few significantly hypo-metabolic voxels (8/28, 29%). In the second, neither the corrected p-value nor FDR yielded any positive region, but numerous positive voxels were found with the uncorrected p-value threshold (6/28, 21%). In the last, FDR detected as many positive voxels as the uncorrected p-value method (14/28, 50%). Conclusion: FDR control could identify hypo-metabolic areas in the group or in individual patients with early AD. We recommend FDR control instead of uncorrected or random-field-corrected thresholding to find areas showing hypo-metabolism, especially in small-group or individual analyses of FDG PET.
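The FDR control compared here is, in its simplest form, the Benjamini-Hochberg step-up procedure. The sketch below applies it to simulated voxel-wise p-values; it is a minimal illustration under assumed parameters, not the SPM implementation used in the paper.

    import numpy as np

    def benjamini_hochberg(p_values, q=0.05):
        """Return a boolean mask of p-values declared significant at FDR level q."""
        p = np.asarray(p_values)
        n = p.size
        order = np.argsort(p)
        ranked = p[order]
        # Find the largest k with p_(k) <= (k/n) * q and reject hypotheses 1..k
        below = ranked <= (np.arange(1, n + 1) / n) * q
        significant = np.zeros(n, dtype=bool)
        if below.any():
            k = np.max(np.where(below)[0])
            significant[order[:k + 1]] = True
        return significant

    # Toy usage on synthetic p-values: 50 "true" effects among 10,000 null voxels
    rng = np.random.default_rng(0)
    p_vals = np.concatenate([rng.uniform(0, 0.001, 50), rng.uniform(0, 1, 10000)])
    print(benjamini_hochberg(p_vals, q=0.05).sum(), "voxels pass FDR control")

Unlike a fixed uncorrected threshold, the rejection threshold here adapts to the observed p-value distribution, which is why FDR can recover signal that family-wise corrected thresholds miss.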

Uncanny Valley Effect in the Animation Character Design - focusing on Avoiding or Utilizing the Uncanny Valley Effect (애니메이션 캐릭터 디자인에서의 언캐니 밸리 효과 연구 - 언캐니 밸리(uncanny valley)의 회피와 이용을 중심으로)

  • Ding, LI;Moon, Hyoun-Sun
    • Cartoon and Animation Studies
    • /
    • s.43
    • /
    • pp.321-342
    • /
    • 2016
  • The "uncanny valley" curve describes measured negative emotional responses as a function of the similarity between an artificially created character and the real human form. The "uncanny valley" effect that often appears in animation character design induces negative responses such as fear, revulsion, and anxiety that designers do not intend. Especially in commercial animation, which relies heavily on public response, such negative reactions are directly related to the failure of the created character. Accordingly, designers adjust the desirability of a character design by avoiding or utilizing the "uncanny valley" effect, producing character effects that contribute to the success of the animation. This paper examined the "uncanny valley" coefficient of character designs that evoke positive emotion, based on analysis of actual character designs and animation works. The "uncanny" concept was first introduced by the physician Ernst Jentsch in 1906; the psychologist Freud applied it to psychological phenomena in 1919, and the Japanese robotics expert Professor Masahiro Mori later presented the "uncanny valley" theory from the viewpoint of perceptual effect. Building on these theories, this paper interprets the "uncanny valley" effect in two aspects: sensory production and emotional expression. The Mickey Mouse character design analysis confirmed the basis for the existence of the "uncanny valley" effect, showing how Mickey Mouse's humanized image imposes the effect on audiences. The animation analysis investigated why a 3D animated character should not be 100% similar to a real human by comparing an animated baby character produced by Pix with data on real babies of the same age. Examples of avoiding or utilizing the "uncanny valley" effect in animation character design were then discussed in detail, and four stages of sensory production and emotional change in the audience caused by this effect were identified. The results can serve as an important reference in deciding the desirability of animation characters.

A Study on the Support Method for Activate Youth Start-ups in University for the Creation of a Start-up Ecosystem: Focused on the Case of Seoul City (지역 청년창업생태계 조성을 위한 대학의 지원방안 탐색: 서울시 사례를 중심으로)

  • Kim, In Sook;Yang, Ji Hee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.17 no.4
    • /
    • pp.57-71
    • /
    • 2022
  • The purpose of this study was to analyze the perceptions and demands of local youth and to explore ways for universities to support the creation of a youth start-up ecosystem. To this end, responses from 509 young people living in Seoul were analyzed regarding their perceptions of and demands for youth start-ups in the region and for university support. The findings are as follows. First, in the analysis of young people's perceptions of youth start-ups in the region, the "Youth Start-up Program" showed the highest demand among university-based regional programs. There was also a strong perception that the image of youth start-ups in the region was "challenging" and "well suited to changing times." Second, the analysis of demand for support for youth start-ups in the region showed, in order, mentoring, start-up education, and the creation of start-up spaces, with needs differing by age group. Third, results were derived regarding the demand for university support for creating a regional youth start-up ecosystem, the criteria for selecting local youth start-up support organizations, and the preferred period of participation in local youth start-up support. Based on these results, the implications and suggestions for university support for creating a regional youth start-up ecosystem are as follows. First, it is necessary to develop and operate sustainable symbiotic mentoring programs centered on the university's infrastructure and regional partnership. Second, it is necessary to develop and utilize step-by-step, systematic microlearning content based on a needs analysis of prospective young founders. Third, it is necessary to create an open youth start-up base space in universities for local residents and link it with start-up programs inside and outside the university. The results of this study are expected to serve as basic data for establishing policies for supporting youth start-ups and for establishing and operating university strategies for supporting youth start-ups.

The Usability Analysis of 3D-CRT, IMRT, Tomotherapy Radiation Therapy on Nasopharyngeal Cancer (NPC의 방사선치료시 3D-CRT, IMRT, Tomotherapy의 유용성 분석)

  • Song, Jong-Nam;Kim, Young-Jae;Hong, Seung-Il
    • Journal of the Korean Society of Radiology
    • /
    • v.6 no.5
    • /
    • pp.365-371
    • /
    • 2012
  • Radiation therapy techniques have developed from 3D-CRT through IMRT to tomotherapy, and these three techniques are the most widely used methods. We compared normal tissue doses and tumor doses of 3D-CRT, linac-based IMRT, and tomotherapy for head and neck cancer. Radiological images of an anthropomorphic phantom were acquired by CT simulation (slice thickness: 3 mm); the GTV was the nasopharyngeal region and the PTV (including set-up margin) was the GTV plus a 2 mm margin. The images were transferred to the treatment planning systems (3D-CRT: ADAC Pinnacle3; tomotherapy: TomoTherapy Hi-Art System). The prescription dose was 7,020 cGy, and the doses to the PTV and to normal tissues (parotid gland, oral cavity, spinal cord) were measured. The PTV doses were 6,923 cGy for tomotherapy, 6,901 cGy for linac-based IMRT, and 6,718 cGy for 3D-CRT; all met the TCP criterion because each was at least 95% of the 7,020 cGy prescription. Normal tissue doses were 1,966 cGy (tomotherapy), 2,405 cGy (IMRT), and 2,468 cGy (3D-CRT) for the parotid gland; 2,991 cGy (tomotherapy), 3,062 cGy (IMRT), and 3,684 cGy (3D-CRT) for the oral cavity; and 1,768 cGy (tomotherapy), 2,151 cGy (IMRT), and 4,031 cGy (3D-CRT) for the spinal cord. None of these values exceeded the NTCP limits. All treatment techniques were thus evaluated in terms of tumor and normal tissue doses. 3D-CRT showed the poorest dose distribution among the techniques but remained acceptable in terms of TCP and NTCP, while tomotherapy and IMRT, whose dose distributions were relatively superior, are difficult to apply to claustrophobic patients and patients with respiratory failure. In particular, tomotherapy requires an MVCT scan before treatment, which constitutes additional radiation exposure to the patient. In conclusion, tomotherapy was the best treatment technique, followed by IMRT and then 3D-CRT, but the choice may differ according to the patient's condition even when the dose differences are not decisive.
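As a quick arithmetic check of the 95% criterion cited above, the short sketch below computes each technique's PTV dose as a fraction of the 7,020 cGy prescription, using the values quoted in the abstract.

    # PTV doses from the abstract, checked against the 95%-of-prescription criterion
    prescription_cgy = 7020
    ptv_dose_cgy = {"Tomotherapy": 6923, "IMRT": 6901, "3D-CRT": 6718}

    for technique, dose in ptv_dose_cgy.items():
        ratio = dose / prescription_cgy
        status = "meets" if ratio >= 0.95 else "below"
        print(f"{technique}: {ratio:.1%} of prescription ({status} the 95% criterion)")

All three ratios (roughly 98.6%, 98.3%, and 95.7%) clear the 95% level, consistent with the abstract's TCP statement.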

Error Analysis of Delivered Dose Reconstruction Using Cone-beam CT and MLC Log Data (콘빔 CT 및 MLC 로그데이터를 이용한 전달 선량 재구성 시 오차 분석)

  • Cheong, Kwang-Ho;Park, So-Ah;Kang, Sei-Kwon;Hwang, Tae-Jin;Lee, Me-Yeon;Kim, Kyoung-Joo;Bae, Hoon-Sik;Oh, Do-Hoon
    • Progress in Medical Physics
    • /
    • v.21 no.4
    • /
    • pp.332-339
    • /
    • 2010
  • We aimed to set up an adaptive radiation therapy platform using cone-beam CT (CBCT) and multileaf collimator (MLC) log data, and to analyze the trend of dose calculation errors during the procedure based on a phantom study. We took CT and CBCT images of a Catphan-600 phantom (The Phantom Laboratory, USA) and made a simple step-and-shoot intensity-modulated radiation therapy (IMRT) plan based on the CT. The original plan doses were recalculated on the CT (CT_plan) and on the CBCT (CBCT_plan). The delivered monitor unit weights and leaf positions for each MLC segment were extracted from the MLC log data, and the delivered doses were reconstructed on the CT (CT_recon) and the CBCT (CBCT_recon) using the extracted information. Dose calculation errors were evaluated by two-dimensional dose differences (with CT_plan as the benchmark), the gamma index, and dose-volume histograms (DVHs). The dose differences and DVHs suggested that the delivered dose was slightly, but insignificantly, greater than the planned dose. The gamma index results showed that dose calculation errors on the CBCT, whether using planned or reconstructed data, were relatively greater than those of the CT-based calculation. In addition, there were noticeable discrepancies at the edge of each beam, although these were smaller than the errors due to the inconsistency between CT and CBCT. CBCT_recon showed the coupled effect of the two kinds of errors above; however, the total error decreased even though the overall uncertainty in evaluating the delivered dose on the CBCT increased. Therefore, it is necessary to evaluate dose calculation errors separately as the set-up error, the dose calculation error due to CBCT image quality, and the reconstructed-dose error, which is what we actually want to know.
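The gamma index used above combines a dose-difference tolerance with a distance-to-agreement tolerance. The sketch below is a minimal one-dimensional, globally normalized version evaluated on synthetic profiles; the 3%/3 mm criteria and the test data are assumptions, not parameters reported in the study.

    import numpy as np

    def gamma_index_1d(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
        """1D gamma index with global normalization; both profiles share one grid."""
        x = np.arange(len(dose_ref)) * spacing_mm
        dose_criterion = dose_tol * dose_ref.max()      # absolute dose tolerance (cGy)
        gamma = np.empty(len(dose_ref))
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            dist_term = ((x - xi) / dist_tol_mm) ** 2
            dose_term = ((dose_eval - di) / dose_criterion) ** 2
            gamma[i] = np.sqrt(np.min(dist_term + dose_term))
        return gamma

    # Synthetic reference profile and a delivery scaled up by 2%
    ref = 200.0 * np.exp(-np.linspace(-3, 3, 121) ** 2)   # cGy
    delivered = 1.02 * ref
    g = gamma_index_1d(ref, delivered, spacing_mm=1.0)
    print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")

A point passes when its gamma value is at most 1, so the pass rate summarizes how well the reconstructed delivery agrees with the plan.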

A Study on the Change in the Representation of Father Involvement in Home Economics Textbook (가정과 교과서에 나타난 아버지 역할의 변화)

  • Kim, Youn-Jung;Lee, Soo-Hee;Sohn, Sang-Hee
    • Journal of Korean Home Economics Education Association
    • /
    • v.26 no.2
    • /
    • pp.31-49
    • /
    • 2014
  • The purpose of this study is to examine how father involvement has been presented in Home Economics textbooks alongside the development of a gender-equal society, and to provide basic data for developing a standard for father involvement from the viewpoint of gender equality. To this end, the father involvement depicted in the main text, photos, and illustrations of Home Economics textbooks was examined. A total of 34 Home Economics textbooks, written under the curricula from the 1st Curriculum through the 2007 Revised Curriculum, were analyzed with respect to the contents and quantity of the text, supplementary materials, photos, and illustrations. The results of the analysis are as follows. First, the textbooks based on the 1st to 3rd Curricula described only the traditional father role, and their photos and illustrations did not specifically depict the role of the father. Second, the textbooks based on the 4th and 5th Curricula began to show changes, such as images of the father sharing household responsibilities. Third, the textbooks based on the 6th Curriculum suggested more active involvement of the father, such as equal responsibility for the upbringing of children and shared responsibility for child care and housework. Fourth, the textbooks based on the 7th Curriculum through the 2007 Revised Curriculum emphasized the father's involvement in bringing up children; in particular, a variety of contents, including the domestic responsibilities of the father, the correction of the "work-first" attitude, and gender equality, were presented to further promote a gender-equal society. These results show that contents related to gender equality and descriptions of the father's role from the viewpoint of gender equality have been steadily increasing in Home Economics textbooks. However, problems remain, such as a gendered division of the father's involvement within the family and merely temporary responses to social demands. Open debate between experts in Home Economics education and experts in family life may be required to address these problems.


Terrain Shadow Detection in Satellite Images of the Korean Peninsula Using a Hill-Shade Algorithm (음영기복 알고리즘을 활용한 한반도 촬영 위성영상에서의 지형그림자 탐지)

  • Hyeong-Gyu Kim;Joongbin Lim;Kyoung-Min Kim;Myoungsoo Won;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.637-654
    • /
    • 2023
  • In recent years, the number of users has been increasing with the rapid development of Earth observation satellites. In response, the Committee on Earth Observation Satellites (CEOS) has been striving to provide user-friendly satellite images by introducing the concept of Analysis Ready Data (ARD) and defining its requirements as CEOS ARD for Land (CARD4L). In ARD, a mask called an Unusable Data Mask (UDM), identifying pixels unsuitable for land analysis, should be provided with each satellite image. UDMs cover clouds, cloud shadows, terrain shadows, and so on. Terrain shadows occur in mountainous terrain with large relief, and these areas cause errors in analysis because of their low radiance. Previous research on terrain shadow detection focused on detecting terrain shadow pixels in order to correct them; however, that correction can be handled by terrain correction methods, so the purpose of terrain shadow detection needs to be broadened. In this study, to utilize CAS500-4 for forest and agricultural analysis, we extended the scope of terrain shadow detection to shaded areas in general. This paper analyzes the potential of terrain shadow detection for producing a terrain shadow mask over South and North Korea. To detect terrain shadows, we used a hill-shade algorithm that combines the position of the sun with surface derivatives such as slope and aspect. Using RapidEye images with a spatial resolution of 5 m and Sentinel-2 images with a spatial resolution of 10 m over the Korean Peninsula, the optimal threshold for shadow determination was identified by comparison with the ground truth, and terrain shadow detection was then performed and analyzed with that threshold. Qualitatively, the detected shadows were similar in shape to the ground truth overall. In addition, F1 scores were mostly between 0.8 and 0.94 for all images tested. Based on these results, it was confirmed that automatic terrain shadow detection performed well throughout the Korean Peninsula.
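The hill-shade value referred to above is a standard function of the sun position and the local slope and aspect. The sketch below is a minimal version computed from a DEM with numpy; the aspect convention, the threshold value, and the placeholder DEM are assumptions for illustration, not the thresholds derived in the paper.

    import numpy as np

    def hillshade(dem, cellsize_m, sun_azimuth_deg, sun_elevation_deg):
        """Hill-shade in [0, 1] from a DEM; low values are terrain shadow candidates."""
        azimuth = np.radians(sun_azimuth_deg)
        zenith = np.radians(90.0 - sun_elevation_deg)
        # Surface derivatives: slope and downslope aspect from the elevation gradients
        dz_dy, dz_dx = np.gradient(dem, cellsize_m)
        slope = np.arctan(np.hypot(dz_dx, dz_dy))
        aspect = np.arctan2(-dz_dx, -dz_dy)   # assumes x east, y north; 0 = north, clockwise
        shade = (np.cos(zenith) * np.cos(slope)
                 + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
        return np.clip(shade, 0.0, 1.0)

    # Hypothetical usage: pixels below a tuned threshold form the terrain shadow mask
    dem = 500.0 * np.random.rand(200, 200)               # placeholder elevations (m)
    shadow_mask = hillshade(dem, cellsize_m=5.0,
                            sun_azimuth_deg=160.0, sun_elevation_deg=35.0) < 0.25

In the study, the threshold separating shadow from non-shadow was tuned against ground truth; the same comparison could be repeated here by sweeping the cutoff and scoring the resulting mask with an F1 metric.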

A Web-based 'Patterns of Care Study' System for Clinical Radiation Oncology in Korea: Development, Launching, and Characteristics (우리나라 임상방사선종양을 위한 웹 기반 PCS 시스템의 개발과 특성)

  • Kim, Il Han;Chie, Eui Kyu;Oh, Do Hoon;Suh, Chang-Ok;Kim, Jong Hoon;Ahn, Yong Chan;Hur, Won-Joo;Chung, Woong Ki;Choi, Doo Ho;Lee, Jae Won
    • Radiation Oncology Journal
    • /
    • v.21 no.4
    • /
    • pp.291-298
    • /
    • 2003
  • Purpose: We report on a web-based system for the Patterns of Care Study (PCS) devised for Korean radiation oncology. The PCS was designed to establish standard tools for clinical quality assurance, to determine basic parameters of radiation oncology processes, and to offer a solid system for cooperative clinical studies and a useful standard database for comparison with other national databases. Materials and Methods: The system consists of a main server with two back-ups in other locations. The program uses a Linux operating system and a MySQL database. Cancers treated with high frequency in Korean radiotherapy departments from 1998 to 1999 were given developmental priority. Results: The web-based clinical PCS system for radiotherapy at www.pcs.re.kr was developed in early 2003 for cancers of the breast, rectum, esophagus, larynx, and lung, and for brain metastasis. The total number of PCS study items exceeds one thousand. The system features user-friendliness, double entry checking, data security, encryption, hard disc mirroring, double back-up, and statistical analysis. Alphanumeric data can be entered as well as image data. In addition, programs were constructed for IRB submission, random sampling of data, and departmental structure. Conclusion: For the first time in the field of PCS, we have developed a web-based system and the associated working programs. With this system, we can gather sample data in a short period and thus save cost, effort, and time. Data audits should be performed to validate the input data. We propose that this system be considered a standard method for PCS or similar data collection systems.
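Of the features listed, double entry checking is the most readily illustrated. The sketch below is a hypothetical Python illustration of comparing two independently entered records for the same case; the field names are invented for the example and are not taken from the paper's data items.

    def double_entry_discrepancies(entry_a: dict, entry_b: dict) -> dict:
        """Return the fields whose values differ between two independent entries."""
        fields = entry_a.keys() | entry_b.keys()
        return {field: (entry_a.get(field), entry_b.get(field))
                for field in fields
                if entry_a.get(field) != entry_b.get(field)}

    # Hypothetical usage: the second data-entry pass disagrees on one field
    first_pass = {"tumor_site": "breast", "total_dose_cGy": 5040, "fractions": 28}
    second_pass = {"tumor_site": "breast", "total_dose_cGy": 5400, "fractions": 28}
    print(double_entry_discrepancies(first_pass, second_pass))
    # -> {'total_dose_cGy': (5040, 5400)}

Flagging such mismatches for re-review is one common way a double-entry workflow guards against transcription errors.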