• Title/Summary/Keyword: Python 3

Comparative Evaluation of 18F-FDG Brain PET/CT AI Images Obtained Using Generative Adversarial Network (생성적 적대 신경망(Generative Adversarial Network)을 이용하여 획득한 18F-FDG Brain PET/CT 인공지능 영상의 비교평가)

  • Kim, Jong-Wan; Kim, Jung-Yul; Lim, Han-sang; Kim, Jae-sam
    • The Korean Journal of Nuclear Medicine Technology / v.24 no.1 / pp.15-19 / 2020
  • Purpose: A Generative Adversarial Network (GAN) is a deep learning technique that learns from real images and then generates realistic synthetic images. In this study, artificial intelligence images produced by a GAN were compared with images acquired at the full scan time, to assess whether the technique is potentially useful. Materials and Methods: For 30 patients who underwent 18F-FDG Brain PET/CT at Severance Hospital, data were acquired in 15-minute list mode and reconstructed into 1-, 2-, 3-, 4-, 5-, and 15-minute images. Images from 25 of the 30 patients were used to train the GAN, and images from the remaining 5 patients were used to verify the trained model. The program was implemented using Python and the TensorFlow framework. After training with the Pix2Pix model, the learned model generated artificial intelligence images, which were evaluated against the real scan-time images using Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM). Results: The trained model was evaluated on the verification images. The 15-minute images generated from the 5-minute input images showed a smaller MSE, and higher PSNR and SSIM, than those generated from the 1-minute images acquired after the start of the scan. Conclusion: This study confirmed that AI imaging technology is applicable. If such technologies are applied to nuclear medicine imaging in the future, images could be acquired with short scan times, which can be expected to reduce artifacts caused by patient movement and to increase the efficiency of the scanning room.
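
A minimal sketch of the evaluation step described above, computing MSE, PSNR, and SSIM with NumPy and scikit-image (the paper does not publish its code; the arrays here are random stand-ins for the generated and reference images):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-ins for a GAN-generated image and the real 15-minute reference,
# both normalized to the same intensity range [0, 1].
generated = np.random.rand(128, 128).astype(np.float32)
reference = np.random.rand(128, 128).astype(np.float32)

mse = np.mean((generated - reference) ** 2)                           # Mean Square Error
psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)  # PSNR in dB
ssim = structural_similarity(reference, generated, data_range=1.0)    # SSIM in [0, 1]

print(f"MSE={mse:.5f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```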

What Concerns Does ChatGPT Raise for Us?: An Analysis Centered on CTM (Correlated Topic Modeling) of YouTube Video News Comments (ChatGPT는 우리에게 어떤 우려를 초래하는가?: 유튜브 영상 뉴스 댓글의 CTM(Correlated Topic Modeling) 분석을 중심으로)

  • Song, Minho; Lee, Soobum
    • Informatization Policy / v.31 no.1 / pp.3-31 / 2024
  • This study examined public concerns in South Korea, considering the country's unique context, triggered by the advent of generative artificial intelligence such as ChatGPT. To achieve this, comments on 102 YouTube news videos covering related ethical issues were collected with a Python scraper, and morphological analysis and preprocessing were carried out on the 15,735 comments using Textom. The comments were then analyzed with a Correlated Topic Model (CTM). The analysis identified six primary topics: "Legal and Ethical Considerations"; "Intellectual Property and Technology"; "Technological Advancement and the Future of Humanity"; "Potential of AI in Information Processing"; "Emotional Intelligence and Ethical Regulations in AI"; and "Human Imitation." Structuring these topics by correlation coefficients above 10% revealed three main categories: "Legal and Ethical Considerations"; "Issues Related to Data Generation by ChatGPT" (Intellectual Property and Technology; Potential of AI in Information Processing; Human Imitation); and "Fear for the Future of Humanity" (Technological Advancement and the Future of Humanity; Emotional Intelligence and Ethical Regulations in AI). The study confirmed that various concerns coexist with the growing interest in generative AI like ChatGPT, including worries specific to the historical and social context of South Korea. These findings suggest the need for national-level efforts to ensure data fairness.
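
For illustration, a correlated topic model of the kind used in the paper can be fitted with the tomotopy library (an assumption: the authors name only Textom for preprocessing, not their CTM implementation; the tiny corpus below is a hypothetical stand-in for the tokenized comments):

```python
import tomotopy as tp

# Tiny stand-in corpus: lists of Korean morphemes per comment, as produced
# by the morphological-analysis step.
docs = [
    ["인공지능", "윤리", "규제", "법"],
    ["저작권", "데이터", "생성", "기술"],
    ["미래", "인류", "공포", "기술"],
]

model = tp.CTModel(k=6, seed=42)  # correlated topic model with 6 topics, as in the paper
for words in docs:
    model.add_doc(words)
model.train(1000)  # Gibbs-sampling iterations

for t in range(model.k):
    print(t, model.get_topic_words(t, top_n=5))

# Topic-topic correlations; the paper grouped topics whose correlation
# exceeded 10% into three higher-level categories.
print(model.get_correlations(0))
```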

Development of a Face Detection and Recognition System Using a RaspberryPi (라즈베리파이를 이용한 얼굴검출 및 인식 시스템 개발)

  • Kim, Kang-Chul; Wei, Hai-tong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.12 no.5 / pp.859-864 / 2017
  • IoT is an emerging technology leading the 4th industrial revolution and has been widely used in industry and at home to improve quality of life. In this paper, an IoT-based face detection and recognition system for a smart elevator is developed. A Haar cascade classifier is used in the face detection system, and a proposed PCA algorithm written in Python is implemented in the face recognition system to reduce the execution time of calculating the eigenfaces. An SVM or the Euclidean metric is used to recognize the faces found by the detection system. The proposed system runs on a Raspberry Pi 3. 200 sample images from the ORL face database are used for training and 200 samples for testing. The simulation results show that the recognition rate is over 93% for PP+EU and over 96% for PP+SVM. The execution times of the proposed PCA and the conventional PCA are 0.11 s and 1.1 s respectively, so the proposed PCA is roughly ten times faster than the conventional one. The proposed system is suitable for an elevator monitoring system, a real-time home security system, and similar applications.
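
A minimal sketch of the detection stage using OpenCV's pretrained frontal-face Haar cascade (the technique the paper names; the input image path is hypothetical):

```python
import cv2

# Load OpenCV's pretrained frontal-face Haar cascade (shipped with opencv-python).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Hypothetical input frame, e.g. captured by a Raspberry Pi camera.
frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns (x, y, w, h) boxes for detected faces.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", frame)
```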

A Study on the Use of Location Data for Exploring Infant's Peer Relationships in Free-Choice Play Activities (자유선택놀이 활동에서 유아 또래관계 탐색을 위한 위치데이터 활용 방안 연구)

  • Kim, Jeong Kyoum; Lee, Sang-Seon
    • Journal of the Korea Academia-Industrial Cooperation Society / v.21 no.9 / pp.466-472 / 2020
  • The purpose of this study is to explore how location data can be used to examine infants' peer relationships in free-choice play activities. Location data were collected using wearable devices from 14 infants in one class at an early childhood education institution in Chungnam. In pre-processing, a smoothing technique was applied to recover values lost during collection, and the data were visualized using Python's Matplotlib. The movement distance of each infant, the distances between infants, and the types of interaction were then extracted from the location data using distance formulas. As a result, it was possible to derive 1) changes in movement distance with cumulative and average values, 2) changes in the distances between infants and their average values, and 3) changes and trends in interaction type over time. These results can provide valuable information on how infants' peer groups form in situations where it is difficult for a teacher to closely observe all members, and can be used as meaningful information for the design and operation of educational programs.
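
A minimal sketch of the distance computations described, assuming a hypothetical (time, child, x/y) position array from the wearable tags (the paper's exact formulas and data are not published):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical positions: 100 time samples, 14 children, (x, y) in metres.
positions = np.random.rand(100, 14, 2) * 10.0

# Per-child movement distance at each step, and its cumulative sum.
steps = np.linalg.norm(np.diff(positions, axis=0), axis=2)  # shape (99, 14)
cumulative = steps.cumsum(axis=0)

# Pairwise distance between child 0 and child 1 over time.
pair_dist = np.linalg.norm(positions[:, 0] - positions[:, 1], axis=1)

plt.plot(pair_dist)
plt.xlabel("time step")
plt.ylabel("distance between child 0 and child 1 (m)")
plt.savefig("pair_distance.png")
```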

Computational Optimization of Bioanalytical Parameters for the Evaluation of the Toxicity of the Phytomarker 1,4-Naphthoquinone and its Metabolite 1,2,4-Trihydroxynaphthalene

  • Gopal, Velmani; AL Rashid, Mohammad Harun; Majumder, Sayani; Maiti, Partha Pratim; Mandal, Subhash C
    • Journal of Pharmacopuncture / v.18 no.2 / pp.7-18 / 2015
  • Objectives: Lawsone (1,4-naphthoquinone) is a non-redox-cycling compound that can be catalyzed by DT-diaphorase (DTD) into 1,2,4-trihydroxynaphthalene (THN), which can generate reactive oxygen species by auto-oxidation. The purpose of this study was to evaluate the toxicity of the phytomarker 1,4-naphthoquinone and its metabolite THN by using the molecular docking program AutoDock 4. Methods: The 3D structures of hydrogen peroxide (H2O2), nitric oxide synthase (NOS), catalase (CAT), glutathione (GSH), glutathione reductase (GR), glucose 6-phosphate dehydrogenase (G6PDH) and nicotinamide adenine dinucleotide phosphate hydrogen (NADPH) were drawn using HyperChem drawing tools, and the energy of all pdb files was minimized in HyperChem by MM+ followed by a semi-empirical (PM3) method. The docking process was studied with the ligand molecules to identify suitable binding at the protein binding sites through simulated annealing and genetic algorithms. The program AutoDock Tools (ADT), released as an extension suite to the Python Molecular Viewer, was used to prepare the proteins and ligands. Grids centered on the active sites were obtained with dimensions of 54 × 55 × 56 points, and a grid spacing of 0.503 was used. Parameters were adopted from comparisons of global and local search methods in drug docking: a maximum number of 250,000 energy evaluations, a maximum number of 27,000 generations, and mutation and crossover rates of 0.02 and 0.8. The number of docking runs was set to 10. Results: Lawsone and THN can be considered to bind efficiently with NOS, CAT, GSH, GR, G6PDH and NADPH, as confirmed through their hydrogen-bond affinity with the respective amino acids. Conclusion: Naphthoquinone derivatives of lawsone, which can be metabolized into THN by the catalyst DTD, were examined. Lawsone and THN were found to be identically potent molecules in their affinities for the selected proteins.
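
The docking parameters listed above can be written down in standard AutoDock 4 grid (GPF) and docking (DPF) parameter-file keywords; the following is a hedged reconstruction from the abstract, not the authors' actual files:

```python
# Hypothetical reconstruction of the AutoDock 4 settings described above,
# emitted as grid (GPF) and docking (DPF) parameter files using standard
# AutoDock keyword names.
gpf_lines = [
    "npts 54 55 56",   # grid dimensions in points
    "spacing 0.503",   # grid spacing
]
dpf_lines = [
    "ga_num_evals 250000",       # max number of energy evaluations
    "ga_num_generations 27000",  # max number of generations
    "ga_mutation_rate 0.02",     # mutation rate
    "ga_crossover_rate 0.8",     # crossover rate
    "ga_run 10",                 # number of docking runs
]
with open("grid.gpf", "w") as f:
    f.write("\n".join(gpf_lines) + "\n")
with open("dock.dpf", "w") as f:
    f.write("\n".join(dpf_lines) + "\n")
```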

Development of an Open Source-based Spatial Analysis Tool for Storm and Flood Damage (풍수해 대비 오픈소스 기반 공간분석 도구 개발)

  • Kim, Minjun; Lee, Changgyu; Hwang, Suyeon; Ham, Jungsoo; Choi, Jinmu
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1435-1446 / 2021
  • Typhoons cause extensive wind and flood damage on the Korean Peninsula every year. To minimize the damage, preliminary analysis of damage estimation and evacuation routes is required for rapid decision-making. This study developed an analysis module that can provide the information needed at each disaster stage. For the preparation stage, we developed a function that checks past typhoon routes and the damage associated with routes similar to that of a typhoon heading north, a function that extracts areas at risk of isolation, and a function that extracts reservoir-collapse areas. For the early response and recovery stages, the module includes a function that extracts the expected flooding range given the current flooding depth, a function that analyzes the expected damage to population, buildings, and farmland, and a function that provides evacuation information. In addition, an automated web-map creation method was proposed to present the analysis results. The analysis functions were developed and modularized with open-source Python, and the web display function was implemented in JavaScript. The tools developed in this study are expected to support rapid decision-making in the early stages of monitoring for storm and flood damage.
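
An automated web-map output of the kind proposed might look like the following folium sketch (an assumption: the paper implements its web display in JavaScript and does not name folium; the GeoJSON file and coordinates are hypothetical):

```python
import folium

# Base map centred on South Korea.
m = folium.Map(location=[36.5, 127.5], zoom_start=7)

# Hypothetical analysis result: a GeoJSON file with the expected flooding range.
folium.GeoJson("expected_flood_extent.geojson", name="expected flooding").add_to(m)
folium.Marker([36.35, 127.38], popup="evacuation shelter").add_to(m)
folium.LayerControl().add_to(m)

m.save("flood_map.html")  # self-contained HTML web map
```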

A Case Study of Basic Data Science Education using Public Big Data Collection and Spreadsheets for Teacher Education (교사교육을 위한 공공 빅데이터 수집 및 스프레드시트 활용 기초 데이터과학 교육 사례 연구)

  • Hur, Kyeong
    • Journal of The Korean Association of Information Education / v.25 no.3 / pp.459-469 / 2021
  • This paper presents a case study of basic data science education for in-service and pre-service teachers. Spreadsheet software was used as the data collection and analysis tool, followed by training on statistics for data processing, predictive hypotheses, and predictive-model verification. An educational case was proposed in which thousands of public big-data records are collected and processed to verify population-prediction hypotheses and prediction models. A 34-hour, 17-week curriculum covering this basic data science content with a spreadsheet tool was presented. As a tool for data collection, processing, and analysis, a spreadsheet, unlike Python, carries no burden of learning programming languages and data structures, and has the advantage of letting learners visually study the theory of processing and analyzing qualitative and quantitative data. In this case study, three predictive hypothesis tests were presented and analyzed. First, quantitative public data were collected to verify a hypothesis predicting the difference in mean values between groups of a population. Second, qualitative public data were collected to verify a hypothesis predicting an association within the qualitative data of the population. Third, quantitative public data were collected to verify a regression prediction model under a hypothesis of correlation within the quantitative data of the population. A satisfaction analysis of the pre-service and in-service teachers confirmed the effectiveness of this educational case in data science education.
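
For reference, the three hypothesis tests taught with spreadsheet functions have close Python equivalents in SciPy (the course itself deliberately avoided Python; the data here are random stand-ins for the public datasets):

```python
import numpy as np
from scipy import stats

# 1) Difference in group means (two-sample t-test on quantitative data).
group_a = np.random.normal(50, 5, 200)
group_b = np.random.normal(52, 5, 200)
print(stats.ttest_ind(group_a, group_b))

# 2) Association within qualitative data (chi-squared test on a contingency table).
table = np.array([[30, 70], [55, 45]])
print(stats.chi2_contingency(table)[:2])  # statistic, p-value

# 3) Regression prediction model from correlated quantitative data.
x = np.random.rand(200)
y = 3.0 * x + np.random.normal(0, 0.1, 200)
print(stats.linregress(x, y))
```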

A Study on the Development of Flight Prediction Model and Rules for Military Aircraft Using Data Mining Techniques (데이터 마이닝 기법을 활용한 군용 항공기 비행 예측모형 및 비행규칙 도출 연구)

  • Yu, Kyoung Yul; Moon, Young Joo; Jeong, Dae Yul
    • The Journal of Information Systems / v.31 no.3 / pp.177-195 / 2022
  • Purpose: This paper aims to support full operational readiness by establishing optimal flight plans that take weather conditions into account, so that military aircraft can perform their missions and operations effectively. It proposes a flight prediction model and flight rules derived by analyzing the correlation between flight implementation or cancellation and weather conditions, using big data built from the historical flight information of Korean-manufactured military aircraft and meteorological information from the Korea Meteorological Administration. By deriving flight rules from the weather information, we could also identify an efficient way to establish flight schedules that considers the weather. Design/methodology/approach: This is an analytic study applying data mining techniques to the flight history of 44,558 military aircraft flights accumulated by the Republic of Korea Air Force over 36 months, from January 2013 to December 2015, together with meteorological information provided by the Korea Meteorological Administration. Four steps were taken to develop optimal flight prediction models and to derive rules for flight implementation and cancellation. First, ten independent variables and one dependent variable were used to develop the optimal model for flight implementation according to weather conditions. Second, candidate flight prediction models were derived using data mining algorithms: logistic regression, AdaBoost, KNN, random forest, and LightGBM. Third, we collected the opinions of military aircraft pilots with more than 25 years of experience and evaluated the importance of the independent variables using a Python heatmap, in order to develop rules for flight implementation and cancellation according to weather conditions. Finally, a decision tree model was constructed, and flight rules were derived to see how the weather conditions at each airport affect flight implementation and cancellation. Findings: Flight prediction models were developed from the historical flight information and the weather information for each flight zone using data mining techniques. When the optimal model was selected for each airbase, the LightGBM algorithm showed the best prediction performance in terms of recall. Checking the flight rules against the weather conditions confirmed that precipitation, humidity, and total cloud cover had a significant effect on flight cancellation, whereas the effect of visibility was relatively insignificant. When establishing a flight schedule, these rules can provide insight for planning flight training more systematically and effectively.
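
A minimal sketch of the best-performing model family, a LightGBM classifier evaluated on recall (features, labels, and sizes are random stand-ins for the flight and weather data):

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Hypothetical weather features (e.g. precipitation, humidity, cloud cover)
# and a fly/cancel label, standing in for the 44,558-flight dataset.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LGBMClassifier(n_estimators=200)  # gradient-boosted decision trees
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("recall:", recall_score(y_test, pred))  # the metric highlighted in the paper
```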

A Study on Deep Learning Model for Discrimination of Illegal Financial Advertisements on the Internet

  • Kil-Sang Yoo; Jin-Hee Jang; Seong-Ju Kim; Kwang-Yong Gim
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.21-30 / 2023
  • This study proposes a model that uses Python-based deep learning text classification to detect illegal financial advertising posts on the internet. Such posts promote unlawful financial activities, including the trading of bank accounts, credit card fraud, cashing out through mobile payments, and the sale of personal credit information; despite the efforts of financial regulatory authorities, they remain prevalent. The proposed model is intended to help identify and detect illicit content in internet-based illegal financial advertising and thus to support the ongoing efforts to combat such activities. The study uses convolutional neural networks (CNN) and recurrent neural networks (RNN, LSTM, GRU), which are commonly used text classification techniques. The raw data for the model are based on manually confirmed regulatory judgments. By tuning the hyperparameters of the Korean natural language processing and deep learning models, an optimized model with the best performance was obtained. This research is significant in that it presents a deep learning model for discerning illegal internet financial advertising, which has not been explored previously. With an accuracy of 91.3% to 93.4%, the model offers hope for practical application to the task of detecting illicit financial advertisements, ultimately contributing to their eradication.
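
A minimal sketch of one model family named in the study, an LSTM text classifier in Keras (layer sizes, vocabulary, and the random stand-in data are illustrative, not the paper's tuned hyperparameters):

```python
import tensorflow as tf

VOCAB, MAXLEN = 20000, 100  # illustrative vocabulary size and sequence length

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),          # token embeddings
    tf.keras.layers.LSTM(64),                       # recurrent encoder
    tf.keras.layers.Dense(1, activation="sigmoid"), # illegal vs. legal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x: integer-encoded token sequences (e.g. from a Korean tokenizer); y: labels.
x = tf.random.uniform((32, MAXLEN), maxval=VOCAB, dtype=tf.int32)
y = tf.random.uniform((32, 1), maxval=2, dtype=tf.int32)
model.fit(x, y, epochs=1)
```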

A Case Study on Metadata Extraction for Records Management Using ChatGPT (챗GPT를 활용한 기록관리 메타데이터 추출 사례연구)

  • Minji Kim; Sunghee Kang; Hae-young Rieh
    • Journal of Korean Society of Archives and Records Management / v.24 no.2 / pp.89-112 / 2024
  • Metadata is a crucial component of records management, playing a vital role in properly managing and understanding records. Where automatic metadata assignment is not feasible, manual input by records professionals becomes necessary. This study aims to ease the burden of manual entry by proposing a method that harnesses ChatGPT technology to extract records management metadata elements. To employ ChatGPT technology, a Python program using the LangChain library was developed. The program was designed to analyze PDF documents and extract metadata from records through questions, both with a locally run LangChain setup and with the ChatGPT online service. Multiple PDF documents were put through this process to test the effectiveness of the metadata extraction. The results revealed that while using LangChain with ChatGPT-3.5-turbo provided a secure environment, it showed some limitations in accurately retrieving metadata elements. Conversely, the ChatGPT-4 online service yielded relatively accurate results, although it could not be given sensitive documents for security reasons. This exploration underscores the potential of ChatGPT technology for extracting metadata in records management. As ChatGPT-related technologies advance, safer and more accurate results are expected. Leveraging these advantages can significantly enhance the efficiency and productivity of records and metadata management in archives.
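
A hedged sketch of the kind of pipeline described: load a PDF with LangChain and ask a GPT-3.5-turbo model for metadata fields. LangChain's API has shifted across versions; this assumes the classic langchain 0.0.x layout, and the file name and field list are hypothetical:

```python
from langchain.document_loaders import PyPDFLoader
from langchain.chat_models import ChatOpenAI

# Load a record as one Document per page and join the page text.
pages = PyPDFLoader("record.pdf").load()
text = "\n".join(p.page_content for p in pages)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)  # requires OPENAI_API_KEY
prompt = (
    "Extract the following records-management metadata from the document "
    "and answer as JSON: title, creator, date, document type.\n\n" + text[:6000]
)
print(llm.predict(prompt))
```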