• Title/Summary/Keyword: Multiple intelligence


Prediction of Patient Management in COVID-19 Using Deep Learning-Based Fully Automated Extraction of Cardiothoracic CT Metrics and Laboratory Findings

  • Thomas Weikert;Saikiran Rapaka;Sasa Grbic;Thomas Re;Shikha Chaganti;David J. Winkel;Constantin Anastasopoulos;Tilo Niemann;Benedikt J. Wiggli;Jens Bremerich;Raphael Twerenbold;Gregor Sommer;Dorin Comaniciu;Alexander W. Sauter
    • Korean Journal of Radiology
    • /
    • v.22 no.6
    • /
    • pp.994-1004
    • /
    • 2021
  • Objective: To extract pulmonary and cardiovascular metrics from chest CTs of patients with coronavirus disease 2019 (COVID-19) using a fully automated deep learning-based approach and assess their potential to predict patient management. Materials and Methods: All initial chest CTs of patients who tested positive for severe acute respiratory syndrome coronavirus 2 at our emergency department between March 25 and April 25, 2020, were identified (n = 120). Three patient management groups were defined: group 1 (outpatient), group 2 (general ward), and group 3 (intensive care unit [ICU]). Multiple pulmonary and cardiovascular metrics were extracted from the chest CT images using deep learning. Additionally, six laboratory findings indicating inflammation and cellular damage were considered. Differences in CT metrics, laboratory findings, and demographics between the patient management groups were assessed. The potential of these parameters to predict patients' needs for intensive care (yes/no) was analyzed using logistic regression and receiver operating characteristic curves. Internal and external validity were assessed using 109 independent chest CT scans. Results: While demographic parameters alone (sex and age) were not sufficient to predict ICU management status, both CT metrics alone (including both pulmonary and cardiovascular metrics; area under the curve [AUC] = 0.88; 95% confidence interval [CI] = 0.79-0.97) and laboratory findings alone (C-reactive protein, lactate dehydrogenase, white blood cell count, and albumin; AUC = 0.86; 95% CI = 0.77-0.94) were good classifiers. Excellent performance was achieved by a combination of demographic parameters, CT metrics, and laboratory findings (AUC = 0.91; 95% CI = 0.85-0.98). Application of a model that combined both pulmonary CT metrics and demographic parameters on a dataset from another hospital indicated its external validity (AUC = 0.77; 95% CI = 0.66-0.88). 
Conclusion: Chest CT of patients with COVID-19 contains valuable information that can be accessed using automated image analysis. These metrics are useful for the prediction of patient management.
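The ROC analysis above reduces, for any one model, to ranking patients by a predicted risk score and measuring how well that ranking separates ICU from non-ICU cases. As a hedged illustration (not the authors' code; the scores and labels below are invented toy data), the AUC can be computed directly via the rank-sum (Mann-Whitney) statistic:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum statistic:
    the fraction of positive/negative pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical CT metric (e.g., fraction of affected lung volume)
# for six patients; label 1 = needed ICU care.
scores = [0.05, 0.10, 0.20, 0.35, 0.50, 0.70]
labels = [0,    0,    0,    1,    0,    1]
print(auc(scores, labels))  # 0.875 for this toy data
```

The same quantity is what logistic-regression-based classifiers in the study are summarized by (AUC = 0.88 for CT metrics alone).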

Attention Based Collaborative Source-Side DDoS Attack Detection (어텐션 기반 협업형 소스측 분산 서비스 거부 공격 탐지)

  • Hwisoo Kim;Songheon Jeong;Kyungbaek Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.157-165
    • /
    • 2024
  • The evolution of Distributed Denial of Service (DDoS) attack methods has made detection increasingly difficult. Source-side detection was proposed to overcome the limitations of existing victim-side detection methods, but it suffers performance degradation under irregular network traffic. To address this, research has turned to detecting attacks through artificial-intelligence-based collaboration among multiple nodes. Existing methods, however, show limitations in nonlinear traffic environments with high burstiness and jitter. To overcome this problem, this paper presents a collaborative source-side DDoS attack detection technique that incorporates an attention mechanism. The proposed method aggregates detection results from multiple sources and assigns a weight to each region, making it possible to detect both widespread attacks and attacks concentrated in a few specific areas. In particular, it achieves a low false-positive rate of about 6% and a detection rate up to 4.3% higher on a nonlinear traffic dataset, and it improves the detection of attacks confined to a small number of regions compared to existing methods that struggle in nonlinear traffic environments.
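The attention-weighted aggregation described above can be sketched as follows. This is an assumption-laden toy, not the paper's implementation: the relevance logits that produce the attention weights are fixed by hand here, whereas the paper learns them.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def aggregate(local_scores, relevance):
    """Weight each source node's anomaly score by its attention weight."""
    weights = softmax(relevance)
    return sum(w * s for w, s in zip(weights, local_scores))

# Three source regions: only region 2 sees heavy attack traffic.
local_scores = [0.1, 0.2, 0.9]   # per-source anomaly scores
relevance    = [0.5, 0.4, 2.0]   # hypothetical learned relevance logits
combined = aggregate(local_scores, relevance)
print(round(combined, 3))
```

Because the third region receives most of the attention weight, the combined score stays high even though two of three regions look benign, which is the mechanism behind detecting attacks in a "specific few areas."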

Research on Training and Implementation of Deep Learning Models for Web Page Analysis (웹페이지 분석을 위한 딥러닝 모델 학습과 구현에 관한 연구)

  • Jung Hwan Kim;Jae Won Cho;Jin San Kim;Han Jin Lee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.2
    • /
    • pp.517-524
    • /
    • 2024
  • This study trains and implements a deep learning model for the fusion of website creation and artificial intelligence, in the era of the AI revolution that followed the launch of the ChatGPT service. The deep learning model was trained on 3,000 collected web page images, processed according to a classification system of components and layouts. The work proceeded in three stages. First, prior research on AI models was reviewed to select the algorithm best suited to the model we intended to implement. Second, suitable web page and paragraph images were collected, categorized, and processed. Third, the deep learning model was trained, and a serving interface was integrated to verify its actual outputs. The implemented model will be used to detect multiple paragraphs on a web page, analyzing the number of lines, elements, and features in each paragraph and deriving meaningful data based on the classification system. This process is expected to evolve toward more precise web page analysis. Furthermore, the development of precise analysis techniques is expected to lay the groundwork for research into AI that can automatically generate complete web pages.

Development and application of cellular automata-based urban inundation and water cycle model CAW (셀룰러 오토마타 기반 도시침수 및 물순환 해석 모형 CAW의 개발 및 적용)

  • Lee, Songhee;Choi, Hyeonjin;Woo, Hyuna;Kim, Minyoung;Lee, Eunhyung;Kim, Sanghyun;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.165-179
    • /
    • 2024
  • A comprehensive understanding of inundation and the water cycle in urban areas is crucial for mitigating flood risks and managing water resources sustainably. In this study, we developed CAW, a Cellular Automata-based integrated Water cycle model. A comparative analysis against physics-based and conventional cellular automata-based models was performed in an urban watershed in Portland, USA, to evaluate the adequacy of spatiotemporal inundation simulation in a high-resolution setup. The maximum inundation maps produced by CAW and the Weighted Cellular Automata 2 Dimension (WCA2D) model were highly similar, presumably because both adopt the same diffusive wave assumption, with an average root-mean-square error (RMSE) of 1.3 cm and high binary pattern index scores (HR 0.91, FAR 0.02, CSI 0.90). Furthermore, in multiple simulation experiments estimating the effects of land cover and soil conditions on inundation and infiltration, as the impermeable surface ratio increased by 41%, infiltration decreased by 54% (4.16 mm/m2) while the maximum inundation depth increased by 10% (2.19 mm/m2). These results indicate that high-resolution integrated inundation and water cycle analysis considering various land cover and soil conditions in urban areas is feasible with CAW.
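The binary pattern indices reported above (HR, FAR, CSI) follow the standard contingency-table definitions for comparing two binary maps. A minimal sketch, using toy inundation maps rather than the study's data:

```python
def pattern_indices(simulated, observed):
    """Standard contingency-table skill scores for binary (wet/dry) maps."""
    hits         = sum(s and o       for s, o in zip(simulated, observed))
    misses       = sum((not s) and o for s, o in zip(simulated, observed))
    false_alarms = sum(s and (not o) for s, o in zip(simulated, observed))
    hr  = hits / (hits + misses)                  # hit rate
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    return hr, far, csi

# Toy 10-cell maximum inundation maps (1 = wet above threshold).
sim = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
obs = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
hr, far, csi = pattern_indices(sim, obs)
print(hr, far, csi)
```

In the study, HR 0.91 / FAR 0.02 / CSI 0.90 indicate that CAW and WCA2D flag almost the same cells as inundated.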

Exploring automatic scoring of mathematical descriptive assessment using prompt engineering with the GPT-4 model: Focused on permutations and combinations (프롬프트 엔지니어링을 통한 GPT-4 모델의 수학 서술형 평가 자동 채점 탐색: 순열과 조합을 중심으로)

  • Byoungchul Shin;Junsu Lee;Yunjoo Yoo
    • The Mathematical Education
    • /
    • v.63 no.2
    • /
    • pp.187-207
    • /
    • 2024
  • In this study, we explored the feasibility of automatically scoring descriptive assessment items with GPT-4 based ChatGPT by comparing and analyzing its scoring results against those of teachers. For this purpose, three descriptive items from the permutations and combinations unit for first-year high school students were selected from the KICE (Korea Institute for Curriculum and Evaluation) website. Items 1 and 2 admitted only one problem-solving strategy, while Item 3 admitted two or more. Two teachers, each with over eight years of teaching experience, graded the answers of 204 students, and their results were compared with those of GPT-4 based ChatGPT. Various techniques, such as Few-Shot CoT, SC, structured, and iterative prompting, were used to construct the scoring prompts, which were then given to GPT-4 based ChatGPT. For Items 1 and 2, the scoring results showed a strong correlation between the teachers and GPT-4. For Item 3, which involved multiple problem-solving strategies, student answers were first classified by strategy using classification prompts given to GPT-4 based ChatGPT; scoring prompts tailored to each answer type were then applied, and these results also correlated strongly with the teachers' scoring. This confirmed the potential of GPT-4 models with prompt engineering to assist in teachers' scoring; the study's limitations and directions for future research are also presented.

Creating and Utilization of Virtual Human via Facial Capturing based on Photogrammetry (포토그래메트리 기반 페이셜 캡처를 통한 버추얼 휴먼 제작 및 활용)

  • Ji Yun;Haitao Jiang;Zhou Jiani;Sunghoon Cho;Tae Soo Yun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.25 no.2
    • /
    • pp.113-118
    • /
    • 2024
  • Recently, advances in artificial intelligence and computer graphics technology have led to the emergence of various virtual humans across media such as movies, advertisements, broadcasts, games, and social networking services (SNS). In the advertising and marketing sector in particular, centered around virtual influencers, virtual humans have already proven to be an important promotional tool for businesses in terms of time and cost efficiency. In Korea, the virtual influencer market is in its nascent stage, and both large corporations and startups are preparing to launch new virtual influencer services without clear boundaries between them. However, because development processes are rarely disclosed publicly, these companies often incur significant expenses. To address these requirements and challenges, this paper implements a photogrammetry-based facial capture system for creating realistic virtual humans and explores how the resulting models can be applied. The paper also examines an optimal workflow, in terms of cost and quality, through MetaHuman modeling based on Unreal Engine, which simplifies the complex CG pipeline from facial capture to the animation process. Additionally, the paper introduces cases where virtual humans have been used in SNS marketing, such as on Instagram, and demonstrates the performance of the proposed Unreal Engine-based workflow by comparing it with traditional CG work.

An Efficient Dual Queue Strategy for Improving Storage System Response Times (저장시스템의 응답 시간 개선을 위한 효율적인 이중 큐 전략)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.3
    • /
    • pp.19-24
    • /
    • 2024
  • Recent advances in large-scale data processing technologies such as big data, cloud computing, and artificial intelligence have increased the demand for high-performance storage devices in data centers and enterprise environments. In particular, the data response speed of a storage device is a key factor in overall system performance. Solid state drives (SSDs) based on the Non-Volatile Memory Express (NVMe) interface are gaining traction, but new bottlenecks emerge when large data I/O requests from multiple hosts are handled simultaneously. SSDs typically process host requests by queuing them sequentially in an internal queue. When requests with long transfer lengths are processed first, shorter requests wait longer, increasing the average response time. Data transfer timeouts and data partitioning have been proposed to solve this problem, but they do not provide a fundamental solution. In this paper, we propose a dual queue based scheduling scheme (DQBS), which manages the transfer order using one queue ordered by request arrival and another ordered by transfer length. Each request's arrival time and transfer length are then considered together to determine an efficient transmission order. This enables balanced processing of long and short requests, reducing the overall average response time. Simulation results show that the proposed method outperforms the existing sequential processing method. This study presents a scheduling technique that maximizes data transfer efficiency in a high-performance SSD environment and is expected to contribute to the development of next-generation high-performance storage systems.
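One plausible reading of the dual-queue idea can be sketched as follows. This is a hypothetical stand-in, not the paper's exact DQBS policy: the selection rule (an equal-weight sum of arrival rank and length rank) is an assumption made purely for illustration.

```python
def schedule(requests):
    """requests: list of (request_id, transfer_length) in arrival order.
    Returns a dispatch order balancing arrival order against length,
    mimicking a dual-queue (FIFO queue + length-ordered queue) scheduler."""
    arrival_rank = {rid: i for i, (rid, _) in enumerate(requests)}
    by_length = sorted(requests, key=lambda r: r[1])
    length_rank = {rid: i for i, (rid, _) in enumerate(by_length)}
    # Hypothetical combined score: equal weight to waiting position
    # (arrival queue) and transfer length (length queue).
    return sorted(arrival_rank,
                  key=lambda rid: arrival_rank[rid] + length_rank[rid])

# Long request A arrives first; short B and D would otherwise starve.
reqs = [("A", 4096), ("B", 64), ("C", 2048), ("D", 128)]
print(schedule(reqs))
```

The short request B jumps ahead of the long first-arrival A, while A is not starved indefinitely, which is the balance between long and short requests the abstract describes.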

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, databases have shifted from static structures to dynamic stream structures. Data mining techniques have long served as decision-making tools, for example in establishing marketing strategies and in DNA analysis. However, the ability to analyze real-time data quickly is essential in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining over parts of the database or over individual transactions instead of over all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, performs a mining operation whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction arrives, its mining results always reflect the latest real-time information; such algorithms are therefore also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. To compare the efficiency of their storage structures, we also evaluate their maximum memory usage. Lastly, we show how stably the two algorithms mine databases featuring gradually increasing numbers of items.
With respect to mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage: hMiner must keep the full information for each candidate frequent pattern in its hash buckets, while Lossy counting reduces the stored information by means of the lattice structure. Because Lossy counting's storage can share items that appear in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner shows better scalability for the following reasons: as the number of items increases, fewer items are shared, weakening Lossy counting's memory efficiency, and as the number of transactions grows, its pruning becomes less effective. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Their data structures therefore need to be made more efficient so that they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
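The counting-and-pruning mechanics of Lossy counting discussed above can be sketched for the single-item case (the paper mines patterns, but the error-bounded pruning at bucket boundaries is the same idea):

```python
import math

def lossy_counting(stream, epsilon):
    """Classic Lossy counting: approximate frequencies with error <= epsilon*N.
    Entries are pruned at each bucket boundary when count + delta <= bucket."""
    width = math.ceil(1 / epsilon)          # bucket width
    counts, deltas = {}, {}
    for n, item in enumerate(stream, start=1):
        bucket = math.ceil(n / width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1       # max undercount for a new entry
        if n % width == 0:                  # prune at each bucket boundary
            for it in list(counts):
                if counts[it] + deltas[it] <= bucket:
                    del counts[it], deltas[it]
    return counts

stream = ["a", "b", "a", "c", "a", "b", "a", "a"]
print(lossy_counting(stream, epsilon=0.5))
```

Infrequent items ("b", "c") are pruned along the way, which is exactly why the algorithm's memory stays bounded at the cost of approximate counts.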

Factors Related to Poor School Performance of Elementary School Children (국민학교아동의 학습부진에 관련된 요인)

  • Park, Jung-Han;Kim, Gui-Yeon;Her, Kyu-Sook;Lee, Ju-Young;Kim, Doo-Hie
    • Journal of Preventive Medicine and Public Health
    • /
    • v.26 no.4 s.44
    • /
    • pp.628-649
    • /
    • 1993
  • This study was conducted to investigate the factors related to poor school performance among elementary school children. Two schools in Taegu, one in an affluent area and the other in a poor area, were selected, and a total of 175 children whose school performance was within the bottom 10 percent (poor performers) and 97 children whose school performance was within the top 5 percent (good performers) of each class in the 2nd, 4th, and 6th grades were examined for physical health, behavioral problems, and family background. Each child went through a battery of tests covering visual and hearing acuity, anthropometry (body weight, height, head circumference), intelligence (Kodae Stanford-Binet test), test anxiety (TAI-K), a neurologic examination by a developmental pediatrician, and heavy metal content (Pb, Cd, Zn) in hair by atomic absorption spectrophotometry. A questionnaire was administered to the mothers covering the prenatal and postnatal course of the child, family environment, the child's developmental history, and the child's behavioral and learning problems. Another questionnaire was administered to the children's teachers covering family background, arithmetic and language abilities, and behavioral problems. Poor school performance was significantly correlated with male gender, high birth order, a broken home, low parental education and occupation levels, visual problems, a high test anxiety score, attention deficit hyperactivity disorder (ADHD), poor physical growth (weight, height, head circumference), and a low I.Q. score. In multiple logistic regression analysis, the factors significantly correlated with poor school performance were the child's birth order (odds ratio = 2.06), male gender (odds ratio = 5.91), broken home (odds ratio = 9.29), test anxiety score (odds ratio = 1.07), ADHD (odds ratio = 9.67), I.Q. score (odds ratio = 0.85), and height below the Korean standard mean − 1 S.D. (odds ratio = 11.12).
The heavy metal content in hair showed no significant correlation with poor school performance, although lead and cadmium contents were higher in boys than in girls; lead content was negatively correlated with the child's grade (p < 0.05) and zinc content was positively correlated with grade (p < 0.05). Among the factors significantly correlated with poor school performance, high birth order, short stature, and ADHD may be modified by good family planning, good feeding practices for infants and children, and early detection and treatment of ADHD. Teachers and parents should also refrain from inducing excessive test anxiety by forcing the child to study and expecting more than the child's intellectual capability.
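The odds ratios above come from a multiple logistic regression, where an odds ratio is simply the exponential of the fitted coefficient. As a hedged reminder of that relationship (the coefficient below is back-derived from the reported ADHD odds ratio of 9.67 purely for illustration):

```python
import math

def odds_ratio(beta):
    """Odds ratio for a one-unit increase in a logistic-regression predictor."""
    return math.exp(beta)

beta_adhd = math.log(9.67)   # coefficient implied by the reported OR = 9.67
print(round(odds_ratio(beta_adhd), 2))
```

Odds ratios above 1 (e.g., ADHD, broken home) indicate increased odds of poor performance per unit increase of the predictor, while values below 1 (I.Q. score, 0.85) indicate a protective association.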


Development of Optimum Traffic Safety Evaluation Model Using the Back-Propagation Algorithm (역전파 알고리즘을 이용한 최적의 교통안전 평가 모형개발)

  • Kim, Joong-Hyo;Kwon, Sung-Dae;Hong, Jeong-Pyo;Ha, Tae-Jun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.35 no.3
    • /
    • pp.679-690
    • /
    • 2015
  • In order to minimize damage from traffic accidents, the causes of accidents must be eliminated through technological improvements to vehicles and road systems. In general, traffic accidents are more likely to occur on roads that lack safety measures, which can be improved only with considerable time and cost; in particular, traffic accidents at intersections are on the rise due to inappropriate environmental factors and are causing great losses for the nation as a whole. This study presents safety countermeasures against the causes of accidents by developing an intersection traffic safety evaluation model, and diagnoses vulnerable traffic points using the back-propagation algorithm (BPA), an artificial neural network method recently investigated in the field of artificial intelligence. Furthermore, it aims to support more efficient traffic safety improvement projects in operating signalized intersections and establishing traffic safety policies. As a result of this study, the mean square error between the values predicted by the BPA and the actual measured numbers of traffic accidents is estimated to be 3.89, and the BPA showed excellent traffic safety evaluation ability compared to the multiple regression model. In other words, the BPA can be effectively utilized in diagnosing the safety of actual signalized intersections and in establishing practical transportation policy.
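The back-propagation training loop behind such a model can be sketched in miniature. This is an illustrative toy only, not the study's network: the architecture (one input, three tanh hidden units, one linear output) and the data (an invented mapping from a scaled risk factor to a scaled accident frequency) are assumptions.

```python
import math, random

random.seed(0)

def train(data, hidden=3, lr=0.1, epochs=3000):
    """Train a tiny 1-input MLP by stochastic gradient descent (back-propagation)."""
    w1 = [random.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            # Forward pass.
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            out = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = out - y
            # Backward pass: propagate the output error to each layer's weights.
            for j in range(hidden):
                grad_pre = err * w2[j] * (1 - h[j] ** 2)  # tanh derivative
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_pre * x
                b1[j] -= lr * grad_pre
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# Invented (scaled risk factor, scaled accident frequency) pairs.
data = [(0.0, 0.1), (0.25, 0.3), (0.5, 0.5), (0.75, 0.7), (1.0, 0.9)]
predict = train(data)
mse = sum((predict(x) - y) ** 2 for x, y in data) / len(data)
print(mse)
```

The mean square error on the training data, analogous in spirit to the 3.89 reported above (though on an invented scale), shrinks toward zero as back-propagation fits the mapping.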