• Title/Summary/Keyword: Processing Language

Search Results: 2,692 (processing time: 0.034 seconds)

Understanding of Generative Artificial Intelligence Based on Textual Data and Discussion for Its Application in Science Education (텍스트 기반 생성형 인공지능의 이해와 과학교육에서의 활용에 대한 논의)

  • Hunkoog Jho
    • Journal of The Korean Association For Science Education
    • /
    • v.43 no.3
    • /
    • pp.307-319
    • /
    • 2023
  • This study explains the key concepts and principles of text-based generative artificial intelligence (AI), which has been attracting increasing interest and use, focusing on its application in science education. It also highlights the potential and limitations of generative AI in science education, providing insights for implementation and research. Recent generative AI, predominantly based on transformer models consisting of encoders and decoders, has shown remarkable progress through optimization with reinforcement learning and reward models built from human feedback, as well as improved understanding of context. In particular, it can perform various functions such as writing, summarizing, keyword extraction, evaluation, and feedback, based on its ability to understand diverse user questions and intents. It also offers practical utility for diagnosing learners and structuring educational content from examples provided by educators. However, its limitations warrant scrutiny: it may convey inaccurate facts or knowledge, exhibit bias stemming from overconfidence, and have uncertain effects on user attitudes or emotions. Moreover, because generative AI produces probabilistic responses derived from the response data of many individuals, it may constrain the insightful and innovative thinking that offers different perspectives or ideas. In light of these considerations, this study provides practical suggestions for the positive use of AI in science education.
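As a concrete illustration of the probabilistic responses noted above, the following minimal Python sketch shows temperature-scaled sampling from a decoder's output distribution; the toy vocabulary and logit values are invented for illustration and do not come from the paper.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample a token index from decoder logits after temperature scaling."""
    scaled = logits / temperature
    scaled -= scaled.max()                      # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy vocabulary and logits (invented): a higher logit makes a token
# more likely, but the choice remains probabilistic.
vocab = ["photosynthesis", "osmosis", "mitosis", "diffusion"]
logits = np.array([2.1, 0.3, -0.5, 1.2])
print(vocab[sample_next_token(logits, temperature=0.7)])
```

Lower temperatures concentrate probability on the most likely tokens, which is one reason identical prompts can still yield varied, majority-shaped answers.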

Accuracy comparison of 3-unit fixed dental provisional prostheses fabricated by different CAD/CAM manufacturing methods (다양한 CAD/CAM 제조 방식으로 제작한 3본 고정성 임시 치과 보철물의 정확도 비교)

  • Hyuk-Joon Lee;Ha-Bin Lee;Mi-Jun Noh;Ji-Hwan Kim
    • Journal of Technologic Dentistry
    • /
    • v.45 no.2
    • /
    • pp.31-38
    • /
    • 2023
  • Purpose: This in vitro study compared the trueness of 3-unit fixed dental provisional prostheses (FDPs) fabricated by three additive manufacturing procedures and one subtractive manufacturing procedure. Methods: A reference model with the maxillary left second premolar and second molar prepared and the first molar missing was scanned for the fabrication of 3-unit FDPs. An anatomically shaped 3-unit FDP was designed in computer-aided design software. Ten FDPs each were fabricated by subtractive manufacturing (MI group) and by additive manufacturing (stereolithography: SL group; digital light processing: DL group; liquid crystal display: LC group) (N=40). All FDPs were scanned and exported as standard triangulated language (STL) files. A three-dimensional analysis program measured the discrepancy of the internal, marginal, and pontic base areas. Differences among manufacturing procedures were evaluated statistically with the Kruskal-Wallis test and the Mann-Whitney test with Bonferroni correction. Results: In the internal area, the root mean square (RMS) value of the 3-unit FDPs was lowest in the MI group (31.79±6.39 ㎛) and highest in the SL group (69.34±29.88 ㎛; p=0.001). In the marginal area, it was lowest in the LC group (25.39±4.36 ㎛) and highest in the SL group (48.94±18.98 ㎛; p=0.001). In the pontic base area, it was lowest in the LC group (8.72±2.74 ㎛) and highest in the DL group (20.75±2.03 ㎛; p=0.001). Conclusion: Statistically significant differences were observed in the RMS mean values among all groups. However, compared with the subtractive manufacturing method, all measured areas of the 3-unit FDPs fabricated by the three additive manufacturing methods were within a clinically acceptable range.
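The trueness metric above is a root mean square (RMS) of point-wise deviations. Below is a minimal sketch of that computation, assuming corresponding point sets have already been aligned; the point data are synthetic, and the actual study used a dedicated three-dimensional analysis program.

```python
import numpy as np

def rms_discrepancy(reference: np.ndarray, scanned: np.ndarray) -> float:
    """Root mean square of point-wise deviations (here in micrometres)
    between corresponding reference and scanned surface points."""
    deviations = np.linalg.norm(reference - scanned, axis=1)
    return float(np.sqrt(np.mean(deviations ** 2)))

# Hypothetical corresponding point sets (N x 3, in micrometres).
rng = np.random.default_rng(0)
ref = rng.uniform(0, 1000, size=(500, 3))
scan = ref + rng.normal(0, 30, size=ref.shape)   # ~30 um noise per axis
print(f"RMS deviation: {rms_discrepancy(ref, scan):.2f} um")
```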

Data-Driven Technology Portfolio Analysis for Commercialization of Public R&D Outcomes: Case Study of Big Data and Artificial Intelligence Fields (공공연구성과 실용화를 위한 데이터 기반의 기술 포트폴리오 분석: 빅데이터 및 인공지능 분야를 중심으로)

  • Eunji Jeon;Chae Won Lee;Jea-Tek Ryu
    • The Journal of Bigdata
    • /
    • v.6 no.2
    • /
    • pp.71-84
    • /
    • 2021
  • Since small and medium-sized enterprises lack technological competitiveness in big data and artificial intelligence (AI), the core technologies of the Fourth Industrial Revolution, it is important to strengthen the competitiveness of the overall industry through technology commercialization. In this study, we propose priorities for technology transfer and commercialization to put public research results into practical use. We utilized public research performance information, imputing missing values in the 6T classification with a deep learning model and an ensemble method. We then conducted topic modeling to derive the fields in which big data and AI converge. Based on technology activity and technology efficiency, we classified the technology fields into four segments of a technology portfolio and estimated their potential for commercialization. We propose commercialization priorities for ten detailed technology fields that require long-term investment. Such systematic analysis can promote the active utilization of technology and efficient technology transfer and commercialization.
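The four-segment portfolio described above can be sketched as a simple two-threshold rule. The code below is a hypothetical illustration; the field names, scores, and cut-off values are invented, not the paper's data.

```python
def portfolio_segment(activity: float, efficiency: float,
                      activity_cut: float, efficiency_cut: float) -> str:
    """Assign a technology field to one of four portfolio segments
    by comparing its activity and efficiency to threshold values."""
    high_act = activity >= activity_cut
    high_eff = efficiency >= efficiency_cut
    if high_act and high_eff:
        return "promising: prioritize commercialization"
    if high_act:
        return "active but inefficient: improve transfer process"
    if high_eff:
        return "efficient but inactive: candidate for long-term investment"
    return "dormant: monitor"

# Hypothetical fields with (activity, efficiency) scores.
fields = {"NLP": (0.8, 0.7), "vision": (0.9, 0.3), "graph AI": (0.2, 0.6)}
for name, (act, eff) in fields.items():
    print(name, "->", portfolio_segment(act, eff, 0.5, 0.5))
```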

Multifaceted Evaluation Methodology for AI Interview Candidates - Integration of Facial Recognition, Voice Analysis, and Natural Language Processing (AI면접 대상자에 대한 다면적 평가방법론 -얼굴인식, 음성분석, 자연어처리 영역의 융합)

  • Hyunwook Ji;Sangjin Lee;Seongmin Mun;Jaeyeol Lee;Dongeun Lee;Kyusang Lim
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.55-58
    • /
    • 2024
  • Recently, more and more companies have adopted AI interview systems, and there is considerable controversy over their effectiveness. In this paper, we evaluate the appropriateness of a methodology for analyzing interview candidates from multiple angles by implementing the candidate-evaluation process of an AI interview in three areas: vision, voice, and natural language processing. First, on the visual side, to recognize the candidate's emotions we used a convolutional neural network (CNN) to recognize six emotions from the candidate's face and derived a time series of whether the candidate was gazing at the camera. This focused on analyzing the candidate's attitude toward the interview and, in particular, the emotions revealed in the face. Second, since visual cues alone are insufficient to gauge a candidate's attitude, we converted the candidate's voice into frequency-domain features and trained a bidirectional LSTM to extract six emotions from the voice. Third, to capture the contextual meaning of the candidate's statements and thereby assess their state, we converted speech to text with speech-to-text (STT) and analyzed word frequencies to characterize the candidate's language habits. In addition, we applied the KoBERT model for sentiment analysis of the candidate's statements, and we constructed and applied objective evaluation indicators to assess the candidate's personality, attitude, and understanding of the job. Regarding the appropriateness of this multifaceted evaluation system for AI interviews, our analysis indicates that the accuracy of the visual component was largely demonstrated objectively. For emotion analysis from voice, because candidates do not display every type of emotion in the limited time available and speak in a similar tone throughout, the frequencies representing particular emotions were somewhat concentrated. Finally, for the natural language processing area, we concluded that there is a growing need for an NLP model that can understand the overall context and tone of a candidate's statements, beyond speaking style and the frequency of particular words.
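Of the three areas, the voice channel is the most self-contained to sketch. Below is a minimal PyTorch illustration of a bidirectional LSTM emotion classifier over per-frame spectral features, assuming 40 features per frame and six emotion classes; the layer sizes are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class VoiceEmotionBiLSTM(nn.Module):
    """Bidirectional LSTM over per-frame spectral features (e.g., MFCCs),
    classifying an utterance into six emotion categories."""
    def __init__(self, n_features: int = 40, hidden: int = 128,
                 n_emotions: int = 6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_emotions)

    def forward(self, x):                  # x: (batch, frames, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # emotion logits from final frame

model = VoiceEmotionBiLSTM()
frames = torch.randn(2, 300, 40)           # 2 utterances, 300 frames each
print(model(frames).shape)                  # torch.Size([2, 6])
```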

Establishment of Risk Database and Development of Risk Classification System for NATM Tunnel (NATM 터널 공정리스크 데이터베이스 구축 및 리스크 분류체계 개발)

  • Kim, Hyunbee;Karunarathne, Batagalle Vinuri;Kim, ByungSoo
    • Korean Journal of Construction Engineering and Management
    • /
    • v.25 no.1
    • /
    • pp.32-41
    • /
    • 2024
  • In the construction industry, not only safety accidents but also various complex risks such as construction delays, cost increases, and environmental pollution occur, and management technologies are needed to address them. Among these, process risk management, which directly affects the project, lacks related information relative to its importance. This study developed an NATM tunnel process risk classification system to resolve the difficulty of retrieving risk information caused by each project using a different classification system. Risks were collected through a review of the existing literature and experience-mining techniques, and the database was built using concepts from natural language processing. For the structure of the classification system, the existing WBS structure was adopted for data compatibility, and an RBS linked to the work types of the WBS was established. The result is a risk classification system that easily identifies risks by work type and intuitively reveals risk characteristics and the risk factors linked to each risk. Verification of the system's usability showed it to be effective: risks and risk factors for each work type were easily identified from user keyword input. This study is expected to help prevent cost and schedule overruns by identifying risks by work type in advance during NATM tunnel planning and design and by establishing countermeasures suited to those factors.
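To make the keyword-driven retrieval concrete, here is a minimal sketch of looking up risks and their linked risk factors by work type; the work types, risks, and factors are invented examples, not entries from the constructed database.

```python
# Hypothetical excerpt of a risk DB keyed by WBS work type; each entry
# links a risk to its RBS risk factors, as the classification system does.
RISK_DB = {
    "excavation": [
        {"risk": "face collapse",
         "factors": ["poor rock grade", "groundwater inflow"]},
        {"risk": "overbreak", "factors": ["blasting pattern error"]},
    ],
    "shotcrete": [
        {"risk": "delayed lining",
         "factors": ["equipment breakdown", "material supply delay"]},
    ],
}

def find_risks(keyword: str):
    """Return (work type, risk, factors) rows whose work type or risk
    description contains the user's keyword."""
    hits = []
    for work_type, entries in RISK_DB.items():
        for e in entries:
            if keyword in work_type or keyword in e["risk"]:
                hits.append((work_type, e["risk"], e["factors"]))
    return hits

for row in find_risks("collapse"):
    print(row)
```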

Digital Library Interface Research Based on EEG, Eye-Tracking, and Artificial Intelligence Technologies: Focusing on the Utilization of Implicit Relevance Feedback (뇌파, 시선추적 및 인공지능 기술에 기반한 디지털 도서관 인터페이스 연구: 암묵적 적합성 피드백 활용을 중심으로)

  • Hyun-Hee Kim;Yong-Ho Kim
    • Journal of the Korean Society for Information Management
    • /
    • v.41 no.1
    • /
    • pp.261-282
    • /
    • 2024
  • This study proposed and evaluated electroencephalography (EEG)-based and eye-tracking-based methods to determine relevance by utilizing users' implicit relevance feedback while navigating content in a digital library. For this, EEG/eye-tracking experiments were conducted on 32 participants using video, image, and text data. To assess the usefulness of the proposed methods, deep learning-based artificial intelligence (AI) techniques were used as a competitive benchmark. The evaluation results showed that EEG component-based methods (av_P600 and f_P3b components) demonstrated high classification accuracy in selecting relevant videos and images (faces/emotions). In contrast, AI-based methods, specifically object recognition and natural language processing, showed high classification accuracy for selecting images (objects) and texts (newspaper articles). Finally, guidelines for implementing a digital library interface based on EEG, eye-tracking, and artificial intelligence technologies have been proposed. Specifically, a system model based on implicit relevance feedback has been presented. Moreover, to enhance classification accuracy, methods suitable for each media type have been suggested, including EEG-based, eye-tracking-based, and AI-based approaches.
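As a rough sketch of the EEG-component-based relevance judgment, the code below averages epoch amplitude in fixed latency windows (roughly where P3b and P600 components appear) and applies a toy threshold rule; the sampling rate, windows, decision rule, and data are all assumptions, not the study's trained classifiers.

```python
import numpy as np

def erp_component_mean(epoch: np.ndarray, sfreq: float,
                       window: tuple[float, float]) -> float:
    """Mean amplitude of a single-channel EEG epoch (time-locked to
    stimulus onset) inside a latency window given in seconds."""
    start, stop = (int(t * sfreq) for t in window)
    return float(epoch[start:stop].mean())

# Hypothetical epoch: 1 s of data at 250 Hz from a parietal channel.
rng = np.random.default_rng(1)
epoch = rng.normal(0, 1.0, size=250)
p3b = erp_component_mean(epoch, sfreq=250, window=(0.25, 0.5))
p600 = erp_component_mean(epoch, sfreq=250, window=(0.5, 0.8))

# Toy decision rule: larger positive components -> judged relevant.
print("relevant" if (p3b + p600) / 2 > 0.0 else "not relevant")
```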

Exploration of Neurophysiological Mechanisms underlying Action Performance Changes caused by Semantic Congruency between Perceived Action Verbs and Current Actions (지각된 행위동사와 현재 행위의 의미 일치성에 따른 행위 수행 변화의 신경생리학적 기전 탐색)

  • Rha, Younghyoun;Jeong, Myung Yung;Kwak, Jarang;Lee, Donghoon
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.4
    • /
    • pp.573-597
    • /
    • 2016
  • Recent fMRI and EEG research on the neural representation of action concepts holds that processing action concepts evokes a simulation of sensory-motor information. Moreover, several behavioral studies have shown that understanding action verbs, or sentences describing actions, interferes with or facilitates current action performance. However, it is unclear whether the online interaction between the processing of action concepts and the current action rests on the simulation of sensory-motor information or on other neural mechanisms. The present research explores the neural mechanism by which the perception of action language influences the performance of the current action, using high spatial-temporal resolution EEG and multiple source analysis techniques. Participants performed a cued-motor reaction task in which a button-pressing hand action or a pedal-stepping foot action was required according to the color of a cue; we presented auditory action verbs describing the responding actions (i.e., /press/, /step/, /stop/) just before the color cue and examined the interaction effect of semantic congruency between the action verbs and the current action. Behavioral results consistently revealed a facilitatory effect when action verbs and responding actions were semantically congruent, for both button-pressing and pedal-stepping actions, and an inhibitory effect when they were semantically incongruent in the button-pressing condition. In the EEG source waveform analysis, semantic congruency effects between action verbs and responding actions were observed in Wernicke's area during the perception of the action verbs, in the anterior cingulate gyrus and the supplementary motor area (SMA) when the motor cue was presented, and in the SMA and primary motor cortex (M1) during the action execution stage. Based on these findings, we argue that perceived action verbs produce facilitation/inhibition effects by influencing the expectation and preparation stages of the following action rather than by directly activating a particular motor cortex. Finally, we discuss the implications for the neural representation of action concepts and the methodological limitations of the current research.
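The facilitation and inhibition effects reported above are differences in reaction time between congruency conditions. A minimal sketch of that contrast, on invented reaction-time data, is shown below (treating the /stop/ verb as a neutral baseline is an assumption):

```python
import numpy as np

# Hypothetical reaction times (ms) per trial, grouped by whether the
# heard action verb matched the required response action.
rt = {
    "congruent":   np.array([412, 398, 405, 420, 388]),
    "incongruent": np.array([455, 471, 448, 462, 459]),
    "neutral":     np.array([430, 441, 425, 438, 433]),
}

# Facilitation: neutral minus congruent; inhibition: incongruent minus neutral.
facilitation = rt["neutral"].mean() - rt["congruent"].mean()
inhibition = rt["incongruent"].mean() - rt["neutral"].mean()
print(f"facilitation: {facilitation:.1f} ms, inhibition: {inhibition:.1f} ms")
```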

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, which are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small set of programs, so modifying an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

    TABLE I. INFERENCE TIME BY 51 RULES
                        MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6,000 inferences    125 s                  49 s                        0.0038 s
    1 inference         20.8 ms                8.2 ms                      6.4 ㎲
    FLIPS               48                     122                         156,250
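For readers unfamiliar with the inference mechanism the chips implement, here is a minimal software sketch of max-min compositional inference with Mamdani implication and centroid defuzzification, using 64-element fuzzy sets as in the talk; the triangular membership functions and rule values are invented for illustration.

```python
import numpy as np

def mamdani_max_min(rules, inputs, universe):
    """Max-min compositional inference: each rule's firing strength is
    the min of its input memberships; rule outputs are clipped (min)
    and aggregated by max, as in the ASIC described above."""
    aggregated = np.zeros_like(universe, dtype=float)
    for antecedents, consequent in rules:
        strength = min(mu(x) for mu, x in zip(antecedents, inputs))
        aggregated = np.maximum(aggregated,
                                np.minimum(strength, consequent(universe)))
    return aggregated

def centroid(universe, mu):
    """Defuzzification by the centroid method (also used on-chip)."""
    return float((universe * mu).sum() / mu.sum())

# Toy triangular membership functions with corners a <= b <= c.
tri = lambda a, b, c: lambda x: np.maximum(
    np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

universe = np.linspace(0.0, 10.0, 64)   # 64-element fuzzy sets, as in the talk
rules = [
    ((tri(0, 2, 4), tri(0, 2, 4)), tri(0, 3, 6)),    # IF A1 and B1 THEN E1
    ((tri(3, 5, 7), tri(3, 5, 7)), tri(4, 7, 10)),   # IF A2 and B2 THEN E2
]
out = mamdani_max_min(rules, inputs=(2.5, 3.0), universe=universe)
print(f"crisp output: {centroid(universe, out):.2f}")
```

The min/max calls in the inner loop are exactly the operations that the proposed specialized RISC instructions would accelerate.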

The Construction of QoS Integration Platform for Real-time Negotiation and Adaptation Stream Service in Distributed Object Computing Environments (분산 객체 컴퓨팅 환경에서 실시간 협약 및 적응 스트림 서비스를 위한 QoS 통합 플랫폼의 구축)

  • Jun, Byung-Taek;Kim, Myung-Hee;Joo, Su-Chong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.11S
    • /
    • pp.3651-3667
    • /
    • 2000
  • Recently, in internet-based distributed multimedia environments, most researchers have focused on two rapidly growing technologies: streaming and distributed objects. In particular, studies attempting to integrate streaming services with distributed object technology have been progressing, and these technologies are applied to various stream service managements and protocols. However, the stream service management models proposed by existing research are insufficient for supporting the QoS of stream services. Moreover, existing models cannot support extensibility and reusability when QoS-related functions are developed as sub-modules tied to specific-purpose application services. To solve these problems, this paper suggests a QoS integration platform that can be extended and reused using distributed object technologies and that guarantees the QoS of stream services. The suggested platform consists of three components: a User Control Module (UCM), a QoS Management Module (QoSM), and Stream Objects. A Stream Object has send/receive operations for transmitting RTP packets over TCP/IP. The UCM controls Stream Objects via CORBA service objects. The QoSM maintains the QoS of the stream service between the UCMs on the client and server. As QoS control methodologies, procedures for resource monitoring, negotiation, and resource adaptation are executed through interactions among the components mentioned above. To construct this QoS integration platform, we first implemented the modules independently and then used IDL to define the interfaces among them, supporting platform independence, interoperability, and portability based on CORBA. The platform is built with OrbixWeb 3.1c following the CORBA specification on Solaris 2.5/2.7, the Java language, Java Media Framework API 2.0, Mini-SQL 1.0.16, and multimedia equipment. To verify the platform functionally, we show the execution results of each module and numerical data obtained from the QoS control procedures on the client and server GUIs while a stream service runs on the platform.
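The monitoring-negotiation-adaptation cycle described above can be sketched as a simple control loop. The following Python sketch is purely illustrative: measure_bandwidth(), the thresholds, and the frame-rate adjustment are invented placeholders and do not reflect the platform's actual CORBA/IDL interfaces.

```python
import random
import time

def measure_bandwidth() -> float:
    """Stand-in for the platform's resource-monitoring procedure."""
    return random.uniform(0.5, 2.0)          # available bandwidth in Mbps

def qos_control_loop(target_mbps: float, frame_rate: int, cycles: int = 5):
    """Monitor resources, then renegotiate the stream's QoS level
    (here, frame rate) when resources fall short or recover."""
    for _ in range(cycles):
        available = measure_bandwidth()
        if available < target_mbps:           # degrade to a lower level
            frame_rate = max(5, frame_rate - 5)
        elif available > 1.5 * target_mbps:   # resources recovered
            frame_rate = min(30, frame_rate + 5)
        print(f"bandwidth={available:.2f} Mbps -> frame_rate={frame_rate} fps")
        time.sleep(0.1)

qos_control_loop(target_mbps=1.0, frame_rate=30)
```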

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Owing to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires much expert effort. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article. This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of accuracy by generating knowledge from semi-structured, user-created infobox data. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema, learned from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for extracting triples, and selecting values and transforming them into RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
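A highly simplified sketch of the three-step pipeline (document classification, sentence selection, and BIO-tagged value extraction into a triple) is given below; the classifier stubs, tag set, and example sentence are invented for illustration and stand in for the trained CRF/Bi-LSTM-CRF models.

```python
# Simplified pipeline sketch: classify the document to an ontology
# class, pick candidate sentences, then turn BIO-tagged tokens into an
# RDF-style triple. The two classifiers are stubs, not trained models.
def classify_document(text: str) -> str:
    return "Person"                        # stub for the class classifier

def select_sentences(text: str, ontology_class: str) -> list[str]:
    return [s for s in text.split(". ") if "born" in s]   # stub selector

def bio_to_triple(tokens, tags, subject, predicate):
    """Collect the tokens tagged B-VAL/I-VAL as the object value."""
    value = " ".join(t for t, g in zip(tokens, tags) if g in ("B-VAL", "I-VAL"))
    return (subject, predicate, value) if value else None

doc = "Yi Sun-sin was a Korean admiral. He was born in Hanseong in 1545."
cls = classify_document(doc)
for sent in select_sentences(doc, cls):
    tokens = sent.split()
    # Hypothetical tagger output for "He was born in Hanseong in 1545."
    tags = ["O", "O", "O", "O", "B-VAL", "O", "O"]
    print(bio_to_triple(tokens, tags, "Yi_Sun-sin", "birthPlace"))
```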