• Title/Summary/Keyword: Cognitive Information


Effects of Linguistic Immersion Synthesis on Foreign Language Learning Using Virtual Reality Agents (가상현실 에이전트 외국어 교사를 활용한 외국어 학습의 몰입 융합 효과)

  • Kang, Jeonghyun;Kwon, Seulhee;Chung, Donghun
    • Informatization Policy
    • /
    • v.31 no.1
    • /
    • pp.32-52
    • /
    • 2024
  • This study investigates the effectiveness of virtual reality agents as foreign language instructors, with a focus on the impact of different native-language backgrounds and instructional roles. The agents were first distinguished as native or non-native speakers (a between-subject factor) and then assigned roles as either teachers or salespersons (a within-subject factor). An immersive virtual environment was developed for the experiment, and a 2×2 mixed factorial design was carried out. Among 72 university students, statistically significant interactions between the agents' native/non-native status and their roles were found for learning satisfaction, memory, and recall. For learning confidence and presence, however, neither interaction nor main effects reached statistical significance. Contextual learning in a virtual environment was found to enhance learning effectiveness and satisfaction, with the nativeness and role of the agents influencing learners' memory, thus highlighting the effectiveness of using virtual reality agents in foreign language learning. This suggests that varied approaches can have positive cognitive and emotional impacts on learners, providing valuable theoretical and empirical implications.

Assessing the Impact of Defacing Algorithms on Brain Volumetry Accuracy in MRI Analyses

  • Dong-Woo Ryu;ChungHwee Lee;Hyuk-je Lee;Yong S Shim;Yun Jeong Hong;Jung Hee Cho;Seonggyu Kim;Jong-Min Lee;Dong Won Yang
    • Dementia and Neurocognitive Disorders
    • /
    • v.23 no.3
    • /
    • pp.127-135
    • /
    • 2024
  • Background and Purpose: To ensure data privacy, the development of defacing processes, which anonymize brain images by obscuring facial features, is crucial. However, the impact of these defacing methods on brain imaging analysis is a significant concern. This study aimed to evaluate the reliability of three different defacing methods in automated brain volumetry. Methods: Magnetic resonance imaging with three-dimensional T1 sequences was performed on ten patients diagnosed with subjective cognitive decline. Defacing was executed using mri_deface, BioImage Suite Web-based defacing, and Defacer. Brain volumes were measured with the QBraVo program and FreeSurfer, assessing the intraclass correlation coefficient (ICC) and the mean differences in brain volume measurements between the original and defaced images. Results: The mean age of the patients was 71.10±6.17 years, with 4 (40.0%) being male. The total intracranial volume, total brain volume, and ventricle volume exhibited high ICCs across the three defacing methods and both volumetry analyses. All regional brain volumes showed high ICCs with all three defacing methods. Despite variations among some brain regions, no significant mean differences in regional brain volume were observed between the original and defaced images across all regions. Conclusions: The three defacing algorithms evaluated did not significantly affect the results of image analysis for the entire brain or specific cerebral regions. These findings suggest that these algorithms can serve as robust methods for defacing in neuroimaging analysis, supporting data anonymization without compromising the integrity of brain volume measurements.
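
The agreement statistic used above can be illustrated with a standard two-way ICC. The sketch below uses made-up volume pairs (original vs. defaced, in mL), not the study's data, and assumes the ICC(3,1) consistency form; the abstract does not state which ICC variant was actually computed.

```python
# Minimal two-way ICC(3,1) sketch for agreement between brain volumes
# measured on original vs. defaced images. Volumes below are hypothetical.
def icc_3_1(ratings):
    # ratings: one [original, defaced] measurement pair per subject
    n = len(ratings)          # subjects
    k = len(ratings[0])       # methods (here 2: original, defaced)
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)                 # between-subject mean square
    ms_err = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

volumes = [[1450.2, 1449.8], [1321.5, 1322.0], [1502.3, 1501.1],
           [1388.0, 1388.4], [1275.9, 1276.5]]
print(round(icc_3_1(volumes), 4))  # close to 1.0 -> high agreement
```

An ICC near 1 means defacing barely perturbs the volumetry relative to between-subject variation, which is the pattern the study reports.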

CNN-ViT Hybrid Aesthetic Evaluation Model Based on Quantification of Cognitive Features in Images (이미지의 인지적 특징 정량화를 통한 CNN-ViT 하이브리드 미학 평가 모델)

  • Soo-Eun Kim;Joon-Shik Lim
    • Journal of IKEEE
    • /
    • v.28 no.3
    • /
    • pp.352-359
    • /
    • 2024
  • This paper proposes a CNN-ViT hybrid model that automatically evaluates the aesthetic quality of images by combining local and global features. In this approach, a CNN is used to extract local features such as color and object placement, while a ViT analyzes the aesthetic value of the image by reflecting global features. Color composition is derived by extracting the primary colors from the input image, creating a color palette, and passing it through the CNN. The Rule of Thirds is quantified by calculating how closely objects in the image are positioned to the thirds intersection points. These values provide the model with critical information about the color balance and spatial harmony of the image. The model then analyzes the relationship between these factors to predict scores that align closely with human judgment. Experimental results on the AADB image database show that the proposed model achieved a Spearman rank correlation coefficient (SRCC) of 0.716, indicating more consistent rank predictions, and a linear (Pearson) correlation coefficient (LCC) of 0.72, which is 2-4% higher than existing models.
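
The Rule of Thirds quantification described above can be sketched as a distance from an object's center to the nearest thirds intersection. The normalization below is an assumption for illustration; the abstract does not give the paper's exact formula.

```python
# Hypothetical rule-of-thirds score: 1.0 when an object's center sits exactly
# on a thirds intersection, 0.0 at the worst case (an image corner).
# Coordinates are fractions of the frame width/height.
import math

THIRDS_POINTS = [(x, y) for x in (1 / 3, 2 / 3) for y in (1 / 3, 2 / 3)]
MAX_DIST = math.hypot(1 / 3, 1 / 3)  # corner-to-nearest-intersection distance

def rule_of_thirds_score(cx, cy):
    d = min(math.hypot(cx - px, cy - py) for px, py in THIRDS_POINTS)
    return 1.0 - d / MAX_DIST

print(rule_of_thirds_score(1 / 3, 1 / 3))  # on an intersection -> 1.0
print(rule_of_thirds_score(0.5, 0.5))      # a centered object scores lower
```

In the paper's pipeline a value like this would be one input feature alongside the color-palette features.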

A Bibliometric Analysis of Global Research Trends in Digital Therapeutics (디지털 치료기기의 글로벌 연구 동향에 대한 계량서지학적 분석)

  • Dae Jin Kim;Hyeon Su Kim;Byung Gwan Kim;Ki Chang Nam
    • Journal of Biomedical Engineering Research
    • /
    • v.45 no.4
    • /
    • pp.162-172
    • /
    • 2024
  • To analyse overall research trends in digital therapeutics, this study conducted a quantitative bibliometric analysis of articles published in the ten years from 2014 to 2023. We extracted bibliographic information on studies related to digital therapeutics from the Web of Science (WOS) database and analyzed publication status, citations, and keywords using R (version 4.3.1) and VOSviewer (version 1.6.18). A total of 1,114 articles were included in the study, and the annual publication growth rate for digital therapeutics was 66.1%, a very rapid increase. "health" was the most used keyword based on Keyword Plus, and "cognitive-behavioral therapy", "depression", "healthcare", "mental-health", "meta-analysis", and "randomized controlled-trial" were the research keywords that have driven the development and impact of digital therapeutics over the long term. Five clusters were observed in the co-occurrence network analysis, with new research keywords such as "artificial intelligence", "machine learning", and "regulation" appearing in recent years. In our analysis of research trends in digital therapeutics, keywords related to mental health, such as depression, anxiety, and disorder, were the top keywords by occurrences and total link strength. While many studies have shown the positive effects of digital therapeutics, low engagement and high dropout rates remain a concern, and much research is being done to evaluate and improve them. Future studies should expand the search terms to ensure the representativeness of the results.
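
The 66.1% annual growth rate is presumably a compound growth rate between the first and last year's publication counts, as common bibliometric tools compute it. A minimal sketch with hypothetical yearly counts (not the study's actual data):

```python
# Compound annual growth rate of publication counts, as typically reported
# in bibliometric studies. The counts below are made up for illustration.
def annual_growth_rate(counts_by_year):
    years = sorted(counts_by_year)
    first, last = counts_by_year[years[0]], counts_by_year[years[-1]]
    span = years[-1] - years[0]  # number of year-over-year steps
    return ((last / first) ** (1 / span) - 1) * 100

counts = {2014: 3, 2023: 290}  # hypothetical: 3 papers in 2014, 290 in 2023
print(round(annual_growth_rate(counts), 1))
```

A rate above 60% per year means output roughly quintuples every three to four years, which matches the abstract's description of a very rapid increase.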

Multi-level Analysis of the Antecedents of Knowledge Transfer: Integration of Social Capital Theory and Social Network Theory (지식이전 선행요인에 관한 다차원 분석: 사회적 자본 이론과 사회연결망 이론의 결합)

  • Kang, Minhyung;Hau, Yong Sauk
    • Asia pacific journal of information systems
    • /
    • v.22 no.3
    • /
    • pp.75-97
    • /
    • 2012
  • Knowledge residing in the heads of employees has always been regarded as one of the most critical resources within a firm. However, many attempts to facilitate knowledge transfer among employees have been unsuccessful because of motivational and cognitive problems between the knowledge source and the recipient. Social capital, defined as "the sum of the actual and potential resources embedded within, available through, derived from the network of relationships possessed by an individual or social unit" [Nahapiet and Ghoshal, 1998], has been suggested as a way to resolve these motivational and cognitive problems of knowledge transfer. Within social capital theory, there are two research streams. One holds that social capital strengthens group solidarity and brings about cooperative behaviors among group members, such as voluntary help to colleagues; social capital can therefore motivate an expert to transfer his or her knowledge to a colleague in need without any direct reward. The other holds that social capital provides access to various resources that its owner does not possess directly. In the knowledge transfer context, an employee with social capital can access and learn much knowledge from his or her colleagues. Social capital thus benefits both the knowledge source and the recipient, in different ways. However, prior research on knowledge transfer and social capital has mostly been limited to one of these two research streams and has covered only the knowledge source's or the knowledge recipient's perspective. Social network theory, which focuses on the structural dimension of social capital, clearly explains the in-depth mechanisms behind social capital's two different benefits. A 'strong tie' builds up identification, trust, and emotional attachment between the knowledge source and the recipient, and therefore motivates the knowledge source to transfer his or her knowledge to the recipient.
On the other hand, a 'weak tie' easily expands to 'diverse' knowledge sources because it does not take much effort to manage. The real value of a 'weak tie' therefore comes from the 'diverse network structure', not the 'weak tie' itself. This implies that the two different perspectives on tie strength can co-exist: for example, an extroverted employee can manage many 'strong' ties with 'various' colleagues. In this regard, the individual-level structure of one's relationships as well as the dyadic-level relationship should be considered together to provide a holistic view of social capital. In addition, the interaction effect between individual-level and dyadic-level characteristics can be examined. Based on these arguments, this study addresses the following research questions. (1) How does the social capital of the knowledge source and the recipient influence knowledge transfer respectively? (2) How does the strength of ties between the knowledge source and the recipient influence knowledge transfer? (3) How does the social capital of the knowledge source and the recipient influence the effect of the strength of ties between them on knowledge transfer? Based on social capital theory and social network theory, a multi-level research model is developed that considers both the individual-level social capital of the knowledge source and the recipient and the dyadic-level strength of the relationship between them. A 'cross-classified random effect model', one of the multi-level analysis methods, is adopted to analyze the survey responses of 337 R&D employees. The analysis yields several findings. First, among the three dimensions of the knowledge source's social capital, network centrality (i.e., the structural dimension) shows a significant direct effect on knowledge transfer. The knowledge recipient's network centrality, on the other hand, is not directly influential.
Instead, it strengthens the influence of the strength of ties between the knowledge source and the recipient on knowledge transfer. That is, the knowledge recipient's network centrality does not directly increase knowledge transfer; rather, by providing access to various knowledge sources, it provides the context in which a strong tie between the knowledge source and the recipient leads to effective knowledge transfer. In short, network centrality has an indirect effect on knowledge transfer from the knowledge recipient's perspective, while it has a direct effect from the knowledge source's perspective. This is the most important contribution of this research. In addition, contrary to the research hypothesis, the company tenure of the knowledge recipient negatively influences knowledge transfer, suggesting that experienced employees do not seek new knowledge and stick to their own. This is also an interesting result; one possible reason is the hierarchical culture of Korea, such as a fear of losing face in front of subordinates. From a research methodology perspective, the multi-level analysis adopted in this study seems very promising for management research, which often has a multi-level data structure such as employee-team-department-company. Social network analysis is also a promising research approach given the exploding availability of online social network data.

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its important functions are "automatic differentiation" and "GPU utilization". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the inception of Tensorflow, however, companies such as Microsoft and Facebook appear to have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks. We therefore compare three frameworks that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, in terms of coding convenience, the order is CNTK, Tensorflow, and then Theano. This criterion is based simply on code length; the learning curve and ease of coding were not the main concern.
By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should note, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or search method we can think of. Our assessment of execution speed is that there is no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not identical: the CNTK code had to be run on a PC without a GPU, where execution can be as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Some of the important attributes are the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important. And for those learning deep learning, the availability of sufficient examples and references also matters.
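
The automatic differentiation described above — partial derivatives stored on the edges of a computational graph and combined by the chain rule — can be sketched in a few lines of Python. This toy reverse-mode implementation is purely illustrative and unrelated to the internals of any of the three frameworks:

```python
# Toy reverse-mode automatic differentiation on a computational graph.
# Each node records (parent, local_gradient) pairs; backward() walks the
# graph in reverse topological order, accumulating gradients by the chain rule.
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # (parent_node, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

def backward(output):
    order, seen = [], set()
    def topo(node):                 # build topological order once
        if id(node) in seen:
            return
        seen.add(id(node))
        for parent, _ in node.parents:
            topo(parent)
        order.append(node)
    topo(output)
    output.grad = 1.0
    for node in reversed(order):    # propagate gradients backward
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad

x, y = Node(2.0), Node(3.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1, dz/dy = x
backward(z)
print(x.grad, y.grad)  # 4.0 2.0
```

Real frameworks apply exactly this idea, only with tensors on each node and GPU kernels for the local gradients.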

Topic Continuity in Korean Narrative (한국 설화문에서의 화제표현의 연속성)

  • Chong, Hi-Ja
    • Korean Journal of Cognitive Science
    • /
    • v.2 no.2
    • /
    • pp.405-428
    • /
    • 1990
  • Language has a social function: to communicate information. Linguists have gradually paid attention to this function of language since the 1960s, especially to the relationship among form, meaning, and function. This relationship can be grasped more clearly through discourse-based analysis than through sentence-based analysis. Much research has centered on the discourse-functional notion of topic. In the early 1970s, the subject was defined as the grammaticalized topic, the topic as a discrete single constituent of the clause. In the late 1970s, several linguists including Givon suggested that the topic was not an atomic, discrete entity, and that the clause could have more than one topic. The purpose of the present study is, following Givon, to study the grammatical coding devices of topic and to measure the relative topic continuity/discontinuity of participant arguments in Korean narratives. By doing so, I hope to shed some light on effective ways of communicating information. The grammatical coding devices analyzed are the following eight structures: zero anaphora, personal pronouns, demonstrative pronouns, names, noun phrases following demonstratives, noun phrases following possessives, definite noun phrases, and indefinite referentials. The narrative studied for the count was taken from The Korean CIA Chief's Testimony: Revolution and Idol by Hyung Wook Kim. It was chosen because it was assumed that Kim's purpose in the book was to tell a true story, which would not distort the natural use of language for literary effect. The measures taken in the analysis were those of 'lookback', 'persistence', and 'ambiguity'. The first of these, 'lookback', measures the size of the gap between the previous occurrence of a referent and its current occurrence in the clause. The measure of persistence, a measure of the speaker's topical intent, reflects the topic's importance in the discourse. The third is a measure of ambiguity.
This is necessary for assessing the disruptive effects that other topics within the five previous clauses may have on topic identification: the more other topics are present within the five previous clauses, the more difficult it is to correctly identify a topic. The results of the present study show that the humanness of entities is the most powerful factor in topic continuity in narrative discourse. The semantic roles of human arguments in narrative discourse tend to be agents or experiencers. Since agents and experiencers have high topicality in discourse, human entities clearly become clausal or discoursal topics. The results also show that the grammatical devices signal varying degrees of topic continuity/discontinuity in continuous discourse: the more continuous a topic argument is, the less it is coded. For example, personal pronouns show the most continuity and indefinite referentials the least. The study strongly suggests that topic continuity/discontinuity is controlled not only by the grammatical devices available in the language but also by socio-cultural factors and the writer's intentions.
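
The 'lookback' measure described above can be sketched as follows. The cap of 20 clauses for a referent with no recent prior mention follows the convention of Givon-style referential-distance counts; it is an assumption here, since the abstract does not state the cap used.

```python
# Hypothetical "lookback" (referential distance) count: for each clause in
# which a referent appears, how many clauses back it last appeared,
# capped (here at 20) when there is no prior mention.
def lookback(clauses, referent, cap=20):
    distances = []
    last_seen = None
    for i, mentions in enumerate(clauses):
        if referent in mentions:
            if last_seen is None:
                distances.append(cap)  # first mention: maximal distance
            else:
                distances.append(min(i - last_seen, cap))
            last_seen = i
    return distances

# Each clause is represented as the set of referents it mentions.
clauses = [{"Kim"}, {"Kim", "agent"}, {"agent"}, {"Kim"}]
print(lookback(clauses, "Kim"))  # [20, 1, 2]
```

A highly continuous topic yields a run of small distances, which by the study's finding would tend to be coded minimally (e.g., zero anaphora or a pronoun).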

Mapping Heterogenous Ontologies for the HLP Applications - Sejong Semantic Classes and KorLexNoun 1.5 - (인간언어공학에의 활용을 위한 이종 개념체계 간 사상 - 세종의미부류와 KorLexNoun 1.5 -)

  • Bae, Sun-Mee;Im, Kyoung-Up;Yoon, Ae-Sun
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.1
    • /
    • pp.95-126
    • /
    • 2010
  • This study proposes a bottom-up, inductive manual mapping methodology for integrating two heterogeneous fine-grained ontologies that were built by top-down, deductive methodologies, namely the Sejong semantic classes (SJSC) and the upper nodes in KorLexNoun 1.5 (KLN), for HLP applications. It also discusses various problems that arise in the mapping process from the heterogeneity of the two language resources, and proposes solutions. The mapping methodology uses the terminal nodes of SJSC and the Least Upper Bounds (LUBs) of KLN as basic mapping units. The mapping procedure is as follows. First, the candidate groups for mapping are determined by the lexical correlation between the synsets of KLN and the noun senses of the Sejong Noun Dictionary (SJND), which are classified according to SJSC. Second, the meanings of the candidate groups are precisely disambiguated using the linguistic information provided by the two ontologies, i.e., the hierarchical structures, the definitions, and the examples. Third, the level of the LUB is determined by applying the appropriate predicates and definitions of SJSC to the upper, lower, and sister nodes of the candidate LUB. Fourth, the possibility of mapping to the terminal node of SJSC is judged by comparing the hierarchical relations of the two ontologies. Finally, incorrect synsets of KLN and terminological candidate groups are excluded from the mapping. This study makes positive use of the various language information described in each ontology in establishing the mapping criteria, which is indeed the advantage of fine-grained manual mapping. The result of applying the proposed methodology shows that 6,487 LUBs are mapped to 474 terminal and non-terminal nodes of SJSC, excluding multiply mapped nodes, and that 88,255 nodes of KLN are mapped, including all lower-level nodes of the mapped LUBs. The total mapping coverage is 97.91% of KLN synsets. This result can be applied in many elaborate syntactic and semantic analyses for Korean language processing.

Quantitative Analysis of GBCA Reaction by Mol Concentration Change on MRI Sequence (MRI sequence에 따른 GBCA 몰농도별 반응에 대한 정량적 분석)

  • Jeong, Hyun Keun;Jeong, Hyun Do;Kim, Ho Chul
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.2
    • /
    • pp.182-192
    • /
    • 2015
  • In this paper, we examine how the reaction rate changes with molar concentration in contrast-enhanced MRI using a GBCA (Gadolinium-Based Contrast Agent), and show how the response patterns depend on MRI sequences built on different physical principles. For this study, we made an MRI phantom ourselves: 500 mmol Gadoteridol was mixed with saline in 28 containers at concentrations ranging from 500 to 0 mmol. The phantom was then scanned in a 1.5T bore with physically different MRI sequences: T1 SE, T2 FLAIR, T1 FLAIR, 3D FLASH, T1 3D SPACE, and 3D SPCIR. The results were as follows. T1 spin echo: total SI (signal intensity) 15608.7, max peak 1352.6 at 1 mmol. T2 FLAIR: total SI 9106.4, max peak 1721.6 at 0.4 mmol. T1 FLAIR: total SI 20972.5, max peak 1604.9 at 1 mmol. 3D FLASH: total SI 20924.0, max peak 1425.7 at 40 mmol. 3D SPACE 1mm: total SI 6399.0, max peak 528.3 at 3 mmol. 3D SPACE 5mm: total SI 6276.5, max peak 514.6 at 2 mmol. 3D SPCIR: total SI 1778.8, max peak 383.8 at 0.4 mmol. In most sequences, high signal intensity appeared at the diluted, lower concentrations rather than at high concentrations, and the max peak and the shape of the curve differed across sequences. With these quantitative results on GBCA reaction rates by sequence, we expect that practical contrast-enhanced MR protocols can be designed for the clinical field.
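
The peak at dilute concentrations has a standard physical explanation: gadolinium shortens both T1 (raising spin-echo signal) and T2 (lowering it), so the signal peaks at a low molar concentration. The sketch below uses a textbook spin-echo signal model with illustrative relaxivity and tissue values that are assumptions, not the study's measured parameters.

```python
# Hedged sketch of spin-echo signal vs. GBCA concentration.
# S = (1 - exp(-TR*R1)) * exp(-TE*R2), with R1 = 1/T1_0 + r1*C and
# R2 = 1/T2_0 + r2*C. All parameter values below are illustrative.
import math

def se_signal(c_mmol, tr=0.5, te=0.015, t1_0=3.0, t2_0=1.0, r1=4.1, r2=5.0):
    """Relative spin-echo signal at concentration c (mmol/L); times in
    seconds, relaxivities in 1/(mM*s)."""
    R1 = 1.0 / t1_0 + r1 * c_mmol
    R2 = 1.0 / t2_0 + r2 * c_mmol
    return (1.0 - math.exp(-tr * R1)) * math.exp(-te * R2)

concentrations = [0.4, 1, 2, 3, 5, 10, 40, 100, 500]
peak = max(concentrations, key=se_signal)
print(peak)  # the maximum falls at a dilute concentration, not at 500 mmol
```

Under these assumed parameters the model reproduces the qualitative finding above: T1 shortening dominates at low concentration, while T2 shortening suppresses the signal at high concentration.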

User Centered Interface Design of Web-based Attention Testing Tools: Inhibition of Return(IOR) and Graphic UI (웹 기반 주의력 검사의 사용자 인터페이스 설계: 회귀억제 과제와 그래픽 UI를 중심으로)

  • Kwahk, Ji-Eun;Kwak, Ho-Wan
    • Korean Journal of Cognitive Science
    • /
    • v.19 no.4
    • /
    • pp.331-367
    • /
    • 2008
  • This study aims to validate a web-based neuropsychological testing tool developed by Kwak (2007) and to suggest solutions to potential problems that can deteriorate its validity. When it targets a wider range of subjects, a web-based neuropsychological testing tool is challenged by high drop-out rates, lack of motivation, lack of interactivity with the experimenter, fear of computers, etc. As a possible solution to these threats, this study redesigns the user interface of a web-based attention testing tool through three phases of study. In Study 1, an extensive analysis of Kwak's (2007) attention testing tool was conducted to identify potential usability problems. The Heuristic Walkthrough (HW) method was used by three usability experts to review various design features. As a result, many problems were found throughout the tool: the design of the instructions, user information survey forms, task screens, results screens, etc. did not conform to the needs of users and their tasks. In Study 2, 11 guidelines for the design of web-based attention testing tools were established based on the findings from Study 1. The guidelines were used to optimize the design and organization of the tool to fit user and task needs. The resulting new design alternative was then implemented as a working prototype using the Java programming language. In Study 3, a comparative study was conducted to demonstrate the superiority of the new design of the attention testing tool (the graphic style tool) over the existing design (the text style tool). A total of 60 subjects participated in user testing sessions in which error frequency, error patterns, and subjective satisfaction were measured through performance observation and questionnaires. Through the task performance measurement, a number of user errors of various types were observed in the existing text style tool. The questionnaire results also favored the new graphic style tool: users rated it higher than the existing text style tool in terms of overall satisfaction, screen design, terms and system information, ease of learning, and system performance.
