• Title/Summary/Keyword: learning sources

Search results: 348

News Consumption and Behavior of Young Adults and the Issue of Fake News

  • Nazari, Zeinab;Oruji, Mozhgan;Jamali, Hamid R.
    • Journal of Information Science Theory and Practice, v.10 no.2, pp.1-16, 2022
  • This study aimed to understand young adults' attitudes toward the news and news sources they consumed, and how they encountered the fake-news phenomenon. A qualitative approach was used, with semi-structured interviews of 41 young adults (aged 20-30) in Tehran, Iran. Findings revealed that about half of the participants favored social media, a smaller group used traditional media, and only a few maintained that traditional and modern media should be used together. News quality was considered lower on social media than in traditional news sources. Furthermore, the young adults usually followed news related to issues that affected their daily lives, and they typically tended to share news. To detect fake news, they checked several media outlets and compared the information; profiteering and attracting audiences' attention were seen as the most important reasons for the existence of fake news. This is the first qualitative study of the news consumption behavior of young adults in a politicized society.

Constraining the Evolution of Epoch of Reionization by Deep-Learning the 21-cm Differential Brightness Temperature

  • Kwon, Yungi;Hong, Sungwook E.
    • The Bulletin of The Korean Astronomical Society, v.44 no.2, pp.78.3-78.3, 2019
  • We develop a novel technique that can constrain the evolutionary track of the epoch of reionization (EoR) by applying a convolutional neural network (CNN) to the 21-cm differential brightness temperature. We use 21cmFAST, a fast semi-numerical cosmological 21-cm signal simulator, to produce mock 21-cm maps between z = 6 and 13. We design a CNN architecture that predicts the volume-averaged neutral hydrogen fraction from a given 21-cm map. The estimated neutral fraction agrees well with its true value even after smoothing the 21-cm map with somewhat realistic choices of beam size and frequency bandwidth for the Square Kilometre Array (SKA). Our technique could be further utilized to denoise the 21-cm map or to constrain the properties of the radiation sources.

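The entry above describes a CNN that regresses the volume-averaged neutral hydrogen fraction from a 21-cm brightness-temperature map. The paper's actual architecture is not given in the abstract, so the following PyTorch sketch only illustrates that kind of map-to-scalar regression; the layer sizes, the 64x64 map resolution, and the random stand-in data are assumptions, not the authors' setup.

```python
# Minimal sketch of a CNN mapping a 2D 21-cm brightness-temperature map to a
# single scalar (the volume-averaged neutral hydrogen fraction). The layer
# sizes and the 64x64 input resolution are illustrative assumptions.
import torch
import torch.nn as nn

class NeutralFractionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),  # neutral fraction lies in [0, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = NeutralFractionCNN()
maps = torch.randn(8, 1, 64, 64)          # random stand-in for mock 21cmFAST maps
pred_xhi = model(maps)                    # predicted neutral fractions, shape (8, 1)
loss = nn.MSELoss()(pred_xhi, torch.rand(8, 1))
loss.backward()
```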

Exploring the Epistemic Actions in Pre-service Teachers' Tasks

  • Jihyun Hwang
    • Research in Mathematical Education, v.26 no.1, pp.19-30, 2023
  • This study analyzes the tasks selected and implemented by pre-service mathematics teachers to support students' development of epistemic actions. Data was collected from 20 students who participated in a mathematics education curriculum theory course during one semester, and multiple data sources were used to gather information about the microteaching sessions. The study focused on the tasks selected and demonstrated during microteaching by pre-service teachers. The results suggest that providing students with a variety of learning opportunities that engage them in different combinations of abductive and deductive epistemic actions is important. The tasks selected by pre-service teachers primarily focused on understanding concepts, calculation, and reasoning. However, the use of engineering tools may present challenges as it requires students to engage in two epistemic actions simultaneously. The study's findings can inform the development of more effective approaches to mathematics education and can guide the development of teacher training programs.

Teacher Efficacy as an Affective Affiliate of Pedagogical Content Knowledge

  • Park, Soon-Hye
    • Journal of The Korean Association For Science Education, v.27 no.8, pp.743-754, 2007
  • This paper argues that teacher efficacy is an affective affiliate of pedagogical content knowledge (PCK), based on empirical data from a study on the nature and construct of PCK. The study was a collective case study utilizing qualitative research methods. The participants were three high school science teachers in the U.S. Data were collected from multiple sources such as classroom observation, interviews, teachers' written reflections, students' work samples, and researchers' field notes. Data were analyzed using the "In-depth Analysis of Explicit PCK" developed by the author. The analysis indicated that teacher efficacy played a critical role in developing PCK by facilitating the movement from PCK to the enactment of PCK.

Case Study on Software Education using Social Coding Sites (소셜 코딩 사이트를 활용한 소프트웨어 교육 사례 연구)

  • Kang, Hwan-Soo;Cho, Jin-Hyung;Kim, Hee-Chern
    • Journal of Digital Convergence, v.15 no.5, pp.37-48, 2017
  • Recently, the importance of software education has been growing because the computational thinking it develops is recognized as a key means of future economic development. The human resources who will lead the 4th industrial revolution need convergence and creativity, and computational thinking grounded in critical thinking, communication, and collaborative learning is known to be effective for creativity education. Software education also needs to reflect social issues such as collaboration among developers who share interests and open-source development methods. Github is a leading social coding site that facilitates collaborative work among developers and supports community activities in open software development. In this study, we apply Github to operational cases of basic learning of social coding sites, use of a storage server for lecture sources and outputs, and open collaborative learning. We propose an educational model consisting of four stages: Introduction to Github, Using Repositories, Applying Social Coding, and Making a Personal Portfolio with Assessment. The proposed model is effective for software education, attracting students' interest and leading to pride in their work.
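As a concrete illustration of the "Using Repositories" stage described above, the sketch below creates a course repository through the GitHub REST API using Python's requests library. The repository name, description, and token handling are hypothetical; the paper does not prescribe any particular script, and the GitHub web interface or a git client can of course be used instead.

```python
# Hedged sketch: create a course repository via the GitHub REST API.
# GITHUB_TOKEN, the repository name, and the description are hypothetical values.
import os
import requests

token = os.environ["GITHUB_TOKEN"]           # personal access token with "repo" scope
payload = {
    "name": "sw-education-portfolio",         # hypothetical repository name
    "description": "Lecture sources, outputs, and personal portfolio",
    "private": False,
    "auto_init": True,                        # start with an initial README commit
}
resp = requests.post(
    "https://api.github.com/user/repos",
    json=payload,
    headers={
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
print("Created:", resp.json()["html_url"])
```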

A Content Analysis on Learning Experience of K-MOOC(Korea-Massive Open Online Course) : Focused on Korean University Students (한국 대학생의 K-MOOC 학습 경험에 대한 내용 분석)

  • Park, Tae-Jung;Rah, Ilju
    • The Journal of the Korea Contents Association, v.16 no.12, pp.446-457, 2016
  • The purpose of the study was to understand various aspects of Korean university students' learning experiences with K-MOOC. The three main foci were the major motivations for enrolling in a particular MOOC class, the actual learning experiences in the class, and the perceived achievement from the class. The study employed inductive content analysis as its major analysis tool. Reflective journals from 94 students who enrolled in K-MOOC classes were collected and analyzed at the end of the semester. The results indicated that most students selected specific K-MOOC classes based on their general interest in the topics the classes offered. Other factors, such as intellectual curiosity, practical reasons related to study or work, and popularity, also influenced the selection of MOOC classes. Watching videos, taking quizzes, and taking tests were the three major sources of student satisfaction. Most students found K-MOOC technically satisfactory; however, some reported simple errors and the absence of advanced functions in the platform. Students perceived their academic achievements positively in terms of obtaining knowledge (remembering and understanding), attitudes (receiving), and skills through K-MOOC. This study ultimately offers a new awareness of learning experiences with K-MOOC from the students' perspective. Future research is needed to understand the relationships between students' learning experiences and their performance in MOOC classes.

A comparative research between 4th-grade and lower grades in elementary mathematics (초등학교 4학년과 저학년 수학의 비교 연구)

  • Kim, Sung-Joon
    • Journal of the Korean School Mathematics Society, v.10 no.4, pp.415-435, 2007
  • The transitions from elementary to secondary school, between grades, and between learning contents are essential problems in education. Connectivity between learning contents is important for students' growth and development, and the gap between the lower and higher grades of elementary school is no less extensive than the gap between elementary and secondary mathematics. In this paper, we start from this concern about the transition and connectivity between the lower and higher grades of elementary school. To compare across elementary grades, we first focus on 4th-grade mathematics, which concludes the lower grades and begins the higher grades at the same time. First, we administered a questionnaire to 4th-grade students and to teachers in charge of 4th grade; it consisted of questions about the degree of difficulty in learning (and teaching) 4th-grade mathematics compared with 3rd-grade mathematics. Second, we compared lower-grade (1st grade) lessons and 4th-grade lessons using a qualitative method: we analyzed lesson contents, activities, and time through an 'analysis of the learning course', and compared the patterns of eliciting questions, question patterns, nomination patterns, and feedback patterns between 1st-grade and 4th-grade lessons. We hope that this paper serves as a fundamental source for investigating the connectivity between the lower and higher grades of elementary mathematics in the future.


Automatic Generation of Information Extraction Rules Through User-interface Agents (사용자 인터페이스 에이전트를 통한 정보추출 규칙의 자동 생성)

  • 김용기;양재영;최중민
    • Journal of KIISE: Software and Applications, v.31 no.4, pp.447-456, 2004
  • Information extraction is the process of recognizing and fetching particular information fragments from a document. In order to extract information uniformly from many heterogeneous information sources, it is necessary to produce information extraction rules, called a wrapper, for each source. Previous methods of information extraction can be categorized into manual and automatic wrapper generation. In the manual method, since the wrapper is generated by a human expert who analyzes documents and writes rules, the precision of the wrapper is very high, whereas it reveals problems in scalability and efficiency. In the automatic method, an agent program analyzes a set of example documents and produces a wrapper through learning. Although it is very scalable, this method has difficulty in generating correct rules per se, and the generated rules are sometimes unreliable. This paper tries to combine the manual and automatic methods by proposing a new way of learning information extraction rules. We adopt a supervised learning scheme in which a user-interface agent is designed to obtain information from the user about what to extract from a document, and XML-based information extraction rules are then generated through learning according to these inputs. The interface agent is used not only to generate new extraction rules but also to modify and extend existing ones to enhance the precision and recall of the extraction system. We have conducted a series of experiments to test the system, and the results are very promising. We hope that our system can be applied to practical systems such as information-mediator agents.
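The abstract above does not reproduce the XML rule format generated by the interface agent, so the sketch below only conveys the general idea of applying XML-encoded wrapper rules to a document; the <rule> schema, the field names, and the regex-based matching are hypothetical stand-ins, not the paper's actual representation.

```python
# Hedged sketch: a hypothetical XML wrapper rule set applied with regular
# expressions. The <rule> schema below is illustrative only.
import re
import xml.etree.ElementTree as ET

RULES_XML = """
<wrapper source="example-bookstore">
  <rule field="price" pattern="Price:\\s*\\$(\\d+\\.\\d{2})"/>
  <rule field="title" pattern="&lt;h2&gt;(.*?)&lt;/h2&gt;"/>
</wrapper>
"""

def extract(document: str, rules_xml: str) -> dict:
    """Apply each XML-encoded rule to the document and collect the matches."""
    results = {}
    for rule in ET.fromstring(rules_xml).findall("rule"):
        match = re.search(rule.get("pattern"), document, re.DOTALL)
        if match:
            results[rule.get("field")] = match.group(1)
    return results

doc = "<h2>Machine Learning Basics</h2> ... Price: $37.50"
print(extract(doc, RULES_XML))   # {'price': '37.50', 'title': 'Machine Learning Basics'}
```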

Multi-source information integration framework using self-supervised learning-based language model (자기 지도 학습 기반의 언어 모델을 활용한 다출처 정보 통합 프레임워크)

  • Kim, Hanmin;Lee, Jeongbin;Park, Gyudong;Sohn, Mye
    • Journal of Internet Computing and Services, v.22 no.6, pp.141-150, 2021
  • AI-enabled warfare, based on artificial intelligence technology, is expected to become the main issue in future warfare. Natural language processing is a core AI technology, and it can significantly contribute to reducing the burden on commanders and staff of understanding reports, information objects, and intelligence written in natural language. In this paper, we propose a Language model-based Multi-source Information Integration (LAMII) framework to reduce the information overload of commanders and support rapid decision-making. The proposed LAMII framework consists of two key steps: representation learning based on language models in a self-supervised way, and document integration using autoencoders. In the first step, representation learning that can identify the similarity relationship between two heterogeneous sentences is performed using a self-supervised learning technique. In the second step, using the learned model, documents from multiple sources that imply similar contents or topics are found and integrated; an autoencoder is used to measure the information redundancy of the sentences in order to remove duplicate sentences. To prove the superiority of our approach, we conducted comparison experiments using existing language models and the benchmark sets used to evaluate their performance. The results demonstrated that the proposed LAMII framework can predict the similarity relationship between heterogeneous sentences more effectively than the other language models.
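The LAMII models themselves are not described in enough detail here to reproduce, so the sketch below substitutes an off-the-shelf sentence-embedding model to show the two steps in miniature: scoring similarity between sentences drawn from different sources and dropping near-duplicates above a threshold (a simple stand-in for the paper's autoencoder-based redundancy measure). The model name, the example sentences, and the 0.85 threshold are assumptions.

```python
# Hedged sketch of multi-source sentence deduplication with a pretrained
# sentence-embedding model (a stand-in for the paper's own LAMII models).
# The model name and the 0.85 similarity threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "Enemy artillery was observed near the northern bridge at 06:00.",
    "Artillery units of the enemy were spotted by the north bridge this morning.",
    "Supply convoys are scheduled to depart from the eastern depot.",
]

embeddings = model.encode(sentences, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)   # pairwise cosine similarity

kept = []
for i, sentence in enumerate(sentences):
    # Keep a sentence only if it is not too similar to one already kept.
    if all(similarity[i, j] < 0.85 for j in kept):
        kept.append(i)

integrated = [sentences[i] for i in kept]
print(integrated)   # near-duplicate sentences above the threshold are dropped
```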

A Review of Deep Learning-based Trace Interpolation and Extrapolation Techniques for Reconstructing Missing Near Offset Data (가까운 벌림 빠짐 해결을 위한 딥러닝 기반의 트레이스 내삽 및 외삽 기술에 대한 고찰)

  • Jiho Park;Soon Jee Seol;Joongmoo Byun
    • Geophysics and Geophysical Exploration, v.26 no.4, pp.185-198, 2023
  • In marine seismic surveys, the inevitable occurrence of trace gaps at near offsets, resulting from geometrical differences between sources and receivers, adversely affects subsequent seismic data processing and imaging. The absence of data in the near-offset region hinders accurate seismic imaging; therefore, reconstructing the missing near-offset information is crucial for mitigating the influence of seismic multiples, particularly in offshore surveys where the impact of multiple reflections is relatively more pronounced. Conventionally, various interpolation methods based on the Radon transform have been proposed to address the near-offset data gap. However, these methods have several limitations, which has led to the recent emergence of deep-learning (DL)-based approaches as alternatives. In this study, we conducted an in-depth analysis of two representative DL-based studies to scrutinize the challenges that future work on near-offset reconstruction must address. Furthermore, through field-data experiments, we analyzed in detail the limitations encountered when applying previous DL-based trace interpolation techniques to the near-offset situation. Consequently, we suggest that near-offset data gaps must be approached by extrapolation rather than interpolation.
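To make the interpolation-versus-extrapolation distinction concrete, the sketch below shows a generic 1D convolutional network that predicts missing near-offset traces directly from the recorded traces of a shot gather, i.e., an extrapolation-style mapping. The architecture, the 112-recorded/16-missing trace split, and the 512-sample trace length are illustrative assumptions and do not correspond to either of the reviewed networks.

```python
# Hedged sketch: a 1D CNN that extrapolates missing near-offset traces from the
# recorded far-offset traces of a shot gather. All sizes are illustrative.
import torch
import torch.nn as nn

N_RECORDED, N_MISSING, N_SAMPLES = 112, 16, 512

class NearOffsetExtrapolator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_RECORDED, 128, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, N_MISSING, kernel_size=7, padding=3),
        )

    def forward(self, gather):            # gather: (batch, N_RECORDED, N_SAMPLES)
        return self.net(gather)           # predicted near-offset traces

model = NearOffsetExtrapolator()
recorded = torch.randn(4, N_RECORDED, N_SAMPLES)      # synthetic shot gathers
predicted_near = model(recorded)                      # (4, N_MISSING, N_SAMPLES)
loss = nn.L1Loss()(predicted_near, torch.randn(4, N_MISSING, N_SAMPLES))
loss.backward()
```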