• Title/Summary/Keywords: Data Science

Search results: 55,976 items (processing time: 0.071 sec)

2015 개정 교육과정에 따른 초등학교 과학과 교사용 지도서의 참고자료 분석 - 3~6학년 물리영역을 중심으로 - (Analysis of Reference Data in Science Guidebooks for Elementary Teachers Developed for 2015 Revised Curriculum - Focusing on Physics Section for the Third-Sixth Grade -)

  • 김형욱;송진웅
    • 한국초등과학교육학회지:초등과학교육 / Vol. 39, No. 2 / pp.155-167 / 2020
  • This study analyzed the reference data for the physics section in science guidebooks for teachers of the third to sixth grades in elementary schools, developed according to the 2015 revised curriculum. The reference data were categorized by subject, objective, and presentation form, and the visual data used within them were categorized by type. The findings show that among the subjects of the reference data, science knowledge was the most common (53.8%), followed by application to real life and then supplementary inquiry experiments and activities. The ratios of other subjects such as advanced science, environment, scientists, and science history were each less than 1%, so they need to be improved. Among the objectives of the reference data, knowledge provision was the most common (40.5%), while conceptual supplementation and deepening appeared at similar rates. Meanwhile, the expository type (88.4%) accounted for most of the presentation forms, and photographs and illustrations (93.6%) accounted for most of the visual data accompanying the reference data; thus a greater variety of presentation forms and an expanded range of visual data appear to be needed. This study is expected to provide suggestions for the meaningful use of reference data in teachers' guidebooks and for the development of science guidebooks for elementary school teachers.

Analysis of Computational Science and Engineering SW Data Format for Multi-physics and Visualization

  • Ryu, Gimyeong;Kim, Jaesung;Lee, Jongsuk Ruth
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 2 / pp.889-906 / 2020
  • Analysis of multi-physics systems and visualization of simulation data are crucial and difficult tasks in computational science and engineering. In Korea, the Korea Institute of Science and Technology Information (KISTI) developed EDISON, a web-based computational science simulation platform, which is now in its ninth year of service. Hitherto, the EDISON platform has focused on providing a robust simulation environment and various computational science analysis tools. However, owing to the growing demands of collaborative research, data format standardization has become more important. In addition, as visualization of simulation data becomes more important for users' understanding, the need to analyze the input/output data of each software package has increased. Therefore, it is necessary to organize the data formats and metadata for the representative software provided by EDISON. In this paper, we analyzed computational fluid dynamics (CFD) and computational structural dynamics (CSD) simulation software in the field of mechanical engineering, where several physical phenomena (fluids, solids, etc.) interact. Additionally, in order to visualize various simulation result data, we used existing web visualization tools developed by third parties. In conclusion, based on the analysis of these data formats, it is possible to provide a foundation for multi-physics simulation and a web-based visualization environment, which will enable users to focus on simulation more conveniently.

MARK5B 시스템을 이용한 전파천문 데이터 처리 (RADIO ASTRONOMICAL DATA PROCESSING USING MARK5B)

  • 오세진;염재환;노덕규;정현수;제도흥;김광동;김범국;황철준;정구영
    • 천문학논총 / Vol. 21, No. 2 / pp.95-100 / 2006
  • In this paper, we describe the implementation of a radio astronomical data processing system using Mark5B and its development. KASI (Korea Astronomy and Space Science Institute) is constructing the KVN (Korean VLBI Network), to be completed by the end of 2007; it is the first VLBI (Very Long Baseline Interferometry) facility in Korea and is dedicated to mm-wave VLBI observation. KVN will adopt a DAS (Data Acquisition System) consisting of a digital filter with various functions and a 1 Gsps high-speed sampler that digitizes the radio astronomical data for analysis on the digital filter system. The analyzed data are then recorded at data rates of up to 1 Gbps. To verify this, we implemented a system capable of processing 1 Gbps data rates and carried out a data recording experiment.

Verification Algorithm for the Duplicate Verification Data with Multiple Verifiers and Multiple Verification Challenges

  • Xu, Guangwei;Lai, Miaolin;Feng, Xiangyang;Huang, Qiubo;Luo, Xin;Li, Li;Li, Shan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 2 / pp.558-579 / 2021
  • Cloud storage provides flexible data storage services that let data owners outsource their data remotely, reducing their storage operation and management costs. However, outsourced data raise security concerns for the data owner, since the cloud service provider may maliciously delete or corrupt them. Data integrity verification is an important way to check the integrity of outsourced data. Existing verification schemes, however, only consider the case of a single verifier launching multiple verification challenges, and neglect the verification overhead when multiple verifiers launch challenges at around the same time. In this case, the duplicate data appearing in multiple challenges are verified repeatedly, so verification resources are consumed in vain. We propose a duplicate data verification algorithm based on multiple verifiers and multiple challenges to reduce this overhead. The algorithm dynamically schedules the verifiers' challenges based on verification time and on the frequent itemsets of duplicate verification data in the challenge sets, found by applying the FP-Growth algorithm, and computes batch proofs for the frequent itemsets. The challenges are then split into two parts, i.e., duplicate data and unique data, according to the results of data extraction. Finally, the proofs of the duplicate data and the unique data are computed and combined to generate a complete proof for every original challenge. Theoretical analysis and experimental evaluation show that the algorithm reduces the verification cost and ensures the correctness of data integrity verification through flexible batch verification.
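The challenge-splitting step described in this abstract can be sketched in a few lines. As a simplification, a plain frequency count over individual data blocks stands in for full FP-Growth mining of frequent itemsets, and all names below are illustrative rather than taken from the paper:

```python
from collections import Counter

def split_challenges(challenges, min_support=2):
    """Split each verifier's challenge into duplicate blocks (shared across
    challenges, so their proof can be batched and computed once) and unique
    blocks. `challenges` maps verifier id -> set of requested data block ids."""
    counts = Counter()
    for blocks in challenges.values():
        counts.update(blocks)
    # Blocks requested by at least `min_support` verifiers are "frequent":
    # their proof is computed once and reused across verifiers.
    duplicates = {b for b, c in counts.items() if c >= min_support}
    split = {}
    for vid, blocks in challenges.items():
        split[vid] = (blocks & duplicates, blocks - duplicates)
    return duplicates, split

dups, split = split_challenges({
    "v1": {1, 2, 3},
    "v2": {2, 3, 4},
    "v3": {3, 5},
})
print(dups)         # blocks verified once for all verifiers: {2, 3}
print(split["v1"])  # ({2, 3}, {1})
```

Each verifier's final proof would then combine the shared batch proof over its duplicate blocks with an individual proof over its unique blocks.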

Deploying Linked Open Vocabulary (LOV) to Enhance Library Linked Data

  • Oh, Sam Gyun;Yi, Myongho;Jang, Wonghong
    • Journal of Information Science Theory and Practice / Vol. 3, No. 2 / pp.6-15 / 2015
  • Since the advent of Linked Data (LD) as a method for building webs of data, there have been many attempts to apply and implement LD in various settings. Efforts have been made to convert bibliographic data in libraries into Linked Data, thereby generating Library Linked Data (LLD). However, when memory institutions have tried to link their data with external sources based on principles suggested by Tim Berners-Lee, identifying appropriate vocabularies for use in describing their bibliographic data has proved challenging. The objective of this paper is to discuss the potential role of Linked Open Vocabularies (LOV) in providing better access to various open datasets and facilitating effective linking. The paper will also examine the ways in which memory institutions can utilize LOV to enhance the quality of LLD and LLD-based ontology design.

문헌정보학과의 데이터 사이언스 커리큘럼 개발 실태와 방향성 고찰 (Study on the Current Status of Data Science Curriculum in Library and Information Science and its Direction)

  • 강지혜
    • 한국도서관정보학회지 / Vol. 47, No. 3 / pp.343-363 / 2016
  • This study examines how data science courses are offered across 69 iSchools and, through comparison with domestic curricula, suggests a direction forward. The iSchools show a marked tendency to extend their course offerings into related fields, including health, technology, and the biosciences; in contrast, such convergence with adjacent disciplines was not actively observed in domestic curricula. How to process and manage data is another area the iSchools concentrate on, with courses focused on general data science, data management, and data security. Among courses on data storage, 'database'-related courses had the largest share, and statistics and analysis methods were offered at a similar rate. Based on the analysis of iSchool curricula and the comparison with domestic cases, this paper proposes that library and information science programs in Korea expand their data science offerings, strengthen their role in translational data science, and develop courses that build mathematical analysis skills, while also identifying specialized courses, offering experimental classes, and providing knowledge that interacts with technology.

Development of the software for high speed data transfer of the high-speed, large capacity data archive system for the storage of the correlation data from Korea-Japan Joint VLBI Correlator (KJJVC)

  • Park, Sun-Youp;Kang, Yong-Woo;Roh, Duk-Gyoo;Oh, Se-Jin;Yeom, Jae-Hwan;Sohn, Bong-Won;Yukitoshi, Kanya;Byun, Do-Young
    • 한국우주과학회:학술대회논문집(한국우주과학회보) / 2008 한국우주과학회보 Vol. 17, No. 2 / pp.37.2-37.2 / 2008
  • The Korea-Japan Joint VLBI Correlator (KJJVC), to be used for the Korean VLBI Network (KVN) at the Korea Astronomy & Space Science Institute (KASI), is a high-speed calculator that outputs correlation results at a maximum speed of 1.4 GB/sec. To receive and record this data at full speed with no loss, the design of the software running on the data archive system, which receives and records the correlator's output, is very important. A naive design using a single thread that alternately receives data from the network and records it can cause a bottleneck when processing high-speed data, with probable data loss, and cannot exploit hardware supporting multi-core or hyper-threading, or the operating systems that support such hardware. In this talk we summarize the design of the data transfer software for KJJVC and the high-speed, large-capacity data archive system, built with general socket programming and multi-threading techniques, and present pre-BMT (benchmarking test) results from tests of the storage vendors' proposed systems using this software.

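The single-thread bottleneck this abstract describes is conventionally avoided with a producer-consumer design: one thread receives from the network while another writes to disk, decoupled by a bounded queue. The sketch below illustrates that pattern only and is not the actual KJJVC software; `recv_chunk` and `write_chunk` are stand-ins for the real socket and file I/O:

```python
import queue
import threading

def receive_and_record(recv_chunk, write_chunk, maxsize=64):
    """Decouple network receive from disk write with a bounded queue, so a
    momentarily slow disk does not stall the receiving socket."""
    buf = queue.Queue(maxsize=maxsize)

    def receiver():
        while True:
            chunk = recv_chunk()      # e.g. sock.recv(1 << 20)
            buf.put(chunk)
            if not chunk:             # an empty chunk signals end of stream
                return

    def recorder():
        while True:
            chunk = buf.get()
            if not chunk:             # propagate the end-of-stream marker
                return
            write_chunk(chunk)        # e.g. f.write(chunk)

    t_recv = threading.Thread(target=receiver)
    t_rec = threading.Thread(target=recorder)
    t_recv.start(); t_rec.start()
    t_recv.join(); t_rec.join()
```

The bounded queue provides back-pressure: if the recorder falls behind, `buf.put` blocks the receiver instead of letting memory grow without limit.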

화장품 고객 정보를 이용한 마이크로 마케팅 (Micro marketing using a cosmetic transaction data)

  • 석경하;조대현;김병수;이종언;백승훈;전유중;이영배;김재길
    • Journal of the Korean Data and Information Science Society / Vol. 21, No. 3 / pp.535-546 / 2010
  • Customer information is commonly exploited through mileage schemes based on purchase amount or through customer tiers based on purchase frequency. In this study, focusing on repurchase, which is directly tied to company sales, we built a repurchase prediction model via logistic regression using customer and purchase information. The Heidke score was used as the measure for evaluating the prediction model, and the classification threshold was chosen to maximize the Heidke score. Using the repurchase prediction model, we constructed a repurchase index to tier customers, enabling more efficient customer management.
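The threshold-selection step in this abstract can be sketched as follows: the Heidke skill score is computed from the 2x2 contingency table, and the cutoff on the predicted repurchase probability is chosen to maximize it. This is a sketch assuming the logistic model already outputs probabilities; function names and the threshold grid are illustrative:

```python
import numpy as np

def heidke_skill_score(y_true, y_pred):
    """Heidke skill score from the 2x2 contingency table:
    HSS = 2(ad - bc) / ((a + c)(c + d) + (a + b)(b + d))."""
    a = np.sum((y_pred == 1) & (y_true == 1))  # hits
    b = np.sum((y_pred == 1) & (y_true == 0))  # false alarms
    c = np.sum((y_pred == 0) & (y_true == 1))  # misses
    d = np.sum((y_pred == 0) & (y_true == 0))  # correct negatives
    denom = (a + c) * (c + d) + (a + b) * (b + d)
    return 2.0 * (a * d - b * c) / denom if denom else 0.0

def best_threshold(y_true, scores, grid=np.linspace(0.05, 0.95, 19)):
    """Pick the probability cutoff that maximizes the Heidke skill score."""
    return max(grid,
               key=lambda t: heidke_skill_score(y_true, (scores >= t).astype(int)))
```

Customers with predicted probability at or above the chosen cutoff would be classified as likely repurchasers; the same index can also be binned into tiers for targeted marketing.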

Hybrid Recommendation Algorithm for User Satisfaction-oriented Privacy Model

  • Sun, Yinggang;Zhang, Hongguo;Zhang, Luogang;Ma, Chao;Huang, Hai;Zhan, Dongyang;Qu, Jiaxing
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 10 / pp.3419-3437 / 2022
  • Anonymization is an important technology for privacy protection in the data release process. Usually, before publishing data, the data publisher anonymizes the original data and then publishes the anonymized result. However, for data publishers with little or no background in anonymization techniques, configuring appropriate parameters for data with different characteristics is a difficult problem. In response, this paper adds a resource pool of historical configuration schemes to the traditional anonymization process, from which configuration parameters can be recommended automatically. On this basis, a user-satisfaction-oriented hybrid privacy model recommendation algorithm is formed. The algorithm includes a forward recommendation process and a reverse recommendation process, which serve users with different levels of anonymization expertise. The algorithm is suitable for a wide range of users, providing a simpler, more efficient, and automated solution for data anonymization that reduces processing time and improves the quality of the anonymized data, thereby enhancing data protection.
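One minimal way to realize the forward recommendation idea above is a similarity lookup over the historical pool: describe the dataset to be anonymized by a small numeric profile and return the parameters of the most similar past configuration. This is only an illustrative sketch, not the paper's actual scheme; the profile features and parameter names are hypothetical:

```python
import math

def recommend_config(profile, pool):
    """Forward recommendation sketch: `profile` maps feature name ->
    normalized value for the dataset to be anonymized; `pool` is a list of
    (historical_profile, anonymization_params) pairs. Returns the params of
    the historical scheme most similar by cosine similarity."""
    def cosine(p, q):
        keys = set(p) | set(q)
        dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in keys)
        norm_p = math.sqrt(sum(v * v for v in p.values()))
        norm_q = math.sqrt(sum(v * v for v in q.values()))
        return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0
    return max(pool, key=lambda entry: cosine(profile, entry[0]))[1]

pool = [
    ({"rows": 0.9, "quasi_ids": 0.3}, {"k": 5}),
    ({"rows": 0.2, "quasi_ids": 0.8}, {"k": 20, "l": 3}),
]
print(recommend_config({"rows": 0.85, "quasi_ids": 0.35}, pool))  # {'k': 5}
```

A reverse process could then rate the anonymized output and feed the accepted configuration back into the pool, gradually improving future recommendations.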