• Title/Summary/Keyword: 과학기술 데이터


Improved Performance of Image Semantic Segmentation using NASNet (NASNet을 이용한 이미지 시맨틱 분할 성능 개선)

  • Kim, Hyoung Seok;Yoo, Kee-Youn;Kim, Lae Hyun
    • Korean Chemical Engineering Research
    • /
    • v.57 no.2
    • /
    • pp.274-282
    • /
    • 2019
  • In recent years, big data analysis has expanded beyond prediction through modeling to include automatic control through reinforcement learning. Research on the use of image data is actively carried out in industrial fields such as chemicals, manufacturing, agriculture, and the bio-industry. In this paper, we applied NASNet, an AutoML reinforcement learning algorithm, to DeepU-Net, a neural network that modifies U-Net, to improve image semantic segmentation performance. We used BRATS2015 MRI data for performance verification. Simulation results show that DeepU-Net outperforms the U-Net neural network. To further improve segmentation performance, we removed the dropout layers typically applied to neural networks when the numbers of kernels and filters obtained through reinforcement learning in DeepU-Net were selected as hyperparameters. The resulting training accuracy was 0.5% higher and the verification accuracy 0.3% higher than DeepU-Net. The results of this study can be applied to fields such as MRI brain imaging diagnosis, thermal imaging camera abnormality diagnosis, nondestructive inspection, chemical leakage monitoring, and forest fire monitoring through CCTV.

Method for Importance based Streamline Generation on the Massive Fluid Dynamics Dataset (대용량 유동해석 데이터에서의 중요도 기반 스트림라인 생성 방법)

  • Lee, Joong-Youn;Kim, Min Ah;Lee, Sehoon
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.6
    • /
    • pp.27-37
    • /
    • 2018
  • Streamline generation is one of the most representative visualization methods for analyzing the flow of a fluid dynamics dataset. Determining seed locations for effective streamline visualization, however, is a challenging problem, and computing effective seed locations and streamlines on a massive flow dataset takes considerable time. In this paper, we propose an importance-based method to determine seed locations for effective streamline placement, as well as a parallel streamline visualization method on a distributed visualization system. Moreover, we introduce case studies on real fluid dynamics datasets using the GLOVE visualization system to evaluate the proposed method.
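The importance-based seeding idea can be sketched as follows. This is a minimal illustration, not the paper's actual importance measure: it assumes a hypothetical scalar importance field on a grid and greedily picks the highest-importance points while enforcing a minimum spacing so seeds do not cluster.

```python
# Sketch: pick streamline seed points where a scalar "importance" field is
# largest, spacing them out so seeds do not cluster. The importance field
# and spacing criterion here are hypothetical, for illustration only.

def select_seeds(importance, k, min_dist):
    """importance: dict mapping (x, y) grid points to importance values."""
    candidates = sorted(importance, key=importance.get, reverse=True)
    seeds = []
    for p in candidates:
        if len(seeds) == k:
            break
        # enforce a minimum spacing between chosen seeds
        if all((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2 >= min_dist ** 2
               for s in seeds):
            seeds.append(p)
    return seeds

# toy importance field: larger values farther from the center (2, 2)
field = {(x, y): abs(x - 2) + abs(y - 2) for x in range(5) for y in range(5)}
print(select_seeds(field, 3, 2))
```

On a real dataset the importance field would be derived from flow features (e.g. vorticity or gradient magnitude) rather than a toy function, and the selection would run in parallel over partitions of the domain.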

A Study on Legal Regulation of Neural Data and Neuro-rights (뇌신경 데이터의 법적 규율과 뇌신경권에 관한 소고)

  • Yang, Ji Hyun
    • The Korean Society of Law and Medicine
    • /
    • v.21 no.3
    • /
    • pp.145-178
    • /
    • 2020
  • This paper examines discussions surrounding cognitive liberty, neuro-privacy, and mental integrity from the perspective of Neuro-rights. The right to control one's neurological data entails self-determination of collection and usage of one's data, and the right to object to any way such data may be employed to negatively impact oneself. As innovations in neurotechnologies bear benefits and downsides, a novel concept of the neuro-rights has been suggested to protect individual liberty and rights. In Oct. 2020, the Chilean Senate presented the 'Proyecto de ley sobre neuroderechos' to promote the recognition and protection of neuro-rights. This new bill defines all data obtained from the brain as neuronal data and outlaws the commerce of this data. Neurotechnology, especially when paired with big data and artificial intelligence, has the potential to turn one's neurological state into data. The possibility of inferring one's intent, preferences, personality, memory, emotions, and so on, poses harm to individual liberty and rights. However, the collection and use of neural data may outpace legislative innovation in the near future. Legal protection of neural data and the rights of its subject must be established in a comprehensive way, to adapt to the evolving data economy and technical environment.

Harvest and Providing System based on OAI for Science Technology Information (OAI 기반 과학기술정보 수집 제공 시스템)

  • Yoon, Jun-Weon
    • Journal of Information Management
    • /
    • v.38 no.1
    • /
    • pp.141-160
    • /
    • 2007
  • With the development of information technology, a great deal of content is produced and provided on the internet. In particular, discussion of collecting and preserving digital information resources has expanded as researchers' dependence on academic information grows. Open Access is a new paradigm of information distribution, the opposite of the high-priced distribution of academic information. An OAI system is intended to collect and organize open access data automatically. This paper describes the construction of stOAI, a science and technology information providing system based on OAI. The system provides international academic journals free of charge, collected and stored through the OAI protocol, in the OA (Open Access) section of yesKISTI (a science and technology information portal service). It also provides the science and technology information held by KISTI to external institutions in a standard form, automatically and centrally.
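Harvesting in OAI-PMH works over plain HTTP GET requests with standard protocol parameters. A minimal sketch of building a `ListRecords` request is shown below; the endpoint URL is hypothetical (the actual stOAI endpoint is not given in the abstract), but the parameter names follow the OAI-PMH specification.

```python
# Sketch: building an OAI-PMH ListRecords request URL with the standard
# protocol parameters. The base URL here is a placeholder, not stOAI's.
from urllib.parse import urlencode

def build_listrecords_url(base_url, metadata_prefix="oai_dc",
                          set_spec=None, resumption_token=None):
    # per the OAI-PMH spec, resumptionToken is an exclusive argument:
    # when present, no other request arguments may be sent
    if resumption_token:
        params = {"verb": "ListRecords", "resumptionToken": resumption_token}
    else:
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        if set_spec:
            params["set"] = set_spec
    return base_url + "?" + urlencode(params)

url = build_listrecords_url("http://example.org/oai", set_spec="journals")
print(url)
```

A harvester repeats this request with the `resumptionToken` returned in each response until the token is empty, which is how large repositories are collected incrementally.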

Practical Guide to X-ray Spectroscopic Data Analysis (X선 기반 분광광도계를 통해 얻은 데이터 분석의 기초)

  • Cho, Jae-Hyeon;Jo, Wook
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers
    • /
    • v.35 no.3
    • /
    • pp.223-231
    • /
    • 2022
  • Spectroscopies are among the most widely used methods for understanding the crystallographic, chemical, and physical aspects of materials; accordingly, numerous commercial and non-commercial software packages have been introduced to help researchers handle their spectroscopic data. However, although the essence of such data analysis is peak fitting, not many researchers, especially early-stage ones, have proper background knowledge of the choice of fitting functions and the techniques for actual fitting. In this regard, we present a practical guide to peak fitting for data analysis. We start with a basic-level theoretical background on why and how a given peak-fitting protocol works, followed by a step-by-step visualized demonstration of how an actual fit is performed. We expect that this contribution will help many active researchers in the discipline of materials science better handle their spectroscopic data.

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.11
    • /
    • pp.465-472
    • /
    • 2022
  • In this paper, a style synthesis network is trained by combining StyleGAN for style synthesis with a video synthesis network, so as to generate style-synthesized video. To address the problem that gaze or expression does not transfer stably, 3D face restoration technology is applied so that important features such as head pose, gaze, and expression can be controlled using 3D face information. In addition, by training the discriminators of the Head2Head network for dynamics, mouth shape, image, and gaze, a stable style-synthesized video that better maintains plausibility and consistency can be created. Using the FaceForensics dataset and the MetFaces dataset, we confirmed improved performance in converting one video into another while maintaining consistent movement of the target face, and in generating natural data through video synthesis using 3D face information from the source video.

Suggestions on how to convert official documents to Machine Readable (공문서의 기계가독형(Machine Readable) 전환 방법 제언)

  • Yim, Jin Hee
    • The Korean Journal of Archival Studies
    • /
    • no.67
    • /
    • pp.99-138
    • /
    • 2021
  • In the era of big data, analyzing unstructured data as well as structured data is emerging as an important task. Official documents produced by government agencies, as large bodies of text-based unstructured data, are also subject to big data analysis. From the perspectives of internal work efficiency, knowledge management, and records management, it is necessary to analyze official documents as big data and derive useful implications. However, since many of the official documents currently held by public institutions are not in an open format, a preprocessing step of extracting text from a bitstream is required before big data analysis. In addition, since contextual metadata is not sufficiently stored in the document files, separate efforts to secure metadata are required for high-quality analysis. In conclusion, current official documents have a low level of machine readability, which makes big data analysis expensive.

Data Literacy Education in Design Curriculum of Higher Education Focused on Development of Design-Data Convergence Curriculum (디자인 교과과정에서의 데이터 문해력 교육에 관한 연구 -디자인-데이터 융합 교과 개발 사례를 중심으로)

  • Lee, Hyun Jhin
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.5
    • /
    • pp.685-696
    • /
    • 2022
  • This study explores a convergence curriculum for design and data science and applies data science knowledge in undergraduate design classes to build designers' data literacy. First, related studies on data literacy education for non-data-science majors and cases of data-driven design projects are reviewed; then design competency and data competency based on the NCS are examined. The study then develops a three-step design-data convergence curriculum model for designers' data literacy. The curriculum model is applied in case study classes, namely the Big Data and UX Design (2) classes. The learning outcomes and student feedback from the case study classes are collected and analyzed to validate the design-data convergence curriculum, and the results provide findings and implications from the design-data convergence class case study.

Standardization of DRM Technologies in MPEG-21 (MPEG-21의 DRM 기술 표준화 현황 분석)

  • Jeong, Senator
    • Journal of Information Management
    • /
    • v.35 no.2
    • /
    • pp.107-130
    • /
    • 2004
  • MPEG-21 is an open standard framework for the creation, delivery, and consumption of digital content in an interoperable, rights-managed, and protected way. Focusing on DRM technologies, this paper covers the concepts and ongoing activities of MPEG-21's parts: Digital Item Declaration, which defines the base unit of trade and delivery; Digital Item Identification; Intellectual Property Management & Protection; Rights Data Dictionary; Rights Expression Language; Persistent Association Technology; Event Reporting; and so on.

Analysis of failed job based on scheduler job logs (슈퍼컴퓨터 작업 로그 기반 실패 작업 특성 연구)

  • Park, Ju-Won
    • Annual Conference of KIPS
    • /
    • 2018.10a
    • /
    • pp.77-79
    • /
    • 2018
  • In recent years, demand for high-performance computing resources such as supercomputers has grown as large-scale computing resources are heavily used not only in basic science but also in computer science fields such as big data analysis and artificial intelligence. To operate such large-scale computing resources reliably, a detailed analysis of the characteristics of failed jobs is essential. Based on one year of job data collected from the Tachyon supercomputer operated by the Korea Institute of Science and Technology Information (KISTI), this paper presents three analyses to characterize jobs on a high-performance computing system. First, we present simple statistical results such as the proportion of failed jobs, the average number of processors used, and the share of total job time taken up by failed jobs. Second, we present a distribution model of the inter-arrival times of failed jobs. Finally, to analyze the probability of job failure over time, we present hazard rate results computed from the inter-arrival time values.
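The hazard-rate step can be illustrated with a simple binned empirical estimator. This is a sketch under assumptions, not the paper's estimator, and the inter-arrival values below are made up for illustration rather than taken from the Tachyon logs.

```python
# Sketch: a binned empirical hazard-rate estimate from failure
# inter-arrival times: h(t) ~ (events in bin) / (jobs at risk * bin width).
# The sample data here is illustrative, not from the Tachyon logs.

def empirical_hazard(times, bin_width):
    """Return a list of hazard-rate estimates, one per time bin."""
    n = len(times)
    bins = int(max(times) // bin_width) + 1
    hazard = []
    at_risk = n  # observations that have not yet "failed" by this bin
    for b in range(bins):
        lo, hi = b * bin_width, (b + 1) * bin_width
        events = sum(1 for t in times if lo <= t < hi)
        if at_risk == 0:
            break
        hazard.append(events / (at_risk * bin_width))
        at_risk -= events
    return hazard

inter_arrival = [0.5, 1.2, 1.7, 2.4, 3.1, 3.3, 5.8]  # hypothetical, in hours
print(empirical_hazard(inter_arrival, 2.0))
```

A rising sequence of estimates indicates that the chance of another failure grows with elapsed time, which is the kind of time-dependent failure behavior the analysis in the paper is after.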