• Title/Summary/Keyword: 과학기술 데이터 (science and technology data)


A study on the Livestock nonpoint source runoff characteristics and Load Calculation (축산계 비점오염원 유출 특성 및 부하량 산정에 관한 연구)

  • Ryu, Jeha;Yoon, Chun Gyung;Cho, MoonSoo;Lee, HyoJun;Lee, BoMi
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2016.05a
    • /
    • pp.574-574
    • /
    • 2016
  • The sources of pollutants entering a watershed can be divided into point sources and nonpoint sources. Point sources such as domestic sewage, industrial wastewater, and livestock wastewater are managed through the expansion of treatment facilities and technology development. Pollutant runoff from nonpoint sources varies with land use, rainfall, and the irregular input of pollutants, and because it depends on regional characteristics, its uncertainty must be considered. Nonpoint pollution in rural areas in particular has rarely been characterized, because access is difficult and responsibility for its management is unclear, and because its nationwide impact has not been quantified, practical management and countermeasures have been difficult to establish. It is therefore necessary to analyze in detail the discharge pathways of livestock nonpoint pollution, the resulting pollutant loads on water systems, and the impacts on water quality across the whole manure management chain, from generation through treatment to resource recovery, in order to secure baseline data for future countermeasures. The nationwide impact of nonpoint pollution on water systems, and the contribution of livestock nonpoint pollution in particular, also needs to be analyzed closely so that it can serve as scientific baseline data for future policy and institutional improvement. In this study, rainfall-event nonpoint pollution monitoring sites were therefore selected through a pollution source survey of a livestock-dense area, and baseline data were accumulated through rainfall monitoring conducted five times per year. The study area is the Illicheon (일리천) watershed in Hoengseong-gun, Gangwon-do, which contains a total of 90 farms raising 1,467 pigs, 1,957 Korean beef cattle (Hanwoo), 581 dairy cows, 2,880 dogs, 75,000 chickens, and 4 deer. The discharge loads estimated for the watershed were 509.3 kg/day of BOD, 331.5 kg/day of T-N, and 28.3 kg/day of T-P. To characterize the runoff, event mean concentrations (EMCs) were calculated: BOD ranged from 1.2 to 7.2 mg/L at MW-4, 0.8 to 6.3 mg/L at MW-5, and 0.7 to 5.2 mg/L at MW-7; T-N ranged from 1.426 to 5.321 mg/L at MW-4, 1.205 to 4.27 mg/L at MW-5, and 0.989 to 3.859 mg/L at MW-7; T-P ranged from 0.245 to 0.632 mg/L at MW-4, 0.236 to 0.596 mg/L at MW-5, and 0.213 to 0.521 mg/L at MW-7. The EMC values obtained with the method used in this study can vary considerably depending on the criteria used to define dry-weather water quality and flow. Further work is therefore needed on EMC calculation methods that can reasonably exclude dry-weather loads and isolate the effect of rainfall events.
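The event mean concentration referred to above is the flow-weighted average concentration over a storm event, EMC = Σ(C_i·Q_i·Δt_i) / Σ(Q_i·Δt_i). Below is a minimal sketch of that computation; the sample concentrations, flows, and 30-minute interval are illustrative values, not data from the study's monitoring sites.

```python
# Flow-weighted event mean concentration (EMC) for one storm event.
# Concentrations (mg/L) and flows (m^3/s) are illustrative sample values,
# not measurements from the Illicheon monitoring sites.

def event_mean_concentration(concentrations, flows, intervals):
    """EMC = sum(C_i * Q_i * dt_i) / sum(Q_i * dt_i)."""
    if not (len(concentrations) == len(flows) == len(intervals)):
        raise ValueError("inputs must have the same length")
    load = sum(c * q * dt for c, q, dt in zip(concentrations, flows, intervals))
    volume = sum(q * dt for q, dt in zip(flows, intervals))
    return load / volume

# Hypothetical BOD samples taken at 30-minute intervals during a rain event.
bod_mg_per_l = [1.8, 4.2, 6.5, 3.1, 1.4]
flow_m3_per_s = [0.05, 0.32, 0.55, 0.21, 0.08]
dt_s = [1800] * 5

print(f"BOD EMC = {event_mean_concentration(bod_mg_per_l, flow_m3_per_s, dt_s):.2f} mg/L")
```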


Perceptions on Microcomputer-Based Laboratory Experiments of Science Teachers that Participated in In-Service Training (연수에 참여한 교사들의 MBL실험에 대한 인식)

  • Park, Kum-Hong;Ku, Yang-Sam;Choi, Byung-Soon;Shin, Ae-Kyung;Lee, Kuk-Haeng;Ko, Suk-Beum
    • Journal of The Korean Association For Science Education
    • /
    • v.27 no.1
    • /
    • pp.59-69
    • /
    • 2007
  • The aim of this study was to investigate teachers' perceptions of an MBL (microcomputer-based laboratory) experiment training program, the expected effects of MBL experiments, and the application of MBL experiments in school science classes after the training was conducted. The study showed that most of the teachers who participated in the training program found it very useful and instructive. Many teachers considered that MBL experiments using a computer could reduce the time spent on an experiment through accurate and fast data collection and analysis. They also thought that the time saved could be used more effectively for analyzing experimental data and for discussion activities leading to correct concept formation, as well as for developing graphical analysis and science process skills. However, they thought that MBL experiments were ineffective for learning how to operate experimental apparatus. The study also revealed that most teachers intended to apply MBL experiments in real classroom contexts right after the training course, and they pointed out many obstacles to introducing MBL experiments into their classrooms, such as the budget required to purchase equipment, poor laboratory conditions, and few MBL experiment training opportunities. To bring MBL experiments into real classrooms, the following changes were suggested: development of technologies to reduce the unit cost of MBL equipment, production and supply of many kinds of sensors, development of MBL experiment materials, and expansion of the training program for teachers.

The Effects of Science Class Using Multimedia Materials on High School Students' Attitude toward Science (멀티미디어 자료를 활용한 과학수업이 고등학생의 과학에 대한 태도에 미치는 영향)

  • Yoo, Mi-Hyun;Park, Hyun-Ju
    • Journal of Science Education
    • /
    • v.35 no.1
    • /
    • pp.1-12
    • /
    • 2011
  • The purpose of this study was to examine the effects of science classes using multimedia materials on high school students' attitude toward science. The subjects were 222 high school students. Eleventh graders at one high school were assigned to a comparison group and an experimental group; the experimental group received science classes using multimedia materials for three months. The research design was a pretest-posttest control group design, and the data were analyzed using the PASW Statistics 18.0 program. The types of multimedia materials used in the experimental group were science fiction movies, science documentaries, TV programs, and PowerPoint presentations created by students. The attitude toward science test was administered before and after the treatment. Differences between the two groups' pre-test and post-test scores were analyzed by ANCOVA, differences in attitude toward science by gender were compared by analysis of covariance, and perceptions of the science classes with multimedia materials were also investigated. The results were as follows. First, attitude toward science improved significantly after the science classes using multimedia materials; in particular, there were significant differences between pre-test and post-test scores for attitude toward science class and attitude toward science content, which are sub-areas of attitude toward science. Second, there was no significant difference between female and male students in the total score of attitude toward science; however, in attitude toward science, scientists, and society, a sub-area of attitude toward science, female students scored significantly higher than male students. Third, 84% of the students showed a positive perception that the science classes enhanced their interest in science, and 69% of the students responded that they had thought about Science-Technology-Society issues. The multimedia material types the students preferred were science fiction movies, science documentaries, and science TV programs, in that order.
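For readers unfamiliar with the pretest-posttest ANCOVA design mentioned above, here is a minimal sketch: the post-test score is modeled with the group as a factor and the pre-test score as a covariate. The data below are randomly generated for illustration only and are not the study's scores; the group effect size of 4 points is a made-up assumption.

```python
# Pretest-posttest ANCOVA sketch: post-test score ~ group + pre-test covariate.
# All scores are synthetic; this is not the study's data or exact analysis script.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
pre = rng.normal(60, 10, 2 * n)
group = np.repeat(["comparison", "multimedia"], n)
post = pre + rng.normal(0, 5, 2 * n) + np.where(group == "multimedia", 4.0, 0.0)

df = pd.DataFrame({"pre": pre, "post": post, "group": group})
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # ANCOVA table: group effect adjusted for pretest
```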


Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.7
    • /
    • pp.271-278
    • /
    • 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems have difficulty detecting intelligent attacks that deviate from previously stored patterns. In an attempt to solve this, deep learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based systems depending on where they are installed. Unlike network-based systems, host-based intrusion detection systems have the disadvantage of having to observe the whole system, inside and out, but they have the advantage of detecting intrusions that a network-based intrusion detection system cannot. Therefore, this study focuses on a host-based intrusion detection system. To evaluate and improve the performance of the model, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS) published in 2018. For the performance evaluation on that data set, the 1D vector data were converted into 3D image data so that the similarity between samples could be measured and normal data could be distinguished from abnormal data. Deep learning models also have the drawback of needing to be retrained whenever a new cyber attack method appears, which is inefficient because learning from a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN), a Few-Shot Learning approach that performs well when learning from a small amount of data. Siamese-CNN determines whether two attacks are of the same type from the similarity score of each pair of cyber attack samples converted into images. Accuracy was calculated using the Few-Shot Learning technique, and the performance of a Vanilla Convolutional Neural Network (Vanilla-CNN) and the Siamese-CNN was compared. Measuring Accuracy, Precision, Recall, and F1-Score confirmed that the recall of the proposed Siamese-CNN model was about 6% higher than that of the Vanilla-CNN model.
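A minimal sketch of the Siamese comparison idea described in this abstract: two inputs pass through a shared-weight CNN encoder, and the distance between their embeddings is trained with a contrastive loss so that same-type attack pairs end up close together. The layer sizes, 32x32 three-channel input shape, and loss choice are illustrative assumptions, not the authors' exact architecture.

```python
# Siamese CNN sketch for pairwise similarity of attack samples rendered as
# small 3-channel images. Shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 8 * 8, embed_dim)  # assumes 32x32 input images

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class SiameseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()

    def forward(self, a, b):
        # Euclidean distance between the shared-weight embeddings of the pair.
        return F.pairwise_distance(self.encoder(a), self.encoder(b))

def contrastive_loss(distance, same_label, margin=1.0):
    # same_label = 1 when the two samples are the same attack type, else 0.
    return torch.mean(same_label * distance.pow(2) +
                      (1 - same_label) * F.relu(margin - distance).pow(2))

# Toy usage with random tensors standing in for image-encoded system-call traces.
model = SiameseCNN()
a, b = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
loss = contrastive_loss(model(a, b), torch.tensor([1., 0., 1., 0.]))
print(loss.item())
```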

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.239-251
    • /
    • 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction, but these methods have not produced superior performance. In recent years, machine learning techniques have been widely used in stock market prediction, including artificial neural networks, SVM, and genetic algorithms. In particular, a case-based reasoning method known as the k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of those similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may take more cases into account than are actually applicable, depending on the subject. Second, case-based reasoning may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by a deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through the k-nearest neighbor method and compares its predictability with that of the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted with the learning dataset divided into two types. For the prediction of the next day's closing price, four variables were used: opening price, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process; in the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data covered January 1, 2018 to August 31, 2018 for both experiments. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, when the learning data was small. In the second experiment, when the learning data was large, the MAPE was 1.3497 for the random walk model and 1.2928 for k-NN. These results show that prediction power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets and does not for relatively small learning datasets. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. To produce better results, it is also recommended that the k-nearest neighbor method find nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
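A minimal sketch of the comparison described in this abstract: a k-NN regressor predicting the next day's close from the open/high/low/close variables, scored by MAPE against a random-walk baseline (tomorrow's close predicted as today's close). The price series below is synthetic and the train/test split is arbitrary; the paper used Samsung Electronics data over the specific date ranges it reports.

```python
# k-NN next-day close prediction vs. a random-walk baseline, compared by MAPE.
# Prices below are synthetic placeholders, not the Samsung Electronics series.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(42)
close = 50_000 * np.exp(np.cumsum(rng.normal(0, 0.01, 600)))  # synthetic close prices
high, low = close * 1.01, close * 0.99
open_ = np.roll(close, 1); open_[0] = close[0]

X = np.column_stack([open_, high, low, close])[:-1]   # today's OHLC
y = close[1:]                                         # tomorrow's close
split = 500
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

def mape(actual, predicted):
    return np.mean(np.abs((actual - predicted) / actual)) * 100

knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)
print("k-NN MAPE       :", round(mape(y_test, knn.predict(X_test)), 4))
print("random-walk MAPE:", round(mape(y_test, X_test[:, 3]), 4))  # tomorrow = today's close
```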

A Comparative Analysis of the Changes in Perception of the Fourth Industrial Revolution: Focusing on Analyzing Social Media Data (4차 산업혁명에 대한 인식 변화 비교 분석: 소셜 미디어 데이터 분석을 중심으로)

  • You, Jae Eun;Choi, Jong Woo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.11
    • /
    • pp.367-376
    • /
    • 2020
  • The Fourth Industrial Revolution is expected to contribute greatly to the transition to an intelligent society through technologies such as big data and artificial intelligence. These technologies make it possible to understand human behavior and awareness, and artificial intelligence has established itself as a key tool in various fields such as medicine and science. However, the Fourth Industrial Revolution has a negative side alongside its positive outlook. In this study, an analysis was conducted using text mining techniques on unstructured big data collected from social media. We examined keywords related to the Fourth Industrial Revolution by year (2016, 2017, and 2018) and interpreted the meaning of each keyword. We also examined how the keywords related to the Fourth Industrial Revolution changed from year to year, and used R to conduct a keyword analysis that traces the flow of public perception through the flow of associated keywords. Finally, people's perceptions of the Fourth Industrial Revolution were identified by examining the positive and negative sentiments related to it by year. The analysis showed that negative opinions declined year after year, with an increasingly positive outlook for the future.
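The abstract above describes a year-by-year keyword analysis carried out in R. As a rough illustration of the same idea, the sketch below counts keyword frequencies per year from a handful of made-up social-media posts; it is written in Python rather than R, and the posts, stopword list, and keywords are all illustrative, not the study's corpus.

```python
# Year-by-year keyword frequency sketch for social-media posts about the
# Fourth Industrial Revolution. Sample posts and stopwords are illustrative only.
from collections import Counter
import re

posts_by_year = {
    2016: ["AI and big data will change manufacturing",
           "worried about job losses from automation"],
    2017: ["smart factory pilots expand with IoT sensors",
           "AI ethics debate grows alongside optimism"],
    2018: ["5G and AI investments accelerate", "big data drives new services"],
}

stopwords = {"and", "the", "will", "from", "with", "about", "of", "new"}

for year, posts in posts_by_year.items():
    tokens = [w for p in posts for w in re.findall(r"[a-z0-9]+", p.lower())
              if w not in stopwords]
    print(year, Counter(tokens).most_common(3))  # top keywords for that year
```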

A Study on the Fabrication of bone Model X-ray Phantom Using CT Data and 3D Printing Technology (CT 데이터와 3D 프린팅 기술을 이용한 뼈 모형 X선 팬텀 제작에 관한 연구)

  • Yun, Myeong Seong;Han, Dong-Kyoon;Kim, Yeon-Min;Yoon, Joon
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.7
    • /
    • pp.879-886
    • /
    • 2018
  • A 3-dimensional (3D) printer is a device that can output a three-dimensional solid object based on data modeled on a computer. Combined with the field of radiological science, this capability can be used, for example, to fabricate bone-model X-ray phantoms from CT data. A bone-model phantom was made using data obtained from a CT scan of an existing pelvis phantom, using PLA, Wood, XT-CF20, Glow fill, and Steel filaments, which are materials for a Fused Filament Fabrication (FFF) 3D printer. Hounsfield Units (HU) were measured on images obtained by CT scanning the existing pelvis phantom and the five material phantoms made with the 3D printer under the same conditions; SI and SNR were measured using a diagnostic X-ray generator, and each phantom was compared and analyzed. As a result, the Glow fill filament was found to be the most suitable for an X-ray phantom under the X-ray examination conditions for the limbs. The filament characteristics identified in this research provide a basis for such work, and the practicality of fabricating X-ray phantoms was confirmed.
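As a rough illustration of the ROI measurements mentioned above, the sketch below computes a mean HU value from a CT region of interest and an SNR value from X-ray signal and background regions. The pixel values are synthetic, and the SNR definition used (mean of the signal ROI divided by the standard deviation of the background ROI) is one common convention, assumed here rather than taken from the paper.

```python
# Mean HU and SNR from regions of interest. All pixel values are synthetic
# placeholders, not measurements from the printed phantoms.
import numpy as np

rng = np.random.default_rng(1)
ct_roi = rng.normal(230, 12, size=(20, 20))              # HU values inside a filament ROI
xray_signal_roi = rng.normal(1800, 40, size=(20, 20))    # X-ray signal ROI
xray_background_roi = rng.normal(200, 25, size=(20, 20)) # background ROI

mean_hu = ct_roi.mean()
snr = xray_signal_roi.mean() / xray_background_roi.std()  # one common SNR definition

print(f"mean HU = {mean_hu:.1f}")
print(f"SNR     = {snr:.1f}")
```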

Gamification Analysis method proposal of Screen Sports (스크린 스포츠의 게이미피케이션 분석방법 제안)

  • Kil, Youngik;Ko, Ilju;Oh, Kyoungsu;Bang, Green
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.5
    • /
    • pp.369-383
    • /
    • 2018
  • This paper proposes a gamification analysis method for screen sports. The method consists of a comparison, collection, and application process. In the comparison process, the characteristics of actual sports and screen sports are compared; in the collection process, data are collected to compensate for the differences identified in the comparison process; and in the application process, the application status of gamification is verified. Screen golf and screen baseball were analyzed with this method. The results show that screen golf could not incorporate walking in the comparison process, and screen baseball could not incorporate exercise elements other than batting. In the collection process, driving distance and swing data were used for screen golf, and driving distance, batting average, and RBI (runs batted in) data were used for screen baseball. Lastly, the application process revealed that both screen golf and screen baseball provide data to users through the reward, competition, and self-expression elements of gamification. The analysis method presented in this study can be used to analyze screen sports and is expected to be an appropriate way to develop screen sports.

Mapping Mammalian Species Richness Using a Machine Learning Algorithm (머신러닝 알고리즘을 이용한 포유류 종 풍부도 매핑 구축 연구)

  • Zhiying Jin;Dongkun Lee;Eunsub Kim;Jiyoung Choi;Yoonho Jeon
    • Journal of Environmental Impact Assessment
    • /
    • v.33 no.2
    • /
    • pp.53-63
    • /
    • 2024
  • Biodiversity holds significant importance within the framework of environmental impact assessment, being utilized in site selection for development, understanding the surrounding environment, and assessing the impact on species due to disturbances. The field of environmental impact assessment has seen substantial research exploring new technologies and models to evaluate and predict biodiversity more accurately. While current assessments rely on data from fieldwork and literature surveys to gauge species richness indices, limitations in spatial and temporal coverage underscore the need for high-resolution biodiversity assessments through species richness mapping. In this study, leveraging data from the 4th National Ecosystem Survey and environmental variables, we developed a species distribution model using Random Forest. This model yielded mapping results of 24 mammalian species' distribution, utilizing the species richness index to generate a 100-meter resolution map of species richness. The research findings exhibited a notably high predictive accuracy, with the species distribution model demonstrating an average AUC value of 0.82. In addition, the comparison with National Ecosystem Survey data reveals that the species richness distribution in the high-resolution species richness mapping results conforms to a normal distribution. Hence, it stands as highly reliable foundational data for environmental impact assessment. Such research and analytical outcomes could serve as pivotal new reference materials for future urban development projects, offering insights for biodiversity assessment and habitat preservation endeavors.
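A minimal sketch of the stacked species-distribution-model approach described in the abstract above: one Random Forest per species predicts presence probability from environmental covariates, each model is scored by AUC, and the per-species predictions are summed into a richness layer. The occurrence labels, covariates, threshold, and grid are synthetic placeholders, not the 4th National Ecosystem Survey data or the authors' exact pipeline.

```python
# Stacked Random Forest species distribution models -> species richness layer.
# Occurrence labels and environmental covariates are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells, n_env, n_species = 2000, 5, 24
env = rng.normal(size=(n_cells, n_env))            # e.g. elevation, land cover (synthetic)
true_w = rng.normal(size=(n_species, n_env))
presence = (env @ true_w.T + rng.normal(0, 1, (n_cells, n_species))) > 0.5

richness = np.zeros(n_cells)
aucs = []
for s in range(n_species):
    X_tr, X_te, y_tr, y_te = train_test_split(env, presence[:, s], random_state=s)
    rf = RandomForestClassifier(n_estimators=200, random_state=s).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
    richness += rf.predict_proba(env)[:, 1] > 0.5   # count predicted presences per cell

print(f"mean AUC = {np.mean(aucs):.2f}, richness range = {richness.min()}-{richness.max()}")
```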

Intelligent Transportation System (ITS) research optimized for autonomous driving using edge computing (엣지 컴퓨팅을 이용하여 자율주행에 최적화된 지능형 교통 시스템 연구(ITS))

  • Sunghyuck Hong
    • Advanced Industrial SCIence
    • /
    • v.3 no.1
    • /
    • pp.23-29
    • /
    • 2024
  • In this scholarly investigation, the focus is placed on the transformative potential of edge computing in enhancing Intelligent Transportation Systems (ITS) for the facilitation of autonomous driving. The intrinsic capability of edge computing to process voluminous datasets locally and in a real-time manner is identified as paramount in meeting the exigent requirements of autonomous vehicles, encompassing expedited decision-making processes and the bolstering of safety protocols. This inquiry delves into the synergy between edge computing and extant ITS infrastructures, elucidating the manner in which localized data processing can substantially diminish latency, thereby augmenting the responsiveness of autonomous vehicles. Further, the study scrutinizes the deployment of edge servers, an array of sensors, and Vehicle-to-Everything (V2X) communication technologies, positing these elements as constituents of a robust framework designed to support instantaneous traffic management, collision avoidance mechanisms, and the dynamic optimization of vehicular routes. Moreover, this research addresses the principal challenges encountered in the incorporation of edge computing within ITS, including issues related to security, the integration of data, and the scalability of systems. It proffers insights into viable solutions and delineates directions for future scholarly inquiry.