• Title/Summary/Keyword: Data Set Records

Estimation of Genetic Parameters for Milk Production Traits in Holstein Dairy Cattle (홀스타인의 유생산형질에 대한 유전모수 추정)

  • Cho, Chungil;Cho, Kwanghyeon;Choy, Yunho;Choi, Jaekwan;Choi, Taejeong;Park, Byoungho;Lee, Seungsu
    • Journal of Animal Science and Technology / v.55 no.1 / pp.7-11 / 2013
  • The purpose of this study was to estimate (co)variance components of three milk production traits for genetic evaluation using a multiple-lactation model. Each of the first five lactations was treated as a different trait. For the parameter estimation, a data set was built from lactations of cows that calved from 2001 to 2009. The total number of raw lactation records in the first to fifth parities reached 1,416,589. At least 10 cows were required for each contemporary group (herd-year-season). Sires with fewer than 10 daughters were discarded, and lactations with a 305-d milk yield exceeding 15,000 kg were removed. In total, 1,456 sires of cows remained after all selection steps. A complete pedigree consisting of 292,382 records was used for the study. A sire model containing herd-year-season, calving age, and sire additive genetic effects was applied to the selected lactation data and pedigree for estimating (co)variance components via VCE. Heritabilities and genetic and residual correlations were then derived from the (co)variance estimates using an R package. Genetic correlations between lactations ranged from 0.76 to 0.98 for milk yield, 0.79~1.00 for fat yield, and 0.75~1.00 for protein yield. On an individual lactation basis, relatively low heritability values were obtained: 0.14~0.23, 0.13~0.20, and 0.14~0.19 for milk, fat, and protein yields, respectively. For the combined lactations, heritability values were 0.29, 0.28, and 0.26 for milk, fat, and protein yields. The estimated parameters will be used in national genetic evaluations for production traits.
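
A minimal sketch of how heritabilities and genetic correlations can be derived from sire-model (co)variance components, as the abstract describes doing with VCE and R. The variance values and helper functions below are assumptions of this sketch, not the study's estimates.

```python
import numpy as np

# Placeholder sire-model (co)variance estimates (not the study's values).
# Under a sire model, the sire variance captures 1/4 of the additive genetic
# variance, so h^2 = 4 * var_sire / (var_sire + var_residual).
var_sire = {"milk": 9.0e4, "fat": 120.0, "protein": 95.0}
var_resid = {"milk": 1.6e6, "fat": 2300.0, "protein": 1900.0}

def sire_model_heritability(vs: float, ve: float) -> float:
    """Heritability implied by sire and residual variances in a sire model."""
    return 4.0 * vs / (vs + ve)

def genetic_correlation(cov_12: float, var_1: float, var_2: float) -> float:
    """Genetic correlation between two lactations from sire (co)variances."""
    return cov_12 / np.sqrt(var_1 * var_2)

for trait in var_sire:
    h2 = sire_model_heritability(var_sire[trait], var_resid[trait])
    print(f"{trait}: h2 = {h2:.2f}")

# e.g. correlation between lactation 1 and 2 milk yields from assumed values
print(genetic_correlation(8.1e4, 9.0e4, 8.5e4))
```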

A Study on the Recurrence Characteristics of Wet and Dry Years Appeared in Seoul Annual Rainfall Data (서울지점 연강수량 자료에 나타난 다우해 및 과우해의 재현 특성에 관한 연구)

  • Yu, Cheol-Sang;Kim, Bo-Yun;No, Jae-Gyeong
    • Journal of Korea Water Resources Association / v.33 no.3 / pp.307-314 / 2000
  • This study investigates the recurrence characteristics of wet and dry years using more than 200 years of annual rainfall records in Korea, including the Chosun age. As well as analyzing the correlation structure of the raw data, recurrence trends of wet and dry years have been investigated based on several truncation levels (mean, $mean \pm 0.25\,stdv$, $mean \pm 0.5\,stdv$, $mean \pm 0.75\,stdv$, $mean \pm stdv$). The transition probabilities among wet, dry, and normal years have also been derived for the same truncation levels, and finally the average return periods based on the steady-state probabilities were obtained. The analysis has been applied not only to the entire data but also to the partial data sets before and after the long dry period around 1900, in order to compare and detect possible differences between the Chukwooki (an old rain gauge invented in the Chosun age) and the modern flip-bucket style gauge. As a result, a similar pattern of dry and wet year recurrence has been found, but the return period of extremely dry years after the dry period was shown to be longer than that before the dry period. Assuming that dry and wet years are defined as the mean minus and plus one standard deviation, respectively, the return period of wet years is shown to be about 5~6 years and that of dry years about 6~7 years.
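
As a rough illustration of the return-period calculation described above, the sketch below computes steady-state probabilities of a three-state (dry/normal/wet) annual Markov chain and takes their reciprocals as average return periods. The transition matrix is an assumed placeholder, not the one estimated in the paper.

```python
import numpy as np

# Assumed 3-state annual transition matrix (dry / normal / wet); placeholder values.
P = np.array([
    [0.30, 0.55, 0.15],   # from dry
    [0.15, 0.65, 0.20],   # from normal
    [0.10, 0.60, 0.30],   # from wet
])

# Steady-state distribution: left eigenvector of P for eigenvalue 1, normalized.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# The mean recurrence time of a state in an ergodic chain is the reciprocal of its
# steady-state probability, which serves as the average return period in years.
for state, p in zip(["dry", "normal", "wet"], pi):
    print(f"{state}: pi = {p:.3f}, return period ~ {1.0 / p:.1f} years")
```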


A Matchmaking System Adjusting the Mate-Selection Criteria based on a User's Behaviors using the Decision Tree (고객의 암묵적 이상형을 반영하여 배우자 선택기준을 동적으로 조정하는 온라인 매칭 시스템: 의사결정나무의 활용을 중심으로)

  • Park, Yoon-Joo
    • Information Systems Review / v.14 no.3 / pp.115-129 / 2012
  • A matchmaking system is a type of recommender system that provides a set of suitable dating partners to the user online. Many matchmaking systems, which are widely used these days, require users to specify their preferences with regard to ideal dating partners based on criteria such as age, job, and salary. However, some users are not aware of their exact preferences, or are reluctant to reveal this information even if they do know it. Also, users' selection standards are not fixed and can change according to circumstances. This paper suggests a new matchmaking system called the Decision Tree based Matchmaking System (DTMS) that automatically adjusts the stated standards of a user by analyzing the characteristics of the people the user chose to contact. The DTMS provides recommendations for new users on the basis of their explicit preferences. However, as a user's behavioral records accumulate, it begins to analyze their hidden implicit preferences using a decision tree technique. Subsequently, the DTMS reflects these implicit preferences in proportion to their predictive accuracy. The DTMS is regularly updated when a user's data size increases by a set amount. This paper suggests an architecture for the DTMS and presents the results of the implementation of a prototype.
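
A minimal sketch of the kind of behavior-based adjustment described above: a decision tree is fit to the profiles a user chose to contact, and its predictions are blended with the explicit-preference score in proportion to an estimated accuracy. The field names, data, and blending rule are assumptions of this sketch, not the DTMS's actual schema or algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical behavior log for one user: candidate profiles shown, and whether
# the user chose to contact them (1) or not (0). Fields are illustrative.
X = np.array([
    # age, monthly salary (10k KRW), same_region (0/1)
    [29, 320, 1], [35, 540, 0], [31, 400, 1], [27, 280, 1],
    [40, 610, 0], [33, 450, 1], [38, 580, 0], [30, 360, 1],
])
y = np.array([1, 0, 1, 1, 0, 1, 0, 1])

# Learn the user's implicit preferences from behavior.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Estimate predictive accuracy; the implicit score is weighted by this accuracy.
acc = cross_val_score(tree, X, y, cv=3).mean()

def blended_score(explicit_score: float, candidate: np.ndarray) -> float:
    """Blend the explicit-preference score with the tree's implicit score."""
    implicit = tree.predict_proba(candidate.reshape(1, -1))[0, 1]
    return (1.0 - acc) * explicit_score + acc * implicit

print(blended_score(0.6, np.array([30, 350, 1])))
```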


Microseismic Monitoring Using Seismic Mini-Array (소규모 배열식 지진관측소를 이용한 미소지진 관측)

  • Sheen, Dong-Hoon;Cho, Chang Soo;Lee, Hee Il
    • Geophysics and Geophysical Exploration / v.16 no.1 / pp.53-58 / 2013
  • A seismic mini-array that can monitor microseismicity efficiently was introduced, and seismic data obtained from the mini-array, which was operated from December 19, 2012 to January 9, 2013, were analyzed. The mini-array consisted of a six-channel data logger, a central 3-component seismometer, and a tripartite array of vertical sensors arranged around the 3-component seismometer as an equilateral triangle with an aperture of about 100 m. All seismometers, which had the same instrument response, were connected to the six-channel data logger, which was set to record seismograms at a sampling rate of 200 sps. During the three-week campaign, a total of 16 microearthquakes were detected. Using time differences of P-wave arrivals from the vertical components, S-P times from the 3-component seismometer, and back azimuths from the seismic array analysis, it was possible to locate the hypocenter of a microearthquake even with one seismic mini-array. The epicenters of the two nearest microearthquakes were at a quarry site located 1.3 km from the mini-array. The records of quarry blasting confirmed our analysis.
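
A minimal sketch of the single-array location idea described above: the S-P time gives an epicentral distance, and the back azimuth gives a direction from the array. The velocities and observations below are assumed placeholders, not values from the paper.

```python
import math

VP, VS = 6.0, 3.5   # assumed crustal P and S velocities, km/s

def sp_distance_km(sp_time_s: float, vp: float = VP, vs: float = VS) -> float:
    """Epicentral distance implied by an S-P arrival-time difference:
    t_sp = d/vs - d/vp  =>  d = t_sp * vp * vs / (vp - vs)."""
    return sp_time_s * vp * vs / (vp - vs)

def locate(array_xy, back_azimuth_deg, sp_time_s):
    """Event position (km east, km north) from a back azimuth measured clockwise from north."""
    d = sp_distance_km(sp_time_s)
    az = math.radians(back_azimuth_deg)
    return (array_xy[0] + d * math.sin(az), array_xy[1] + d * math.cos(az))

# Example: an S-P time of 0.15 s corresponds to roughly 1.3 km.
print(sp_distance_km(0.15))
print(locate((0.0, 0.0), 220.0, 0.15))
```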

Severity Analysis of the Pedestrian Crash Patterns Based on the Ordered Logit Model (Ordered Logit Model을 이용한 보행자 사고 심각도 요인 분석)

  • Choi, Jai-Sung;Kim, Sang-Youp;Hwang, Kyung-Sung;Baik, Seung-Yup
    • International Journal of Highway Engineering / v.11 no.1 / pp.153-164 / 2009
  • This paper presents a severity analysis of the year-2006 national pedestrian crashes using the database of 37,589 records prepared for the National Police Bureau. A set of contributing factors considered to affect pedestrian crash patterns was selected, and their effects were investigated by applying the ordered logit model. This model was selected because it affords satisfactory results when the dependent variable involves ordered severity levels: fatal, injury, and property-damage-only in this investigation. The investigation unveiled the following. First, the pedestrian crash patterns were dependent upon human (driver and pedestrian) characteristics, including gender, age, and drinking conditions. Second, other contributing factors included vehicle, roadway geometric, weather, and hour-of-day characteristics. Third, seasonal effects did not contribute to crash patterns. Finally, the application of the ordered logit model facilitated the ordered severity-level analysis of the pedestrian crash data. This paper concludes that conventional wisdom on pedestrian crash characteristics largely holds true. However, this conclusion is limited to the data used in this analysis, and further research is required for its generalization.
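
A minimal sketch of fitting an ordered (proportional-odds) logit to crash severity, using statsmodels' OrderedModel on synthetic data. The variables, coefficients, and records are assumptions of this sketch, not the National Police Bureau database or the paper's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic crash records with illustrative explanatory variables.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "driver_drinking": rng.integers(0, 2, n),
    "pedestrian_age": rng.integers(5, 85, n),
    "night": rng.integers(0, 2, n),
})

# Latent severity with logistic noise, cut into ordered levels:
# 0 = property-damage-only, 1 = injury, 2 = fatal.
latent = (0.8 * X["driver_drinking"] + 0.02 * X["pedestrian_age"]
          + 0.5 * X["night"] + rng.logistic(size=n))
severity = np.digitize(latent, bins=[1.5, 3.0])

# Ordered logit over the three severity levels.
model = OrderedModel(severity, X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```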


Current Status and Future Development Direction of University Archives' Information Services : Based on the Interview with the Archives' Staff (대학기록관 기록정보서비스의 현황과 발전 방안 실무자 면담을 중심으로)

  • Lee, Hye Kyoung;Rieh, Hae-Young
    • The Korean Journal of Archival Studies / no.40 / pp.131-180 / 2014
  • Various theoretical studies have been conducted to activate university archives, but the services currently provided in the field have not been studied much. This study aims to investigate the usage and users of domestic university archives, examine the types of archival information services provided, understand the characteristics and limitations of the services, and suggest a direction for development. The study set three research objectives: first, to identify the users of the university archives, the reasons for use, and the kinds of archival materials used; second, to identify the kinds of services and programs the university archives provide to users; and third, to identify the difficulties the university archives face in executing information services, the plans they are considering for the future, and the best possible direction for improving the services. The authors chose to interview the staff at university archives to identify the current status of the services. For this, the range of services offered in the field of university archives was defined first, and then key research questions were composed. To collect valid data, the authors carried out face-to-face interviews and email/phone interviews with the staff of 12 university archives, as well as an investigation of their Web sites. The collected data were categorized by interview-question topic for analysis. Analyzing the data yielded useful information, including the demographic information of the research participants, the characteristics of the archives' users and their requests, the types and activities of the services the university archives offered, the limitations of archival information services, the archives' future plans, and the best possible development direction. Based on the findings, this study proposes implications and suggestions for archival information services in university archives in three domains, as follows. First, university archives should build close relationships with internal university administrative units, student groups, and faculty members for effective collection and better use of archives. Second, university archives need to acquire both administrative records by transfer and manuscripts and archives by active collection; in particular, archives need to try to acquire records unique to their own universities. Third, the archives should develop and provide various services that can raise the awareness of university archives and draw more potential users to the archives. Finally, to solve the problems the archives face, such as the lack of understanding of the value of the archives and the shortage of archival materials, it is suggested that archivists actively collect archival materials and provide valuable information by actively seeking it in the archives wherever it is needed.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that a machine would not be able to beat a human at Go because, unlike chess, the number of possible game sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They contain input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network model. However, since all network design alternatives cannot be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a given value and recognizes features, but how close each business data field is to its neighbors matters little because each field is usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, to learn the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of each field's position. For the dropout technique, we set neurons to be dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best model was the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it, and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because the CNN performed well on binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
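
A minimal sketch of the CNN-with-dropout setup described above, using Keras with synthetic data in place of the Portuguese bank telemarketing set: the Conv1D kernel spans all input fields at once, an extra dense hidden layer follows, dropout is applied with probability 0.5, and the model is scored with F1. The field count, layer sizes, and data are assumptions of this sketch, not the study's exact configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import f1_score

n_fields = 16   # stand-in for the number of input variables (age, occupation, ...)
X = np.random.rand(1000, n_fields).astype("float32")        # synthetic records
y = (np.random.rand(1000) > 0.7).astype("float32")          # synthetic binary target

model = keras.Sequential([
    # Kernel size equal to the number of fields: each filter reads the whole record.
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu",
                  input_shape=(n_fields, 1)),
    layers.Flatten(),
    layers.Dropout(0.5),                       # drop units with probability 0.5
    layers.Dense(16, activation="relu"),       # extra hidden layer over the features
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X[..., None], y, epochs=5, batch_size=32, verbose=0)

# Evaluate with F1 rather than overall accuracy, as in the study.
pred = (model.predict(X[..., None], verbose=0).ravel() > 0.5).astype(int)
print("F1:", f1_score(y, pred, zero_division=0))
```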

Occurrence of Low Back Pains for Dental Hygienists (치과위생사의 요통 발생에 관한 연구)

  • Lee, Sook-Jeung
    • Journal of Korean society of Dental Hygiene / v.4 no.2 / pp.265-276 / 2004
  • The purpose of this study was to find out the health status, working-condition characteristics, occurrence of low back pain, and its related factors among dental hygienists working in dental clinics, and thus to provide basic information for setting up plans to prevent low back pain and improve working conditions for dental hygienists. The data were collected from 310 dental hygienists working at dental clinics in the Pusan and Kyungnam area, including Masan, Changwon, Jinhae, and Jinju, with a self-administered questionnaire, and 295 records were finally analyzed with the SPSS for Windows (7.52K) program. The results were as follows. Working conditions were considered fair by 56.2% of the dental hygienists, while some conditions in the dental clinics were thought to threaten their health. About seventy percent of the dental hygienists worked over ten hours a day, 72.8% stood while working for over 7 hours, and 65% considered their working hours too long. Health status was thought to be more than average by 82.3%, while more than half perceived some degree of stress frequently. The rate of low back pain complaints among the dental hygienists was 87.8%, with pain above a medium level for 39.3%. The variables significantly related to the degree of low back pain were height, uncomfortable postures, type of working posture, and the height of the working table, while variables including height, hours of working on the feet, the presence of a hazardous work environment, uncomfortable postures, the presence of hazardous work, the amount of working hours, type of working posture, and work-associated stress were significantly related to the number of low back pain symptoms. Multiple regression analysis showed that the occurrence of low back pain was significantly influenced by five factors: subjective health status, work-related stress, the perception of the work environment, the perception of the amount of working hours, and age. In conclusion, it might be necessary to manage working conditions effectively by reducing working hours, maintaining good postures while working, and removing stressful conditions in order to reduce risk factors for low back pain among dental hygienists. Therefore, good working postures, appropriate rest time, and programs for the early detection, care, and education of low back pain should be provided for dental hygienists.


An Analysis of Intuitive Thinking of Elementary Students in Mathematical Problem Solving Process (수학 문제해결 과정에 나타난 초등학생들의 직관적 사고 분석)

  • You, Dae-Hyun;Kang, Wan
    • Education of Primary School Mathematics / v.12 no.1 / pp.1-20 / 2009
  • The purposes of this study are to analyze elementary school students' intuitive thinking in the process of mathematical problem solving and to analyze the errors arising from their intuitive thinking in that process. According to these purposes, the research questions can be set up as follows: (1) What is the state of elementary school students' intuitive thinking in the process of mathematical problem solving? (2) What are the origins of errors caused by elementary school students' intuitive thinking in the process of mathematical problem solving? In this study, Bogdan and Biklen's qualitative research method was used. The subjects were 4 students attending elementary school. The data consisted of an 'Intuitive Thinking Test', observation records, and interviews. In the interviews, the discourse was recorded by audio and video; the recordings were later transcribed and analyzed in detail. The findings of this study were as follows. First, if elementary school students know the algorithm for a problem, they rely on solving it by the algorithm rather than by intuitive thinking. Second, their ability to solve problems using intuitive models is low; moreover, when they do solve a problem by an intuitive model, their sense of self-evidence is low. Third, in the process of solving a problem, intuitive thinking can complement logical thinking. Last, with the concept of probability and probability problems, they are led into cognitive conflict because of subjective interpretation.


The Trends and Prospects of Health Information Standards : Standardization Analysis and Suggestions (의료정보 표준에 관한 연구 : 표준화 분석 및 전망)

  • Kim, Chang-Soo
    • Journal of radiological science and technology / v.31 no.1 / pp.1-10 / 2008
  • A ubiquitous health care system, one of the developing solution technologies of IT, BT, and NT, could give us new medical environments in the future. Implementing health information systems can be complex, expensive, and frustrating. Healthcare professionals seeking to acquire or upgrade systems do not have a convenient, reliable way of specifying a level of adherence to communication standards sufficient to achieve truly efficient interoperability. Great progress has been made in establishing such standards: DICOM, IHE, and HL7, notably, are now highly advanced. IHE has defined a common framework to deliver the basic interoperability needed for local and regional health information networks. It has developed a foundational set of standards-based integration profiles for information exchange, with three interrelated efforts. HL7 is one of several ANSI-accredited Standards Developing Organizations operating in the healthcare arena. Most SDOs produce standards (protocols) for a particular healthcare domain such as pharmacy, medical devices, imaging, or insurance transactions. HL7's domain is clinical and administrative data. HL7 is an international community of healthcare subject matter experts and information scientists collaborating to create standards for the exchange, management, and integration of electronic healthcare information. The ASTM specification for the Continuity of Care Record was developed by subcommittee E31.28 on electronic health records, which includes clinicians, provider institutions, administrators, patient advocates, vendors, and the health industry. This paper suggests providing a test bed, demonstration, and specification of how standards such as IHE, HL7, and ASTM can be used to provide an integrated environment.
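
As a small illustration of the message-level standards discussed above, the sketch below builds and parses a simplified HL7 v2.x-style message (segments separated by carriage returns, fields by '|', with an MSH header segment). The message content is hypothetical and abbreviated, not taken from any of the cited specifications.

```python
# Hypothetical, simplified HL7 v2.x-style ADT^A01 (admit) message.
raw = "\r".join([
    "MSH|^~\\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|202401011200||ADT^A01|MSG0001|P|2.5",
    "PID|1||123456^^^HOSP^MR||DOE^JOHN||19800101|M",
])

# Segments are separated by carriage returns; fields within a segment by '|'.
for segment in raw.split("\r"):
    fields = segment.split("|")
    print(fields[0], "->", fields[1:6])
```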
