• Title/Summary/Keyword: Characteristic Model Verification (특성 모델 검증)

The Brassica rapa Tissue-specific EST Database (배추의 조직 특이적 발현유전자 데이터베이스)

  • Yu, Hee-Ju;Park, Sin-Gi;Oh, Mi-Jin;Hwang, Hyun-Ju;Kim, Nam-Shin;Chung, Hee;Sohn, Seong-Han;Park, Beom-Seok;Mun, Jeong-Hwan
    • Horticultural Science & Technology / v.29 no.6 / pp.633-640 / 2011
  • Brassica rapa is the A-genome model species for Brassica crop genetics, genomics, and breeding. With the sequencing of the B. rapa genome complete, functional analysis of the genome is the next pressing issue. Expressed sequence tags (ESTs) are fundamental resources supporting annotation and functional analysis of the genome, including identification of tissue-specific genes and promoters. As of July 2011, 147,217 ESTs from 39 cDNA libraries of B. rapa had been reported in public databases. However, little information can be retrieved from these sequences for lack of an organized database. To leverage the sequence information and maximize the use of publicly available EST collections, the Brassica rapa tissue-specific EST database (BrTED) was developed. BrTED includes sequence information for 23,962 unigenes assembled with the StackPack program. The unigene set is used as the query unit for various analyses, such as BLAST against the TAIR gene models, functional annotation using MIPS and UniProt, gene ontology analysis, and prediction of tissue-specific unigene sets based on statistical tests. The database is composed of two main units, an EST sequence processing and information retrieval unit and a tissue-specific expression profile analysis unit, whose information and data are tightly interconnected through a web-based browsing system. RT-PCR evaluation of 29 selected unigene sets successfully amplified products from the target tissues of B. rapa. BrTED allows users to identify and analyze the expression of genes of interest and aids efforts to interpret the B. rapa genome through functional genomics. In addition, it can serve as a public resource providing reference information for studying the genus Brassica and other closely related crucifer crops.
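The abstract does not specify which statistical test BrTED uses to call tissue-specific unigene sets. A minimal sketch of one standard choice for EST count data, Fisher's exact test on a 2x2 table of library counts (all counts below are invented), might look like:

```python
# Hypothetical sketch: flagging a unigene as tissue-specific from EST counts.
# The actual test behind BrTED is not stated in the abstract; Fisher's exact
# test on a 2x2 contingency table is one common choice for such count data.
from scipy.stats import fisher_exact

def is_tissue_specific(hits_in_tissue, tissue_library_size,
                       hits_elsewhere, other_libraries_size, alpha=0.01):
    """Test whether a unigene's ESTs are enriched in one tissue library."""
    table = [[hits_in_tissue, tissue_library_size - hits_in_tissue],
             [hits_elsewhere, other_libraries_size - hits_elsewhere]]
    _, p_value = fisher_exact(table, alternative="greater")
    return p_value < alpha, p_value

# Example with made-up counts: 18 of 5,000 ESTs in a flower library versus
# 3 of 120,000 ESTs in all other libraries combined.
specific, p = is_tissue_specific(18, 5_000, 3, 120_000)
print(specific, p)
```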

The Effects of Perceived Usefulness and Perceived Ease of Use on Learning Transfer in Enterprise e-Learning - Focused on the Mediating Effects of Self-Efficacy and Work Environment - (지각된 유용성과 사용용이성이 기업 이러닝 교육의 학습전이에 미치는 영향에 관한 연구 -자기효능감과 업무환경의 매개효과를 중심으로-)

  • Park, Dae-Bum;Gu, Ja-Won
    • Management & Information Systems Review / v.37 no.3 / pp.1-25 / 2018
  • This research empirically tested the effects of perceived usefulness, perceived ease of use, self-efficacy, and work environment on learning transfer, using 390 employees who had experienced e-learning in domestic and foreign companies. The mediating effects of self-efficacy and work environment were analyzed in addition to the direct effect of each factor on learning transfer. The results showed that learners' perceived usefulness and perceived ease of use of e-learning had positive (+) effects on self-efficacy and positive influences on supervisor support, peer support, and organizational climate. Self-efficacy had a positive effect on learning transfer, and supervisor support, peer support, and organizational climate positively influenced learning transfer as well. Perceived usefulness also had a positive effect on learning transfer; perceived ease of use, however, had no significant effect on it. The mediation analysis indicated that self-efficacy and work environment mediate between perceived usefulness, perceived ease of use, and learning transfer. The implications of this study are as follows. First, it designed a new research model reflecting the factors that influence learning transfer in the acceptance of e-learning, which is common in corporate education: perceived usefulness and perceived ease of use, treated in earlier work as mediators of external characteristics, serve here as independent variables, while self-efficacy and work environment, previously studied as external factors, serve as mediators. Second, most studies on the technology acceptance model and learning transfer are conducted in a single country; reliability was enhanced here by testing the models on samples from 26 countries. Third, existing studies have treated perceived usefulness and ease of use as the key determinants of acceptance intention and learning transfer; this study explored the mediating effects of learner and environmental factors on the accepted information technology, strengthening and supplementing the path from perceived usefulness and ease of use to learning transfer. Based on the multi-country sample used here, future international comparative studies should also be possible.
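The abstract reports mediation effects without detailing the estimation procedure. As a rough, hypothetical illustration of the product-of-coefficients logic behind such a test (column names are invented, and the paper's actual model is presumably a full SEM), a sketch:

```python
# Illustrative mediation sketch, NOT the paper's exact model: perceived
# usefulness -> self-efficacy -> learning transfer, on a hypothetical
# DataFrame with columns 'usefulness', 'self_efficacy', 'transfer'.
import pandas as pd
import statsmodels.formula.api as smf

def mediation_effect(df: pd.DataFrame) -> float:
    # Path a: independent variable -> mediator
    a = smf.ols("self_efficacy ~ usefulness", data=df).fit().params["usefulness"]
    # Path b: mediator -> outcome, controlling for the independent variable
    b = smf.ols("transfer ~ self_efficacy + usefulness",
                data=df).fit().params["self_efficacy"]
    # The indirect (mediated) effect is the product a * b
    return a * b
```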

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo;Hongki Kang;Wansang Yoon;Pyung-Chae Lim;Sooahm Rhee;Taejung Kim
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1211-1224 / 2023
  • Clouds cause many difficult problems when observing land surface phenomena with optical satellites, in applications such as national land observation, disaster response, and change detection. Because the presence of clouds affects not only the image processing stage but also the quality of the final data, clouds must be identified and removed. In this study, we therefore developed a new cloud detection technique that automatically searches for and extracts the pixels closest to the spectral pattern of clouds in a satellite image, selects an optimal threshold, and produces a cloud mask based on that threshold. The technique consists of three main steps. In the first step, the image is converted from Digital Number (DN) units to top-of-atmosphere (TOA) reflectance. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the TOA reflectance image, and the threshold for generating an initial cloud mask is determined for each image. In the third, post-processing step, noise in the initial cloud mask is removed and the cloud boundaries and interiors are refined. As experimental data, CAS500-1 L2G images acquired over the Korean Peninsula from April to November, which capture the spatial and seasonal diversity of cloud distributions, were used. To verify the performance of the proposed method, its results were compared with those of a simple thresholding method. In the experiments, the proposed method detected clouds more accurately than the existing method by accounting for the radiometric characteristics of each image through preprocessing, and it minimized the influence of bright non-cloud objects such as panel roofs, concrete roads, and sand. The proposed method improved the F1-score by more than 30% over the existing method, but showed limitations in certain images containing snow.
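A hedged sketch of the first two steps as described, DN to TOA reflectance followed by a per-image triangle threshold; the gain, offset, and ESUN values are placeholders, not CAS500-1 calibration coefficients:

```python
# Sketch of steps 1-2 of the pipeline described above (assumed parameters).
import numpy as np
import cv2

def toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d=1.0):
    """Convert digital numbers to top-of-atmosphere reflectance."""
    radiance = gain * dn.astype(np.float32) + offset   # DN -> radiance
    theta_s = np.deg2rad(90.0 - sun_elev_deg)          # solar zenith angle
    return np.pi * radiance * d**2 / (esun * np.cos(theta_s))

def initial_cloud_mask(reflectance):
    """Per-image threshold via the triangle method (OpenCV)."""
    img8 = cv2.normalize(reflectance, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(img8, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_TRIANGLE)
    return mask
```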

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An adaptive clustering-based collaborative filtering technique is proposed to address the fundamental problems of collaborative filtering: the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on a user's predicted preference for a particular item, computed from a similar-item subset and a similar-user subset that are composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system drops rapidly and composing the similar-item and similar-user subsets becomes harder. In addition, as the scale of the service grows, the time needed to create these subsets grows geometrically, increasing the response time of the recommendation system. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions in the model and adopts concepts from content-based filtering. The technique consists of four main methodologies. First, items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and each user cluster is estimated. This economizes the run time for creating similar-item or similar-user subsets, makes the recommendation system more reliable than one that uses only user preference information to create those subsets, and partially solves the cold-start problem. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences. In this phase, a list of items is built for a user by examining item clusters in decreasing order of their inter-cluster preference with the user's cluster, then selecting and ranking items according to predicted or recorded preference information. With this method, the model-creation phase bears the highest load of the recommendation system, minimizing the load at run time; the scalability problem is thus addressed, and large-scale recommendation can be performed with highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction; this paper improves on Hao Ji's idea of using both, combining the two predicted values under the conditions of the recommendation model to improve the reliability of the service. Predicting user preferences from the item or user clusters also reduces prediction time, so missing preferences can be predicted at run time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback: normalized user feedback is applied to both vectors.
This mitigates the problems caused by adopting concepts from content-based filtering, namely item and user feature vectors built from user profiles and item properties, which are limited by the difficulty of quantifying the qualitative features of items and users. The elements of the user and item feature vectors are therefore matched one to one, and when feedback from a user on a particular item is obtained, it is applied to the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques on two measures: MAE (Mean Absolute Error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommendation system; by response time, it was found suitable for large-scale recommendation systems. This paper thus suggests an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, with one limitation: because the technique focuses on reducing time complexity, a large improvement in reliability was not expected. Future work will improve the technique with rule-based filtering.
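A minimal sketch of the cluster-based prediction idea (not the authors' exact model): cluster users and items, precompute mean inter-cluster preferences offline, and answer run-time queries with a table lookup.

```python
# Sketch under assumptions: R is a users x items rating matrix (numpy array)
# with 0 marking a missing rating; cluster counts are arbitrary choices.
import numpy as np
from sklearn.cluster import KMeans

def build_model(R, n_user_clusters=10, n_item_clusters=10):
    """Offline phase: cluster users/items and compute inter-cluster preferences."""
    user_labels = KMeans(n_user_clusters, n_init=10).fit_predict(R)
    item_labels = KMeans(n_item_clusters, n_init=10).fit_predict(R.T)
    pref = np.zeros((n_user_clusters, n_item_clusters))
    for u in range(n_user_clusters):
        for i in range(n_item_clusters):
            block = R[user_labels == u][:, item_labels == i]
            rated = block[block > 0]
            pref[u, i] = rated.mean() if rated.size else 0.0
    return user_labels, item_labels, pref

def predict(user, item, user_labels, item_labels, pref):
    # Run-time cost is a single lookup: the heavy work was done offline,
    # which is what makes this kind of approach scale.
    return pref[user_labels[user], item_labels[item]]
```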

The Effect of Push, Pull, and Push-Pull Interactive Factors on the Internationalization of Contract Foodservice Management Companies (위탁급식업체 국제화를 위한 추진, 유인 및 상호작용 요인의 영향 분석)

  • Lee, Hyun-A;Han, Kyung-Soo
    • Journal of Nutrition and Health / v.42 no.4 / pp.386-396 / 2009
  • The purpose of this study was to analyze the effects of push, pull, and push-pull interactive factors on the internationalization of CFMCs (Contract Foodservice Management Companies). The study is the quantitative part of a mixed-methods design (QUAL → quan) in which the qualitative study was primary; a mail survey was carried out for the quantitative part. As subjects, 1,281 people who had completed the 'Food Service Management Professional Program' of 'Y' University were selected as the population, because the program mainly serves CFMC workers. The analysis methods were frequency analysis, factor analysis, correlation analysis, and multiple regression analysis with SPSS 17.0. The push factors were saturation of the domestic market and the manager's purpose (fac. 1) and investment in internationalization (fac. 2). The pull factors were the company's external environment for internationalization (fac. 3) and the global network and spread of culture (fac. 4). The push-pull interactive factors were information about foreign markets (fac. 5), the procedure and budget of overseas expansion (fac. 6), and the national network and size of the domestic market (fac. 7). The internal dynamics factors were the deterrents to internationalization (fac. 8) and the enablers of internationalization (fac. 9). The results showed that the company's external environment, among the pull factors, had a positive effect on the deterrents to internationalization, while the global network and spread of culture had a positive effect on the enablers. Among the push-pull interactive factors, information about foreign markets had positive effects on both the deterrents and the enablers, and the national network and size of the domestic market had a positive effect on the enablers. Both the deterrents and the enablers had positive effects on the level of internationalization, with the deterrents having the larger effect (β = .492 > .177).
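As a hedged illustration of how the reported comparison of standardized coefficients (β = .492 vs. .177) can be reproduced in principle, here is a sketch; the variable names are hypothetical, not the authors' data:

```python
# Illustrative sketch: standardized (beta) coefficients from an OLS regression
# of internationalization level on the deterrents and enablers factors.
import pandas as pd
import statsmodels.formula.api as smf

def standardized_betas(df: pd.DataFrame):
    z = (df - df.mean()) / df.std()  # z-score all variables -> betas
    model = smf.ols("internationalization ~ deterrents + enablers",
                    data=z).fit()
    return model.params[["deterrents", "enablers"]]
```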

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1565-1576 / 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from top-of-atmosphere observations of the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from atmospheric correction is used to estimate various marine environmental parameters, such as chlorophyll-a concentration, total suspended material concentration, and absorption by dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, as it significantly impacts the reliability of all other ocean color products. In clear waters, however, the atmospheric path radiance in the blue wavelengths can exceed the water-leaving radiance by more than a factor of ten, which makes atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause more than a 10% error in Rrs. Quality assessment of Rrs after atmospheric correction is thus essential for reliable ocean environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) was applied and modified to account for the different spectral characteristics of GOCI-II. The method is officially employed in the ocean color satellite data processing system of the National Oceanic and Atmospheric Administration (NOAA). It provides quality scores for Rrs ranging from 0 to 1 and classifies waters into 23 types. When the QA algorithm was applied to initial-phase GOCI-II data with less mature calibration, the most frequent score was a relatively low 0.625; when applied to improved GOCI-II atmospheric correction results with updated calibration, the most frequent score rose to 0.875. The water-type analysis indicated that parts of the East Sea, the South Sea, and the Northwest Pacific Ocean are primarily relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly highly turbid case-II waters. We expect the QA algorithm to support GOCI-II users not only in statistically identifying Rrs results with significant errors but also in achieving more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
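A simplified, hedged sketch of the QA scoring idea (after the NOAA-style QA scheme the abstract describes): normalize the Rrs spectrum, assign it to the closest of 23 reference water-type spectra, and score the fraction of bands inside that type's tolerance envelope. The reference arrays here are placeholders, not the actual SeaBASS-derived tables.

```python
# Assumed inputs: rrs is (n_bands,); ref_means/ref_lower/ref_upper are
# (23, n_bands) reference water-type spectra and their tolerance bounds.
import numpy as np

def qa_score(rrs, ref_means, ref_lower, ref_upper):
    n = rrs / np.linalg.norm(rrs)                           # normalize spectrum
    refs = ref_means / np.linalg.norm(ref_means, axis=1, keepdims=True)
    # Assign the water type by minimum spectral angle
    water_type = int(np.argmin(np.arccos(np.clip(refs @ n, -1.0, 1.0))))
    # Score = fraction of bands inside the type's tolerance envelope
    in_bounds = (n >= ref_lower[water_type]) & (n <= ref_upper[water_type])
    return water_type, in_bounds.mean()                     # score in [0, 1]
```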

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • Many studies have long been conducted in academia on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online business, companies carry out campaigns of a variety and volume that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure rises, and from the corporate standpoint the effectiveness of campaigns is decreasing even as investment costs grow, leading to low actual success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system's ultimate purpose is to increase the success rate of campaigns by collecting and analyzing customer-related data and using it in the campaigns, and recent attempts have been made to predict campaign responses with machine learning. Because campaign data has many features, selecting appropriate features is very important. If all input data is used when classifying a large volume of data, learning time grows as the set of classes expands, so a minimal input data set must be extracted from the full data. Moreover, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. To improve accuracy, a feature selection technique that removes features close to noise should therefore be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer poor classification performance and long learning times. In this study, we therefore propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose is to improve the sequential search of the existing SFFS method, using statistical characteristics of the data processed in the campaign system, when searching for the feature subsets that underpin machine learning model performance: features with a large influence on performance are derived first, features with negative effects are removed, and the sequential method is then applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithms, and campaign success prediction was higher than with the original data set, the greedy algorithms, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm aided the analysis and interpretation of prediction results by providing the importance of the derived features.
The selected features include attributes such as age, customer rating, and sales, which were already known to be statistically important. Unexpectedly, features that campaign planners had rarely used to select targets, such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, were also selected as important for campaign response. It was confirmed that base attributes can be very important features depending on the campaign type, making it possible to analyze and understand the important characteristics of each campaign type.
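A sketch of the general two-stage approach described above (statistical pre-filtering followed by floating forward selection), built from off-the-shelf components rather than the authors' modified algorithm; it assumes the third-party mlxtend library for SFFS and uses mutual information as the statistical score:

```python
# Assumed stand-ins: mutual information for the statistical pre-ranking and
# mlxtend's SequentialFeatureSelector (floating=True) for the SFFS stage.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

def improved_sffs(X, y, keep_top=30, k_features=10):
    # Stage 1: rank features statistically and drop the near-noise tail
    scores = mutual_info_classif(X, y)
    top = np.argsort(scores)[::-1][:keep_top]
    # Stage 2: run SFFS only over the surviving candidates
    sfs = SFS(LogisticRegression(max_iter=1000), k_features=k_features,
              forward=True, floating=True, scoring="accuracy", cv=5)
    sfs.fit(X[:, top], y)
    return top[list(sfs.k_feature_idx_)]   # indices into the original X
```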

A Study of Fertility and Household Economic Behavior in Korea (우리나라의 출산력과 가정경제행태에 관한 연구)

  • 노공균;조남훈
    • Korea journal of population studies / v.10 no.2 / pp.17-45 / 1987
  • This study contributes to understanding women's labor market behavior by focusing on a particular set of labor force transitions: withdrawal from and entry into the labor force during the period surrounding the birth of a first child. In particular, it provides a dynamic analysis, using longitudinal data and event history analysis, to conceptualize labor force behavior in a straightforward way. The main research question addresses which factors increase or decrease the hazard rates of leaving and entering the labor market. The study used a piecewise Gompertz model, guided by a non-parametric analysis of the hazard rates, which allowed a relatively detailed description of the distribution of the timing of labor market exit and entry through the parameters of interest. The results show that preferences and structural variables, as well as economic considerations, are very important factors in explaining the labor market behavior of women in the period surrounding childbirth.
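A minimal sketch of the piecewise Gompertz specification the abstract names, written in the usual proportional-hazards form; the notation is assumed, not taken from the paper:

```latex
% Hazard of leaving (or entering) the labor force within period p,
% with period cut-points \tau_0 < \tau_1 < \dots splitting the window:
h(t \mid \mathbf{x}) =
  \exp\!\left(\alpha_p + \gamma_p t\right)\,
  \exp\!\left(\mathbf{x}'\boldsymbol{\beta}\right),
\qquad t \in [\tau_{p-1}, \tau_p),
% where \gamma_p captures within-period duration dependence and
% \boldsymbol{\beta} carries the economic, preference, and structural
% covariates discussed in the abstract.
```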

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method that uses only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent the complex semantic structure of a given context, such as a text passage. We construct linguistic semantic knowledge with the two techniques and use it for target word selection in English-Korean machine translation, exploiting grammatical relationships stored in a dictionary. To resolve the data sparseness problem in target word selection, we use the k-nearest neighbor learning algorithm, estimating the distance between instances with these models. In the experiments, we use TREC AP news data to construct the latent semantic space and a Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, we show how the accuracy relates to two important factors, the dimensionality of the latent space and the value of k in k-NN learning, using correlation calculations.
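A minimal LSA sketch in the spirit of the paper: build a term-document matrix from a raw corpus, project it into a latent space with truncated SVD, and rank candidate translations by similarity to the source context. The corpus and candidate lists are placeholders, and the latent dimensionality must stay below the vocabulary size.

```python
# Sketch under assumptions; the paper's actual pipeline (and its PLSA variant)
# is more elaborate than this.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def build_latent_space(raw_corpus, n_dims=200):
    vec = CountVectorizer()
    X = vec.fit_transform(raw_corpus)          # documents x terms
    svd = TruncatedSVD(n_components=n_dims)    # n_dims < vocabulary size
    svd.fit(X)                                 # latent semantic space
    return vec, svd

def rank_candidates(context, candidates, vec, svd):
    """Rank candidate target words by latent-space similarity to the context."""
    ctx = svd.transform(vec.transform([context]))
    cand = svd.transform(vec.transform(candidates))
    sims = cosine_similarity(ctx, cand)[0]
    return sorted(zip(candidates, sims), key=lambda t: -t[1])
```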

A Study on the Optimum Design of Multiple Screw Type Dryer for Treatment of Sewage Sludge (하수슬러지 처리를 위한 다축 스크류 난류 접촉식 건조기의 최적 설계 연구)

  • Na, En-Soo;Shin, Sung-Soo;Shin, Mi-Soo;Jang, Dong-Soon
    • Journal of Korean Society of Environmental Engineers / v.34 no.4 / pp.223-231 / 2012
  • The purpose of this study is to investigate the mechanism of heat transfer by resolving the complex fluid flow inside a carefully designed screw dryer for the treatment of sewage sludge, using numerical analysis and experimental study. The results were quite helpful in obtaining design criteria for enhancing drying efficiency, and thereby achieving the optimal design of a multiple-screw dryer for treating inorganic and organic sludge wastes. One notable design feature of the dryer is that a certain fraction of the hot combustion gases is bypassed into the bottom of the screw cylinder by fluid flow induction, through delicately designed holes on the screw surface, to internally agitate the sticky sludge. This offers many benefits, not only enhancing thermal efficiency even for highly viscous material, but also giving greater flexibility in system design and operation. One operational precaution must be observed, however: when distributing the hot flue gas through the lump of sludge for internal agitation, pore blocking must be avoided, as must excessive pressure drop caused by the inertial resistance across the sludge. The optimal retention time for treating 200 kg/hr of sewage sludge with the screw rotating at 1 rpm was determined empirically to be about 100 minutes, and the corresponding optimal heat source was found to be 150,000 kcal/hr. A series of numerical calculations was performed to resolve the flow characteristics and assist in system design as a function of important system and operational variables. The numerical calculations were successfully evaluated against experimental temperature profiles and flow field characteristics, and in general the results are physically reasonable and consistent across the parametric study. In further studies, more quantitative analyses, such as pressure drop as a function of the type and loading of the drying sludge, will be made to evaluate the system in both experiment and calculation.
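A back-of-the-envelope check of the reported operating point: 150,000 kcal/hr for 200 kg/hr of sludge gives 750 kcal per kg of wet sludge. The moisture content assumed below is illustrative, not taken from the paper.

```python
# Rough energy-balance sanity check (assumed moisture fraction, not measured).
LATENT_HEAT_WATER = 539.0      # kcal/kg, latent heat of vaporization near 100 C

heat_input = 150_000 / 200     # kcal per kg of wet sludge = 750
moisture = 0.80                # assumed wet-basis moisture fraction
evaporation_load = moisture * LATENT_HEAT_WATER     # ~431 kcal/kg
thermal_efficiency = evaporation_load / heat_input  # ignores sensible heat
print(f"latent load {evaporation_load:.0f} kcal/kg, "
      f"implied efficiency ~{thermal_efficiency:.0%}")   # ~57%
```

The margin between 750 kcal/kg supplied and roughly 431 kcal/kg of latent load plausibly covers sensible heating of the sludge and losses, which is consistent with the reported heat source being an empirically determined optimum rather than a bare evaporation requirement.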