• Title/Summary/Keyword: search method

Search Result 5,588, Processing Time 0.032 seconds

Method Development for the Profiling Analysis of Endogenous Metabolites by Accurate-Mass Quadrupole Time-of-Flight(Q-TOF) LC/MS (LC/TOFMS를 이용한 생체시료의 내인성 대사체 분석법 개발)

  • Lee, In-Sun;Kim, Jin-Ho;Cho, Soo-Yeul;Shim, Sun-Bo;Park, Hye-Jin;Lee, Jin-Hee;Lee, Ji-Hyun;Hwang, In-Sun;Kim, Sung-Il;Lee, Jung-Hee;Cho, Su-Yeon;Choi, Don-Woong;Cho, Yang-Ha
    • Journal of Food Hygiene and Safety
    • /
    • v.25 no.4
    • /
    • pp.388-394
    • /
    • 2010
  • Metabolomics aims at the comprehensive, qualitative and quantitative analysis of wide arrays of endogenous metabolites in biological samples. It has shown particular promise in toxicology, drug development, functional genomics, systems biology and clinical diagnosis. In this study, a high-resolution mass spectrometry technique, time-of-flight (TOF) MS, was validated for the investigation of amino acids, sugars and fatty acids. Rat urine and serum samples were extracted with each of the selected solvents (50% acetonitrile, 100% acetonitrile, acetone, methanol, water, ether). We optimized the liquid chromatography/time-of-flight mass spectrometry (LC/TOFMS) system, selecting appropriate columns, mobile phases, fragmentor energy and collision energy, which allowed 17 metabolites to be detected. The spectral data collected from LC/TOFMS were tested by ANOVA. The results indicated that (1) MS and MS/MS parameters were optimized and the most abundant product ion of each metabolite was selected for monitoring; and (2) by design-of-experiment analysis, methanol yielded the optimal extraction efficiency. Therefore, the results of this study are expected to be useful in endogenous metabolite research as a validated SOP for endogenous amino acids, sugars and fatty acids.
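
As an illustration of the solvent comparison described above, the sketch below runs a one-way ANOVA over hypothetical LC/TOFMS peak areas for the six extraction solvents named in the abstract; the numbers, sample sizes, and the use of SciPy are assumptions for illustration, not the study's actual data or software.

```python
# Minimal sketch of a one-way ANOVA comparing extraction solvents.
# The peak-area values below are invented for illustration only; the solvent
# names follow the abstract (50% ACN, 100% ACN, acetone, methanol, water, ether).
from scipy.stats import f_oneway

# Hypothetical LC/TOFMS peak areas of one metabolite per extraction solvent
peak_areas = {
    "50% acetonitrile":  [1.02e6, 0.98e6, 1.05e6],
    "100% acetonitrile": [0.91e6, 0.89e6, 0.94e6],
    "acetone":           [0.75e6, 0.78e6, 0.73e6],
    "methanol":          [1.30e6, 1.28e6, 1.33e6],
    "water":             [0.60e6, 0.63e6, 0.58e6],
    "ether":             [0.40e6, 0.42e6, 0.39e6],
}

f_stat, p_value = f_oneway(*peak_areas.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # a small p-value indicates the solvents differ
```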

In-Depth Interview of Parents Experienced First Infant Oral Examination (1차 영유아 구강검진을 경험한 부모의 심층면담)

  • Lee, Su-Na;Lim, Soon-Ryun
    • Journal of dental hygiene science
    • /
    • v.17 no.6
    • /
    • pp.543-551
    • /
    • 2017
  • The purpose of this study was to analyze the experiences of parents whose children received the first infant oral examination and to identify how the examination program could be improved in practice. In-depth interviews were held with 10 parents of children aged 18 to 29 months who had completed the first infant oral examination. The following conclusions were obtained by deriving concepts and categories from the recorded interviews. First, the main reason for dissatisfaction was that the examination felt perfunctory. Parents were disappointed that the dentist barely looked inside the child's mouth and that the visit was over quickly, and they felt that, because the examination is provided free of charge, it was treated more casually than an examination for general dental treatment. Second, most participants questioned whether they should return for subsequent infant oral examinations. Third, the tooth numbers and dental terminology in the result notice were difficult to understand. Fourth, the parents suggested that oral health management information should be provided after the examination and that direct education on oral health care methods should be given at the examination. In addition, we identified the need for oral health care education for parents of infants. Therefore, for the infant oral examination to be carried out effectively, the identified problems should be addressed by collecting parents' opinions, and efficient ways of managing the program should be sought through continued research on infant oral examinations.

Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.131-145
    • /
    • 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the area of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. Accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing the fill rate. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes accurate ATP quotation quite difficult. In addition, many researchers have assumed ATP models with integer time lags. However, in industry practice, integer time lags are very rare, and a model developed using integer times therefore only approximates the real system. Various alternative models for an ATP system with time lags have been developed and evaluated. In most cases, these models have assumed that the time lags are integer multiples of a unit time grid; since integer time lags are rare in practice, such models only approximate real systems, and the differences caused by this approximation frequently result in significant accuracy degradation. To introduce the ATP model with time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated the research most directly related to the topic of this paper. They propose a modeling framework for a system with non-integer time lags and show how to apply the framework to a variety of systems including continuous time series, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and is capable of representing real-world systems more accurately. Previously, to cope with non-integer time lags, researchers usually modeled the system either by rounding lags to the nearest integers or by subdividing the time grid to make the lags integer multiples of the grid. Each approach has a critical weakness: the first either underestimates lead times, potentially leading to infeasibilities, or overestimates them, potentially resulting in excessive work-in-process; the second drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management. We focus on a globally networked system of a worldwide headquarters, distribution centers, and manufacturing facilities. We develop a mixed integer programming (MIP) model for the ATP process, including the definition of the required data flow. The illustrative ATP module shows that the proposed system has a large effect in SCM. The system we consider is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers. For this system, we consider an ATP scheduling and capacity allocation problem. In this study, we proposed a model for the ATP system in SCM using the dynamic production function and considering non-integer time lags. The model is developed under a framework suitable for non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm. We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose an efficient heuristic procedure using an evolutionary system to solve it efficiently. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real variables. The proposed regeneration procedure, which evaluates each infeasible chromosome, makes the solutions converge to the optimum quickly.
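
As a rough illustration of the evolutionary heuristic described above, the following toy genetic algorithm evolves real-valued capacity allocations and repairs ("regenerates") infeasible chromosomes; the objective, constraint, and operator choices are simplified stand-ins, not the paper's MIP model or exact procedure.

```python
# Toy genetic algorithm: real-valued chromosomes encode capacity allocations,
# infeasible chromosomes are repaired to respect the capacity limit, and the
# population evolves toward a low-cost allocation. All numbers are hypothetical.
import random

N_ORDERS, CAPACITY = 5, 100.0
demand = [30, 25, 40, 20, 35]            # hypothetical order quantities

def repair(chrom):
    """Scale the allocation down if it exceeds total capacity (regeneration step)."""
    total = sum(chrom)
    return [g * CAPACITY / total for g in chrom] if total > CAPACITY else chrom

def cost(chrom):
    """Penalize unmet demand (a toy surrogate for the ATP objective)."""
    return sum(max(d - g, 0.0) for d, g in zip(demand, chrom))

def evolve(pop_size=40, generations=200, mut=0.2):
    pop = [repair([random.uniform(0, CAPACITY) for _ in range(N_ORDERS)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]                        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # arithmetic crossover
            child = [g + random.gauss(0, 5) if random.random() < mut else g
                     for g in child]                          # Gaussian mutation
            children.append(repair([max(g, 0.0) for g in child]))
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print("best allocation:", [round(g, 1) for g in best], "unmet demand:", round(cost(best), 1))
```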

The Bibliographical Investigation of the Effect of Clematis mandshurica Maxim (위령선(威靈仙)의 약리(藥理)에 대한 사상의학적(四象醫學的) 고찰(考察))

  • Jung, Kuk-yung;Song, Il-byung
    • Journal of Sasang Constitutional Medicine
    • /
    • v.10 no.2
    • /
    • pp.151-162
    • /
    • 1998
  • Purpose and Method : There are many difficulties in using existing medicinal herbs based on the theory of Yin-Yang and the five elements, because the Sasang Constitutional classification of medicinal herbs has not yet been explained and an exact Sasang Constitutional pharmacology has not been established, so disputes and confusion arise easily. Therefore, through a literary investigation of Clematis mandshurica Maxim., this study tries to objectify the Sasang Constitutional classification of Clematis mandshurica Maxim., the principle behind its use, and the common properties of Sasang Constitutional medicinal herbs, and to find a clue for investigating the effects of other Sasang Constitutional medicinal herbs. Result : The Qi(氣) and mi(味) of Clematis mandshurica Maxim. are bitter and hot in taste with warm Qi(溫氣), its color is dark, and the part used as a medicinal herb is the root. Thus Clematis mandshurica Maxim. descends from the lung, divides impurity and purity, and is able to remove symptoms in which dryness and fever are solidified, like Magnoliae cortex(厚朴). Clematis mandshurica Maxim. has the effect of awakening the Jin-Qi(眞氣) of the lung, dividing the impurity and purity of Qi(氣) and aek(液), and improving the function and structure of Taeumin(太陰人). I think that this method of literary investigation is of great value for the objectification of Sasang Constitutional pharmacology.

A Systematic Review on the Effects of Virtual reality-based Telerehabilitation for Stroke Patients (뇌졸중 환자를 위한 가상현실 기반의 원격재활 효과에 관한 체계적 고찰)

  • Lim, Young-Myoung;Lee, ji-Yong;Jo, Seong-Jun;Ahn, Ye-Seul;Yoo, Doo-Han
    • The Journal of Korean society of community based occupational therapy
    • /
    • v.7 no.1
    • /
    • pp.59-70
    • /
    • 2017
  • Objective : The purpose of this study was to systematically examine the effect of virtual reality-based telerehabilitation on stroke patients and to explore its effects and how it could be applied domestically. Methods : The EMBASE and CINAHL databases were searched using the terms virtual reality, telerehabilitation, and stroke. A total of 10 studies satisfying the selection criteria were analyzed according to their qualitative level, general characteristics, and the PICO method. Results : In the 10 selected studies, virtual reality-based telerehabilitation systems were applied. Sensory and motor feedback was provided through visual and auditory input via video in the home environment, stimulating changes in the client's nervous system. Outcome measures included upper extremity function, balance and gait, and activities of daily living. The virtual reality-based telerehabilitation interventions had an effect on upper extremity function and balance in all studies, and on activities of daily living in some. Telerehabilitation services that compensated for environmental constraints improved client satisfaction, which suggests the interventions helped maintain function. Conclusion : Virtual reality-based telerehabilitation was applied mainly to upper extremity function, balance, and activities of daily living, and it helped improve function through intervention, supervision, and training by the therapist in the home environment as well. This study suggests the basis and possibility of clinical application of virtual reality-based telerehabilitation. Further research on diverse virtual reality intervention methods and on the effects of telerehabilitation is needed.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many computations, which can lead to high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, from merely lessening the noise of the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm identifies words that are not important, we assume that words similar to those words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words that have comparatively low information gain values from the raw text and build the word embedding. Second, we additionally select words that are similar to the low-information-gain words and build the word embedding. Finally, the filtered text and word embeddings are applied to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes with a helpful-vote ratio of over 70% were classified as helpful reviews; since Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that used all the words. We showed that one of the proposed methods outperforms the embeddings using all the words: by removing unimportant words, we obtain better performance. However, removing too many words lowered the performance.
For future research, it is necessary to consider diverse preprocessing methods and an in-depth analysis of word co-occurrence for measuring similarity values among words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed methods, making it possible to examine the possible combinations of word embedding methods and elimination methods.
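
The word-elimination step described above can be sketched as follows, assuming mutual information (scikit-learn) as the information-gain measure and a gensim Word2Vec model for cosine similarity; the thresholds, toy corpus, and library choices are illustrative assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch of the word-elimination step: words with low information gain,
# plus words similar to them, are removed before building the final embedding.
# Library choices and thresholds are assumptions for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

docs = ["great book very helpful", "terrible print quality", "helpful and clear"]
labels = [1, 0, 1]  # toy helpful / not-helpful labels

# 1) Information gain (mutual information) of each word with respect to the label
vec = CountVectorizer()
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True)
vocab = vec.get_feature_names_out()
low_ig = {w for w, g in zip(vocab, ig) if g < 0.05}   # hypothetical threshold

# 2) Expand the removal set with words similar to the low-IG words
w2v = Word2Vec([d.split() for d in docs], vector_size=50, min_count=1, seed=0)
similar = {s for w in low_ig if w in w2v.wv
             for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.9}
to_remove = low_ig | similar

# 3) Filter the corpus before training the final embedding / classifier
filtered_docs = [" ".join(t for t in d.split() if t not in to_remove) for d in docs]
print(filtered_docs)
```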

Development of Music Recommendation System based on Customer Sentiment Analysis (소비자 감성 분석 기반의 음악 추천 알고리즘 개발)

  • Lee, Seung Jun;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.197-217
    • /
    • 2018
  • Music is one of the most creative acts, expressing human sentiment with sound. Because music easily invokes people's sentiment and empathy, the music people listen to can either encourage or discourage their feelings. Thus, sentiment is a primary factor when searching for or recommending music. With regard to music recommendation systems, there is still a lack of systems based on customer sentiment. The algorithms used in previous music recommendation systems are mostly user based, relying for example on a user's play history and playlists. Based on play histories or playlists shared among multiple users, distances between pieces of music are calculated with reference to basic information such as genre, singer, and beat, and similar music is filtered and recommended to users. However, such methodologies have limitations such as the filter bubble: for example, a user who listens only to rock music would rarely be recommended hip-hop or R&B songs that carry a similar sentiment. In this study, we focused on the sentiment of the music itself and developed a methodology for defining a new index for music recommendation. Concretely, we propose the "SWEMS" index and, using this index, also extract a "Sentiment Pattern" for each piece of music used in this research. We expect that the "SWEMS" index and "Sentiment Pattern" can be used for a variety of purposes, not only in music recommendation systems but also as inputs for building predictive models. In this study, the music recommendation system had to be developed based on the emotional adjectives people generally feel when listening to music, so it was necessary to collect as many emotional adjectives as possible. Emotional adjectives were collected from previous studies, and more were gathered through social metrics and qualitative interviews, yielding 134 individual adjectives. Through several steps, the collected adjectives were reduced to a final set of 60. Based on these final adjectives, a music survey was conducted in which each item evaluated the sentiment of a song. The surveys were completed by expert panels who like listening to music; all survey questions were based on the emotional adjectives, and no other information was collected. The music evaluated in the previous step was divided into popular and unpopular songs, and the variables most relevant to the popularity of the music were derived. The derived variables were reclassified through factor analysis, and weights were assigned to the adjectives belonging to each factor. We define the extracted factors as the "SWEMS" index, which describes the sentiment of music as a numeric score. To implement the algorithm, we applied the Case Based Reasoning method, chosen because its problem-solving process resembles how humans solve problems. Using the "SWEMS" index of each piece of music, the algorithm recommends, based on Euclidean distance, songs whose emotion values for each factor are similar to those of a given song. Using the "SWEMS" index, we can also draw the "Sentiment Pattern" of each song, and we found that songs evoking similar emotions show similar "Sentiment Patterns".
Through the "Sentiment Pattern", we could also suggest new groupings of music that differ from the conventional genre categories. This research helps quantify qualitative data, and the algorithm can be used to quantify the content itself, which would help users search for similar content more quickly.

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.239-251
    • /
    • 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction. However, these statistical methods have not produced superior performance. In recent years, machine learning techniques such as artificial neural networks, SVM, and genetic algorithms have been widely used in stock market prediction. In particular, a case-based reasoning method known as k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may take more cases into account even when fewer applicable cases exist for the problem at hand. Second, it may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability with k-nearest neighbor and compares the predictability of k-nearest neighbor with the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted by dividing the learning dataset into two types. For the prediction of the next day's closing price, we used four variables: opening price, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process; in the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data for both experiments cover January 1, 2018 to August 31, 2018. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, when the learning data were small. However, the MAPE was 1.3497 for the random walk model and 1.2928 for k-NN in the second experiment, when the learning data were large. These results show that predictive power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets and does not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. Also, to produce better results, it is recommended that the k-nearest neighbor method find nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
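
The comparison described above can be sketched on synthetic prices as follows: a k-NN regressor predicts the next day's close from (open, high, low, close) and its MAPE is compared with a random-walk baseline; the data and hyperparameters are illustrative, not the study's Samsung Electronics dataset.

```python
# Minimal sketch: k-NN next-day close prediction vs. a random-walk baseline
# (tomorrow's close = today's close), compared by MAPE on synthetic data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
close = 50_000 + np.cumsum(rng.normal(0, 300, 500))           # synthetic price path
ohlc = np.column_stack([close * 0.995, close * 1.01, close * 0.99, close])

X, y = ohlc[:-1], close[1:]                                   # today's OHLC -> next close
split = 400
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
mape = lambda true, pred: np.mean(np.abs((true - pred) / true)) * 100

print("k-NN MAPE       :", round(mape(y_te, knn.predict(X_te)), 4))
print("random-walk MAPE:", round(mape(y_te, X_te[:, 3]), 4))  # last close as forecast
```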

Antimicrobial Synergistic Effects of Gallnut Extract and Natural Product Mixture against Human Skin Pathogens (피부 병원성균에 대한 오배자 천연 복합물의 시너지 항균 효과)

  • Kim, Ju Hee;Choi, Yun Sun;Kim, Wang Bae;Park, Jin Oh;Im, Dong Joong
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.47 no.2
    • /
    • pp.155-161
    • /
    • 2021
  • This study investigated natural materials with antimicrobial activity for application as natural preservatives in cosmetics. The disc diffusion method was used to screen nine natural antibacterial materials against three species of skin pathogenic bacteria (Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa) and Candida albicans. Based on the size of the inhibition zones, Rhus semialata gall (gallnut) extract, oak vinegar, and ε-polylysine showed the strongest antibacterial activities (> 10 mm). The minimum bactericidal concentrations (MBC) of gallnut extract and oak vinegar ranged from 10 to 20 mg/mL and from 20 to 40 mg/mL, respectively, against five human skin pathogens, and the MBC of ε-polylysine ranged from 0.5 to 2 mg/mL against the fungi. The synergistic effects of the gallnut extract/oak vinegar mixture and the gallnut extract/ε-polylysine mixture were evaluated by the checkerboard test. Compared with each material used alone, the MBC of the gallnut extract/oak vinegar mixture was four times lower against E. coli, C. albicans, and A. brasiliensis. Likewise, the MBC of the gallnut extract/ε-polylysine mixture was four times lower against C. albicans and A. brasiliensis. It was confirmed that combining gallnut extract with oak vinegar or ε-polylysine produced a synergistic antibacterial effect against three human skin pathogens. Thus, gallnut extract and natural product mixtures are expected not only to demonstrate antibacterial synergy but also to be applicable in cosmetics as a natural preservative system with a broad antibacterial spectrum.
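
Checkerboard results such as the four-fold MBC reductions reported above are commonly summarized with a fractional concentration index; the sketch below applies that general convention to hypothetical concentrations and is not taken from the paper's data.

```python
# Minimal sketch of how checkerboard results are commonly summarized with a
# fractional (bactericidal) concentration index. The formula and the threshold
# (<= 0.5 for synergy) follow general checkerboard practice; the numbers are
# illustrative only, not the concentrations reported in the paper.
def fic_index(mbc_a_alone, mbc_b_alone, mbc_a_combo, mbc_b_combo):
    """FIC index = MBC_A(combo)/MBC_A(alone) + MBC_B(combo)/MBC_B(alone)."""
    return mbc_a_combo / mbc_a_alone + mbc_b_combo / mbc_b_alone

# Example: each component's MBC drops four-fold in combination (as in the abstract)
fici = fic_index(mbc_a_alone=20.0, mbc_b_alone=40.0,   # mg/mL, hypothetical
                 mbc_a_combo=5.0,  mbc_b_combo=10.0)
print(f"FIC index = {fici:.2f} -> {'synergy' if fici <= 0.5 else 'no synergy'}")
```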

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important source for target marketing and personalized advertising on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics through online or offline surveys, these approaches are expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere on a webpage, the activity is logged in semi-structured website log files. Such data allow us to see which pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find the site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with demographics, including search keywords; frequency and intensity by time, day, and month; variety of websites visited; and text information from the web pages visited. The demographic attributes to be predicted also differ across papers, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, have been used for building prediction models. However, previous research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated in order to build the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics based on previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job. From the results of previous research, 64 clickstream attributes are used to predict the demographic attributes. The overall process of building the predictive models is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction on the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data representing 5 demographic attributes and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable.
For example, age prediction performs best with decision-tree-based dimension reduction and a neural network, whereas gender and marital status are predicted most accurately by applying SVM without dimension reduction. We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used effectively to predict their demographics and thereby support digital marketing.
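
One of the model alternatives described above (PCA-based dimension reduction with an SVM classifier, evaluated by 5-fold cross validation) can be sketched on synthetic data as follows; the real study used IBM SPSS Modeler rather than this Python stack, so the code is only an illustrative analogue.

```python
# Minimal sketch of the modeling pipeline: reduce the 64 clickstream attributes,
# fit a classifier for one demographic attribute, and evaluate with 5-fold CV.
# Synthetic data stands in for the real user profiles.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 64))          # 64 clickstream attributes per user
y = rng.integers(0, 2, size=1000)        # e.g. a binary gender label (synthetic)

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```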