• Title/Summary/Keyword: Information Poor

Search Results: 1,963

A Study on the Efficiency of Clinical Practice for Nursing Education in the Junior College of Nursing in Korea (전문대학 간호과의 임상 실험 효율화를 위한 연구)

  • Lee, Kun-Ja;Kim, Myung-Soon;Yang, Young Hee
    • Journal of Korean Public Health Nursing / v.3 no.2 / pp.77-108 / 1989
  • The purpose of this study was to identify the present condition of clinical practice and to develop a scheme for making clinical practice in nursing education more efficient at junior colleges of nursing in Korea. The study was conducted in two sections. The 1st section surveyed the present condition of clinical practice among 42 directors of nursing colleges; data were collected from July 8 to September 30, 1988. The 2nd section developed a scheme for the efficiency of clinical practice in nursing education; the subjects were 258 nursing professors and 223 clinical nurses in the 42 junior nursing colleges and their clinical settings in Korea, 481 subjects in total. Data were collected from July 8, 1988 to June 30, 1989 and were analyzed for mean, standard deviation, frequency, percentage, t-test, and χ²-test using SPSS-PC. Major findings were as follows: 1. The present condition of clinical education in junior colleges of nursing in Korea: 1) 32 colleges (76.2%) were managed on a year-based system. 2) In 25 colleges (59.5%), practice was performed individually for each subject. 3) A 4-week interval between class education and clinical education was the most common arrangement (15 colleges, 36.6%). 4) 30 colleges (71.4%) provided clinical education for all subjects that should be practiced; among the remaining 12 colleges, nursing administration was not practiced in 5 colleges (41.9%). The main reason that not all subjects were practiced was the lack or absence of suitable clinical settings (8 colleges, 66.7%). 5) 18 colleges (42.9%) responded that the clinical educator was the professor in charge of the subject. 6) 12 colleges (29.3%) responded that a clinical instructor was in charge of 6-10 students.
7) The evaluation ratio (professor to head nurse) was mostly 50% to 50% or 60% to 40%, each reported by 11 colleges (27.5%). The most common evaluation methods were evaluation by head nurses, reports, presence, and conference (11 colleges, 27.5%). 8) The clinical field career of professors was mostly 2 years (79 persons, 20.7%), with a mean of 3.2 years. The education career of professors was mostly more than 6 years (261 persons, 66.4%), with a mean of 9.2 years. The teaching load per week of a professor was mostly 16-18 hours (16 persons, 131.8%). 9) 34 colleges (82.9%) counted clinical practice hours as class hours, and 18 colleges (43.9%) counted 2 hours of clinical education as equal to 1 hour of class education. 2. The scheme for the efficiency of clinical practice in nursing education: 1) General characteristics of the subjects were as follows: Kyungsang province (145 persons, 30.5%), age 30-34 years (190 persons, 39.8%), graduate degree (245 persons, 51.5%), and a career of 6-10 years (199 persons, 41.4%) were the majority. 2) The suitable clinical setting was reported to be a systematic ward with a responsible clinical educator by 210 persons (43.8%); responses differed significantly by working field (p<0.01). 3) 259 subjects (54.0%) responded that the desirable qualification of a clinical instructor was 3-5 years of clinical experience with a master's degree or higher. 4) The mean score for the desirable qualities of a clinical instructor was 3.43; professors' score (3.54) was significantly higher than clinical nurses' (3.28) (p<0.01). 412 subjects (86.0%) responded that insufficient qualities of instructors could be improved by continuing to seek new information in the literature. 5) 196 subjects (41.4%) responded that the desirable qualification of a head nurse was more than 2 years in a head position within 5 or more years of clinical experience.
Responses by working field of subjects showed a significant difference (p<0.05). 6) The mean score for the desirable qualities of a head nurse was 3.18; clinical nurses' score (3.38) was significantly higher than professors' (3.01) (p<0.01). 419 subjects (87.8%) responded that insufficient qualities of head nurses could be improved through a continuing relationship with the instructor and by taking responsibility from the planning stage of clinical education. 7) The mean performance score for the desirable clinical education guide in college was 2.91; professors' score (2.96) was significantly higher than clinical nurses' (2.84) (p<0.01). 340 subjects (71.1%) responded that poor performance could be resolved by a more specific syllabus for clinical education and a satisfactory orientation for students. 8) The mean performance score for the desirable clinical education guide in hospital was 3.03. 9) 141 subjects (29.6%) responded that the desirable clinical evaluator was a group consisting of the professor, head nurse, and staff nurse; responses differed significantly by working field (p<0.05). 10) The mean performance score for the evaluation content needed in clinical education was 3.50; clinical nurses' score (3.56) was significantly higher than professors' (3.45) (p<0.01). 11) 433 subjects (90.2%) responded that the desirable evaluation method for clinical education was presence. 12) The mean performance score for minimizing personal differences among clinical educators was 2.89; responses by working field did not differ significantly. The causes of poor performance were too much workload at clinical settings and too many students at colleges, according to 386 subjects (81.1%).


Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard & Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. In practice, however, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt.
However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk; artificial neural network models, on the other hand, may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. However, since SVMs were originally devised for binary classification, they are not intrinsically geared for multiclass classifications such as credit ratings. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature. Only a few types of MSVM, however, have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of the MSVM techniques to a real-world case of credit rating in Korea. The best application is corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea.
The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that a modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
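Among the six techniques listed in the abstract, the pairwise-voting idea behind One-Against-One is easy to illustrate. The sketch below is not the authors' implementation: the 1-D "credit score" and the `thresholds` table are invented stand-ins for trained pairwise SVM classifiers, used only to show how votes from all class pairs are combined.

```python
# One-Against-One multiclass voting: train (or here, fake) one binary
# classifier per pair of rating classes, then let every pair vote.
from itertools import combinations

def one_against_one_predict(x, classes, binary_predict):
    """Predict the class with the most pairwise votes."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        winner = binary_predict(x, a, b)  # returns either a or b
        votes[winner] += 1
    return max(votes, key=votes.get)

# Toy binary rule standing in for a trained pairwise SVM: hypothetical
# 1-D class centres on a "credit score" axis.
thresholds = {"A": 0.8, "B": 0.5, "C": 0.2}
def toy_binary(x, a, b):
    # vote for whichever class centre is nearer to x
    return a if abs(x - thresholds[a]) < abs(x - thresholds[b]) else b

print(one_against_one_predict(0.75, ["A", "B", "C"], toy_binary))  # prints A
```

With k rating classes this scheme trains k(k-1)/2 binary classifiers; DAGSVM uses the same pairwise classifiers but evaluates them along a directed acyclic graph instead of by full voting.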

Determining the Authenticity of Labeled Traceability Information by DNA Identity Test for Hanwoo Meats Distributed in Seoul, Korea (DNA 동일성 검사를 통한 서울지역 유통 한우육의 표시 이력정보 진위 판별)

  • Yeon-jae Bak;Mi-ae Park;Su-min Lee;Hyung-suk Park
    • Journal of Food Hygiene and Safety / v.38 no.1 / pp.12-18 / 2023
  • Beef traceability systems help prevent imported beef from being distributed as Hanwoo (Korean native cattle) meat. In particular, assigning a traceability number to each head of cattle can provide consumers with all information regarding the purchased Hanwoo meat. In the present study, a DNA identity test was conducted on 344 samples of Hanwoo meat from large livestock product stores in Seoul between 2021 and 2022 to determine the authenticity of important label information, such as the traceability number. Traceability number mismatches were confirmed in 45 cases (13.1%). The mismatch rate decreased from 14.7% in 2021 to 11.3% in 2022, and the mismatch rate was higher in the northern region (16.9%) than in the southern region (10.2%). In addition, of the six brands, B and D showed satisfactory traceability system management, whereas E and A showed poor traceability system management, with significant differences (P<0.001). The actual traceability number confirmation rate was only 53.9% among the mismatched samples. However, examination of the authenticity of label information of the samples within the identified range revealed false marking in the order of traceability number (13.1%), sex (2.9%), slaughterhouse name (2.2%), and grade (1.6%); no false marking of breed (Hanwoo) was noted. To prevent the distribution of erroneously marked livestock products, the authenticity of label information must be determined promptly. Therefore, a legal basis must be established mandating completion of a daily work sheet, including the traceability number of the beef, when subdividing partial meat cuts. Our findings can be used as reference data to guide the management direction of traceability systems for ensuring transparency in the distribution of livestock products.

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset allocation portfolio for investors by using financial engineering algorithms without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocation output to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize the risk of the portfolio while maximizing its expected return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect?
Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted market portfolio and the equal-weighted market portfolio serve as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018. Our suggested intelligent views model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolios. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value.
The maximum drawdown is -20.8%, which is also the lowest value. The Sharpe ratio, which measures the return-to-risk ratio, shows the highest value, 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
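The Bayesian combination step described in the abstract can be sketched with the standard Black-Litterman posterior formula. This is a generic illustration, not the paper's code: the two-asset numbers (`pi`, `Sigma`, the single view in `P`, `Q`, `Omega`) are invented for demonstration.

```python
import numpy as np

def black_litterman(pi, Sigma, P, Q, Omega, tau=0.05):
    """Posterior expected returns combining equilibrium returns pi
    with views Q picked out by P, under view uncertainty Omega:
    mu = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
    """
    tS_inv = np.linalg.inv(tau * Sigma)
    A = tS_inv + P.T @ np.linalg.inv(Omega) @ P
    b = tS_inv @ pi + P.T @ np.linalg.inv(Omega) @ Q
    return np.linalg.solve(A, b)

pi = np.array([0.04, 0.06])                  # implied equilibrium returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])             # asset covariance
P = np.array([[1.0, 0.0]])                   # one view: about asset 1 only
Q = np.array([0.08])                         # ... asset 1 will return 8%
Omega = np.array([[0.0025]])                 # uncertainty of that view
mu = black_litterman(pi, Sigma, P, Q, Omega)
```

Here `mu[0]` lands between the 4% equilibrium return and the 8% view, pulled toward the view in proportion to its precision; the paper's intelligent views model supplies `P` and `Q` from SVM outputs instead of analyst opinion.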

Social Network Analysis for the Effective Adoption of Recommender Systems (추천시스템의 효과적 도입을 위한 소셜네트워크 분석)

  • Park, Jong-Hak;Cho, Yoon-Ho
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.305-316 / 2011
  • A recommender system is a system which, using automated information filtering technology, recommends products or services to the customers who are likely to be interested in them. Such systems are widely used by many Web retailers, such as Amazon.com, Netflix.com, and CDNow.com. Various recommender systems have been developed; among them, Collaborative Filtering (CF) is known as the most successful and commonly used approach. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. However, the relative performance of CF algorithms is known to be domain and data dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting in advance whether the performance of a CF recommender system will be acceptable is practically important. In this study, we propose a decision-making guideline which helps decide whether CF is adoptable for a given application with certain transaction data characteristics. Several previous studies reported that sparsity, gray sheep, cold-start, coverage, and serendipity could affect the performance of CF, but the theoretical and empirical justification of such factors is lacking. Recently, many studies have paid attention to Social Network Analysis (SNA) as a method to analyze social relationships among people. SNA is a method to measure and visualize the linkage structure and status, focusing on interaction among objects within a communication group. CF analyzes the similarity among previous ratings or purchases of each customer, finds the relationships among the customers who have similarities, and then uses those relationships for recommendations.
Thus CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. Under the assumption that SNA can facilitate an exploration of the topological properties of the network structure that are implicit in transaction data for CF recommendations, we focus on density, clustering coefficient, and centralization, which are among the most commonly used measures for capturing topological properties of a social network structure. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. We explore how these SNA measures affect CF performance and how they interact with each other. Our experiments used sales transaction data from H department store, one of the well-known department stores in Korea. A total of 396 data sets were sampled to construct various types of social networks. The measuring process consists of three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used UCINET 6.0 for SNA. The experiments conducted a 3-way ANOVA which employs the three SNA measures as independent variables and the recommendation accuracy, measured by the F1-measure, as the dependent variable. The experiments report that 1) each of the three SNA measures affects the recommendation accuracy, 2) the density's effect on performance overrides those of the clustering coefficient and centralization (i.e., CF adoption is not a good decision if the density is low), and 3) even when the density is low, the performance of CF is comparatively good when the clustering coefficient is low.
We expect that these experimental results will help firms decide whether a CF recommender system is adoptable for their business domain with certain transaction data characteristics.
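Two of the three measures above, density and the average clustering coefficient, can be computed directly from an adjacency structure. The toy computation below is illustrative only: the 4-node customer graph is invented, and a real study would use an SNA package such as UCINET (as the paper did) or NetworkX.

```python
# Density and average clustering coefficient of a small undirected
# customer graph (nodes = customers, links = purchase similarity).
from itertools import combinations

edges = {(0, 1), (0, 2), (1, 2), (2, 3)}   # hypothetical links
n = 4
adj = {i: set() for i in range(n)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

# density = actual links / maximum possible links
density = len(edges) / (n * (n - 1) / 2)

def local_cc(v):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    linked = sum(1 for a, b in combinations(sorted(nbrs), 2) if b in adj[a])
    return linked / (len(nbrs) * (len(nbrs) - 1) / 2)

clustering = sum(local_cc(v) for v in range(n)) / n
```

On this graph the density is 4/6 and the average clustering coefficient 7/12; centralization would additionally compare each node's degree against the most central node's degree.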

A Survey on the Consumer's Recognition of Food Labeling in Seoul Area (서울지역 소비자들의 식품표시에 대한 인식도 조사)

  • Choi, Mi-Hee;Youn, Su-Jin;Ahn, Yeong-Sun;Seo, Kab-Jong;Park, Ki-Hwan;Kim, Gun-Hee
    • Journal of the Korean Society of Food Science and Nutrition / v.39 no.10 / pp.1555-1564 / 2010
  • This study investigated consumers' recognition of food labeling in order to contribute to the development of food labels that are more informative to consumers. Questionnaires were collected from 120 male and female customers living in Seoul, aged from their teens to their 60s, from November 2nd to November 7th, 2009. At the time of purchase, 58.3% of the consumers checked the food label, and the main reason for checking it was to confirm the sell-by date (60.1%). Sixty percent of the consumers were satisfied with the current food labeling. Among those who were not satisfied, 30.6% complained about terms that were difficult to understand and 25.8% were dissatisfied with insufficient information. In every age group, most people were not satisfied with labeling of food ingredients and additives, followed by date of manufacture and sell-by date. 53.1% of consumers demanded that the date of manufacture and the sell-by date be labeled together. For clearer information, consumers wanted a use-by date (47.5%) rather than a sell-by date (23.3%). 56.7% of consumers were dissatisfied with warning information, such as allergy warnings, and the reasons for dissatisfaction were poor visibility (37.5%) and insufficient information (33.4%). Moreover, most consumers (90.0%) showed little knowledge of irradiation. To improve the food labeling standards into consumer-oriented standards, both amendment of the standards and consumer education will be necessary.

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly; they also affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys. Depending on whether a website's posts are positive or negative, the customer response is reflected in sales, so firms try to identify this information. However, many reviews on a website are not always good and are difficult to identify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity in sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set.
First, for the text classification algorithms related to sentiment analysis, we adopt popular machine learning algorithms such as NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and Gradient Boosting as comparative models. Second, deep learning has demonstrated a discriminative capability that can extract complex features of data. Representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to a bag-of-words model when processing a sentence in vector format, but it does not consider sequential data attributes. An RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; to solve that problem, LSTM is used. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and how they work. This study proposes integrated CNN and LSTM algorithms to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. A CNN can extract features for the classification automatically by applying convolution layers and massively parallel processing. LSTM is not capable of highly parallel processing; like faucets, an LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem.
Furthermore, when LSTM is used on top of the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The integrated CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, each word embedding layer can be improved by training the kernels step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure has the advantage of improving learning layer by layer. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
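The CNN front end described above (convolution over word embeddings, ReLU, then pooling) can be sketched minimally in NumPy. This is an illustrative toy with random weights, not the paper's model; the shapes (10 words, 8-dimensional embeddings, 4 filters with a window of 3 words) are invented.

```python
import numpy as np

# 1-D convolution over a sentence of word embeddings, followed by
# ReLU and max-over-time pooling: the feature extractor whose outputs
# would feed the LSTM layer in the integrated CNN-LSTM model.
rng = np.random.default_rng(0)
sentence = rng.normal(size=(10, 8))    # 10 words, 8-dim embeddings
filters = rng.normal(size=(4, 3, 8))   # 4 filters, window of 3 words

def conv1d_maxpool(x, w):
    n_filters, win, _ = w.shape
    steps = x.shape[0] - win + 1
    # slide each filter over the sentence
    feats = np.array([[np.sum(x[t:t + win] * w[f]) for t in range(steps)]
                      for f in range(n_filters)])
    feats = np.maximum(feats, 0.0)     # ReLU
    return feats.max(axis=1)           # max over time, one value per filter

features = conv1d_maxpool(sentence, filters)   # shape (4,)
```

In the integrated model these pooled features would be passed to the LSTM (and finally a sigmoid classifier) rather than used directly, which is what gives the combination its end-to-end structure.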

Making Cache-Conscious CCMR-trees for Main Memory Indexing (주기억 데이타베이스 인덱싱을 위한 CCMR-트리)

  • 윤석우;김경창
    • Journal of KIISE: Databases / v.30 no.6 / pp.651-665 / 2003
  • Reducing cache misses has emerged as the most important issue for today's main memory databases, in which CPU speeds have been increasing at 60% per year while memory speeds increase at only 10% per year. Recent research has demonstrated that cache-conscious index structures such as the CR-tree outperform the R-tree variants. The CR-tree's search performance can be poorer than the original R-tree's, however, since it uses a lossy compression scheme. In this paper, we propose as an alternative a cache-conscious version of the R-tree, which we call the MR-tree. The MR-tree propagates node splits upward only if one of the internal nodes on the insertion path has empty room; thus, the internal nodes of the MR-tree are almost 100% full. In case there is no empty room on the insertion path, a newly created leaf simply becomes a child of the split leaf. The height of the MR-tree increases according to the sequence of inserted objects, so the HeightBalance algorithm is executed when unbalanced heights of child nodes are detected. Additionally, we propose the CCMR-tree in order to build an even more cache-conscious MR-tree. Our experimental and analytical study shows that the two-dimensional MR-tree performs search up to 2.4 times faster than the ordinary R-tree while maintaining slightly better update performance and using similar memory space.
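The "lossy compression scheme" mentioned above refers to quantizing child keys relative to the parent's bounding region so more entries fit in a cache line, at the cost of precision. The sketch below is a hypothetical one-dimensional illustration of such relative quantization, not code from the paper; a real MBR would quantize each axis the same way.

```python
import math

def quantize_interval(child, parent, bits=8):
    """Encode a child interval relative to its parent as `bits`-bit integers,
    rounding outward so the quantized interval still contains the child."""
    levels = (1 << bits) - 1
    plo, phi = parent
    scale = levels / (phi - plo)
    return (math.floor((child[0] - plo) * scale),
            math.ceil((child[1] - plo) * scale))

def dequantize_interval(q, parent, bits=8):
    """Decode back to coordinates; the result encloses the original child."""
    levels = (1 << bits) - 1
    plo, phi = parent
    scale = (phi - plo) / levels
    return plo + q[0] * scale, plo + q[1] * scale

parent, child = (0.0, 100.0), (12.3, 47.9)
lo, hi = dequantize_interval(quantize_interval(child, parent), parent)
# The decoded interval encloses the original, so searches may descend into
# extra children: the lossy scheme trades search precision for cache fit,
# which is the drawback the MR-tree avoids by not compressing keys.
```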

A Study on Neutron Resonance Energy of 180Ta below 1eV Energy (1 eV 이하 에너지 영역에서의 180Ta 동위원소의 중성자공명에 대한 연구)

  • Lee, Samyol
    • Journal of the Korean Society of Radiology / v.8 no.6 / pp.287-292 / 2014
  • In this study, the neutron capture cross section of $^{180}Ta$ (natural abundance: 0.012%) obtained by measurement has been compared with the evaluated capture data. In general, a neutron capture resonance is described by the Breit-Wigner formula, which consists of resonance parameters such as the resonance energy, the neutron width, and the total width. In the case of $^{180}Ta$, however, the experimental neutron capture cross section data and resonance information below 10 eV are very poor. Therefore, in this study, we analyzed the neutron resonance of $^{180}Ta$ by measuring the prompt gamma rays from the sample, and the resonance was compared with the evaluated data of Mughabghab, ENDF/B-VII, JEFF-3.1, and TENDL-2012. Neutrons produced by photonuclear reactions with the 46-MeV electron linear accelerator at the Research Reactor Institute, Kyoto University, were used for the cross section measurement of the $^{180}Ta(n,{\gamma})^{181}Ta$ reaction. $BGO(Bi_4Ge_3O_{12})$ scintillation detectors were used to measure the prompt gamma rays from the $^{180}Ta(n,{\gamma})^{181}Ta$ reaction; the BGO spectrometer was arranged geometrically as a total energy absorption detector.
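The abstract refers to the single-level Breit-Wigner description of a capture resonance. For reference, the standard form of the capture cross section is (symbols are the conventional ones, not taken from the paper):

```latex
\sigma_{n\gamma}(E) \;=\; \frac{\pi}{k^2}\, g\,
  \frac{\Gamma_n \Gamma_\gamma}{\left(E - E_r\right)^2 + \left(\Gamma/2\right)^2}
```

where $E_r$ is the resonance energy, $\Gamma_n$ the neutron width, $\Gamma_\gamma$ the radiation width, $\Gamma$ the total width, $k$ the neutron wave number, and $g$ the statistical spin factor.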

Awareness and practice of dental caries prevention according to concerns and recognition for off-spring's oral health (자녀의 구강건강 관심도 및 인지도에 따른 치아우식예방법의 인식과 실천)

  • Lee, Ji-Young;Cho, Pyeong-Kyu
    • Journal of Korean Society of Dental Hygiene / v.11 no.6 / pp.1005-1016 / 2011
  • Objectives: The purpose of this study was to examine mothers' awareness of their children's oral health and their concern for it by socio-demographic characteristics, and the relationship of their awareness of methods of dental-caries prevention to their practice of those methods. Methods: The subjects in this study were 337 guardians of preschoolers at kindergartens and daycare centers. A self-administered survey was conducted from April 25 to May 27, 2011, and the collected data were analyzed with the statistical package SPSS 18.0. Results: 1. As for self-rated concern for their children's oral health, 87.7 percent said they were concerned and 12.1 percent replied, "So-so." Whether the mothers were working and whether they were mainly responsible for child rearing made significant differences (p<.05). 2. As for subjective awareness of their children's oral health, the largest group of the mothers answered "So-so" (44.9%); the second group replied that their children were in good oral health (40.5%), and the third group in poor oral health (14.2%). 3. Regarding the relationship between self-rated concern for their children's oral health and awareness of methods of caries prevention, statistically significant differences were found for toothbrushing education and sealants (p<.05). There were no statistically significant differences in practice, but application of fluoride was the least practiced. 4. Regarding the relationship between self-rated awareness of their children's oral health and awareness of the preventive methods, there were statistically significant gaps in awareness of toothbrushing education (p<.05). In practice, statistically significant gaps were found in toothbrushing education and sugar-intake restriction (p<.01). 5.
Regarding the correlation between awareness and practice of the preventive methods of caries, awareness of all the factors, involving toothbrushing education, sealants, application of fluoride, and restriction of sugar intake, had a significant positive correlation with practice of them: better awareness led to better practice. Conclusions: In order to ensure children's successful oral health care, more substantive education on how to prevent dental caries should be offered by experts such as dental hygienists and dentists. In particular, detailed information on application of fluoride, restriction of sugar intake, and pit and fissure sealants should be provided to mothers to help their children stay away from dental caries.