• Title/Summary/Keyword: methods of data collection


A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much of it. To relieve this information overload, ranking algorithms have been applied in various domains, and as ever more information becomes available, ranking search results effectively and efficiently will become still more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page was estimated from the number of keywords found in the page, a measure that is easy to manipulate. In contrast, link-analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if many other pages refer to it, and its importance increases further when the referring pages are themselves important. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, it maintains two kinds of scores: an authority score and a hub score. A page with a high authority score is an authority on a given topic and is referred to by many pages; a page with a high hub score links to many authoritative pages. Link-structure-based ranking has thus played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. Meanwhile, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, which makes ranking algorithms for RDF knowledge bases increasingly important. The RDF graph consists of nodes and directed links, much like the Web graph, so link-structure-based ranking appears highly applicable to Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. The WWW can be viewed as one huge class, a collection of Web pages with a single recursive property, the 'refers to' property corresponding to hyperlinks. The Semantic Web, by contrast, encompasses many kinds of classes and properties, so ranking methods designed for the WWW must be modified to reflect this added complexity. Previous research has addressed the ranking of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources, defining an objectivity score and a subjectivity score corresponding to Kleinberg's authority and hub scores, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence one resource exerts on another depending on the property linking the two. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They built several Semantic Web systems to validate their technique and reported experimental results supporting its applicability to the Semantic Web. Despite these efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a given domain; in other words, the ratio of links to nodes must be high, or resources must be described in sufficient detail, for the algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, whereby pages that are less important but densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important but simply because it is very common and therefore has many incoming links. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that resolves the problems identified in the previous studies. Our method is based on a class-oriented approach. In contrast to the predicate-oriented approach adopted in previous research, under our approach a user determines the weight of a property by comparing its relative significance to that of the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries against the Semantic Web, which consists of many heterogeneous classes in RDF Schema, are meant to find resources belonging to the same class. It closely mirrors the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our algorithm resolves the TKC effect and also sheds light on the other limitations raised by the previous research. In addition, we propose two ways to incorporate datatype properties, which earlier methods did not employ even when those properties bear on resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of its ranking results, something not attempted in previous research, and we conducted a comprehensive mathematical analysis, likewise overlooked previously, which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
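
As a rough illustration of the link-analysis ideas surveyed in this abstract, the following Python sketch computes Kleinberg-style authority and hub scores over a toy RDF-like graph, with per-property weights standing in for the kind of property weighting Mukherjea and Bamba introduce. The triples, property names, and weights are invented for illustration; this is not the paper's class-oriented algorithm.

```python
import numpy as np

# Toy RDF-like graph: (subject, property, object) triples with
# illustrative per-property weights (not the paper's actual weights).
triples = [
    ("paperA", "cites", "paperB"),
    ("paperA", "cites", "paperC"),
    ("paperB", "cites", "paperC"),
    ("authorX", "wrote", "paperA"),
    ("authorX", "wrote", "paperB"),
]
prop_weight = {"cites": 1.0, "wrote": 0.5}

nodes = sorted({n for s, _, o in triples for n in (s, o)})
idx = {n: i for i, n in enumerate(nodes)}

# Weighted adjacency matrix: W[i, j] = total weight of links i -> j.
W = np.zeros((len(nodes), len(nodes)))
for s, p, o in triples:
    W[idx[s], idx[o]] += prop_weight[p]

# HITS iteration: authority = W^T hub, hub = W authority,
# normalizing each step until the scores converge.
auth = np.ones(len(nodes))
hub = np.ones(len(nodes))
for _ in range(50):
    auth = W.T @ hub
    auth /= np.linalg.norm(auth)
    hub = W @ auth
    hub /= np.linalg.norm(hub)

for n in nodes:
    print(f"{n}: authority={auth[idx[n]]:.3f}, hub={hub[idx[n]]:.3f}")
```

In the objectivity/subjectivity variant described above, the authority-like score rewards nodes that are objects of many weighted triples and the hub-like score rewards frequent subjects.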

DEVELOPMENT OF SAFETY-BASED LEVEL-OF-SERVICE CRITERIA FOR ISOLATED SIGNALIZED INTERSECTIONS (독립신호 교차로에서의 교통안전을 위한 서비스수준 결정방법의 개발)

  • Dr. Tae-Jun Ha
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.3-32
    • /
    • 1995
  • The Highway Capacity Manual specifies procedures for evaluating intersection performance in terms of delay per vehicle. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections based on the relative hazard of alternative intersection designs and signal timing plans. Conflict opportunity models were developed for the crossing, diverging, and stopping maneuvers associated with left-turn and rear-end accidents. Safety-based level-of-service criteria were then developed based on the distribution of conflict opportunities computed from the developed models. A case study evaluation of the level-of-service analysis methodology revealed that the developed safety-based criteria were not as sensitive to changes in prevailing traffic, roadway, and signal timing conditions as the traditional delay-based measure. However, the methodology did permit a quantitative assessment of the trade-off between delay reduction and safety improvement. The Highway Capacity Manual (HCM) specifies procedures for evaluating intersection performance under a wide variety of prevailing conditions such as traffic composition, intersection geometry, traffic volumes, and signal timing (1). At present, however, performance is measured only in terms of delay per vehicle, a parameter widely accepted as a meaningful and useful indicator of the efficiency with which an intersection serves traffic needs. What is lacking is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. For example, it is well known that changing from permissive to protected left-turn phasing can reduce left-turn accident frequency, yet the HCM only permits a quantitative assessment of the impact of this alternative phasing arrangement on vehicle delay. It is left to the engineer or planner to judge the safety benefits subjectively and to evaluate the trade-off between the efficiency and safety consequences of the alternative phasing plans. Numerous other examples involving geometric design and signal timing improvements could be given. At present, the principal methods available to the practitioner for evaluating relative safety at signalized intersections are: a) the application of engineering judgement, b) accident analyses, and c) traffic conflicts analysis. Reliance on engineering judgement has obvious limitations, especially when placed alongside the elaborate HCM procedures for calculating delay. Accident analyses generally require some type of before-after comparison, either for the case study intersection or for a large set of similar intersections. In either situation, there are problems associated with compensating for regression-to-the-mean phenomena (2), as well as with obtaining an adequate sample size. Research has also pointed to potential bias caused by the way in which exposure to accidents is measured (3, 4). Because of these problems with traditional accident analyses, some have promoted the use of the traffic conflicts technique (5). However, this procedure also has shortcomings in that it requires extensive field data collection and trained observers to identify the different types of conflicts occurring in the field.
The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections that would be compatible and consistent with that presently found in the HCM for evaluating efficiency-based level of service as measured by delay per vehicle (6). The intent was not to develop a new set of accident prediction models, but to design a methodology to quantitatively predict the relative hazard of alternative intersection designs and signal timing plans.
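
The abstract does not reproduce the conflict opportunity models themselves, but the general flavor of such a computation can be sketched. The example below estimates hourly left-turn crossing conflict opportunities from the opposing flow under Poisson arrivals; every formula, parameter, and threshold here is an illustrative assumption, not the paper's actual model.

```python
import math

def left_turn_conflict_opportunities(left_turn_vph, opposing_vph,
                                     green_ratio, critical_gap_s=4.5):
    """Illustrative estimate of hourly left-turn crossing conflict
    opportunities: each permissive left turn that meets an opposing
    vehicle within the critical gap counts as one opportunity.
    Assumes Poisson opposing arrivals; NOT the paper's actual model."""
    lam = opposing_vph / 3600.0          # opposing arrivals per second
    # Probability an opposing vehicle arrives within the critical gap.
    p_conflict = 1.0 - math.exp(-lam * critical_gap_s)
    # Only turns made during the shared green interval are exposed.
    return left_turn_vph * green_ratio * p_conflict

print(left_turn_conflict_opportunities(120, 600, green_ratio=0.45))
```

A safety-based level-of-service scale of the kind the paper proposes would then be defined over the distribution of such opportunity counts across prevailing conditions.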


Incidence and Spectrum of Chromosomal Abnormalities associated with Spontaneous Abortions in Korea: 470 Products of Conception over a Period of 6 Years (2005-2010) (국내 자연유산에 의한 수태산물 핵형분석에서 관찰된 염색체 이상의 발생율과 유형: 6년(2005-2010)간 수태산물 470예 분석)

  • Han, Sung-Hee;An, Jeong-Wook;Yang, Young-Ho;Kim, Young-Jin;Cho, Han-Ik;Lee, Kyoung-Ryul
    • Journal of Genetic Medicine
    • /
    • v.8 no.1
    • /
    • pp.44-52
    • /
    • 2011
  • Purpose: Cytogenetic analysis of spontaneous abortions (SABs) provides valuable information for establishing the causes of fetal loss, information that is essential for accurate reproductive and genetic counseling of couples. Such analysis also yields information on the frequencies and types of chromosomal abnormalities and the associated risks of recurrence. However, there have been only a few reports of chromosomal abnormalities in small samples of SABs in the Korean population. Here, we report the incidence and spectrum of chromosomal abnormalities in 470 cases of SAB in Korea. Materials and Methods: Between 2005 and 2010, a total of 470 products of conception (POC) resulting from SABs were submitted to our laboratory for cytogenetic analysis from various medical sites in Korea. The incidences and types of specific chromosomal abnormalities were determined, and the abnormalities were broken down by gestational age at the time of SAB and by maternal age. Results: The frequency of chromosomal abnormalities in POCs was 54.3% (255/470), comprising 228 (89.3%) numerical and 27 (10.7%; 3 balanced and 24 unbalanced) structural abnormalities. Among the numerical abnormalities, trisomy was predominant (67.0%), followed by monosomy X (12.5%), polyploidy (8.2%), triple X (0.8%), and autosomal monosomy (0.8%). The overall sex ratios (male:female) among the 470 POCs with normal and abnormal karyotypes were 0.58 and 0.65, respectively. Trisomies were identified for every autosome except chromosomes 1, 3, and 19. Among the 171 autosomal trisomies, trisomy 16 was the most common (19.9%), followed by trisomy 22 (13.5%), trisomy 21 (12.3%), trisomy 15 (9.9%), and trisomies 18 and 13 (5.3% each). The frequency of chromosomal abnormalities decreased with gestational age and increased with maternal age, the latter increase being driven by trisomies and complex abnormalities. Conclusions: We have presented a large collection of cytogenetic data for SABs collected over the past 6 years, providing a database for prenatal genetic counseling of parents who have experienced SABs in Korea.
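
The headline frequencies follow directly from the stated counts; a short script reproducing the arithmetic (counts taken from the abstract; the trisomy 16 case count is back-calculated from the reported percentage and is approximate):

```python
# Counts as reported in the abstract.
total_poc = 470
abnormal = 255
autosomal_trisomies = 171
trisomy_16_share = 0.199

print(f"abnormal karyotype rate: {abnormal / total_poc:.1%}")          # 54.3%
print(f"approx. trisomy 16 cases: {round(trisomy_16_share * autosomal_trisomies)}")  # ~34
```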

A Study on the Sensitivity of Human Rights and the Advocacy Activities of Korean Occupational Therapists (국내 작업치료사의 인권감수성이 옹호활동에 미치는 영향)

  • Kim, Ji-Man;Hong, Ki-Hoon;Lee, Chun-Yeop;Kim, Hee-Jung
    • The Journal of Korean society of community based occupational therapy
    • /
    • v.10 no.2
    • /
    • pp.11-24
    • /
    • 2020
  • Objective : Human rights constitute one of the basic pillars of any work involving persons, as is the case in the occupational therapy field. Methods : This study investigated the human rights sensitivity and advocacy activities of occupational therapists; differences according to therapist characteristics, the relationship between the two measures, and the impact of human rights sensitivity on advocacy were examined. Data were collected through an online survey of 116 participants. Results : Mean human rights sensitivity was 69.00±17.67 points, distributed across the subcategories as follows: perception of the situation, 23.25±5.62 points; perception of the consequences, 22.75±6.54 points; and perception of the responsibility, 23±6.54 points. The items covered equal rights, the right of persons with disabilities to education and to personal liberty, the right of the elderly to pursue happiness, privacy rights, and the privacy rights of people with mental illness. By working area, human rights sensitivity was higher in Seoul than in Gyeongsang province, while advocacy activity was higher in Seoul and Gyeonggi province than in Gyeongsang province. By type of service, general hospitals and rehabilitation/nursing hospitals showed higher human rights sensitivity than other service organizations. By working field, occupational therapists focused on the elderly showed higher human rights sensitivity than those in other fields. Therapists with 3 to 5 and 6 to 10 years of clinical experience reported more advocacy activity than those with more than 11 years of experience. A positive correlation was found between human rights sensitivity and advocacy activities; for this analysis, human rights sensitivity was divided into the subcategories of situation perception, consequence perception, and responsibility perception. Multiple regression analysis showed that advocacy activity increases with the responsibility-perception component of human rights sensitivity. Conclusion : Given the current lack of information, the collection and study of such baseline data is fundamental to developing practical human rights education programs and to strengthening the profession's role in defending human rights.
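
The analysis described above regresses advocacy activity on the three sensitivity subscales. A minimal sketch of such a multiple regression using statsmodels follows; the data are synthetic stand-ins, not the study's responses, and the effect built into the third column merely mimics the reported responsibility-perception finding.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic scores for 116 respondents: three sensitivity subscales
# (situation, consequence, responsibility) and an advocacy score.
rng = np.random.default_rng(0)
X = rng.normal(23, 6, size=(116, 3))                 # subscale scores
y = 10 + 0.8 * X[:, 2] + rng.normal(0, 5, 116)       # advocacy, driven by responsibility

# Ordinary least squares with an intercept term.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())   # the x3 coefficient plays the role of responsibility perception
```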

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.207-228
    • /
    • 2007
  • Measurement of Information Technology (IT) organizations' activities has long been limited mainly to financial indicators. However, as the functions of information systems have multiplied, a number of studies have examined new measurement methodologies that combine financial measures with new measurement methods. In particular, research on the IT Balanced Scorecard (BSC), a concept derived from the BSC for measuring IT activities, has been carried out in recent years. The BSC offers more than the mere integration of non-financial measures into a performance measurement system. Its core rests on the cause-and-effect relationships between measures, which allow prediction of value-chain performance, communication and realization of corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects in order to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy, and Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard. The cause-and-effect chain is equally central to the IT BSC. However, the relationship between information systems and enterprise strategies, as well as the connections among the various IT performance measurement indicators, have received little study. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little interest on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained by one theoretical and one practical reason [Blumenberg and Hinz, 2006]. From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical standpoint, modeling corporate causalities is a complex task owing to tedious data acquisition and subsequent reliability maintenance. Nevertheless, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance, whereas a performance measurement system like the BSC models the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in both theory and practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships among those CSFs. For this purpose, we define four BSC perspectives for IT organizations following Van Grembergen's study [2000]. The Future Orientation perspective represents the human and technology resources needed by IT to deliver its services. The Operational Excellence perspective represents the IT processes employed to develop and deliver the applications. The User Orientation perspective represents the user evaluation of IT.
The Business Contribution perspective captures the business value of the IT investments. Each of these perspectives must be translated into corresponding metrics and measures that assess the current situation. This study suggests 12 CSFs for the IT BSC based on previous IT BSC studies and COBIT 4.1; these CSFs comprise 51 KPIs. We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective, which in turn has positive effects on the User Orientation perspective, which finally has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling has properties that make it more appropriate than techniques such as multiple regression and LISREL when analyzing small sample sizes, and it has been gaining interest and use among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.
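
The hypothesized chain (Future Orientation → Operational Excellence → User Orientation → Business Contribution) can be illustrated with a deliberately simplified path estimation: composite latent scores are taken as standardized means of their indicators, and each path coefficient is an OLS slope between consecutive scores. This replaces PLS's iterative weighting with equal weights, so it is a sketch of the modeling idea, not a full PLS path algorithm; the indicator data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # illustrative sample size

def latent_score(indicators):
    """Composite latent score: standardized mean of standardized
    indicators (a simplification of PLS's iterative weighting)."""
    z = (indicators - indicators.mean(0)) / indicators.std(0)
    s = z.mean(1)
    return (s - s.mean()) / s.std()

# Synthetic indicator blocks for the four IT BSC perspectives,
# generated so each perspective depends on its predecessor.
fo = latent_score(rng.normal(size=(n, 3)))                 # Future Orientation
oe = latent_score(rng.normal(size=(n, 4)) + fo[:, None])   # Operational Excellence
uo = latent_score(rng.normal(size=(n, 3)) + oe[:, None])   # User Orientation
bc = latent_score(rng.normal(size=(n, 3)) + uo[:, None])   # Business Contribution

# Path coefficients along the hypothesized chain; with standardized
# scores each OLS slope equals the correlation between the two scores.
for name, x, y in [("FO->OE", fo, oe), ("OE->UO", oe, uo), ("UO->BC", uo, bc)]:
    beta = (x @ y) / (x @ x)
    print(f"{name}: beta = {beta:.2f}")
```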

GnRH Agonist Stimulation Test (GAST) for Prediction of Ovarian Response in Controlled Ovarian Hyperstimulation (COH) (난소기능평가를 위한 Gonadotropin Releasing Hormone Agonist Stimulation Test (GAST)의 효용성에 관한 연구)

  • Kim, Mee-Ran;Song, In-Ok;Yeon, Hye-Jeong;Choi, Bum-Chae;Paik, Eun-Chan;Koong, Mi-Kyoung;Song, Il-Pyo;Lee, Jin-Woo;Kang, Inn-Soo
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.26 no.2
    • /
    • pp.163-170
    • /
    • 1999
  • Objectives: The aims of this study were 1) to determine whether GAST is a better predictor of ovarian response to COH than patient age or basal FSH level, and 2) to evaluate its role in detecting abnormal ovarian response. Design: Prospective study of 118 patients undergoing IVF-ET with a GnRH agonist short protocol during May-September 1995. Materials and Methods: After blood sampling for basal FSH and estradiol (E2) on cycle day 2, 0.5 ml (0.525 mg) of GnRH agonist (Suprefact®, Hoechst) was injected subcutaneously, and serum E2 was measured 24 hours later. The initial E2 difference (ΔE2) was defined as the change in E2 on day 3 over the baseline day-2 value. Sixteen patients with an ovarian cyst, a single ovary, or an incorrect blood collection time were excluded from the analysis. The patients were divided into three groups by ΔE2: group A (n=30), ΔE2 < 40 pg/ml; group B (n=52), 40 pg/ml ≤ ΔE2 < 100 pg/ml; and group C (n=20), ΔE2 ≥ 100 pg/ml. COH was performed with GnRH agonist/HMG/hCG, followed by IVF-ET. The ratio of E2 on the day of hCG injection to the number of ampules of gonadotropins used (E2hCGday/Amp) was taken as ovarian responsiveness. Poor ovarian response and overstimulation were defined as E2hCGday less than 600 pg/ml and greater than 5,000 pg/ml, respectively. Results: Mean ages (±SEM) in groups A, B, and C were 33.7±0.8*, 31.5±0.6, and 30.6±0.5* years, respectively (*: p<0.05). The mean basal FSH level of group A (11.1±1.1 mIU/ml) was significantly higher than those of groups B (7.4±0.2 mIU/ml) and C (6.8±0.4 mIU/ml) (p<0.001). Mean E2hCGday in group A was significantly lower than in groups B and C: 1402.1±187.7, 3153.2±240.0, and 4078.8±306.4 pg/ml, respectively (p<0.0001). The number of ampules of gonadotropins used in group A was significantly greater than in groups B and C: 38.6±2.3, 24.2±1.1, and 18.5±1.0 (p<0.0001). The number of oocytes retrieved in group A was significantly smaller than in groups B and C: 6.4±1.1, 15.5±1.1, and 18.6±1.6, respectively (p<0.0001). By stepwise multiple regression, only ΔE2 showed a significant correlation with E2hCGday/Amp (r=0.68, p<0.0001); age and basal FSH level were not significant. Likewise, only ΔE2 correlated significantly with the number of oocytes retrieved (r=0.57, p<0.001). All four patients whose COH was canceled for poor ovarian response belonged to group A (Fisher's exact test, p<0.01), and whereas none of the 30 patients in group A (0%) was overstimulated, 14 of the 72 patients (19.4%) in groups B and C were (Fisher's exact test, p<0.01). Conclusions: These data suggest that the initial E2 difference after GAST may be a better prognostic indicator of ovarian response to COH than age or basal FSH level. Since the initial E2 difference is significantly associated with abnormal ovarian responses, such as poor response necessitating cycle cancellation or overstimulation, GAST may be helpful in the monitoring and counseling of patients during COH in IVF-ET cycles.
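
The ΔE2 grouping above reduces to two cutoffs; a small helper applying them is sketched below. The thresholds come from the abstract, but the comments merely summarize the reported group outcomes; this is an illustration, not a validated clinical decision rule.

```python
def gast_group(delta_e2_pg_ml: float) -> str:
    """Classify a patient by the initial E2 rise after GnRH agonist,
    using the ΔE2 cutoffs reported in the abstract."""
    if delta_e2_pg_ml < 40:
        return "A"   # group associated with poor response / cycle cancellation
    if delta_e2_pg_ml < 100:
        return "B"
    return "C"       # all overstimulation cases fell in groups B and C

for d in (25, 60, 150):
    print(f"ΔE2 = {d} pg/ml -> group {gast_group(d)}")
```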


Distributions of 137Cs and 90Sr in the Soil of Uljin, South Korea (울진토양에서의 137Cs 및 90Sr 분포)

  • Song, JiYeon;Kim, Wan;Maeng, Seongjin;Lee, Sang Hoon
    • Journal of Radiation Protection and Research
    • /
    • v.41 no.1
    • /
    • pp.49-55
    • /
    • 2016
  • Background: For the purpose of baseline data collection and enhancement of environmental monitoring, the distributions of 137Cs and 90Sr in the soil of Uljin province were studied, and the relation between surface-soil activities and soil properties (pH, TOC, and the median of the surface soil) was analyzed. Materials and Methods: Surface-soil samples were collected at 14 spots within 10 km of the NPP, and soils for depth profiles were sampled at 3 spots, in April 2011. The concentrations of 137Cs were determined using γ-ray spectrometry with an HPGe detector, and the concentrations of 90Sr were measured by counting the β-activity of 90Y (in equilibrium with 90Sr) in a gas-flow proportional counter. Results and Discussion: The concentration ranges of 137Cs and 90Sr were <0.479-39.6 Bq·(kg-dry)⁻¹ (avg. 7.51 Bq·(kg-dry)⁻¹) and 0.209-1.85 Bq·(kg-dry)⁻¹ (avg. 0.74 Bq·(kg-dry)⁻¹), similar to values reported from other regions of Korea. The activity ratio of 137Cs to 90Sr in surface soils was around 9.67, much larger than the initial value of 1.75 for worldwide fallout, because 90Sr moves downward faster after fallout than 137Cs does. For the depth-profile studies, soils were collected down to 40 cm at Deokgu, Hujeong, and Maehwa. At the first two locations, the 137Cs concentration was highest in the top soil and decreased rapidly in an exponential manner, while 90Sr showed two local maxima, one near the top and one at about 30 cm depth. From linear fits between the surface-soil concentrations of 137Cs and 90Sr and pH, TOC, and the median of the surface soil, the only probable relationship was between 137Cs and TOC (coefficient of determination R² = 0.6). Conclusion: The concentration ranges of 137Cs and 90Sr in Uljin were similar to values reported from other regions of Korea, and the only probable relationship between activities and soil properties was between 137Cs and TOC.
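
The activity-versus-soil-property regressions mentioned above are ordinary linear fits with an R² check; a minimal sketch follows. The 137Cs and TOC values are invented stand-ins within the reported concentration range, not the paper's measurements.

```python
import numpy as np

# Illustrative surface-soil data (not the paper's measurements):
# 137Cs activity in Bq per kg dry soil and total organic carbon in %.
cs137 = np.array([1.2, 3.4, 7.8, 12.5, 20.1, 39.6])
toc   = np.array([0.4, 0.9, 1.6, 2.4, 3.1, 5.2])

# Linear fit and coefficient of determination, as in the paper's
# activity-vs-soil-property analysis.
slope, intercept = np.polyfit(toc, cs137, 1)
pred = slope * toc + intercept
r2 = 1 - np.sum((cs137 - pred) ** 2) / np.sum((cs137 - cs137.mean()) ** 2)
print(f"137Cs ~ {slope:.2f} * TOC + {intercept:.2f}, R^2 = {r2:.2f}")
```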

Current Status and Actual Conditions of the Use of Occupational Therapy Evaluation Tools in Relation to the Type of Therapy Institution (국내 아동작업치료 기관별 평가도구 사용 현황 및 실태에 관한 연구)

  • Gil, Young-Suk;Yoo, Doo-Han
    • The Journal of Korean Academy of Sensory Integration
    • /
    • v.21 no.1
    • /
    • pp.47-58
    • /
    • 2023
  • Objective : This study aimed to investigate the current status and actual use of assessment tools by institutions in the field of pediatric occupational therapy in Korea. Methods : The study was conducted with 67 occupational therapists working with children in Korea. To investigate the use of evaluation tools by area, knowledge of the evaluation tools, and desire to participate in further education, the questionnaires used in the studies by Lee, Hong, and Park (2018) and Kim (2015) were modified and supplemented according to the child evaluation tools currently in use in institutions in Korea. For data collection, Google questionnaires were distributed to child occupational therapists for 3 weeks using convenience sampling. Excel was used to analyze the use of the evaluation tools by institution. Descriptive statistics and frequency analyses were used to examine general characteristics, evaluation-related information, the status of evaluation tool use, knowledge levels relating to evaluation tools, and desire to participate in education, and a t-test was used to compare evaluation tool use. Results : Welfare centers used the most evaluation tools, an average of 11.1, followed by university hospitals, rehabilitation hospitals, clinics, and daycare centers. The choice of tools differed by institution: in hospitals, the Jebsen-Taylor hand function test and the WeeFIM (Functional Independence Measure for Children) were applied most frequently, while in centers, daycare centers, and welfare centers the Sensory Profile test and clinical observation were also often used. Regarding knowledge of evaluation tools and the desire for further education, 30 respondents (44.8%) had not completed relevant education, and 42 (62.7%) rated their knowledge level as generally low. When asked about the importance of a manual to guide the use of evaluation tools, 66 (98.6%) answered positively, and 66 (98.6%) answered that they needed specialized training in the use of evaluation tools. Conclusion : This study makes it possible to understand the use and status of evaluation tools across different institutions in the field of pediatric occupational therapy in Korea. It is anticipated that it will provide a basis for introducing existing evaluation tools and for preparing new evaluation tools for use in this field in Korea.
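
The between-institution comparison above rests on a t-test; a minimal sketch with scipy is shown below. The group labels and counts are illustrative, not the study's raw data.

```python
from scipy import stats

# Illustrative numbers of evaluation tools used per therapist,
# grouped by institution type (not the study's raw data).
welfare_centers = [12, 10, 11, 13, 9, 12]
daycare_centers = [5, 6, 4, 7, 5, 6]

# Welch's t-test (no equal-variance assumption) comparing the groups.
t, p = stats.ttest_ind(welfare_centers, daycare_centers, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```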

Prediction on the Quality of Total Mixed Ration for Dairy Cows by Near Infrared Reflectance Spectroscopy (근적외선 분광법에 의한 국내 축우용 TMR의 성분추정)

  • Ki, Kwang-Seok;Kim, Sang-Bum;Lee, Hyun-June;Yang, Seung-Hak;Lee, Jae-Sik;Jin, Ze-Lin;Kim, Hyeon-Shup;Jeo, Joon-Mo;Koo, Jae-Yeon;Cho, Jong-Ku
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.29 no.3
    • /
    • pp.253-262
    • /
    • 2009
  • The present study was conducted to develop a rapid and accurate method for evaluating the chemical composition of total mixed ration (TMR) for dairy cows using near-infrared reflectance spectroscopy (NIRS). A total of 253 TMR samples were collected from TMR manufacturers and dairy farms in Korea. Prior to NIR analysis, the TMR samples were dried at 65°C for 48 hours and then ground to 2 mm. The samples were scanned at 2 nm intervals over the 400-2500 nm wavelength range on a FOSS NIRSystems Model 6500, and the values obtained by NIR analysis and by conventional chemical methods were compared. In general, the relationship between chemical analysis and NIR analysis was linear: R² and the standard error of calibration (SEC) were 0.701 (SEC 0.407), 0.965 (SEC 0.315), 0.796 (SEC 0.406), 0.889 (SEC 0.987), 0.894 (SEC 0.311), 0.933 (SEC 0.885), and 0.889 (SEC 1.490) for moisture, crude protein, ether extract, crude fiber, crude ash, acid detergent fiber (ADF), and neutral detergent fiber (NDF), respectively. In addition, the standard error of prediction (SEP) was 0.371, 0.290, 0.321, 0.380, 0.960, 0.859, and 1.446 for moisture, crude protein, ether extract, crude fiber, crude ash, ADF, and NDF, respectively. These results show that NIR analysis of unknown TMR samples is relatively accurate, and the developed NIR calibration curves can provide fast and reliable data on the chemical composition of TMR. Collection and analysis of more TMR samples will further increase the accuracy and precision of NIR analysis of TMR.
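
NIRS calibrations of this kind are commonly built with partial least squares regression and summarized by SEC and SEP. The sketch below uses scikit-learn's PLSRegression on synthetic stand-in spectra; SEC and SEP are taken here simply as calibration- and validation-set RMSE (exact definitions, e.g. bias correction and degrees of freedom, vary between NIRS papers), and nothing here is the paper's actual calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR spectra (absorbance at many wavelengths)
# and a reference chemical value such as crude protein (%).
rng = np.random.default_rng(2)
X = rng.normal(size=(253, 200))               # 253 samples, 200 wavelengths
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.3, 253)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - np.ravel(y_pred)) ** 2)))

print("SEC:", rmse(y_cal, pls.predict(X_cal)))   # error on calibration set
print("SEP:", rmse(y_val, pls.predict(X_val)))   # error on held-out set
```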

A Study about the Direction and Responsibility of the National Intelligence Agency to the Cyber Security Issues (사이버 안보에 대한 국가정보기구의 책무와 방향성에 대한 고찰)

  • Han, Hee-Won
    • Korean Security Journal
    • /
    • no.39
    • /
    • pp.319-353
    • /
    • 2014
  • Cyber-based technologies are now ubiquitous around the globe and are emerging as an "instrument of power" in societies, becoming more available to a country's opponents, who may use them to attack, degrade, and disrupt communications and the flow of information. The globe-spanning range of cyberspace and the absence of national borders challenge legal systems and complicate a nation's ability to deter threats and respond to contingencies. Through cyberspace, competing powers will target industry, academia, and government, as well as the military in the air, land, maritime, and space domains of our nations. Adversaries in cyberspace will include both states and non-state actors and will range from unsophisticated amateurs to highly trained professional hackers. In much the same way that airpower transformed the battlefield of World War II, cyberspace has fractured the physical barriers that shield a nation from attacks on its commerce and communication. Cyberthreats to infrastructure and other assets are a growing concern for policymakers. In 2013, cyberwarfare was, for the first time, considered a larger threat than Al Qaeda or terrorism by many U.S. intelligence officials, and the new United States military strategy makes explicit that a cyberattack is casus belli just as a traditional act of war is. The Economist describes cyberspace as "the fifth domain of warfare" and names China, Russia, Israel, and North Korea among the leading cyber powers, with Iran boasting of having the world's second-largest cyber-army. Entities posing a significant threat to the cybersecurity of critical infrastructure assets include cyberterrorists, cyberspies, cyberthieves, cyberwarriors, and cyberhacktivists. These malefactors may use cyber-based technologies to deny service, steal or manipulate data, or use a device to launch an attack against itself or another piece of equipment. However, because the Internet offers near-total anonymity, it is difficult to discern the identity, motives, and location of an intruder. The threats are directed not just at private industry but also at the country's heavily networked critical infrastructure. Many ongoing efforts in government and industry focus on making computers, the Internet, and related technologies more secure. As part of the national intelligence institutions' effort, cyber counterintelligence comprises measures to identify, penetrate, or neutralize foreign operations that use cyber means as the primary tradecraft methodology, as well as foreign intelligence service collection efforts that use traditional methods to gauge cyber capabilities and intentions. One of the hardest issues in cyber counterintelligence, however, is the problem of attribution: unlike in conventional warfare, figuring out who is behind an attack can be very difficult, even though Defense Secretary Leon Panetta has claimed that the United States has the capability to trace attacks back to their sources and hold the attackers "accountable". Considering all of these problems, this paper examines cyber security issues closely, drawing lessons from the U.S. experience. To that end, I review the cyber security issues arising from the changing global security environment of the 21st century and their implications for reshaping the government system, treating cyber security as one of the growing national security threats.
This article also reviews what our intelligence and security agencies should do in the transforming cyberspace. In any case, despite the heated debates over the various legality and human rights issues arising from cyberspace and intelligence service activity, national security must be secured. This paper therefore suggests that one of the most important and immediate steps is to understand the legal ideology of national security and national intelligence.
