Human Impact on the Environment of Highland Central Mexico during the Pre- and Post-Conquest (멕시코 중부 고산 지역에서 스페인 식민 통치 시기를 전후하여 일어난 인위적 환경 변화)

  • Park, Jung-Jae
    • Journal of the Korean Geographical Society
    • /
    • v.40 no.4 s.109
    • /
    • pp.428-440
    • /
    • 2005
  • There is currently no agreement among archaeologists, environmental historians, and paleoecologists as to the relative significance of pre- and post-Conquest human impact on the environments of Highland Mexico. This paper presents the results of pollen, microscopic charcoal, dung fungal spore, isotope, and magnetic susceptibility analyses on a ca. 4 m sediment core. The coring site is Hoya Rincon de Parangueo, one of the seven maar lakes in the Valle de Santiago. Amaranthaceae pollen, one of the important disturbance indicators, and Zea mays pollen clearly indicate two periods of agricultural activity. The first period begins ca. 400 B.C. and ends ca. A.D. 850; the second begins around A.D. 1550 and continues to the present. During the first period, the degree of agricultural activity was related to periodic sunspot cycles, and the most intense activity occurred between ca. A.D. 150 and ca. A.D. 400. The abrupt increase of $\delta^{18}O$ around 280 cm may reflect an important transition to a dry phase around A.D. 450. People probably stopped cultivating crops because of the dry conditions prevailing since ca. A.D. 450. The second period, the post-Conquest, exhibits a dramatic increase of Sporormiella, a dung fungal spore, resulting from the introduction of cattle. Low Poaceae frequency and charcoal production, together with high $\delta^{13}C$ values, magnetic susceptibility, and organic contents, all indicate the arrival of the Spanish. Most importantly, mesquite (Prosopis juliflora) appears to have benefited from the decline in fire frequency caused by cattle grazing. The study area is now entirely dominated by woody plants such as mesquite, which clearly demonstrates that serious vegetation change occurred there.

The Effects of Intention Inferences on Scarcity Effect: Moderating Effect of Scarcity Type, Scarcity Depth (소비자의 기업의도 추론이 희소성 효과에 미치는 영향: 수량한정 유형과 폭의 조절효과)

  • Park, Jong-Chul;Na, June-Hee
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.4
    • /
    • pp.195-215
    • /
    • 2008
  • Scarcity is a pervasive aspect of human life and a fundamental precondition of consumers' economic behavior. The scarcity message is also a powerful social-influence principle used by marketers to increase the subjective desirability of products. Because valuable objects are often scarce, consumers tend to infer that scarce objects are valuable. Marketers often base promotional appeals on the principle of scarcity to increase the subjective desirability of their products among consumers. In particular, advertisers and retailers often promote their products using restrictions. These restrictions act to constrain consumers' ability to take advantage of the promotion and can assume several forms. For example, some promotions are advertised as limited-time offers, while others limit the quantity that can be bought at the deal price by employing statements such as 'limit one per consumer,' 'limit 5 per customer,' or 'limited products for special commemoration celebration.' Some retailers use such statements extensively; a recent weekly flyer by a prominent retailer limited purchase quantities on 50% of the specials advertised on its front page. When consumers see these phrases, they often infer value from a product that has limited availability or is promoted as being scarce. However, past researchers explored only a direct relationship between purchase quantity or time limits and deal purchase intention, and did not explore the possibility that not all restriction messages are created equal. Namely, different restrictions may signal deal value in different ways or through different mechanisms. Consumers appear to perceive that time limits are used to attract consumers to the brand, while quantity limits are necessary to reduce stockpiling. This suggests other possible differences across restrictions. For example, quantity limits could imply product quality (i.e., this product at this price is so good that purchases must be limited).
In contrast, purchase preconditions force the consumer to spend a certain amount to qualify for the deal, which suggests that inferences about the absolute quality of the promoted item would decline from purchase limits (highest quality) to time limits to purchase preconditions (lowest quality). This might be expected to be particularly true for unfamiliar brands. However, a critical but elusive issue in scarcity message research is the impact of inferred motives on the promoted scarcity message. Past researchers have not explored the possibility of inferred motives in the scarcity message context. Despite the various types of quantity-limit messages, they did not separate scarcity messages among the quantity limits. Therefore, we apply a stricter definition of the scarcity message (i.e., quantity limits) and consider scarcity message type (general scarcity message vs. special scarcity message) and scarcity depth (high vs. low). The purpose of this study is to examine the effect of the scarcity message on the consumer's purchase intention. Specifically, we investigate the effect of general versus special scarcity messages on the consumer's purchase intention using the level of scarcity depth as a moderator. In other words, we postulate that scarcity message type and scarcity depth play an essential moderating role in the relationship between inferred motives and purchase intention. Different from past studies, we examine the interplay between perceived motives and scarcity type, and between perceived motives and scarcity depth. Both of these constructs have been examined in isolation, but a key question is whether they interact to produce an effect as scarcity message type or scarcity depth varies. The motive inferred behind the scarcity message will have an important impact on consumers' reactions to an increase in scarcity depth.
In relation to this general question, we investigate the following specific issues. First, do consumers' inferred motives weaken the positive relationship between a decrease in scarcity depth and consumers' purchase intention, and if so, how much do they attenuate this relationship? Second, we examine the interplay between the scarcity message type and consumers' purchase intention in the context of a decrease in scarcity depth. Third, we study whether scarcity message type and scarcity depth directly affect consumers' purchase intention. To answer these questions, this research uses a 2(intention inference: existence vs. nonexistence)${\times}2$(scarcity type: special vs. general)${\times}2$(scarcity depth: high vs. low) between-subjects design. The results are summarized as follows. First, intention inference (inferred motive) has no significant effect on the scarcity effect in the case of a special scarcity message; however, the nonexistence of intention inference is more effective than its existence on purchase intention in the case of general scarcity. Second, intention inference (inferred motive) has no significant effect on the scarcity effect in the case of low scarcity; however, the nonexistence of intention inference is more effective than its existence on purchase intention in the case of high scarcity. The results of this study will help managers understand the relative importance of the types of scarcity message and make decisions in using their scarcity messages. Finally, this article makes several contributions. First, we have shown that restrictions serve to activate a mental resource that is used to render a judgment regarding a promoted product. In the absence of other information, this resource appears to lead to an inference of value. In the presence of other value-related cues, however, whether data-based (i.e., scarcity depth: high vs. low) or concept-based (i.e., scarcity type: special vs. general), the resource is used in conjunction with the other cues as a basis for judgment, leading to different effects across levels of these other value-related cues. Second, our results suggest that a restriction can affect consumer behavior through four possible routes: 1) the affective route, by making consumers feel irritated; 2) the cognitive route, by making consumers infer motivation or attribution about the promoted scarcity message; 3) the economic route, by making the consumer lose an opportunity to stockpile at a low scarcity depth, or forcing him or her to make additional purchases; and lastly 4) the informative route, by changing what consumers believe about the transaction. Third, as noted already, these results suggest that we should consider consumers' inferences of motives or attributions for the scarcity depth level, and the cognitive resources available, in order to have a complete understanding of the effects of quantity restriction messages.

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages; the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking methods have played an essential role in the World Wide Web (WWW), and many people now recognize their effectiveness and efficiency. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure-based ranking method seems to be highly applicable to ranking Semantic Web resources.
However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only a recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, however, encompasses various kinds of classes and properties, and consequently, ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and presented experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in sufficient detail, for their algorithm to work properly.
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach entertained by previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which have not been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
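The authority/hub mechanics the abstract describes can be sketched in a few lines. This is a generic, minimal HITS implementation on a hypothetical toy graph, not the class-oriented algorithm the paper proposes:

```python
# A minimal sketch of Kleinberg's HITS algorithm (authority and hub scores),
# the link-analysis method the abstract contrasts with PageRank.
# The toy graph below is hypothetical, not taken from the paper.

def hits(edges, n_iter=50):
    """edges: list of (src, dst) pairs; returns (authority, hub) score dicts."""
    nodes = {u for e in edges for u in e}
    auth = {v: 1.0 for v in nodes}
    hub = {v: 1.0 for v in nodes}
    for _ in range(n_iter):
        # authority(v) = sum of hub scores of pages linking to v
        auth = {v: sum(hub[s] for s, d in edges if d == v) for v in nodes}
        # hub(v) = sum of authority scores of the pages v links to
        hub = {v: sum(auth[d] for s, d in edges if s == v) for v in nodes}
        # normalize (L2) so the scores stay bounded across iterations
        na = sum(x * x for x in auth.values()) ** 0.5
        nh = sum(x * x for x in hub.values()) ** 0.5
        auth = {v: x / na for v, x in auth.items()}
        hub = {v: x / nh for v, x in hub.items()}
    return auth, hub

# 'c' is pointed to by every other page, so it emerges as the top authority
edges = [("a", "c"), ("b", "c"), ("d", "c"), ("a", "b"), ("d", "b")]
auth, hub = hits(edges)
```

On this graph the iteration converges quickly; a page referred to by many strong hubs accumulates authority, exactly the mutual reinforcement the abstract describes.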

A Study on the Risk Factors for Maternal and Child Health Care Program with Emphasis on Developing the Risk Score System (모자건강관리를 위한 위험요인별 감별평점분류기준 개발에 관한 연구)

  • 이광옥
    • Journal of Korean Academy of Nursing
    • /
    • v.13 no.1
    • /
    • pp.7-21
    • /
    • 1983
  • For the flexible and rational distribution of limited existing health resources based on measurements of individual risk, the so-called Risk Approach is proposed by the World Health Organization as a managerial tool in maternal and child health care programs. This approach, in principle, requires developing a technique by which we can measure the degree of risk, or discriminate the future outcomes of pregnancy, on the basis of prior information obtainable at prenatal care delivery settings. Numerous recent studies have focused on the identification of relevant risk factors as the prior information and on defining the adverse outcomes of pregnancy to be discriminated, and have also tried to develop scoring systems of risk factors for the quantitative assessment of the factors as determinants of pregnancy outcomes. Once the scoring system is established, a technique for classifying patients into those with normal and those with adverse outcomes can be easily developed. The scoring system should be developed to meet the following four basic requirements: 1) easy to construct; 2) easy to use; 3) theoretically sound; 4) valid. In searching for a feasible methodology which meets these requirements, the author has attempted to apply the “Likelihood Method,” one of the well-known principles in statistical analysis, to develop such a scoring system according to the following process. Step 1. Classify the patients into four groups: Group $A_1$: with adverse outcomes on the fetal (neonatal) side only; Group $A_2$: with adverse outcomes on the maternal side only; Group $A_3$: with adverse outcomes on both maternal and fetal (neonatal) sides; Group B: with normal outcomes. Step 2. Construct the marginal tabulation of the distribution of risk factors for each group. Step 3. For the calculation of risk scores, take the logarithmic transformation of the relative proportions of the distribution and round them off to integers. Step 4.
Test the validity of the score chart. A total of 2,282 maternity records registered during the period January 1, 1982-December 31, 1982 at Ewha Womans University Hospital were used for this study, and the “Questionnaire for Maternity Record for Prenatal and Intrapartum High Risk Screening” developed by the Korean Institute for Population and Health was used to rearrange the information on the records into an easily analyzable form. The findings of the study are summarized as follows. 1) The risk score chart constructed on the basis of the “Likelihood Method” is presented in Table 4 of the main text. 2) From the analysis of the risk score chart, a total of 24 risk factors were identified as having significant predictive power for discriminating pregnancy outcomes into the four groups defined above: (1) age; (2) marital status; (3) age at first pregnancy; (4) medical insurance; (5) number of pregnancies; (6) history of Cesarean sections; (7) number of living children; (8) history of premature infants; (9) history of overweight newborns; (10) history of congenital anomalies; (11) history of multiple pregnancies; (12) history of abnormal presentation; (13) history of obstetric abnormalities; (14) past illness; (15) hemoglobin level; (16) blood pressure; (17) heart status; (18) general appearance; (19) edema status; (20) result of abdominal examination; (21) cervix status; (22) pelvis status; (23) chief complaints; (24) reasons for examination. 3) The validity of the score chart turned out to be as follows: a) sensitivity: Group $A_1$: 0.75; Group $A_2$: 0.78; Group $A_3$: 0.92; all combined: 0.85; b) specificity: 0.68. 4) The diagnosabilities of the score chart for a set of hypothetical prevalences of adverse outcomes were calculated as follows (the sensitivity “for all combined” was used): at hypothetical prevalences of 5%, 10%, 20%, 30%, 40%, 50%, and 60%, the diagnosability was 12%, 23%, 40%, 53%, 64%, 75%, and 80%, respectively.
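The "diagnosability" figures in item 4) behave like the positive predictive value computed from the reported sensitivity (0.85, all groups combined) and specificity (0.68). A minimal sketch, under the assumption that diagnosability means PPV (the abstract does not state the formula explicitly):

```python
# Sketch: "diagnosability" as positive predictive value (PPV) computed from
# the reported sensitivity (0.85, all groups combined) and specificity (0.68).
# Treating diagnosability as PPV is an interpretation, not stated in the abstract.

def diagnosability(prevalence, sensitivity=0.85, specificity=0.68):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Reproduces the abstract's figures closely, e.g. about 12% at 5% prevalence
# and about 40% at 20% prevalence.
for p in (0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60):
    print(f"{p:.0%} -> {diagnosability(p):.0%}")
```

This also shows why diagnosability rises with prevalence even though sensitivity and specificity are fixed: at low prevalence, false positives from the large healthy group dominate.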

Effect of the Autumnal Cutting Times on the Regrowth, Accumulation of Carbohydrate and Dry Matter Yield of Italian Ryegrass (Lolium multiflorum) (Italian ryegrass의 추계예취시기가 목초의 재생, 탄수화물축적 및 건물수량에 미치는 영향)

  • 안계수
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.5 no.1
    • /
    • pp.13-21
    • /
    • 1985
  • This experiment was carried out to study the effect of autumnal cutting times on the regrowth, accumulated carbohydrate, and dry matter yield of Italian ryegrass. The results are summarized as follows: 1. In dry matter yield, the earlier-cutting plot showed the highest yield (p<0.05), and the last-cutting plot showed a lower dry matter yield than the no-cutting plot. 2. TSC (total water-soluble carbohydrate) content decreased slightly after the first cutting, gradually increased with regrowth, and then decreased again toward the second cutting time. The TSC content levels of stubble, stem, and leaf at one week before the temperature fell below zero were all highest in the earlier-cutting plot (p<0.01), and there was a significant correlation between the TSC content level and the second-harvest dry matter yield (p<0.05). 3. CGR (crop growth rate) decreased below $8^{\circ}C$. RLGR (relative leaf area growth rate) and NAR (net assimilation rate) were both high during the first 30 days after regrowth and low thereafter in all plots. LAI (leaf area index) increased rapidly during the first 50 days after cutting and then slowly in all plots, with a maximum LAI of 3.4-5.8. Dry matter yield also increased in the plots with a high LAI up to 70 days after cutting. 4. There were significant correlations between TSC, LAI, CGR, NAR, LWR (leaf weight ratio) and the second-harvest dry matter yield during the low-temperature periods, and the degree of contribution to dry matter yield was in the order LWR>LAI>TSC>NAR>CGR.

The Effects of the Heavy and Chemical Industry Policy of the 1970s on the Capital Efficiency and Export Competitiveness of Korean Manufacturing Industries (1970년대(年代) 중화학공업정책(重化學工業政策)이 자본효율성(資本效率性)과 수출경쟁력(輸出競爭力)에 미친 영향(影響))

  • Yoo, Jung-ho
    • KDI Journal of Economic Policy
    • /
    • v.13 no.1
    • /
    • pp.65-113
    • /
    • 1991
  • Korea's rapid economic growth over the past thirty years was led by extremely fast export growth under extensive government intervention. Until very recently, the political regimes were authoritarian and oppressed human rights and labor movements. Because of these characteristics, many inside and outside Korea are under the impression that the rapid economic growth was made possible by the government's relentless push for export growth through industrial targeting. Whether or not government intervention was pivotal in Korean economic growth is an important issue because of its normative implications for the role of government and the degree of economic policy intervention in a market economy. A good example of industrial targeting policy in Korea is the "Heavy and Chemical Industry (HCI)" policy, which began in the early 1970s and lasted for a decade. Under the HCI policy the government intervened in resource allocation through preferential tax, trade, credit, and interest rate policies for "key industries," which included iron and steel, non-ferrous metals, shipbuilding, general machinery, chemicals, and electronics. This paper investigates the effects of the HCI policy on the efficiency of capital and the export competitiveness of manufacturing industries. For individual three-digit KSIC (Korea Standard Industrial Classification) industries and for two industry groups, one favored by the HCI policy and the other not, this paper: (1) computes capital intensities and discusses the impact of the HCI policy on changes in the intensities over time, (2) estimates capital efficiencies and examines them on the basis of the optimal condition of resource allocation, and (3) compares the Korean and Taiwanese shares of total imports by the OECD countries as a way of weighing the effects of the policy on the industries' export competitiveness.
Taiwan is a good reference, as it did not adopt the kind of industrial targeting policy that Korea did, while the Taiwanese and Korean economies share similar characteristics. In the 1973-78 period, the capital intensity rose rapidly for the "HC Group," the group of industries favored by the policy, while it first declined and later showed an anemic rise for the "Light Group," the remaining manufacturing industries. Capital efficiency was much lower in the HC Group than in the Light Group, at least until the late 1970s. This paper ascribes these results to excess investment in the favored industries and concludes that growth could have been faster in the absence of the HCI policy. The Korean Light Group's share in total imports by the OECD was larger than that of its Taiwanese counterpart but has become much smaller since 1978. For the HC Group, Korea's market share was smaller than Taiwan's and has declined even more since the mid-1970s. This weakening in the export competitiveness of Korea's industries relative to Taiwan's lasted until the mid-1980s. This paper concludes that the HCI policy had either no positive effect or negative effects on the competitiveness of the Korean manufacturing industries.

Developing Fire-Danger Rating Model (산림화재예측(山林火災豫測) Model의 개발(開發)을 위(爲)한 연구(硏究))

  • Han, Sang Yeol;Choi, Kwan
    • Journal of Korean Society of Forest Science
    • /
    • v.80 no.3
    • /
    • pp.257-264
    • /
    • 1991
  • Korea accomplished the afforestation of its forest land in the early 1980s. To meet the increasing demand for forest products and forest recreation, the development of a scientific forest management system is needed as a whole, and for this purpose an efficient forest fire management system is essential. In this context, the purpose of this study is to develop a theoretical foundation for a forest fire danger rating system. In this study, it is hypothesized that the degree of forest fire risk is affected by a Weather Factor and a Man-Caused Risk Factor. (1) To accommodate the Weather Factor, a statistical model was estimated in which weather variables such as humidity, temperature, precipitation, wind velocity, and duration of sunshine were included as independent variables and the probability of forest fire occurrence as the dependent variable. (2) To account for man-caused risk, historical data on forest fire occurrence were investigated. The contribution of man's activities to risk was evaluated from three inputs. The first, the potential risk class, is a semipermanent number which ranks the man-caused fire potential of an individual protection unit relative to that of the other protection units. The second, the risk source ratio, is the portion of the potential man-caused fire problem which can be charged to a specific cause. The third, the daily activity level, is the fire control officer's estimate of how active each of these sources is. For each risk source, its daily activity level is evaluated; the resulting number is the partial risk factor. The partial risk factors, one for each source, are summed to obtain the unnormalized Man-Caused Risk, and this sum and the unit's potential risk class are considered together to make up the Man-Caused Risk. (3) Finally, the Weather Factor and the Man-Caused Risk Index were integrated to form the final Fire Occurrence Index.
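The man-caused side of the rating reduces to simple arithmetic over the three inputs. A minimal sketch, assuming (the abstract implies but never states a formula) that each partial risk factor is the product of a source's risk ratio and its daily activity level, and that the summed partial factors are scaled by the unit's potential risk class:

```python
# Hedged sketch of the man-caused risk aggregation described in the abstract.
# The combination rules and all numbers below are illustrative assumptions,
# not values or formulas taken from the study.

def man_caused_risk(potential_risk_class, sources):
    """sources: list of (risk_source_ratio, daily_activity_level) pairs.

    Each partial risk factor = source ratio x activity level; their sum is
    the unnormalized man-caused risk, here scaled by the unit's potential
    risk class to combine the two inputs (the scaling is an assumption)."""
    partial_factors = [ratio * activity for ratio, activity in sources]
    unnormalized = sum(partial_factors)
    return potential_risk_class * unnormalized

# e.g. a hypothetical protection unit with risk class 3 and two fire causes
risk = man_caused_risk(3, [(0.6, 2.0), (0.4, 1.0)])  # 3 * (1.2 + 0.4) = 4.8
```

The Weather Factor would then be combined with this value (for instance as a weighted sum) to produce the Fire Occurrence Index; the abstract leaves that integration step unspecified.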

Effect of Temperature on Hatchability of Overwintering Eggs and Nymphal Development of Pochazia shantungensis (Hemiptera: Ricaniidae) (갈색날개매미충(Pochazia shantungensis) 월동알 부화와 약충 발육에 미치는 온도의 영향)

  • Choi, Duck-Soo;Ko, Sug-Ju;Ma, Kyeong-Cheul;Kim, Hyo-Jeong;Lee, Jin-Hee;Kim, Do-Ik
    • Korean journal of applied entomology
    • /
    • v.55 no.4
    • /
    • pp.453-457
    • /
    • 2016
  • This study investigates the hatching periods and hatchability of the eggs of Pochazia shantungensis at different collection times from 2011 to 2014, and the effect of temperature on the growth of P. shantungensis nymphs in an area of its outbreak. The hatchability of P. shantungensis eggs varies with collection time; hatchability in late November was higher than that in March of the next year, but no difference was observed in the hatching periods. The hatching periods of the eggs were 51.2, 31.3, 24.8, 19.4, 17.1, and 19.4 days at 15, 18, 21, 24, 27, and $30^{\circ}C$, respectively. The hatchability was above 70% at temperatures ranging from 18 to $27^{\circ}C$. The hatching time of the overwintered eggs in the Gurye region of Korea was reduced by 9 days from 2011 to 2014, and the hatching rate was relatively higher when the average winter temperature was relatively warmer. The developmental periods of the first to fifth nymphs were 82.8, 58.0, 45.8, and 39.6 days at 18, 21, 24, and $27^{\circ}C$, respectively, at a relative humidity of 40~70% and a photoperiod of 14 h light:10 h dark. The higher the temperature, the shorter the developmental period. At $30^{\circ}C$, all life stages after the fourth nymph died; thus, the optimum growth temperature was estimated to be $27^{\circ}C$. For all life stages from the egg to the fourth nymph, the relationship between temperature and developmental rate was expressed by the linear equation Y = 0.0015X - 0.014. The lower developmental threshold was $9.3^{\circ}C$ and the effective cumulative temperature was 693.3 degree-days. The lower developmental threshold of approximately $3.8^{\circ}C$ at the fourth nymph stage was the lowest.
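The threshold arithmetic follows directly from the reported regression. A small sketch of the standard degree-day calculations; note that 1/slope of the pooled equation gives about 667 degree-days, slightly below the reported 693.3, which presumably comes from stage-specific fits:

```python
# Sketch: lower developmental threshold and thermal constant derived from the
# paper's pooled linear model Y = 0.0015*X - 0.014
# (Y: developmental rate in 1/days, X: temperature in deg C).

slope, intercept = 0.0015, -0.014

# The rate is zero at the lower developmental threshold: X0 = -intercept / slope
lower_threshold = -intercept / slope   # about 9.3 deg C, matching the abstract

# The thermal constant (degree-days to complete development) is 1 / slope
thermal_constant = 1.0 / slope         # about 667 degree-days from this equation

def degree_days(daily_mean_temps, threshold):
    """Accumulated degree-days above the threshold for a run of daily means."""
    return sum(max(t - threshold, 0.0) for t in daily_mean_temps)
```

Usage: summing `degree_days` over daily mean temperatures until the thermal constant is reached gives a simple phenology forecast for hatch or stage completion.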

Four-year change and tracking of serum lipids in Korean adolescents (강화지역 청소년의 4년간 혈청 지질의 변화와 지속성)

  • Lee, Kang-Hee;Suh, Il;Jee, Sun-Ha;Nam, Chung-Mo;Kim, Sung-Soon;Shim, Won-Heum;Ha, Jong-Won;Kim, Suk-Il;Kang, Hyung-Gon
    • Journal of Preventive Medicine and Public Health
    • /
    • v.30 no.1 s.56
    • /
    • pp.45-59
    • /
    • 1997
  • It is known that there is a tracking phenomenon in serum lipid levels; however, no study has examined the change and tracking of serum lipids in Korean adolescents. The purpose of this study is to examine the changes in serum lipids in Korean adolescents from 12 to 16 years of age, and to examine whether there is a tracking phenomenon in serum lipid levels during this period. In 1992, serum lipids (total cholesterol (TC), triglyceride (TG), LDL cholesterol (LDL-C), HDL cholesterol (HDL-C)) were measured in 318 males and 365 females who were 12 years of age in Kangwha county, Korea. These participants were followed up to 1996, and serum lipid levels were examined in 1994 and 1996. Among the participants, 162 males and 147 females completed all three examinations in a fasting state. To examine the effect of eliminating adolescents with incomplete data, we compared serum lipids, blood pressure, and anthropometric measures at baseline between adolescents with complete follow-up and adolescents who were withdrawn. To examine the change in serum lipids, we compared mean serum lipid values according to age in males and females; repeated-measures analysis of variance was used to test the change with age. We used three methods to examine the existence of tracking. First, we analyzed the trends in serum lipids over the 4-year period within quartile groups formed on the basis of the first-year serum lipid level, to see whether or not the relative ranking of the mean serum lipids among the quartile groups remained the same over the 4-year period. Second, we quantified the degree of tracking by calculating Spearman's rank correlation coefficient between every pair of tests. Third, the persistence of extreme quartiles method was used; this method divides the population into quartile groups according to the initial blood lipid level and then calculates the percentage of subjects who stayed in the same group at the follow-up measurement.
Decreases in levels were noted over the 4 years for TC and LDL-C, primarily for boys. The level of HDL-C decreased between baseline and the first follow-up for both sexes. Tracking, as measured by both correlation coefficients and persistence of extreme quartiles, was evident for all of the lipids. The correlation coefficients of TC between baseline and 4 years later were 0.55 in boys and 0.68 in girls, and the corresponding values for HDL-C were 0.58 and 0.69. More than 50% of adolescents who belonged to the highest quartile group in TC, HDL-C, and LDL-C at baseline remained in the same group at the examination performed 2 years later, for both sexes, and the probabilities of remaining in the same group were more than 35% when examined 4 years later. The tracking phenomenon of TG was less evident than for the other lipids; the percentages of girls who stayed in the same group 2 years later and 4 years later were 42.9% and 25.7%, respectively. It was evident that serum lipid levels track in Korean adolescents. Research with longer follow-up is needed to investigate the long-term change in lipids from adolescence to adulthood.
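The two tracking measures are easy to state concretely. A minimal sketch on hypothetical data, computing Spearman's coefficient as the Pearson correlation of ranks (assuming no ties) and "persistence" as the share of baseline top-quartile subjects still in the top quartile at follow-up:

```python
# Sketch of the two tracking measures used in the study: Spearman rank
# correlation between two measurements, and persistence in the extreme
# (top) quartile. The data below are hypothetical, not the study's.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    """Pearson correlation of ranks; assumes no tied values."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n - 1) / 2.0
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # both rank vectors share this variance
    return cov / var

def quartiles(xs):
    """Assign each subject a quartile 0..3 by rank."""
    n = len(xs)
    return [min(4 * r // n, 3) for r in ranks(xs)]

def top_quartile_persistence(baseline, followup):
    """Fraction of baseline top-quartile subjects still in the top quartile."""
    qb, qf = quartiles(baseline), quartiles(followup)
    top = [i for i, q in enumerate(qb) if q == 3]
    return sum(1 for i in top if qf[i] == 3) / len(top)

baseline = [170, 155, 190, 160, 200, 150, 180, 165]  # e.g. TC at age 12
followup = [175, 150, 195, 170, 190, 155, 185, 160]  # same subjects, age 16
rho = spearman(baseline, followup)
persist = top_quartile_persistence(baseline, followup)
```

High values of both measures on real data, as reported in the abstract, are what "tracking" means operationally: subjects keep their relative rank over time.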

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition accelerates dramatically and the complexity of change grows, a variety of studies have been conducted on improving firms' short-term performance and enhancing their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. Discovery of promising technologies depends on how a firm evaluates the value of technologies, and thus many evaluation methods have been proposed. Approaches based on expert opinion have been widely used to predict the value of technologies. While this approach provides in-depth analysis and ensures the validity of analysis results, it is usually costly and time-consuming and is limited to qualitative evaluation. Considerable research has attempted to forecast the value of technology using patent information, to overcome the limitations of expert-opinion-based approaches. Patent-based technology evaluation has served as a valuable approach to technological forecasting because a patent contains a full and practical description of a technology in a uniform structure; furthermore, it provides information that is not divulged in any other source. Although the patent-information-based approach has contributed to our understanding of how to predict promising technologies, it has some limitations: prediction is made from past patent information alone, and the interpretation of patent analyses is not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of technology promise, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion.
The impact of a technology refers to its influence on the development and improvement of future technologies, and is also closely associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, …) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, the study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising-technology index. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, although for the remaining indexes the mean absolute error of the proposed methodology is slightly higher.
These unexpected results may be explained, in part, by the small number of patents: since the study uses only patent data in class G06F, the sample is relatively small, leading to incomplete learning for the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technologies. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial neural network. It helps managers who plan technology development, and policy makers who implement technology policy, by providing a quantitative prediction methodology. It may also give other researchers a deeper understanding of the complex field of technological forecasting.
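The second and third modules, lagged index values fed into a backpropagation network whose predicted indexes are then combined with AHP weights, might be sketched as below. The five-index time series and the AHP weights here are synthetic placeholders, not values derived from the study's G06F patent data, and the single-hidden-layer network is only one plausible reading of the described architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time series of the five indexes (impact, fusion/technology,
# fusion/patent, diffusion/technology, diffusion/patent) -- placeholder data.
T, n_idx, n_lags = 40, 5, 3
series = 1.0 + np.cumsum(rng.normal(0, 0.1, size=(T, n_idx)), axis=0)

# Supervised pairs: index values at t-n_lags .. t-1 predict the values at t.
X = np.stack([series[t - n_lags:t].ravel() for t in range(n_lags, T)])
y = series[n_lags:]

# One hidden layer trained with plain backpropagation (batch gradient descent).
W1 = rng.normal(0, 0.1, (X.shape[1], 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, n_idx));      b2 = np.zeros(n_idx)

def forward(inp):
    return np.tanh(inp @ W1 + b1) @ W2 + b2

mse_before = float(((forward(X) - y) ** 2).mean())
lr = 0.01
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    err = (h @ W2 + b2) - y             # prediction error
    gW2 = h.T @ err / len(X);  gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # backpropagate through tanh
    gW1 = X.T @ dh / len(X);   gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
mse_after = float(((forward(X) - y) ** 2).mean())

# AHP-style weights (hypothetical, summing to 1) turn the five predicted
# indexes into a single "promising technology" score.
ahp_weights = np.array([0.35, 0.25, 0.10, 0.20, 0.10])
next_indexes = forward(series[-n_lags:].ravel())
promising_score = float(next_indexes @ ahp_weights)
print(mse_after < mse_before, promising_score)
```

In the study itself the AHP weights would be derived from pairwise comparisons of the five indexes rather than fixed by hand, and the per-index mean absolute error of such a network is what gets compared against the multiple regression baseline.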