
The Impact of Bladder Volume on Acute Urinary Toxicity during Radiation Therapy for Prostate Cancer (전립선암의 방사선치료시 방광 부피가 비뇨기계 부작용에 미치는 영향)

  • Lee, Ji-Hae;Suh, Hyun-Suk;Lee, Kyung-Ja;Lee, Re-Na;Kim, Myung-Soo
    • Radiation Oncology Journal / v.26 no.4 / pp.237-246 / 2008
  • Purpose: Three-dimensional conformal radiation therapy (3DCRT) and intensity-modulated radiation therapy (IMRT) were found to reduce the incidence of acute and late rectal toxicity compared with conventional radiation therapy (RT), although acute and late urinary toxicities were not reduced significantly. Acute urinary toxicity, even at a low grade, not only has an impact on a patient's quality of life, but can also be used as a predictor of chronic urinary toxicity. With bladder filling, part of the bladder moves away from the radiation field, resulting in a smaller irradiated bladder volume; hence, urinary toxicity can be decreased. The purpose of this study is to evaluate the impact of bladder volume on acute urinary toxicity during RT in patients with prostate cancer. Materials and Methods: Forty-two patients diagnosed with prostate cancer were treated with 3DCRT; of these, 21 patients made up a control group treated without any instruction to control the bladder volume. The remaining 21 patients in the experimental group were treated with a full bladder after drinking 450 mL of water an hour before treatment. We measured the bladder volume by CT and ultrasound at simulation to validate the accuracy of ultrasound. During the treatment period, we measured the bladder volume of the experimental group weekly by ultrasound to evaluate the variation of the bladder volume. Results: A significant correlation between the bladder volume measured by CT and by ultrasound was observed. The bladder volume in the experimental group varied from patient to patient despite drinking the same amount of water. Although weekly variations of the bladder volume were very high, larger initial CT volumes were associated with larger mean weekly bladder volumes. The mean bladder volume was 299±155 mL in the experimental group, as opposed to 187±155 mL in the control group. Patients in the experimental group experienced fewer acute urinary toxicities than those in the control group, but the difference was not statistically significant. A trend of reduced toxicity was observed with increasing CT bladder volume. In patients with bladder volumes greater than 150 mL at simulation, toxicity rates of all grades were significantly lower than in patients with bladder volumes less than 150 mL. Also, patients with a mean bladder volume larger than 100 mL during treatment showed a slightly reduced Grade 1 urinary toxicity rate compared to patients with a mean bladder volume smaller than 100 mL. Conclusion: Despite the large variability in bladder volume during the treatment period, treating patients with a full bladder reduced acute urinary toxicities in patients with prostate cancer. We recommend that patients with prostate cancer undergo treatment with a full bladder.
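The study validates ultrasound volumes against CT and then compares toxicity between the full-bladder and control groups; as a rough illustration of that kind of analysis (the test choices and all numbers below are assumptions, not the paper's data), a minimal sketch in Python:

```python
# Hedged sketch: agreement between CT and ultrasound bladder volumes, and a
# group comparison of toxicity grades. All values are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired bladder volumes (mL) at simulation
ct_volume = rng.normal(250, 80, size=21)
us_volume = ct_volume + rng.normal(0, 25, size=21)   # ultrasound estimate of the same bladders

r, p = stats.pearsonr(ct_volume, us_volume)
print(f"CT vs. ultrasound correlation: r={r:.2f}, p={p:.3f}")

# Hypothetical acute urinary toxicity grades (0-3) per group
control_grades = rng.integers(0, 4, size=21)
full_bladder_grades = rng.integers(0, 3, size=21)

# Ordinal grades, so a rank-based test is one reasonable (assumed) choice
u, p_u = stats.mannwhitneyu(full_bladder_grades, control_grades, alternative="less")
print(f"Mann-Whitney U test: U={u:.0f}, p={p_u:.3f}")
```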

The Variation of Natural Population of Pinus densiflora S. et Z. in Korea (V) -Characteristics of Needle and Wood of Injye, Jeongsun, Samchuk Populations- (소나무 천연집단(天然集團)의 변이(變異)에 관(關)한 연구(硏究)(V) -인제(麟蹄), 정선(旌善), 삼척집단(三陟集團)의 침엽(針葉) 및 재질형질(材質形質)-)

  • Yim, Kyong Bin;Kwon, Ki Won;Lee, Kyong Jae
    • Journal of Korean Society of Forest Science / v.36 no.1 / pp.9-25 / 1977
  • As a successive work in the variation studies of natural Pinus densiflora stands, some characteristics of individual trees of three natural populations selected from Kangwon Province, the middle-east part of the Korean peninsula, as shown in the location map, were investigated, and the statistical differences between individuals within populations and between populations were analysed. Twenty trees from each population were selected for this purpose; trees lagging in growth, usually showing poorer form, were excluded. The results obtained are summarized as follows: 1. Though the average population ages ranged between 50 and 63, the growth in height or diameter was similar. Population No. 9 is, however, considered to have better tree forms at a glance. Population No. 8 showed the highest value not only in the clear-stem-length ratio, 0.53, but also in the crown index, 0.91. The higher value can result from trees having long lateral branches and relatively short crown height, meaning an undesirable crown shape. In regard to fine branchedness and acuteness of branching angle, population No. 9 is considered the better one, whereas there was almost no difference in crown height among populations. 2. Checking the frequency distributions of the ratio of clear-stem height to total height and the crown indices, some differences between populations are apparent. These might be attributed to previous stand management practices which altered stand density. 3. In serration density, with an average of 54 per 1 cm of needle length, significant differences exist between individual trees within populations but not between populations. A few trees with extremely high serration density were observed. Similar tendencies were found in the number of stomata rows and resin ducts. 4. Population No. 8 had the highest resin duct index value, 0.074, which was two to three times that of the other populations. 5. The patterns of the increasing process of the average 10-year ring segment were not similar until 30 years of age, but beyond this the tendency lines converged. 6. Regarding the average summerwood ratio, there was no difference between populations, but the ranges differed, i.e. 23 to 30 in population No. 8 and 16 to 36 in population No. 9. With regard to the specific gravity of wood, hardly any differences between populations were observed, even in the range values. Specific gravity increased with tree age, but the increasing patterns were not similar between populations. 7. No significant differences between populations in average tracheid length or its range were detected. However, tracheid length increased with age, and the increasing pattern was the same between populations.


The micro-tensile bond strength of two-step self-etch adhesive to ground enamel with and without prior acid-etching (산부식 전처리에 따른 2단계 자가부식 접착제의 연마 법랑질에 대한 미세인장결합강도)

  • Kim, You-Lee;Kim, Jee-Hwan;Shim, June-Sung;Kim, Kwang-Mahn;Lee, Keun-Woo
    • The Journal of Korean Academy of Prosthodontics / v.46 no.2 / pp.148-156 / 2008
  • Statement of problem: Self-etch adhesives exhibit some clinical benefits such as ease of manipulation and reduced technique sensitivity. Nevertheless, some concern remains regarding the bonding effectiveness of self-etch adhesives to enamel, in particular when so-called 'mild' self-etch adhesives are employed. This study compared the microtensile bond strengths to ground enamel of the two-step self-etch adhesive Clearfil SE Bond (Kuraray) to those of the three-step etch-and-rinse adhesive Scotchbond Multi-Purpose (3M ESPE) and the one-step self-etch adhesive iBond (Heraeus Kulzer). Purpose: The purpose of this study was to determine the effect of a preceding phosphoric acid conditioning step on the bonding effectiveness of a two-step self-etch adhesive to ground enamel. Material and methods: The two-step self-etch adhesive Clearfil SE Bond without etching (non-etch group), Clearfil SE Bond with prior 35% phosphoric acid etching (etch group), and the one-step self-etch adhesive iBond were used as experimental groups. The three-step etch-and-rinse adhesive Scotchbond Multi-Purpose was used as the control group. The facial surfaces of bovine incisors were divided cruciformly into four equal parts and randomly distributed into each group. The facial surface of each incisor was ground with 800-grit silicon carbide paper. Each adhesive was applied to the ground enamel according to the manufacturer's instructions, after which the surface was built up using Light-Core (Bisco). After storage in distilled water at 37°C for 1 week, the restored teeth were sectioned into enamel beams approximately 0.8 × 0.8 mm in cross section using a low-speed precision diamond saw (TOPMET Metsaw-LS). After storage in distilled water at 37°C for 1 month and 3 months, microtensile bond strength evaluations were performed on the microspecimens. The microtensile bond strength (MPa) was derived by dividing the imposed force (N) at the time of fracture by the bond area (mm²). The mode of failure at the interface was determined with a binocular microscope (Nikon). The microtensile bond strength data were statistically analyzed using a one-way ANOVA followed by the Least Significant Difference post hoc test at a significance level of 5%. Results: The mean microtensile bond strength after 1 month of storage showed no statistically significant difference between the adhesive groups (P>0.05). After 3 months of storage, adhesion to ground enamel of iBond was not significantly different from that of Clearfil SE Bond etch (P>0.05), while Clearfil SE Bond non-etch and Scotchbond Multi-Purpose demonstrated significantly lower bond strengths (P<0.05), with no significant difference between these two adhesives. Conclusion: In this study the microtensile bond strength to ground enamel of the two-step self-etch adhesive Clearfil SE Bond was not significantly different from that of the three-step etch-and-rinse adhesive Scotchbond Multi-Purpose, and prior etching with 35% phosphoric acid significantly increased the bonding effectiveness of Clearfil SE Bond to enamel at 3 months.
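The bond-strength calculation quoted above reduces to fracture load divided by bonded cross-sectional area (1 N/mm² = 1 MPa, so no unit conversion is needed); a minimal sketch with illustrative numbers, not the study's measurements:

```python
# Microtensile bond strength: stress (MPa) = force at fracture (N) / bond area (mm^2).
def microtensile_bond_strength(force_n: float, width_mm: float, depth_mm: float) -> float:
    area_mm2 = width_mm * depth_mm
    return force_n / area_mm2

# Example: a 0.8 x 0.8 mm beam failing at 20 N -> 31.25 MPa (values are illustrative only)
print(microtensile_bond_strength(20.0, 0.8, 0.8))
```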

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to other pre-defined shapes. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant memory waste can also be registered. It is indeed possible that for each of the given fuzzy sets many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and it will be represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element; the fuzzy-set word dimension would be 8 × 5 bits, and the dimension of the memory would therefore have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
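As a check on the word-length arithmetic above, a minimal sketch that reproduces the 128 × 24-bit versus 128 × 40-bit comparison for the example term set (the helper function and its names are ours, not the paper's):

```python
import math

def word_length(n_fm: int, truth_levels: int, n_fuzzy_sets: int) -> int:
    """Bits per memory row in the proposed scheme: n_fm non-null entries,
    each storing a membership value plus the index of its fuzzy set."""
    dm_value = math.ceil(math.log2(truth_levels))   # bits for a membership value (5 for 32 levels)
    dm_index = math.ceil(math.log2(n_fuzzy_sets))   # bits for the fuzzy-set index (3 for 8 sets)
    return n_fm * (dm_value + dm_index)

universe, sets, levels, nfm = 128, 8, 32, 3

proposed = universe * word_length(nfm, levels, sets)              # 128 * 24 = 3072 bits
full_vectorial = universe * sets * math.ceil(math.log2(levels))   # 128 * 40 = 5120 bits
print(proposed, full_vectorial)
```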


Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.93-111 / 2013
  • As Internet and information technology (IT) continue to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by existing conventional information systems, and it also refers to the new technologies designed to effectively extract value from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate, while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data that are currently receiving the most attention include information available within companies, such as information on consumer characteristics, purchase records, logistics information and log information indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as information on the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first field consists of empirical demonstrations that show how web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, etc. The other field focuses on using web search traffic information to observe consumer behavior, for example by identifying the attributes of a product that consumers regard as important or by tracking changes in consumers' expectations, but relatively less research has been completed in this field. In particular, to the best of our knowledge, hardly any studies related to brands have yet attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword for the search, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand.
Web search traffic information shows that the quantity of simultaneous searches using certain keywords increases as the relation between those keywords becomes closer in the consumer's mind; by collecting this relational data and subjecting it to network analysis, it is possible to derive the relations between the keywords. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, which belong to an innovative product group.
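A minimal sketch of the network-analysis step described above, assuming an invented table of simultaneous-search volumes between tablet brands and attribute keywords rather than actual Google Trends data:

```python
# Hedged sketch: build a co-search network and read off brand positions.
# The brand/attribute pairs and edge weights below are invented for illustration.
import networkx as nx

co_search_volume = {
    ("iPad", "Galaxy Tab"): 80,
    ("iPad", "display"): 55,
    ("iPad", "price"): 30,
    ("Galaxy Tab", "price"): 60,
    ("Galaxy Tab", "pen"): 45,
    ("Surface", "keyboard"): 40,
    ("Surface", "iPad"): 25,
}

G = nx.Graph()
for (a, b), w in co_search_volume.items():
    G.add_edge(a, b, weight=w)

# Weighted degree: which keywords sit "closest" to each brand overall
centrality = dict(G.degree(weight="weight"))
print(sorted(centrality.items(), key=lambda kv: -kv[1]))

# A 2-D spring layout gives a simple positioning map (closer = more co-searched)
positions = nx.spring_layout(G, weight="weight", seed=42)
for node, (x, y) in positions.items():
    print(f"{node:12s} ({x:+.2f}, {y:+.2f})")
```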

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.95-118 / 2017
  • Recently, centered on the downtown area, transactions of row housing and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row housing and multiplex housing, however, remain a blind spot for real estate information, creating a social problem due to changes in market size and the information asymmetry that follows changes in demand. In addition, the 5 or 25 districts used by the Seoul Metropolitan Government or the Korea Appraisal Board (hereafter, KAB) were established along administrative boundaries and have been used in existing real estate studies. Because these districts were zoned for urban planning, they are not an appropriate district classification for real estate research. Based on the existing studies, this study found that the spatial structure of Seoul needs to be reset when estimating future housing prices. Therefore, this study attempted to classify areas without spatial heterogeneity by reflecting the property price characteristics of row housing and multiplex housing. In other words, the simple division by existing administrative districts has produced inefficiencies, so this study aims to cluster Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to real transaction price data of row housing and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. This study used data on real transaction prices of Seoul row housing and multiplex housing from January 2014 to December 2016, together with the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing followed these procedures: removal of underground transactions, price standardization per area, and removal of outlying transaction cases (above 5 and below -5). Through this preprocessing, the data were reduced from 132,707 cases to 126,759 cases. The R program was used as the data analysis tool. After data preprocessing, the data model was constructed: first, K-Means clustering was performed, and then a regression analysis using the hedonic model and a cosine similarity analysis were conducted. Based on the constructed data model, we clustered Seoul on the basis of longitude and latitude and conducted a comparative analysis with the existing areas. The results of this study indicated that the goodness of fit of the model was above 75% and that the variables used for the hedonic model were significant. In other words, the existing 5 or 25 administrative districts were reorganized into 16 districts. Thus, this study derived a clustering method for row housing and multiplex housing in Seoul using the K-Means clustering algorithm and a hedonic model reflecting property price characteristics. Moreover, academic and practical implications were presented, along with the limitations of this study and directions for future research. The academic implication is that areas were clustered by reflecting property price characteristics in order to improve on the districts used by the Seoul Metropolitan Government, the KAB, and existing real estate research. Another academic implication is that, whereas apartments have been the main subject of existing real estate research, this study proposes a method of classifying areas in Seoul using public information (i.e., real transaction data from MOLIT) under Government 3.0.
The practical implication is that the results can be used as basic data for real estate research on row housing and multiplex housing. Other practical implications are that research on row housing and multiplex housing is expected to be activated, and that the accuracy of models based on actual transactions is expected to increase. The future research direction of this study involves conducting various analyses to overcome the limitations of the current thresholds and indicates the need for deeper research.
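A minimal sketch of the two modeling steps described above (spatial K-Means followed by a hedonic regression), using synthetic data in place of the MOLIT transaction records; the 16-cluster count mirrors the abstract, while the column names and every value are assumptions:

```python
# Hedged sketch: K-Means on coordinates to form sub-markets, then a hedonic
# price regression with cluster dummies. All data below are synthetic.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "longitude": rng.uniform(126.8, 127.2, n),
    "latitude": rng.uniform(37.45, 37.70, n),
    "area_m2": rng.uniform(20, 85, n),
    "building_age": rng.integers(0, 30, n),
    "official_land_value": rng.uniform(1.0, 8.0, n),   # hypothetical scale
})
df["price_per_m2"] = (
    5.0 + 0.8 * df["official_land_value"] - 0.05 * df["building_age"]
    + rng.normal(0, 0.5, n)
)

# Step 1: cluster the city into 16 sub-markets from coordinates only
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0)
df["cluster"] = kmeans.fit_predict(df[["longitude", "latitude"]])

# Step 2: hedonic regression with cluster dummies instead of administrative districts
X = pd.get_dummies(df[["area_m2", "building_age", "official_land_value", "cluster"]],
                   columns=["cluster"], drop_first=True)
model = LinearRegression().fit(X, df["price_per_m2"])
print("R^2:", model.score(X, df["price_per_m2"]))
```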

Studies on the Biochemical Features of Soybean Seeds for Higher Protein Variety -With Emphasis on Accumulation during Maturation and Electrophoretic Patterns of Proteins- (고단백 대두 품종 육성을 위한 종실의 생화학적 특성에 관한 연구 -단백질의 축적과 전기영동 유형을 중심으로)

  • Jong-Suk Lee
    • KOREAN JOURNAL OF CROP SCIENCE / v.22 no.1 / pp.135-166 / 1977
  • Some biochemical features of varietal variation in seed protein and their implications for soybean breeding for high protein were pursued employing 86 soybean varieties of Korean, Japanese, and U.S. origin. Also studied comparatively was the temporal pattern of protein component accumulation during seed development characteristic of the high-protein variety. Seed protein content of the 86 soybean varieties varied from 34.4 to 50.6%. The non-existence of varieties having high contents of both protein and oil, or high protein content with average oil content, as well as the high negative correlation between protein and oil content (r = -0.73**), strongly indicate a great difficulty in breeding a high-protein variety while conserving oil content. The total content of essential amino acids varied from 32.82 to 36.63% and the total content of sulfur-containing amino acids varied from 2.09 to 2.73%, as tested for 12 varieties with protein contents ranging from 40.0 to 50.6%. The content of methionine was positively correlated with the content of glutamic acid, which was the major amino acid (18.5%) in soybean seed protein. In particular, the varieties Bongeui and Saikai #20 had high protein content as well as high content of sulfur-containing amino acids. The content of lysine was negatively correlated with that of isoleucine, but positively correlated with protein content. The content of alanine, valine or leucine was positively correlated with oil content. The seed protein of soybean was composed of 12 to 16 components depending on variety, as revealed by disc acrylamide gel electrophoresis. The 86 varieties were classified into 11 groups of characteristic electrophoretic pattern. The protein component of Rm=0.14 (b) showed the greatest varietal variation among the components in relative content, and a negative correlation with the content of the other components, while the protein component of Rm=0.06 (a) had a significant positive correlation with protein content. There were sequential phases of rapid decrease, slow increase and plateau in the protein content during seed development. A shorter period and lower rate of decrease followed by a longer period and higher rate of increase in protein content during seed development were characteristic of the high-protein variety, together with earlier and continuous development at a higher rate of the protein component a. Considering the extremely low methionine content of the protein component a, breeding for high protein content may result in lower quality of soybean protein.


Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, the use of deep learning is being considered as a method for solving problems in various fields. In particular, deep learning is known to have excellent performance when applied to unstructured data such as text, sound and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to be able to process both image and text data, image captioning has established itself as one of the key fields in A.I. research owing to its wide applicability. In addition, much research has been conducted to improve the performance of image captioning in various aspects. Recent studies attempt to create advanced captions that not only describe an image accurately, but also convey the information contained in the image more sophisticatedly. Despite many recent efforts to improve the performance of image captioning, it is difficult to find any research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person who encounters the image. Moreover, the way of interpreting and expressing the image also differs according to the level of expertise. The public tends to recognize the image from a holistic and general perspective, that is, from the perspective of identifying the image's constituent objects and their relationships. On the contrary, domain experts tend to recognize the image by focusing on the specific elements necessary to interpret the given image based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate captions specialized for each domain by utilizing the expertise of experts in the corresponding domain. Specifically, after performing pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, a simple application of transfer learning using expertise data may invoke another type of problem. Simultaneous learning with captions of various characteristics may invoke a so-called 'inter-observation interference' problem, which makes it difficult to perform pure learning of each characteristic point of view. When learning with a vast amount of data, most of this interference is self-purified and has little impact on learning results. On the contrary, in the case of fine-tuning, where learning is performed on a small amount of data, the impact of such interference on learning can be relatively large. To solve this problem, therefore, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.
In order to confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of images and expertise captions were created, and this data was used for the experiments on expertise transplantation. As a result of the experiments, it was confirmed that the captions generated according to the proposed methodology are written from the perspective of the implanted expertise, whereas the captions generated through learning on general data contain a number of contents irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation. To achieve this goal, we present a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that much research will be actively conducted to solve the problem of the lack of expertise data and to improve the performance of image captioning.
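A minimal sketch of the transplant idea described above: keep an encoder trained on general data frozen and fine-tune only the caption decoder on a small expertise set. The model sizes, dummy tensors, and training schedule are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of expertise transplant via transfer learning (PyTorch).
import torch
import torch.nn as nn
from torchvision.models import resnet18

VOCAB, EMBED, HIDDEN, MAXLEN = 1000, 128, 256, 12

class CaptionDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.init_h = nn.Linear(512, HIDDEN)          # 512 = resnet18 feature size
        self.gru = nn.GRU(EMBED, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, feats, tokens):
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)
        x, _ = self.gru(self.embed(tokens), h0)
        return self.out(x)

encoder = resnet18(weights=None)          # in practice: weights pre-trained on general data
encoder.fc = nn.Identity()
for p in encoder.parameters():            # freeze the general-purpose encoder
    p.requires_grad = False

decoder = CaptionDecoder()                # in practice: initialized from general pre-training
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy stand-in for a small set of (image, expert caption) pairs
images = torch.randn(8, 3, 224, 224)
captions = torch.randint(0, VOCAB, (8, MAXLEN))

for step in range(3):                     # fine-tune only the decoder on expertise data
    feats = encoder(images)
    logits = decoder(feats, captions[:, :-1])
    loss = loss_fn(logits.reshape(-1, VOCAB), captions[:, 1:].reshape(-1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```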

Development of a Traffic Accident Prediction Model and Determination of the Risk Level at Signalized Intersection (신호교차로에서의 사고예측모형개발 및 위험수준결정 연구)

  • 홍정열;도철웅
    • Journal of Korean Society of Transportation / v.20 no.7 / pp.155-166 / 2002
  • Since the 1990s, there has been an increasing number of traffic accidents at intersections, which calls for more urgent measures to ensure intersection safety. This study set out to analyze the road conditions, traffic conditions and traffic operation conditions at signalized intersections, to identify the elements that impair safety, and to develop a traffic accident prediction model to evaluate the safety of an intersection using the correlation between these elements and accidents. In addition, in developing a traffic accident prediction model for a signalized intersection, the focus was on suggesting appropriate traffic safety policies by dealing with hazardous elements in advance and on enhancing intersection safety. The data for the study were collected at intersections located in Wonju city from January to December 2001 and consisted of the number of accidents, the road conditions, the traffic conditions, and the traffic operation conditions at each intersection. The collected data were first statistically analyzed, and the results identified the elements that had close correlations with accidents: the area pattern, the use of land, bus stopping activities, parking and stopping activities on the road, the total volume, the turning volume, the number of lanes, the width of the road, the intersection area, the signal cycle, the sight distance, and the turning radius. These elements were used in a second correlation analysis. All of them were significant at the 95% confidence level or higher, and there were few correlations between the independent variables. The variables that affected the accident rate were the number of lanes, the turning radius, the sight distance and the cycle, which were used to develop a traffic accident prediction model formula considering their distribution. The model formula was compared with a general linear regression model in terms of accuracy. In addition, domestic accident statistics were investigated to analyze the distribution of accidents and to classify intersections according to risk level. Finally, the Spearman rank correlation coefficient was applied to the results to see whether the model was appropriate. As a result, the coefficient of determination was highly significant, with a value of 0.985, and the ranking of the intersections according to risk level was appropriate as well. The actual number of accidents and the predicted number were compared in terms of risk level, and they were about the same in risk level for 80% of the intersections.
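The final validation step above compares the risk ranking implied by the model with the ranking from actual accident counts; a minimal sketch with invented counts for a handful of intersections, not the Wonju data:

```python
# Hedged sketch: rank agreement between predicted and observed accident counts.
from scipy.stats import spearmanr

observed = [14, 9, 21, 5, 12, 17, 3, 8]            # accidents per intersection (hypothetical)
predicted = [13.2, 9.8, 19.5, 4.1, 11.0, 18.3, 3.6, 7.4]

rho, p = spearmanr(observed, predicted)
print(f"Spearman rank correlation: rho={rho:.3f}, p={p:.4f}")
```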

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces the optimal asset allocation portfolio for investors by using financial engineering algorithms without any human intervention. Since its first introduction on Wall Street in 2008, the market size has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocation output to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize the risk of the portfolio while maximizing its expected return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions in which only a few assets receive allocations are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results; therefore, incorrect views combined with implied equilibrium returns may produce very poor portfolio output for users of the Black-Litterman model. This paper suggests an objective investor view model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane. The linear, radial basis and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastics %K and the price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral and up. The expected stock returns form the P matrix and their probabilities are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used. The value-weighted market portfolio and the equal-weighted market portfolio are used as benchmark indexes. We collected 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018.
Our suggested intelligent view model combined with the implied equilibrium returns produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio and the market portfolios. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value, and the maximum drawdown was -20.8%, the lowest value. The Sharpe ratio, which measures the return-to-risk ratio, was also the highest at 0.17. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
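A minimal sketch of the Black-Litterman combination described above, with invented covariance, market weights, and a single view standing in for the SVM-generated P and Q matrices:

```python
# Hedged sketch of the Black-Litterman posterior returns and the resulting
# unconstrained mean-variance weights. All inputs are invented for illustration.
import numpy as np

np.random.seed(0)
n = 4                                     # e.g., 4 sector indexes
A = np.random.randn(n, n) * 0.02
sigma = A @ A.T + np.eye(n) * 1e-4        # asset covariance matrix
w_mkt = np.array([0.4, 0.3, 0.2, 0.1])    # market-cap weights
delta, tau = 2.5, 0.05                    # risk aversion, uncertainty scaling

pi = delta * sigma @ w_mkt                # implied equilibrium returns (reverse optimization)

# One view (stand-in for the SVM output): asset 0 outperforms asset 1 by 2%
P = np.array([[1.0, -1.0, 0.0, 0.0]])
Q = np.array([0.02])
omega = np.diag(np.diag(P @ (tau * sigma) @ P.T))   # view uncertainty

# Posterior mean: [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
ts_inv = np.linalg.inv(tau * sigma)
om_inv = np.linalg.inv(omega)
post_mean = np.linalg.inv(ts_inv + P.T @ om_inv @ P) @ (ts_inv @ pi + P.T @ om_inv @ Q)

w_bl = np.linalg.inv(delta * sigma) @ post_mean     # unconstrained mean-variance weights
print("posterior returns:", np.round(post_mean, 4))
print("BL weights (unnormalized):", np.round(w_bl, 3))
```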