• Title/Summary/Keyword: experimental work


A Numerical and Experimental Study for Fry-drying of Various Sludge (슬러지 유중 건조에 대한 전산 해석 및 실험적 연구)

  • Shin, Mi-Soo;Kim, Hey-Suk;Kim, Byeong-Gap;Hwang, Min-Jeong;Jang, Dong-Soon;Ohm, Tae-In
    • Journal of Korean Society of Environmental Engineers / v.32 no.4 / pp.341-348 / 2010
  • The basic principle of the fry-drying process for sludge lies in the rapid pressure change within the sludge caused by the temperature difference between the oil and the moisture, which arises from their difference in specific heat. The rapid pressure rise in the drying sludge drives efficient moisture escape through the sludge pores toward the heating oil medium. The objective of this study is a systematic investigation of how the various parameters of the sludge fry-drying process influence drying efficiency. To this end, a series of parametric experiments was carried out, together with numerical calculations, to obtain typical drying curves as functions of important parameters such as drying temperature, sludge diameter, oil type, and sludge type. Regarding frying temperature, operation above $140^{\circ}C$ was found favorable for drying efficiency regardless of the type of waste oil employed in this study. The numerical calculations showed the same result consistently: sludge particles dried efficiently above $140^{\circ}C$ irrespective of particle diameter. As expected, decreasing the sludge diameter improved drying in both experiment and calculation, owing to the increased surface area per unit volume. In the investigation of oil type and properties, the viscosity of the waste oil proved the most influential property for drying performance. In particular, with a high-viscosity oil, a visible time delay in moisture evaporation was noticed, especially in the early stage of drying. However, the effect of high viscosity decreased significantly above $140^{\circ}C$. No notable difference was observed between sludge types, although sewage sludge showed slightly better efficiency. The numerical model is considered a useful tool for assisting the experiments; more detailed empirical modeling is planned as further work.
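The qualitative trends this abstract reports (faster drying above a temperature threshold, faster drying for smaller particles) can be illustrated with a minimal lumped-parameter sketch. The first-order moisture-loss model below, its rate-constant form, and all coefficients are illustrative assumptions, not the paper's numerical model.

```python
def drying_curve(oil_temp_c, diameter_mm, m0=0.8, k0=0.002, t_end=600, dt=1.0):
    """Return a list of (time_s, moisture_fraction) samples.

    Assumed model: drying rate proportional to the oil superheat above
    100 degrees C and to the surface-area-to-volume ratio (~ 1/diameter).
    """
    k = k0 * max(oil_temp_c - 100.0, 0.0) / diameter_mm  # assumed rate constant
    m, t, curve = m0, 0.0, []
    while t <= t_end:
        curve.append((t, m))
        m = max(m - k * m * dt, 0.0)  # first-order moisture loss, Euler step
        t += dt
    return curve

# Qualitative checks echoing the abstract's trends:
hot = drying_curve(160, 10)[-1][1]    # hotter oil -> drier
cool = drying_curve(120, 10)[-1][1]
small = drying_curve(160, 5)[-1][1]   # smaller particle -> drier
```

With these toy parameters the 160-degree run ends far drier than the 120-degree run, and halving the diameter dries the particle further still, mirroring the reported parametric behavior.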

Effect of Reaction Factors on the Properties of Complex Oxide Powder Produced by Spray Roasting Process (분무배소법에 의해 생성되는 복합산화물 분말들의 특성에 미치는 반응인자들의 영향)

  • 유재근;이성수;박희범;안주삼;남용현;손진군
    • Resources Recycling / v.9 no.4 / pp.16-27 / 2000
  • To produce raw-material powder for advanced magnetic materials by a spray roasting process, a newly modified spray roasting system was developed in this work. In this system, the raw-material solution was effectively atomized and sprayed into the reaction furnace. A uniform temperature distribution inside the furnace allowed the thermal decomposition to complete fully, and the produced powder was effectively collected in a cyclone and bag filter. The system was also equipped with an apparatus to purify the hazardous product gas. In this study, a complex acid solution was prepared by dissolving mill scale and ferro-Mn in acid, and its pH was controlled to about 4. It was confirmed that mill scale and ferro-Mn containing many impurities, such as $SiO_2$, P, and Al, could be used as raw materials by reducing the impurity content of the complex acid solution below 20 ppm. Complex oxide powder of the Fe-Mn system was produced by spraying the purified solution into the spray roaster through a nozzle, and the variation of the powder characteristics was studied as the reaction conditions were changed: reaction temperature, injection velocities of solution and air, nozzle tip size, and solution concentration. The produced powder was spherical under most experimental conditions, and its composition and particle-size distribution were almost uniform, which attests to the excellence of this spray roasting system. The grain size of most of the produced powder was below 100 nm. From these results, it should be possible to produce ultra-fine oxide powder from the chlorides of Fe, Mn, Ni, Cu, and rare earths using this system, and also to produce ultra-fine pure metal powder by changing the reaction atmosphere.


The Influence of Loyalty Program on the Effect of Customer Retention: Focused on Education Service Industry (고객보상 프로그램이 고객 유지에 미치는 효과: 교육 서비스 산업을 중심으로)

  • Jeon, Hoseong
    • Asia Marketing Journal / v.13 no.3 / pp.25-53 / 2011
  • This study probes the effect of a loyalty program on customer retention based on real transaction data (n=2,892) acquired from the education service industry. We examine the outcomes of the reward program using more than one year of data gathered and analyzed under a quasi-experimental (before-and-after) design. We adopted this research scheme because previous studies measured the effect of loyalty programs by dividing customers into two groups (members vs. non-members) after the firms or stores had started the program, which cannot avoid self-selection bias. The research questions of this study are as follows. First, most research holds that loyalty programs increase customer loyalty and contribute to the sustainable growth of the company, but there is little confirmation that this promotional tool is justified in financial terms. Thus, we are interested in both the retention rate and the financial outcomes caused by introducing a loyalty program. Second, reward programs mainly target current customers. CRM (customer relationship management) research in particular argues that it is more profitable for a company to build positive relationships with current customers than to pursue new ones, and claims that reward programs are an excellent means to achieve this goal. We therefore check whether there is an interaction effect between the loyalty program and customer type in retaining customers. Third, dissatisfied customers are said to be more likely to leave the company than satisfied ones. Bolton, Kannan, and Bramlett (2000) claimed that reward programs can minimize the effect of negative service experiences by building an emotional link with the customer, but this has not been empirically confirmed; in this view, loyalty programs may work as an exit barrier for current customers. Thus, this study also examines whether there is an interaction effect between the loyalty program and service experience in keeping customers. For this purpose, the study adopts both Kaplan-Meier survival analysis and the Cox proportional hazards model. The results show that the average retention period was 179 days before introducing the loyalty program but increased to 227 days after rewards were given to customers. Since this difference is statistically significant, H1 is supported. In addition, the contribution margin from the increased transaction period exceeds the cost of administering the loyalty program. To address the other research questions, we probe the interaction between the loyalty program and the other factors (customer type and service experience). The Cox proportional hazards analysis shows that current customers are more likely to maintain their relationship with the company than new customers. In addition, the retention rate of satisfied customers increased significantly relative to dissatisfied ones. Interestingly, the transaction period of dissatisfied customers increased notably after the loyalty program was introduced. Thus, H2, H3, and H4 are also supported. In summary, we find that loyalty programs have value as a promotional tool for forming positive relationships with customers and building exit barriers.
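The Kaplan-Meier estimator this study applies to retention periods can be sketched in a few lines of pure Python. The toy cohort below is made up for demonstration and is not the study's transaction data; right-censoring here stands in for customers who were still active at the end of observation.

```python
def kaplan_meier(durations, observed):
    """Return [(time, survival_probability)] at each observed event time.

    durations: days each customer stayed; observed: True if the customer
    actually churned at that time (False = censored, still active).
    """
    event_times = sorted({t for t, o in zip(durations, observed) if o})
    s, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)
        churned = sum(1 for d, o in zip(durations, observed) if d == t and o)
        s *= 1.0 - churned / at_risk   # product-limit update
        curve.append((t, s))
    return curve

# Toy cohort: retention in days, with two censored (still-active) customers.
durations = [100, 150, 150, 200, 250, 300]
observed  = [True, True, False, True, False, True]
curve = kaplan_meier(durations, observed)
```

Comparing two such curves, estimated before and after the program's introduction, is exactly the kind of retention-period contrast the abstract reports (179 vs. 227 days).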


Cardio-pulmonary Adaptation to Physical Training (운동훈련(運動訓練)에 대(對)한 심폐기능(心肺機能)의 적응(適應)에 관(關)한 연구(硏究))

  • Cho, Kang-Ha
    • The Korean Journal of Physiology / v.1 no.1 / pp.103-120 / 1967
  • As pointed out by many previous investigators, the cardio-pulmonary system of well-trained athletes is so adapted that they can perform a given physical exercise more efficiently than non-trained persons. However, the time course of the development of these cardio-pulmonary adaptations has not been extensively studied. Although the development of these training effects is undoubtedly related to the magnitude of the exercise load that is repeatedly given, it would be practical if one could maintain good physical fitness with minimal daily exercise. Hence, the present investigation was undertaken to study the time course of the development of cardio-pulmonary adaptations while a group of non-athletes was subjected to a daily 6- to 10-minute running exercise for a period of 4 weeks. Six healthy male medical students (22 to 24 years old) were randomly selected as experimental subjects and equally divided into two groups (A and B). Both groups were subjected to the same daily running exercise (approximately 1,000 kg-m), 6 days a week for 4 weeks, but at different rates: group A ran on a treadmill with an 8.6% grade for 10 min daily at a speed of 127 m/min, while group B ran for 6 min at a speed of 200 m/min. To assess the effects of this physical training on the cardio-pulmonary system, the minute volume, $O_2$ consumption, $CO_2$ output, and heart rate were determined weekly while the subject was engaged in a given running exercise on the treadmill (8.6% grade and 127 m/min) for a period of 5 min. In addition, the arterial blood pressure, cardiac output, acid-base state of arterial blood, and gas composition of arterial blood were determined every other week in 4 subjects (2 from each group) while they exercised on a bicycle ergometer at approximately 900 kg-m/min until exhaustion. The maximal work capacity was also determined by asking the subjects to exercise on the treadmill and ergometer until exhaustion. For the measurement of minute volume, the expired gas was collected in a Douglas bag; the $O_2$ consumption and $CO_2$ output were subsequently computed by analysing the expired gas with a Scholander micro gas analyzer. The heart rate was calculated from the R-R interval of ECG tracings recorded by an Offner RS Dynograph. A 19-gauge Cournand needle was inserted into a brachial artery, through which arterial blood samples were taken. A Statham $P_{23}AA$ pressure transducer and a PR-7 Research Recorder were used for recording instantaneous arterial pressure. The cardiac output was measured by the indicator (Cardiogreen) dilution method. The results may be summarized as follows: (1) The maximal running time on the treadmill increased linearly during the 4-week training period, by the end of which it had increased by 2.8 to 4.6 times. In general, the increase in maximal running time was greater when the speed was fixed at the level at which the subject had trained. The maximal exercise time on the bicycle ergometer also increased linearly during the training period. (2) In carrying out a given running exercise on the treadmill (8.6% grade, 127 m/min), the following changes in cardio-pulmonary function were observed during the training period: (a) The minute volume as well as the $O_2$ consumption during steady-state exercise tended to decrease progressively and showed significant reductions after 3 weeks of training. (b) The $CO_2$ output during steady-state exercise showed a significant reduction within 1 week of training. (c) The heart rate during steady-state exercise tended to decrease progressively and showed a significant reduction after 2 weeks of training. The reduction of heart rate following a given exercise tended to become faster with training and showed a significant change after 3 weeks. Although the resting heart rate also tended to decrease with training, no significant change was observed. (3) In carrying out a given exercise (900 kg-m/min) on a bicycle ergometer, the following changes in cardiovascular function were observed during the training period: (a) The systolic blood pressure during steady-state exercise was not affected, while the diastolic blood pressure was significantly lowered after 4 weeks of training. The resting diastolic pressure was also significantly lowered by the end of 4 weeks. (b) The cardiac output and the stroke volume during steady-state exercise increased maximally within 2 weeks of training. However, the resting cardiac output was not altered, while the resting stroke volume tended to increase somewhat with training. (c) The total peripheral resistance during steady-state exercise was greatly lowered within 2 weeks of training. The mean circulation time during exercise was also considerably shortened, while the left-heart work output during exercise increased significantly within 2 weeks. However, these functions at rest were not altered by training. (d) Although the pH, $P_{CO_2}$, and $(HCO_3^-)$ of arterial plasma decreased during exercise, the magnitude of the reductions became smaller with training. On the other hand, the $O_2$ content of arterial blood decreased during exercise before training, while it tended to increase slightly after training. There was no significant alteration in these values at rest. These results indicate that cardio-pulmonary adaptations to physical training can be acquired by subjecting non-athletes to a brief daily exercise routine for a certain period of time. Although the various adaptive phenomena do not appear at identical times, it may be stated that one has to engage in a daily exercise routine for at least 2 weeks for significant adaptive changes to develop.
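The derivation of $O_2$ consumption and $CO_2$ output from Douglas-bag gas analysis can be sketched with the standard Haldane (nitrogen-balance) transformation. The function name and the gas fractions below are illustrative textbook values, not the study's measurements.

```python
def gas_exchange(ve_expired, feo2, feco2, fio2=0.2093, fico2=0.0003):
    """Return (VO2, VCO2) in the same volume units as ve_expired (e.g. L/min).

    Inspired volume is recovered from the expired volume via the nitrogen
    balance (N2 is neither consumed nor produced), then O2 uptake and CO2
    output follow from the in-minus-out gas volumes.
    """
    fen2 = 1.0 - feo2 - feco2              # expired N2 fraction
    fin2 = 1.0 - fio2 - fico2              # inspired N2 fraction
    vi = ve_expired * fen2 / fin2          # inspired volume via N2 balance
    vo2 = vi * fio2 - ve_expired * feo2    # O2 in minus O2 out
    vco2 = ve_expired * feco2 - vi * fico2 # CO2 out minus CO2 in
    return vo2, vco2

# Typical heavy-exercise values: 60 L/min ventilation, 16.5% O2, 4.5% CO2 expired.
vo2, vco2 = gas_exchange(ve_expired=60.0, feo2=0.165, feco2=0.045)
```

With these inputs the sketch yields roughly 2.7 L/min for both gases, a plausible heavy-exercise range with a respiratory exchange ratio near 1.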


A Study of the Relationship between Realistic Expression of Objects and Graphic Novel in Korean Comics - Focused on the work by Kwon, Ga-Ya - (한국만화에 있어 대상의 사실적 표현과 그래픽 노블의 연관관계에 대한 연구 - 권가야의 <남한산성>작품을 중심으로 -)

  • Park, Hee-Bok;Kim, Kwang-Su
    • Cartoon and Animation Studies / s.37 / pp.361-392 / 2014
  • Regarding the realistic expression of objects in painting, Gustave Courbet advocated realism in mid-19th-century France, resisting the academic style of painting of the time, and realist works were produced in earnest by painters who went along with him, such as H. Daumier and Jean-F. Millet. Realism later expanded beyond fine art into literature in general, profoundly influencing works of art and literary works. In comics, too, in the same historical context, realistic comics began to be produced by the painters and cartoonists of the time. These realist comics deal with stories based on facts; in terms of content, objective description and representation of the social realities of the times is one of their most important objectives, but it cannot be concluded that they were realistic in their visual aspect, that is, in how they expressed their objects. Meanwhile, the graphic novel, an intermediate form between comics and novels, emerged in the United States and Europe from the 1980s. Graphic novels appeared as forms and styles with strong literary and artistic value in a U.S. comics market dominated by the superhero genre, and their major characteristic is highly realistic expression in both content and visual aspect. They are complex and delicate, and carry artistic and literary value, as if the reader were reading a novel whose narrative structure and pictures were produced graphically. The characteristics of the realistic expression shown in graphic novels differ greatly from previous comics. It is noteworthy that they began to be acknowledged as works of art, like painting or illustration, thanks to their strongly individual auteurist drawing styles, the high degree of completion of the works, and their creative and experimental expression techniques, instead of following the fashion of the times. In recent years in South Korea, as Hollywood blockbuster films based on graphic novels have been released one after another and become box-office hits, interest in and demand for the original graphic novels have been increasing. Accordingly, many original graphic novels have been translated and put on sale, and keeping pace with this global trend, some writers in Korea have begun to produce graphic novels. However, looking at the domestic works claiming to be graphic novels, opinions vary on their format and authenticity. In this light, this study focuses on Ga-Ya Kwon's Namhansanseong, one of the representative works of Korean-style graphic novels, and analyzes its characteristics and commonalities, focusing on the visual aspect of the realistic expression of objects. Through this study, we hope to look back on the identity and present state of domestic graphic novels and to seek ways for the Korean-style graphic novel to develop further as a competitive genre of comics by developing and applying Korea's original subject matter, differentiated from the graphic novels of the U.S., Europe, and Japan, and to provide a new energizer for the stagnant domestic comics market.

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.109-125 / 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities and spends more than 10 trillion won annually on defense improvement. Since the Defense Acquisition Program is directly related to national security as well as the lives and property of the people, it must be carried out transparently and efficiently by experts. However, the excessive diversification of the laws and regulations related to the program has made it challenging for many working-level officials to carry it out smoothly; many reportedly discover relevant regulations they were unaware of only after their work is underway. In addition, statutory statements related to the Defense Acquisition Program can cause serious issues even if a single expression within a sentence is wrong. Despite this, efforts to establish a sentence-comparison system that corrects such issues in real time have been minimal. This paper therefore proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" built on a Siamese-network-based artificial neural network, a model from the field of natural language processing (NLP), which measures the similarity between sentences likely to appear in Defense Acquisition Program documents and the corresponding statutory provisions, classifies the risk of illegality, and alerts users to the consequences. Several neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentences" (taken from actual statutes) and "Edited Sentences" (derived by editing the Original Sentences). Among the many statutes related to the Defense Acquisition Program, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The Original Sentences comprise the 83 clauses of these acts that working-level officials encounter most in their work. For each clause, 30 to 50 Edited Sentences were written: similar sentences likely to appear, in modified form, in military reports. The Edited Sentences were produced by modifying the Original Sentences according to 12 rules, in proportion to the number of applicable rules. After 1:1 sentence-similarity evaluation experiments, each Edited Sentence could be classified as legal or illegal with considerable accuracy. However, because the Edited Sentence dataset is characterized by those 12 rules, models trained only on the Original and Edited Sentences could not effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, model performance was reassessed with an additional 120 new sentences, written to better resemble those in actual military reports while remaining associated with the Original Sentences. We were then able to confirm that the models' performance surpassed a certain level even when trained merely on the Original and Edited Sentence data. If sufficient learning is achieved by improving and expanding the training data with sentences that actually appear in reports, the models will be able to classify sentences from military reports as legal or illegal more reliably. Based on the experimental results, this study confirms the feasibility and value of building a real-time automated comparison system between military documents and related laws. The system can identify which specific clause, among the several that appear in the related statutes, is most similar to a sentence appearing in a Defense Acquisition Program military report, which helps determine whether the report's contents are at risk of illegality when compared with the law.
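The core Siamese idea the paper relies on, passing both sentences through the same encoder and comparing the resulting vectors, can be sketched without any neural network. Here a character-bigram bag stands in for the paper's Bi-LSTM encoder, so this is an illustrative baseline, not the authors' model, and the sample sentences are invented.

```python
import math
from collections import Counter

def encode(sentence):
    """Shared encoder: character-bigram frequency vector (a toy stand-in
    for a learned Bi-LSTM sentence encoder)."""
    s = sentence.lower()
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similarity(s1, s2):
    # Siamese pattern: ONE encoder applied to TWO inputs, then a distance.
    return cosine(encode(s1), encode(s2))

sim_close = similarity("the contract shall be signed by the minister",
                       "the contract must be signed by the minister")
sim_far = similarity("the contract shall be signed by the minister",
                     "cherry blossoms bloom in early april")
```

A report sentence would be scored against each statutory clause this way, and a low maximum similarity (or high similarity to a known-illegal edit) would flag it for review.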

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a resounding victory against Lee Sedol. Many people thought machines could not beat humans at Go because, unlike in chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains, and deep learning in particular drew interest as the core technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems. It performs especially well in image recognition, and more generally on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigate whether deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate deep learning algorithms on this binary classification problem, we compared models using the CNN and LSTM algorithms and the dropout technique, all widely used in deep learning, against MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used instead of overall accuracy to evaluate how well the models classify the class of interest. The deep learning techniques were applied as follows. The CNN algorithm reads adjacent values and recognizes local features, but since business data fields are usually independent, the distance between fields carries little meaning; we therefore set the CNN filter size to the number of fields, so the model learns the characteristics of the whole record at once, and added a hidden layer for decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first to reduce the influence of field position. For dropout, neurons were dropped with probability 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. The experiments yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNNs performed well on a binary classification problem to which they have rarely been applied, as well as in the fields where their effectiveness is proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
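The study's choice of F1 score over overall accuracy is worth a small worked example: on an imbalanced binary problem, a trivial majority-class model scores high accuracy but zero F1. The tiny label sets below are illustrative only, not the Portuguese bank data.

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # 20% positives (e.g. responders)
always_no = [0] * 10                       # trivial majority-class predictor

accuracy = sum(t == p for t, p in zip(y_true, always_no)) / len(y_true)
f1 = f1_score(y_true, always_no)           # 0.0 despite 80% accuracy
```

This is exactly why the paper ranks its CNN, LSTM, and MLP variants by F1: accuracy alone would reward models that ignore the rare class of interest.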

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To solve this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available, effectively and efficiently ranking search results will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link-analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. The link-structure-based ranking method has played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. Meanwhile, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making a ranking algorithm for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so the link-structure-based ranking method seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, a collection of Web pages, with only a single recursive property: the 'refers to' property corresponding to hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties, and consequently the ranking methods used in the WWW must be modified to reflect this complexity. Previous research addressed the problem of ranking query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority and hub scores, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or the resources should be described in reasonable detail, for the algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important yet densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may score highly not because it is actually important, but simply because it is very common and therefore has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that solves the problems of the previous studies. Our method is based on a class-oriented approach. In contrast to the predicate-oriented approach entertained by the previous research, under our approach the user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and it turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect and further sheds light on the other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which had not been employed even when they bear on resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure.
Finally, we summarize our experimental results and discuss further research issues.
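The property-weighted, HITS-style scoring that the abstract attributes to Mukherjea and Bamba can be sketched as follows. This is a minimal illustration only: the triples, property names, and weights below are hypothetical assumptions, not data or code from the paper.

```python
# Sketch of a property-weighted HITS-style iteration over RDF triples:
# a node's objectivity score grows with the weighted subjectivity of the
# subjects pointing at it, and vice versa. All names/weights are illustrative.
from math import sqrt

triples = [                      # (subject, property, object)
    ("paperA", "cites", "paperB"),
    ("paperA", "cites", "paperC"),
    ("paperD", "cites", "paperB"),
    ("paperB", "authoredBy", "alice"),
]
weights = {"cites": 1.0, "authoredBy": 0.5}   # per-property influence

nodes = {n for s, _, o in triples for n in (s, o)}
subj = {n: 1.0 for n in nodes}    # subjectivity ~ hub score
obj_ = {n: 1.0 for n in nodes}    # objectivity  ~ authority score

for _ in range(50):
    new_obj = {n: 0.0 for n in nodes}
    for s, p, o in triples:
        new_obj[o] += weights[p] * subj[s]
    new_subj = {n: 0.0 for n in nodes}
    for s, p, o in triples:
        new_subj[s] += weights[p] * new_obj[o]
    # L2-normalise each score vector so the iteration stays bounded
    on = sqrt(sum(v * v for v in new_obj.values())) or 1.0
    sn = sqrt(sum(v * v for v in new_subj.values())) or 1.0
    obj_ = {n: v / on for n, v in new_obj.items()}
    subj = {n: v / sn for n, v in new_subj.items()}

# paperB is the object of the most heavily weighted triples,
# so it ends up with the highest objectivity score
print(max(obj_, key=obj_.get))
```

The TKC effect discussed above arises in exactly this kind of mutual-reinforcement iteration: a small, densely interlinked cluster can dominate the scores regardless of its real importance.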

Verification of Gated Radiation Therapy: Dosimetric Impact of Residual Motion (여닫이형 방사선 치료의 검증: 잔여 움직임의 선량적 영향)

  • Yeo, Inhwan;Jung, Jae Won
    • Progress in Medical Physics
    • /
    • v.25 no.3
    • /
    • pp.128-138
    • /
    • 2014
  • In gated radiation therapy (gRT), residual motion causes beam delivery to irradiate not only the true extent of disease but also neighboring normal tissues. At a minimum, the delivery should cover the true extent (i.e., the clinical target volume, or CTV), even though the target moves during dose delivery. The objectives of this study are to verify that the intended dose is actually delivered to the true target in gRT, and to quantify how the dose delivered to the target and to neighboring normal tissues varies as the gating window (GW), motion amplitude (MA), and CTV size change. To this end, experimental and computational studies were designed and performed. A custom-made phantom with rectangle- and pyramid-shaped targets (CTVs) on a moving platform was scanned for four-dimensional imaging. Various GWs were selected, and image integration was performed to generate planning targets (internal target volumes, or ITVs) that included the CTVs plus internal margins (IMs). Planning was done conventionally for the rectangle target, and IMRT optimization was used for the pyramid target. Dose evaluation was then performed on a diode array aligned perpendicular to the gated beams, through both measurements and computational modeling of dose delivery under motion. This study quantitatively demonstrated and analytically interpreted the impact of residual motion, including penumbral broadening for both targets, perturbed but secure dose coverage of the CTV, and significant doses delivered to the neighboring normal tissues. Dose-volume histogram analyses also demonstrated and interpreted the trend of dose coverage: for the ITV, coverage increased as GW or MA decreased or as CTV size increased; for the IM, it increased as GW or MA decreased; for the neighboring normal tissue, the opposite of the IM trend was observed. 
This study provides a clear understanding of the impact of residual motion and shows that, if breathing is reproducible, gRT is secure despite discontinuous delivery and target motion. The procedures and computational model can be used for commissioning, routine quality assurance, and patient-specific validation of gRT. More work is needed on patient-specific dose reconstruction on CT images.
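The penumbral broadening described above is often modeled computationally by blurring the static dose profile with the distribution of residual target positions inside the gating window. The sketch below illustrates that idea in one dimension; the profile, field size, and motion values are hypothetical and are not taken from this study.

```python
# Illustrative sketch: model penumbral broadening from residual motion by
# averaging a static 1-D dose profile over the residual target positions
# sampled inside the gating window. All numbers are hypothetical.
import numpy as np

x = np.arange(-30.0, 30.0, 0.5)                  # position (mm), 0.5 mm bins
static = ((x > -10) & (x < 10)).astype(float)    # idealised 20 mm field

# residual positions sampled uniformly over a 4 mm gating window
shifts = np.arange(-2.0, 2.0 + 0.5, 0.5)
blurred = np.mean([np.roll(static, int(s / 0.5)) for s in shifts], axis=0)

# the blurred profile has a wider penumbra (0 < dose < 1 region) than the
# static one, while the dose at the field centre is preserved
penumbra_static = np.sum((static > 0) & (static < 1))
penumbra_blurred = np.sum((blurred > 0) & (blurred < 1))
print(penumbra_blurred > penumbra_static, blurred[x == 0][0])
```

Under this simple model the central dose is unchanged as long as the residual motion is smaller than the field half-width, which mirrors the study's finding of perturbed but secure CTV coverage alongside broadened penumbrae.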

ICM - Trophectoderm Cell Numbers of Mouse IVF/IVC Blastocysts (체외생산된 생쥐 배반포기배의 ICM과 Trophectoderm 세포수에 관한 연구)

  • Kim, E.Y.;Kim, S.E.;Uhm, S.J.;Yoon, S.H.;Park, S.P.;Chung, K.S.;Lim, J.H.
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.23 no.1
    • /
    • pp.25-32
    • /
    • 1996
  • This work was carried out to count the total, ICM, and TE cells of F1 mouse blastocysts at day 4 after IVF, by differential labelling of the nuclei with polynucleotide-specific fluorochromes, and to obtain fundamental information on preimplantation mouse embryo development. Blastocysts were produced from eggs of superovulated B6CBAF1 (C57BL/6${\times}$CBA) mice inseminated with $1{\times}10^6$ spermatozoa/ml and cultured in M16 medium at $37^{\circ}C$ in a 5% $CO_2$ incubator for 95 hr. Blastocysts were classified as early, middle, expanded, or hatching according to developmental morphology, i.e., blastocoel expansion and zona thickness. The results are summarized as follows. 1) The blastocyst development rate at 95 hr after IVF was 86.7%; the proportions classified as early, middle, expanded, and hatching were 16.3%, 18.9%, 10.5%, and 40.9%, respectively. 2) Total blastomere numbers counted with bisbenzimide in early, middle, expanded, and hatching blastocysts were 35.6${\pm}$10.4, 49.4${\pm}$8.6, 60.8${\pm}$10.7, and 62.7${\pm}$13.9, respectively. 3) With differential labelling using polynucleotide-specific fluorochromes, ICM cell numbers in early, middle, expanded, and hatching blastocysts were 9.6${\pm}$3.0, 13.6${\pm}$3.9, 16.0${\pm}$3.3, and 19.5${\pm}$4.6, and TE cell numbers were 30.6${\pm}$5.1, 39.9${\pm}$5.8, 42.2${\pm}$8.1, and 43.7${\pm}$11.1, respectively. Both cell numbers increased with developmental stage. Moreover, total counts obtained with bisbenzimide alone and with differential labelling showed the same increase with developmental stage and were nearly identical. 
Thus, this rapid and simple cell-counting method can be used to examine later preimplantation development, or as an indicator of embryo quality under varying culture conditions.
