• Title/Summary/Keyword: SO


Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.117-127
    • /
    • 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services are being popularized among users. Online social network services are online community services which enable users to communicate with each other, share information and expand human relationships. In social network services, each relation between users is represented by a graph consisting of nodes and links. As the users of online social network services increase rapidly, SNS are actively utilized in enterprise marketing, analysis of social phenomena and so on. Social Network Analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs, and it is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy, intensity of connection and classification of groups. Ever since Social Networking Services (SNS) drew the attention of millions of users, numerous studies have been conducted to analyze their user relationships and messages. Typical representative SNA methods are degree centrality, betweenness centrality and closeness centrality. In degree centrality analysis, the shortest path between nodes is not considered; however, it is a crucial factor in betweenness centrality, closeness centrality and other SNA methods. In previous SNA research, computation time was not too expensive since the size of the social network was small. Unfortunately, most SNA methods require significant time to process relevant data, which makes it difficult to apply them to the ever-increasing SNS data in social network studies.
For instance, if the number of nodes in an online social network is n, the maximum number of links in the network is n(n-1)/2; analyzing the social network is therefore very expensive, since, for example, 10,000 nodes yield 49,995,000 links. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Through this shortest-path finding method, we show how efficient our proposed approach is by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method with the addition of a best-first search and a preprocessing step for the reduction of computation time and the rapid search of shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. As a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with multiple connections functions as a hub node. When searching for a particular node, looking first at users with numerous links instead of searching all users indiscriminately has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is a set of vertices and E is a set of links between two different nodes. When such a heuristic evaluation function is used, the worst case occurs when the target node is situated at the bottom of a skewed tree; in order to handle such target nodes, a preprocessing step is conducted. Next, we find the shortest path between two nodes in the social network efficiently and then analyze the network. For the verification of the proposed method, we crawled data on 160,000 people online and then constructed their social network.
Then we compared the proposed method with previous ones, best-first search and breadth-first search, in time for searching and analyzing. The suggested method takes 240 seconds to search nodes, where the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times faster in betweenness centrality analysis and 1.8 times faster in closeness centrality analysis. The proposed method in this paper shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
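The degree-based best-first search this abstract describes can be sketched as follows. The toy graph, node names, and greedy priority rule are illustrative assumptions, not the paper's implementation; note that such a greedy search trades exactness of the shortest path for speed, which is precisely the heuristic trade-off the abstract discusses.

```python
import heapq

def best_first_path(graph, start, goal):
    # Expand high-degree (hub) neighbors first: priority = negative degree,
    # so heapq pops the node with the most connections next.
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (-len(graph[nxt]), nxt, path + [nxt]))
    return None  # no path between start and goal

# Toy adjacency list standing in for the crawled SNS user graph.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c", "d", "e"],   # "b" is the hub node
    "c": ["a", "b"],
    "d": ["b", "e"],
    "e": ["b", "d"],
}
path = best_first_path(graph, "a", "e")
```

Starting from "a", the search jumps to the hub "b" before the lower-degree "c", reaching "e" without exploring the whole graph.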

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN in business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) by using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine the graph of past price movements and predict future financial price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. Then, it creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40 × 40 pixels, and the graph of each independent variable is drawn using a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices in order to express the value of the color using the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset as the training dataset, and the remaining 20% as the validation dataset. Finally, CNN classifiers are trained using the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layer. In the pooling layer, a 2 × 2 max-pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that of the output layer to the softmax function. To validate our model, CNN-FG, we applied it to the prediction of KOSPI200 for 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by applying random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples), and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
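Steps 1-4 of the CNN-FG pipeline (windowing, graph rendering, RGB matrix conversion, and the 80/20 split) can be sketched roughly as below. The synthetic series, the crude single-channel line rendering, and all parameter choices are assumptions for illustration; the paper renders each independent variable in its own color before feeding the CNN.

```python
import numpy as np

def make_windows(series, width=5):
    # Step 1: divide the series into non-overlapping 5-day intervals.
    n = len(series) // width
    return series[:n * width].reshape(n, width)

def render_graph(window, size=40):
    # Steps 2-3: render one variable's 5-day path as a crude line chart on a
    # size x size canvas and return it as an RGB matrix (height, width, 3).
    img = np.zeros((size, size, 3), dtype=np.uint8)
    lo, hi = window.min(), window.max()
    scale = (hi - lo) or 1.0
    xs = np.linspace(0, size - 1, len(window)).astype(int)
    ys = ((window - lo) / scale * (size - 1)).astype(int)
    img[size - 1 - ys, xs, 0] = 255   # this variable drawn in the red channel
    return img

rng = np.random.default_rng(0)
prices = rng.normal(0, 1, 1000).cumsum()               # synthetic price series
windows = make_windows(prices)                         # (200, 5)
images = np.stack([render_graph(w) for w in windows])  # (200, 40, 40, 3)

# Step 4: 80/20 train/validation split.
split = int(0.8 * len(images))
train, valid = images[:split], images[split:]
```

Step 5 would then train a CNN (two 5×5 convolutions, 2×2 max pooling, dense layers of 900 and 32 nodes, softmax output) on `train`.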

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. Especially, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data will therefore help users to look at many events at a glance. In addition, if we build and provide an event network based on the relevance of events, it will greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017, and integrated synonyms by leaving only meaningful words through preprocessing using NPMI and Word2Vec. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, to find the peaks of the topic distribution, and to detect events. A total of 32 topics were extracted from the topic modeling, and the time of occurrence of each event was deduced by looking at the point at which its topic distribution surged. As a result, a total of 85 events were detected, which were filtered down to a final 16 events using a Gaussian smoothing technique. We also calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we calculated the relevance between events and connected the events accordingly. Finally, we set up the event network by assigning each event to a vertex and the relevance score between events to the edges connecting those vertices.
The event network constructed by our method helped us to sort out the major events in the political and social fields in Korea that occurred in the last year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to easily analyze large amounts of data and to identify the relevance of events that were difficult to detect with existing event detection. We applied various text mining techniques, together with Word2Vec, in the text preprocessing to improve the accuracy of extracting proper nouns and compound nouns, which has been difficult in analyzing Korean texts. The event detection and network construction techniques of this study have the following advantages in practical application. First, LDA topic modeling, which is unsupervised learning, can easily analyze topics, topic words, and their distributions from a huge amount of data; also, by using the date information of the collected news articles, it is possible to express the distribution by topic as a time series. Second, by calculating relevance scores and constructing an event network from the co-occurrence of topics, which is difficult to grasp with existing event detection, we can present the connections between events in a summarized form. This is supported by the fact that the inter-event relevance-based event network proposed in this study was actually constructed in order of occurrence time. It is also possible to identify, through the event network, which event served as the starting point for a series of events. The limitation of this study is that LDA topic modeling yields different results according to the initial parameters and the number of topics, and the topic and event names in the analysis results must be given by the subjective judgment of the researcher.
Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study, or between events that belong to the same topic.
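The cosine-based relevance scoring and edge construction described above can be sketched as follows. The event-by-date occurrence matrix and the edge threshold are illustrative assumptions, not the study's data.

```python
import numpy as np

def cosine_matrix(m):
    # Pairwise cosine coefficients between the rows of m.
    unit = m / np.linalg.norm(m, axis=1, keepdims=True)
    return unit @ unit.T

# Hypothetical event-by-date occurrence matrix: rows are detected events,
# columns are dates; a 1 marks a date on which the event's topic surged.
occurrence = np.array([
    [1, 1, 0, 0, 1],   # event 0
    [1, 0, 1, 0, 1],   # event 1 (co-occurs with event 0 on two dates)
    [0, 0, 1, 1, 0],   # event 2
], dtype=float)

relevance = cosine_matrix(occurrence)

# Connect two events (vertices) with an edge weighted by their relevance
# score whenever the score exceeds a threshold (the threshold is assumed).
threshold = 0.5
edges = [(i, j, relevance[i, j])
         for i in range(len(relevance))
         for j in range(i + 1, len(relevance))
         if relevance[i, j] > threshold]
```

Here only events 0 and 1 co-occur often enough to be linked, so the resulting network has a single weighted edge.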

A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.177-198
    • /
    • 2018
  • Recently, as the character of water resources has shifted toward that of 'public property', it has become necessary to acquire and utilize a framework for measuring their economic value and managing performance. To date, the evaluation of water technology has been carried out by feasibility study analysis or technology assessment based on net present value (NPV) or benefit-to-cost (B/C) effects; however, it has not yet been systematized into valuation models that objectively assess the economic value of technology-based business so that research outcomes can receive diffusion and feedback. Therefore, K-water (a government-supported public company in Korea) feels the necessity to establish a technology valuation framework suitable for the technical characteristics of the water resources fields in its charge, and to verify it with an exemplary case applied to a technology. The K-water valuation applied in this study can be used as a tool to measure and manage the value and achievement that a technology, as a public-interest good, contributes to society. By calculating the value the subject technology contributes to the entire society as a public resource, we use it as basic information for publicizing the influence of the benefits or the necessity of cost input, and thereby secure the legitimacy of large-scale R&D cost input given the characteristics of public technology. Hence, K-water, one of the public corporations in Korea which deals with the public good of water resources, will be able to establish a commercialization strategy for business operation and prepare a basis for calculating the performance of input R&D cost. In this study, K-water has developed a web-based technology valuation model for public-interest water resources based on a technology evaluation system that suits the characteristics of technologies in water resources fields.
In particular, by utilizing the evaluation methodology of the National Institute of Advanced Industrial Science and Technology (AIST) in Japan to match expense items to expense accounts based on the related benefit items, we proposed the so-called 'K-water proprietary model', which combines the 'cost-benefit' approach with free cash flow (FCF), ultimately built a pipeline into the K-water research performance management system, and verified it with the practical case of a technology related to desalination. We analyze the embedded design logic and evaluation process of the web-based valuation system that reflects the characteristics of water resources technology, along with the reference information and database (D/B)-associated logic of each model used to calculate public-interest-based and profit-based technology values in the integrated technology management system. We review the hybrid evaluation module, which quantifies the qualitative evaluation indices reflecting the unique characteristics of water resources, and the visualized user interface (UI) of the actual web-based evaluation, both of which are appended to existing web-based technology valuation systems in other fields for calculating business value based on financial data. K-water's technology valuation model distinguishes between public-interest-type and profit-type water technologies. First, the evaluation modules in the profit-type technology valuation model are designed based on the 'profitability of technology'; for example, the technology inventory K-water holds includes a number of profit-oriented technologies such as water treatment membranes. On the other hand, the public-interest-type technology valuation is designed to evaluate public-interest-oriented technologies such as dams, reflecting the characteristics of public benefits and costs. In order to examine the appropriateness of the cost-benefit-based public utility valuation model (i.e., the K-water-specific technology valuation model) presented in this study, we applied it to practical cases, conducting a benefit-to-cost analysis for a water resource technology with a lifetime of 20 years. In the future, we will additionally verify the K-water public-utility-based valuation model for each business model reflecting various business environment characteristics.
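The benefit-to-cost arithmetic referred to above, discounting 20 years of annual benefits and costs to present value to obtain NPV and the B/C ratio, can be sketched minimally as below. All cash flows and the discount rate are illustrative assumptions, not K-water's figures.

```python
def present_value(flows, rate):
    # Discount a list of annual cash flows (years 1..n) to present value.
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

years = 20
rate = 0.045                              # assumed social discount rate
benefits = [120.0] * years                # hypothetical annual public benefit
costs = [100.0] + [40.0] * (years - 1)    # up-front cost, then operating cost

pv_benefit = present_value(benefits, rate)
pv_cost = present_value(costs, rate)
npv = pv_benefit - pv_cost                # net present value
bc_ratio = pv_benefit / pv_cost           # benefit-to-cost ratio
```

A project is conventionally considered economically justified when NPV > 0, or equivalently when the B/C ratio exceeds 1.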

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • A large amount of data is now available for the research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons which learn parameters such as weights as inputs pass through toward the outputs. A CNN's layer structure is best suited for image classification, as it comprises convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as the apparel by itself or a professional model wearing it. Such images may not be effective for training when one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained on far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. As transfer learning in CNN is composed of pre-training and fine-tuning stages, we divide the training step into two.
First, we pre-train our architecture on a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it has achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. Since we could not find any previously and publicly available runway image dataset, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, there have been no previous studies which trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images capturing all possible postures, which we denote as mobility, by using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow-Slim, we could save time spent on training the classification model, taking about 6 minutes per experiment to train the classifier. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image.
Specifically, a runway query image can be used in a mobile application service during fashion week to facilitate brand search, a street-style query image can be classified during fashion editorial work to label the brand or style, and a website query image can be processed by an e-commerce multi-complex service providing item information or recommending similar items.
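The two-stage transfer-learning idea, a frozen pre-trained backbone plus a small head fine-tuned on the target data, can be illustrated with a toy stand-in. In the paper, the backbone is GoogLeNet pre-trained on ImageNet; here a fixed random ReLU projection plays that role, and only a logistic-regression head is trained on synthetic "target-domain" data. Everything below is an assumption for illustration, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def backbone(x, w_frozen):
    # Frozen pre-trained layers: these parameters are NOT updated
    # during fine-tuning.
    return np.maximum(x @ w_frozen, 0.0)   # ReLU features

# Toy target-domain data: 200 samples, 16 raw dims, 2 classes.
x = rng.normal(size=(200, 16))
y = (x[:, 0] > 0).astype(float)            # class driven by one input feature

w_frozen = rng.normal(size=(16, 32)) / 4.0  # stands in for pre-trained weights
feats = backbone(x, w_frozen)

# Fine-tuning stage: train only the classification head on fixed features.
w_head = np.zeros(32)
for _ in range(500):
    z = np.clip(feats @ w_head, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))               # sigmoid head
    w_head -= 0.1 * feats.T @ (p - y) / len(y)  # logistic-loss gradient step

acc = ((feats @ w_head > 0) == (y == 1)).mean()  # training accuracy
```

Because only the small head is optimized, fine-tuning is cheap, which is why the abstract reports training times of minutes rather than hours.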

The Effect of Corporate SNS Marketing on User Behavior: Focusing on Facebook Fan Page Analytics (기업의 SNS 마케팅 활동이 이용자 행동에 미치는 영향: 페이스북 팬페이지 애널리틱스를 중심으로)

  • Jeon, Hyeong-Jun;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.75-95
    • /
    • 2020
  • With the growth of social networks, various forms of SNS have emerged. Driven by various motivations for use such as interactivity, information exchange, and entertainment, SNS users are also on a fast-growing trend. Facebook is the main SNS channel, and companies have started using Facebook pages as a public relations channel. To this end, in the early stages of operation, companies focus on securing a large number of fans, and as a result, the number of corporate Facebook fans has recently grown to as many as millions. From a corporate perspective, Facebook is attracting attention because it makes it easier to reach the customers a company wants. Facebook provides an efficient advertising platform based on the vast data it holds: advertising targeting can be conducted using users' demographic characteristics, behavior, or contact information, and it is optimized for advertisements that expose information to a desired target, so that results can be obtained more effectively. Companies also use content to rethink and communicate their brand image to customers. The study was conducted on Facebook advertising data, and could be of great help to practitioners working in the online advertising industry. For this reason, the independent variables used in the research were selected based on the characteristics of the content that actual businesses are concerned with. Recently, the goal of corporate Facebook page operation has gone beyond securing fans, toward branding to promote the brand and, further, toward communicating with major customers. The main figures for this assessment are Facebook's 'Like', 'Attachment', 'Share', and click counts, which are the dependent variables of this study. In order to measure the outcome against the target, consumer responses are set as measurable key performance indicators (KPIs), and a strategy is set and executed to achieve them.
Here, the KPIs use Facebook's advertising metrics 'reach', 'impressions', 'likes', 'shares', 'comments', 'clicks', and 'CPC', depending on the situation. To achieve the corresponding figures, content production must be considered first, and in this study the independent variables were organized into three considerations for content production. The effects of content material, content structure, and message style on Facebook user behavior were analyzed using regression analysis. Content material is related to the content's difficulty, company relevance, and daily-life involvement. According to existing research, how content attracts users' interest is very important. Content can be divided into informative content and interesting content: informative content is content related to the brand, for which information exchange with users is important, while interesting content is defined as posts not related to the brand, such as interesting movies or anecdotes. Based on this, this study started from the assumption that difficulty, company relevance, and daily-life involvement have an effect on the dependent variables. In addition, previous studies have found that content types, determined by the combination of photos and text used in the content, affect Facebook user activity. Based on these studies, the use of actual photos and hashtags was also examined among the independent variables. Finally, we focused on the advertising message. In previous studies, the effect of advertising messages on users differed depending on whether they were narrative or non-narrative, and furthermore their influence differed with message intimacy. In this study, we investigated whether Facebook users' behavior differs depending on the language style and formality of the message. For the dependent variables, the 'Like' count and the total click count are determined by users' actions on the content.
In this study, we defined each independent variable based on the existing research literature and analyzed its effect on the dependent variables. We found that for 'Like', factors such as 'self-association', 'real-life use', 'content material difficulty', 'real-life involvement', and the 'scale × difficulty' interaction were significant. In addition, variables such as 'self-connection', 'real-life involvement', and the 'formality × attention' interaction were shown to have significant effects on total clicks. Through these results, it is expected that presenting a content strategy optimized for the purpose of the content can contribute to the operation and production strategies of corporate Facebook operators and content producers.
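The regression analysis the abstract describes, content characteristics regressed on a per-post engagement count, can be illustrated with ordinary least squares on synthetic data. All variable names, coefficients, and the data-generating process below are hypothetical, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
difficulty = rng.uniform(0, 1, n)         # content material difficulty
brand_relevance = rng.uniform(0, 1, n)    # company relevance
daily_involvement = rng.uniform(0, 1, n)  # daily-life involvement

# Synthetic outcome: involvement helps, difficulty hurts, relevance is neutral.
likes = 50 + 30 * daily_involvement - 20 * difficulty + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), difficulty, brand_relevance, daily_involvement])
beta, *_ = np.linalg.lstsq(X, likes, rcond=None)
# beta[1..3] estimate the effect of each content characteristic on 'Like'.
```

In the study itself, significance would then be judged from coefficient t-statistics (and interaction terms such as 'formality × attention' would be added as extra columns of X), not from the point estimates alone.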

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience from maintenance workers. In the event of a rolling stock failure, the knowledge and experience of the maintainer make a difference in the time and quality of the work needed to solve the problem, and the resulting availability of the vehicle varies accordingly. Although problem solving is generally based on fault manuals, experienced and skilled professionals can quickly diagnose failures and take action by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass on completely to a successor, and there have been studies that developed case-based rolling stock expert systems to turn it into a data-driven form. Nonetheless, research on the KTX rolling stock most commonly used on main lines, or on systems that extract the meaning of text and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for emerging failures by using the know-how of rolling stock maintenance experts as examples of problem solving. For this purpose, a case base was constructed by collecting the rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was constructed separately from the case base to include the essential terminology and failure codes, in consideration of the specialized nature of the railway rolling stock sector. Based on the deployed case base, a new failure was matched against past cases, and the top three most similar failure cases were extracted so that the actual actions taken in those cases could be proposed as a diagnostic guide.
In this study, various dimensionality reduction measures were applied to calculate similarity while taking into account the semantic relationships among failure details, in order to overcome the limitation of keyword-matching case retrieval in previous case-based-reasoning expert system studies for rolling stock failures, and their usefulness was verified through experiments. Among the various dimensionality reduction techniques, similar cases were retrieved by applying three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, to extract the characteristics of the failure and measure the cosine distance between the vectors. Precision, recall, and F-measure were used to assess the performance of the proposed actions. To compare the dimensionality reduction techniques against an algorithm that randomly extracts failure cases with identical failure codes and an algorithm that applies cosine similarity directly on words, analysis of variance confirmed that the performance differences among the five algorithms were statistically significant. In addition, optimal techniques were derived for practical application by verifying differences in performance depending on the number of dimensions retained in dimensionality reduction. The analysis showed that direct cosine similarity performed better than the reduced dimensions obtained with Non-negative Matrix Factorization (NMF) and Latent Semantic Analysis (LSA), and that the algorithm using Doc2Vec performed best. Furthermore, in terms of dimensionality reduction, performance improved as the number of dimensions grew toward an appropriate level.
Through this study, we confirmed the usefulness of effective methods for extracting the characteristics of data and converting unstructured text when applying case-based reasoning in the specialized KTX rolling stock domain, where most attributes are recorded as free text. Text mining is being studied for use in many areas, but studies that exploit such text data remain scarce in environments like the one addressed here, with many specialized terms and limited access to data. In this regard, it is significant that this study first presented an intelligent diagnostic system that complements keyword-based case search by applying text mining techniques to extract failure characteristics and suggest actions from retrieved cases. This work is expected to serve as a basic study for developing diagnostic systems that can be used immediately on site.
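The retrieval step described above can be sketched in a few lines. The example below implements only the word-based cosine-similarity baseline that the study compares against (the study's stronger variants first reduce dimensionality with NMF, LSA, or Doc2Vec before measuring cosine distance); the failure descriptions here are hypothetical illustrations, not drawn from the study's case base.

```python
# Minimal sketch of top-3 similar-case retrieval over failure descriptions.
# This is the word-based cosine-similarity baseline; the hypothetical
# example cases below are NOT the study's actual KTX failure data.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top3_similar(query, cases):
    """Return the three past cases most similar to the new failure query."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(c.lower().split())), c) for c in cases]
    return sorted(scored, reverse=True)[:3]

past_cases = [
    "traction motor overheating alarm during acceleration",
    "pantograph contact loss causing power interruption",
    "brake cylinder pressure drop detected during inspection",
    "traction converter fault after main circuit breaker trip",
    "door sensor failure preventing departure",
]

for score, case in top3_similar("traction motor overheating on main line", past_cases):
    print(f"{score:.3f}  {case}")
```

In the full system, the action actually taken in each retrieved case would be shown alongside it as the diagnostic guide; swapping the raw term-frequency vectors for NMF-, LSA-, or Doc2Vec-reduced vectors changes only the vectorization step, not the ranking logic.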

Comparison and evaluation of volumetric modulated arc therapy and intensity modulated radiation therapy plans for postoperative radiation therapy of prostate cancer patient using a rectal balloon (직장풍선을 삽입한 전립선암 환자의 수술 후 방사선 치료 시 용적변조와 세기변조방사선치료계획 비교 평가)

  • Jung, Hae Youn;Seok, Jin Yong;Hong, Joo Wan;Chang, Nam Jun;Choi, Byeong Don;Park, Jin Hong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.27 no.1
    • /
    • pp.45-52
    • /
    • 2015
  • Purpose : In postoperative radiation therapy for prostate cancer, the dose distribution to organs at risk (OAR) and normal tissue depends on the treatment technique. The aim of this study was to compare dose distribution characteristics and evaluate treatment efficiency by devising VMAT plans with different numbers of arcs and an IMRT plan for postoperative prostate cancer patients treated with a rectal balloon. Materials and Methods : Ten patients who received postoperative prostate radiation therapy in our hospital were compared. CT images of patients with an inserted rectal balloon were acquired at 3 mm slice thickness, and 10 MV beams from a Truebeam STx (Varian, Palo Alto, USA) equipped with an HD120 MLC were planned using Eclipse (version 11.0, Varian, Palo Alto, USA). A 1-arc VMAT plan, a 2-arc VMAT plan, and a 7-field IMRT plan were devised for each patient, with the same dose-volume constraints and plan normalization applied to all. To evaluate these plans, PTV coverage, conformity index (CI), and homogeneity index (HI) were compared, and $R_{50%}$ was calculated to assess low-dose spillage for each plan. Rectum $D_{25%}$ and bladder Dmean were compared for the OARs, and total monitor units (MU) and delivery time were recorded to evaluate treatment efficiency. Each result was analyzed as the average over the 10 patients. Additionally, portal dosimetry was carried out to verify beam delivery accuracy. Results : There was no significant difference in PTV coverage or HI among the three plans, but CI and $R_{50%}$ were highest for 7F-IMRT, at 1.230 and 3.991 respectively (p=0.00). Rectum $D_{25%}$ was similar between 1A-VMAT and 2A-VMAT, but approximately 7% higher for 7F-IMRT (p=0.02), while bladder Dmean was similar among all plans (p>0.05). Total MU were 494.7, 479.7, and 757.9 for 1A-VMAT, 2A-VMAT, and 7F-IMRT respectively (p=0.00), highest for 7F-IMRT.
Delivery times were 65.2 sec, 133.1 sec, and 145.5 sec respectively (p=0.00), clearly shortest for 1A-VMAT. All plans showed gamma pass rates (2 mm, 2%) above 99.5% (p=0.00) in portal dosimetry quality assurance. Conclusion : For postoperative prostate cancer radiation therapy with a rectal balloon, PTV coverage did not differ significantly among the plans, but 1A-VMAT and 2A-VMAT reduced dose to normal tissue and the OARs more effectively. Between the VMAT plans, $R_{50%}$ and MU were slightly lower for 2A-VMAT, but 1A-VMAT had the shortest delivery time, so it is regarded as an effective plan that can also reduce intra-fractional patient motion.
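The plan-quality indices compared above (CI, HI, $R_{50%}$) can be computed directly from a dose grid and a target mask. The sketch below is illustrative only: index definitions vary between reports, so the formulas used here (an RTOG-style CI, HI as D2%/D98%) are common conventions, not necessarily the exact ones used in this study, and the uniform toy dose grid is hypothetical.

```python
# Hedged sketch of common plan-quality indices. Definitions vary between
# reports; these are conventional forms, not necessarily the study's own.
import numpy as np

def conformity_index(dose, target_mask, prescription):
    """RTOG-style CI: prescription-isodose volume / target volume."""
    v_ri = np.count_nonzero(dose >= prescription)   # reference-isodose voxels
    return v_ri / np.count_nonzero(target_mask)

def homogeneity_index(dose, target_mask):
    """HI = D2% / D98% inside the target (1.0 = perfectly homogeneous)."""
    d = dose[target_mask]
    return np.percentile(d, 98) / np.percentile(d, 2)

def r50(dose, target_mask, prescription):
    """R50%: volume receiving >= 50% of the prescription / target volume."""
    return np.count_nonzero(dose >= 0.5 * prescription) / np.count_nonzero(target_mask)

# Toy example: uniform 10 Gy dose grid, target occupying half the grid.
dose = np.full((4, 4, 4), 10.0)
target = np.zeros_like(dose, dtype=bool)
target[:2] = True
print(conformity_index(dose, target, 10.0))  # → 2.0 (isodose spills over the whole grid)
```

A CI near 1 and a low $R_{50%}$ indicate a prescription dose tightly conformed to the target with little low-dose spillage, which is why the higher 7F-IMRT values of 1.230 and 3.991 reported above count against that plan.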


A study on the air pollutant emission trends in Gwangju (광주시 대기오염물질 배출량 변화추이에 관한 연구)

  • Seo, Gwang-Yeob;Shin, Dae-Yewn
    • Journal of environmental and Sanitary engineering
    • /
    • v.24 no.4
    • /
    • pp.1-26
    • /
    • 2009
  • We draw the following conclusions from air pollution data measured by the city monitoring network administered and managed in Gwangju over the seven years from January 2001 to December 2007. In addition, major statistics governed by Gwangju City, along with national official statistics on air pollutant emissions estimated by the National Institute of Environmental Research, were used. The results are as follows. 1. The distribution of air-emission factories by managing authority is: Gwangju City Hall (67.8%) > Gwangsan District Office (13.6%) > Buk District Office (9.8%) > Seo District Office (5.5%) > Nam District Office (3.0%) > Dong District Office (0.3%); by district: Buk District (32.8%) > Gwangsan District (22.4%) > Seo District (21.8%) > Nam District (14.9%) > Dong District (8.1%). By type (2004~2007 average), the distribution is: Type 5 (45.2%) > Type 4 (40.7%) > Type 3 (8.6%) > Type 2 (3.2%) > Type 1 (2.2%); most are small factories of Types 4 and 5. 2. The distribution of car registrations by district is: Buk District (32.8%) > Gwangsan District (22.4%) > Seo District (21.8%) > Nam District (14.9%) > Dong District (8.1%); by fuel in 2001: gasoline (56.3%) > diesel (30.3%) > LPG (13.4%) > etc. (0.2%). In 2007 the ranking was unchanged: gasoline (47.8%) > diesel (35.6%) > LPG (16.2%) > etc. (0.4%). The number of gasoline cars increased slightly, while diesel and LPG cars increased markedly. 3. The distribution of air pollutant emissions in Gwangju by item is: CO (36.7%) > NOx (32.7%) > VOC (26.7%) > SOx (2.3%) > PM-10 (1.5%); CO and NOx, which are generated mainly by cars, account for a very large share. 4.
The distribution of mean air pollutant emissions (SOx, NOx, CO, VOC, PM-10) by district over the five years 2001~2005 is: Buk District (31.0%) > Gwangsan District (28.2%) > Seo District (20.4%) > Nam District (12.5%) > Dong District (7.9%). Emissions were highest in Buk District, which has the largest population, the most car registrations, and the most emitting businesses, and lowest in Dong District, which has the least of each. 5. The average shares of SOx sources over the five years 2001~2005 in Gwangju are: non-industrial combustion (59.5%) > combustion in manufacturing industry (20.4%) > road transportation (11.4%) > non-road transportation (3.8%) > waste disposal (3.7%) > production process (1.1%); average SOx emissions by district are: Gwangsan District (33.3%) > Buk District (28.0%) > Seo District (19.3%) > Nam District (10.2%) > Dong District (9.1%). 6. NOx emissions in Gwangju break down as: road transportation (59.1%) > non-road transportation (18.9%) > non-industrial combustion (13.3%) > combustion in manufacturing industry (6.9%) > waste disposal (1.6%) > production process (0.1%); by district: Buk District (30.7%) > Gwangsan District (28.8%) > Seo District (20.5%) > Nam District (12.2%) > Dong District (7.8%). 7. Carbon monoxide emissions break down as: road transportation (82.0%) > non-industrial combustion (10.6%) > non-road transportation (5.4%) > combustion in manufacturing industry (1.7%) > waste disposal (0.3%); by district: Buk District (33.0%) > Seo District (22.3%) > Gwangsan District (21.3%) > Nam District (14.3%) > Dong District (9.1%). 8.
Volatile organic compound (VOC) emissions break down as: solvent utilization (69.5%) > road transportation (19.8%) > energy storage & transport (4.4%) > non-road transportation (2.8%) > waste disposal (2.4%) > non-industrial combustion (0.5%) > production process (0.4%) > combustion in manufacturing industry (0.3%); by district: Gwangsan District (36.8%) > Buk District (28.7%) > Seo District (17.8%) > Nam District (10.4%) > Dong District (6.3%). 9. Fine dust emissions break down as: road transportation (76.7%) > non-road transportation (16.3%) > non-industrial combustion (6.1%) > combustion in manufacturing industry (0.7%) > waste disposal (0.2%) > production process (0.1%); by district: Buk District (32.8%) > Gwangsan District (26.0%) > Seo District (19.5%) > Nam District (13.2%) > Dong District (8.5%). 10. As for the major source of each pollutant: oxides of sulfur come mainly from non-industrial combustion, i.e. heating for residences, businesses, agriculture, and stockbreeding; NOx, carbon monoxide, and fine dust come mainly from road transportation, i.e. cars and two-wheeled vehicles; and VOCs come mainly from solvent-utilization facilities. 11. The concentration of sulfurous acid gas has held at 0.004 ppm since 2001, with no year-to-year change, suggesting it has reached a stable stage. This reflects the steady shift from solid and liquid fuels to gas or low-sulfur liquid fuel containing very little sulfur, so that almost no change in concentration appears. 12.
Concerning diurnal changes, the concentration of NO is relatively higher than that of $NO_2$ between 6 AM and 1 PM, while $NO_2$ is higher at other times, and NOx (NO, $NO_2$) concentrations are relatively high on weekday evenings. This indicates a correlation between NOx concentration and car traffic, consistent with road transportation accounting for 59.1% of NOx emissions. 13. Regarding the relationship between PM-10 and PM-2.5, PM-2.5 accounts for 49.1~61.2% of PM-10 overall, but only 45.4~44.5% during March and April, the lowest share of the year. This shows that yellow sand from China carries more particles larger than $2.5\;{\mu}m$ than smaller ones. Particles smaller than $2.5\;{\mu}m$ are most prevalent during July~August and December~January, and road transportation is confirmed to account for 76.7% of fine dust in Gwangju.

Developmental Plans and Research on Private Security in Korea (한국 민간경비 실태 및 발전방안)

  • Kim, Tea-Hwan;Park, Ok-Cheol
    • Korean Security Journal
    • /
    • no.9
    • /
    • pp.69-98
    • /
    • 2005
  • Private security, the security industry for civilians, was first introduced to Korea via the US Army's security system in the early 1960s. Official police laws followed in 1973, and private security finally started to develop with the passing of the service security industry law in 1976. Korea's private security industry grew rapidly in the 1980s with the support of foreign funds and products, and approximately 2,000 private security enterprises are now thought to be operating in Korea. However, the majority of these enterprises currently experience difficulties such as lack of funds, insufficient management, and lack of control over employees; as a result, some find it hard to avoid low output and bankruptcy. These enterprises often resort to illegal measures such as excessive dumping, or sidestep problems by hiring employees who lack the right skills or qualifications for the job. The underlying problem is that it is very easy to enter this private security market, and these hindering factors inhibit market growth and impede qualitative development. For these reasons, I researched this area, and will analyze and criticize the present condition of Korea's private security and present a possible development plan by referring to cases from the US and Japan. My research method was to investigate related documentary records and articles and to interview people for necessary evidence.
The theoretical study involved examining books and dissertations published inside and outside the country, the complete collection of laws and regulations, internet data, various study reports, and the documentary records and statistical data of institutions such as the National Police Office, the judicial training institute, and private security enterprises. In addition, professionals in charge of practical affairs on the spot were consulted to overcome the limitations of documentary records. To grasp the problems and difficulties experienced in these enterprises, I interviewed workers about how they feel in their workplaces and which elements impede development, and also interviewed police officers in charge of supervising private escort enterprises, in an effort to identify the problems and differences in opinion between the domestic private security service and the police. From this investigation and research I pinpoint the major problems of private security and present a development plan. First, the private police law and the private security service law should be unified. Second, a specialty certificate system is essential for improving the quality of private security service. Third, a new private security market must be opened up by improving the old system. Fourth, the competitive power of security service enterprises must be built up through efficient management. Fifth, a special marketing strategy is needed to retain customers. Sixth, positive research based on theoretical studies is needed. Seventh, consistent and even training matched to effective market demand is needed. Eighth, the interrelationship with the police department must be maintained.
Ninth, the system of the Korean private security service association must be reinforced. Tenth, a private security laboratory must be established. Based on these suggestions, private security service should improve.
