
A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images and voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future financial price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn as a $40{\times}40$ pixel image, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing its color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation sets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filter banks ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layers, and a $2{\times}2$ max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the output layer had 2 nodes (one for the prediction of an upward trend, the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the Softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset comprised twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
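The architecture described in this abstract (two 5×5 convolution stages with 6 and 9 feature maps, 2×2 max pooling, fully connected layers of 900 and 32 nodes, and a 2-node softmax output over 40×40 RGB inputs) can be illustrated as a forward pass. The following NumPy sketch uses hypothetical random weights and is not the authors' implementation; only the layer sizes come from the abstract, everything else is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(x, w):
    """'Valid' convolution. x: (H, W, C_in); w: (k, k, C_in, C_out)."""
    k = w.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, w.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # contract the patch against every filter at once
            out[i, j] = np.tensordot(x[i:i + k, j:j + k], w, axes=3)
    return out

def max_pool_2x2(x):
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

img = rng.random((40, 40, 3))                 # one 40x40 RGB fluctuation graph
w1 = 0.1 * rng.standard_normal((5, 5, 3, 6))  # 5x5 filters, 6 feature maps
w2 = 0.1 * rng.standard_normal((5, 5, 6, 9))  # 5x5 filters, 9 feature maps

h = max_pool_2x2(relu(conv2d(img, w1)))       # (40-4)/2 -> (18, 18, 6)
h = max_pool_2x2(relu(conv2d(h, w2)))         # (18-4)/2 -> (7, 7, 9)
flat = h.reshape(-1)                          # 441 features

# fully connected layers (900 and 32 nodes), softmax output for up/down
fc1 = relu(flat @ (0.05 * rng.standard_normal((flat.size, 900))))
fc2 = relu(fc1 @ (0.05 * rng.standard_normal((900, 32))))
probs = softmax(fc2 @ (0.05 * rng.standard_normal((32, 2))))  # P(up), P(down)
```

With untrained weights the output probabilities are of course arbitrary; the sketch only traces how a 40×40 graph image is reduced to a two-class prediction.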

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data would therefore help users survey many events at a glance, and building an event network based on the relevance between events would greatly help readers understand current affairs. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, through preprocessing using NPMI and Word2Vec, kept only meaningful words and integrated synonyms. Latent Dirichlet allocation (LDA) topic modeling was then used to calculate the topic distribution by date and to detect events by finding peaks in those distributions. A total of 32 topics were extracted, and the occurrence time of each event was inferred from the point at which the corresponding topic distribution surged. As a result, 85 events were initially detected, which were filtered down to a final 16 events using a Gaussian smoothing technique. To construct the event network, we calculated a relevance score between the detected events: using the cosine coefficient between co-occurring events, we computed the relevance between events and connected them accordingly. Finally, we built the event network by assigning each event to a vertex and the relevance score between events to the edges connecting those vertices.
The event network constructed by our method allowed us to sort the major political and social events in Korea over the past year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify relevance between events that was difficult to detect with existing methods. We also applied various text mining techniques together with Word2Vec in the preprocessing step to improve the accuracy of extracting proper nouns and compound nouns, which has been difficult in analyzing Korean texts. The event detection and network construction techniques of this study have the following advantages in practical application. First, LDA topic modeling, an unsupervised learning method, can easily extract topics, topic words, and their distributions from a huge amount of data; moreover, by using the date information of the collected news articles, the per-topic distribution can be expressed as a time series. Second, by calculating relevance scores from the co-occurrence of topics, which is difficult to grasp with existing event detection methods, and constructing an event network, we can present the connections between events in a summarized form. This is supported by the fact that the relevance-based event network proposed in this study was indeed organized in order of occurrence time, and that the network makes it possible to identify the event that served as the starting point for a series of events. A limitation of this study is that LDA topic modeling yields different results depending on the initial parameters and the number of topics, and the topic and event names must be assigned by the subjective judgment of the researcher.
Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies should calculate the relevance between events not covered in this study or between events belonging to the same topic.
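The detection-and-network pipeline above (smooth each topic's daily share with a Gaussian kernel, take surges as events, link events whose co-occurrence profiles have high cosine similarity) can be sketched with toy data. Everything below is a hypothetical illustration with invented series and co-occurrence vectors, not the paper's data or code.

```python
import numpy as np

def gaussian_smooth(series, sigma=2.0):
    """Smooth a daily topic-share series with a normalized Gaussian kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(series, k / k.sum(), mode="same")

def detect_events(topic_share, thresh):
    """Days where the smoothed share is a local peak above a threshold."""
    s = gaussian_smooth(topic_share)
    return [t for t in range(1, len(s) - 1)
            if s[t] > s[t - 1] and s[t] >= s[t + 1] and s[t] > thresh]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# toy daily share of one topic: flat baseline plus a burst around day 30
days = np.arange(100)
share = 0.05 + 0.5 * np.exp(-(days - 30) ** 2 / (2 * 3.0 ** 2))
events = detect_events(share, thresh=0.2)

# toy event network: connect events with similar co-occurrence vectors
cooc = {"e1": np.array([3., 1., 0.]), "e2": np.array([2., 1., 0.]),
        "e3": np.array([0., 0., 4.])}
edges = [(a, b) for a in cooc for b in cooc
         if a < b and cosine(cooc[a], cooc[b]) > 0.8]
```

Here the burst is recovered as a single event at day 30, and only the two events with overlapping co-occurrence profiles are joined by an edge; the cosine threshold plays the role of the paper's relevance-score cutoff.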

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • Large amounts of data are now available for research and business sectors to extract knowledge from. These data may take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through to the outputs. Its layer structure is well suited for image classification, comprising convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of a professional model wearing it. Such images may not be effective for training a classifier when one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained on far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNN is composed of pre-training and fine-tuning stages, we divide the training step into two.
First, we pre-train our architecture on a large-scale dataset, ImageNet, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Since we could not find any previously published runway image dataset, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images capturing all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow-Slim, we could save training time, taking about 6 minutes per experiment to train the classifier. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image.
To be specific, a runway query image can support a mobile application service during fashion week to facilitate brand search; a street style query image can be classified during fashion editorial work to label the brand or style; and a website query image can be processed by an e-commerce multi-complex service that provides item information or recommends similar items.
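The fine-tuning stage described above can be reduced to its essential idea: keep the pre-trained backbone's weights frozen and train only a new classification head on the target images. The sketch below is a hedged stand-in, not GoogLeNet or the authors' TensorFlow-Slim code; the "backbone" is a placeholder function and the "runway features" are synthetic, linearly separable toy data.

```python
import numpy as np

rng = np.random.default_rng(7)

def frozen_backbone(images):
    """Stand-in for pre-trained feature extraction (e.g. a GoogLeNet
    pooling layer). Here the toy 'features' are passed through as-is."""
    return images

# toy 8-d "features" for two brands, deliberately well separated
X = np.vstack([rng.normal(+2.0, 1.0, (50, 8)),
               rng.normal(-2.0, 1.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

W = np.zeros((8, 2))                 # new softmax head, trained from scratch
for _ in range(300):                 # gradient descent on the head only
    logits = frozen_backbone(X) @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = X.T @ (p - np.eye(2)[y]) / len(y)   # cross-entropy gradient
    W -= 0.1 * grad                  # backbone weights are never updated

acc = (np.argmax(frozen_backbone(X) @ W, axis=1) == y).mean()
```

Because only the small head is optimized, training is fast and the pre-trained representation is preserved, which is the same trade-off that makes the paper's 6-minute-per-experiment fine-tuning possible.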

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been used in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand across fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific and proper information difficult to obtain. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products is summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently. Product names similar to KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, taken as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or require multiple assumptions. In addition, the level of the market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high practical potential, since it can meet unmet needs for detailed market size information in the public and private sectors: it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity.
The product-group clustering could also be replaced with other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
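The grouping-and-summation step above (extract product names whose embeddings are cosine-similar to a KSIC index word, then sum their sales) can be sketched with toy data. The embeddings, product names, and sales figures below are invented stand-ins for trained Word2Vec vectors and the Statistics Korea microdata; only the mechanism follows the abstract.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical 3-d embeddings standing in for trained Word2Vec vectors
embeddings = {
    "notebook computer": np.array([0.9, 0.1, 0.0]),
    "laptop":            np.array([0.8, 0.2, 0.1]),
    "refrigerator":      np.array([0.0, 0.1, 0.9]),
}
# hypothetical per-product sales (e.g. in billion KRW)
sales = {"notebook computer": 120.0, "laptop": 80.0, "refrigerator": 200.0}

def market_size(index_vec, threshold=0.8):
    """Group products similar to a KSIC index vector and sum their sales."""
    group = [name for name, vec in embeddings.items()
             if cosine(index_vec, vec) >= threshold]
    return group, sum(sales[name] for name in group)

ksic_index = np.array([1.0, 0.0, 0.0])   # stand-in for a KSIC index word
group, size = market_size(ksic_index)
```

Raising or lowering `threshold` coarsens or refines the product group, which is exactly the lever the abstract describes for adjusting the level of the market category.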

A study on the Degradation and By-products Formation of NDMA by the Photolysis with UV: Setup of Reaction Models and Assessment of Decomposition Characteristics by the Statistical Design of Experiment (DOE) based on the Box-Behnken Technique (UV 공정을 이용한 N-Nitrosodimethylamine (NDMA) 광분해 및 부산물 생성에 관한 연구: 박스-벤켄법 실험계획법을 이용한 통계학적 분해특성평가 및 반응모델 수립)

  • Chang, Soon-Woong;Lee, Si-Jin;Cho, Il-Hyoung
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.1
    • /
    • pp.33-46
    • /
    • 2010
  • We investigated the characteristics of the decomposition and by-products of N-nitrosodimethylamine (NDMA) in a UV process using a design of experiment (DOE) based on the Box-Behnken design. The main factors (variables) were UV intensity ($X_1$, range: $1.5{\sim}4.5\;mW/cm^2$), NDMA concentration ($X_2$, range: 100~300 uM), and pH ($X_3$, range: 3~9), each at 3 levels, and four responses ($Y_1$: % of NDMA removal; $Y_2$: dimethylamine (DMA) formation (uM); $Y_3$: dimethylformamide (DMF) formation (uM); $Y_4$: $NO_2$-N formation (uM)) were set up to estimate the prediction models and the optimization conditions. The prediction models and the optimum points obtained by canonical analysis were: $Y_1$ [% of NDMA removal] = $117+21X_1-0.3X_2-17.2X_3+2.43X_1^2+0.001X_2^2+3.2X_3^2-0.08X_1X_2-1.6X_1X_3-0.05X_2X_3$ ($R^2$ = 96%, Adjusted $R^2$ = 88%), optimum 99.3% ($X_1$: $4.5\;mW/cm^2$, $X_2$: 190 uM, $X_3$: 3.2); $Y_2$ [DMA conc.] = $-101+18.5X_1+0.4X_2+21X_3-3.3X_1^2-0.01X_2^2-1.5X_3^2-0.01X_1X_2+0.07X_1X_3-0.01X_2X_3$ ($R^2$ = 99.4%, Adjusted $R^2$ = 95.7%), optimum 35.2 uM ($X_1$: $3\;mW/cm^2$, $X_2$: 220 uM, $X_3$: 6.3); $Y_3$ [DMF conc.] = $-6.2+0.2X_1+0.02X_2+2X_3-0.26X_1^2-0.01X_2^2-0.2X_3^2-0.004X_1X_2+0.1X_1X_3-0.02X_2X_3$ ($R^2$ = 98%, Adjusted $R^2$ = 94.4%), optimum 3.7 uM ($X_1$: $4.5\;mW/cm^2$, $X_2$: 290 uM, $X_3$: 6.2); and $Y_4$ [$NO_2$-N conc.] = $-25+12.2X_1+0.15X_2+7.8X_3+1.1X_1^2+0.001X_2^2-0.34X_3^2+0.01X_1X_2+0.08X_1X_3-3.4X_2X_3$ ($R^2$ = 98.5%, Adjusted $R^2$ = 95.7%), optimum 74.5 uM ($X_1$: $4.5\;mW/cm^2$, $X_2$: 220 uM, $X_3$: 3.1). This study has demonstrated that response surface methodology and the Box-Behnken statistical experiment design can provide statistically reliable results for the decomposition and by-products of NDMA by UV photolysis, and for the determination of optimum conditions.
Predictions obtained from the response functions were in good agreement with the experimental results, indicating the reliability of the methodology used.
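The model form used above (a full second-order polynomial in three coded factors, fitted from a Box-Behnken design) can be demonstrated generically. The sketch below builds the standard three-factor Box-Behnken design (12 edge midpoints plus a center point, coded levels -1/0/+1), generates noiseless responses from an invented coefficient vector, and recovers the coefficients by least squares; the numbers are synthetic, not the paper's data.

```python
import numpy as np

def quad_terms(X):
    """Design matrix for a full quadratic model in three factors."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones_like(x1), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1 * x2, x1 * x3, x2 * x3])

# three-factor Box-Behnken design: 12 edge midpoints + 1 center point
design = np.array([[s1, s2, 0] for s1 in (-1, 1) for s2 in (-1, 1)] +
                  [[s1, 0, s3] for s1 in (-1, 1) for s3 in (-1, 1)] +
                  [[0, s2, s3] for s2 in (-1, 1) for s3 in (-1, 1)] +
                  [[0, 0, 0]], dtype=float)

# invented "true" coefficients (intercept, linear, quadratic, interaction)
true_beta = np.array([99.0, 2.1, -0.3, -1.7, 0.4, 0.1, 0.3, -0.8, -1.6, -0.5])
y = quad_terms(design) @ true_beta            # noiseless synthetic responses

# least-squares fit of the same quadratic form, as in response surface work
beta_hat, *_ = np.linalg.lstsq(quad_terms(design), y, rcond=None)
```

With only 13 runs the design supports all 10 coefficients of the quadratic model, which is why the Box-Behnken layout is an economical choice for three-factor response-surface studies like this one.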

Relationship between Stress and Eating Habits of Adults in Ulsan (울산지역 성인 남녀의 스트레스와 식습관)

  • Kim, Hye-Kyung;Kim, Jin-Hee
    • Journal of Nutrition and Health
    • /
    • v.42 no.6
    • /
    • pp.536-546
    • /
    • 2009
  • This study investigated the effect of stress on appetite, eating habits, and other health-related behaviors. The subjects consisted of 188 males and 224 females in the Ulsan area. The results were as follows. When stressed, 56% (n = 231) of the subjects experienced a change in appetite and, of these, 32% (n = 132) experienced an increased appetite; stress-induced eating may thus be one factor contributing to the development of obesity. There was a gender-specific response to stress: women were more likely to use food to deal with stress, whereas men were more likely to turn to alcohol consumption or smoking. The types of stressors were individual (52.9%), social (50.7%), family relations (34.5%), work demands (34.2%), and physical environment (32.3%). Stress-induced symptoms included anxiety (38.3%), headache (36.7%), and neck or shoulder aches (36.2%), with females experiencing these symptoms more than males. Subjects older than 50 years had higher eating habit scores and lower stress scores than younger subjects. There were significant differences in eating habits and stress level according to sex, age, occupation, family type, BMI, exercise, and sleeping hours. This study may help provide health professionals with appropriate counseling tools to improve the health of all individuals.

Comparison and evaluation of volumetric modulated arc therapy and intensity modulated radiation therapy plans for postoperative radiation therapy of prostate cancer patient using a rectal balloon (직장풍선을 삽입한 전립선암 환자의 수술 후 방사선 치료 시 용적변조와 세기변조방사선치료계획 비교 평가)

  • Jung, hae youn;Seok, jin yong;Hong, joo wan;Chang, nam jun;Choi, byeong don;Park, jin hong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.27 no.1
    • /
    • pp.45-52
    • /
    • 2015
  • Purpose: In postoperative radiation therapy for prostate cancer, the dose distribution to organs at risk (OAR) and normal tissue is affected by the treatment technique. The aim of this study was to compare dose distribution characteristics and to evaluate treatment efficiency by devising VMAT plans with different numbers of arcs and an IMRT plan for postoperative prostate cancer radiation therapy using a rectal balloon. Materials and Methods: Ten patients who received postoperative prostate radiation therapy in our hospital were compared. CT images of patients with an inserted rectal balloon were acquired at 3 mm slice thickness, and 10 MV beams of a Truebeam STx (Varian, Palo Alto, USA) equipped with an HD120 MLC were planned using Eclipse (version 11.0, Varian, Palo Alto, USA). A 1-arc VMAT plan, a 2-arc VMAT plan, and a 7-field IMRT plan were devised for each patient, with the same dose-volume constraints and plan normalization. To evaluate these plans, PTV coverage, conformity index (CI), and homogeneity index (HI) were compared, and $R_{50%}$ was calculated to assess low-dose spillage for each plan. For the OARs, rectum $D_{25%}$ and bladder $D_{mean}$ were compared. To evaluate treatment efficiency, total monitor units (MU) and delivery time were considered. Each result was analyzed as the average value over the 10 patients. Additionally, portal dosimetry was carried out to verify the accuracy of beam delivery. Results: There was no significant difference in PTV coverage or HI among the 3 plans, but CI and $R_{50%}$ were highest for 7F-IMRT, at 1.230 and 3.991, respectively (p=0.00). Rectum $D_{25%}$ was similar between 1A-VMAT and 2A-VMAT, but approximately 7% higher for 7F-IMRT (p=0.02), and bladder $D_{mean}$ was similar among all plans (p>0.05). Total MU were 494.7, 479.7, and 757.9 for 1A-VMAT, 2A-VMAT, and 7F-IMRT, respectively (p=0.00), highest for 7F-IMRT.
The delivery times were 65.2 sec, 133.1 sec, and 145.5 sec, respectively (p=0.00), with 1A-VMAT clearly the shortest. All plans showed gamma pass rates (2 mm, 2%) over 99.5% (p=0.00) in portal dosimetry quality assurance. Conclusion: In postoperative prostate cancer radiation therapy for patients using a rectal balloon, there was no significant difference in PTV coverage, but 1A-VMAT and 2A-VMAT were more efficient at reducing dose to normal tissue and OARs. Between the VMAT plans, $R_{50%}$ and MU were slightly lower for 2A-VMAT, but 1A-VMAT had the shortest delivery time, so it is regarded as an effective plan that may also reduce intra-fractional patient motion.


THE RELATIONSHIP BETWEEN PARTICLE INJECTION RATE OBSERVED AT GEOSYNCHRONOUS ORBIT AND DST INDEX DURING GEOMAGNETIC STORMS (자기폭풍 기간 중 정지궤도 공간에서의 입자 유입률과 Dst 지수 사이의 상관관계)

  • 문가희;안병호
    • Journal of Astronomy and Space Sciences
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2003
  • To examine the causal relationship between geomagnetic storms and substorms, we investigate the correlation between the Dst index during magnetic storms and the dispersionless particle injection rate of proton flux observed by geosynchronous satellites, which is known to be a typical indicator of substorm expansion activity. We utilize geomagnetic storms that occurred during the period 1996~2000 and categorize them into three classes in terms of the minimum value of the Dst index ($Dst_{min}$): intense ($-200nT{\leq}Dst_{min}{\leq}-100nT$), moderate ($-100nT{\leq}Dst_{min}{\leq}-50nT$), and small ($-50nT{\leq}Dst_{min}{\leq}-30nT$) storms. We use proton fluxes in the energy range from 50 keV to 670 keV, the major constituents of the ring current particles, observed by the LANL geosynchronous satellites located within the local time sector from 18:00 MLT to 04:00 MLT. We also examine the flux ratio ($f_{max}/f_{ave}$) to estimate the particle energy injection rate into the inner magnetosphere, with $f_{ave}$ and $f_{max}$ being the flux levels during quiet times and at onset, respectively. The total energy injection rate into the inner magnetosphere cannot be estimated from particle measurements by one or two satellites; however, it should be at least proportional to the flux ratio and the injection frequency. Thus we propose a quantity, the "total energy injection parameter (TEIP)", defined as the product of the flux ratio and the injection frequency, as an indicator of the energy injected into the inner magnetosphere. To investigate the phase dependence of the substorm contribution to the development of a magnetic storm, we examine the correlations separately during the main and recovery phases of the storm. Several interesting tendencies are noted, particularly during the main phase. First, the average particle injection frequency tends to increase with storm size, with a correlation coefficient of 0.83.
Second, the flux ratio ($f_{max}/f_{ave}$) tends to be higher during large storms; the correlation coefficient between $Dst_{min}$ and the flux ratio is generally high, for example, 0.74 for the 75~113 keV energy channel. Third, there is a high correlation between the TEIP and $Dst_{min}$, with the highest coefficient (0.80) recorded for the 75~113 keV energy channel, the typical particle energies of the ring current belt. Fourth, particle injection during the recovery phase tends to make storms longer, particularly intense storms. These characteristics observed during the main phase of magnetic storms indicate that substorm expansion activity is closely associated with the development of magnetic storms.
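The proposed quantity is simple to state numerically: TEIP = (f_max / f_ave) × injection frequency, correlated per storm against $Dst_{min}$. The sketch below computes it for a handful of invented storms (all values are synthetic placeholders, not the LANL measurements) and takes the Pearson correlation, as done in the study.

```python
import numpy as np

# synthetic per-storm quantities (stand-ins for the satellite observations)
f_ave   = np.array([10., 12.,  9., 11.,  8.])   # quiet-time flux level
f_max   = np.array([50., 90., 30., 70., 20.])   # flux level at injection onset
n_inj   = np.array([ 4.,  7.,  2.,  5.,  1.])   # injection frequency
dst_min = np.array([-120., -180., -60., -150., -35.])  # storm size (nT)

flux_ratio = f_max / f_ave
teip = flux_ratio * n_inj               # total energy injection parameter
r = np.corrcoef(teip, dst_min)[0, 1]    # Pearson correlation with Dst_min
```

In this toy data the larger (more negative $Dst_{min}$) storms were given larger injections, so the correlation comes out strongly negative, mirroring the sign of the relationship the study reports during the main phase.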

A Study on Market Expansion Strategy via Two-Stage Customer Pre-segmentation Based on Customer Innovativeness and Value Orientation (고객혁신성과 가치지향성 기반의 2단계 사전 고객세분화를 통한 시장 확산 전략)

  • Heo, Tae-Young;Yoo, Young-Sang;Kim, Young-Myoung
    • Journal of Korea Technology Innovation Society
    • /
    • v.10 no.1
    • /
    • pp.73-97
    • /
    • 2007
  • R&D into future technologies should be conducted in conjunction with technological innovation strategies that are linked to corporate survival within a framework of information- and knowledge-based competitiveness, and future technology strategies should be ensured through open R&D organizations. The development of future technologies should not be conducted simply on the basis of forecasts, but should take customer needs into account in advance and reflect them in the development of future technologies or services. This research selects as segmentation variables customers' attitudes toward accepting future telecommunication technologies and their value orientation in everyday life, as these factors will have the greatest effect on the demand for future telecommunication services, and thereby segments the future telecom service market. It likewise seeks to segment the market from the stage of technology R&D and to employ the results in formulating technology development strategies. Based on customer attitudes toward accepting new technologies, two groups were derived, and a hierarchical customer segmentation model was provided to conduct a secondary segmentation of the two groups on the basis of their respective customer value orientations. A survey was conducted in June 2006 on 800 consumers aged 15 to 69, residing in Seoul and five other major South Korean cities, through one-on-one interviews. The samples were divided into two sub-groups according to their level of acceptance of new technology: one demonstrating a high level of technology acceptance (39.4%) and another with a comparatively lower level (60.6%). These two sub-groups were further divided into 5 smaller sub-groups each (10 in total) through two rounds of segmentation.
The ten sub-groups were then analyzed in detail, including general demographic characteristics, usage patterns of existing telecom services such as mobile service, broadband internet, and wireless internet, ownership of computing or information devices, and the desire or intention to purchase one. Through these steps, we were able to show statistically that each of the 10 sub-groups responds to telecom services as an independent market. Through correspondence analysis, the target segmentation groups were positioned in such a way as to facilitate the market entry, diffusion, and transferability of future telecommunication services.


A Study on an Effective Decellularization Technique for a Xenograft Cardiac Valve: the Effect of Osmotic Treatment with Hypotonic Solution (이종 심장 판막 이식편에서 효과적인 탈세포화 방법에 관한 연구; 저장성 용액(hypotonic solution)의 삼투압 처치법 효과)

  • Sung, Si-Chan;Kim, Yong-Jin;Choi, Sun-Young;Park, Ji-Eun;Kim, Kyung-Hwan;Kim, Woong-Han
    • Journal of Chest Surgery
    • /
    • v.41 no.6
    • /
    • pp.679-686
    • /
    • 2008
  • Background: Cellular remnants in a bioprosthetic heart valve are known to be related to the host's immunologic response, and they can form the nidus for calcification. The extracellular matrix of decellularized valve tissue can also be used as a biological scaffold for cell attachment, endothelialization, and tissue reconstitution. Thus, decellularization is the most important step in making a bioprosthetic valve or biological scaffold. Many protocols and agents have been suggested for decellularization, yet there have been few reports about the effect of treatment with hypotonic solution prior to chemical or enzymatic treatment. This study investigated the effect of treatment with hypotonic solution and the appropriate conditions, such as temperature, treatment duration, and sodium dodecyl sulfate (SDS) concentration, for achieving proper decellularization. Material and Method: Porcine aortic valves were decellularized with sodium dodecyl sulfate at various concentrations (0.25%, 0.5%), durations (6, 12, 24 hours), and temperatures ($4^{\circ}C$, $20^{\circ}C$) (Group B). The same number of porcine aortic valves (Group A) was treated with hypotonic solution prior to SDS treatment under the same conditions; the duration of exposure to the hypotonic solution was 4, 7, and 14 hours, and the temperature was $4^{\circ}C$ or $20^{\circ}C$, respectively. The degree of decellularization was analyzed by hematoxylin and eosin staining. Result: There was no difference in the degree of decellularization between the two SDS concentrations (0.25%, 0.5%). Twenty-four hours of SDS treatment showed the best decellularization effect for both Groups A and B at $4^{\circ}C$, but there was no difference between the groups at $20^{\circ}C$. Treatment with hypotonic solution (Group A) showed a better decellularization effect under all matched conditions.
Fourteen hours of treatment at $4^{\circ}C$ with hypotonic solution prior to SDS treatment showed the best decellularization effect. Treatment with hypotonic solution at $20^{\circ}C$ also produced good decellularization, but it caused significant extracellular matrix destruction. Conclusion: Exposure of porcine heart valves to hypotonic solution prior to SDS treatment is highly effective for achieving decellularization, and osmotic treatment with hypotonic solution should be considered for decellularization of porcine aortic valves. Further study should be carried out to see whether treatment with hypotonic solution can reduce the required exposure duration and concentration of chemical detergents, and to evaluate how exposure to hypotonic solution affects the structure of the extracellular matrix of the porcine valve.