• Title/Summary/Keyword: total system


Public Sentiment Analysis of Korean Top-10 Companies: Big Data Approach Using Multi-categorical Sentiment Lexicon (국내 주요 10대 기업에 대한 국민 감성 분석: 다범주 감성사전을 활용한 빅 데이터 접근법)

  • Kim, Seo In; Kim, Dong Sung; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.45-69 / 2016
  • Recently, sentiment analysis using open Internet data has been actively performed for various purposes. As online communication channels have become popular, companies try to capture public sentiment about themselves from open online information sources. This research was conducted to analyze public sentiment toward the Korean top-10 companies using a multi-categorical sentiment lexicon. Whereas existing research on public sentiment measurement with a big data approach classifies sentiment into dimensions, this research classifies public sentiment into multiple categories. The dimensional sentiment structure has been commonly applied in sentiment analysis because it is academically well established and has the clear advantage of capturing the degree of sentiment and the interrelation of each dimension. However, the dimensional structure is not effective for measuring public sentiment because human sentiment is too complex to be divided into a few dimensions. In addition, special training is needed for ordinary people to express their feelings in a dimensional structure. People do not divide their sentiment into dimensions, nor do they undergo psychological training when they feel. They would not express their feelings in terms of a dimensional structure such as positive/negative or active/passive; rather, they express them in terms of sentiment categories such as sadness, rage, and happiness. That is, the categorical approach to sentiment analysis is more natural than the dimensional approach. Accordingly, this research suggests a multi-categorical sentiment structure as an alternative way to measure social sentiment from the point of view of the public. A multi-categorical sentiment structure classifies sentiment in the way that ordinary people do, although it may contain some subjectivity. In this research, nine categories are used as the multi-categorical sentiment structure: 'Sadness', 'Anger', 'Happiness', 'Disgust', 'Surprise', 'Fear', 'Interest', 'Boredom' and 'Pain'. To capture public sentiment toward the Korean top-10 companies, Internet news data on the companies were collected over the past 25 months from a representative Korean portal site. Based on sentiment words extracted from previous studies, we created a sentiment lexicon and analyzed the frequency with which these words appear in the news data. The frequency of each sentiment category was calculated as a ratio of the total sentiment words in order to rank the distributions. Sentiment comparisons among the top-4 companies, 'Samsung', 'Hyundai', 'SK', and 'LG', were visualized separately. As a next step, the research tested hypotheses to prove the usefulness of the multi-categorical sentiment lexicon, examining how effectively categorical sentiment can be used as a relative comparison index in cross-sectional and time-series analysis. To test the effectiveness of the sentiment lexicon as a cross-sectional comparison index, a pair-wise t-test and a Duncan test were conducted. Two pairs of companies, 'Samsung' and 'Hanjin', and 'SK' and 'Hanjin', were chosen to examine whether each categorical sentiment differs significantly in the pair-wise t-test. Since the category 'Sadness' has the largest vocabulary, it was chosen to determine whether the subgroups of companies differ significantly in the Duncan test. It was shown that five sentiment categories of Samsung and Hanjin and four sentiment categories of SK and Hanjin differ significantly.
In the 'Sadness' category, six subgroups were found to be significantly different. To test the effectiveness of the sentiment lexicon as a time-series comparison index, the 'nut rage' incident of Hanjin was selected as an example case. The term frequency of sentiment words in the month when the incident happened was compared with that of the month before the event. Sentiment categories were regrouped into positive/negative sentiment to determine whether the event actually had a negative impact on public sentiment toward the company. The difference in each category was visualized, and the change in the word list for the sentiment 'Rage' was shown for concreteness. As a result, there was a large before-and-after difference in the sentiment that ordinary people felt toward the company. Both hypotheses turned out to be statistically significant, and therefore sentiment analysis in the business area using multi-categorical sentiment lexicons has persuasive power. This research implies that categorical sentiment analysis can be used as an alternative method to supplement dimensional sentiment analysis when assessing public sentiment in a business environment.
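To make the counting step concrete, the sketch below (Python, illustrative only) shows how per-category sentiment ratios can be computed from a categorical lexicon; the `lexicon` entries and the sample text are placeholders, not the Korean lexicon or news data used in the paper.

```python
from collections import Counter

# Hypothetical multi-categorical sentiment lexicon: category -> sentiment words.
# The actual lexicon in the paper is Korean and far larger; these entries are placeholders.
lexicon = {
    "Sadness": ["grief", "loss", "tearful"],
    "Anger": ["rage", "furious", "outrage"],
    "Happiness": ["joy", "delight", "pleased"],
    # ... remaining categories: Disgust, Surprise, Fear, Interest, Boredom, Pain
}

def category_ratios(news_texts):
    """Count lexicon hits per category and return each category's share of all sentiment words."""
    counts = Counter()
    for text in news_texts:
        tokens = text.lower().split()
        for category, words in lexicon.items():
            counts[category] += sum(tokens.count(w) for w in words)
    total = sum(counts.values()) or 1
    return {cat: counts[cat] / total for cat in lexicon}

# Ratios for one company's news articles, ready for ranking or cross-company comparison.
print(category_ratios(["the outrage over the scandal brought grief and loss to employees"]))
```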

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements by analyzing various sources of text data. To predict stock price movements, research has been conducted not only on the relationship between text data and fluctuations in stock prices, but also on trading stocks based on news articles and social media responses. Studies that predict the movements of stock prices have also applied classification algorithms built on a term-document matrix, in the same way as other text mining approaches. Because a document contains many words, it is better to select the words that contribute most when building a term-document matrix. Based on word frequency, words that show too little frequency or importance are removed. Words can also be selected according to their contribution by measuring the degree to which a word helps to correctly classify a document. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that have an influence on the classification. In this study, we analyze the documents for each individual stock and select the words that are irrelevant for all categories as neutral words. We extract the words around the selected neutral words and use them to generate the term-document matrix. The approach starts from the idea that stock movements are less related to the presence of the neutral words themselves, and that the words surrounding a neutral word are more likely to be related to stock price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, and we excluded from the selected words those that also appear in news articles for other stocks. Through an online news portal, we collected four months of news articles on the top 10 market-cap stocks. We used three months of news data as training data and applied the remaining one month of news articles to the model to predict the stock price movements of the next day. We used SVM, Boosting, and Random Forest for building models and predicting stock price movements. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the initial 60 days were used as a training set and the remaining 20 days as a test set. The proposed word-based algorithm showed better classification performance than the word selection method based on sparsity. This study predicted stock price volatility by collecting and analyzing news articles for the top 10 stocks by market cap. We used a term-document matrix based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method and the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also other news items to determine the words to extract. In other words, it removed not only the words that appeared in both rising and falling cases but also the words common to the news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy.
The limitation of this study is that the prediction task was set up as classifying rises and falls, and the experiment was conducted only for the top ten stocks. The 10 stocks used in the experiment do not represent the entire stock market. In addition, it is difficult to show investment performance because stock price fluctuation and profit rate may differ. Therefore, further research using more stocks and predicting returns through trading simulation is needed.
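The neutral-word idea can be illustrated with a short sketch: only the tokens surrounding assumed neutral words are kept as features for the term-document matrix, and a linear SVM (one of the three classifiers the study compares) is trained on the result. The neutral words, window size, articles, and labels below are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

neutral_words = {"today", "company", "market"}   # placeholder neutral words

def context_around_neutral(text, window=2):
    """Keep only the tokens within +/- `window` positions of each neutral word."""
    tokens = text.lower().split()
    kept = []
    for i, tok in enumerate(tokens):
        if tok in neutral_words:
            kept.extend(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])
    return " ".join(kept)

articles = ["Company profit surged today beating market forecasts",
            "Company lawsuit announced today shaking market confidence"]
labels = [1, 0]   # 1 = next-day rise, 0 = fall (illustrative)

# Term-document matrix built from the neutral-word contexts only.
X = CountVectorizer().fit_transform(context_around_neutral(a) for a in articles)
clf = SVC(kernel="linear").fit(X, labels)   # the study also compares Boosting and Random Forest
print(clf.predict(X))
```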

Spatio-Temporal Monitoring of Soil CO2 Fluxes and Concentrations after Artificial CO2 Release (인위적 CO2 누출에 따른 토양 CO2 플럭스와 농도의 시공간적 모니터링)

  • Kim, Hyun-Jun; Han, Seung Hyun; Kim, Seongjun; Yun, Hyeon Min; Jun, Seong-Chun; Son, Yowhan
    • Journal of Environmental Impact Assessment / v.26 no.2 / pp.93-104 / 2017
  • CCS (Carbon Capture and Storage) is a technical process to capture $CO_2$ from industrial and energy-related sources and to transport and sequester the compressed $CO_2$ in geological formations, oceans, or mineral carbonates. However, potential $CO_2$ leakage exists and can cause environmental problems. Thus, this study was conducted to analyze the spatial and temporal variations of $CO_2$ fluxes and concentrations after artificial $CO_2$ release. The Environmental Impact Evaluation Test Facility (EIT) was built in Eumseong, Korea in 2015. Approximately 34 kg $CO_2$/day/zone were injected at Zones 2, 3, and 4 among the total of 5 zones from October 26 to 30, 2015. $CO_2$ fluxes were measured every 30 minutes at the surface at 0 m, 1.5 m, 2.5 m, and 10 m from the $CO_2$ releasing well using a LI-8100A until November 13, 2015, and $CO_2$ concentrations were measured once a day at 15 cm, 30 cm, and 60 cm depths at 0 m, 1.5 m, 2.5 m, 5 m, and 10 m from the well using a GA5000 until November 28, 2015. The $CO_2$ flux at 0 m from the well started increasing on the fifth day after $CO_2$ release began, and continued to increase until November 13 even though the artificial $CO_2$ release had stopped. $CO_2$ fluxes measured at 2.5 m, 5.0 m, and 10 m from the well were not significantly different from each other. On the other hand, the soil $CO_2$ concentration reached 38.4% at the 60 cm depth at 0 m from the well in Zone 3 on the day after $CO_2$ release started. Soil $CO_2$ spread horizontally over time and was detected up to 5 m from the well in all zones until $CO_2$ release stopped. On the final day of the $CO_2$ release period, soil $CO_2$ concentrations at the 30 cm and 60 cm depths at 0 m from the well were similar, at $50.6{\pm}25.4%$ and $55.3{\pm}25.6%$ respectively, followed by the 15 cm depth ($31.3{\pm}17.2%$), which was significantly lower than the other depths. Soil $CO_2$ concentrations at all depths in all zones gradually decreased for about one month after $CO_2$ release stopped, but were still higher than those on the first day after $CO_2$ release started. In conclusion, the closer the distance to the well and the deeper the depth, the higher the $CO_2$ fluxes and concentrations. Also, long-term monitoring is required because leaked $CO_2$ gas can remain in the soil for a long time even after the leakage has stopped.

Field Studies of In-situ Aerobic Cometabolism of Chlorinated Aliphatic Hydrocarbons

  • Semprini, Lewis
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference / 2004.04a / pp.3-4 / 2004
  • Results will be presented from two field studies that evaluated the in-situ treatment of chlorinated aliphatic hydrocarbons (CAHs) using aerobic cometabolism. In the first study, a cometabolic air sparging (CAS) demonstration was conducted at McClellan Air Force Base (AFB), California, to treat CAHs in groundwater using propane as the cometabolic substrate. A propane-biostimulated zone was sparged with a propane/air mixture and a control zone was sparged with air alone. Propane-utilizers were effectively stimulated in the saturated zone with repeated intermittent sparging of propane and air. Propane delivery, however, was not uniform, with propane mainly observed in down-gradient observation wells. Trichloroethene (TCE), cis-1,2-dichloroethene (c-DCE), and dissolved oxygen (DO) concentration levels decreased in proportion with propane usage, with c-DCE decreasing more rapidly than TCE. The more rapid removal of c-DCE indicated biotransformation and not just physical removal by stripping. Propane utilization rates and rates of CAH removal slowed after three to four months of repeated propane additions, which coincided with the depletion of nitrogen (as nitrate). Ammonia was then added to the propane/air mixture as a nitrogen source. After a six-month period between propane additions, rapid propane utilization was observed. Nitrate was present due to groundwater flow into the treatment zone and/or the oxidation of the previously injected ammonia. In the propane-stimulated zone, c-DCE concentrations decreased below the detection limit (1 $\mu$g/L), and TCE concentrations ranged from less than 5 $\mu$g/L to 30 $\mu$g/L, representing removals of 90 to 97%. In the air-sparged control zone, TCE was removed at only the two monitoring locations nearest the sparge well, to concentrations of 15 $\mu$g/L and 60 $\mu$g/L. The responses indicate that stripping as well as biological treatment were responsible for the removal of contaminants in the biostimulated zone, with biostimulation enhancing removals to lower contaminant levels. As part of that study, bacterial population shifts that occurred in the groundwater during CAS and air sparging control were evaluated by length heterogeneity polymerase chain reaction (LH-PCR) fragment analysis. The results showed that an organism(s) with a fragment size of 385 base pairs (385 bp) was positively correlated with propane removal rates. The 385 bp fragment accounted for up to 83% of the total fragments in the analysis when propane removal rates peaked. A 16S rRNA clone library made from the bacteria sampled in propane-sparged groundwater included clones of a TM7 division bacterium that had a 385 bp LH-PCR fragment; no other bacterial species with this fragment size were detected. Both propane removal rates and the 385 bp LH-PCR fragment decreased as nitrate levels in the groundwater decreased. In the second study, the potential for bioaugmentation with a butane-utilizing culture was evaluated in a series of field tests conducted at the Moffett Field Air Station in California. A butane-utilizing mixed culture that was effective in transforming 1,1-dichloroethene (1,1-DCE), 1,1,1-trichloroethane (1,1,1-TCA), and 1,1-dichloroethane (1,1-DCA) was added to the saturated zone at the test site. This mixture of contaminants was evaluated since they are often present together as the result of 1,1,1-TCA contamination and the abiotic and biotic transformation of 1,1,1-TCA to 1,1-DCE and 1,1-DCA.
Model simulations were performed prior to the initiation of the field study. The simulations used a transport code that included processes for in-situ cometabolism, including microbial growth and decay, substrate and oxygen utilization, and the cometabolism of dual contaminants (1,1-DCE and 1,1,1-TCA). Based on the results of detailed kinetic studies with the culture, cometabolic transformation kinetics were incorporated that included butane mixed-inhibition of 1,1-DCE and 1,1,1-TCA transformation, and competitive inhibition of 1,1-DCE and 1,1,1-TCA on butane utilization. A transformation capacity term was also included in the model formulation that results in cell loss due to contaminant transformation. Parameters for the model simulations were determined independently in kinetic studies with the butane-utilizing culture and through batch microcosm tests with groundwater and aquifer solids from the field test zone to which the butane-utilizing culture was added. In microcosm tests, the model simulated well the repetitive utilization of butane and cometabolism of 1,1,1-TCA and 1,1-DCE, as well as the transformation of 1,1-DCE as it was repeatedly transformed at increased aqueous concentrations. Model simulations were then performed under the transport conditions of the field test to explore the effects of the bioaugmentation dose and the response of the system to biostimulation with alternating pulses of dissolved butane and oxygen in the presence of 1,1-DCE (50 $\mu$g/L) and 1,1,1-TCA (250 $\mu$g/L). A uniform aquifer bioaugmentation dose of 0.5 mg/L of cells resulted in complete utilization of the butane 2 meters downgradient of the injection well within 200 hours of bioaugmentation and butane addition. 1,1-DCE was much more rapidly transformed than 1,1,1-TCA, and efficient 1,1,1-TCA removal occurred only after 1,1-DCE and butane concentrations were decreased. The simulations demonstrated the strong inhibition of both 1,1-DCE and butane on 1,1,1-TCA transformation, and the more rapid 1,1-DCE transformation kinetics. Results of the field demonstration indicated that bioaugmentation was successfully implemented; however, it was difficult to maintain effective treatment for long periods of time (50 days or more). The demonstration showed that the bioaugmented experimental leg effectively transformed 1,1-DCE and 1,1-DCA, and was somewhat effective in transforming 1,1,1-TCA. The indigenous experimental leg, treated in the same way as the bioaugmented leg, was much less effective in treating the contaminant mixture. The best operating performance was achieved in the bioaugmented leg, with about 90%, 80%, and 60% removal for 1,1-DCE, 1,1-DCA, and 1,1,1-TCA, respectively. Molecular methods were used to track and enumerate the bioaugmented culture in the test zone. Real-time PCR analysis was used to enumerate the bioaugmented culture. The results show that higher numbers of the bioaugmented microorganisms were present in the treatment zone groundwater when the contaminants were being effectively transformed. A decrease in these numbers was associated with a reduction in treatment performance. The results of the field tests indicated that although bioaugmentation can be successfully implemented, competition for the growth substrate (butane) by the indigenous microorganisms likely led to the decrease in long-term performance.
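The kinetic structure described above (butane growth, competitive inhibition of butane utilization by the CAHs, mixed inhibition of CAH transformation by butane, and a transformation capacity term) can be sketched as a simple batch ODE system; the parameter values below are placeholders rather than the calibrated values from the kinetic studies, and the real model was embedded in a transport code with both CAHs and oxygen.

```python
import numpy as np
from scipy.integrate import odeint

# Placeholder kinetic parameters (not the calibrated values from the study).
kS, KS = 1.0, 0.1        # max butane utilization rate and half-saturation constant (mg/L)
kC, KC = 0.05, 0.02      # max 1,1-DCE transformation rate and half-saturation constant (mg/L)
KI_C = 0.05              # competitive inhibition constant of 1,1-DCE on butane utilization
KI_Sc, KI_Su = 0.1, 0.3  # butane mixed-inhibition constants on 1,1-DCE transformation
Y, b, Tc = 0.5, 0.01, 0.1  # cell yield, decay rate, transformation capacity (mg CAH/mg cells)

def model(y, t):
    S, C, X = y  # butane, 1,1-DCE, biomass (mg/L)
    rS = kS * X * S / (KS * (1 + C / KI_C) + S)                     # butane use, competitive inhibition
    rC = kC * X * C / (KC * (1 + S / KI_Sc) + C * (1 + S / KI_Su))  # 1,1-DCE, mixed inhibition by butane
    dX = Y * rS - b * X - rC / Tc                                   # growth - decay - transformation-capacity loss
    return [-rS, -rC, dX]

t = np.linspace(0, 100, 200)               # hours
sol = odeint(model, [5.0, 0.05, 0.5], t)   # initial butane, 1,1-DCE, cells (mg/L)
print(sol[-1])                             # remaining butane, 1,1-DCE, and biomass
```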


Clinical Usefulness of Implanted Fiducial Markers for Hypofractionated Radiotherapy of Prostate Cancer (전립선암의 소분할 방사선치료 시에 위치표지자 삽입의 유용성)

  • Choi, Young-Min; Ahn, Sung-Hwan; Lee, Hyung-Sik; Hur, Won-Joo; Yoon, Jin-Han; Kim, Tae-Hyo; Kim, Soo-Dong; Yun, Seong-Guk
    • Radiation Oncology Journal / v.29 no.2 / pp.91-98 / 2011
  • Purpose: To assess the usefulness of implanted fiducial markers in the setup of hypofractionated radiotherapy for prostate cancer patients by comparing a fiducial marker matched setup with a pelvic bone match. Materials and Methods: Four prostate cancer patients treated with definitive hypofractionated radiotherapy between September 2009 and August 2010 were enrolled in this study. Three gold fiducial markers were implanted into the prostate through the rectum under ultrasound guidance about a week before radiotherapy. Glycerin enemas were given prior to each radiotherapy planning CT and every radiotherapy session. Hypofractionated radiotherapy was planned for a total dose of 59.5 Gy in daily fractions of 3.5 Gy using the Novalis system. Orthogonal kV X-rays were taken before radiotherapy. Treatment positions were adjusted according to the results of fusing the fiducial markers on digitally reconstructed radiographs of the radiotherapy plan with those on the orthogonal kV X-rays. When the difference in coordinates from the fiducial marker fusion was less than 1 mm, the patient position was approved for radiotherapy. A virtual bone matching was carried out at the fiducial marker matched position, and the setup difference between the fiducial marker matching and bone matching was then evaluated. Results: Three patients received the planned 17-fraction radiotherapy and the remaining patient underwent 16 fractions. The setup error of the fiducial marker matching was $0.94{\pm}0.62$ mm (range, 0.09 to 3.01 mm; median, 0.81 mm), and the mean lateral, craniocaudal, and anteroposterior errors were $0.39{\pm}0.34$ mm, $0.46{\pm}0.34$ mm, and $0.57{\pm}0.59$ mm, respectively. The setup error of the pelvic bone matching was $3.15{\pm}2.03$ mm (range, 0.25 to 8.23 mm; median, 2.95 mm), and the error in the craniocaudal direction ($2.29{\pm}1.95$ mm) was significantly larger than those in the anteroposterior ($1.73{\pm}1.31$ mm) and lateral ($0.45{\pm}0.37$ mm) directions (p<0.05). The incidences of setup differences over 3 mm and 5 mm among the fractions were 1.5% and 0% for the fiducial marker matching, respectively, and 49.3% and 17.9% for the pelvic bone matching, respectively. Conclusion: A more precise setup of hypofractionated radiotherapy for prostate cancer patients is feasible with implanted fiducial marker matching than with pelvic bone matching. Therefore, a smaller expansion of the planning target volume produces less radiation exposure to adjacent normal tissues, which could ultimately make hypofractionated radiotherapy safer.
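As a small illustration of how the reported setup statistics are derived, the sketch below computes the 3D setup error as the Euclidean norm of the lateral, craniocaudal, and anteroposterior shifts for each fraction, along with the incidence of errors over 3 mm and 5 mm; the shift values are fabricated, not the study's patient data.

```python
import numpy as np

shifts_mm = np.array([   # columns: lateral, craniocaudal, anteroposterior (one row per fraction)
    [0.3, 0.5, 0.6],
    [0.2, 0.4, 0.3],
    [1.8, 2.4, 1.5],
])

errors = np.linalg.norm(shifts_mm, axis=1)       # 3D setup error per fraction
print("mean +/- sd: %.2f +/- %.2f mm" % (errors.mean(), errors.std(ddof=1)))
print("over 3 mm: %.1f%%" % (100 * (errors > 3).mean()))
print("over 5 mm: %.1f%%" % (100 * (errors > 5).mean()))
```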

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin; Kwon, Do Young; Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and the content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conduct opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires a different way for analysts to gain access: open APIs, search tools, DB2DB interfaces, purchasing content, and so on. The second phase is pre-processing to generate useful materials for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum contents, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, reputation, etc. In these phases, we used free software such as the TM, KoNLP, ggplot2, and plyr packages in the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples using open library software packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix, where the density of color indicates intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation with a hierarchical structure, since a tree map can present buzz volume and sentiment with a visualized result for a certain period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
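The analysis in the case study was implemented with R packages (TM, KoNLP, ggplot2, plyr); the sketch below only mirrors the analyzing step in Python for illustration, tallying positive/negative lexicon hits and buzz volume per month, the kind of table that underlies the volume/sentiment graphs and heat maps. The polarity lexicon and posts are made up.

```python
from collections import defaultdict

# Made-up polarity lexicon and posts standing in for the cleansed social media content.
positive = {"delicious", "love", "best"}
negative = {"bland", "expensive", "disappointed"}

posts = [  # (month, text) pairs
    ("2014-01", "the noodles are delicious, best ramen"),
    ("2014-01", "too expensive and a bit bland"),
    ("2014-02", "love the new flavor"),
]

monthly = defaultdict(lambda: {"pos": 0, "neg": 0, "volume": 0})
for month, text in posts:
    tokens = {w.strip(".,!?") for w in text.lower().split()}
    monthly[month]["pos"] += len(tokens & positive)
    monthly[month]["neg"] += len(tokens & negative)
    monthly[month]["volume"] += 1

for month, row in sorted(monthly.items()):   # rows of a volume/sentiment table (heat-map input)
    print(month, row)
```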

$^1H$ MR Spectroscopy of the Normal Human Brains: Comparison between Signa and Echospeed 1.5 T System (정상 뇌의 수소 자기공명분광 소견: 1.5 T Signa와 Echospeed 자기공명영상기기에서의 비교)

  • Kang Young Hye; Lee Yoon Mi; Park Sun Won; Suh Chang Hae; Lim Myung Kwan
    • Investigative Magnetic Resonance Imaging / v.8 no.2 / pp.79-85 / 2004
  • Purpose: To evaluate the usefulness and reproducibility of $^1H$ MRS on different 1.5 T MR machines with different coils, and to compare the SNR, scan time, and spectral patterns in different brain regions in normal volunteers. Materials and Methods: Localized $^1H$ MR spectroscopy ($^1H$ MRS) was performed in a total of 10 normal volunteers (age, 20-45 years) with spectral parameters adjusted by the auto-prescan routine (PROBE package). In each volunteer, MRS was performed three times: on a conventional MRS system (Signa Horizon) with a 1-channel coil and on an upgraded MRS system (Echospeed plus with EXCITE) with both 1-channel and 8-channel coils. Using these three machine-coil combinations, the SNRs of the spectra in both a phantom and the volunteers and the (pre)scan times of MRS were compared. Two regions of the brain (basal ganglia and deep white matter) were examined, and relative metabolite ratios (NAA/Cr, Cho/Cr, and mI/Cr) were measured in all volunteers. For all spectra, a STEAM localization sequence with three-pulse CHESS $H_2O$ suppression was used, with the following acquisition parameters: TR=3.0/2.0 sec, TE=30 msec, TM=13.7 msec, SW=2500 Hz, SI=2048 pts, AVG=64/128, and NEX=2/8 (Signa/Echospeed). Results: The SNR was over about $30\%$ higher on the Echospeed machine, and the prescan and scan times were almost the same across machines and coils. Reliable spectra were obtained on both MRS systems, and there were no significant differences in spectral patterns or relative metabolite ratios in the two brain regions (p>0.05). Conclusion: Both the conventional and new MRI systems are highly reliable and reproducible for $^1H$ MR spectroscopic examinations of the human brain, and there are no significant differences in $^1H$ MRS applications between the two MRI systems.
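A minimal sketch of the reproducibility check described above: metabolite ratios (e.g., NAA/Cr) measured for the same volunteers on the two systems are compared with a paired t-test, with p > 0.05 read as no significant difference. The values are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Invented NAA/Cr ratios for the same five volunteers on the two systems (basal ganglia).
naa_cr_signa     = np.array([1.95, 2.01, 1.88, 2.10, 1.97])
naa_cr_echospeed = np.array([1.92, 2.05, 1.90, 2.07, 1.99])

t, p = stats.ttest_rel(naa_cr_signa, naa_cr_echospeed)   # paired comparison per volunteer
print(f"paired t = {t:.2f}, p = {p:.3f}")                # p > 0.05 -> no significant difference
```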


Histological and Biochemical Studies on the Rooting of Hard-wood Cuttings in Mulberry (Morus species) (뽕나무 古條揷木의 發根에 關한 組織 및 生化學的 硏究)

  • Lim, Su-Ho
    • Journal of Sericultural and Entomological Science / v.23 no.1 / pp.1-31 / 1981
  • The rootability of hardwood cuttings of mulberry is related not only to histological characteristics but also to biochemical properties. In this connection, the characteristics of the hardwood cuttings were histologically observed and the growth substances produced by the cuttings were identified by means of a mung bean bioassay. Amino acid, carbohydrate, and nucleic acid contents, and the C/N ratio, were also analysed. The results are summarized as follows. 1. There were differences in the rootability of cuttings between mulberry species and varieties. Among the three mulberry species tested, Morus Lhou Koidz. showed the highest rootability while M. bombycis showed the lowest. Regarding varietal differences, the varieties could be grouped according to rootability: high (above 80%), medium (41-79%), and low (below 40%). The higher varieties were Kemmochi, Nakamaki, Kosen, and Wusuba roso. 2. The histological characteristic of the hardwood cuttings most closely related to rootability was the cell layer arrangement in the sclerenchyma tissue. The lower rootability varieties developed two or three overlapping cell layers in the bark tissue, while in the higher rootability varieties they were scattered over the primary cortex. 3. In the higher rootability varieties, there was a positive correlation between the development of root primordia and the rootability of the hardwood cuttings. It was also shown that the size of the primordia and the surface area of the lenticels were closely related to the rootability of the cuttings. 4. The effects of growth substances extracted from the hardwood cuttings were determined by the mung bean bioassay. The higher rootability varieties usually showed higher activities of the growth substances; in contrast, the lower rootability varieties showed higher activities of the inhibitory substances. 5. The substance separated by paper chromatography at an $R_f$ value ranging from 0.3 to 0.5 was identified as indole acetic acid. The other substances detected at $R_f$ values ranging from 0.8 to 1.0 and from the origin to 0.1 were also responsible for rooting. 6. There exists a quantitatively different distribution of growth substances acting in a synergistic system in the tissues of cuttings, and the balance between growth and inhibitory substances gives rise to the development of rooting. In particular, when no substances descended from the winter buds, no rooting of cuttings occurred, but these substances were produced a week after planting in a warm environment. 7. There were positive correlations between carbohydrate ($r=0.72^*$) and total sugar ($r=0.67^*$) content and rootability, respectively, but a negative correlation between reducing sugars ($r=-0.75^*$) and rootability. 8. A high C/N ratio gave rise to high rootability ($r=0.67^*$); rootability therefore depended on a high amount of carbohydrate rather than nitrogen in the cuttings. 9. The RNA and DNA content of the cuttings did not change for up to two weeks after the cuttings were planted; then an increase in RNA content took place only in the high rootability varieties. 10. There were quantitative and qualitative differences in the amino acid composition between the high and low rootability varieties. More aspartic acid and cystine were found in the higher rootability varieties than in the lower rootability varieties.


Strategy for Store Management Using SOM Based on RFM (RFM 기반 SOM을 이용한 매장관리 전략 도출)

  • Jeong, Yoon Jeong; Choi, Il Young; Kim, Jae Kyeong; Choi, Ju Choel
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.93-112 / 2015
  • With changes in consumers' consumption patterns, existing retail shops have evolved into hypermarkets or convenience stores offering mostly groceries and daily products. Therefore, it is important to maintain proper inventory levels and product configuration in order to utilize the limited space in the retail store effectively and increase sales. Accordingly, this study proposes a product configuration and inventory level strategy based on the RFM (Recency, Frequency, Monetary) model and SOM (self-organizing map) for managing the retail shop effectively. The RFM model is an analytic model for analyzing customer behavior based on customers' past buying activities, and it can differentiate important customers in large data sets using three variables. R represents recency, which refers to the last purchase of commodities; the more recently a customer has purchased, the larger the R value. F represents frequency, which refers to the number of transactions in a particular period, and M represents monetary, which refers to the amount of money spent in a particular period. Thus, the RFM method is known to be a very effective model for customer segmentation. In this study, SOM cluster analysis was performed using normalized values of the RFM variables. SOM is regarded as one of the most distinguished artificial neural network models among unsupervised learning tools. It is a popular tool for clustering and visualization of high-dimensional data in such a way that similar items are grouped spatially close to one another, and it has been successfully applied in various technical fields for finding patterns. In our research, the procedure tries to find sales patterns by analyzing product sales records with Recency, Frequency, and Monetary values, and to suggest a business strategy we build a decision tree based on the SOM results. To validate the proposed procedure, we adopted the M-mart data collected between 2014.01.01 and 2014.12.31. Each product gets R, F, and M values, and the products are clustered into 9 clusters using the SOM. We also performed three tests using the weekday data, weekend data, and whole data in order to analyze changes in sales patterns. In order to propose a strategy for each cluster, we examine the criteria of the product clustering; the clusters obtained from the SOM can be explained by their characteristics in the decision tree. As a result, we can suggest an inventory management strategy for each of the 9 clusters through the suggested procedure. Products in the cluster with the highest values of all three variables (R, F, M) need a high inventory level and should be placed where customer traffic can be increased. In contrast, products in the cluster with the lowest values of all three variables need a low inventory level and can be placed where visibility is low. Products in the cluster with the highest R value are usually newly released products and should be placed at the front of the store. Managers should gradually decrease inventory levels for products in the cluster with the highest F value that were purchased in the past, because we assume that this cluster has lower R and M values than the average, from which it can be deduced that the products have sold poorly recently and total sales will also be lower than the frequency suggests. The procedure presented in this study is expected to contribute to raising the profitability of the retail store. The paper is organized as follows. The second chapter briefly reviews the literature related to this study.
The third chapter presents the proposed procedure, and the fourth chapter applies the procedure to actual product sales data. Finally, the fifth chapter describes the conclusions of the study and further research.
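A compact sketch of the clustering step under simplified assumptions: normalized (R, F, M) values per product are mapped onto a 3x3 self-organizing map (9 clusters, as in the study) using a minimal from-scratch SOM; the data are random placeholders rather than the M-mart records, and the subsequent decision-tree step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
rfm = rng.random((200, 3))            # stand-in for normalized (R, F, M) values per product

grid = (3, 3)                         # 3x3 map -> 9 clusters, as in the study
weights = rng.random(grid + (3,))     # one 3-dimensional weight vector per map node
coords = np.array([[i, j] for i in range(grid[0]) for j in range(grid[1])])

for step in range(2000):              # simple online SOM training loop
    lr = 0.5 * (1 - step / 2000)                   # decaying learning rate
    sigma = 1.5 * (1 - step / 2000) + 0.1          # decaying neighborhood radius
    x = rfm[rng.integers(len(rfm))]
    bmu = np.unravel_index(np.linalg.norm(weights - x, axis=2).argmin(), grid)  # best-matching unit
    infl = np.exp(-np.linalg.norm(coords - np.array(bmu), axis=1) ** 2 / (2 * sigma ** 2)).reshape(grid)
    weights += lr * infl[..., None] * (x - weights)

# Assign each product to one of the 9 clusters; these labels would feed the decision-tree step.
clusters = [np.unravel_index(np.linalg.norm(weights - x, axis=2).argmin(), grid) for x in rfm]
print(clusters[:5])
```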

Statistical Analysis of Operating Efficiency and Failures of a Medical Linear Accelerator for Ten Years (선형가속기의 10년간 가동률과 고장률에 관한 통계분석)

  • Ju Sang Gyu; Huh Seung Jae; Han Youngyih; Seo Jeong Min; Kim Won Kyou; Kim Tae Jong; Shin Eun Hyuk; Park Ju Young; Yeo Inhwan J.; Choi David R.; Ahn Yong Chan; Park Won; Lim Do Hoon
    • Radiation Oncology Journal / v.23 no.3 / pp.186-193 / 2005
  • Purpose: To improve the management of a medical linear accelerator, the records of operational failures of a Varian CL2100C over a ten-year period were retrospectively analyzed. Materials and Methods: The failures were classified according to the functional subunits involved, with each class rated into one of three levels depending on the operational conditions. The relationships between the failure rate and the working ratio and between the failure rate and the outside temperature were investigated. In addition, the average lifetime of the main parts and the operating efficiency over the last 4 years were analyzed. Results: Among the recorded failures (587 in total), the most frequent failures were observed in the parts related to the collimation system, including the monitor chamber, which accounted for $20\%$ of all failures. With regard to operational conditions, second-level failures, which temporarily interrupted treatments, were the most frequent. Third-level failures, which interrupted treatment for more than several hours, were mostly caused by the accelerating subunit. The number of failures increased with the number of treatments and the operating time. The average lifetimes of the klystron and thyratron became shorter as the working ratio increased, and were $42\%$ and $83\%$ of the expected values, respectively. The operating efficiency was maintained at $95\%$ or higher, but this value decreased slightly. There was no significant correlation between the number of failures and the outside temperature. Conclusion: Maintaining detailed records of equipment problems and failures over a long period can provide good knowledge of equipment function as well as the capability to predict future failures. More rigorous equipment maintenance is required for old medical linear accelerators to avoid serious failures in advance and to improve the quality of patient treatment.
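A small illustration of the bookkeeping behind such statistics: operating efficiency as uptime over scheduled time, and the correlation between monthly failure counts and workload. The monthly figures are fabricated placeholders, not the ten-year log analyzed in the study.

```python
import numpy as np

# Fabricated monthly figures (not the ten-year log from the study).
scheduled_h = np.array([200, 210, 205, 198])    # scheduled operating hours per month
downtime_h  = np.array([  6,  12,   8,  11])    # downtime caused by failures per month
treatments  = np.array([900, 1020, 980, 1100])  # treatments delivered per month
failures    = np.array([  4,    9,    6,   10]) # recorded failures per month

efficiency = 100 * (scheduled_h - downtime_h) / scheduled_h   # operating efficiency (%)
r = np.corrcoef(treatments, failures)[0, 1]                   # failures vs. workload
print("operating efficiency (%):", efficiency.round(1))
print("failures vs. treatments, Pearson r = %.2f" % r)
```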