• Title/Summary/Keyword: Rates

A Study on the Safety of Mycotoxins in Grains and Commonly Consumed Foods (곡류 등 다소비 식품 중 곰팡이독소 안전성 조사 연구)

  • Kim, Jae-Kwan;Kim, Young-Sug;Lee, Chang-Hee;Seo, Mi Young;Jang, Mi Kyung;Ku, Eun-Jung;Park, Kwang-Hee;Yoon, Mi-Hye
    • Journal of Food Hygiene and Safety
    • /
    • v.32 no.6
    • /
    • pp.470-476
    • /
    • 2017
  • The purpose of this study was to investigate and evaluate the safety of grains, nut products, beans, and oilseeds sold in Gyeonggi province by analyzing mycotoxins. A multi-mycotoxin analysis method based on LC-MS/MS was validated and applied for the determination of eight mycotoxins, including aflatoxins (B1, B2, G1, and G2), fumonisins (B1, B2), zearalenone, and ochratoxin A, in 134 samples. The limit of detection (LOD) and limit of quantitation (LOQ) for the eight mycotoxins ranged from 0.14 to 8.25 μg/kg and from 1.08 to 7.21 μg/kg, respectively. Recovery rates of the mycotoxins ranged from 61.1 to 97.5%, with RSDs of 1.0~14.5% (n = 3). Fumonisins B1 and B2, zearalenone, and ochratoxin A were detected in 22 samples, indicating that 27% of grains, 12.5% of beans, and 11.8% of oilseeds were contaminated. Fumonisin and zearalenone were detected simultaneously in 2 adlay and 3 sorghum samples. Fumonisins B1 and B2 were detected together in most samples, whereas fumonisin B1 alone was detected in 1 adlay, 1 millet, and 1 sesame sample. The average detected amount of fumonisin was 49.3 μg/kg for grains and 10.1 μg/kg for oilseeds. The average detected amount of zearalenone was 1.9 μg/kg for grains and 1.5 μg/kg for beans. In addition, the average amount of ochratoxin A was 0.08 μg/kg for grains. The calculated exposures to fumonisin, zearalenone, and ochratoxin A from grains, beans, and oilseeds were all below the PMTDI/PTWI.
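
Since the final safety call above rests on comparing estimated dietary exposure with the PMTDI/PTWI, a minimal Python sketch of that arithmetic follows. The intake and body-weight figures and the guidance values are illustrative assumptions, not numbers from the study; only the 49.3 μg/kg mean fumonisin level in grains is taken from the abstract.

```python
# Dietary exposure sketch: exposure (ug/kg bw/day) =
#   food concentration (ug/kg) x daily intake (kg) / body weight (kg).
# Guidance values and intake figures below are assumptions for illustration.

PMTDI_UG_PER_KG_BW = {
    "fumonisin (B1+B2)": 2.0,   # assumed JECFA PMTDI, ug/kg bw/day
    "zearalenone": 0.5,         # assumed JECFA PMTDI, ug/kg bw/day
}

def daily_exposure(conc_ug_per_kg_food, intake_kg_per_day, body_weight_kg):
    return conc_ug_per_kg_food * intake_kg_per_day / body_weight_kg

# 49.3 ug/kg: the study's mean fumonisin level in grains.
exposure = daily_exposure(49.3, intake_kg_per_day=0.15, body_weight_kg=60.0)
limit = PMTDI_UG_PER_KG_BW["fumonisin (B1+B2)"]
print(f"exposure = {exposure:.3f} ug/kg bw/day "
      f"({100 * exposure / limit:.1f}% of the assumed PMTDI)")
```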

Effectiveness Assessment on Jaw-Tracking in Intensity Modulated Radiation Therapy and Volumetric Modulated Arc Therapy for Esophageal Cancer (식도암 세기조절방사선치료와 용적세기조절회전치료에 대한 Jaw-Tracking의 유용성 평가)

  • Oh, Hyeon Taek;Yoo, Soon Mi;Jeon, Soo Dong;Kim, Min Su;Song, Heung Kwon;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.1
    • /
    • pp.33-41
    • /
    • 2019
  • Purpose: To evaluate the effectiveness of the jaw-tracking (JT) technique in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) for esophageal cancer by analyzing the volume doses of surrounding normal organs, including the low-dose volume regions. Materials and Methods: A total of 27 patients who received radiation therapy for esophageal cancer on a VitalBeam™ (Varian Medical Systems, USA) at our hospital were selected. Using the Eclipse system (ver. 13.6, Varian, USA), treatment plans were created with jaw tracking (JT) and without it (NJT) for patients with a T-shaped planning target volume (PTV) including the supraclavicular lymph nodes (SCL). PTVs were classified by whether the celiac area was included, to identify the influence of the radiation field. To compare the treatment plans, the organs at risk (OAR) were defined as both lungs, the heart, and the spinal cord, and the conformity index (CI) and homogeneity index (HI) were evaluated. Portal dosimetry was performed with an electronic portal imaging device (EPID) to verify clinical applicability, and gamma analysis was performed with low-dose thresholds of 0%, 5%, and 10% of the radiation field. Results: All treatment plans achieved gamma pass rates of 95% with 3 mm/3% criteria. At a threshold of 10%, both JT and NJT passed with rates above 95%; in IMRT, the gamma pass rates of both decreased by more than 1% as the low-dose threshold was lowered to 5% and 0%. For JT in IMRT with a PTV excluding the celiac area, V5 and V10 of both lungs decreased by 8.5% and 5.3% on average, respectively, and by up to 14.7%; Dmean decreased by 72.3 ± 51 cGy, and the dose reduction was larger when the PTV included the celiac area. The Dmean of the heart decreased by 68.9 ± 38.5 cGy and that of the spinal cord by 39.7 ± 30 cGy. For JT in VMAT, V5 of the lungs decreased by 2.5% on average, with small decreases in the heart and spinal cord; the dose reduction from JT increased when the PTV included the celiac area. Conclusion: In radiation treatment planning for esophageal cancer, IMRT showed a significant decrease in V5 and V10 of both lungs when JT was applied, and the dose reduction was greater when the irradiated low-dose area was larger. Therefore, JT is more advantageous in IMRT than in VMAT for radiation therapy of esophageal cancer and can protect normal organs from MLC leakage and transmitted dose in the low-dose region.
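
For readers unfamiliar with the 3 mm/3% gamma analysis used in the EPID-based plan verification above, the following is a minimal sketch on a toy 1-D profile. Real portal dosimetry compares 2-D dose planes; the profile, names, and global-normalization choice here are assumptions, not the authors' implementation.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dta_mm=3.0, dd_pct=3.0,
                    low_dose_threshold_pct=10.0):
    """Fraction (%) of evaluated reference points with gamma <= 1."""
    x = np.arange(len(ref)) * spacing_mm
    dd = dd_pct / 100.0 * ref.max()              # global dose criterion
    cutoff = low_dose_threshold_pct / 100.0 * ref.max()
    gammas = []
    for i, d_ref in enumerate(ref):
        if d_ref < cutoff:                       # skip low-dose region
            continue
        dist = (x - x[i]) / dta_mm               # distance-to-agreement term
        dose = (meas - d_ref) / dd               # dose-difference term
        gammas.append(np.sqrt(dist**2 + dose**2).min())
    return 100.0 * (np.asarray(gammas) <= 1.0).mean()

ref = np.exp(-np.linspace(-3, 3, 121) ** 2)      # toy Gaussian dose profile
meas = ref * 1.02                                # 2 % scaled "measurement"
print(f"pass rate: {gamma_pass_rate(ref, meas, spacing_mm=1.0):.1f} %")
```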

Analysis of HBeAg and HBV DNA Detection in Hepatitis B Patients Treated with Antiviral Therapy (항 바이러스 치료중인 B형 간염환자에서 HBeAg 및 HBV DNA 검출에 관한 분석)

  • Cheon, Jun Hong;Chae, Hong Ju;Park, Mi Sun;Lim, Soo Yeon;Yoo, Seon Hee;Lee, Sun Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.1
    • /
    • pp.35-39
    • /
    • 2019
  • Purpose: Hepatitis B virus (HBV) infection is a major worldwide public health problem and a leading cause of chronic hepatitis, liver cirrhosis, and liver cancer. Serologic tests for hepatitis B virus are essential for diagnosing and treating these diseases. In addition, with the development of molecular diagnostics, detection of HBV DNA in serum both diagnoses HBV infection and serves as an important indicator for assessing the response to antiviral treatment. We performed HBeAg assays using an immunoradiometric assay (IRMA) and a chemiluminescent microparticle immunoassay (CMIA) in hepatitis B patients treated with antiviral agents, and measured and compared the detection rate of HBV DNA in serum by real-time polymerase chain reaction (RT-PCR). Materials and Methods: HBeAg serologic tests and HBV DNA quantification were conducted on 270 hepatitis B patients undergoing antiviral treatment after a diagnosis of hepatitis B virus infection. Two serologic tests with different detection principles (IRMA, CMIA) were applied for HBeAg. Serum HBV DNA was quantified by real-time PCR using the Abbott m2000 System. Results: The detection rate of HBeAg was 24.1% (65/270) for IRMA and 82.2% (222/270) for CMIA. The detection rate of serum HBV DNA by real-time RT-PCR was 29.3% (79/270). The measured serum HBV DNA concentration was 4.8×10⁷ ± 1.9×10⁸ IU/mL (mean ± SD), with a minimum of 16 IU/mL, a maximum of 1.0×10⁹ IU/mL, and a quantitative detection limit of 15 IU/mL. The detection rates and concentrations of HBV DNA by group, according to the HBeAg serologic (IRMA, CMIA) results, were as follows: 1) Group I (IRMA negative, CMIA positive, N = 169): detection rate 17.7% (30/169), 6.8×10⁵ ± 1.9×10⁶ IU/mL; 2) Group II (IRMA positive, CMIA positive, N = 53): detection rate 62.3% (33/53), 1.1×10⁸ ± 2.8×10⁸ IU/mL; 3) Group III (IRMA negative, CMIA negative, N = 36): detection rate 36.1% (13/36), 3.0×10⁵ ± 1.1×10⁶ IU/mL; 4) Group IV (IRMA positive, CMIA negative, N = 12): detection rate 25% (3/12), 1.3×10³ ± 1.1×10³ IU/mL. Conclusion: The HBeAg detection rates of the two serologic tests differed greatly. This difference is likely due to several factors, such as the characteristics and epitopes of the antibodies used in each assay kit and the HBV genotype. The group-wise HBV DNA detection rates and concentrations confirmed the highest detection rate and concentration in Group II (IRMA positive, CMIA positive, N = 53).
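
The four serologic groups above form a 2×2 IRMA/CMIA agreement table. As a small check, the sketch below reconstructs that table from the reported group sizes and recomputes the per-assay detection rates (function and variable names are ours):

```python
# Reconstructing the IRMA/CMIA agreement table from the reported group
# sizes and computing per-assay HBeAg detection rates and raw agreement.

groups = {  # (IRMA result, CMIA result): number of patients
    ("neg", "pos"): 169,   # Group I
    ("pos", "pos"): 53,    # Group II
    ("neg", "neg"): 36,    # Group III
    ("pos", "neg"): 12,    # Group IV
}
total = sum(groups.values())                                  # 270
irma_pos = sum(n for (i, _), n in groups.items() if i == "pos")
cmia_pos = sum(n for (_, c), n in groups.items() if c == "pos")
agree = groups[("pos", "pos")] + groups[("neg", "neg")]

print(f"IRMA detection rate: {100 * irma_pos / total:.1f}%")  # 24.1%
print(f"CMIA detection rate: {100 * cmia_pos / total:.1f}%")  # 82.2%
print(f"raw agreement:       {100 * agree / total:.1f}%")
```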

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on chart shapes rather than on complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although the short-term predictive power of such models has improved, long-term predictive power remains limited, so they are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but these can be weak in practice because whether the patterns found are suitable for trading is a separate question. Such studies find a meaningful pattern, locate points that match it, and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates hypothetical returns, it can diverge considerably from reality. Existing research tries to find patterns with predictive power for stock prices; this study instead proposes defining the patterns first and trading when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, there have been no performance reports from actual markets. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy. In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects a realistic situation because performance is measured assuming that both the buy and the sell were actually executed. We tested three ways to calculate turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then identifies the vertices. In the second, the high-low-line zig-zag method, a high price that touches the n-day high line is taken as a peak, and a low price that touches the n-day low line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices on both its left and right is taken as a peak, and a central low price that is lower than the n low prices on both sides is taken as a valley. The swing wave method was superior to the other methods in our tests; trading after confirming the completion of a pattern appears more effective than trading while the pattern is still incomplete.
Because the number of cases was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable way to find patterns with high success rates. We also ran the simulation using walk-forward analysis (WFA), which separates the test period from the application period, allowing the system to respond appropriately to market changes. We optimized at the level of a stock portfolio, because optimizing variables for each individual stock risks over-optimization; we set the number of constituent stocks to 20 to gain diversification benefits while avoiding over-fitting. We tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was second best. This shows that prices need some volatility for patterns to take shape, but that the highest volatility is not necessarily best.
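
Of the three turning-point rules tested, the swing wave method performed best. A minimal sketch of that rule follows, assuming simple NumPy arrays of highs and lows; parameter names and the toy series are illustrative, not the authors' code.

```python
import numpy as np

# Swing wave turning points: a bar is a peak if its high exceeds the highs
# of the n bars on each side, and a valley if its low is below the lows of
# the n bars on each side. The five most recent turning points then define
# an M/W-type pattern.

def swing_turning_points(high, low, n=3):
    """Return [(index, price, 'peak'|'valley'), ...] in time order."""
    points = []
    for i in range(n, len(high) - n):
        window_h = np.concatenate([high[i - n:i], high[i + 1:i + n + 1]])
        window_l = np.concatenate([low[i - n:i], low[i + 1:i + n + 1]])
        if high[i] > window_h.max():
            points.append((i, float(high[i]), "peak"))
        elif low[i] < window_l.min():
            points.append((i, float(low[i]), "valley"))
    return points

rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(0, 1, 200)) + 100   # toy price series
high, low = close + 0.5, close - 0.5
print(swing_turning_points(high, low, n=3)[-5:])  # last five turning points
```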

Analysis of shopping website visit types and shopping pattern (쇼핑 웹사이트 탐색 유형과 방문 패턴 분석)

  • Choi, Kyungbin;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.85-107
    • /
    • 2019
  • Online consumers browse products from a particular product line or brand in order to purchase, or simply browse widely without making a purchase. Research on the behavior and purchases of online consumers has progressed steadily, and services and applications based on consumer behavior data have been developed in practice. In recent years, customization strategies and recommendation systems have been deployed thanks to the development of big data technology, in attempts to optimize users' shopping experience. Even so, only a small fraction of website visits actually convert to the purchase stage, because online consumers do not visit a website only to purchase products; they use and browse websites differently according to their shopping motives and purposes. It is therefore important to analyze the various types of visits, not just purchase visits, in order to understand online consumer behavior. In this study, we clustered sessions based on the clickstream data of an e-commerce company in order to characterize the diversity and complexity of online consumers' search behavior and to derive a typology of it. For the analysis, we converted more than 8 million page-level data points into visit-level sessions, yielding over 500,000 website visit sessions in total. For each session, 12 features such as page views, duration, search diversity, and page-type concentration were extracted for clustering. Considering the size of the data set, we used the Mini-Batch K-means algorithm, which has advantages in learning speed and efficiency while maintaining clustering performance similar to K-means. The optimal number of clusters was found to be four, and differences in session-level characteristics and purchase rates were identified for each cluster. An online consumer visits a website several times, learns about products, and then decides to purchase. To analyze the purchase process across multiple visits, we constructed consumer visit-sequence data based on the navigation patterns derived from the clustering analysis. The visit-sequence data consist of series of visits leading up to a purchase, and the items in each sequence are the cluster labels derived above. We built separate sequence data for consumers who made purchases and for consumers who only explored products without purchasing during the same period, and then applied sequential pattern mining to extract frequent patterns from each. The minimum support was set to 10%, and the frequent patterns consist of sequences of cluster labels. While some patterns were common to both data sets, other frequent patterns were derived from only one of them. Comparative analysis of the extracted frequent patterns showed that consumers who made purchases repeatedly visited to examine a specific product before deciding to purchase it.
The implication of this study is that we analyzed online consumers' search types using large-scale clickstream data and analyzed their patterns to explain purchase-process behavior from a data-driven point of view. Most studies on the typology of online consumers have focused on the characteristics of each type and on which factors are key in distinguishing the types. In this study, we not only typed the behavior of online consumers but also analyzed how the types are ordered relative to one another into series of search patterns. In addition, online retailers will be able to improve purchase conversion through marketing strategies and recommendations tailored to the various visit types, and to evaluate the effect of such strategies through changes in consumers' visit patterns.
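
As a rough illustration of the session-clustering step described above, the sketch below runs Mini-Batch K-means on synthetic session features; the four features shown stand in for the study's 12, and all data are placeholders for real clickstream logs.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.preprocessing import StandardScaler

# Synthetic session-level features standing in for the study's 12
# (page views, duration, search diversity, page-type concentration, ...).
rng = np.random.default_rng(42)
n_sessions = 500_000
X = np.column_stack([
    rng.poisson(12, n_sessions),        # page views per session
    rng.exponential(180, n_sessions),   # duration in seconds
    rng.uniform(0, 1, n_sessions),      # search diversity
    rng.uniform(0, 1, n_sessions),      # page-type concentration
])

# Mini-Batch K-means scales to this volume while approximating K-means.
model = MiniBatchKMeans(n_clusters=4, batch_size=10_000, random_state=0)
labels = model.fit_predict(StandardScaler().fit_transform(X))

# The cluster labels can then be strung into per-consumer visit sequences
# (one label per visit) for sequential pattern mining.
print(np.bincount(labels))
```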

The Evaluation of Non-Coplanar Volumetric Modulated Arc Therapy for Brain stereotactic radiosurgery (뇌 정위적 방사선수술 시 Non-Coplanar Volumetric Modulated Arc Therapy의 유용성 평가)

  • Lee, Doo Sang;Kang, Hyo Seok;Choi, Byoung Joon;Park, Sang Jun;Jung, Da Ee;Lee, Geon Ho;Ahn, Min Woo;Jeon, Myeong Soo
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.9-16
    • /
    • 2018
  • Purpose: Brain stereotactic radiosurgery can treat, non-invasively, diseases that carry high complication rates under surgical operation. However, because it uses radiation, it may be accompanied by radiation-induced side effects, as in fractionated radiation therapy. The effects of coplanar volumetric modulated arc therapy (C-VMAT) and non-coplanar volumetric modulated arc therapy (NC-VMAT) on surrounding normal tissue have been analyzed for fractionated radiation therapy of sites such as the head and neck, but no such analysis has been reported for brain stereotactic radiosurgery. In this study, we evaluated the usefulness of NC-VMAT by comparing C-VMAT and NC-VMAT plans for patients who underwent brain stereotactic radiosurgery. Methods and Materials: Thirteen brain stereotactic radiosurgery treatment plans were created with each of C-VMAT and NC-VMAT. The planning target volume (PTV) ranged from 0.78 cc to 12.26 cc, and prescription doses were between 15 and 24 Gy. The treatment machine was a TrueBeam STx (Varian Medical Systems, USA), and the planning energy was 6 MV flattening-filter-free (6FFF) X-rays. The C-VMAT plans used half 2-arc or full 2-arc arrangements, while the NC-VMAT plans used 3 to 7 arcs of 40 to 190 degrees, with couch angles planned at 3 to 7 positions. Results: The mean maximum dose was 105.1 ± 1.37% for C-VMAT and 105.8 ± 1.71% for NC-VMAT. The conformity index of C-VMAT was 1.08 ± 0.08 and its homogeneity index 1.03 ± 0.01; for NC-VMAT, the conformity index was 1.17 ± 0.1 and the homogeneity index 1.04 ± 0.01. V2, V8, V12, V18, and V24 of the brain were 176 ± 149.36 cc, 31.50 ± 25.03 cc, 16.53 ± 12.63 cc, 8.60 ± 6.87 cc, and 4.03 ± 3.43 cc for C-VMAT, and 135.55 ± 115.93 cc, 24.34 ± 17.68 cc, 14.74 ± 10.97 cc, 8.55 ± 6.79 cc, and 4.23 ± 3.48 cc for NC-VMAT. Conclusions: The maximum dose, conformity index, and homogeneity index showed no significant differences between C-VMAT and NC-VMAT. V2 to V18 of the brain differed by 0.5% to 48%, and V19 to V24 by 0.4% to 4.8%. Comparing the mean values of V12, at which radionecrosis begins to develop, NC-VMAT was about 12.2% lower than C-VMAT. These results suggest that using NC-VMAT can reduce the V2 to V18 volumes of the brain and thereby reduce radionecrosis.
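
The conformity and homogeneity indices compared above can be computed in several ways; the sketch below uses common textbook definitions (an assumption, since the abstract does not state which variants were used):

```python
# Common plan-quality index definitions (one of several variants):
#   conformity index  CI = V_RI / TV  (prescription isodose volume over
#                                      target volume)
#   homogeneity index HI = D_max / D_prescription

def conformity_index(prescription_isodose_cc: float, target_cc: float) -> float:
    return prescription_isodose_cc / target_cc

def homogeneity_index(d_max_gy: float, d_prescription_gy: float) -> float:
    return d_max_gy / d_prescription_gy

# Toy plan: 24 Gy prescription, 25.2 Gy maximum dose (105 %), and a
# 1.3 cc prescription isodose volume covering a 1.2 cc target.
print(f"CI = {conformity_index(1.3, 1.2):.2f}")   # ~1.08
print(f"HI = {homogeneity_index(25.2, 24.0):.2f}")  # ~1.05
```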

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to competitive environments. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers: retaining existing customers is far more economical, and the cost of acquiring a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent churn and improve retention rates are known not only to increase profitability but also to improve their brand image through higher customer satisfaction. Customer churn prediction, long a sub-area of CRM research, has recently become more prominent as a big-data-based performance-marketing theme with the development of business machine learning technology. To date, churn prediction research has been most active in highly competitive sectors where churn management is urgent, such as mobile telecommunications, finance, distribution, and gaming. These studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features effective for forecasting churn, or developing new ensemble techniques, and they were limited in practical terms because most treated the entire customer base as a single group when developing the predictive model. The main purpose of existing research was thus to improve the model itself, with relatively little work on improving the overall churn-prediction process. In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as one group. It is therefore desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers with clustering techniques and applied a churn prediction model per group. Although this can produce better predictions than a single model for the whole population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that computes distances over the inputs and does not reflect an entity's strategic intent, such as loyalty. Assuming that successful churn management comes more from improving the overall process than from the model alone, this study proposes a segment-based customer churn prediction process based on two-dimensional customer loyalty (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation).
CCP/2DL is a churn prediction process that segments customers along two loyalty dimensions, quantitative and qualitative, performs a secondary grouping of the segments according to their churn patterns, and then independently applies heterogeneous churn prediction models to each churn-pattern group. To assess the relative merit of the proposed process, its performance was compared with the two most common alternatives: the general churn prediction process and the clustering-based churn prediction process. The general process, as used here, predicts churn for the customer base as a single group with a standard machine learning model; the clustering-based process first segments customers with clustering techniques and then builds a churn prediction model for each group. In an evaluation conducted in cooperation with a global NGO, the proposed CCP/2DL outperformed the other methodologies in predicting churn. This churn prediction process is not only effective for prediction but can also serve as a strategic basis for gathering diverse customer insights and carrying out related performance-marketing activities.
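
A minimal sketch of the segment-then-model idea behind CCP/2DL follows. The 2×2 loyalty grid, thresholds, churn labels, and per-segment model choices are all illustrative assumptions; only the overall structure (separate, possibly heterogeneous models per loyalty segment) mirrors the process described.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 6))                  # behavioral features (synthetic)
quant_loyal = X[:, 0] > 0                    # hypothetical quantitative loyalty
qual_loyal = X[:, 1] > 0                     # hypothetical qualitative loyalty
segment = quant_loyal.astype(int) * 2 + qual_loyal.astype(int)  # 2x2 grid
churn_prob = 0.2 + 0.1 * (~quant_loyal)      # less loyal customers churn more
y = (rng.random(n) < churn_prob).astype(int)

# Train one churn model per loyalty segment; model families may differ
# across segments, as the process allows.
models = {}
for s in np.unique(segment):
    mask = segment == s
    clf = (LogisticRegression(max_iter=1000) if s in (0, 3)
           else GradientBoostingClassifier())
    models[s] = clf.fit(X[mask], y[mask])
    print(f"segment {s}: n={mask.sum()}, model={type(clf).__name__}, "
          f"train acc={clf.score(X[mask], y[mask]):.2f}")
```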

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase histories. However, because the traditional technique computes similarities from direct connections and common features among customers, it has difficulty calculating similarity for new customers or products. For this reason, hybrid techniques combining content-based filtering have been designed. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks, calculating similarities indirectly through similar customers located between a pair of customers: a customer network is created from purchase data, and the similarity between two customers is computed from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation, and network centrality metrics can be used in calculating it. Different centrality metrics matter here because they may affect recommendation performance differently; moreover, this study examines whether the effect of each centrality metric on recommendation performance varies across recommender algorithms. Recommendation techniques using network analysis can also be expected to increase performance when applied not only to new customers or products but to all customers or products. By treating a customer's purchase of an item as a link created between the customer and the item on the network, predicting user acceptance of a recommendation reduces to predicting whether a new link will be created between them. Since classification models fit this binary link/no-link problem, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for performance evaluation were order records collected from an online shopping mall over four years and two months; the first three years and eight months were used to construct the social network, and the records of the following four months were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model showed that recommendation acceptance rates differed meaningfully across metrics for each algorithm. We analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality recorded the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality showed similar performance across all models. Degree centrality ranked in the middle across models, while betweenness centrality always ranked higher than degree centrality. Finally, closeness centrality showed distinct performance differences depending on the model.
It ranks first, with numerically high performance, in the logistic regression, artificial neural network, and decision tree models, but records very low rankings, with low performance, in the support vector machine and k-nearest neighbors models. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork connecting two nodes can effectively predict the connectivity between those nodes in a social network. Furthermore, each metric performs differently depending on the classification model type, implying that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
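
As a minimal sketch of the link-prediction framing described above, the code below derives the four centrality metrics with NetworkX and feeds them to one of the study's classifiers; the toy graph, edge sampling, and feature construction are our assumptions, not the paper's pipeline.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()                       # stand-in for the purchase network
deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)
eig = nx.eigenvector_centrality(G, max_iter=1000)

def pair_features(u, v):
    """Combine both endpoints' centralities into one feature vector."""
    return [deg[u] + deg[v], btw[u] + btw[v],
            clo[u] + clo[v], eig[u] + eig[v]]

pos = list(G.edges())                            # existing links (label 1)
neg = list(nx.non_edges(G))[:len(pos)]           # sampled non-links (label 0)
X = np.array([pair_features(u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

# Link prediction as binary classification over centrality features.
clf = LogisticRegression().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```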

An Exploratory Study on the Competition Patterns Between Internet Sites in Korea (한국 인터넷사이트들의 산업별 경쟁유형에 대한 탐색적 연구)

  • Park, Yoonseo;Kim, Yongsik
    • Asia Marketing Journal
    • /
    • v.12 no.4
    • /
    • pp.79-111
    • /
    • 2011
  • The digital economy has grown so rapidly that the new business area called 'Internet business' has expanded dramatically over time. In Internet business, however, the market shares of individual companies appear to fluctuate dramatically, so marketing managers who operate Internet sites must closely observe the competitive structure of the market and carefully analyze competitors' behavior in order to achieve their business goals. Newly created Internet businesses may differ from offline ones in management style, because their business circumstances are entirely different from those of existing offline businesses. There is therefore a need for research into the distinctive features of Internet business and into how the management styles of Internet companies should change. Most marketing literature on Internet business has focused on individual markets; in particular, many researchers have studied Internet portal sites and Internet shopping malls, the most common forms of Internet business. This study instead examines the entire Internet business industry to understand the competitive circumstances of the online market. This approach makes it possible both to take a broader view of the overall e-business industry and to understand the differences in competitive structure among Internet business markets. We used time-series data on consumers' Internet connection rates as the basic data for identifying competition patterns in Internet business markets. The data were obtained from 'Fian', an Internet ranking site whose rankings are based on the web-surfing records of a pre-selected sample group, with double-counting of page views controlled via same-IP checks. The ranking site offers several data sets useful for comparing and analyzing competing sites: it divides Internet business into 34 areas and publishes daily market shares for the top five sites in each category. We collected daily market-share data for the sites in each area from April 22, 2008 to August 5, 2008; data errors were found in some areas, and 30 business areas were ultimately used for the research after data cleaning. This study performed several empirical analyses focused on the market shares of each site to understand competition among sites in Korean Internet business, applying cluster analysis to the data for a statistically more precise identification of business fields with similar competitive structures. The research results are as follows. First, the leading sites in each area were classified into three groups based on the means and standard deviations of their daily market shares. The first group comprises sites with the lowest market shares, which add convenience for consumers by offering the Internet site as a complimentary service to an existing offline service. The second group comprises sites with medium market shares, whose users are limited to a specific small group. The third group comprises sites with the highest market shares, which usually require advance online registration and are difficult to switch away from.
Second, we analyzed the second-place sites in each business area, since they indicate the competitive power of the strongest competitor against the leading site. These second-place sites were classified into four groups based on the means and standard deviations of their daily market shares: sites showing consistent inferiority to the leading sites; sites with relatively high volatility and medium shares; sites with relatively low volatility and medium shares; and sites with relatively low volatility and high shares, whose gaps with the leading sites are not large. Except for the 'web agency' area, these second-place sites show relatively stable shares, with standard deviations below 0.1. Third, we classified the types of relative strength between the leading and second-place sites by applying cluster analysis to the gaps in market share between the two. These also fell into four groups: sites with the smallest gaps (though with varied standard deviations); sites with below-average gaps; sites with above-average gaps; and sites with relatively large gaps and low volatility. We also found that while areas with relatively large gaps usually have smaller standard deviations, areas with very small differences between the first and second sites span a wider range of standard deviations. The practical and theoretical implications of this study are as follows. First, the results may provide current market participants with useful information for understanding the competitive circumstances of the market and building effective new business strategies, and may help new potential entrants find a business area and set up successful competitive strategies. Second, the study may help Internet marketing researchers take a macro view of the overall Internet market, enabling new studies of the overall market beyond individual Internet market studies.
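
A minimal sketch of the grouping approach used throughout: summarize each site's daily share series by its mean and standard deviation, then cluster. The synthetic series below are placeholders for the Fian ranking data, and the cluster count is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic daily market-share series: 30 business areas, ~15 weeks of
# daily observations, standing in for the collected Fian ranking data.
rng = np.random.default_rng(1)
n_sites, n_days = 30, 106
shares = np.clip(rng.normal(0.4, 0.2, (n_sites, 1))
                 + rng.normal(0, 0.05, (n_sites, n_days)), 0, 1)

# Each site is represented by (mean share, share volatility) and clustered.
features = np.column_stack([shares.mean(axis=1), shares.std(axis=1)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for g in range(3):
    m = features[labels == g]
    print(f"group {g}: n={len(m)}, mean share={m[:, 0].mean():.2f}, "
          f"volatility={m[:, 1].mean():.3f}")
```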

Influence of Oxygen Concentration on the Food Consumption and Growth of Common Carp, Cyprinus carpio L. (잉어 Cyprinus carpio의 먹이 섭취량과 성장에 미치는 용존산소량의 영향)

  • SAIFABADI Jafar;KIM In-Bae
    • Journal of Aquaculture
    • /
    • v.2 no.2
    • /
    • pp.53-90
    • /
    • 1989
  • Feeding a ration matched to the appetite of fish enhances production and also prevents waste of food and its consequences, such as pollution of the culture medium. To pursue this goal, careful study of dissolved oxygen concentration, the major factor inducing appetite, and of the resulting growth is necessary. The growth of common carp in size groups of 67, 200, 400, 600, and 800 grams was studied at oxygen concentrations ranging from 2.0 to 6 mg/L, with rations from 1% of initial body weight up to as much as could be consumed, under a constant temperature of 25°C. The results of the experiments are summarized as follows. 1. Appetite: Smaller fish exhibited a greater appetite than bigger ones at the same oxygen concentrations. The bigger the fish, the less tolerant it was of low oxygen thresholds, and the degree of tolerance decreased as the ration level increased. 2. Growth: The growth rate (percent per day) increased as the ration was increased to the maximum, unless consumption was suppressed by low oxygen levels. For 67 g fish, it reached a peak of 5.05%/day at a 7% ration under 5.0 mg/L of oxygen. For 200 g fish, the maximum growth rate of 3.75%/day appeared at the maximum ration of 6% under 5.5 mg/L of oxygen. For 400 g fish, the highest growth of 3.37%/day occurred at the maximum ration of 5% under 6.0 mg/L of oxygen. For 600 g fish, the highest growth rate of 2.82%/day occurred at the maximum ration of 4% under 5.5 mg/L of oxygen. For 800 g fish, the highest growth rate of 1.95%/day occurred at the maximum tested ration of 3% under 5.0 mg/L of oxygen. 3. Food Conversion Efficiency: Food conversion efficiency (% of dry feed converted into fish tissue) first increased as the ration was increased, reached a maximum at a certain feeding level, and then decreased with further increases in ration. The maximum conversion efficiency occurred at a higher feeding rate for smaller fish than for larger ones. For 67 g fish, the maximum efficiency was at a 4% ration within 3.0-4.0 mg/L of oxygen; for 200 g fish, at a 3% ration within 4.0-4.5 mg/L; for 400 g fish, at a 2% ration within 4.0-4.5 mg/L. For 600 and 800 g fish, the maximum conversion efficiency shifted to the lowest ration (1%) and lower oxygen ranges. 4. Behaviour: Fish at uncomfortably low oxygen levels exhibited suppressed appetite and movement and were observed to pass feces more quickly and in larger quantity than fish in normal conditions; at intolerably low oxygen, the fish were lethargic, vomited, and their normal skin color changed to pale yellow or grey patches. All these processes reduced food conversion efficiency. On the other hand, fish at relatively high oxygen concentrations exhibited more movement, and their food conversion tended to be depressed compared with sister groups of corresponding size and ration at relatively low oxygen levels. 5. Suitability of Oxygen Ranges to Rations: An oxygen level of 2.0-2.5 mg/L was adequate to sustain appetite at a 1% ration in all size groups. As the ration increased, higher oxygen was required to sustain appetite and metabolic activity, particularly in larger fish.
For 67 g fish, the 2% ration was well supported by the 2.0-2.5 mg/L range; as the ration increased to 5%, the higher range of 3.0-4.0 mg/L brought better appetite and growth; from 5% to 7% (the last tested ration for 67 g fish), oxygen levels over 4.0 mg/L could sustain appetite. For 200 g fish, the 2% and 3% rations brought the best growth and conversion rates at 3.5-4.5 mg/L of oxygen; from 3% to 6% (the last tested ration for 200 g fish), oxygen levels over 4.5 mg/L matched the animals' appetite. For 400, 600, and 800 g fish, all rations above 2% generally had to be supported by oxygen levels above 4.5 mg/L.
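
For reference, the two response variables reported throughout (growth rate in percent per day and food conversion efficiency) can be computed as in the sketch below; the formulas are standard aquaculture definitions and the trial numbers are invented, so the paper's exact computation may differ.

```python
import math

def specific_growth_rate(w0_g: float, w1_g: float, days: float) -> float:
    """Growth rate in % body weight per day (exponential form)."""
    return 100 * (math.log(w1_g) - math.log(w0_g)) / days

def food_conversion_efficiency(gain_g: float, dry_feed_g: float) -> float:
    """% of dry feed converted into fish tissue."""
    return 100 * gain_g / dry_feed_g

# Invented trial: a 67 g carp grows to 95 g in 7 days on 33 g of dry feed.
print(f"growth rate: {specific_growth_rate(67, 95, 7):.2f} %/day")
print(f"conversion efficiency: {food_conversion_efficiency(95 - 67, 33):.1f} %")
```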
