• Title/Summary/Keyword: Number of Sample Size

pT1N3 Gastric Cancer (pT1N3 위암)

  • Ahn, Dae-Ho;Kwon, Sung-Joon;Yun, Hyo-Yung;Song, Young-Jin;Mok, Young-Jae;Han, Sang-Uk;Kim, Wook
    • Journal of Gastric Cancer
    • /
    • v.6 no.2
    • /
    • pp.109-113
    • /
    • 2006
  • Purpose: Various minimally invasive surgical techniques, such as endoscopic mucosal resection and laparoscopic gastrectomy, are becoming common practice for some cases of early gastric cancer (EGC), defined as cancer whose depth of invasion is limited to the mucosa or submucosa. However, there are rare cases of early gastric cancer with massive lymph-node metastasis. Materials and Methods: From 6 university hospitals in Korea, 2,772 EGC cases were resected during the respective periods of analysis (1,432 cases of mucosal cancer and 1,340 of submucosal cancer). As control data, we used data from a single institute, CHA University Hospital. Results: There were nine cases of early gastric cancer (9/2,772, 0.32%) with N3 lymph node metastasis, defined as more than 15 metastasized lymph nodes according to the UICC-TNM classification (pT1N3, stage IV). Two cases were mucosal cancer (2/1,432, 0.14%), and seven were submucosal cancer (7/1,340, 0.52%). The number of metastasized lymph nodes varied from 18 to 52. There were three male and six female patients, with a mean age of 57; this sex ratio is the reverse of that seen in gastric cancer or EGC in general. Of the nine EGC patients, five had superficial spreading carcinomas with surface areas larger than $25\;cm^2$, a significantly higher proportion than in the general EGC population. When tumor size was compared according to LN status, tumors in the N3 group were definitely larger than in the other groups. Lymphatic invasion was present in 78% of the pT1N3 cases, which is very high compared to 4.7% in general EGC cases. Among the nine cases, six patients had too short a follow-up period for a proper evaluation of prognosis, but there was one patient with a non-curative resection and two patients with early recurrence. Although the sample size is small and the follow-up period short, a very poor prognosis can be expected when compared with the widely accepted prognosis of EGC. Conclusion: From these results, we can conclude that the risk factors for pT1N3 gastric cancer are female sex, submucosal invasion, larger tumor size, and lymphatic invasion. However rare, the existence of pT1N3 gastric cancer needs to be taken into consideration, especially during diagnosis, and minimally invasive treatment for EGC needs to be chosen with great caution. Since the prognosis of pT1N3 gastric cancer is expected to be poor, aggressive adjuvant chemotherapy may be necessary. (J Korean Gastric Cancer Assoc 2006;6:109-113)

Pre-treatment effects on softening of carrot during enzyme immersion process (당근의 전처리 조건에 따른 효소의 연화 효과 비교)

  • Kim, Se-rin;Kim, Sun-min;Chang, Jin-Hee;Han, Jung-Ah
    • Korean Journal of Food Science and Technology
    • /
    • v.50 no.3
    • /
    • pp.292-296
    • /
    • 2018
  • The softening effects of enzyme treatment following various pre-treatments were examined. Four pre-treatments were applied to carrot: raw (R), heat (H), heat and freeze-thawing (HFT), and heat and freeze-drying (HFD). Subsequently, each treated sample was immersed in a 10% Celluclast enzyme solution for up to 6 h and their properties were compared. The minimum and maximum color changes were observed in HFD and H, respectively. R showed no change in hardness after 6 h of immersion, indicating that the enzyme did not penetrate the carrot. Microstructure analysis using SEM showed that the number and size of pores were greater in samples that underwent HFT or HFD, and HFD caused a 99.5% reduction in hardness after 6 h of immersion. After 6 h of immersion following HFT, or 3 h following HFD, the hardness was less than $20,000\;N/m^2$, indicating that the samples were soft enough to be ingested by pressing with the tongue; they retained their original shape yet collapsed easily when pressed with a spoon.

A Robust Hand Recognition Method to Variations in Lighting (조명 변화에 안정적인 손 형태 인지 기술)

  • Choi, Yoo-Joo;Lee, Je-Sung;You, Hyo-Sun;Lee, Jung-Won;Cho, We-Duke
    • The KIPS Transactions:PartB
    • /
    • v.15B no.1
    • /
    • pp.25-36
    • /
    • 2008
  • In this paper, we present a hand recognition approach that is robust to sudden illumination changes. The proposed approach constructs a background model with respect to hue and hue gradient in HSI color space and extracts the foreground hand region from an input image using background subtraction. Eighteen features are defined for a hand pose, and a multi-class SVM (Support Vector Machine) approach is applied to learn and classify hand poses based on these features. The proposed approach robustly extracts the contour of a hand under variations in illumination by incorporating the hue gradient into the background subtraction. A hand pose is defined by two eigenvalues, normalized by the size of the OBB (oriented bounding box), and sixteen feature values that represent the number of hand contour points included in each subrange of the OBB. We compared RGB-based background subtraction, hue-based background subtraction, and the proposed approach under sudden illumination changes and demonstrated the robustness of the proposed approach. In the experiment, we built a hand pose training model from 2,700 sample hand images of six subjects representing the nine digits from one to nine. Our implementation achieves a successful recognition rate of 92.6% for 1,620 hand images under various lighting conditions using the training model.
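
As a rough illustration of the pipeline this abstract describes (hue-based background subtraction followed by a multi-class SVM over an 18-dimensional pose descriptor), the sketch below uses OpenCV and scikit-learn. The thresholds, the simplified feature construction, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: hue-based background subtraction, then multi-class SVM
# classification of an 18-dimensional hand-pose feature vector.
import cv2
import numpy as np
from sklearn.svm import SVC

def hue_foreground_mask(frame_bgr, bg_hue, hue_thresh=10):
    """Segment the hand by comparing per-pixel hue against a background model."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.int16)
    diff = np.abs(hue - bg_hue.astype(np.int16))
    diff = np.minimum(diff, 180 - diff)          # hue is circular in OpenCV (0..179)
    return (diff > hue_thresh).astype(np.uint8) * 255

def pose_features(mask):
    """18 features: 2 normalized eigenvalues of the contour covariance plus
    16 contour-point counts per sub-range (a placeholder for the paper's
    OBB-based features)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    cov = np.cov(pts.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    eigvals = eigvals / (eigvals.sum() + 1e-9)   # normalize by total spread
    # 16 bins of contour points along the principal axis (illustrative)
    proj = (pts - pts.mean(0)) @ np.linalg.eigh(cov)[1][:, -1]
    hist, _ = np.histogram(proj, bins=16)
    return np.concatenate([eigvals, hist / len(pts)])

# Multi-class SVM over the nine hand poses (digits 1..9), as in the paper.
clf = SVC(kernel="rbf", decision_function_shape="ovo")
# clf.fit(train_features, train_labels)  # 2,700 training images in the study
```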

Growth Characteristics and Vegetation Structure of the Pinus densiflora Forest for Sugumagi of Unmun Temple, Cheongdo-gun, Korea (청도군 운문사 입구 수구막이 소나무림 식생구조 및 생육 특성)

  • Kang, Gi Won;Lee, Do-I;Han, Bong-Ho;Kwak, Jeong-In
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.48 no.5
    • /
    • pp.1-15
    • /
    • 2020
  • This study was designed to develop a way of managing a cultural landscape forest by investigating its vegetation structure and growth characteristics. The target site, 45,201㎡ in size, was the Pinus densiflora forest for Sugumagi at the entrance of Unmun Temple, Sinwon-ri, Unmun-myeon, Cheongdo-gun, in the southernmost part of Gyeongsangbuk-do, Korea. In feng shui, Sugumagi denotes a place where the water of the valley flows far away and no downstream is visible. The historical sources of the Sugumagi Pinus densiflora forest at the entrance of Unmun Temple are not clear; it was found only at that location. In feng shui terms, the forest is located in the waterway. The present growth condition was investigated through a grid survey and growth measurements of 98 Pinus densiflora trees. The analysis of growth status showed that Pinus densiflora, Larix leptolepis, Zelkova serrata, Celtis sinensis, and Rhus javanica were distributed in the canopy layer, 28 species including Ailanthus altissima grew in the understory layer, and 92 species, including Ampelopsis brevipedunculata, in the shrub layer. The plant community structure in the study area was divided into low-, medium-, and high-density Pinus densiflora forest, based on the number of trees in the canopy layer and the grades of the trees analyzed. In all three density classes, Pinus densiflora was dominant and there were no competing species. The mean relative dominance was 46.9% in the low-density forest, 62.6% in the medium-density forest, and 50.2% in the high-density forest. The mean Shannon species diversity was 0.7055 in the low-density area, 0.8966 in the medium-density area, and 0.8317 in the high-density area. Analysis of the age and growth of 25 sample trees in the Sugumagi Pinus densiflora forest showed that their diameter at breast height (DBH) ranged from 38 to 77 cm, with an average of 61.1 cm, and their age ranged from 84 to 161 years, with an average of 114 years. Most of the trees in the forest bear trunk wounds from rosin collection during the Japanese colonial era: of the 670 trees in total, 659 (98.3%) were Pinus densiflora, and 394 of them were surgically repaired in 2005. For the preservation of the Sugumagi Pinus densiflora forest, dead trees should be replaced with substitute trees appropriate to the middle and southern topography, foreign species such as Larix leptolepis should be removed from the research area, and the Pinus densiflora trees that underwent surgical repair should be regularly sterilized. The importance of insect pest management is also emphasized.
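
For reference, the Shannon species diversity values quoted above are computed as $H' = -\sum_i p_i \log p_i$, where $p_i$ is the share of individuals belonging to species $i$. A minimal sketch follows; the log base (the abstract does not state it) and the example counts are assumptions.

```python
# Minimal sketch of the Shannon diversity index H' = -sum(p_i * log(p_i)).
# The log base (e vs. 10) and the example counts are assumptions.
import math

def shannon_index(counts, log=math.log10):
    total = sum(counts)
    return -sum((n / total) * log(n / total) for n in counts if n > 0)

# Hypothetical individuals-per-species counts from one survey grid:
print(round(shannon_index([40, 12, 7, 3, 1]), 4))
```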

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies, which generally make the best use of information technology, have shown a tendency to deliver high returns for investors; for this reason, many of them are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on pivotal concerns such as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on the KOSDAQ market of the Korea Exchange. In addition, this paper used multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming-based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical theory. Thus far, the method has shown good performance, especially in its generalizing capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane giving the maximum separation between classes; the support vectors are the points closest to this hyperplane. If the classes cannot be separated linearly, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs can be transformed into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange, obtaining the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating from DEA efficiency scores and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, where the exact class is difficult to determine in the actual market, it is very useful for investors to know the class within a one-class error. We therefore also present accuracy results within one-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than the binary classification problem, notwithstanding the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance areas such as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
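
As a rough sketch of the two steps described above, the snippet below scores each DMU with an input-oriented CCR DEA linear program (SciPy) and then trains a one-against-one multi-class SVM (scikit-learn) on the resulting rating classes. The data and names are illustrative assumptions, and the Weston-Watkins and Crammer-Singer all-together variants used in the paper are not implemented here.

```python
# Hedged sketch: (1) input-oriented CCR DEA efficiency via linear programming,
# (2) one-against-one multi-class SVM on the DEA-derived rating classes.
import numpy as np
from scipy.optimize import linprog
from sklearn.svm import SVC

def ccr_efficiency(X, Y, o):
    """CCR efficiency of DMU o. X: (n, m) inputs, Y: (n, s) outputs.
    Decision variables: output weights u (s entries), then input weights v (m).
    maximize u'Y[o]  s.t.  v'X[o] = 1,  u'Y[j] - v'X[j] <= 0 for all j."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])      # linprog minimizes, so negate
    A_ub = np.hstack([Y, -X])                     # u'Y_j - v'X_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun                               # efficiency score in (0, 1]

# Step 2: one-against-one multi-class SVM with an RBF (Gaussian) kernel,
# trained on financial features against DEA-derived rating classes.
# clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(features, ratings)
```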

Fluorine Plasma Corrosion Resistance of Anodic Oxide Film Depending on Electrolyte Temperature

  • Shin, Jae-Soo;Kim, Minjoong;Song, Je-beom;Jeong, Nak-gwan;Kim, Jin-tae;Yun, Ju-Young
    • Applied Science and Convergence Technology
    • /
    • v.27 no.1
    • /
    • pp.9-13
    • /
    • 2018
  • Samples of the anodic oxide film used in semiconductor and display manufacturing processes were prepared at different electrolyte temperatures to investigate their corrosion resistance. The anodic oxide film was grown on aluminum alloy 6061 using a 1.5 M sulfuric acid ($H_2SO_4$) electrolyte at $0^{\circ}C$, $5^{\circ}C$, $10^{\circ}C$, $15^{\circ}C$, and $20^{\circ}C$. The insulating properties of the samples were evaluated by measuring the breakdown voltage, which gradually increased from 0.43 kV ($0^{\circ}C$) to 0.52 kV ($5^{\circ}C$), 1.02 kV ($10^{\circ}C$), and 1.46 kV ($15^{\circ}C$) as the electrolyte temperature was increased from $0^{\circ}C$ to $15^{\circ}C$, but then decreased to 1.24 kV ($20^{\circ}C$). To evaluate the erosion of the film by fluorine plasma, the plasma erosion and the contamination particles were measured. The plasma erosion was evaluated by measuring the breakdown voltage after exposing the film to $CF_4/O_2/Ar$ and $NF_3/O_2/Ar$ plasmas. With exposure to $CF_4/O_2/Ar$ plasma, the breakdown voltage of the film decreased slightly at $0^{\circ}C$, to 0.41 kV, but decreased significantly at $20^{\circ}C$, to 0.83 kV. With exposure to $NF_3/O_2/Ar$ plasma, the breakdown voltage decreased slightly at $0^{\circ}C$, to 0.38 kV, but decreased significantly at $20^{\circ}C$, to 0.77 kV. In addition, over the entire temperature range, the breakdown voltage decreased more when samples were exposed to $NF_3/O_2/Ar$ plasma than to $CF_4/O_2/Ar$ plasma. The decrease in breakdown voltage was smaller in the anodic oxide films grown slowly at lower temperatures. The rate of breakdown voltage decrease after exposure to fluorine plasma was highest at $20^{\circ}C$, indicating that the anodic oxide film was most vulnerable to erosion by fluorine plasma at that temperature. Contamination particles generated by exposure to the $CF_4/O_2/Ar$ and $NF_3/O_2/Ar$ plasmas were measured in real time. The number of contamination particles generated after exposure to the respective plasmas was lowest at $5^{\circ}C$ and highest at $0^{\circ}C$. In particular, over the entire temperature range, about five times more contamination particles were generated with exposure to $NF_3/O_2/Ar$ plasma than with exposure to $CF_4/O_2/Ar$ plasma. Observation of the surface of the anodic oxide film showed that the pore size and density of the non-treated film samples increased with temperature. The change in the surface after exposure to fluorine plasma was greatest at $0^{\circ}C$. The generation of contamination particles by fluorine plasma exposure for the anodic oxide film prepared in the present study differed from that of previous aluminum anodic oxide films.

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu;Choi, Jaewon;Kim, Hyun Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.177-193
    • /
    • 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, the frequent disclosure of private information has raised concerns about privacy and its impacts. This has motivated researchers in various fields to explore information privacy issues to address these concerns. Accordingly, the necessity for information privacy policies and technologies for collecting and storing data, and for information privacy research in fields such as medicine, computer science, business, and statistics, has increased. The occurrence of various information security incidents has made finding experts in the information security field an important issue. Objective measures for finding such experts are required, as the process is currently rather subjective. Based on social network analysis, this paper proposes a framework for evaluating the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. Outliers and irrelevant papers were dropped, leaving 784 papers for testing the suggested hypotheses. The co-authorship network data on co-author relationships, publishers, affiliations, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported. In line with our hypothesis, degree centrality (H1) was supported, with a positive influence on researchers' publishing performance (p<0.001); this finding indicates that as the degree of cooperation increased, so did researchers' publishing performance. In addition, closeness centrality (H2) was also positively associated with researchers' publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, so did publishing performance. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field. The co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting characteristics of publishers and affiliations, this paper offers an understanding of the social network measures and their potential for finding experts in the information privacy field. Social concerns about securing the objectivity of experts have increased, because experts in the information privacy field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for those in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research.
The small sample size makes it difficult to generalize the observed differences in information diffusion according to media and proximity. Further studies could therefore consider a larger sample size and greater media diversity, and explore in more detail the differences in information diffusion according to media type and information proximity. Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012); in this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable. However, in network analysis research, network indices can only be computed after the network relationships have been created. An annual analysis could help mitigate this limitation.
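
The centrality and structural-hole measures behind the hypotheses above can be computed directly with networkx; the toy co-authorship graph below is an illustrative assumption, not the NDSL data.

```python
# Sketch of the social network measures used in the study (degree,
# closeness, and eigenvector centrality, plus Burt's constraint for
# structural holes), computed on a toy co-authorship graph.
import networkx as nx

G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

degree = nx.degree_centrality(G)            # H1: extent of cooperation
closeness = nx.closeness_centrality(G)      # H2: efficiency of information access
eigenvector = nx.eigenvector_centrality(G)  # H3: ties to well-connected authors
constraint = nx.constraint(G)               # structural holes (Burt's constraint)

for author in sorted(G):
    print(author, round(degree[author], 3), round(closeness[author], 3),
          round(eigenvector[author], 3), round(constraint[author], 3))
```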

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is utilized for marketing and social problem solving by analyzing openly available or directly collected data. In Korea, various companies and individuals are attempting big data analysis, but limited big data disclosure and collection difficulties make even the initial stage of analysis hard. System improvements for promoting big data use and big data disclosure services are being carried out in Korea and abroad, mainly as services for opening public data, such as the Korean Government 3.0 portal (data.go.kr). In addition to the government's efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because little data is actually shared. Moreover, big traffic problems can occur because the entire data set must be downloaded and examined in order to grasp the attributes of, and even simple information about, the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper to solve the sharing problem: it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by providing information that conveys the properties and characteristics of the data when a user searches for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when original data is disclosed can be avoided, enabling big data sharing between data providers and data users. Second, it is necessary to quickly generate appropriate preprocessing results according to the disclosure level or network status of the raw data and to provide the results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing the data requested by a user, the data is reduced to a size the current network can handle before transmission, so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis. This method is expected to produce a low traffic volume compared with the conventional approach of sharing only raw data across a large number of systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate its sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, with a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent is the agent required by the data provider; it performs pre-analysis of big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data. In addition, it performs fast and efficient big data preprocessing through distributed processing and continuously monitors network traffic. The Client Agent is the agent placed on the data user's side.
It searches big data through the Data Descriptor produced by the pre-analysis, enabling the data to be found quickly. The desired data can then be requested from the server and downloaded according to its level of disclosure. The model separates the Server Agent from the Client Agent so that data published by a provider can be used by users. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system designed on the basis of the proposed model, the user who acquires the data analyzes it in the desired direction or preprocesses new data. By disclosing the newly processed data through the Server Agent, the data user in turn takes on the role of data provider. The data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user, performing new analysis using the sample data. In this way, raw data is processed and the processed big data is utilized by users, forming a natural sharing environment. The roles of data provider and data user are not fixed, yielding an ideal shared service in which everyone can be both a provider and a user. The client-server model thus solves the problem of sharing big data, provides a free sharing environment with secure data disclosure, and makes big data easy to find.
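
A minimal sketch of the Server Agent's pre-analysis step follows, assuming Spark as the paper specifies; the input path, the descriptor fields, and the 1% sample fraction are illustrative assumptions.

```python
# Hedged sketch of the Server Agent's "pre-analysis": use Spark to build a
# Data Descriptor (schema, summary statistics, and a small sample) that can
# be shared and searched instead of the raw data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PreAnalysis").getOrCreate()
raw = spark.read.csv("hdfs:///shared/raw_data.csv", header=True, inferSchema=True)

descriptor = {
    "schema": raw.schema.json(),                     # column names and types
    "row_count": raw.count(),
    "summary": raw.describe().toPandas().to_dict(),  # count/mean/stddev/min/max
    "sample": raw.sample(fraction=0.01).limit(100).toPandas().to_dict("records"),
}
# The Client Agent searches these descriptors and only requests the raw
# download (sized to the current network traffic) when the data looks useful.
```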

A Study on the Fitness of Adjustable Dental Impression Trays on the Chinese and Japanese (중국인과 일본인에 대한 가변형 치과 인상용 트레이의 적합성에 관한 연구)

  • Kang, Han-Joong;Lee, Jin-Han;Choi, Jong-In;Lee, In-Seop;Dong, Jin-Keun
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.46 no.2
    • /
    • pp.175-184
    • /
    • 2008
  • Purpose: This study was designed to investigate the fitness of adjustable dental impression trays on Chinese and Japanese subjects. Materials and methods: The initial design of the adjustable dental trays was developed from measurements of the dental arch size of Korean adults. This design was fed into a CAD-CAM process to create tray model samples, from which simple silicone-based molds were replicated; polyurethane injection into these molds completed the production of a large number of test trays. 60 Chinese dental students (male: 30, female: 30) from the Shanghai Second Medical University and 60 Japanese alumni of Kumamoto high school (male: 30, female: 30) were selected for taking irreversible hydrocolloid impressions with these trays. The width and length of the impression body were measured at several measuring points with a Vernier caliper, and the results were analyzed statistically to evaluate the fitness of the trays. Results: 1. Uniform impression material thickness was achieved for the Chinese and Japanese subjects by controlling the width of the tray using stops and beveled guides; the material thickness was generally within the range of 3 mm to 6 mm. 2. In the maxillary tray for the Chinese subjects, the average thickness of the impression material was 6.2 mm at the labial vestibule of the incisal teeth, 5.9 mm at the canine, 10.5 mm at the midpalatal part, and 9.7 mm at the posterior palatal part; these were relatively large values. 3. In the mandibular tray for the Chinese subjects, the average length of the impression material was 8.9 mm at the lingual vestibule of the first and second premolar contact point and 7.8 mm at the incisal teeth, and the thickness at the labial part was 6.8 mm for the canine and 7.0 mm for the premolars; these were relatively large values. 4. In the maxillary tray for the Japanese subjects, the average thickness of the impression material was 7.4 mm at the labial vestibule of the incisal teeth, 7.7 mm at the canine, and 9.1 mm at the midpalatal part; these were relatively large values. 5. In the mandibular tray for the Japanese subjects, the average thickness of the impression material was 8.4 mm at the labial vestibule of the first and second premolar contact point and 7.4 mm at the labial part of the canine; these were relatively large values. Conclusion: This adjustable dental tray shows good accuracy for Koreans because it was designed from an analysis of the dental arch size of Korean adults. The results indicate that it can also be applied to Chinese and Japanese patients, allowing easier and more accurate dental impressions.

RALY RNA Binding Protein-like Reduced Expression is Associated with Poor Prognosis in Clear Cell Renal Cell Carcinoma

  • Cui, Zhi-Wen;Xia, Ye;Ye, Yi-Wang;Jiang, Zhi-Mao;Wang, Ya-Dong;Wu, Jian-Ting;Sun, Liang;Zhao, Jun;Fa, Ping-Ping;Sun, Xiao-Juan;Gui, Yao-Ting;Cai, Zhi-Ming
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.13 no.7
    • /
    • pp.3403-3408
    • /
    • 2012
  • The molecular mechanisms involved in the progression of clear cell renal cell carcinomas (ccRCCs) are still unclear. The aim of this study was to analyse the relationships between the expression of RALYL and clinical characteristics. In 41 paired samples of ccRCC and adjacent normal tissue, we used real-time qPCR to evaluate the expression of RALYL mRNA. RALYL protein levels were determined in 146 samples of ccRCC and 37 adjacent normal tissues by immunohistochemistry. Statistical analysis was used to explore the relationships between RALYL expression and clinical characteristics (gender, age, tumor size, T stage, N stage, M stage, survival time, and survival outcome) in ccRCC. In addition, these patients were followed up (follow-up period 64 months, range 4-116 months) to investigate the influence on prognosis. We found significant differences in RALYL mRNA levels between ccRCC tissues and normal tissues (p<0.001, paired-sample t-test). Immunohistochemistry analyses in 146 ccRCC samples and 37 adjacent normal tissues showed significantly lower RALYL protein levels in the ccRCC samples (${\chi}^2$-test, p<0.001), inversely correlating with tumour size (p=0.024), T stage (p=0.005), N stage (p<0.001), and M stage (p=0.019), but not with age (p=0.357) or gender (p=0.348). Kaplan-Meier survival analysis demonstrated that patients with lower RALYL expression had a poorer survival rate than those with higher RALYL expression, a difference significant by the log-rank test (p=0.011). Cox regression analysis indicated that RALYL expression (p=0.039), N stage (p=0.008), and distant metastasis (p<0.001) were independent prognostic factors for the overall survival of ccRCC patients. We demonstrated that the expression of RALYL was significantly low in ccRCC and correlated with a poor prognosis in a large number of clinical samples. Our findings suggest that RALYL may be a potential therapeutic target as well as a marker of poor prognosis.
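
The survival analyses reported above (Kaplan-Meier curves compared with a log-rank test, followed by multivariate Cox regression) can be sketched with the lifelines library; the cohort file and its column names are illustrative assumptions.

```python
# Minimal sketch of the reported survival analyses using lifelines:
# Kaplan-Meier by expression group, log-rank test, then Cox regression.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort file with columns: time, event, raly_low, n_stage, metastasis
df = pd.read_csv("ccrcc_cohort.csv")
low, high = df[df.raly_low == 1], df[df.raly_low == 0]

# Kaplan-Meier survival curves for low vs. high RALYL expression
km = KaplanMeierFitter()
ax = km.fit(low["time"], low["event"], label="low RALYL").plot_survival_function()
KaplanMeierFitter().fit(high["time"], high["event"],
                        label="high RALYL").plot_survival_function(ax=ax)

# Log-rank test between the two expression groups
print(logrank_test(low["time"], high["time"], low["event"], high["event"]).p_value)

# Multivariate Cox model: RALYL expression, N stage, distant metastasis
cph = CoxPHFitter()
cph.fit(df[["time", "event", "raly_low", "n_stage", "metastasis"]],
        duration_col="time", event_col="event")
cph.print_summary()
```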