• Title/Summary/Keyword: prediction change


Analysis of the Correlation of Job Satisfaction to Turnover Among Dental Hygienists in the Region of J (J지역 치과위생사의 직무만족과 이직의 상관관계 분석)

  • Ju, On-Ju;Kim, Kyeong-Seon;Lee, Hyun-Ok
    • Journal of dental hygiene science
    • /
    • v.7 no.4
    • /
    • pp.251-256
    • /
    • 2007
  • The purpose of this study was to examine what induced dental hygienists to change jobs and whether their job satisfaction had anything to do with it, in an attempt to help curtail their turnover rate. The subjects in this study were approximately 200 dental hygienists who worked in dental institutions. A survey was conducted from July 24 through September 24, 2006, using structured, self-administered questionnaires. For data analysis, the SPSS 11.5 program was employed to see whether their turnover experience was linked to their general characteristics, why they changed jobs, how long they wanted to stay, and how their job satisfaction was related to turnover. The findings of the study were as follows: 1. In regard to turnover experience by age, marital status and career, those who had ever changed jobs accounted for 36.2 percent of the age group from 24 to 26, 83.0 percent of the unmarried respondents, and 50.0 percent of those whose career was one to under three years (p < 0.001). By monthly mean income, 50.0 percent of the dental hygienists whose monthly mean income ranged from 1.0 to 1.29 million won had that experience (p < 0.05). The gap between these groups and the others was statistically significant. 2. As for the reasons for turnover, working environments were cited most often (28.1%), followed by possibilities (18.0%), relationships with supervisors and colleagues (12.4%), and compensation (4.5%). 3. Concerning a preferred new workplace, 66.2 percent of the dental hygienists who worked in dentists' offices hoped to be newly hired by public dental clinics (p < 0.001). By education, 64.3 percent of the college-educated dental hygienists wanted to work at public dental clinics as well (p < 0.01). 4. The change of employment was under the greatest influence of the possibilities of the workplace, followed by workload, pay and relationships with colleagues. All the factors had a negative impact on turnover.
Those who were less satisfied sought new employment more often, and job satisfaction made a statistically significant difference. The job satisfaction factors predicted turnover intention (R² = .254).
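A regression of this kind can be sketched with synthetic data. The factor names follow the abstract (possibilities, workload, pay, colleague relations, all with negative coefficients); the numbers themselves are invented for illustration only, so the resulting R² will not match the reported .254.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical satisfaction factors on a 1-5 scale; names mirror the
# abstract, values are synthetic.
X = rng.uniform(1, 5, size=(n, 4))
true_beta = np.array([-0.5, -0.3, -0.25, -0.2])   # all negative, as reported
y = 3.0 + X @ true_beta + rng.normal(0, 0.8, n)   # simulated turnover intention

# Ordinary least squares via numpy's least-squares solver
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

resid = y - A @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 3))
```

With enough data, the fitted slopes recover the negative signs, matching the abstract's finding that every satisfaction factor depressed turnover intention.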


A study on the use of a Business Intelligence system: the role of explanations (비즈니스 인텔리전스 시스템의 활용 방안에 관한 연구: 설명 기능을 중심으로)

  • Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.155-169
    • /
    • 2014
  • With the rapid advances in technologies, organizations are more likely to depend on information systems in their decision-making processes. Business Intelligence (BI) systems, in particular, have become a mainstay in dealing with complex problems in an organization, partly because a variety of advanced computational methods from statistics, machine learning, and artificial intelligence can be applied to solve business problems such as demand forecasting. In addition to the ability to analyze past and present trends, these predictive analytics capabilities provide huge value to an organization's ability to respond to changes in markets, business risks, and customer trends. While the performance effects of BI system use in organizational settings have been studied, the use of predictive analytics technologies embedded in BI systems for forecasting tasks has received little discussion. Thus, this study aims to find important factors that can help to take advantage of the benefits of the advanced technologies of a BI system. More generally, a BI system can be viewed as an advisor, defined as the one that formulates judgments or recommends alternatives and communicates these to the person in the role of the judge, and the information generated by the BI system as advice that a decision maker (judge) can follow. Thus, we refer to the findings from the advice-giving and advice-taking literature, focusing on the role of the system's explanations in users' advice taking. It has been shown that advice discounting can occur when an advisor's reasoning, or the evidence justifying the advisor's decision, is not available. However, the majority of current BI systems merely provide a number, which may influence decision makers in accepting the advice and inferring the quality of the advice.
In this study, we explore the following key factors that can influence users' advice taking within the setting of a BI system: explanations of how the box-office grosses are predicted; the type of advisor, i.e., a system (data mining technique) or human-based business advice mechanisms such as prediction markets (aggregated human advice) and human advisors (individual human expert advice); users' evaluations of the provided advice; and individual differences among decision makers. Each subject performs the following four tasks by going through a series of display screens on the computer. First, given information about the movie such as director and genre, the subjects are asked to predict the opening weekend box office of the movie. Second, in light of the information generated by an advisor, the subjects are asked to adjust their original predictions, if they desire to do so. Third, they are asked to evaluate the value of the given information (e.g., perceived usefulness, trust, satisfaction). Lastly, a short survey is conducted to identify individual differences that may affect advice-taking. The results from the experiment show that subjects are more likely to follow system-generated advice than human advice when the advice is provided with an explanation. When the subjects, as system users, think the information provided by the system is useful, they are also more likely to take the advice. In addition, individual differences affect advice-taking. Subjects with more experience with advisors, or a tendency to agree with others, adjust their predictions to follow the advice. On the other hand, subjects with more knowledge of movies are less affected by the advice, and their final decisions stay close to their original predictions. The advances in the predictive analytics of BI systems demonstrate great potential to support increasingly complex business decisions.
This study shows how the designs of a BI system can play a role in influencing users' acceptance of the system-generated advice, and the findings provide valuable insights on how to leverage the advanced predictive analytics of the BI system in an organization's forecasting practices.
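The degree to which a subject's revision follows the advice is commonly quantified in the advice-taking literature as the "weight of advice" (WOA). The abstract does not state that this exact measure was used, so the sketch below is illustrative:

```python
def weight_of_advice(initial, advice, final):
    """WOA = (final - initial) / (advice - initial).

    0 means the advice was ignored; 1 means it was fully adopted.
    A standard measure in the advice-taking literature.
    """
    if advice == initial:
        raise ValueError("advice equals the initial estimate; WOA undefined")
    return (final - initial) / (advice - initial)

# A subject first predicts a $20M opening weekend, the system advises
# $32M, and the subject revises to $29M -> WOA = 0.75.
print(weight_of_advice(20e6, 32e6, 29e6))  # -> 0.75
```

Advice discounting, as described above, corresponds to WOA values well below 1; providing an explanation with the advice would be expected to raise WOA for the system advisor.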

Prediction of the risk of skin cancer caused by UVB radiation exposure using a method of meta-analysis (Meta-analysis를 이용한 UVB 조사량에 따른 피부암 발생 위해도의 예측 연구)

  • Shin, D.C.;Lee, J.T.;Yang, J.Y.
    • Journal of Preventive Medicine and Public Health
    • /
    • v.31 no.1 s.60
    • /
    • pp.91-103
    • /
    • 1998
  • Under experimental conditions, UVB radiation, a type of ultraviolet radiation, has been shown to relate to the occurrence of skin erythema (sunburn) in humans and skin cancer in experimental animals. Cumulative exposure to UVB is also believed to be at least partly responsible for the 'aging' process of the skin in humans, and it has been observed to alter DNA (deoxyribonucleic acid). UVB radiation is both an initiator and a promoter of non-melanoma skin cancer. Meta-analysis is a discipline that critically reviews and statistically combines the results of previous research, and a recent review of meta-analysis in the field of public health emphasized its growing importance. Using a meta-analysis in this study, we explored more reliable dose-response relationships between UVB radiation and skin cancer incidence. We estimated skin cancer incidence using the measured UVB radiation dose at a local area of Seoul (Shinchon-dong). Studies showing dose-response relationships between UVB radiation and non-melanoma skin cancer incidence were searched and selected for the meta-analysis. Data from 7 reported epidemiological studies in three countries (USA, England, Australia) were pooled to estimate the risk. We estimated the rate of change of skin cancer incidence using the pooled data with exponential and power models. With either model, the regression coefficients for UVB did not differ significantly by gender and age. In each analysis of variance, non-melanoma skin cancer incidence after removing the gender, age and UVB effects was significant (p < 0.01). The coefficient for UVB dose was estimated at 2.07 × 10⁻⁶ by the exponential model and 2.49 by the power model. For the local area of Seoul (Shinchon-dong), BAF values were estimated at 1.90 and 2.51 by the exponential and power models, respectively. The BAF values estimated by meta-analysis had greater statistical power than those of the primary studies.
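Both dose-response forms above can be fitted by log-linear least squares: the exponential model I = a·exp(bD) is linear in D after taking ln I, and the power model I = a·D^b is linear in ln D. The sketch below uses invented pooled (dose, incidence) pairs purely to illustrate the fitting step, not the paper's data:

```python
import math

# Synthetic pooled (UVB dose, incidence) pairs -- illustrative only,
# not the actual data of the seven epidemiological studies.
doses      = [1.0e5, 1.5e5, 2.0e5, 2.5e5, 3.0e5]
incidences = [120.0, 180.0, 260.0, 390.0, 570.0]

def linfit(xs, ys):
    """Least-squares slope and intercept of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

# Exponential model I = a*exp(bD): regress ln I on D.
b_exp, _ = linfit(doses, [math.log(i) for i in incidences])
# Power model I = a*D^b: regress ln I on ln D.
b_pow, _ = linfit([math.log(d) for d in doses],
                  [math.log(i) for i in incidences])

print(b_exp, b_pow)  # b_pow plays the role of a BAF-style exponent
```

The power-model slope is dimensionless, which is why BAF-type amplification factors are typically quoted from that form.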


Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Therefore, researchers have tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, k-nearest neighbor, etc. Recently, support vector machines (SVMs) have been popularly applied to this research area because they do not require huge training data sets and have a low possibility of overfitting. However, a user must determine several design factors by heuristics in order to use an SVM; the selection of an appropriate kernel function and its parameters and proper feature subset selection are major design factors. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, there have been few studies that have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data. It may be considered a method of knowledge refinement, and it maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the parameters simultaneously. We call this model ISVM (SVM with instance selection). Experiments on stock market data are conducted using ISVM. The GA searches for optimal or near-optimal values of the kernel parameters and the relevant instances for the SVM, so the GA chromosome encodes two sets of codes: one for the kernel parameters and one for instance selection.
For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1. As the stopping condition, 50 generations are permitted. The application data used in this study consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI). The total number of samples is 2,218 trading days. We separate the whole data set into three subsets: training, test, and hold-out data, with 1,056, 581 and 581 observations, respectively. This study compares ISVM to several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM). In particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1,056 original training instances are used to produce this result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM and PSVM at the 5% statistical significance level.
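The GA-based instance selection can be sketched in miniature. The sketch below swaps the SVM for a 1-NN classifier on synthetic 2-D data, so it illustrates only the chromosome encoding (a bit mask over training instances) and the search loop, not the paper's actual model; the GA parameters are scaled down from the paper's population 50 / 50 generations so it runs quickly, and all names here are assumptions.

```python
import random

random.seed(1)

def make_data(n, noise=0):
    """Toy 2-D two-class data; the first `noise` points are mislabeled
    to play the role of 'distorting' training instances."""
    data = []
    for i in range(n):
        x, y = random.random(), random.random()
        label = 1 if x + y > 1.0 else 0
        if i < noise:
            label = 1 - label
        data.append(((x, y), label))
    return data

train = make_data(80, noise=8)
valid = make_data(40)

def predict(kept, point):
    # 1-NN over the selected instances (a simple stand-in for the SVM).
    px, py = point
    _, label = min(kept, key=lambda d: (d[0][0] - px) ** 2 + (d[0][1] - py) ** 2)
    return label

def fitness(mask):
    kept = [d for d, m in zip(train, mask) if m]
    if not kept:
        return 0.0
    return sum(predict(kept, p) == l for p, l in valid) / len(valid)

# Generational GA over instance-selection bit masks.
POP, GENS, CX, MUT = 30, 25, 0.7, 0.05
pop = [[random.randint(0, 1) for _ in train] for _ in range(POP)]
for _ in range(GENS):
    scored = sorted(pop, key=fitness, reverse=True)
    nxt = scored[:2]                          # elitism: keep the best two
    while len(nxt) < POP:
        a, b = random.sample(scored[:10], 2)  # parents from the top third
        cut = random.randrange(1, len(a)) if random.random() < CX else 0
        child = [1 - g if random.random() < MUT else g
                 for g in a[:cut] + b[cut:]]
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(f"validation accuracy {fitness(best):.2f} "
      f"with {sum(best)}/{len(train)} instances kept")
```

In the paper's full model the chromosome additionally carries the kernel parameter codes, so fitness evaluation retrains the SVM with both the selected instances and the decoded parameters.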

Issue tracking and voting rate prediction for 19th Korean president election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.199-219
    • /
    • 2018
  • With the everyday use of the Internet and the spread of various smart devices, users have become able to communicate in real time, and the existing communication style has changed. As the Internet changed who produces information, data grew massive, creating what is called Big Data. These Big Data are seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Since text data exist in various places such as newspapers, books, and the web, the amount of data is very large and diverse, making it suitable for understanding social reality. In recent years, there has been an increasing number of attempts to analyze text from the web, such as SNS and blogs, where the public can communicate freely. This is recognized as a useful method to grasp public opinion immediately, so it can be used for political, social and cultural issue research. Text mining has received much attention as a way to investigate candidates' reputation among the public and to predict the voting rate instead of polling. This is because many people question the credibility of surveys, and people tend to refuse to reveal their real intentions when asked to respond to a poll. This study collected comments from the largest Internet portal site in Korea and conducted research on the 19th Korean presidential election in 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a period that includes the prohibition period on public opinion polls just prior to the presidential election day. We analyzed frequencies, associated emotional words, topic emotions, and candidate voting rates. By frequency analysis, we identified the words that were the most important issues each day. In particular, after each presidential debate, the candidate who became an issue appeared at the top of the frequency analysis.
By the analysis of associated emotional words, we were able to identify the issues most relevant to each candidate. Topic emotion analysis was used to identify each candidate's topics and to express the public's emotions about those topics. Finally, we estimated the voting rate by combining comment volume and sentiment score. In doing so, we explored the issues for each candidate and predicted the voting rate. The analysis showed that news comments are an effective tool for tracking the issues of presidential candidates and for predicting the voting rate. In particular, this study provided daily issues and a quantitative index of sentiment, predicted the voting rate for each candidate, and precisely matched the ranking of the top five candidates. Each candidate can objectively grasp public opinion and reflect it in an election strategy: positive issues can be used more actively, and negative issues can be corrected. In particular, candidates should be aware that their reputation can be severely damaged if they face a moral problem. Voters can objectively look at the issues and public opinion about each candidate and make more informed decisions when voting. If they refer to results like these before voting, they can see the opinions of the public in the Big Data and vote for a candidate from a more objective perspective. If candidates campaign with reference to Big Data analysis, the public will be more active on the web, recognizing that their wants are being reflected, and political views can be expressed in various places on the web. This can contribute to political participation by the people.
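One way to combine comment volume with a sentiment score into vote-share estimates can be sketched as follows. The scoring rule (volume scaled by 1 + sentiment, then normalized) and all numbers are assumptions for illustration; the abstract does not give the paper's exact combination formula.

```python
def estimate_vote_shares(stats):
    """Estimate vote shares from comment volume and mean sentiment.

    `stats` maps candidate -> (comment_volume, mean_sentiment in [-1, 1]).
    Each candidate is scored volume * (1 + sentiment), so positive
    sentiment amplifies the volume and negative sentiment discounts it;
    scores are then normalized to percentage shares.
    """
    scores = {c: v * (1.0 + s) for c, (v, s) in stats.items()}
    total = sum(scores.values())
    return {c: round(100 * sc / total, 1) for c, sc in scores.items()}

# Hypothetical comment statistics for three candidates
shares = estimate_vote_shares({
    "A": (90_000, 0.10),
    "B": (60_000, 0.25),
    "C": (40_000, -0.20),
})
print(shares)
```

Any monotone combination of volume and sentiment would preserve the ranking, which is the quantity the study reports matching for the top five candidates.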

GnRH Agonist Stimulation Test (GAST) for Prediction of Ovarian Response in Controlled Ovarian Stimulation (COH) (난소기능평가를 위한 Gonadotropin Releasing Hormone Agonist Stimulation Test (GAST)의 효용성에 관한 연구)

  • Kim, Mee-Ran;Song, In-Ok;Yeon, Hye-Jeong;Choi, Bum-Chae;Paik, Eun-Chan;Koong, Mi-Kyoung;Song, Il-Pyo;Lee, Jin-Woo;Kang, Inn-Soo
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.26 no.2
    • /
    • pp.163-170
    • /
    • 1999
  • Objectives: The aims of this study were 1) to determine whether GAST is a better indicator for predicting ovarian response to COH than patient age or basal FSH level, and 2) to evaluate its role in detecting abnormal ovarian response. Design: Prospective study of 118 patients undergoing IVF-ET with a GnRH-a short protocol during May-September 1995. Materials and Methods: After blood sampling for basal FSH and estradiol (E2) on cycle day 2, 0.5 ml (0.525 mg) of a GnRH agonist (Suprefact®, Hoechst) was injected subcutaneously. Serum E2 was measured 24 hours later. The initial E2 difference (ΔE2) was defined as the change in E2 on day 3 over the baseline day-2 value. Sixteen patients with an ovarian cyst, a single ovary, or an incorrect blood collection time were excluded from the analysis. The patients were divided into three groups by ΔE2: group A (n = 30), ΔE2 < 40 pg/ml; group B (n = 52), 40 pg/ml ≤ ΔE2 < 100 pg/ml; group C (n = 20), ΔE2 ≥ 100 pg/ml. COH was performed with GnRH agonist/HMG/hCG, followed by IVF-ET. The ratio of E2 on the day of hCG injection to the number of ampules of gonadotropins used (E2 hCGday/Amp) was regarded as ovarian responsiveness. Poor ovarian response and overstimulation were defined as E2 hCGday less than 600 pg/ml and greater than 5000 pg/ml, respectively. Results: Mean ages (± SEM) in groups A, B and C were 33.7 ± 0.8*, 31.5 ± 0.6 and 30.6 ± 0.5*, respectively (*: p < 0.05). The mean basal FSH level of group A (11.1 ± 1.1 mIU/ml) was significantly higher than those of groups B (7.4 ± 0.2 mIU/ml) and C (6.8 ± 0.4 mIU/ml) (p < 0.001). The mean E2 hCGday of group A was significantly lower than those of groups B and C: 1402.1 ± 187.7 pg/ml, 3153.2 ± 240.0 pg/ml and 4078.8 ± 306.4 pg/ml, respectively (p < 0.0001).
The number of ampules of gonadotropins used in group A was significantly greater than in groups B and C: 38.6 ± 2.3, 24.2 ± 1.1 and 18.5 ± 1.0 (p < 0.0001). The number of oocytes retrieved in group A was significantly smaller than in groups B and C: 6.4 ± 1.1, 15.5 ± 1.1 and 18.6 ± 1.6, respectively (p < 0.0001). By stepwise multiple regression, only ΔE2 showed a significant correlation (r = 0.68, p < 0.0001) with E2 hCGday/Amp, while age and basal FSH level were not significant. Likewise, only ΔE2 correlated significantly with the number of oocytes retrieved (r = 0.57, p < 0.001). All four patients whose COH was canceled due to poor ovarian response belonged to group A (Fisher's exact test, p < 0.01). Whereas none of the 30 patients in group A (0%) had overstimulation, 14 of the 72 patients (19.4%) in groups B and C did (Fisher's exact test, p < 0.01). Conclusions: These data suggest that the initial E2 difference after GAST may be a better prognostic indicator of ovarian response to COH than age or basal FSH level. Since the initial E2 difference shows a significant association with abnormal ovarian responses, such as a poor response necessitating cycle cancellation or overstimulation, GAST may be helpful in the monitoring and counseling of patients during COH in IVF-ET cycles.
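The ΔE2 grouping above is a simple threshold rule and can be written directly. Note the group C boundary is read here as ΔE2 ≥ 100 pg/ml, since group B already covers 40 to under 100 (the abstract's "≤ 100" for group C appears to be a typo):

```python
def gast_group(e2_day2, e2_day3):
    """Classify a GAST response by the initial E2 difference (pg/ml).

    Groups follow the abstract: A: delta < 40; B: 40 <= delta < 100;
    C: delta >= 100.
    """
    delta = e2_day3 - e2_day2
    if delta < 40:
        return "A"   # blunted response: higher risk of poor response / cancellation
    if delta < 100:
        return "B"
    return "C"       # brisk response: watch for overstimulation

print(gast_group(50, 75), gast_group(50, 120), gast_group(40, 160))  # -> A B C
```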


An Analysis on Factors Related to the Job Satisfaction of Dental Hygienists at J Region (J지역 치과위생사의 직무스트레스 요인 분석)

  • Lee, Hyun-Ok;Ju, On-Ju;Kim, Young-Im
    • Journal of dental hygiene science
    • /
    • v.7 no.2
    • /
    • pp.65-72
    • /
    • 2007
  • The purpose of this study was to examine the job stress and job stressors of dental hygienists. The subjects in the study were 220 dental hygienists who worked in North Jeolla Province. After a mail survey was conducted from July 24 through September 24, 2006, responses from 180 dental hygienists (response rate 81.8%) were gathered, and 156 answer sheets were analyzed after 24 incomplete ones were excluded. The findings of the study were as follows: 1. As for the correlation of overall job stress to turnover intention, their overall stress was influenced by unreasonable treatment (r = 0.382), conflicts as a professional (r = 0.285), tough working environments (r = 0.303), conflicts with colleagues (r = 0.233), and heavy workload (r = 0.262). Those who were more stressed were more willing to change their occupation, and their stress level made a statistically significant difference (p < 0.01). 2. A multiple regression analysis was carried out with the job stressors and turnover intention as independent and dependent variables, respectively, to see how each of the stressors affected turnover intention. Unreasonable treatment (p < 0.001) was identified as having the biggest impact, followed by conflicts as a professional (p < 0.05) and tough working environments (p < 0.05). The stressors explained 22.2 percent of turnover intention.


DEVELOPMENT OF SAFETY-BASED LEVEL-OF-SERVICE CRITERIA FOR ISOLATED SIGNALIZED INTERSECTIONS (독립신호 교차로에서의 교통안전을 위한 서비스수준 결정방법의 개발)

  • Dr. Tae-Jun Ha
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.3-32
    • /
    • 1995
  • The Highway Capacity Manual specifies procedures for evaluating intersection performance in terms of delay per vehicle. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections based on the relative hazard of alternative intersection designs and signal timing plans. Conflict opportunity models were developed for those crossing, diverging, and stopping maneuvers which are associated with left-turn and rear-end accidents. Safety-based level-of-service criteria were then developed based on the distribution of conflict opportunities computed from the developed models. A case study evaluation of the level-of-service analysis methodology revealed that the developed safety-based criteria were not as sensitive to changes in prevailing traffic, roadway, and signal timing conditions as the traditional delay-based measure. However, the methodology did permit a quantitative assessment of the trade-off between delay reduction and safety improvement. The Highway Capacity Manual (HCM) specifies procedures for evaluating intersection performance in terms of a wide variety of prevailing conditions such as traffic composition, intersection geometry, traffic volumes, and signal timing (1). At the present time, however, performance is only measured in terms of delay per vehicle. This is a parameter which is widely accepted as a meaningful and useful indicator of the efficiency with which an intersection is serving traffic needs. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. For example, it is well known that the change from permissive to protected left-turn phasing can reduce left-turn accident frequency.
However, the HCM only permits a quantitative assessment of the impact of this alternative phasing arrangement on vehicle delay. It is left to the engineer or planner to subjectively judge the level of safety benefits, and to evaluate the trade-off between the efficiency and safety consequences of the alternative phasing plans. Numerous examples of other geometric design and signal timing improvements could also be given. At present, the principal methods available to the practitioner for evaluating the relative safety of signalized intersections are: a) the application of engineering judgement, b) accident analyses, and c) traffic conflicts analysis. Reliance on engineering judgement has obvious limitations, especially when placed in the context of the elaborate HCM procedures for calculating delay. Accident analyses generally require some type of before-after comparison, either for the case study intersection or for a large set of similar intersections. In either situation, there are problems associated with compensating for regression-to-the-mean phenomena (2), as well as with obtaining an adequate sample size. Research has also pointed to potential bias caused by the way in which exposure to accidents is measured (3, 4). Because of the problems associated with traditional accident analyses, some have promoted the use of the traffic conflicts technique (5). However, this procedure also has shortcomings in that it requires extensive field data collection and trained observers to identify the different types of conflicts occurring in the field. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections that would be compatible and consistent with that presently found in the HCM for evaluating efficiency-based level of service as measured by delay per vehicle (6).
The intent was not to develop a new set of accident prediction models, but to design a methodology to quantitatively predict the relative hazard of alternative intersection designs and signal timing plans.
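By analogy with the HCM's delay-based letter grades, a safety-based level of service can be read off from the computed conflict-opportunity rate with a threshold table. The threshold values below are invented for illustration; the paper derives its criteria from the distribution of conflict opportunities, which the abstract does not tabulate.

```python
def safety_los(conflict_opportunities_per_hour,
               thresholds=(10, 25, 50, 100, 200)):
    """Map a conflict-opportunity rate to a letter level of service.

    Mirrors the HCM's LOS A-F convention; the numeric thresholds here
    are hypothetical placeholders, not the paper's calibrated criteria.
    """
    for letter, limit in zip("ABCDE", thresholds):
        if conflict_opportunities_per_hour <= limit:
            return letter
    return "F"

print(safety_los(18), safety_los(240))  # -> B F
```

With such a mapping, a protected left-turn phase that lowers the conflict-opportunity rate but raises delay can be compared grade-for-grade against the delay-based LOS, which is exactly the trade-off the methodology is meant to expose.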


A Study on the Installation of Groyne using Critical Movement Velocity and Limiting Tractive Force (이동한계유속과 한계소류력을 활용한 수제 설치에 관한 연구)

  • Kim, Yeong Sik;Park, Shang Ho;An, Ik Tae;Choo, Yeon Moon
    • Journal of Wetlands Research
    • /
    • v.22 no.3
    • /
    • pp.194-199
    • /
    • 2020
  • Unlike in the past, the world faces water shortages due to climate change along with difficulties in simultaneously managing flood risk. The Four Major Rivers Project was carried out with the aim of becoming a leading nation in water by managing water resources and fostering the water industry, and its construction period was relatively short for such an unprecedented scale. Therefore, prediction and analysis of how the river environment would change after the Four Major Rivers Project were insufficient. Currently, parts of the constructed sections undergo repeated erosion and sedimentation due to riverbed sandification caused by large-scale dredging and flood-season impoundment, and head erosion of the tributaries occurs. To solve these problems, riverbed maintenance works were installed, but they resulted in erosion of both banks of the river, so the development of new approaches and techniques to keep the riverbed stable against erosion and excessive sedimentation is required. A groyne secures a certain water depth for the main stream by concentrating the flow toward the center, and prevents levee erosion by controlling the flow direction and flow velocity. In addition, groynes provide various ecological environments by inducing local erosion and deposition, forming a natural riverbed, beyond their function of protecting the river bank and embankment. Therefore, after reviewing the methods currently in use for determining the shape of groyne structures based on the critical movement velocity and limiting tractive force, a new critical movement velocity (Ū_d) and a new resistance coefficient formula, considering factors applicable to actual domestic streams, were developed, and measures applicable to groyne installation were proposed.

Sensitivity of Simulated Water Temperature to Vertical Mixing Scheme and Water Turbidity in the Yellow Sea (수직 혼합 모수화 기법과 탁도에 따른 황해 수온 민감도 실험)

  • Kwak, Myeong-Taek;Seo, Gwang-Ho;Choi, Byoung-Ju;Kim, Chang-Sin;Cho, Yang-Ki
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.18 no.3
    • /
    • pp.111-121
    • /
    • 2013
  • Accurate prediction of sea water temperature is emphasized for precise local weather forecasts and for understanding changes in the ecosystem. The Yellow Sea, which has turbid water and strong tidal currents, is a unique shallow marginal sea, so it is essential to include the effects of turbidity and strong tidal mixing for a realistic simulation of its temperature distribution. Evaluating the ocean circulation model's response to the vertical mixing scheme and turbidity is the primary objective of this study. A three-dimensional ocean circulation model (Regional Ocean Modeling System) was used to perform the numerical simulations. The Mellor-Yamada level 2.5 closure (M-Y) and K-Profile Parameterization (KPP) schemes were selected for vertical mixing parameterization, and the effect of Jerlov water types 1, 3 and 5 was also evaluated. The simulated temperature distribution was compared with observations by the National Fisheries Research and Development Institute to estimate the model's response to turbidity and the vertical mixing schemes in the Yellow Sea. Simulations with the M-Y vertical mixing scheme produced relatively stronger vertical mixing and warmer bottom temperatures than the observations. The KPP scheme produced weaker vertical mixing and did not reproduce the tidal mixing front along the coast well, but it kept the bottom temperature closer to the observations. Consequently, numerical ocean circulation simulations with the M-Y vertical mixing scheme tend to produce a well-mixed vertical temperature structure, while those with the KPP scheme tend to produce a stratified vertical temperature structure. With a higher Jerlov water type, the sea surface temperature is higher and the sea bottom temperature is lower because the downward shortwave radiation is almost entirely absorbed near the sea surface.
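The Jerlov-type effect described above follows from how shortwave radiation is attenuated with depth. A common parameterization (used in ROMS) is the two-band Paulson and Simpson (1977) profile, I(z) = I0·(R·e^(−z/ζ1) + (1−R)·e^(−z/ζ2)); the coefficient table below is the standard one for numbered water types 1-5 (Jerlov I, IA, IB, II, III), shown as an assumed scheme since the abstract does not state the exact formulation used.

```python
import math

# (R, zeta1 [m], zeta2 [m]) per water type, after Paulson & Simpson (1977).
JERLOV = {
    1: (0.58, 0.35, 23.0),
    2: (0.62, 0.60, 20.0),
    3: (0.67, 1.00, 17.0),
    4: (0.77, 1.50, 14.0),
    5: (0.78, 1.40, 7.90),
}

def swrad_frac(z, water_type):
    """Fraction of surface shortwave radiation remaining at depth z (m)."""
    r, z1, z2 = JERLOV[water_type]
    return r * math.exp(-z / z1) + (1 - r) * math.exp(-z / z2)

# Higher (more turbid) water types pass less shortwave to depth, so the
# surface warms more and the bottom stays cooler, as in the abstract.
for wt in (1, 3, 5):
    print(wt, round(swrad_frac(10.0, wt), 3))
```

At 10 m the remaining fraction drops monotonically from type 1 to type 5, which is the mechanism behind the simulated surface warming and bottom cooling in the more turbid runs.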