• Title/Abstract/Keyword: Grid

Search results: 9,456 items (processing time: 0.031 seconds)

Interpreting Bounded Rationality in Business and Industrial Marketing Contexts: Executive Training Case Studies (阐述工商业背景下的有限合理性: 执行官培训案例研究)

  • Woodside, Arch G.; Lai, Wen-Hsiang; Kim, Kyung-Hoon; Jung, Deuk-Keyo
    • Journal of Global Scholars of Marketing Science / Vol. 19 No. 3 / pp. 49-61 / 2009
  • This article provides training exercises for executives in interpreting subroutine maps of executives' thinking in processing business and industrial marketing problems and opportunities. This study builds on premises that Schank proposes about learning and teaching, including (1) learning occurs by experiencing, and the best instruction offers learners opportunities to distill their knowledge and skills from interactive stories in the form of goal-based scenarios, team projects, and understanding stories from experts; and (2) telling does not lead to learning because learning requires action; training environments should emphasize active engagement with stories, cases, and projects. Each training case study includes executive exposure to decision system analysis (DSA). The training case requires the executive to write a "Briefing Report" of a DSA map. Instructions to the executive trainee in writing the briefing report include coverage of (1) details of the essence of the DSA map and (2) a statement of warnings and opportunities that the executive map reader interprets within the DSA map. The maximum length for a briefing report is 500 words, an arbitrary rule that works well in executive training programs. Following this introduction, section two of the article briefly summarizes relevant literature on how humans think within contexts in response to problems and opportunities. Section three illustrates the creation and interpretation of DSA maps using a training exercise in pricing a chemical product to different OEM (original equipment manufacturer) customers. Section four presents a training exercise in pricing decisions by a petroleum manufacturing firm. Section five presents a training exercise in marketing strategies by an office furniture distributor along with buying strategies by business customers. Each of the three training exercises is based on research into information processing and decision making of executives operating in marketing contexts. Section six concludes the article with suggestions for use of this training case and for developing additional training cases for honing executives' decision-making skills. Todd and Gigerenzer propose that humans use simple heuristics because they enable adaptive behavior by exploiting the structure of information in natural decision environments: "Simplicity is a virtue, rather than a curse". Bounded rationality theorists emphasize the centrality of Simon's proposition, "Human rational behavior is shaped by a scissors whose blades are the structure of the task environments and the computational capabilities of the actor". Gigerenzer's view is relevant to Simon's environmental blade and to the environmental structures in the three cases in this article: "The term environment, here, does not refer to a description of the total physical and biological environment, but only to that part important to an organism, given its needs and goals." The present article directs attention to research that combines reports on the structure of task environments with the use of adaptive toolbox heuristics of actors. The DSA mapping approach here concerns the match between strategy and an environment, that is, the development and understanding of ecological rationality theory. Aspiration adaptation theory is central to this approach. Aspiration adaptation theory models decision making as a multi-goal problem without aggregation of the goals into a complete preference order over all decision alternatives.
The three case studies in this article permit the learner to apply propositions in aspiration level rules in reaching a decision. Aspiration adaptation takes the form of a sequence of adjustment steps. An adjustment step shifts the current aspiration level to a neighboring point on an aspiration grid by a change in only one goal variable. An upward adjustment step is an increase and a downward adjustment step is a decrease of a goal variable. Creating and using aspiration adaptation levels is integral to bounded rationality theory. The present article increases understanding and expertise in both aspiration adaptation and bounded rationality theories by providing learner experiences and practice in using propositions from both theories. Practice in ranking CTSs and writing TOP gists from DSA maps serves to clarify and deepen Selten's view, "Clearly, aspiration adaptation must enter the picture as an integrated part of the search for a solution." The body of "direct research" by Mintzberg, Gladwin's ethnographic decision tree modeling, and Huff's work on mapping strategic thought suggest where to look for research that considers both the structure of the environment and the computational capabilities of the actors making decisions in these environments. Such research on bounded rationality permits both further development of theory on how and why decisions are made in real life and the development of learning exercises in the use of heuristics occurring in natural environments. The exercises in the present article encourage learning skills and principles of using fast and frugal heuristics in contexts of their intended use. The exercises respond to Schank's wisdom, "In a deep sense, education isn't about knowledge or getting students to know what has happened. It is about getting them to feel what has happened. This is not easy to do. Education, as it is in schools today, is emotionless. This is a huge problem." The three cases and accompanying set of exercise questions adhere to Schank's view, "Processes are best taught by actually engaging in them, which can often mean, for mental processing, active discussion."
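
The adjustment-step mechanism described above lends itself to a compact illustration. The sketch below (Python) represents an aspiration level as a point on a grid of goal variables, where each step raises or lowers exactly one goal variable by one grid unit; the goal names and step sizes are invented for illustration and are not taken from the article.

```python
# Minimal sketch of an aspiration-grid adjustment step (illustrative goals and step sizes).
aspiration = {"price": 100, "volume": 5000, "margin": 12}   # current aspiration level
grid_step = {"price": 5, "volume": 500, "margin": 1}        # grid spacing per goal variable

def adjust(aspiration, goal, direction):
    """direction = +1 for an upward step, -1 for a downward step; only one goal changes."""
    new_level = dict(aspiration)
    new_level[goal] += direction * grid_step[goal]
    return new_level

# One downward adjustment step on "price" moves to a neighboring point on the aspiration grid.
print(adjust(aspiration, "price", -1))
```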


Evaluating efficiency of Split VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes (골반 림프선을 포함한 전립선암 치료 시 Split VMAT plan의 유용성 평가)

  • Mun, Jun Ki; Son, Sang Jun; Kim, Dae Ho; Seo, Seok Jin
    • The Journal of Korean Society for Radiation Therapy / Vol. 27 No. 2 / pp. 145-156 / 2015
  • Purpose : The purpose of this study is to evaluate the efficiency of Split VMAT planning (in which the rectum contour is divided into an upper and a lower part to reduce rectal dose) compared to conventional VMAT planning (contouring the whole rectum) for prostate cancer radiotherapy involving pelvic lymph nodes. Materials and Methods : A total of 9 cases were enrolled. Each case received radiotherapy with Split VMAT planning to the prostate and the involved pelvic lymph nodes. Treatment was delivered using TrueBeam STX (Varian Medical Systems, USA) and planned on Eclipse (Ver. 10.0.42, Varian, USA) with PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The lower rectum contour was defined as starting 1 cm superior and ending 1 cm inferior to the prostate PTV; the upper rectum was defined as the remainder of the whole rectum excluding the lower rectum. Split VMAT plan parameters consisted of 10 MV coplanar $360^{\circ}$ arcs, with collimator angles of $30^{\circ}$ and $30^{\circ}$, respectively. An SIB (Simultaneous Integrated Boost) prescription was employed, delivering 50.4 Gy to the pelvic lymph nodes and 63~70 Gy to the prostate in 28 fractions. The $D_{mean}$ of the whole rectum from the Split VMAT plan was applied as the DVC (Dose Volume Constraint) for the whole rectum in the Conventional VMAT plan. In addition, all parameters were set to be the same as in the existing treatment plans. To minimize the dose differences that appear randomly during optimization, all plans were optimized and calculated twice using a 0.2 cm grid. All plans were normalized to the prostate $PTV_{100%}$ = 90% or 95%. The plans were compared using the $D_{mean}$ of the whole rectum, upper rectum, lower rectum, and bladder, the $V_{50%}$ of the upper rectum, total MU, and the H.I. (Homogeneity Index) and C.I. (Conformity Index) of the PTV. All Split VMAT plans were verified by gamma test with portal dosimetry using EPID. Results : DVH analysis demonstrated a difference between the Conventional and the Split VMAT plans. The Split VMAT plan was better for the $D_{mean}$ of the whole rectum (difference up to 134.4 cGy, at least 43.5 cGy, average 75.6 cGy), the $D_{mean}$ of the upper rectum (up to 1113.5 cGy, at least 87.2 cGy, average 550.5 cGy), the $D_{mean}$ of the lower rectum (up to 100.5 cGy, at least -34.6 cGy, average 34.3 cGy), the $D_{mean}$ of the bladder (up to 271 cGy, at least -55.5 cGy, average 117.8 cGy), and the $V_{50%}$ of the upper rectum (up to 63.4%, at least 3.2%, average 23.2%). There was no significant difference in the H.I. and C.I. of the PTV between the two plans. The Split VMAT plan required, on average, 77 MU more than the conventional plan. All IMRT verification gamma test results for the Split VMAT plan passed with over 90.0% at 2 mm / 2%. Conclusion : The Split VMAT plan appeared to be more favorable in most cases than the Conventional VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes. The split VMAT planning technique made it possible to reduce the upper rectum dose, and thus the whole rectal dose, compared to conventional VMAT planning, and it also increased treatment efficiency.
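
The plan comparison above relies on the homogeneity index (H.I.) and conformity index (C.I.) of the PTV. The abstract does not state which definitions were used, so the sketch below (Python) assumes the widely used ICRU 83 homogeneity index $(D_{2\%}-D_{98\%})/D_{50\%}$ and an RTOG-style conformity index (volume receiving the prescription dose divided by the PTV volume); it is an illustrative calculation, not the study's implementation.

```python
# Assumed plan-quality metrics (not necessarily those used in the study):
#   H.I. = (D2% - D98%) / D50%     (ICRU Report 83 homogeneity index)
#   C.I. = V_prescription / V_PTV   (RTOG-style conformity index)
import numpy as np

def homogeneity_index(ptv_dose):
    """ptv_dose: 1-D array of dose values (cGy) sampled inside the PTV."""
    d2 = np.percentile(ptv_dose, 98)    # D2%: dose received by the hottest 2% of the volume
    d98 = np.percentile(ptv_dose, 2)    # D98%: dose covering 98% of the volume
    d50 = np.percentile(ptv_dose, 50)
    return (d2 - d98) / d50

def conformity_index(body_dose, ptv_mask, prescription):
    """body_dose: dose grid (cGy); ptv_mask: boolean grid marking PTV voxels (uniform voxels assumed)."""
    v_ri = np.count_nonzero(body_dose >= prescription)   # voxels receiving >= prescription dose
    v_ptv = np.count_nonzero(ptv_mask)                    # voxels inside the PTV
    return v_ri / v_ptv

# Example with synthetic data: a ~63 Gy prostate plan sampled in cGy.
rng = np.random.default_rng(0)
ptv_dose = rng.normal(6300, 120, size=5000)
print(homogeneity_index(ptv_dose))
```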


Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin; Ryoo, Eunchung; Jung, Min Kyu; Kim, Jae Kyeong; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / Vol. 18 No. 3 / pp. 185-202 / 2012
  • Since the value of information has been realized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression carries a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with an intelligent service that enables the perception of human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and has applied its studies to commercial business. In the academic area, a number of conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable since MRA can only explain the linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies such as Jung and Kim (2012) have used ANN as an alternative, and they reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g. setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extended version of Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we tried to build a model that can measure the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimulating contents and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the '${\varepsilon}$-insensitive loss function' and the 'grid search' technique to find the optimal values of parameters such as C, d, ${\sigma}^2$, and ${\varepsilon}$. In the case of ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy for the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive / negative valence), SVR showed the best performance for the hold-out data set.
ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who wish to build models for recognizing human emotions.
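
As a concrete illustration of the ε-insensitive SVR with grid search described above, the following sketch uses scikit-learn's SVR and GridSearchCV with MAE as the scoring measure. The parameter grid, RBF kernel choice, and synthetic data are assumptions for illustration and are not the values or features used in the study.

```python
# Minimal sketch: grid search over SVR hyperparameters with MAE-based scoring.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.normal(size=(297, 10))                          # 297 cases of facial-feature vectors (synthetic)
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=297)     # synthetic arousal scores

param_grid = {
    "C": [0.1, 1, 10, 100],        # regularization strength
    "gamma": [1e-3, 1e-2, 1e-1],   # RBF width, playing the role of 1 / (2 * sigma^2)
    "epsilon": [0.01, 0.1, 0.5],   # width of the epsilon-insensitive tube
}
search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid,
    scoring="neg_mean_absolute_error",   # MAE, the comparison measure used above
    cv=5,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```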

Study of East Asia Climate Change for the Last Glacial Maximum Using Numerical Model (수치모델을 이용한 Last Glacial Maximum의 동아시아 기후변화 연구)

  • Kim, Seong-Joong; Park, Yoo-Min; Lee, Bang-Yong; Choi, Tae-Jin; Yoon, Young-Jun; Suk, Bong-Chool
    • The Korean Journal of Quaternary Research / Vol. 20 No. 1 / pp. 51-66 / 2006
  • The climate of the last glacial maximum (LGM) in northeast Asia is simulated with an atmospheric general circulation model, NCAR CCM3, at a spectral truncation of T170, corresponding to a grid cell size of roughly 75 km. The modern climate is simulated with prescribed sea surface temperatures and sea ice provided by NCAR and contemporary atmospheric $CO_2$, topography, and orbital parameters, while the LGM simulation was forced with the reconstructed CLIMAP sea surface temperatures, sea ice distribution, ice sheet topography, reduced $CO_2$, and orbital parameters. Under LGM conditions, surface temperature is markedly reduced in winter by more than $18^{\circ}C$ in the Korean west sea and the continental margin of the Korean east sea, where the ocean was exposed as land in the LGM, whereas in these areas the surface is warmer than present in summer by up to $2^{\circ}C$. This is due to the difference in heat capacity between ocean and land. Overall, in the LGM the surface is cooled by $4{\sim}6^{\circ}C$ over northeast Asian land and by $7.1^{\circ}C$ over the entire area. An analysis of surface heat fluxes shows that the surface cooling is due to the increase in outgoing longwave radiation associated with the reduced $CO_2$ concentration. The reduction in surface temperature leads to a weakening of the hydrological cycle. In winter, precipitation decreases mostly in the southeastern part of Asia by about $1{\sim}4\;mm/day$, while in summer a larger reduction is found over China. Overall, annual-mean precipitation decreases by about 50% in the LGM. In northeast Asia, evaporation is also reduced overall in the LGM, but the reduction in precipitation is larger, eventually leading to a drier climate. The drier LGM climate simulated in this study is consistent with proxy evidence compiled in other areas. Overall, the high-resolution model captures the climate features reasonably well over the global domain.
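
The statement that T170 corresponds to a grid cell size of roughly 75 km can be checked with a quick calculation. The sketch below assumes a standard quadratic Gaussian transform grid (about 3N+1 longitudes for triangular truncation T_N, rounded to 512 here); the exact transform grid of the CCM3 configuration is not given in the abstract.

```python
# Back-of-the-envelope check of "~75 km at T170" under an assumed quadratic transform grid.
import math

N = 170                                   # triangular truncation wavenumber
n_lon = 512                               # 3*N + 1 = 511 longitudes, rounded to 512 for the FFT
earth_circumference_km = 2 * math.pi * 6371.0
dx_equator_km = earth_circumference_km / n_lon
print(f"{dx_equator_km:.0f} km per grid cell at the equator")   # ~78 km, i.e. roughly 75 km
```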


A Thermal Time-Driven Dormancy Index as a Complementary Criterion for Grape Vine Freeze Risk Evaluation (포도 동해위험 판정기준으로서 온도시간 기반의 휴면심도 이용)

  • Kwon, Eun-Young; Jung, Jea-Eun; Chung, U-Ran; Lee, Seung-Jong; Song, Gi-Cheol; Choi, Dong-Geun; Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / Vol. 8 No. 1 / pp. 1-9 / 2006
  • Despite the recently observed warmer winters in Korea, more freeze injuries and associated economic losses are reported in the fruit industry than ever before. Existing freeze-frost forecasting systems employ only daily minimum temperature for judging the potential damage to dormant flowering buds and cannot accommodate potential biological responses such as short-term acclimation of plants to severe weather episodes as well as annual variation in climate. We introduce 'dormancy depth', in addition to daily minimum temperature, as a complementary criterion for judging the potential damage of freezing temperatures to dormant flowering buds of grape vines. Dormancy depth can be estimated by a phenology model driven by daily maximum and minimum temperature and is expected to be a reasonable proxy for the physiological tolerance of buds to low temperature. Dormancy depth at a selected site was estimated for a climatological normal year by this model, and we found a close similarity in the time-course change pattern between the estimated dormancy depth and the known cold tolerance of fruit trees. Inter-annual and spatial variation in dormancy depth were identified by this method, showing the feasibility of using dormancy depth as a proxy indicator for tolerance to low temperature during the winter season. The model was applied to 10 vineyards which were recently damaged by a cold spell, and a temperature - dormancy depth - freeze injury relationship was formulated into an exponential-saturation model which can be used for judging freeze risk under a given set of temperature and dormancy depth. Based on this model and the expected lowest temperature with a 10-year recurrence interval, a freeze risk probability map was produced for Hwaseong County, Korea. The results seemed to explain why the vineyards in the warmer part of Hwaseong County have been hit by more freeze damage than those in the cooler part of the county. A dormancy depth - minimum temperature dual-engine freeze warning system was designed for vineyards in major production counties in Korea by combining the site-specific dormancy depth and minimum temperature forecasts with the freeze risk model. In this system, the daily accumulation of thermal time since last fall yields the dormancy state (depth) for today. The regional minimum temperature forecast for tomorrow by the Korea Meteorological Administration is converted to a site-specific forecast at 30 m resolution. These data are input to the freeze risk model, and the percent damage probability is calculated for each grid cell and mapped for the entire county. Similar approaches may be used to develop freeze warning systems for other deciduous fruit trees.
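
As a rough illustration of the thermal-time-driven dormancy index and the exponential-saturation freeze-risk idea described above, the sketch below accumulates a simple chill-type thermal time into a dormancy depth and combines it with a forecast minimum temperature. The base temperature, accumulation rule, and risk coefficients are illustrative assumptions; the paper's actual phenology model and fitted coefficients are not reproduced here.

```python
# Minimal sketch (illustrative parameters): dormancy depth from daily thermal time,
# then an assumed exponential-saturation freeze-risk function of forecast minimum temperature.
import math

T_BASE = 5.0   # degC, assumed base temperature for thermal-time accumulation

def daily_dormancy_increment(t_max, t_min):
    """Chill-type thermal time for one day: degree-days below the base temperature."""
    t_mean = (t_max + t_min) / 2.0
    return max(0.0, T_BASE - t_mean)       # dormancy deepens on cold days

def dormancy_depth(daily_temps):
    """daily_temps: iterable of (t_max, t_min) pairs since last fall."""
    return sum(daily_dormancy_increment(tx, tn) for tx, tn in daily_temps)

def freeze_risk(t_min_forecast, depth, a=0.15, b=0.01):
    """Assumed exponential-saturation form: risk rises as t_min drops and as dormancy becomes shallow."""
    return 1.0 - math.exp(-a * max(0.0, -t_min_forecast) * math.exp(-b * depth))

temps = [(3.0, -5.0)] * 60 + [(8.0, -1.0)] * 10   # synthetic winter followed by a warm spell
depth = dormancy_depth(temps)
print(depth, freeze_risk(-15.0, depth))
```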

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng; Rosli, Ahmad Nurzid; Jang, Chol-Hee; Lee, Kee-Sung; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / Vol. 18 No. 1 / pp. 1-21 / 2012
  • In recent years, the mobile phone has experienced an extremely fast evolution. It is equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics; other features include a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smart-phones to create a rich environment that offers a wide range of services and exciting possibilities. To date, in outdoor mobile AR research there are many popular location-based AR services, such as Layar and Wikitude. These systems have a major limitation: the AR contents are hardly ever overlaid precisely on the real target. Another line of research is context-based AR services using image recognition and tracking, in which the AR contents are precisely overlaid on the real target, but real-time performance is restricted by the retrieval time and is hard to achieve over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR. The system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system mainly consists of two major parts: a landmark browsing module and an annotation module. In the landmark browsing module, the user can view augmented virtual information (information media), such as text, pictures, and video, on the smart-phone viewfinder when pointing the smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied in this work. SURF point-based features are used in the matching process due to their robustness. To ensure the image retrieval and matching processes are fast enough for real-time tracking, we exploit the contextual device information (GPS and digital compass). This is necessary to select the nearest landmarks in the pointed orientation from the database; the queried image is only matched against this selected data, so the matching speed is significantly increased. The second part is the annotation module. Instead of viewing only the augmented information media, the user can create virtual annotations based on linked data. Full knowledge about the landmark is not required; the user can simply look for the appropriate topic by searching for it with a keyword in linked data. This helps the system find the target URI in order to generate correct AR contents. On the other hand, in order to recognize target landmarks, images of the selected building or landmark are captured from different angles and distances. This procedure resembles building a connection between the real building and the virtual information existing in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates. A grid-based clustering method and user location information are used to restrict the retrieval range. Compared with existing research using clusters and GPS information, where the retrieval time is around 70~80 ms, experimental results show that with our approach the retrieval time is reduced to around 18~20 ms on average. Therefore, the total processing time is reduced from 490~540 ms to 438~480 ms. The performance improvement will be more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
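
A minimal sketch of the grid-based retrieval restriction described above: landmark images are bucketed into a coarse latitude/longitude grid, and only images in the user's cell and its neighbors are passed on to SURF matching. The cell size, data layout, and coordinates are illustrative assumptions rather than the values used in the paper.

```python
# Minimal sketch: restrict candidate landmark images by a coarse lat/lon grid around the user.
from collections import defaultdict

CELL_DEG = 0.001   # assumed cell size, roughly 100 m in latitude

def cell_of(lat, lon):
    return (int(lat / CELL_DEG), int(lon / CELL_DEG))

def build_index(images):
    """images: iterable of (image_id, lat, lon) for the landmark image database."""
    index = defaultdict(list)
    for image_id, lat, lon in images:
        index[cell_of(lat, lon)].append(image_id)
    return index

def candidates(index, user_lat, user_lon):
    """Return image ids in the user's grid cell and its 8 neighboring cells."""
    ci, cj = cell_of(user_lat, user_lon)
    ids = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ids.extend(index.get((ci + di, cj + dj), []))
    return ids

# Example: only nearby images are returned, so SURF matching runs over a small candidate set.
index = build_index([("img_001", 37.5665, 126.9780), ("img_002", 37.5670, 126.9785)])
print(candidates(index, 37.5666, 126.9781))
```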