• Title/Summary/Keyword: Constraint analysis


A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.121-145 / 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem: the amount of computation required to find a solution increases exponentially with the problem size. For many years there have been many studies on university timetabling, driven by the need for automatic timetable generation for students' convenience and effective lessons, and for the effective allocation of subjects, lecturers, and classrooms. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, the course timetable for liberal arts is scheduled by the office of academic affairs, while the course timetable for major subjects is scheduled by each department of a university. We found several problems in our analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine, repetitive timetabling work manually. Second, many classes are concentrated into a few time slots in a timetable, which decreases the effectiveness of students' classes. Third, several major subjects may overlap required liberal-arts subjects in the same time slots, in which case students must choose only one of the overlapping subjects. Fourth, many subjects are lectured by the same lecturers every year, and most lecturers prefer the same time slots as in the previous year, so it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system based on two phases. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning.
In the second phase, the department schedules a timetable with the help of an interactive user interface under the timetabling criteria, based on a rule-based approach. This study provides illustrations from Hanshin University. We classified timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria comprise three criteria related to lecturer, class, and classroom, all of which are hard constraints. The extrinsic criteria comprise four criteria: 'the number of lesson hours' per lecturer, 'prohibition of lecture allocation to specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' The lesson-hours criterion has three sub-criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are also all hard constraints, except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we proposed two indices: one for measuring the similarity between the subjects of the current semester and those of previous timetables, and one for evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, subject name and lecturer, between the current semester and a previous semester. The distribution-degree index, based on information entropy, indicates how subjects are distributed over the timetable. To show this study's viability, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity of the most similar cases over all departments was estimated at 41.72%, which means that a timetable template generated from the most similar case will be helpful. Sensitivity analysis shows that the distribution degree will increase if we set 'the number of subjects in the same day-hour' to more than 90%.
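The two proposed indices can be sketched roughly as follows (a minimal illustration, assuming a matching-based similarity over (subject, lecturer) pairs and a normalized Shannon entropy for the distribution degree; function names are illustrative, not from the paper):

```python
import math
from collections import Counter

def similarity(current, previous):
    """Fraction of current-semester (subject, lecturer) pairs that also
    appear in a previous timetable (hypothetical form of the paper's index)."""
    prev = set(previous)
    matches = sum(1 for pair in current if pair in prev)
    return matches / len(current)

def distribution_degree(slot_assignments):
    """Normalized Shannon entropy of subject counts per day-hour slot:
    1.0 means subjects are spread evenly across the occupied slots."""
    counts = Counter(slot_assignments).values()
    total = sum(counts)
    h = -sum(c / total * math.log(c / total) for c in counts)
    return h / math.log(len(counts)) if len(counts) > 1 else 0.0
```

A timetable that spreads subjects evenly over slots scores near 1.0, while one concentrating all subjects in one slot scores 0.0, matching the "distribution degree" reading above.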

Analysis of Intervention in Activities of Daily Living for Stroke Patients in Korea: Focusing on Single-Subject Research Design (국내 뇌졸중 환자를 대상으로 한 일상생활활동 중재 연구 분석: 단일대상연구 설계를 중심으로)

  • Sung, Ji-Young;Choi, Yoo-Im
    • Therapeutic Science for Rehabilitation / v.13 no.1 / pp.9-21 / 2024
  • Objective : The purpose of this study was to identify the characteristics and quality of single-subject research studies that applied interventions to improve activities of daily living (ADL) in stroke patients. Methods : 'Stroke,' 'activities of daily living,' and 'single-subject studies' were used as search keywords for papers published in the last 15 years (2009-2023) in the Research Information Sharing Service, DBpia, and e-article databases. A total of nine papers were examined for their characteristics and quality before analysis. Results : The independent variables applied to improve ADL included constraint-induced therapy, mental practice for performing functional activities, virtual reality-based task training, subjective postural vertical training without visual feedback, bilateral upper limb movement, a core stability training program, traditional occupational therapy with neurocognitive rehabilitation, smooth pursuit eye movement, neck muscle vibration, and occupation-based community rehabilitation. The Assessment of Motor and Process Skills was the most common evaluation tool for measuring the dependent variables, used in four articles, and the Modified Barthel Index and the Canadian Occupational Performance Measure were used in two articles each. Regarding the qualitative level of the analyzed papers, of the nine studies, seven were at a high level, two at a moderate level, and none at a low level. Conclusion : Various types of rehabilitation treatment have been actively applied as intervention methods to improve the daily living activities of stroke patients, and the quality level of the single-subject studies applying ADL interventions was reliable.

Quality Assurance of Patients for Intensity Modulated Radiation Therapy (세기조절방사선치료(IMRT) 환자의 QA)

  • Yoon Sang Min;Yi Byong Yong;Choi Eun Kyung;Kim Jong Hoon;Ahn Seung Do;Lee Sang-Wook
    • Radiation Oncology Journal / v.20 no.1 / pp.81-90 / 2002
  • Purpose : To establish and verify a proper and practical IMRT (intensity-modulated radiation therapy) patient QA (quality assurance) program. Materials and Methods : An IMRT QA program consisting of 3 steps and 16 items was designed, and its validity was examined by applying it to 9 patients and 12 IMRT cases at various sites. The three-step QA program consists of an RTP-related QA, a treatment information flow QA, and a treatment delivery QA procedure. The evaluation of organ constraints, the validity of the point dose, and the dose distribution are the major issues in the RTP-related QA procedure. Leaf sequence file generation, evaluation of the MLC control file, comparison of the dry-run film, and the IMRT field simulation image are included in the treatment information flow QA procedure. Patient setup QA, verification of the IMRT treatment fields on the patients, and examination of the data in the Record & Verify system make up the treatment delivery QA procedure. Results : The point dose measurements of 10 cases showed good agreement with the RTP calculation, within 3%. One case showed more than a 3% difference and another showed more than 5%, which was outside the tolerance level. We found no differences of more than 2 mm between the RTP leaf sequence and the dry-run film. Film dosimetry and the dose distribution from the phantom plan showed the same tendency, but quantitative analysis was not possible because of the nature of film dosimetry. No error was found in the MLC control files, and one mis-registration case was found before treatment. Conclusion : This study shows the usefulness and necessity of an IMRT patient QA program. The whole procedure of this program should be performed, especially by institutions that have just started to accumulate experience; however, the full program is complex and time-consuming. Therefore, we propose practical and essential QA items for institutions in which IMRT is performed as a routine procedure.
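The point-dose comparison step can be sketched as a simple percent-difference check against the 3% and 5% levels cited above (an illustrative sketch; the function names and the pass/warning/fail labels are assumptions, not the program's actual items):

```python
def percent_diff(measured, calculated):
    """Percent difference of a measured point dose relative to the RTP calculation."""
    return abs(measured - calculated) / calculated * 100.0

def classify(measured, calculated, warn=3.0, fail=5.0):
    """Label a point-dose measurement: 'pass' within warn%, 'warning'
    within fail%, otherwise 'fail' (outside the tolerance level)."""
    d = percent_diff(measured, calculated)
    if d <= warn:
        return "pass"
    elif d <= fail:
        return "warning"
    return "fail"
```

Under this labeling, the abstract's two outlier cases would be flagged as 'warning' (>3%) and 'fail' (>5%) respectively.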

The Optimal Configuration of Arch Structures Using Force Approximate Method (부재력(部材力) 근사해법(近似解法)을 이용(利用)한 아치구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;Ro, Min Lae
    • KSCE Journal of Civil and Environmental Engineering Research / v.13 no.2 / pp.95-109 / 1993
  • In this study, the optimal configuration of arch structures has been investigated using a decomposition technique. The objective is to provide a method for optimizing the shapes of both two-hinged and fixed arches. The optimal-configuration problem includes the interaction formulas and the working stress and buckling stress constraints, on the assumption that arch ribs can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the relation between the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is decreased by approximating member forces through sensitivity analysis using the design-space approach. The objective function is formulated as the total weight of the structure, and the constraints are derived from the working stress, the buckling stress, and the side limits. On the second level, the nodal point coordinates of the arch structure are used as design variables, with the weight again taken as the objective function. By treating the nodal point coordinates as design variables, the optimization can be reduced to an unconstrained optimal design problem, which is easy to solve. Numerical comparisons with results obtained from tests on several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of the constraint types and the configuration of the arch structure, and the optimal configurations obtained in this study are almost identical to those of other results. The total weight could be decreased by 17.7%-91.7% when an optimal configuration is achieved.
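The first-level buckling calculation described above reduces, for a discretized arch, to the generalized eigenvalue problem K·v = λ·Kg·v between the elastic stiffness matrix K and the geometric stiffness matrix Kg, where the smallest positive λ is the critical load factor. A minimal sketch with toy 2-DOF matrices (illustrative numbers, not from the paper):

```python
import numpy as np

# Illustrative 2-DOF elastic stiffness K and geometric stiffness Kg
# (toy values for a unit reference load, not from the paper).
K = np.array([[4.0, -1.0],
              [-1.0, 2.0]])
Kg = np.array([[1.0, 0.0],
               [0.0, 0.5]])

# Linear buckling eigenproblem K v = lam * Kg v, solved here as the
# standard eigenproblem (Kg^-1 K) v = lam v; the smallest positive
# eigenvalue is the critical load factor of the Rayleigh-Ritz model.
lams = np.linalg.eigvals(np.linalg.solve(Kg, K))
critical_load_factor = min(l.real for l in lams if l.real > 0)
```

In the paper's two-level scheme, this eigenvalue evaluation is what the sensitivity-based member-force approximation avoids repeating at every design iteration.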


Dynamic Network Loading Model based on Moving Cell Theory (Moving Cell Theory를 이용한 동적 교통망 부하 모형의 개발)

  • 김현명
    • Journal of Korean Society of Transportation / v.20 no.5 / pp.113-130 / 2002
  • In this paper, we developed a DNL (Dynamic Network Loading) model based on moving cell theory to analyze the dynamic characteristics of traffic flow in a congested network. In this model, vehicles entering a link during the same interval form one cell, and the cells move according to a cell-following rule. In past research on DNL models, a continuous single link is separated into two sections, a running section and a queuing section, to describe the physical queue, so the various dynamic states arising on a real link are simplified into only running and queuing states. This approach has difficulties in simulating the variety of dynamic flow characteristics. To overcome these problems, we present moving cell theory, developed by combining car-following theory with the Lagrangian method mainly used for analyzing the dispersion of air pollutants. In moving cell theory, platoons are represented by cells, and each cell is processed by the cell-following rule. This type of simulation model was first presented by Cremer et al. (1999). However, they did not develop merging and diverging models, because their model was applied only to a basic freeway section; moreover, they fixed the number of vehicles that can be included in one cell in one interval, so their formulation cannot be applied to signalized intersections in an urban network. To resolve these difficulties, we develop a new approach using moving cell theory and simulate traffic flow dynamics continuously through the movement and state transitions of the cells. The developed model was run on a simple network including merging and diverging sections, and it shows an improved ability to describe flow dynamics compared with past DNL models.
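The cell movement step can be sketched as a car-following rule applied to platoon cells (a minimal illustration under assumed parameters; the linear speed law and all names are illustrative, not the paper's exact formulation):

```python
from dataclasses import dataclass

@dataclass
class Cell:
    position: float   # downstream position of the platoon cell (m)
    speed: float      # cell speed (m/s)
    vehicles: int     # vehicles grouped into this cell

def step(cells, dt=1.0, v_free=20.0, jam_gap=10.0, tau=1.5):
    """Advance cells one interval with a simple cell-following rule:
    each cell's speed is limited by free-flow speed and by the gap to
    its leader, so queues propagate upstream cell by cell."""
    # cells are sorted from downstream (index 0) to upstream
    for i, cell in enumerate(cells):
        if i == 0:
            target = v_free                              # leader runs free
        else:
            gap = cells[i - 1].position - cell.position  # spacing to leader
            target = min(v_free, max(0.0, (gap - jam_gap) / tau))
        cell.speed = target
        cell.position += cell.speed * dt
    return cells
```

A signal or a merge can then be modeled by constraining the target speed of the first cell, which is the flexibility the abstract claims over fixed-size cell formulations.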

Various Possibilities of Dispositif Film (디스포지티프 영화의 다양한 가능성)

  • KIM, Chaehee
    • Trans- / v.3 / pp.55-86 / 2017
  • This study begins from the need to rethink the concept of the film medium and to encompass certain tendencies of contemporary film as the post-cinema era arrives. Among the variable movements surrounding recent films, challenging and experimental works show aesthetics that are difficult to approach through the classical analysis of mise-en-scène and montage. I therefore review the dispositif proposed by Martin for films that are puzzling to criticize within the classical conceptual framework, since the concept of dispositif is a conceptual frame that extends beyond mise-en-scène and montage. Dispositif films tend to be non-reproductive and non-narrative, but not every non-narrative tendency makes a dispositif film; only the dispositif film proper is included in this flow. The dispositif movement has increased dramatically in the modern environment grounded in digital technology, but it is not a tendency confined to any particular age: it has been detected in classical films, and the dispositif tendency persisted in the avant-garde films of the 1920s and in some modernist films. First, for a clear conceptualization of the cinematic dispositif, this study examines the sources of the dispositif debates now being introduced into film theory. In this process, the theories of Jean-Louis Baudry, Michel Foucault, Agamben, Flusser, and Deleuze are helpful. The concept of dispositif was discussed by several scholars, including Baudry and Foucault, and today the notion is defined across all these definitions. However, these various discussions differ distinctly from the cinematic dispositif, or dispositif film, that Martin advocates. Martin's proposed concept reminds us of the fundamentals of cinematic aesthetics that have been divided between mise-en-scène and montage, and it makes it possible to reconsider those concepts, to see things in a new light, and to create new films.
The basic implications of dispositif are the apparatus as device, disposition and arrangement, and the combination of heterogeneous elements. Thus, if one defines a dispositif film in a word, it is a new 'constraint' consisting of the rearrangement and disposition of the heterogeneous elements that make up the conditions of classical film. For something to become a new dispositif, changes must occur in the disposition and arrangement of the elements and forces that compose it; naturally, these elements encompass both internal and external factors. Dispositif films open a variety of possibilities: reflection on archival possibilities and on the role of the director, the re-establishment of an active and creative audience, the raison d'être of the film medium, and ideological reflection. Such films can also 'network' with other media more quickly and easily than any other medium and create a new 'devised' aesthetic style. The dispositif film that makes use of this will be a key concept in reading the films that present the new trends of modern cinema. Because dispositif is so comprehensive and has such broad implications, there are certainly areas that are difficult to make precise; however, this will in the end have a positive effect on the future activation of dispositif studies. Because the concept is difficult to elaborate clearly, it can be approached from a wide range of dimensions and has theoretically infinite extensibility. In 21st-century cinema, the concept of the cinematic dispositif will become a decisive factor in dismantling old film aesthetics.


Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.119-142 / 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulties in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on a recommender system that provides information catered to user preferences and tastes in an attempt to address issues associated with information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering the Bayesian model, clustering model or dependency network model. This filtering technique not only improves the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such tradeoff is attributed to reduced coverage, which is a type of sparsity issues. In addition, expensive model-building may lead to performance instability since changes in the domain environment cannot be immediately incorporated into the model due to high costs involved. Cumulative changes in the domain environment that have failed to be reflected eventually undermine system performance. This study incorporates the Markov model of transition probabilities and the concept of fuzzy clustering with CBCF to propose predictive clustering-based CF (PCCF) that solves the issues of reduced coverage and of unstable performance. The method improves performance instability by tracking the changes in user preferences and bridging the gap between the static model and dynamic users. 
Furthermore, the issue of reduced coverage is also improved by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC, and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC, and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced insignificant improvement in performance in comparison with the existing techniques, and it failed to achieve significant improvement in the standard deviation, which indicates the degree of data fluctuation. Notwithstanding, it showed marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques.
Future research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques, and will consider the introduction of a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
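The preference-transition step described above can be sketched as a Markov transition-probability estimate over cluster assignments (a minimal illustration; the function names, and the use of crisp rather than fuzzy clusters, are assumptions rather than the paper's exact formulation):

```python
from collections import defaultdict

def transition_matrix(cluster_sequence):
    """Estimate Markov transition probabilities between preference
    clusters from one user's time-ordered cluster assignments."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(cluster_sequence, cluster_sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def most_likely_next(matrix, current):
    """Predict the next preference cluster given the current one."""
    return max(matrix[current], key=matrix[current].get)
```

Tracking such transitions is what lets the static model follow drifting user preferences without being rebuilt from scratch.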

Prediction of Expected Residual Useful Life of Rubble-Mound Breakwaters Using Stochastic Gamma Process (추계학적 감마 확률과정을 이용한 경사제의 기대 잔류유효수명 예측)

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers / v.31 no.3 / pp.158-169 / 2019
  • A probabilistic model that can predict the residual useful lifetime of a structure is formulated using the gamma process, one of the stochastic processes. The formulated model can take into account both the sampling uncertainty associated with damage measured up to now and the temporal uncertainty of cumulative damage over time. A method for estimating the parameters of the stochastic model is additionally proposed, introducing the least squares method and the method of moments, so that the age of a structure, its operational environment, and the evolution of damage with time can be considered. Some features of the residual useful lifetime are first investigated through a sensitivity analysis of the parameters under a simple setting of a single damage datum measured at the current age. The stochastic model is then applied directly to a rubble-mound breakwater. The parameters of the gamma process can be estimated from several sets of experimental data on the damage processes of the armor rocks of a rubble-mound breakwater. The expected damage levels over time, numerically simulated with the estimated parameters, are in very good agreement with those from the flume tests. Various numerical calculations show that the probabilities of exceeding the failure limit converge, after a long time, to the constraint that the model must satisfy. Meanwhile, the expected residual useful lifetimes evaluated from the failure probabilities differ according to the behavior of the damage history. In particular, as the coefficient of variation of cumulative damage becomes large, the expected residual useful lifetimes show significant discrepancies from those of the deterministic regression model, mainly because the sampling and temporal uncertainties associated with damage cause the first time to failure to be widely distributed.
Therefore, the stochastic model presented in this paper for predicting the residual useful lifetime of a structure can properly implement a probabilistic assessment of the current damage state of the structure as well as take account of the temporal uncertainty of future cumulative damage.
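A stationary gamma process has independent Gamma(shape = a·dt, scale = b) increments, so its expected damage is E[X(t)] = a·b·t and its parameters can be fitted by the method of moments from measured increments, as the abstract mentions. A minimal sketch (Monte Carlo simulation in place of the paper's analytical formulation; all names are illustrative):

```python
import random

def fit_moments(mean_increment, var_increment):
    """Method-of-moments fit for a stationary gamma process: per unit
    time, increments ~ Gamma(shape=a, scale=b) with mean a*b and
    variance a*b**2, so b = var/mean and a = mean/b."""
    b = var_increment / mean_increment
    a = mean_increment / b
    return a, b

def expected_residual_life(a, b, x_now, limit, dt=1.0, t_max=500.0,
                           n_paths=3000, seed=1):
    """Monte Carlo estimate of the mean first time, counted from now,
    at which cumulative damage starting at x_now crosses the failure
    limit; t_max caps paths that never fail within the horizon."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t = x_now, 0.0
        while x <= limit and t < t_max:
            x += rng.gammavariate(a * dt, b)  # independent gamma increment
            t += dt
        total += t
    return total / n_paths
```

A large coefficient of variation of the increments widens the distribution of the first time to failure, which is the effect the abstract contrasts with the deterministic regression model.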

Design and Economic Analysis of Low Pressure Liquid Air Production Process using LNG cold energy (LNG 냉열을 활용한 저압 액화 공기 생산 공정 설계 및 경제성 평가)

  • Mun, Haneul;Jung, Geonho;Lee, Inkyu
    • Korean Chemical Engineering Research / v.59 no.3 / pp.345-358 / 2021
  • This study focuses on the development of a liquid air production process that uses LNG (liquefied natural gas) cold energy, which is usually wasted during the regasification stage. The liquid air can be transported back to the LNG exporter and utilized as a cold source to replace a certain amount of refrigerant in natural gas liquefaction. The condition of the liquid air therefore has to satisfy the available pressure of the LNG storage tank; to satisfy the pressure constraint of a membrane-type LNG tank, the proposed process is designed to produce liquid air at 1.3 bar. In the proposed process, the air is precooled by heat exchange with LNG and subcooled by a nitrogen refrigeration cycle. When the amount of transported liquid air is as large as the capacity of the LNG carrier, it can be economical in terms of transportation cost; in addition, more liquid air provides more cold energy for the natural gas liquefaction plant. To analyze the effect of the liquid air production amount under the same LNG supply conditions, the proposed process was simulated at three air flow rates, 0.50 kg/s, 0.75 kg/s, and 1.00 kg/s, corresponding to Case 1, Case 2, and Case 3, respectively, and each case was analyzed thermodynamically and economically. The results show a tendency that the more liquid air is produced, the more energy is demanded per unit mass of product: Case 3 is 0.18 kWh higher than the Base case, and in consequence the production cost per 1 kg of liquid air in Case 3 was $0.0172 higher. However, as liquid air production increases, the transportation cost per 1 kg of liquid air is reduced by $0.0395. In terms of overall cost, Case 3 confirms that liquid air can be produced and transported for $0.0223 less per kilogram than the Base case.
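The overall cost comparison in the abstract is simple unit-cost arithmetic, sketched below with the dollar-per-kilogram figures quoted above (the function name is illustrative):

```python
def net_saving_per_kg(extra_production_cost, transport_saving):
    """Net cost change per kg of liquid air relative to the Base case:
    positive means the larger-production case is cheaper overall."""
    return transport_saving - extra_production_cost

# Case 3 vs Base case, $/kg figures from the abstract: production is
# $0.0172/kg more expensive, transport is $0.0395/kg cheaper.
saving = net_saving_per_kg(0.0172, 0.0395)
```

With the abstract's figures, the net effect is the quoted $0.0223/kg saving for Case 3.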

Evaluating efficiency of Split VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes (골반 림프선을 포함한 전립선암 치료 시 Split VMAT plan의 유용성 평가)

  • Mun, Jun Ki;Son, Sang Jun;Kim, Dae Ho;Seo, Seok Jin
    • The Journal of Korean Society for Radiation Therapy / v.27 no.2 / pp.145-156 / 2015
  • Purpose : The purpose of this study is to evaluate the efficiency of Split VMAT planning (in which the rectum contour is divided into upper and lower parts to reduce rectal dose) compared with conventional VMAT planning (contouring the whole rectum) for prostate cancer radiotherapy involving the pelvic lymph nodes. Materials and Methods : A total of 9 cases were enrolled. Each case received radiotherapy with a Split VMAT plan to the prostate and pelvic lymph nodes. Treatment was delivered using a TrueBeam STX (Varian Medical Systems, USA) and planned on Eclipse (Ver. 10.0.42, Varian, USA) with PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The lower rectum contour was defined as starting 1 cm superior and ending 1 cm inferior to the prostate PTV; the upper rectum is the remainder of the whole rectum excluding the lower rectum. The Split VMAT plan parameters consisted of 10 MV coplanar 360° arcs, with collimator angles of 30° and 30°, respectively. An SIB (simultaneous integrated boost) prescription was employed, delivering 50.4 Gy to the pelvic lymph nodes and 63-70 Gy to the prostate in 28 fractions. The $D_{mean}$ of the whole rectum in the Split VMAT plan was applied as the DVC (dose volume constraint) on the whole rectum in the conventional VMAT plan, and all other parameters were set the same as in the existing treatment plans. To minimize dose differences that appear randomly during optimization, all plans were optimized and calculated twice using a 0.2 cm grid. All plans were normalized to prostate $PTV_{100\%}$ = 90% or 95%. The $D_{mean}$ of the whole rectum, upper rectum, lower rectum, and bladder, the $V_{50\%}$ of the upper rectum, the total MU, and the H.I. (homogeneity index) and C.I. (conformity index) of the PTV were compared for technique evaluation. All Split VMAT plans were verified by a gamma test with portal dosimetry using the EPID.
Results : DVH analysis demonstrated differences between the conventional and Split VMAT plans. The Split VMAT plan was better in the $D_{mean}$ of the whole rectum (at most 134.4 cGy, at least 43.5 cGy, average difference 75.6 cGy), the $D_{mean}$ of the upper rectum (at most 1113.5 cGy, at least 87.2 cGy, average 550.5 cGy), the $D_{mean}$ of the lower rectum (at most 100.5 cGy, at least -34.6 cGy, average 34.3 cGy), the $D_{mean}$ of the bladder (at most 271 cGy, at least -55.5 cGy, average 117.8 cGy), and the $V_{50\%}$ of the upper rectum (at most 63.4%, at least 3.2%, average 23.2%). There was no significant difference in the H.I. and C.I. of the PTV between the two plans. The Split VMAT plan required on average 77 MU more than the conventional plan. All IMRT verification gamma tests for the Split VMAT plans passed over 90.0% at 2 mm/2%. Conclusion : The Split VMAT plan appeared more favorable than the conventional VMAT plan in most cases of prostate cancer radiotherapy involving the pelvic lymph nodes. The split VMAT planning technique reduced the upper rectum dose, and thus the whole-rectum dose, compared with conventional VMAT planning, and also increased treatment efficiency.
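The 2 mm/2% gamma test cited above can be sketched in one dimension (an illustrative toy, not the portal dosimetry software's implementation): each measured point takes the minimum, over reference points, of the combined distance-to-agreement and dose-difference metric, and passes if that value is at most 1.

```python
import math

def gamma_index(ref_positions, ref_doses, pos, dose, dta=2.0, dd=2.0):
    """1D gamma value at one measured point: the minimum over reference
    points of sqrt((dx/DTA)^2 + (dose diff %/DD)^2); <= 1 is a pass."""
    best = float("inf")
    for rx, rd in zip(ref_positions, ref_doses):
        dx = (pos - rx) / dta                       # spatial term (mm / DTA)
        dd_pct = (dose - rd) / rd * 100.0 / dd      # dose term (% / DD)
        best = min(best, math.hypot(dx, dd_pct))
    return best

def pass_rate(ref_positions, ref_doses, positions, doses, **kw):
    """Fraction of measured points with gamma <= 1 (the >90% criterion)."""
    vals = [gamma_index(ref_positions, ref_doses, p, d, **kw)
            for p, d in zip(positions, doses)]
    return sum(v <= 1.0 for v in vals) / len(vals)
```

A plan's reported pass rate is then the fraction of measured points whose gamma value does not exceed 1 at the chosen 2 mm/2% criteria.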
