• Title/Summary/Keyword: Time Constraint

Search Results: 1,045

Effect of Lugol's Iodine Preservation on Cyanobacterial Biovolume and Estimate of Live Cell Biovolume Using Shrinkage Ratio (Lugol's Iodine Solution 첨가 후 보존 기간별 남조류 세포부피 변화 및 수축비를 이용한 생세포 부피 산정)

  • Park, Hae-Kyung;Lee, Hyeon-Je;Lee, Hae-Jin;Shin, Ra-Young
    • Journal of Korean Society on Water Environment
    • /
    • v.34 no.4
    • /
    • pp.375-381
    • /
    • 2018
  • Monitoring phytoplankton biomass and community structure is an essential first step in controlling the harmful cyanobacterial blooms that eutrophication and climate change cause in freshwater systems such as rivers and lakes. To quantify the biomass of phytoplankton, which vary widely in size and shape, cell biovolume must be measured along with cell density. However, most routine monitoring programs preserve phytoplankton samples with chemical additives before analysis, because of time constraints and the number of samples. The purpose of this study was to investigate how the cell biovolume of six cyanobacterial species, the common bloom-causing cyanobacteria in the Nakdong River, changes after preservation with Lugol's iodine solution. All species showed a statistically significant difference from the live cell biovolume after the addition of Lugol's iodine solution, and the cell biovolume decreased to 34.0 ~ 56.3 % of the live value at maximum, depending on the species. Nonlinear regression models for the shrinkage ratio as a function of preservation period were derived from cell biovolumes measured over 180 days of preservation for each target species, and from these models an equation was derived to convert the biovolume measured after a given preservation period to the live cell biovolume. The conversion equation derived in this study can be used to estimate the actual cell biovolume in the natural environment at the time of sampling from the biovolume measured after preservation in phytoplankton monitoring. It is also expected to contribute to interpreting the water quality and aquatic ecosystem impacts of cyanobacterial blooms.
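The conversion idea can be sketched as follows. The saturating model form and the parameter values below are illustrative assumptions, not the paper's fitted equations:

```python
import math

def shrinkage_ratio(days, r_max, k):
    """Hypothetical saturating shrinkage model: the ratio of preserved to
    live biovolume starts at 1.0 on day 0 and approaches the asymptote
    r_max as the preservation period grows (rate constant k)."""
    return 1.0 - (1.0 - r_max) * (1.0 - math.exp(-k * days))

def live_biovolume(measured, days, r_max, k):
    """Convert a biovolume measured after `days` of Lugol preservation
    back to an estimated live-cell biovolume."""
    return measured / shrinkage_ratio(days, r_max, k)
```

For example, with an assumed asymptote of 0.55, a biovolume measured after long preservation is scaled up by roughly 1/0.55 to estimate the live value at sampling time.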

Design of Synchronous 256-bit OTP Memory (동기식 256-bit OTP 메모리 설계)

  • Li, Long-Zhen;Kim, Tae-Hoon;Shim, Oe-Yong;Park, Mu-Hun;Ha, Pan-Bong;Kim, Young-Hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.7
    • /
    • pp.1227-1234
    • /
    • 2008
  • This paper presents the design of a 256-bit synchronous OTP (one-time programmable) memory required in application fields such as automobile power ICs, display ICs, and CMOS image sensors. The synchronous memory cell consists of an NMOS capacitor as an antifuse and an access transistor, without a high-voltage blocking transistor. The gate bias voltage circuit for the additional blocking transistor is removed, since the logic supply voltage VDD (=1.5V) and the external program voltage VPPE (=5.5V) are used instead of the conventional three supply voltages. With voltage driving and no current constraint during programming, the loading current of the cell to be programmed increases with the RON (on resistance) of the antifuse and with process variation, so the program voltage must be raised to compensate for the resistive voltage drop on the supply voltage VPP. Therefore, the current driving method is used instead of voltage driving so that the loading current remains constant during programming. As a result, the program voltage VPP can be lowered from 5.9V to 5.5V in measurements on the manufactured wafer. In addition, the sense amplifier circuit is simplified by using a clocked-inverter-type sense amplifier instead of the conventional current sense amplifier. The 256-bit synchronous OTP is designed with the Magnachip $0.13{\mu}m$ CMOS process, and the layout area is $298.4{\times}314{\mu}m^2$.

Dynamic Traffic Assignment Using Genetic Algorithm (유전자 알고리즘을 이용한 동적통행배정에 관한 연구)

  • Park, Kyung-Chul;Park, Chang-Ho;Chon, Kyung-Soo;Rhee, Sung-Mo
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.8 no.1 s.15
    • /
    • pp.51-63
    • /
    • 2000
  • Dynamic traffic assignment (DTA) has been a topic of substantial research during the past decade. While DTA is gradually maturing, many aspects of it still need improvement, especially its formulation and solution algorithms. Recently, with its promise for ITS (Intelligent Transportation System) and GIS (Geographic Information System) applications, DTA has received increasing attention. This potential also implies higher requirements for DTA modeling, especially solution efficiency for real-time implementation. However, DTA poses many mathematical difficulties in the search process due to the complexity of its spatial and temporal variables. Although many solution algorithms have been studied, conventional methods cannot find the solution when the objective function or constraints are not convex. In this paper, a genetic algorithm is applied to solve DTA, and the Merchant-Nemhauser model is used as the DTA model because it has a nonconvex constraint set. To handle the nonconvex constraint set, the GENOCOP III system, a genetic-algorithm-based solver, is used. Results for a sample network are compared with those of a conventional method.
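The general idea of a genetic algorithm over a constrained search space can be sketched as follows. This sketch penalises infeasible candidates, whereas GENOCOP III instead repairs them; the objective and constraint shown are illustrative, not the Merchant-Nemhauser model:

```python
import random

def genetic_search(fitness, feasible, bounds, pop_size=40, gens=200, seed=1):
    """Minimal penalty-based GA: maximise `fitness` over box `bounds`,
    heavily penalising candidates that violate `feasible`."""
    rng = random.Random(seed)

    def score(x):
        return fitness(x) - (0.0 if feasible(x) else 1e6)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            i = rng.randrange(len(child))                # single-gene mutation
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=score)
```

For instance, maximising -(x-1)^2 subject to x >= 0 on [-5, 5] should return a candidate near x = 1.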


Prediction of Expected Residual Useful Life of Rubble-Mound Breakwaters Using Stochastic Gamma Process (추계학적 감마 확률과정을 이용한 경사제의 기대 잔류유효수명 예측)

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.31 no.3
    • /
    • pp.158-169
    • /
    • 2019
  • A probabilistic model that can predict the residual useful lifetime of a structure is formulated using the gamma process, one of the stochastic processes. The formulated model can take into account both the sampling uncertainty associated with damages measured up to now and the temporal uncertainty of cumulative damage over time. A method for estimating the parameters of the stochastic model is additionally proposed, based on the least square method and the method of moments, so that the age of a structure, the operational environment, and the evolution of damage with time can be considered. Features related to the residual useful lifetime are first investigated through sensitivity analysis on the parameters, under a simple setting of a single damage value measured at the current age. The stochastic model is then applied directly to the rubble-mound breakwater. The parameters of the gamma process can be estimated from several sets of experimental data on the damage processes of armor rocks of rubble-mound breakwaters. The expected damage levels over time, numerically simulated with the estimated parameters, agree very well with those from the flume testing. Various numerical calculations show that the probabilities of exceeding the failure limit converge, after a sufficiently long time, to the constraint that the model must satisfy. Meanwhile, the expected residual useful lifetimes evaluated from the failure probabilities differ according to the behavior of the damage history. In particular, as the coefficient of variation of cumulative damage becomes large, the expected residual useful lifetimes deviate significantly from those of the deterministic regression model. This is mainly due to the sampling and temporal uncertainties associated with damage, which cause the first time to failure to be widely distributed. Therefore, the stochastic model presented in this paper for predicting the residual useful lifetime of a structure can properly implement a probabilistic assessment of the current damage state as well as account for the temporal uncertainty of future cumulative damage.
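The first-passage idea behind the residual lifetime can be sketched by Monte-Carlo simulation of a stationary gamma process. The parameters below are illustrative assumptions; the paper's estimation of them from measured damage is not shown:

```python
import random

def residual_life_samples(d0, limit, a, b, dt=0.1, n_paths=2000, seed=7):
    """Simulate a stationary gamma damage process: each increment over a
    step dt is Gamma(shape=a*dt, scale=b), so mean damage growth is a*b
    per unit time.  Returns first-passage times past the failure limit,
    measured from the current damage state d0."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_paths):
        d, t = d0, 0.0
        while d < limit:
            d += rng.gammavariate(a * dt, b)
            t += dt
        times.append(t)
    return times
```

The mean of the returned samples estimates the expected residual useful lifetime; its spread reflects the temporal uncertainty of cumulative damage that the abstract emphasises.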

A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.121-145
    • /
    • 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem in which the amount of computation required to find solutions increases exponentially with the problem size. For many years there have been many studies on university timetabling, motivated by the need for automatic timetable generation for students' convenience and effective lessons, and for the effective allocation of subjects, lecturers, and classrooms. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, a course timetable for liberal arts is scheduled by the office of academic affairs, and a course timetable for major subjects is scheduled by each department of a university. We found several problems in the analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine and repetitive timetabling work manually. Second, many classes are concentrated into a few time slots in a timetable, which decreases the effectiveness of students' classes. Third, several major subjects may overlap required liberal-arts subjects at the same time slots, in which case students must choose only one of the overlapping subjects. Fourth, many subjects are lectured by the same lecturers every year, and most lecturers prefer the same time slots for their subjects as in the previous year; this means it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system with two phases. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning.
In the second phase, the department schedules a timetable with the help of an interactive user interface under the timetabling criteria, based on a rule-based approach. This study provides illustrations from Hanshin University. We classified timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria comprise three criteria related to lecturer, class, and classroom, all of which are hard constraints. The extrinsic criteria comprise four criteria: 'the number of lesson hours' by the lecturer, 'prohibition of lecture allocation to specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' 'The number of lesson hours' by the lecturer has three sub-criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are also all hard constraints, except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we proposed two indices: one measuring the similarity between the subjects of the current semester and those of previous timetables, and one evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, subject name and lecturer, between the current semester and a previous semester. The distribution degree index, based on information entropy, indicates how subjects are distributed over the timetable. To show this study's viability, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity to the most similar cases across all departments was estimated as 41.72%, which suggests that a timetable template generated from the most similar case is helpful. Sensitivity analysis shows that the distribution degree increases if 'the number of subjects in the same day-hour' is set to more than 90%.
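The two indices can be sketched as follows. The abstract does not give the exact formulas, so the matching rule and the entropy normalisation below are assumptions:

```python
import math

def similarity(current, previous):
    """Illustrative similarity index: the percentage of current-semester
    subjects whose (subject name, lecturer) pair also appears in a
    previous timetable."""
    prev = set(previous)
    return 100.0 * sum(1 for s in current if s in prev) / len(current)

def distribution_degree(slot_counts):
    """Entropy-based spread of subjects over day-hour slots, normalised
    to [0, 1]: 1.0 means a perfectly even spread, 0.0 means every
    subject sits in the same slot."""
    total = sum(slot_counts)
    probs = [c / total for c in slot_counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(slot_counts)) if len(slot_counts) > 1 else 1.0
```

A timetable that spreads four subjects over four slots scores 1.0; piling all four into one slot scores 0.0.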

Measurement of the Early-Age Coefficient of Thermal Expansion and Drying Shrinkage of Concrete Pavement (콘크리트포장의 초기 열팽창계수 및 건조수축 측정 연구)

  • Yoon, Young-Mi;Suh, Young-Chan;Kim, Hyung-Bae
    • International Journal of Highway Engineering
    • /
    • v.10 no.1
    • /
    • pp.117-122
    • /
    • 2008
  • Quality control of concrete pavement in the early stage of curing is very important because it has a decisive effect on the pavement's life span. Therefore, examining and analyzing the initial behavior of concrete pavement must precede any measure to control that behavior. There are two main factors influencing the initial behavior of concrete pavement: drying shrinkage, and the heat generated by hydration together with thermal change inside the pavement caused by changes in atmospheric temperature. Thus, the coefficient of thermal expansion and the drying shrinkage can be regarded as very important factors in the initial behavior of the concrete. It has been general practice up to now to measure the coefficient of thermal expansion from completely cured concrete. This practice has an inherent limitation: it does not give the coefficient of thermal expansion at the initial stage of curing. Additionally, measuring drying shrinkage has been difficult due to the time constraint. This research examined and analyzed the early drying shrinkage of the concrete and measured the thermal expansion coefficients to formulate a plan for controlling the initial behavior. Data for the influential variables were also collected to develop a prediction model for the initial behavior of the concrete pavement and to verify the proposed model. In this research, the thermal expansion coefficients of the concrete in the initial stage of curing ranged between $8.9{\sim}10.8{\times}10^{-6}/^{\circ}C$. Furthermore, the effects of the size and depth of the concrete on the drying shrinkage were analyzed and confirmed.
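The coefficient of thermal expansion itself is just the slope of measured strain against temperature. A minimal sketch, assuming paired strain/temperature readings; the specimen handling that separates drying shrinkage from thermal strain is the hard part and is not shown:

```python
def thermal_expansion_coefficient(temps, strains):
    """Least-squares slope of strain vs. temperature (per degree C).
    For curing concrete the raw slope mixes in drying shrinkage, so
    paired sealed/unsealed specimens are normally used to separate
    the two effects."""
    n = len(temps)
    mt = sum(temps) / n
    ms = sum(strains) / n
    num = sum((t - mt) * (s - ms) for t, s in zip(temps, strains))
    den = sum((t - mt) ** 2 for t in temps)
    return num / den
```

A specimen whose strain grows by 10 microstrain per degree would yield a coefficient of $10{\times}10^{-6}/^{\circ}C$, at the top of the range reported above.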


The MCSTOP Algorithm about the Minimum Cost Spanning Tree and the Optimum Path Generation for the Multicasting Path Assignment (최적 경로 생성 및 최소 비용 신장 트리를 이용한 멀티캐스트 경로 배정 알고리즘 : MCSTOP)

  • Park, Moon-Sung;Kim, Jin-Suk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.4
    • /
    • pp.1033-1043
    • /
    • 1998
  • In this paper, we present an improved multicast path assignment algorithm based on the minimum cost spanning tree. In the presented method, a multicast path is assigned preferentially when a destination node is found among the next-degree nodes of the node being searched, as in the multicast path assignment of the constrained Steiner tree (CST). If nodes of the legacy group exist between nodes of the new group, a new path among the nodes of the new group is assigned, as long as those legacy nodes can be excluded from the new multicast path assignment given the characteristics of nodes in the legacy group. When assigning an additional multicast path, if the source and destination nodes of the new multicast path exist in the same network domain (local area network) and the degree constraint is satisfied, a new multicast path is produced and assigned. In a comparison with CST, the MCSTOP algorithm improved on the CST algorithm in communication cost, propagation delay, and computation time for multicast path assignment. Further research is needed on applying the algorithm to the international standard protocol (multicast path assignment in the multipoint communication service (MCS) of ITU-T T.120).
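The minimum cost spanning tree step that both CST and MCSTOP build on can be sketched with Prim's algorithm; the degree constraints and legacy/new group handling described above are not included in this sketch:

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm over an adjacency dict {node: {neighbor: cost}}.
    Returns (total_cost, tree_edges).  Only the plain spanning-tree
    step is shown; CST/MCSTOP layer degree and Steiner-node
    constraints on top of this."""
    visited = {start}
    edges = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(edges)
    total, tree = 0, []
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)      # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        total += w
        tree.append((u, v))
        for nxt, nw in graph[v].items():
            if nxt not in visited:
                heapq.heappush(edges, (nw, v, nxt))
    return total, tree
```

On a triangle with edge costs A-B=1, B-C=2, A-C=4, the tree keeps the two cheap edges for a total cost of 3.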


Generalization of error decision rules in a grammar checker using Korean WordNet, KorLex (명사 어휘의미망을 활용한 문법 검사기의 문맥 오류 결정 규칙 일반화)

  • So, Gil-Ja;Lee, Seung-Hee;Kwon, Hyuk-Chul
    • The KIPS Transactions:PartB
    • /
    • v.18B no.6
    • /
    • pp.405-414
    • /
    • 2011
  • Korean grammar checkers typically detect context-dependent errors by employing heuristic rules that are manually formulated by a language expert. These rules are appended each time a new error pattern is detected. However, such grammar checkers are not consistent. To resolve this shortcoming, we propose a new method for generalizing the error decision rules that detect these errors. For this purpose, we use an existing thesaurus, KorLex, the Korean version of Princeton WordNet. KorLex has hierarchical word senses for nouns, but does not contain any information about the relationships between cases in a sentence. Through the Tree Cut Model and the MDL (minimum description length) principle based on information theory, we extract noun classes from KorLex and generalize error decision rules from these noun classes. To verify the accuracy of the new method, we extracted from a large corpus the nouns used as objects of four commonly confused predicates, and subsequently extracted noun classes from these nouns. We found that the number of error decision rules generalized from these noun classes decreased to about 64.8% of the original. In conclusion, the precision of our grammar checker exceeds that of conventional ones by 6.2%.

A Study on the Construction Cost Index for Calculating Conceptual Estimation : 1970-1999 (개략공사비 산출을 위한 공사비 지수 연구 : 1970-1999)

  • Nam, Song Hyun;Park, Hyung Keun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.5
    • /
    • pp.527-534
    • /
    • 2020
  • A significant factor in construction work is cost. In both early- and advanced-stage design, costs should be calculated to derive realistic cost estimates based on unit price calculation. Based on these estimates, the economic feasibility of construction work is assessed and a decision to proceed is made. The Korea Institute of Civil Engineering and Building Technology has calculated the construction cost index by indirect methods, reprocessing the producer price index and construction market labor costs so that price changes in Korean construction costs can be adjusted easily, and has announced the index since 2004. However, the construction cost index is available only from January 2000 onward, which imposes a time constraint on adjusting past construction cost data to present values. To compute rough construction costs utilizing past construction cost data, this study surveyed the variables constituting the construction cost index, the producer price index and the construction market labor force. After significant independent variables were selected from among the many candidates through correlation analysis, the construction cost index from 1970 to 1999 was calculated and presented through multiple regression analysis. This study is therefore significant in proposing a method for calculating rough construction costs using construction cost data that pre-date the 2000s.
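The final step can be sketched as an ordinary least squares fit of the cost index on the selected predictors. The predictor layout and data below are illustrative, not the paper's:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination (pure Python; fine for a handful of predictors
    such as a producer price index and a labour cost series).
    Rows of X are observations; returns [intercept, b1, b2, ...]."""
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    n, k = len(A), len(A[0])
    # normal equations: (A^T A) b = A^T y
    ata = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    aty = [sum(A[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                          # elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):                # back substitution
        b[r] = (aty[r] - sum(ata[r][c] * b[c] for c in range(r + 1, k))) / ata[r][r]
    return b
```

Given observations generated by index = 2 + 3·x1 + 1·x2, the fit recovers those coefficients, which can then be applied to pre-2000 predictor values to back-cast the index.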

A Methodology for Consistent Design of User Interaction (일관성 있는 사용자 인터랙션 설계를 위한 방법론 개발)

  • Kim, Dong-San;Yoon, Wan-Chul
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.961-970
    • /
    • 2009
  • Over the last decade, interactive devices such as mobile phones have become drastically more complicated, mainly because of feature creep, the tendency for the number of features in a product to rise with each release. One way to reduce the complexity of a multi-functional device is to design it consistently. Although the definition of consistency is elusive and it is sometimes beneficial to be inconsistent, consistently designed systems are in general easier to learn, easier to remember, and cause fewer errors. In practice, however, it is often not easy to design the user interaction or interface of a multi-functional device consistently. Since the interaction design of a multi-functional device must deal with a large number of design variables and the relations among them, solving this problem can be very time-consuming and error-prone. Therefore, there is a strong need for a well-developed methodology that supports this complex design process. This study has developed an effective and efficient methodology, called CUID (Consistent Design of User Interaction), which focuses on logical consistency rather than physical or visual consistency. CUID deals with three main problems in interaction design: procedure design for each task, decisions on the operations (or functions) available in each system state, and the mapping between available operations (functions) and interface controls. It includes a process for interaction design and a software tool that supports the process. This paper also demonstrates, through a case study, how CUID supports consistent user interaction design, showing that the logical inconsistencies of a multi-functional device can be resolved by using the CUID methodology.

  • PDF