• Title/Summary/Keyword: Problem structure

Development and Validation of the Korean Implementation Fidelity Checklist of Tier 1 School-Wide Positive Behavior Support (KIFC-T1) (한국형 학교차원 긍정적 행동지원 1차 실행충실도 척도(KIFC-T1)의 개발과 타당화)

  • Nam, Dong Mi;Chang, Eun Jin;Won, Sung-Doo;Cho Blair, Kwang-Sun;Song, Wonyoung
    • Korean Journal of School Psychology, v.17 no.3, pp.401-419, 2020
  • The purpose of this study was to validate the Korean Implementation Fidelity Checklist of Tier 1 School-Wide Positive Behavior Support (KIFC-T1) for use in the Korean educational system. Tier 1 support, the universal level of a multi-tiered, school-wide positive behavior support (SWPBS) model, aims to support all students in a school and to prevent problem behaviors. The initial KIFC-T1 consisted of 48 items and 11 factors and was developed based on a literature review; its content was validated by experts. The validated KIFC-T1 was then given to 185 special school teachers who had experience implementing SWPBS, who used the instrument to assess the degree to which their schools had implemented Tier 1 support. Based on their responses, the construct validity of the KIFC-T1 was examined using factor, item, and internal consistency reliability analyses. Concurrent validity was examined using the PBS Evaluation Tool, the School Climate Questionnaire, the School Discipline Practice Scale, and the PBS Effectiveness Scale. The analyses revealed that the KIFC-T1 had a stable five-factor structure with 35 items and good reliability (overall Cronbach's α = .956; per-factor α = .834-.951), and that its results were statistically significantly correlated with those of the PBS Evaluation Tool, the School Discipline Practice Scale, and the PBS Effectiveness Scale. However, the KIFC-T1's results were not significantly correlated with those of the School Climate Questionnaire. These results suggest that the KIFC-T1 is a reliable and valid tool for assessing the fidelity of universal support implementation.
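
As a generic illustration of the internal consistency statistic reported above (not the authors' analysis code), the following sketch computes Cronbach's α for an arbitrary item-response matrix; the 185 × 35 response matrix here is simulated, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 185 respondents x 35 items on an ordinal scale
rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(185, 35))
print(f"alpha = {cronbach_alpha(responses):.3f}")
```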

Development and Validation of the 'Food Safety and Health' Workbook for High School (고등학교 「식품안전과 건강」 워크북 개발 및 타당도 검증)

  • Park, Mi Jeong;Jung, Lan-Hee;Yu, Nan Sook;Choi, Seong-Youn
    • Journal of Korean Home Economics Education Association, v.34 no.1, pp.59-80, 2022
  • The purpose of this study was to develop a workbook to support instruction and assessment in the subject 「Food Safety and Health」 and to verify its validity. The development direction of the workbook was set by analyzing the 「Food Safety and Health」 curriculum, dietary education materials, and previous workbook-related studies, and the overall structure was designed by deriving activity ideas for each area. On this basis a draft was developed, which went through several rounds of cross-review by the authors and examination and revision by the Ministry of Food and Drug Safety before the final edited version was produced. The workbook was finalized with corrections and enhancements based on the advice of 9 experts and 44 home economics teachers. It consists of 4 areas: 'food selection', with 10 learning topics and 36 lessons; 'food poisoning and food management', with 10 learning topics and 36 lessons; 'cooking', with 11 learning topics and 43 lessons; and 'healthy eating', with 11 learning topics and 55 lessons, for a total of 42 learning topics and 170 lessons. The workbook was designed to cultivate evenly practical problem-solving competency, self-reliance capacity, creative thinking capacity, and community capacity; students conduct in-depth inquiry into the content and can diagnose their own learning through the built-in evaluations. In the validity test, the workbook was evaluated as very appropriate for encouraging student-participatory classes and evaluations and for creating a class atmosphere that promotes inquiry by strengthening experiments and practice. Now that the high school credit system is being implemented and individual students' learning options are emphasized, the results of this study are expected to help expand the scope of home economics-based elective courses and to contribute to realizing student-led, inquiry-focused classrooms.

Peirce and the Problem of Symbols (퍼스와 상징의 문제)

  • Noh, Yang-jin
    • Journal of Korean Philosophical Society, v.152, pp.59-79, 2019
  • The main purpose of this paper is to critically examine the intractable problems in Peirce's notion of 'symbol' as a higher and perfect mode of sign, and to present a more appropriate account of the higher status of the symbol from an experientialist perspective. Peirce distinguished between icon, index, and symbol, and held the symbol to be a higher mode of sign in that it additionally requires "interpretation." Within Peirce's picture, the matter of interpretation is to be explained in terms of the "interpretant," while icon and index are not. However, Peirce's conception of the "interpretant" itself remains fraught with intractable opacities, leaving the nature of the symbol a misty conundrum. Drawing largely on the experientialist account of the nature and structure of symbolic experience, I try to explicate the complexity of the symbol in terms of "symbolic mapping." According to experientialism, our experience consists of two levels, the physical and the symbolic. Physical experience can be extended to the symbolic level largely by means of symbolic mapping, and yet remains strongly constrained by physical experience. Symbolic mapping is the way in which we map part of a certain physical experience onto some other area, thereby understanding that other area in terms of the mapped part of the physical experience. On this account, all signs (icon, index, and symbol à la Peirce) are constructed by way of symbolic mapping. While icon and index are constructed by mapping physical-level experience onto some signifier (i.e., Peirce's "representamen"), the symbol is constructed by mapping abstract-level experience onto some signifier. Given the experientialist account that the abstract level of experience is itself constructed by symbolic mapping of the physical level, the mapping of abstract-level experience onto some other area is a secondary mapping. Thus the symbol, constructed by second- or higher-order mapping, becomes a higher-level sign. This analysis rests on the idea that explaining the nature of a sign is a matter of explaining symbolic experience, leaving behind Peirce's realist conception of the sign as an event or state of affairs out there. In conclusion, I suggest that this analysis opens up new possibilities for a more appropriate account of the nature of signs, beyond Peirce's complicated riddles.

Characteristics and Changes in Scientific Empathy during Students' Productive Disciplinary Engagement in Science (학생들의 생산적 과학 참여에서 발현되는 과학공감의 특성과 변화 분석)

  • Yang, Heesun;Kang, Seong-Joo
    • Journal of The Korean Association For Science Education, v.44 no.1, pp.11-27, 2024
  • This study aimed to investigate the role of scientific empathy in students' productive disciplinary engagement in scientific activities and to analyze the key factors of scientific empathy that manifest during this process. Twelve fifth-grade students were divided into three subgroups based on their general empathic abilities, and lessons promoting productive disciplinary engagement that integrated design thinking processes were conducted. Discourse analysis of the subgroups during the idea generation and prototyping stages, two of the five problem-solving steps, enabled observation of scientific empathy and scientific practice. The results showed that applying scientific empathy effectively through design thinking facilitated students' productive disciplinary engagement in science. In the idea generation stage, we observed an initial increase followed by a decrease in scientific empathy and practice utterances, while during the prototyping stage, utterance frequency increased, particularly toward the end. However, subgroups with lower empathic abilities displayed decreased discourse frequency in scientific empathy and practice during the prototyping stage, owing to a lack of collaborative communication. Across all empathic ability levels, the students articulated all five key factors of scientific empathy in their utterances in situations involving productive engagement in science. In the high empathic ability subgroup, empathic understanding and empathic concern were emphasized, whereas in the low empathic ability subgroup, sensitivity, scientific imagination, and situational interest, the factors of empathizing with the research object, were prominent. These results indicate that experiences of scientific empathy with research objects, beyond general empathic abilities, serve as a distinct and crucial factor in stimulating diverse participation and sustaining students' productive engagement in scientific activities during science classes. By suggesting a multidimensional impact of scientific empathy on productive disciplinary engagement, this study contributes to discussions on the theoretical structure and stability of scientific empathy in science education.

Spatial Distribution of Aging District in Taejeon Metropolitan City (대전광역시 노령화 지구의 공간적 분포 패턴)

  • Jeong, Hwan-Yeong;Ko, Sang-Im
    • Journal of the Korean association of regional geographers, v.6 no.2, pp.1-19, 2000
  • This study investigates and analyzes the regional patterns of population aging in Taejeon Metropolitan City, the overpopulated area of Choong-Cheong Province, using the cohort analysis method. Driven by rapid social and economic change, population aging in Korea has progressed rapidly since 1970, so rapidly that Korean society must prepare for an aging society; yet the social response has been slow, and there are few geographical studies of population aging in Korea. The data are the 1975 and 1985 Population and Housing Censuses and the 1995 General Population and Housing Census 10% sample survey, taken by the National Statistical Office. The method is to designate as aging districts those areas with a high aged population rate (the share of residents aged 60 and over in the total population) in 1975, 1985, and 1995, and also to designate special districts of decreasing population, where population fell sharply, and special districts of increasing population, where it rose sharply, on the presumption that the aged population rate rises where the highly mobile non-elderly population moves out. This presumption is then verified with the age-cohort analysis method. Finally, regional patterns in the city are identified by classifying and modeling the aging districts and the special districts of decreasing and increasing population. The resulting patterns reflect social population change, in particular the out-migration of the non-elderly population. Aging districts with a high aged population rate divide into a high-level keeping-up type, a relative falling type (below the Taejeon average in aging progress), and a relative rising type (above the average). Such districts appear both in the central area of the city and in the suburbs, because Taejeon has the character of an over-bounded city, but not in the newly built-up areas receiving large in-migration. The special districts of decreasing population, where population keeps falling, form a population doughnut at the CBD and its neighboring inner area, while the special districts of increasing population are located in the newly built-up area in the northern part of the city. The special districts of decreasing population overlap the aging districts and have higher aged population rates owing to the out-migration of the non-elderly population; the special districts of increasing population do not overlap the aging districts and have lower aged population rates owing to non-elderly in-migration. To clarify the distribution map, the districts are divided into four groups: special districts of decreasing population that coincide with aging districts, special districts of decreasing population, special districts of increasing population, and all other districts.
Cohort analysis by age of the population change in each group shows that the progress of population aging is closely related to social population change, and in particular that the aged population rate is higher where the non-elderly population has moved out. Modeling the aging districts and the special districts of decreasing and increasing population yields models of the CBD, the inner area, and the suburbs. Assuming the city is a concentric circle, it can be divided into three areas, the CBD (A), the inner area (B), and the suburbs (C); into three districts by population trend, the special districts of decreasing population (a), the special districts of increasing population (b), and the others (c); and into aging districts (α) and the others (β). Modeling these districts reveals the city's regional patterns. Aaα and Acβ patterns are found in the CBD; Aaα is a special district of decreasing population with a high aged population rate, because the less mobile aged population stays behind while the non-elderly population moves out. Baα, Baβ, Bbβ, and Bcβ patterns are found in the inner area, where the Baα pattern adjoins the CBD and the Bbβ pattern lies in newly developing areas of newly built apartment complexes. Cbβ, Ccα, and Ccβ patterns are found in the suburbs, among which the Ccα pattern shows the most advanced population aging; Ccβ districts undergoing large-scale housing land readjustment are likely to become Cbβ. As this analysis shows, the marriage and out-migration of new families, non-elderly people purchasing houses, are the main factors accelerating population aging in the central area of the city, while in the suburbs population aging reflects the large increase of the aged population through longer life expectancy under low death rates, the out-migration of the non-elderly, and cohorts newly entering old age. Investigating and analyzing the regional patterns of population aging is necessary at a time when population problems caused by aging and longer life expectancy are increasing, and I hope this study will aid future geographical research on population aging in Korea. As population aging will become a major problem in our society, local governments should plan for it according to how far it has progressed in each regional group.
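
To make the three-way typology concrete, here is a minimal sketch of how a district might be assigned a pattern code such as Aaα. The zone and pattern labels follow the paper, but the population-change thresholds and field names are hypothetical stand-ins, not the paper's census-derived criteria.

```python
def classify_district(zone: str, pop_change_pct: float, aged_rate: float,
                      city_aged_rate: float) -> str:
    """Return a pattern code such as 'Aa(alpha)'.

    zone           -- 'A' (CBD), 'B' (inner area), or 'C' (suburbs)
    pop_change_pct -- total population change over the study period, %
    aged_rate      -- share of residents aged 60+ in the district, %
    city_aged_rate -- city-wide average aged population rate, %
    """
    if pop_change_pct <= -10.0:    # hypothetical cutoff
        trend = "a"                # special district of decreasing population
    elif pop_change_pct >= 10.0:   # hypothetical cutoff
        trend = "b"                # special district of increasing population
    else:
        trend = "c"                # the others
    aging = "alpha" if aged_rate > city_aged_rate else "beta"
    return f"{zone}{trend}({aging})"

print(classify_district("A", -15.2, 9.8, 6.5))  # -> 'Aa(alpha)', the aging CBD core
```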

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal, v.14 no.1, pp.83-98, 2012
  • In a market where new and used cars compete with each other, focusing on only new cars or only used cars risks biased estimates of the cross elasticity between them. Unfortunately, most previous studies of the automobile industry have examined only new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on used car pricing policy in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model with car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, with new and used cars of different models all substitutable at the first stage. The data are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during January-June 2009. The new and used cars of the top nine selling models are included: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application showed that the proposed nested logit model outperformed the IIA model in both calibration and holdout samples. The alternative comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since its dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted using a modified Lanczos iterative method. The method is intuitively appealing: suppose a new car offers a certain amount of rebate and gains market share at first; in response, the used car of the same model keeps decreasing its price until it regains the lost share and restores the status quo, and the new car settles at a lowered market share due to the used car's reaction. The method finds the amount of price discount that maintains the status quo, along with the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The results showed that the IIA model underestimates cross elasticities and therefore suggests a less aggressive used car price discount in response to new car rebates than the proposed nested logit model does. The second simulation, with the Elantra, reconfirmed this conclusion. In the third simulation, the Corolla offered a $1,000 rebate to see the best response for the Elantra's new and used cars; interestingly, the Elantra's used car could maintain the status quo with a smaller price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit structures. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, remains a possibility even though it was rejected in the current study for mis-specification (its dissimilarity parameter exceeded 1). The NUB model may have been rejected due to true mis-specification or due to the data structure generated by typical car dealerships, which display both new and used cars of the same model. Given this market environment, customers may first choose a dealership (brand) and then choose between new and used cars, which would favor the BNU model, with brand choice at the first stage and the new-versus-used choice at the second stage. If there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model; which model better describes the data is an empirical question. It would also be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
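
For readers unfamiliar with the model structure, the following sketch computes nested logit choice shares for the two-stage structure described above (car model choice, then new versus used within a model). The utilities and the dissimilarity parameter lam are hypothetical; the paper estimates these quantities from the PIN transaction data.

```python
import numpy as np

def nested_logit_shares(utils: dict[str, dict[str, float]], lam: float):
    """utils: {model: {'new': u, 'used': u}} -> {(model, condition): share}.

    lam is the dissimilarity (inclusive value) parameter; it must lie in
    (0, 1] for a well-specified model -- the paper rejects a specification
    whose estimate exceeded 1.
    """
    # Inclusive value of each nest (car model)
    iv = {m: np.log(sum(np.exp(u / lam) for u in alts.values()))
          for m, alts in utils.items()}
    denom = sum(np.exp(lam * v) for v in iv.values())
    shares = {}
    for m, alts in utils.items():
        p_model = np.exp(lam * iv[m]) / denom             # first-stage choice
        within = sum(np.exp(u / lam) for u in alts.values())
        for cond, u in alts.items():                      # second-stage choice
            shares[(m, cond)] = p_model * np.exp(u / lam) / within
    return shares

# Hypothetical utilities (e.g., a rebate would raise the 'new' utility)
utils = {"Jetta": {"new": 1.2, "used": 0.9},
         "Elantra": {"new": 1.0, "used": 0.8}}
print(nested_logit_shares(utils, lam=0.6))
```

Because lam < 1, new and used cars of the same model are closer substitutes than cars of different models, which is exactly the cross-elasticity pattern the IIA model cannot capture.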

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy, v.13 no.1, pp.151-166, 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies of predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as an independent phenomenon is, of course, an analytical necessity for abstracting from complex realities, but welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed light on the effects of interactions between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests on the argument that predation is costly: it inflicts more losses on the predator than on the rival producer, and is therefore unlikely to succeed in driving out a rival who understands that any price cutting, if it ever takes place, must be temporary. Several recent attempts have been made to overcome this modelling difficulty, by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, share one serious weakness: they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints beyond the assumption of asymmetrically distributed information. The underlying intuition is as follows. Imagine a firm considering entry into a monopolist's market but uncertain about the incumbent's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm; if the monopolist has high costs, the rival will definitely enter, because it can make positive profits. In this situation, if the incumbent unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's cost by observing the monopolist's price. Knowing this, the high-cost monopolist raises its output to the level that a low-cost firm would have produced, in an effort to conceal its cost condition; this constitutes limit pricing. The same logic applies when a rival competitor is already in the market: producing the high-cost duopoly output is self-revealing and thus to be avoided, so the firm produces the low-cost duopoly output, inflicting losses on the entrant or rival producer and thereby acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game and thus a strategy unlikely to be played from the outset, this paper shows that predation can be a real occurrence, arising as an effective profit-maximizing strategy.
This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. The problem, however, is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. The difficulty arises from the fact that if the same practices were adopted by a low-cost firm, they could not be called entry-deterring; moreover, the high-cost incumbent in the model is doing exactly what a low-cost firm would have done to keep the market to itself. All in all, this paper suggests that government injunctions against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
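
The mimicking intuition can be illustrated numerically. The sketch below is not the paper's model; it is a minimal linear-demand example with hypothetical parameter values, showing the current profit a high-cost incumbent gives up when it produces the low-cost monopoly output to conceal its type.

```python
# Linear demand P = a - b*Q with constant marginal cost c.
# All parameter values are hypothetical.
a, b = 100.0, 1.0           # demand intercept and slope
c_low, c_high = 20.0, 40.0  # the incumbent's two possible marginal costs

def monopoly_output(c: float) -> float:
    """Profit-maximizing output: argmax_q (a - b*q - c) * q = (a - c) / (2b)."""
    return (a - c) / (2 * b)

q_low, q_high = monopoly_output(c_low), monopoly_output(c_high)
print(f"low-cost monopoly output:  {q_low:.1f}")   # 40.0
print(f"high-cost monopoly output: {q_high:.1f}")  # 30.0

# Limit pricing: the high-cost incumbent produces q_low, accepting a
# lower margin today so its output does not reveal its cost type.
profit_mimic = (a - b * q_low - c_high) * q_low    # (60 - 40) * 40 = 800
profit_naive = (a - b * q_high - c_high) * q_high  # (70 - 40) * 30 = 900
print(f"mimicking profit {profit_mimic:.0f} vs. revealing profit {profit_naive:.0f}")
```

The 100-unit profit sacrifice is worthwhile when it deters entry that would otherwise erode the incumbent's future monopoly profits.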

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services, v.16 no.1, pp.67-74, 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategy used to exploit item importance, such approaches are classified as weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, database analysis can reveal the importance of a given transaction, since a transaction's weight is higher when it contains many items with high weights. We analyze the advantages and disadvantages, and compare the performance, of the best-known algorithms in the frequent itemset mining field based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights; there are also various state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. Because each node of the WIT-tree holds item information such as the item and its transaction IDs, the algorithms need no additional database scan once the WIT-tree has been constructed. Whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. The algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain all of them; WIT-FWIs-MODIFY reduces the operations needed to calculate the frequency of a new itemset; and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage; a scalability test also evaluates each algorithm's stability as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, based on the Apriori technique, is the least efficient because it requires far more computation than the others on average.
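
As a minimal illustration of the transactional-weight idea (simplified; the WIT-tree and the specific WIS/WIT-FWIs procedures are not reproduced here), the sketch below weights each transaction by the mean weight of its items and computes a normalized weighted support. All item names, weights, and the threshold are hypothetical.

```python
from itertools import combinations

# Hypothetical item weights and a toy transaction database
item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
db = [{"a", "b"}, {"a", "b", "c"}, {"a", "d"}, {"b", "c", "d"}]

def tx_weight(tx: set[str]) -> float:
    """Weight of a transaction = average weight of its items."""
    return sum(item_weight[i] for i in tx) / len(tx)

def weighted_support(itemset: set[str]) -> float:
    """Sum of the weights of transactions containing the itemset,
    normalized by the total transaction weight of the database."""
    total = sum(tx_weight(t) for t in db)
    return sum(tx_weight(t) for t in db if itemset <= t) / total

# Enumerate weighted frequent itemsets above a hypothetical threshold.
items = sorted(item_weight)
for k in (1, 2):
    for combo in combinations(items, k):
        ws = weighted_support(set(combo))
        if ws >= 0.4:
            print(set(combo), round(ws, 3))
```

A transaction dense in high-weight items (such as {"a", "d"}) pulls up the weighted support of every itemset it contains, which is precisely how these algorithms let "important" transactions count for more than merely frequent ones.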

Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services, v.16 no.1, pp.47-55, 2015
  • Recently, hospital information systems have tended to incorporate various smart technologies. Accordingly, smart devices such as smartphones and tablet PCs are utilized in medical information systems, and these environments consist of various applications executing on heterogeneous sensors, devices, systems, and networks. In such hospital information system environments, applying security services by traditional access control methods causes problems. Most existing security systems use an access control list structure, in which access is permitted only as defined by an access control matrix of, for example, client names and service object method names. The major problem with this static approach is that it cannot quickly adapt to changed situations. Hence, new security mechanisms are needed that are more flexible and can be easily adapted to various environments with very different security requirements; research is also needed to address changes in the medical services a patient receives. In this paper, we suggest a dynamic approach to medical information systems in smart mobile environments, focusing on how medical information systems are accessed according to dynamic access control methods built on the hospital's existing information system environment. The physical environment consists of mobile X-ray imaging devices, dedicated mobile and general smart devices, PACS, an EMR server, and an authorization server. The synchronization and monitoring services for the mobile X-ray imaging equipment were developed on the .NET Framework under Windows 7, while the dedicated smart device application implements the dynamic access services through JSP and the Java SDK on the Android OS. Medical information exchanged among PACS, the mobile X-ray imaging devices, and the dedicated smart devices conforms to the DICOM medical image standard, and the EMR information is based on HL7. To provide dynamic access control, we classify each patient's context according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature, and we present event trace diagrams divided into two cases, the general situation and the emergency situation. We then design the dynamic access to medical care information by authentication method: the authentication information contains ID/password, roles, position, working hours, and emergency certification codes for emergency patients. In general situations, the dynamic access control method grants access to medical information according to the values of the authentication information; in an emergency, medical information can be accessed with an emergency code, without the full authentication information. We also constructed an integrated medical information database schema consisting of medical information, patient, medical staff, and medical image information according to medical information standards. Finally, we show the usefulness of the dynamic access application service on smart devices through execution results of the proposed system for patient contexts such as general and emergency situations. In particular, the proposed system provides effective medical information services on smart devices in emergency situations through dynamic access control, and we expect it to be useful for u-hospital information systems and services.
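
A minimal sketch of the dynamic access decision described above follows. The vital-sign cutoffs, role names, and emergency-code format are hypothetical stand-ins, not the paper's actual classification rules; the point is only that the access decision depends on the patient's current context rather than on a static access control list.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    spo2: float       # oxygen saturation, %
    heart_rate: int   # beats per minute
    systolic_bp: int  # mmHg

def is_emergency(v: Vitals) -> bool:
    """Classify the patient context from bio-information (hypothetical cutoffs)."""
    return v.spo2 < 90 or v.heart_rate > 130 or v.systolic_bp < 80

def may_access(role: str, on_duty: bool, emergency_code: str | None,
               vitals: Vitals) -> bool:
    """Grant or deny access to the patient's medical information."""
    if is_emergency(vitals):
        # Emergency path: a valid emergency certification code suffices.
        return emergency_code is not None and emergency_code.startswith("EM-")
    # General path: decided from the authentication information.
    return role in {"physician", "radiologist"} and on_duty

print(may_access("nurse", True, "EM-001", Vitals(85, 140, 70)))  # True (emergency)
print(may_access("physician", True, None, Vitals(98, 72, 120)))  # True (general)
print(may_access("nurse", True, None, Vitals(98, 72, 120)))      # False
```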

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1993.06a, pp.1041-1043, 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. Memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues in hardware implementation, because of the large memory space needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions to those having triangular, trapezoidal, or other pre-defined shapes. These kinds of functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.); however, this results in a loss of computational power owing to the computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the freedom to choose membership functions of any shape; however, it can waste significant memory, since for each fuzzy set many elements of the universe of discourse may have a membership value of zero. It has also been noticed that in almost all cases the points shared among fuzzy sets, i.e., points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, while not restricting the shapes of the membership functions, strongly reduces the computation time for membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (for good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. The numbers of bits required for these specifications are therefore 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits for a membership value, and dm(fm) is the number of bits for the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128 × 24 bits. If we had chosen to memorize all values of the membership functions, each memory row would have held the membership value of every fuzzy set, giving a word dimension of 8 × 5 bits.
The memory dimension would therefore have been 128 × 40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value in at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse are memorized accordingly. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is fed as input to a comparator (combinatory net): if the index equals the bus value, one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). In this way the memory dimension of the antecedent is reduced, since only non-null values are memorized, while the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse; from our study in the field of fuzzy systems, typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values at any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
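
The memory sizing above can be checked with a few lines of arithmetic; the sketch below reproduces the paper's own example figures (128-element universe, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element).

```python
import math

universe = 128  # elements of the universe of discourse U
n_sets = 8      # fuzzy sets in the term set
levels = 32     # discretization levels for membership values
nfm = 3         # max non-null membership values per element

dm_m = math.ceil(math.log2(levels))   # bits per membership value -> 5
dm_fm = math.ceil(math.log2(n_sets))  # bits per fuzzy-set index  -> 3

word_proposed = nfm * (dm_m + dm_fm)  # Length = nfm * (dm(m) + dm(fm)) = 24
word_vectorial = n_sets * dm_m        # store every membership value  = 40

print(f"proposed:  {universe} x {word_proposed} = {universe * word_proposed} bits")
print(f"vectorial: {universe} x {word_vectorial} = {universe * word_vectorial} bits")
# proposed:  128 x 24 = 3072 bits; vectorial: 128 x 40 = 5120 bits
```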
