• Title/Summary/Keyword: sources of complexity

Geometrically and Topologically Consistent Map Conflation for Federal and Local Governments

  • Kang, Ho-Seok
    • Journal of the Korean Geographical Society / v.39 no.5 s.104 / pp.804-818 / 2004
  • As spatial data resources become more abundant, the potential for conflict among them increases. Such conflicts can exist between two or more spatial datasets covering the same area and categories. It therefore becomes increasingly important to be able to relate these spatial data sources to one another and then create new spatial datasets with matching geometry and topology. One extensive spatial dataset is the US Census Bureau's TIGER file, which includes census tracts, block groups, and blocks. At present, however, census maps often carry information that conflicts with municipally maintained detailed spatial information. Therefore, in order to fully utilize census maps and their valuable demographic and economic information, the locational information of the census maps must be reconciled with the more accurate municipally maintained reference maps and imagery. This paper formulates a conceptual framework and two map models of map conflation that make source maps geometrically and topologically consistent with the reference maps. The first model is based on the cell model of a map, in which a map is a cell complex consisting of 0-cells, 1-cells, and 2-cells. The second map model is based on a different set of primitive objects that remain homeomorphic even after map generalization. A new hierarchy-based map conflation is also presented, which incorporates physical, logical, and mathematical boundaries to reduce complexity and computational load. Iterative map conflation principles are formulated, with census maps used as an example; they consist of attribute embedding, meaningful-node finding, cartographic 0-cell matching, cartographic 1-cell matching, and map transformation.
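
The cell-complex map model and the matching steps in this abstract can be made concrete with a small sketch. The following Python snippet is a minimal illustration, not the authors' implementation: the node and edge names, the tolerance, and the greedy nearest-neighbour rule are all assumptions. It shows only the cartographic 0-cell match and the final map transformation, snapping matched source (census) nodes onto the reference (municipal) geometry while leaving the edge/face topology untouched, so the map stays homeomorphic.

```python
# Minimal sketch of a cell-complex map: 0-cells are nodes, 1-cells are
# edges between nodes, 2-cells are faces bounded by edges.
from dataclasses import dataclass, field
import math

@dataclass
class CellComplexMap:
    nodes: dict                                   # 0-cells: id -> (x, y)
    edges: list = field(default_factory=list)     # 1-cells: (node_id_a, node_id_b)
    faces: list = field(default_factory=list)     # 2-cells: ordered edge lists

def match_0_cells(source, reference, tol):
    """Greedy nearest-neighbour match of source nodes to reference nodes."""
    matches = {}
    for sid, (sx, sy) in source.nodes.items():
        best, best_d = None, tol
        for rid, (rx, ry) in reference.nodes.items():
            d = math.hypot(sx - rx, sy - ry)
            if d < best_d:
                best, best_d = rid, d
        if best is not None:
            matches[sid] = best
    return matches

def transform_to_reference(source, reference, matches):
    """Snap matched source nodes onto reference coordinates (map transformation).
    Edges and faces are untouched, so the topology is preserved."""
    for sid, rid in matches.items():
        source.nodes[sid] = reference.nodes[rid]

# Usage: two tiny maps of the same street segment.
census = CellComplexMap(nodes={"a": (0.0, 0.0), "b": (9.7, 0.3)}, edges=[("a", "b")])
municipal = CellComplexMap(nodes={"A": (0.1, 0.1), "B": (10.0, 0.0)})
m = match_0_cells(census, municipal, tol=1.0)
transform_to_reference(census, municipal, m)
print(census.nodes)  # {'a': (0.1, 0.1), 'b': (10.0, 0.0)}
```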

An Event-Based Process Performance Measurement Model for Efficient Decision-Making

  • Park, Jae-Won;Choi, Jae-Hyun;Cho, Poong-Youn;Lee, Nam-Yong
    • The KIPS Transactions: Part D / v.17D no.4 / pp.259-270 / 2010
  • Information systems nowadays are heterogeneous and distributed, and they integrate enterprise information through processes. They are also very complex, because the processes link them together with the aim that the systems work as one. A process is a framework that contains all of the business activities in an enterprise and holds much of the information needed for measuring performance. A process consists of activities, and an activity contains events, which can be considered information sources. In most cases it is very valuable to determine whether a process is meaningful, but this is difficult because of the complexity of measuring performance, and also because finding relationships between business factors and events is not a simple problem. If a process could be evaluated before it ends, operation costs would fall and processes could be executed more efficiently. In this paper we propose an event-based process performance measurement model. First, we present the concept of process performance measurement and a model for selecting process and activity indexes from the events collected from information systems. Second, we propose methodologies and a data schema for storing and managing the selected process indexes, together with mapping methods between indexes and events. Finally, we propose a process performance measurement model that uses the collected events to give users valuable managerial information.
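
To make the event-to-index mapping concrete, here is a minimal Python sketch under assumed names (the event types, index names, and mapping schema are all invented for illustration; the paper's own schema is richer). It shows the core idea: events collected from information systems are folded into process performance indexes, so a running process instance can be scored before it completes.

```python
# Fold a stream of activity events into performance indexes per process.
from collections import defaultdict

# Mapping schema: which event types feed which performance index, and how
# each event's payload contributes to the index value (all hypothetical).
INDEX_MAPPING = {
    "order_received": [("throughput", lambda e: 1)],
    "task_completed": [("throughput", lambda e: 1),
                       ("cycle_time", lambda e: e["duration_s"])],
    "task_failed":    [("error_rate", lambda e: 1)],
}

def measure(events):
    """Accumulate index totals for each process instance from its events."""
    indexes = defaultdict(lambda: defaultdict(float))
    for e in events:
        for index_name, contribution in INDEX_MAPPING.get(e["type"], []):
            indexes[e["process_id"]][index_name] += contribution(e)
    return indexes

events = [
    {"process_id": "p1", "type": "order_received"},
    {"process_id": "p1", "type": "task_completed", "duration_s": 120},
    {"process_id": "p1", "type": "task_failed"},
]
print(dict(measure(events)["p1"]))
# {'throughput': 2.0, 'cycle_time': 120.0, 'error_rate': 1.0}
```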

Pollutant Loading Estimate from Yongdam Watershed Using BASINS/HSPF

  • Jang, Jae-Ho;Jung, Kwang-Wook;Jeon, Ji-Hong;Yoon, Chun-Gyeong
    • Korean Journal of Ecology and Environment / v.39 no.2 s.116 / pp.187-197 / 2006
  • A mathematical modeling program called Hydrological Simulation Program-FORTRAN (HSPF), developed by the United States Environmental Protection Agency (EPA), was applied to the Yongdam Watershed to examine its applicability to loading estimates at the watershed scale. It was run under the BASINS (Better Assessment Science for Integrating point and Nonpoint Sources) program, and the model was validated using monitoring data from 2002~2003. The model efficiency for runoff was high in the comparison between simulated and observed data, while it was relatively low for the water quality parameters. Still, its reliability and performance were within expectation considering the complexity of the watershed, its pollutant sources, and the intermixed land uses. The estimated pollutant loads from the Yongdam watershed for BOD, T-N, and T-P were 1,290,804 kg $yr^{-1}$, 3,753,750 kg $yr^{-1}$, and 77,404 kg $yr^{-1}$, respectively. The non-point source (NPS) contribution was high: BOD 57.2%, T-N 92.0%, and T-P 60.2% of the total annual loading in the study area. The NPS loading during the monsoon rainy season (June to September) was about 55~72% of the total NPS loading, and the runoff volume showed a similar proportion (69%). However, pollutant concentrations were not necessarily high during the rainy season and showed a decreasing trend with increasing water flow. Overall, BASINS/HSPF was applied to the Yongdam watershed successfully and without difficulty, and it was found that the model can be used conveniently to assess watershed characteristics and to estimate pollutant loading at the watershed scale.
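
HSPF computes the loadings internally, but the underlying accounting is simply concentration times flow integrated over time. The sketch below is an illustrative Python fragment (the unit choices and the two sample days are assumptions, not data from the paper); it also mirrors the dilution effect noted above, where higher monsoon flows coincide with lower concentrations.

```python
# Load accounting: load = concentration x flow, integrated over time.
SECONDS_PER_DAY = 86_400

def total_load_kg(flows_m3s, concs_mgL):
    """Sum daily loads: (m^3/s) * (mg/L = g/m^3) * (s/day) -> g, then kg."""
    grams = sum(q * c * SECONDS_PER_DAY for q, c in zip(flows_m3s, concs_mgL))
    return grams / 1000.0

# Two illustrative days: a dry day, and a monsoon day with higher flow
# but lower concentration (dilution).
print(total_load_kg(flows_m3s=[5.0, 80.0], concs_mgL=[3.0, 1.2]))
```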

Photoimmunology -Past, Present and Future-

  • Daynes, Raymond A.;Chung, Hun-Taeg;Roberts, Lee K.
    • The Journal of the Korean Society for Microbiology / v.21 no.3 / pp.311-329 / 1986
  • The experimental exposure of animals to sources of ultraviolet radiation (UVR) that emit their energy primarily in the UVB region (280-320 nm) is known to result in a number of well-described changes in the recipient's immune competence. Two such changes are a depressed capacity to respond immunologically to transplants of syngeneic UVR-induced tumors and a markedly reduced responsiveness to known inducers of delayed-type (DTH) and contact hypersensitivity (CH) reactions. The results of experiments designed to elucidate the mechanisms responsible for UVR-induced immunomodulation have implicated: 1) an altered pattern of lymphocyte recirculation, 2) suppressor T cells (Ts), 3) deviations in systemic antigen-presenting cell (APC) potential, 4) changes in the production of interleukin-1-like molecules, and 5) the functional inactivation of epidermal Langerhans cells. The exposure of skin to UVR therefore causes a number of both local and systemic alterations to the normal host immune system. In spite of this seeming complexity and diversity of responses, our recent studies have established that each of the UVR-mediated changes is probably of equal importance in creating the UVR-induced immunocompromised state. Normal animals were exposed to low-dose UVR on their dorsal surfaces under conditions where a $3.0\;cm^2$ area of skin was physically protected from the light energy. Contact sensitization of these animals with DNFB, to either the irradiated or the protected back skin, resulted in markedly reduced CH responses. This was observed despite normal responsiveness following skin sensitization on the ventral surfaces of the UVR-exposed animals. Systemic treatment of the low-dose UVR recipients with the drug indomethacin (1-3 micrograms/day) during the UVR exposures completely reversed the depressions observed following DNFB sensitization through "protected" dorsal skin, while the altered responsiveness found in the group sensitized through directly UVR-exposed sites was maintained. These studies implicate the importance of epidermal cells as effective APC in the skin and also suggest that some of the systemic influences caused by UVR exposure involve the production of prostaglandins. This concept was further supported by the finding that indomethacin treatment was also capable of totally reversing the systemic depressions in CH responsiveness caused by high-dose UVR exposure (30 kJ/$m^2$) of mice. Attempts to analyze the cellular mechanisms responsible established that the spleens of all animals that demonstrated altered CH responses, regardless of whether sensitization was through a normal or an irradiated skin site, contained suppressor cells. Interestingly, we also found normal levels of T effector cells in the peripheral lymph nodes of UVR-exposed mice that were contact sensitized through normal skin. No effector cells were found when skin sensitization took place through irradiated skin sites. In spite of this apparent paradox, insight into the probable mechanisms responsible for these observations was provided by establishing that UVR exposure of skin results in a striking, dose-dependent blockade of the efferent lymphatic vessels in all peripheral lymph nodes. Therefore, the afferent phases of immune responses can apparently take place normally in UVR-exposed animals when antigen is applied to normal skin. The final effector responses, however, appear to be inhibited in the UVR-exposed animals by an apparent block of effector cell mobility. This contrasts with findings in normal animals. Following contact sensitization, normal animals were also found to contain simultaneously both antigen-specific suppressor T cells and lymph node effector cells. However, these normal animals were fully capable of mobilizing their effector cells into the systemic circulation, thereby allowing these cells to localize at peripheral sites of antigen challenge. Our results suggest that UVR is probably not a significant inducer of suppressor T-cell activity to topically applied antigens. Rather, UVR exposure appears to modify the normal relationship that exists between effector and regulatory immune responses in vivo. It does so by causing a direct reduction in the skin's APC function, which results in an absence of effector cell generation to antigens applied to UVR-exposed skin sites; by inhibiting the capacity of effector cells to gain access to skin sites of antigen challenge; or by sequestering the lymphocytes with effector cell potential in the draining peripheral lymph nodes. Each of these situations results in a similar effect on the UVR-exposed host: a reduced capacity to elicit a CH response. We hypothesize that the altered DTH responses, alloresponses, and graft-versus-host responses that have been observed in UVR-exposed animals may result from similar mechanisms.

Wearable Computers

  • Cho, Gil-Soo;Barfield, Woodrow;Baird, Kevin
    • Fiber Technology and Industry / v.2 no.4 / pp.490-508 / 1998
  • One of the latest fields of research in the area of output devices is tactual display devices [13,31]. These tactual or haptic devices allow the user to receive haptic feedback from a variety of sources, so the user can actually feel virtual objects and manipulate them by touch. This is an emerging technology that will be instrumental in enhancing the realism of wearable augmented environments for certain applications. Tactual displays have previously been used for scientific visualization in virtual environments by chemists and engineers to improve perception and understanding of force fields and of world models populated with impenetrable objects. In addition to tactual displays, wearable audio displays that allow sound to be spatialized are being developed. With wearable computers, designers will soon be able to pair spatialized sound with virtual representations of objects when appropriate, making the wearable computing experience even more realistic to the user. Furthermore, as the number and complexity of wearable computing applications continues to grow, there will be increasing need for systems that are faster, lighter, and have higher-resolution displays. Better networking technology will also need to be developed so that all users of wearable computers can have high-bandwidth connections for real-time information gathering and collaboration. In addition to the technology advances that make users need to wear computers in everyday life, there is also the desire to have users want to wear their computers. To achieve this, wearable computing needs to be unobtrusive and socially acceptable. By making wearables smaller and lighter, or actually embedding them in clothing, users can conceal them easily and wear them comfortably. The military is currently working on the development of the Personal Information Carrier (PIC), or digital dog tag. The PIC is a small electronic storage device containing medical information about the wearer. While old military dog tags contained only 5 lines of information, the digital tags may contain volumes of multimedia information, including medical history, X-rays, and cardiograms. Using hand-held devices in the field, medics would be able to call this information up in real time for better treatment. A fully functional transmittable device is still years off, but this technology, once developed by the military, could be adapted to civilian users and provide any information, medical or otherwise, in a portable, unobtrusive, and fashionable way. Another future device that could increase the safety and well-being of its users is the nose-on-a-chip developed by the Oak Ridge National Lab in Tennessee. This tiny digital silicon chip, about the size of a dime, is capable of 'smelling' natural gas leaks in stoves, heaters, and other appliances. It can also detect dangerous levels of carbon monoxide, and it can be configured to notify the fire department when a leak is detected. This nose chip should be commercially available within 2 years; it is inexpensive, requires low power, and is very sensitive. Along with gas detection, this device may someday also be configured to detect smoke and other harmful gases. Embedded in workers' uniforms, name tags, and the like, it could be a lifesaving computational accessory. In addition to this future safety technology, devices for entertainment and security will soon be available as accessories. The LCI computer group is developing a Smartpen that electronically verifies a user's signature. With the increase in credit card use and the rise in forgeries comes the need for commercial industries to constantly verify signatures. The Smartpen writes like a normal pen but uses sensors to detect the motion of the pen as users sign their name, in order to authenticate the signature. This computational accessory should be available in 1999 and would bring increased peace of mind to consumers and vendors alike. In the entertainment domain, Panasonic is creating the first portable hand-held DVD player. This device weighs less than 3 pounds and has a screen about 6 inches across. The color LCD has the same 16:9 aspect ratio as a cinema screen and supports a high resolution of 280,000 pixels and stereo sound. The player can play standard DVD movies and has an hour of battery life for mobile use. To summarize, in this paper we presented concepts related to the design and use of wearable computers, with extensions to smart spaces. For some time, researchers in telerobotics have used computer graphics to enhance remote scenes. Recent advances in augmented reality displays make it possible to enhance the user's local environment with 'information'. As shown in this paper, there are many application areas for this technology, such as medicine, manufacturing, training, and recreation. Wearable computers allow a much closer association of information with the user. By embedding sensors in the wearable that allow it to see what the user sees, hear what the user hears, sense the user's physical state, and analyze what the user is typing, an intelligent agent may be able to analyze what the user is doing and try to predict the resources he or she will need next or in the near future. Using this information, the agent may download files, reserve communication bandwidth, post reminders, or automatically send updates to colleagues to help facilitate the user's daily interactions. This intelligent wearable computer would be able to act as a personal assistant who is always around, knows the user's personal preferences and tastes, and tries to streamline interactions with the rest of the world.

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.33-56 / 2016
  • Many researchers have focused on developing bankruptcy prediction models that secure enhanced performance using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM). Most bankruptcy prediction models in academic studies have used financial ratios as their main input variables. The bankruptcy of firms is associated both with a firm's financial state and with the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that exploiting only financial ratios has some drawbacks. Accounting information such as financial ratios is based on past data, and it is usually determined one year before bankruptcy; thus, a time lag exists between the point of closing financial statements and the point of credit evaluation. In addition, financial ratios do not capture environmental factors such as the external economic situation. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Qualitative information must therefore be added to the conventional bankruptcy prediction model to supplement the accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various information sources, previous studies have used only quantitative information. Recently, however, big data analytics, such as text mining techniques, has been drawing much attention in academia and industry, with an increasing amount of unstructured text data available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling. Nevertheless, the use of qualitative information on the web for business prediction modeling is still in its early stage, restricted to limited applications such as stock prediction and movie revenue prediction. It is thus necessary to apply big data analytics techniques, such as text mining, to a wider range of business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form because of the complexity of managing and processing unstructured text data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms that uses both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as the mechanism for processing qualitative information. A sentiment index is provided at the industry level, extracted from a large amount of text data, to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles. The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between an occurring term and the actual economic condition of the industry, rather than the inherent semantics of the term. The experimental results showed that incorporating qualitative information based on big data analytics into the traditional bankruptcy prediction model based on accounting information is effective in enhancing predictive performance. The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy. In particular, a negative sentiment variable improved the accuracy of corporate bankruptcy prediction, because the bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
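
A minimal Python sketch of the keyword-based sentiment scoring described above follows. The lexicon entries, scoring rule, and sample headlines are illustrative assumptions; the paper's construction-specific lexicon assigns sentiment by a term's relationship to actual industry conditions rather than its inherent semantics, which the hand-picked signs below only hint at.

```python
# Keyword-based sentiment index from a domain-specific lexicon (toy example).
DOMAIN_LEXICON = {
    "default": -1.0, "unsold": -1.0, "slump": -1.0,   # bearish for construction
    "housing starts": +1.0, "orders": +1.0,           # bullish for construction
}

def sentiment_index(articles):
    """Average lexicon hits per article, aggregated to an industry-level index."""
    scores = []
    for text in articles:
        text = text.lower()
        hits = [score for term, score in DOMAIN_LEXICON.items() if term in text]
        scores.append(sum(hits) / len(hits) if hits else 0.0)
    return sum(scores) / len(scores)

news = ["Unsold apartments pile up as the housing slump deepens",
        "Overseas orders lift construction outlook"]
print(sentiment_index(news))  # 0.0: one bearish and one bullish article cancel out
```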

A Review on Ultimate Lateral Capacity Prediction of Rigid Drilled Shafts Installed in Sand (사질토에 설치된 강성현장타설말뚝의 극한수평지지력 예측에 관한 재고)

  • Cho, Nam-Jun;Kulhawy, F.H.
    • Journal of the Korean Geotechnical Society / v.21 no.2 / pp.113-120 / 2005
  • An understanding of soil-structure interaction is the key to rational and economical design of laterally loaded drilled shafts. Although extensive research on the behavior of deep foundations subjected to lateral loads has been conducted for several decades, it is very difficult to formulate the ultimate lateral capacity in a general equation because of the inherent soil nonlinearity and nonhomogeneity and the complexity added by the three-dimensional, asymmetric nature of the problem. This study reviews the four best-known of the many design methods (i.e., those of Reese, Broms, Hansen, and Davidson) with respect to specific site conditions, drilled shaft geometric characteristics (D/B ratios), and loading conditions. In addition, the hyperbolic lateral capacities (H$_h$) interpreted by the hyperbolic transformation of the load-displacement curves obtained from model tests carried out as part of this research are compared with the ultimate lateral capacities (H$_u$) predicted by the four methods. The H$_u$/H$_h$ ratios from Reese's and Hansen's methods are 0.966 and 1.015, respectively, showing that both methods yield results very close to the test results. Whereas the H$_u$ predicted by Davidson's method is larger than H$_h$ by about $30\%$, the C.O.V. of the lateral capacities predicted by Davidson's method is the smallest of the four. Broms' method, the simplest of the four, gives H$_u$/H$_h$ = 0.896; it estimates the ultimate lateral capacity lower than the others because some resisting sources against lateral loading are neglected. Yet it is among the most reliable methods, with the smallest S.D. in predicting the ultimate lateral capacity. In conclusion, none of the four methods is superior to the others in terms of accuracy in predicting the ultimate lateral capacity; moreover, regardless of how sophisticated or complicated the calculation procedures are, the reliability of the lateral capacity predictions appears to be a separate issue.
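
The hyperbolic transformation mentioned above can be sketched briefly. Assuming the common hyperbolic form $H = y/(a + b\,y)$ for the load-displacement curve, the plot of $y/H$ against displacement $y$ is a straight line, and the inverse of its slope is the asymptotic (hyperbolic) capacity H$_h$. The Python fragment below uses illustrative numbers rather than the paper's test data and recovers the capacity from synthetic measurements.

```python
# Hyperbolic transformation: fit a line through (y, y/H); capacity = 1/slope.
def hyperbolic_capacity(displacements, loads):
    """Least-squares slope of y/H versus y; H_h is the inverse of the slope."""
    xs = displacements
    ys = [y / h for y, h in zip(displacements, loads)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 / slope

# Synthetic load-displacement data from H = y/(a + b*y) with 1/b = 200 kN.
a, b = 0.02, 1 / 200.0
y = [1, 2, 4, 8, 16]
H = [yi / (a + b * yi) for yi in y]
print(round(hyperbolic_capacity(y, H)))  # 200
```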

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.79-96 / 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of studies have been conducted to improve firms' short-term performance and enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. The discovery of promising technologies depends on how a firm evaluates the value of technologies, so many evaluation methods have been proposed. Expert-opinion-based approaches have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of analysis results, it is usually costly and time-consuming and is limited to qualitative evaluation. Many studies have attempted to forecast the value of technology using patent information to overcome the limitations of the expert-opinion-based approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of a technology in a uniform structure. Furthermore, it provides information that is not divulged in any other source. Although patent-information-based approaches have contributed to our understanding of the prediction of promising technologies, they have some limitations, because predictions have been made from past patent information alone and the interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of technologies refers to their influence on future technology development and improvement and is also clearly associated with their monetary value. The fusion of technologies denotes the extent to which a technology fuses different technologies and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of technologies denotes their degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., $t-n$, $t-n-1$, $t-n-2$, ${\cdots}$) as input variables. The output variables are the values of the five indexes at time $t$, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, the study recommends the final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising-technology index. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of the predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, although for the other indexes the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis. These unexpected results may be explained, in part, by the small number of patents: since this study uses only patent data in class G06F, the sample is relatively small, leading to learning that is insufficient for the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technologies. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial intelligence network. It helps managers who plan technology development and policy makers who implement technology policy by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex field of technological forecasting.
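
The second and third modules can be illustrated with a short sketch. The following Python fragment is an assumption-laden illustration, not the authors' code: it trains a small backpropagation network (scikit-learn's MLPRegressor) to predict the five index values at time $t$ from their lagged values, then combines the predictions with AHP-style weights into a single promising-technology score. The synthetic data, lag count, network size, and weights are all invented.

```python
# Backpropagation network over lagged patent indexes, plus AHP weighting.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_tech, n_lags, n_idx = 200, 3, 5   # 5 indexes: impact, fusion x2, diffusion x2

# Synthetic training data: inputs are the indexes at t-3..t-1, target is t.
X = rng.random((n_tech, n_lags * n_idx))
y = X[:, -n_idx:] * 0.9 + 0.1 * rng.random((n_tech, n_idx))  # mild persistence

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)                          # backpropagation training

pred = model.predict(X[:1])[0]           # predicted indexes at time t
ahp_weights = np.array([0.35, 0.25, 0.10, 0.20, 0.10])  # from pairwise comparisons
print("promising score:", float(pred @ ahp_weights))
```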