• Title/Summary/Keyword: information and support needs

996 search results

Research and Development Trends in Telecommunications at the End of the 20th Century (Present) and the Beginning of the 21st Century (Future) (20세기 말과 21세기 초의 전기통신의 연구개발동향)

  • 조규심
    • Journal of the Korean Professional Engineers Association
    • /
    • v.29 no.2
    • /
    • pp.15-23
    • /
    • 1996
  • With the ever-increasing importance of high-speed information in society as we move towards the 21st century, telecommunication laboratories of advanced nations are pressing forward with research and development aimed at implementing VI & P (Visual, Intelligent and Personal) services and constructing a new network to support them. In regards to the former, based on a long-term view of technological and market trends, those laboratories are researching and developing services that will make possible an effective progression from the development of services that answer potential needs towards the full-scale implementation of VI & P services. In regards to the latter, these laboratories are responding in a flexible manner to the increasing diversity and dispersion of the communications environment by separating the network into a transmission system and a versatile information control/conversion system, and are working at enhancing the performance of both. Within these broad aims, the laboratories are currently focusing their attention on three areas: the technology for a high-speed broadband transmission system featuring optical frequency multiplexing and ATM techniques, network and software technologies for advanced information control and conversion, and technology for constructing a new access network that can provide a comprehensive range of multimedia services. This article describes the laboratories' concept of how VI & P services will develop in the future, and the latest trends in the field of communications. It also describes the ideal configuration of the new network and discusses the important technological aspects of how it is to be constructed. Finally, it presents the results of the laboratories' recent research, which include some innovative work, and points out the areas requiring future investigation.


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many companies in information and communication technology make their own AI technology public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, there is a lack of studies that help develop or use deep learning open source software in industry. This study thus attempts to derive a strategy for adopting the framework through case studies of a deep learning open source framework. Based on the technology-organization-environment (TOE) framework and a literature review related to the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two cases of success and one case of failure) and revealed that seven out of eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study analysis results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. After satisfying these three procedures at the stage of using the deep learning framework, companies will increase the number of deep learning research developers, the ability to use the deep learning framework, and the supply of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adoption of the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once the three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Reliability of Self-Reported Information by Farmers on Pesticide Use (일부 농업인에서 자기 기입식 농약 노출 설문에 대한 신뢰도 연구)

  • Lee, Yo-Han;Cha, Eun-Shil;Moon, Eun-Kyeong;Kong, Kyoung-Ae;Koh, Sang-Baek;Lee, Yun-Keun;Lee, Won-Jin
    • Journal of Preventive Medicine and Public Health
    • /
    • v.43 no.6
    • /
    • pp.535-542
    • /
    • 2010
  • Objectives: Exposure assessment is a major challenge faced by studies that evaluate the association between pesticide exposure and adverse health outcomes. The objective of this study was to investigate the reliability of information that farmers self-report regarding their pesticide use. Methods: Twenty-five items based upon existing questionnaires were designed to focus on pesticide exposure. In 2009, a self-administered survey was conducted on two occasions four weeks apart among 205 farmers residing in Gyeonggi and Gangwon provinces. As reliability measures, we calculated the percentage agreement, the kappa statistic, and the intraclass correlation coefficient (ICC) between the two reports according to the characteristics of the subjects. Results: Agreement for ever-never use of any pesticide was 96.4% (kappa 0.61). For both 'years used' and 'age at first use' of overall pesticides, high agreement was obtained (ICC: 0.88 and 0.78, respectively), whereas agreement for 'days used' and 'hours used' was relatively low (ICC: 0.42 and 0.66, respectively). The kappa value for the use of personal protective equipment ranged from 0.46 to 0.59, and for hygiene activities from 0.19 to 0.37. Agreement for individual pesticide use ranged widely, with relatively low agreement due to the low response rates. The reliability scores did not significantly vary according to gender, age, education level, type of crop, or years of farming. Conclusions: Our results support that carefully designed, self-reported information on ever-never pesticide use among farmers is reliable. However, the reliability of data on individual pesticide exposure may be unstable due to low response rates and needs to be refined.
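The agreement statistics used in this abstract are straightforward to reproduce. Below is a minimal sketch (illustrative only, not the study's code) of percent agreement and Cohen's kappa for a binary ever-never item across two survey administrations; the response vectors are hypothetical.

```python
# Test-retest agreement for a binary "ever used pesticide" item.
# Illustrative sketch with hypothetical data, not the study's code.

def percent_agreement(r1, r2):
    """Share of respondents giving the same answer at both administrations."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement for two binary response vectors."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    # Expected agreement from the marginal "yes" rates of each administration.
    p1, p2 = sum(r1) / n, sum(r2) / n
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical responses (1 = ever used, 0 = never used).
first  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
second = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(percent_agreement(first, second))
print(cohens_kappa(first, second))
```

The ICC for the continuous items ('years used', 'days used') would additionally require a variance-components decomposition, which is omitted here.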

Analyzing Landscape Ecological Characteristics of Biotope Types in Rural Eco-Villages - Focusing on Eco-Villages of Chonnam Region Designated by Ministry of Environment - (비오톱유형에 의한 농촌생태마을의 경관생태학적 특성분석 -환경부지정 생태마을 중 전남 일부 지역을 대상으로-)

  • Kim, Keun-Ho;Cho, Tong-Buhm;Kim, Mi-Hyang
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.9 no.6
    • /
    • pp.63-77
    • /
    • 2006
  • The research aim is to classify the biotope types of rural eco-villages designated by the Ministry of Environment and to analyze their landscape ecological characteristics. This would provide information on the eco-villages' potential and on specific needs to improve their landscape ecological structure. Two eco-villages designated by the Ministry of Environment, in Yoocheon-ri and Sanduk-ri, were selected, and the landscape ecological metrics used in this study were area, the Shannon diversity index, a shape index, and a distance index. The results are as follows. 1) There were five biotope types in the large-scale classification; 13 biotope types in Sanduk-ri and 9 in Yoocheon-ri in the middle-scale classification; and 31 biotope types in Sanduk-ri and 24 in Yoocheon-ri in the small-scale classification. 2) In the case of area, artificial biotope types, such as artificial forest, agricultural irrigation canal, wet paddy, dry paddy, and residential area, covered more than 80% of the total area, whereas natural biotope types, such as natural forest, river, and reservoir, covered just over 10% of the total area. In detail, orchard (26.69%) was the dominant biotope type in Sanduk-ri, followed by artificial forest (19.10%), while the most abundant biotope type in Yoocheon-ri was artificial forest (49.71%), followed by wet paddy (15.95%). 3) The Shannon diversity index indicated that Sanduk-ri (2.158) had a more heterogeneous landscape than Yoocheon-ri (2.051). 4) In the case of the shape index, road (13.09) had a more complex and irregular shape than either agricultural irrigation canal (3.35) or artificial forest (2.46) in Sanduk-ri. Road (6.52) was also the most irregular biotope shape in Yoocheon-ri, followed by river (5.70) and agricultural irrigation canal (4.78).
5) The Mean Nearest-Neighbour Distance (MND) was smallest for the wet paddy and dry paddy biotope types in both study areas, suggesting that these biotope types were concentrated within the study areas. From these results, this research suggests information to protect and improve the biotopes of eco-villages in landscape ecological terms. To achieve this improvement plan, there should be strong support from the Ministry of Environment and local governments.
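The two headline metrics are simple to compute. The sketch below shows the Shannon diversity index over biotope-type area proportions and a FRAGSTATS-style shape index (standardized against a square; the paper's exact formulation may differ), using hypothetical area and perimeter figures.

```python
import math

def shannon_diversity(areas):
    """H' = -sum(p_i * ln p_i), where p_i is each biotope type's
    share of the total mapped area."""
    total = sum(areas)
    return -sum((a / total) * math.log(a / total) for a in areas if a > 0)

def shape_index(perimeter, area):
    """FRAGSTATS-style patch shape index: 0.25 * P / sqrt(A).
    Equals 1.0 for a square patch; larger values mean more irregular shapes."""
    return 0.25 * perimeter / math.sqrt(area)

# Hypothetical biotope-type areas (ha) for one village.
areas = [26.7, 19.1, 15.9, 10.2, 8.1, 20.0]
print(shannon_diversity(areas))
# A 100 m x 100 m square patch: perimeter 400 m, area 10,000 m^2.
print(shape_index(400, 10_000))
```

A maximally even landscape of n types gives H' = ln(n), which is why the reported values near 2.1 suggest a fairly diverse biotope mosaic.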

Development of Traffic Safety Monitoring Technique by Detection and Analysis of Hazardous Driving Events in V2X Environment (V2X 환경에서 위험운전이벤트 검지 및 분석을 통한 교통안전 모니터링기법 개발)

  • Jeong, Eunbi;Oh, Cheol;Kang, Kyeongpyo;Kang, Younsoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.11 no.6
    • /
    • pp.1-14
    • /
    • 2012
  • Traffic management centers (TMC) collect real-time traffic data from the field and have powerful databases for analyzing, recording, and archiving the data. Recent advanced sensor and communication technologies have been widely applied to intelligent transportation systems (ITS). Regarding sensors, various in-vehicle sensors, in addition to the global positioning system (GPS) receiver, are capable of providing high-resolution data representing vehicle maneuvering. Regarding communication technologies, advanced wireless communication technologies including vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I), generally referred to together as V2X, have been widely used for traffic information and operations. The V2X environment considers the transportation system as a network in which each element, such as the vehicles, infrastructure, and drivers, communicates and reacts systematically to acquire information without any time and/or place restrictions. This study is motivated by the need to exploit the aforementioned cutting-edge technologies for developing smarter transportation services. The proposed system has been implemented in the field and is discussed in this study. It is expected to be used effectively to support the development of various traffic information and control strategies for the purpose of enhancing traffic safety on highways.
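The abstract does not specify the event-detection rules, but a common way to flag hazardous driving events from high-resolution in-vehicle data is to threshold longitudinal acceleration derived from consecutive speed samples. The sketch below is a hypothetical illustration of that idea, not the paper's detection algorithm; the threshold value is an assumption.

```python
def detect_hard_decel(speeds_kmh, dt=1.0, threshold_ms2=-3.0):
    """Flag sample indices where longitudinal acceleration falls below a
    hard-braking threshold (hypothetical -3 m/s^2 default).

    speeds_kmh: vehicle speed samples spaced dt seconds apart.
    Returns the indices at which a hazardous deceleration was observed.
    """
    ms = [v / 3.6 for v in speeds_kmh]  # km/h -> m/s
    events = []
    for i in range(1, len(ms)):
        accel = (ms[i] - ms[i - 1]) / dt
        if accel < threshold_ms2:
            events.append(i)
    return events

# Hypothetical 1 Hz speed trace: a sudden drop from 60 to 40 km/h.
print(detect_hard_decel([60, 60, 40, 38]))
```

In a V2X setting, each equipped vehicle could evaluate such a rule locally and report only the flagged events, keeping the data volume sent to the traffic management center small.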

A Study on the Systematic Integration of WASP5 Water Quality Model with a GIS (GIS와 WASP5 수질모델의 유기적 통합에 관한 연구)

  • 최성규;김계현
    • Spatial Information Research
    • /
    • v.9 no.2
    • /
    • pp.291-307
    • /
    • 2001
  • In today's environmental engineering practice, many technologies such as GIS have been adopted to analyze chemical and biological processes in water bodies and pollutant movements on the land surface. However, the linkage between spatially represented land surface pollutants and the in-stream processes has been relatively weak. This lack of continuity calls for a method to link the spatially-based pollutant source characterization with the water quality modeling. The objective of this thesis was to develop a two-way (forward and backward) link between the ArcView GIS software and the USEPA water quality model, WASP5. This thesis includes a literature review, the determination of the point source and non-point source loadings for WASP5 modeling, and the linkage of a GIS with the WASP5 model. The GIS and model linkage includes pre-processing of the input data within a GIS to provide the necessary information for running the model in the form of external input files. The model results have been post-processed and stored in the GIS database to be reviewed in a user-defined form such as a chart or a table. The interface developed from this study would provide an efficient environment to support easier decision making for water quality management.
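The two-way link described above boils down to serializing GIS attributes into the model's external input files (forward) and parsing model output back into the GIS database (backward). The sketch below illustrates that round trip with a hypothetical comma-separated format; WASP5's actual fixed-format input cards differ.

```python
# Forward/backward link sketch between a GIS layer and a water quality
# model. The "segment_id,value" file format here is hypothetical, used
# only to illustrate the pre- and post-processing steps.

def to_model_input(segments):
    """Serialize GIS-derived (segment_id, pollutant load) pairs into the
    text expected in the model's external input file."""
    return "".join(f"{seg},{load:.3f}\n" for seg, load in segments)

def parse_model_output(text):
    """Parse model output lines 'segment_id,concentration' back into a
    dict for storage in the GIS attribute table."""
    results = {}
    for line in text.strip().splitlines():
        seg, conc = line.split(",")
        results[seg] = float(conc)
    return results

# Forward: loads estimated in the GIS go out to the model.
print(to_model_input([("S1", 12.5), ("S2", 3.75)]))
# Backward: simulated concentrations come back for charting/tabulation.
print(parse_model_output("S1,0.8\nS2,1.2"))
```

Keeping both directions as plain file transformations is what makes the linkage "two-way" without embedding the model inside the GIS itself.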


A Study on the Effects of the Institutional Pressure on the Process of Implementation and Appropriation of System: M-EMRS in Hospital Organization (시스템의 도입과 전유 과정에 영향을 미치는 제도적 압력에 관한 연구: 병원조직의 모바일 전자의무기록 시스템을 대상으로)

  • Lee, Zoon-Ky;Shin, Ho-Kyoung;Choi, Hee-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.95-116
    • /
    • 2009
  • Increasingly, institutional theory has become an important theoretical lens on the decision-making process and IT adoption in academic research. This study used institutional theory as a lens through which we can understand the factors that enable the effective appropriation of advanced information technology. It posits that mimetic, coercive, and normative pressures existing in an institutionalized environment could influence the participation of top managers or decision makers and the involvement of users toward an effective use of IT in their tasks. Since the introduction of IT, organizational members have been using IT in their daily tasks, creating and recreating rules and resources according to their own methods and needs. That is to say, the adaptation process of the IT and its outcomes differ among organizations. Previous studies on the diverse use of IT refer to the appropriation of technology from the social technology view. Users appropriate IT not only through the technology itself, but also in terms of how they use it and how they make social practice of its use. In this study, the concepts of institutional pressure, appropriation, participation of decision makers, and involvement of users toward the appropriation are explored in the context of the appropriation of a mobile electronic medical record system (M-EMRS), particularly in a hospital setting. Based on the conceptual definitions of institutional pressure, participation, and involvement, operational measures are reconstructed. Furthermore, the concept of appropriation is measured through three sub-constructs: consensus on appropriation, faithful appropriation, and attitude of use. Grounded in the theories relevant to the appropriation of IT, we developed a research framework in which the effects of institutional pressure, participation, and involvement on the appropriation of IT are analyzed. Within this theoretical framework, we formulated several hypotheses.
We developed second-order institutional pressure and appropriation constructs. After establishing their validity and reliability, we tested the hypotheses with empirical data from 101 users in 3 hospitals which had adopted and used the M-EMRS. We examined the mediating effect of the participation of decision makers and the involvement of users on the appropriation and empirically validated their relationships. The results show that mimetic, coercive, and normative institutional pressures have an effect on the participation of decision makers and the involvement of users in the appropriation of IT, while the participation of decision makers and the involvement of users have an effect on the appropriation of IT. The results also suggest that institutional pressure and the participation of decision makers influence the involvement of users toward an appropriation of IT. Our results emphasize this mediating path in the appropriation of IT: the higher the degree of participation of decision makers and involvement of users, the more effective the appropriation users will exhibit. These results provide strong support for institution-based variables as predictors of appropriation. These findings also indicate that organizations should focus on the role of the participation of decision makers and the involvement of users for the purpose of effective appropriation; these are the practical implications of our study. The theoretical contribution of this study lies in the integrated model of the effect of institutional pressure on the appropriation of IT. The results are consistent with institutional theory and support previous studies on adaptive structuration theory.

Network Performance Verification for Next-Generation Power Distribution Management System Using FRTU Simulator (FRTU 시뮬레이터를 이용한 차세대 배전지능화시스템 네트워크 성능검증)

  • Yeo, Sang-Uk;Son, Sung-Yong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.6
    • /
    • pp.523-529
    • /
    • 2020
  • A power distribution management system is essential for the efficient management and operation of power distribution networks. The power distribution system is a system that manages the distribution network based on IT, and has been evolving along with the development of the power industry. The current power distribution system is designed to operate at a relatively low network transmission speed based on the independent operation of the main equipment. However, due to distributed resources such as photovoltaic generators or energy storage devices, which have been rapidly increasing in recent years, the operation of future distribution environments is becoming more complex, and various information needs to be collected in real time. In this study, the requirements of the next-generation power distribution system were derived to overcome the limitations of the existing power distribution system, and based on this, the communication network architecture and performance requirements for the distribution system were defined. Because introducing a large-scale system such as a power distribution system for testing takes excessive time and cost, a software-based terminal device simulator was developed to verify the performance of the designed system. Using the simulator, a test environment similar to actual operation was established, and the number of terminal devices was increased up to 1,000. The proposed system was shown to satisfy the requirements to support the functions of the next-generation power distribution system, using less than 10% of the communication network bandwidth.
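The bandwidth-utilization result can be sanity-checked with simple arithmetic: offered load is the number of devices times message size times reporting frequency, compared against link capacity. The figures below are hypothetical, not the paper's actual test parameters.

```python
def utilization(n_devices, msg_bytes, msgs_per_sec, link_bps):
    """Fraction of link capacity consumed by periodic device reports.
    Payload only; framing and protocol overhead are ignored in this sketch."""
    offered_bps = n_devices * msg_bytes * 8 * msgs_per_sec
    return offered_bps / link_bps

# Hypothetical figures: 1,000 FRTUs, 200-byte report once per second,
# 100 Mbps backbone.
u = utilization(1000, 200, 1.0, 100_000_000)
print(f"{u:.1%}")
```

Even under these assumptions the offered load stays well under the 10% threshold the paper reports, which is consistent with periodic telemetry being light relative to a modern backbone.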

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization. This problem has raised the interest of many researchers in managing this huge amount of information; it also requires professionals capable of classifying relevant information, and hence text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, different kinds of techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most attempts have proposed a new algorithm or modified an existing one; this line of research can be said to have already reached certain limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on finding a way to modify the use of data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets most of the time contain noise, or in other words noisy data, which can affect the decisions made by the classifiers built from these data.
In this study, we consider that data from different domains, i.e., heterogeneous data, might have noise characteristics that can be utilized in the classification process. To build a classifier, a machine learning algorithm is applied under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined according to the vocabularies included in the document. If the viewpoints of the learning data and the target data differ, the features may appear different between the two data sets. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing the classifier. Data coming from various kinds of sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms because they are not developed to recognize different types of data representation at one time and to put them together in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning in our study. However, unlabeled data might degrade the performance of the document classifier. We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied to the final decision making.
In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
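The multi-view selection idea behind RSESLA can be illustrated in a much-simplified form: classifiers trained on different views each label the unlabeled documents, and only documents on which the views agree are kept as pseudo-labels for retraining. The sketch below omits the rule-confidence scoring of the actual algorithm and uses hypothetical document IDs and labels.

```python
# Much-simplified illustration of multi-view pseudo-label selection.
# Not the RSESLA algorithm itself: confidence scoring and the ensemble
# of classification models are omitted; only view agreement is kept.

def agree_and_select(preds_view1, preds_view2, unlabeled_ids):
    """Keep only unlabeled documents whose two view-specific predictions
    agree, returning (doc_id, pseudo_label) pairs for retraining."""
    selected = []
    for doc_id in unlabeled_ids:
        if preds_view1[doc_id] == preds_view2[doc_id]:
            selected.append((doc_id, preds_view1[doc_id]))
    return selected

# Hypothetical predictions from two feature views over three documents.
preds_a = {"d1": "econ", "d2": "sport", "d3": "econ"}
preds_b = {"d1": "econ", "d2": "econ",  "d3": "econ"}
print(agree_and_select(preds_a, preds_b, ["d1", "d2", "d3"]))
```

Filtering out the disagreements ("d2" above) is what guards against unlabeled data degrading the classifier, which is the failure mode the abstract identifies for naive semi-supervised learning.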

A Study on the Component Design for Water Network Analysis (상수도 관망해석 컴포넌트 설계에 관한 연구)

  • Kim, Kye-Hyun;Kim, Jun-Chul;Park, Tae-Og
    • Journal of Korea Spatial Information System Society
    • /
    • v.2 no.2 s.4
    • /
    • pp.75-84
    • /
    • 2000
  • GIS has been built for various application fields with the aid of the NGIS project; in particular, numerous municipal governments are building a UIS as part of local governments' informatization. Although there are some differences between municipal governments' business, many things are still in common. So far, individual municipal governments have developed a UIS for their own use, which has led to duplicated development of the UIS. Component technology has been introduced to remove such duplicated efforts, and it enables maximizing the reusability of the UIS already developed. This paper proposes a component design for network analysis of drinking water to calculate the amount of flow and the head loss. The component provides the initial water amount to estimate the network flow and the head loss, thereby supporting decision making such as installation or extension of the pipe network. The process of the component design accompanies business reengineering to support a standardized business work flow. Also, the design of the network analysis component uses algorithms specified with UML. Based on the component design, component development has been progressing, and the network analysis system will follow. In the near future, another component to integrate the network analysis and the business related to drinking water needs to be developed.
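The abstract does not state which head-loss formula the component uses; a common choice for drinking-water distribution networks is the Hazen-Williams equation, sketched below in SI units. The roughness coefficient value is an assumption for illustration.

```python
import math

def hazen_williams_headloss(q_m3s, length_m, diameter_m, c=130.0):
    """Head loss (m) along a pipe via the SI Hazen-Williams formula:
        h = 10.67 * L * Q^1.852 / (C^1.852 * D^4.87)
    where Q is flow (m^3/s), L pipe length (m), D inside diameter (m),
    and C the roughness coefficient (130 assumed here, typical of
    cast iron; the paper's component may use different values)."""
    return 10.67 * length_m * q_m3s ** 1.852 / (c ** 1.852 * diameter_m ** 4.87)

# Hypothetical pipe: 0.1 m^3/s through 1 km of 300 mm pipe.
print(hazen_williams_headloss(0.1, 1000.0, 0.3))
```

A network-analysis component would evaluate this (or an equivalent Darcy-Weisbach relation) per pipe inside an iterative solver such as Hardy Cross to balance flows around loops.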
