• Title/Summary/Keyword: 서버 (server)


Design and Implementation of Quality Broker Architecture to Web Service Selection based on Autonomic Feedback (자율적 피드백 기반 웹 서비스 선정을 위한 품질 브로커 아키텍처의 설계 및 구현)

  • Seo, Young-Jun; Song, Young-Jae
    • The KIPS Transactions:PartD / v.15D no.2 / pp.223-234 / 2008
  • Recently, web services have provided an efficient environment for integrating systems both inside and outside the enterprise, and the number of companies seeking to adopt them is increasing. As web services mature and new business models emerge, both the domestic enterprise environment and the e-business environment are changing accordingly. Because the number of web services offering similar functions keeps growing, methods for finding the service that best fits a user's requirements have become increasingly important. When a service consumer must choose one among several similar web services, quality-of-service (QoS) information is generally needed. The problem, however, is that the advertised QoS information of a web service is not always trustworthy: a provider may publish inaccurate QoS information to attract more customers, or the published information may be out of date. Allowing current customers to rate the QoS they actually receive from a web service, and making these ratings public, gives new customers valuable information for ranking services. This paper proposes an agent-based quality broker architecture that helps consumers find the service providing the optimal quality they require. Because the architecture selects a web service for the consumer dynamically, it can also cope with changes in the consumer's quality requirements. In other words, a consumer can search for the service that best satisfies the desired quality criteria through a UDDI browser connected to the quality broker server. User intervention in determining the quality criterion values of each service is kept to a minimum. In existing selection architectures, objective evaluation was difficult because the consumer's selection among service classes was subjective; the proposed architecture secures objectivity by having an agent monitor binding information at the consumer's location. QoS information that a provider does not publish is thus filled in through QoS sharing driven by feedback from consumer-side agents.
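
The paper above does not include code; the following is a minimal sketch of the consumer-side feedback idea it describes, i.e. aggregating the QoS that consumer-side agents actually observe and ranking similar services by it. The attribute names, equal-weight scoring rule, and sample records are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical consumer-side feedback: each agent reports the QoS it actually
# observed while bound to a service (attribute names are illustrative only).
feedback = [
    {"service": "svcA", "response_time_ms": 120, "availability": 0.99},
    {"service": "svcA", "response_time_ms": 180, "availability": 0.97},
    {"service": "svcB", "response_time_ms": 90,  "availability": 0.95},
]

# Direction of each attribute: -1 means "smaller is better", +1 "larger is better".
DIRECTIONS = {"response_time_ms": -1, "availability": +1}

def aggregate(feedback):
    """Average every reported attribute per service."""
    grouped = defaultdict(lambda: defaultdict(list))
    for report in feedback:
        for attr, value in report.items():
            if attr != "service":
                grouped[report["service"]][attr].append(value)
    return {svc: {a: mean(vs) for a, vs in attrs.items()}
            for svc, attrs in grouped.items()}

def rank(feedback):
    """Rank services by an equal-weight score over min-max normalized attributes."""
    agg = aggregate(feedback)
    scores = defaultdict(float)
    for attr, sign in DIRECTIONS.items():
        values = {svc: attrs[attr] for svc, attrs in agg.items() if attr in attrs}
        if not values:
            continue
        lo, hi = min(values.values()), max(values.values())
        for svc, v in values.items():
            norm = 0.5 if hi == lo else (v - lo) / (hi - lo)
            scores[svc] += norm if sign > 0 else 1.0 - norm
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    print(rank(feedback))  # services ordered by their aggregated observed QoS score
```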

A Study on Usability of Open Source Software for Developing Records System : A Case of ICA AtoM (공개 소프트웨어를 이용한 기록시스템 구축가능성 연구 ICA AtoM을 중심으로)

  • Lee, Bo-Ram; Hwang, Jin-Hyun; Park, Min-Yung; Kim, Hyung-Hee; Choi, Dong-Woon; Choi, Yun-Jin; Yim, Jin-Hee
    • The Korean Journal of Archival Studies / no.39 / pp.193-228 / 2014
  • In recent years, interest in private archives large and small, as well as in the management of public records, has been growing. Dedicated archives take various forms, but because of shortages of staff and budget, and the frequent absence of records management professionals, it is not easy for them to maintain their records systematically. Demand for systems has continued to rise, yet the budget and the professionals needed to meet it are lacking. As a breakthrough that eases this burden on dedicated archives, this study introduces the trends and significance of open source recording systems and examines the functions of AtoM in detail. AtoM can be built openly using only a web server and a database server. It can be used free of charge without restrictions, is not tied to a particular application or operating system, and is convenient to install and operate. It is also highly compatible and scalable, which makes AtoM convenient for private archives suffering from shortages of staff and budget. Because it is strong in data management, interoperability, search, sharing, and reuse, it could in future support the use of records through networks linking institutional and private archives. Further discussion is still needed on topics such as enhancing exhibition services through integration with Omeka and long-term preservation through Archivematica. As records management expands from the public sector into the private sector, open source software can play an important role in building a balanced recording system landscape. Close collaboration between academia and the field, together with continued user studies of open source recording systems, should follow, and we hope that cooperation and sharing among private archives will be realized.

The Effectiveness of Fiscal Policies for R&D Investment (R&D 투자 촉진을 위한 재정지원정책의 효과분석)

  • Song, Jong-Guk; Kim, Hyuk-Joon
    • Journal of Technology Innovation / v.17 no.1 / pp.1-48 / 2009
  • Through an analysis of recent statistics on firms' R&D, we have found several symptoms suggesting that R&D fiscal incentives might not be working as intended. First, the growth rate of private-sector R&D investment has slowed over the past decade: the average real growth rate of R&D investment was 7.1% from 1998 to 2005, compared with 13.9% from 1980 to 1997. Second, the share of R&D investment by SMEs fell from 29% in 2001 to 21% in 2005, even though the tax credit for SMEs is more generous than that for large firms. Third, the R&D expenditure of large firms (excluding the three leading firms) has not increased since the late 1990s. We therefore need evidence on whether fiscal incentives are effective in increasing firms' R&D investment. For the econometric analysis we use firm-level unbalanced panel data for four years (2002 to 2005) drawn from the MOST database compiled from the annual survey, "Report on the Survey of Research and Development in Science and Technology". We use a fixed-effects model (Hausman tests accept the fixed-effects model at the 1% significance level) and estimate it separately for all firms, large firms, and SMEs. The analysis yields the following results. For large firms: (i) R&D investment responds elastically to sales volume (1.20); (ii) government R&D subsidies induce R&D investment only weakly (0.03); (iii) the tax price elasticity is almost unity (-0.99); (iv) for large firms, tax incentives are therefore more effective than R&D subsidies. For SMEs: (i) sales volume increases R&D investment only weakly (0.043); (ii) government R&D subsidies crowd out SME R&D investment, but not seriously (-0.0079); (iii) the tax price elasticity is very inelastic (-0.054). For comparison, Koga (2003) reports a similar tax price elasticity for Japanese firms (-1.0036), Hall (1992) finds a unit tax price elasticity, and Bloom et al. (2002) report -0.354 to -0.124 in the short run. Based on these results, we recommend that government R&D subsidies focus on areas such as basic research and the public sector (defense, energy, health, etc.) that do not overlap with private R&D, and that for SMEs the government focus on building R&D infrastructure. To promote R&D through tax policy, the tax incentive scheme for large firms' R&D investment should be strengthened; we recommend that the tax credit for large firms be extended to the total volume of R&D investment.
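
The abstract reports elasticities from a fixed-effects panel model but does not reproduce the estimating equation. The following is a plausible log-linear specification consistent with those elasticities; the variable names and functional form are assumptions, not taken from the paper.

```latex
% Assumed log-linear fixed-effects specification (illustrative only):
%   RD_{it}: R&D investment of firm i in year t, S_{it}: sales,
%   G_{it}: government R&D subsidy, P_{it}: tax price of R&D,
%   \alpha_i: firm fixed effect, \lambda_t: year effect.
\begin{equation}
  \ln RD_{it} = \alpha_i + \lambda_t
    + \beta_1 \ln S_{it}
    + \beta_2 \ln G_{it}
    + \beta_3 \ln P_{it}
    + \varepsilon_{it}
\end{equation}
% In this form the reported coefficients are elasticities, e.g. for large firms
% \beta_1 \approx 1.20, \beta_2 \approx 0.03, \beta_3 \approx -0.99.
```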


Records Management and Archives in Korea : Its Development and Prospects (한국 기록관리행정의 변천과 전망)

  • Nam, Hyo-Chai
    • Journal of Korean Society of Archives and Records Management / v.1 no.1 / pp.19-35 / 2001
  • After almost a century of discontinuity in the archival tradition of the Chosun dynasty, Korea entered a new era of records and archives management by legislating and implementing a basic law (the Records and Archives Management of Public Agencies Act of 1999). The Annals of the Chosun dynasty recorded the major historical facts of five hundred years of national affairs. The Annals are a major accomplishment in human history and rare in the world; they were possible because they were composed of collected, selected, and compiled records of primary sources written by generations of historians. Since important public records need to be preserved in their original form in modern archives, Korea had to develop a modern archival system to appraise and select important national records for preservation. However, the colonization of Korea deprived us of the opportunity to do so, and our fine archival tradition was not carried on. A centralized archival system began to develop with the establishment of GARS under the Ministry of Government Administration in 1969. GARS built a modern repository in Pusan in 1984, succeeding to the tradition of the history archives of the Chosun dynasty. In 1998, GARS moved its headquarters to the Taejon Government Complex and acquired state-of-the-art facilities for preserving audio-visual archives. From 1996, GARS introduced an automated archival management system to replace manual registration and management, complementing preservation microfilming. Digitization of the holdings was the key project for providing users with digital images of archives; to do this, GARS purchased new computer and server systems and developed application software. In parallel, GARS drastically renovated its staffing toward a higher level of professionalization by recruiting more archivists with backgrounds in history and library science, along with conservators and computer system operators. The new archival law has been in effect since January 1, 2000, and has brought the following changes to records and archival administration in Korea. First, the law covers the records and archives of all public agencies, including the legislature, the judiciary, the administration, the constitutional institutions, the Army, Navy, and Air Force, and the National Intelligence Service, so that a nationwide, unified records and archives management system is now available. Second, public archives and records centers are to be established according to the level of the agency: a central archives at the national level, special archives for the National Assembly and the judiciary, local government archives for metropolitan cities and provinces, and records centers or special records centers for administrative agencies. A records manager is responsible for records management in each administrative division. Third, records in public agencies are registered in a computer system as they are produced, so they are traceable and can easily be searched and retrieved over the internet or a computer network. Fourth, qualified records managers and archivists professionally trained in records management and archival science must be assigned, guaranteeing the professional management of records and archives. Fifth, the illegal treatment of public records and archives constitutes a punishable crime.
In the future, public records and archival management will develop along with the Korean government's 'Electronic Government Project.' The following changes are in prospect. First, public agencies will digitize paper records, audio-visual records, and publications as well as electronic documents, promoting administrative efficiency and productivity. Second, the National Assembly has already established its Special Archives; the judiciary and the National Intelligence Service will follow, and more archives will be established at the city and provincial levels. Third, the more our society develops into a knowledge-based information society, the more records management will become one of the important functions of national government. As more universities, academic associations, and civil society organizations participate in promoting archival awareness and in establishing archival science, and as more people come to realize the importance of records and archives management, up to the level of a national public campaign, records and archival management in Korea will develop in ways significantly different from present practice.

System Development for Measuring Group Engagement in the Art Center (공연장에서 다중 몰입도 측정을 위한 시스템 개발)

  • Ryu, Joon Mo; Choi, Il Young; Choi, Lee Kwon; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.45-58 / 2014
  • Korean cultural content is spreading worldwide as the Korean Wave sweeps across the globe, and such content stands at the center of that wave. Each country is working to improve its national brand and create high added value through its culture industry, and performing content is an important source of arousal in the entertainment industry. Raising confidence in a product and creating a positive public attitude are key goals for advertisers, and cultural content is in the same situation: if content is trusted by its audience, people will pass information on to those around them by word of mouth. Accordingly, many researchers have studied how to measure a person's arousal through statistical surveys, physiological responses, body movement, and facial expression. First, statistical surveys cannot measure each person's arousal in real time, and reliable answers are hard to obtain after the audience has finished watching the content. Second, physiological responses require sensors to be set up on each person's chair or body, and the large amount of data streaming from the sensors is difficult to handle in real time. Third, body movement is easy to capture with a camera, but it is difficult to set up the experimental conditions, to measure body language, and to interpret its meaning. Finally, many researchers study facial expression, eye tracking, and face pose. Most previous studies of arousal and interest are limited to the reaction of a single person and are difficult to apply to multiple audience members: they require particular conditions, such as a well-lit room, and are restricted to one person in a special laboratory environment. In addition, arousal needs to be measured during the content itself, which is hard to define, and it is not easy to collect audience reactions immediately. Since many audience members watch a performance in a theater together, we propose a system that measures the reactions of a multi-person audience in real time during a performance. We use difference-image analysis for the audience as a whole; because this method is weak in dark environments, an IR camera is used to capture images of the dark area during recording. In addition, we present a Multi-Audience Engagement Index (MAEI), calculated by an algorithm whose inputs are the sound level, audience movement, and eye-tracking values. The algorithm estimates audience arousal from the mobile survey, the sound level, audience reactions, and eye tracking; to improve and verify the accuracy of the MAEI, we compare it with the mobile survey. The result is then sent to a reporting system and delivered to interested parties. Mobile surveys are easy and fast, minimize visitors' discomfort, and can provide additional information. The mobile application communicates with the database and stores real-time information on visitors' attitudes toward the content, and the database can present a different survey each time based on the information already provided. Example survey items include: impressive scene, satisfied, touched, interested, did not pay attention, and so on. The proposed system consists of three parts: an external device, a server, and an internal device. The external device records the multi-person audience in the dark with an IR camera and captures the sound signal; the mobile survey application also sends its data to the ERD server DB.
The server stores the content data, such as per-scene weight values and the group audience weight index, along with the camera control program and the algorithm, and calculates the Multi-Audience Engagement Index. The internal device presents the MAEI through a web UI, in print, and on a field monitor. The system was test-operated by the Mogencelab in the DMC display exhibition hall located in Sangam-dong, Mapo-gu, Seoul, where visitor data are still being collected daily. If the audience arousal factors identified with this system prove valid, it will be very useful for creating content.
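
The abstract names the inputs to the MAEI (difference-image motion from IR frames, sound level, eye tracking, mobile survey) but not the algorithm itself. The following is a minimal sketch of combining two of those signals into a single normalized index; the weights, function names, and stand-in data are assumptions, not the paper's method.

```python
import numpy as np

def motion_score(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Mean absolute pixel difference between consecutive grayscale IR frames,
    scaled to [0, 1] (difference-image analysis of audience movement)."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return float(diff.mean() / 255.0)

def sound_score(samples: np.ndarray) -> float:
    """RMS level of an audio buffer with samples in [-1, 1], clipped to [0, 1]."""
    return float(min(1.0, np.sqrt(np.mean(samples ** 2))))

def engagement_index(prev_frame, curr_frame, audio, w_motion=0.6, w_sound=0.4):
    """Weighted combination of the normalized signals (weights are assumed)."""
    return w_motion * motion_score(prev_frame, curr_frame) + w_sound * sound_score(audio)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)  # stand-in IR frames
    f1 = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
    audio = rng.uniform(-0.2, 0.2, size=2048)                   # stand-in audio buffer
    print(round(engagement_index(f0, f1, audio), 3))
```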

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung; Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead the shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, related computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is closely related to and combined with various neighboring research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand its research trends more comprehensively. In this study, we collect bibliographic and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends, changes in the citation network among papers, and the co-occurrence relationships of keywords using social network analysis measures. Through the analysis, we can identify the relationships and connections among research topics in cloud computing related areas and highlight potential new research topics. In addition, we visualize the dynamic changes of cloud computing research topics using a proposed "research trend map." A research trend map visualizes the positions of research topics in a two-dimensional space whose dimensions are keyword frequency (X-axis) and the rate of increase in the degree centrality of the keyword (Y-axis). Based on these two dimensions, the space is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is a promising technology area; and the area where both are low is a declining technology area. Cloud computing research trend maps built this way make it possible to grasp the main research trends in cloud computing easily and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank highest on the PageRank measure. From the keyword analysis, cloud computing and grid computing showed high centrality in 2009; keywords for main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010-2011; and in 2012 security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. From the annual research trend maps, we verified that security is located in the promising area, that virtualization has moved from the promising area to the growth area, and that grid computing and distributed systems have moved to the declining area.
These results indicate that distributed systems and grid computing received a great deal of attention as similar computing paradigms in the early stage of cloud computing research, a period focused on understanding and investigating cloud computing as an emergent technology and linking it to established computing concepts. After that stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in their movement from the promising area to the growth area in the research trend maps. Moreover, this study reveals that current research in cloud computing has rapidly shifted from technical issues toward application issues such as SLAs (Service Level Agreements).
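
The quadrant rule behind the research trend map is spelled out in the abstract (X: keyword frequency, Y: rate of increase in degree centrality), but the cut points are not. The following is a minimal sketch of that classification, assuming median splits and illustrative sample numbers that are not taken from the paper.

```python
from statistics import median

def classify(keywords):
    """keywords: {name: (frequency, centrality_growth_rate)} -> {name: area}."""
    freq_cut = median(f for f, _ in keywords.values())      # assumed cut point
    growth_cut = median(g for _, g in keywords.values())    # assumed cut point
    areas = {}
    for name, (freq, growth) in keywords.items():
        if freq >= freq_cut and growth >= growth_cut:
            areas[name] = "growth"       # high frequency, rising centrality
        elif freq >= freq_cut:
            areas[name] = "maturation"   # high frequency, stagnant centrality
        elif growth >= growth_cut:
            areas[name] = "promising"    # low frequency, rising centrality
        else:
            areas[name] = "decline"      # low frequency, stagnant centrality
    return areas

if __name__ == "__main__":
    sample = {  # illustrative numbers only, not taken from the paper
        "cloud computing": (100, 0.05),
        "virtualization": (60, 0.40),
        "security": (20, 0.55),
        "grid computing": (15, -0.10),
    }
    print(classify(sample))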

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for handling the massive amount of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate log processing system is needed to gather, store, categorize, and analyze the log data generated while that business is processed. However, it is difficult in existing computing environments to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze it. Thus, in this study we use cloud computing technology to build a cloud-based log processing system for unstructured log data that are hard to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage periods or a rapid increase in log volume. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can recover automatically and continue operating after a malfunction. Finally, by establishing a distributed database with the NoSQL-based MongoDB, the proposed system can process unstructured log data effectively. Relational databases such as MySQL have rigid schemas that are ill-suited to unstructured log data, and such strict schemas make it hard to add nodes and redistribute stored data across them when the amount of data increases rapidly. NoSQL databases do not provide the complex computations that relational databases offer, but they can easily be scaled out through node dispersion as data volume grows; they are non-relational databases with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented database with a schema-free structure. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it supports flexible node expansion when the amount of data grows rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated across each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes it to the MongoDB module and the MySQL module. The log graph generator module takes the analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module, organized by analysis time and log type, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions; the aggregated log data in the MongoDB module are also processed in a parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log insert and query performance against a log processing system that uses only MySQL demonstrates the superiority of the proposed system. Moreover, an optimal chunk size is identified through a MongoDB insert performance evaluation over various chunk sizes.
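
The module boundaries above come from the abstract, but the routing logic inside the log collector is not specified. The following is a minimal sketch of one plausible routing step: real-time log types go to the MySQL path, everything else is inserted into MongoDB as-is. The type names, collection names, connection settings, and the MySQL placeholder are assumptions, not taken from the paper.

```python
from pymongo import MongoClient

REALTIME_TYPES = {"transaction", "error"}         # assumed: served in real time via MySQL

mongo = MongoClient("mongodb://localhost:27017")  # assumed local test instance
mongo_logs = mongo["bank_logs"]["raw"]

def store_in_mysql(record: dict) -> None:
    """Placeholder for the MySQL path (e.g. an INSERT via a MySQL driver)."""
    print("MySQL <-", record)

def collect(record: dict) -> None:
    """Route a single log record by its 'type' field, as the collector module would."""
    if record.get("type") in REALTIME_TYPES:
        store_in_mysql(record)
    else:
        mongo_logs.insert_one(record)  # unstructured records go to MongoDB unchanged

if __name__ == "__main__":
    collect({"type": "transaction", "branch": "A01", "latency_ms": 42})
    collect({"type": "web_access", "path": "/login", "status": 200})
```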