• Title/Summary/Keyword: Software service


A Study on the Usability of Open Source Software for Developing Records Systems: A Case of ICA AtoM (공개 소프트웨어를 이용한 기록시스템 구축가능성 연구 ICA AtoM을 중심으로)

  • Lee, Bo-Ram;Hwang, Jin-Hyun;Park, Min-Yung;Kim, Hyung-Hee;Choi, Dong-Woon;Choi, Yun-Jin;Yim, Jin-Hee
    • The Korean Journal of Archival Studies / no.39 / pp.193-228 / 2014
  • In recent years, interest in private archives large and small has been growing alongside the management of public records. Dedicated archives take various forms, but shortages of personnel and budget, and the absence of records-management professionals, make it difficult to maintain records systematically. Demand for records systems has continued to rise, yet the budget and professionals needed to meet it are lacking. As a way to ease this burden on dedicated archives, this study introduces the trends and significance of open-source recording systems and examines the functions of AtoM in detail. AtoM is open-source software that can be installed wherever a web server and a database server are available. It can be used free of charge and without restrictions, is not tied to any particular application or operating system, and is convenient to install and operate. It is also highly compatible and scalable, which makes AtoM convenient for private archives experiencing shortages of personnel and budget. Because it offers excellent interoperability, search, sharing, and use in terms of data management, it also favors the future use of records through networks connecting institutional and private archives. Further discussion is needed on enhancing exhibition services through integration with Omeka and on long-term preservation through Archivematica. As records management expands from the public sector into the private sphere, open-source software can play an important role in balancing recording systems. The efforts of academia and the field should continue, including close collaboration on user studies of open-source recording systems. Furthermore, we hope that cooperation and sharing among private archives will be realized.

Design and Implementation of Transmission Scheduler for Terrestrial UHD Contents (지상파 UHD 콘텐츠 전송 스케줄러 설계 및 구현)

  • Paik, Jong-Ho;Seo, Minjae;Yu, Kyung-A
    • Journal of Broadcast Engineering / v.24 no.1 / pp.118-131 / 2019
  • Providing large-capacity 8K UHD content over terrestrial broadcasting faces problems such as limited bandwidth. To solve them, UHD content transmission technology has been actively studied, and an 8K UHD broadcasting system using both the terrestrial broadcasting network and a communication network has been proposed. The proposed technique addresses the limited bandwidth of the terrestrial network by segmenting 8K UHD content into hierarchical layers and transmitting them over heterogeneous networks. The base layer corresponding to FHD and the enhancement-layer data for 4K UHD are sent over the terrestrial broadcasting network, while the additional enhancement-layer data for 8K UHD is sent over the communication network. In this way, users can receive up to 4K UHD broadcasting through terrestrial channels alone, and up to 8K UHD with the additional communication network. However, transmitting 4K UHD content within the bit rate allocated to domestic terrestrial UHD broadcasting requires a high compression rate, so some image degradation inevitably occurs. Given the nature of UHD content, video quality must take top priority over other factors and should be guaranteed even within the limited bit rate. This requires packet scheduling at the content generator in the broadcasting system: since the multiplexer sends out packets in the order received from the content generator, the transmission time and rate from the content generator to the multiplexer must be constant and accurate. We therefore propose a variable transmission scheduler between the content generator and the multiplexer to guarantee a consistent level of UHD content quality.
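
A rough illustration of the pacing requirement described above (not the authors' scheduler): the following Python sketch emits packets toward a multiplexer at a constant bit rate, so the multiplexer sees steady, predictable arrivals. The packet size, target rate, and send callback are hypothetical.

```python
import time

def paced_send(packets, target_bps, send):
    """Emit packets at a constant bit rate so the downstream
    multiplexer sees a steady, predictable arrival schedule."""
    next_deadline = time.monotonic()
    for pkt in packets:
        # Time slot this packet occupies at the target rate.
        next_deadline += len(pkt) * 8 / target_bps
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)   # wait for this packet's slot
        send(pkt)               # hand the packet to the multiplexer

# Example: 1316-byte TS-style chunks paced at 20 Mbps (illustrative values).
if __name__ == "__main__":
    dummy = [bytes(1316)] * 100
    paced_send(dummy, 20_000_000, lambda p: None)
```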

Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.177-189 / 2013
  • In smart education, digital textbooks play an important role as the face-to-face medium for learners. Standardizing digital textbooks will promote their industrialization for content providers and distributors as well as for learners and instructors. This study explores how to standardize digital textbooks around three objectives. (1) Digital textbooks should serve as media for blended learning that supports both on- and off-line classes, should run on a common EPUB viewer without a special dedicated viewer, and should utilize the existing e-learning frameworks for learning contents and learning management. The reason to adopt EPUB as the standard for digital textbooks is that it avoids specifying yet another book format and can take advantage of the industrial base of EPUB-standard content and distribution structures. (2) Digital textbooks should provide a low-cost open-market service using currently available standard open software. (3) To give students appropriate learning feedback, digital textbooks should provide a foundation that accumulates and manages all learning activity information on a standard infrastructure for educational big data processing. In this study, a digital textbook for a smart education environment is called an open digital textbook. The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smart phones, and PCs; (2) a digital textbook platform that displays and runs digital content on those terminals; (3) a learning contents repository in the cloud that maintains accredited learning content; (4) an app store through which content companies provide and distribute secondary learning contents and learning tools; and (5) an LMS as a learning support/management tool that classroom teachers use to create instructional materials. In addition, all of the hardware and software implementing the smart education service should be located in the cloud to take advantage of cloud computing for efficient management and reduced expense. The open digital textbook of smart education can be considered an e-book-style interface to the LMS for learners. Representing text, images, audio, video, equations, and so on is a basic function, but painting, writing, and problem solving are beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication through the open digital textbook is also required. To represent student demographics, portfolio information, and class information, the standards used in e-learning are desirable. To pass learner tracking information about learner activities to the LMS (Learning Management System), the open digital textbook must have a recording function and a function for communicating with the LMS. DRM protects various copyrights; currently, e-book DRM is controlled by the corresponding book viewer, so if the open digital textbook accommodates the DRM schemes used by the various e-book viewers, redundant implementations can be avoided. Security/privacy functions are required to protect information about study and instruction from third parties. UDL (Universal Design for Learning) is a support function for learners with disabilities who have difficulty with learning courses.
The open digital textbook, based on the e-book standard EPUB 3.0, must (1) record learning activity log information and (2) communicate with a server to support learning activities. While these recording and communication functions, which are not determined in the current standards, can be implemented in JavaScript and used in current EPUB 3.0 viewers, a strategy is needed for proposing them as part of a next-generation e-book standard or a special standard (EPUB 3.0 for education). Future research will implement an open-source program based on the proposed open digital textbook standard and present new educational services including big data analysis.
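
As a sketch of the server side of the recording/communication functions (the endpoint path, event fields, and in-memory store are all hypothetical; the paper leaves the protocol open), a viewer's JavaScript could POST learning-activity events to a small service like this:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
activity_log = []  # stand-in for the LMS-side learning-record store

@app.post("/api/learning-activity")
def record_activity():
    # The EPUB 3.0 viewer's JavaScript would POST events such as
    # {"learner": "s001", "textbook": "math-7", "verb": "highlighted",
    #  "locator": "epubcfi(/6/4!/4/10)", "timestamp": "..."}
    event = request.get_json(force=True)
    activity_log.append(event)
    return jsonify(status="recorded", count=len(activity_log))

if __name__ == "__main__":
    app.run(port=8080)
```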

Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services / v.16 no.1 / pp.47-55 / 2015
  • Recently, hospital information system environments have trended toward combining various smart technologies, and smart devices such as smart phones and tablet PCs are utilized in medical information systems. These environments consist of various applications executing on heterogeneous sensors, devices, systems, and networks. In such hospital information systems, applying security services through traditional access control methods causes problems. Most existing security systems use an access control list structure, permitting only the accesses defined in an access control matrix of client names and service object method names. The major problem with this static approach is that it cannot adapt quickly to changed situations; new security mechanisms are needed that are more flexible and can easily adapt to environments with very different security requirements. Research is also needed to address changes in the medical treatment of patients. In this paper, we suggest a dynamic access approach for medical information systems in smart mobile environments, focusing on how medical information systems are accessed through dynamic access control methods based on the hospital's existing information system environment. The physical environment consists of mobile X-ray imaging devices, dedicated mobile and general smart devices, PACS, an EMR server, and an authorization server. The software environment for the synchronization and monitoring services on the mobile X-ray imaging equipment was developed on the .NET Framework under Windows 7, and for the dedicated smart device applications we implemented the dynamic access services through JSP and the Java SDK on Android. Medical information exchanged among PACS, the mobile X-ray imaging devices, and the dedicated smart devices follows the DICOM medical image standard, and EMR information follows HL7. To provide the dynamic access control service, we classify the patient's context according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature. Event trace diagrams cover two cases: the general situation and the emergency situation. We designed authentication-based dynamic access to medical care information, where the authentication information contains ID/password, roles, position, working hours, and emergency certification codes for emergency patients. In general situations, the dynamic access control method grants access to medical information according to the values of the authentication information; in an emergency, access is granted through an emergency code without the usual authentication information. We also constructed an integrated medical information database schema consisting of medical information, patient, medical staff, and medical image information according to medical information standards. Finally, we show the usefulness of the dynamic access application service on smart devices through execution results of the proposed system in patient contexts such as general and emergency situations. In particular, the proposed system provides effective medical information services on smart devices in emergency situations through the dynamic access control methods. We expect the proposed system to be useful for u-hospital information systems and services.
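
A minimal sketch of the dynamic access decision, assuming illustrative vital-sign thresholds and credential fields; the paper's actual rules and cutoffs are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Credentials:
    user_id: str
    role: str                        # e.g. "physician", "radiographer"
    on_duty: bool                    # within working hours
    emergency_code: str | None = None

def patient_context(vitals: dict) -> str:
    """Classify the patient as 'general' or 'emergency' from bio-information.
    Thresholds are placeholders, not the paper's values."""
    if vitals.get("spo2", 100) < 90 or vitals.get("heart_rate", 70) > 140:
        return "emergency"
    return "general"

def may_access(cred: Credentials, vitals: dict) -> bool:
    context = patient_context(vitals)
    if context == "emergency":
        # Emergency: an emergency certification code alone grants access.
        return cred.emergency_code is not None
    # General: role and working hours decide.
    return cred.role in {"physician", "radiographer"} and cred.on_duty

print(may_access(Credentials("u01", "physician", True), {"spo2": 97}))        # True
print(may_access(Credentials("u02", "nurse", False, "EMG-7"), {"spo2": 85}))  # True
```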

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.107-118 / 2016
  • The advent of 5G mobile communications, expected in 2020, will enable many services such as the Internet of Things (IoT) and vehicle-to-infrastructure/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, high reliability and delay sensitivity combined with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate for cases such as sporting events or emergencies; the second covers support for e-Health, car reliability, and the like; the third is related to VR games with delay sensitivity and real-time techniques. Recently, these groups have been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, being standardized by the ONF, basically refers to a structure that separates control-plane signaling from data-plane packets. One of the best examples demanding low latency and high reliability is an intelligent traffic system (ITS) using V2X: because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time. This is a typical case of high delay sensitivity, and 5G must support the high reliability and delay sensitivity requirements of V2X for traffic control; for these reasons, V2X is a major delay-critical application. V2X (vehicle-to-infrastructure/vehicle/nomadic) covers all communication methods applicable to roads and vehicles and refers to a connected, networked vehicle. It comprises three kinds of communication: between a vehicle and infrastructure (V2I), between vehicles (V2V), and between a vehicle and nomadic devices (V2N), with more fields to be added in the future. Because SDN is under consideration as the next-generation network architecture, its architecture is significant; however, the centralized architecture of SDN can be unfavorable for delay-sensitive services, since a central controller must communicate with many nodes and provide the processing power. Therefore, for emergency V2X communications, the delay-related control functions require a tree-like supporting structure. In such a scenario, the architecture of the network that processes the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN domain for processing this information is needed. This study examined the SDN architecture under the worst-case V2X emergency delay requirements of a 5G network and performed a system-level simulation over car speed, cell radius, and cell tier to derive the range of cells for information transfer in the SDN network.
In the simulation, because 5G provides a sufficiently high data rate, the neighboring-vehicle information delivered to the car was assumed to be error-free. The 5G small cell was assumed to have a radius of 50-100 m, and the maximum vehicle speed considered was 30-200 km/h, in order to examine the network architecture that minimizes delay.
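
To make the cell-tier trade-off concrete, here is a small Python sketch under stated assumptions (a straight chord through the cell as the worst-case dwell, a fixed per-hop control delay, and an up-and-back hop count per tier; none of these values come from the paper). It estimates the largest SDN domain, in cell tiers, that still meets an emergency-message delay budget.

```python
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Worst-case time a vehicle spends inside one small cell."""
    speed_ms = speed_kmh / 3.6
    return 2 * cell_radius_m / speed_ms  # straight chord through the cell

def max_tier(cell_radius_m, speed_kmh, per_hop_delay_s, budget_s):
    """Largest SDN domain (in cell tiers) whose control round-trip still
    fits the emergency-message budget before the car leaves the cell."""
    limit = min(budget_s, dwell_time_s(cell_radius_m, speed_kmh))
    tier = 0
    while (tier + 1) * 2 * per_hop_delay_s <= limit:  # up-and-back per tier
        tier += 1
    return tier

# Sweep the paper's parameter ranges with illustrative delay figures.
for r in (50, 100):
    for v in (30, 200):
        print(f"{r} m, {v} km/h -> {max_tier(r, v, 0.001, 0.01)} tiers")
```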

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An adaptive clustering-based collaborative filtering technique is proposed to solve the fundamental problems of collaborative filtering: the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations from the predicted preference of a user for a particular item, using similar-item and similar-user subsets composed from users' preferences for items. Consequently, if the density of the user-preference matrix is low, the reliability of the recommender system decreases rapidly and composing similar-item and similar-user subsets becomes harder. In addition, as the scale of the service increases, the time needed to create those subsets grows geometrically, and the response time of the recommender system increases. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts its conditions to the model and adopts concepts from content-based filtering. The technique consists of four main methods. First, the items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. This economizes the run-time for creating similar-item or similar-user subsets, makes the recommender system more reliable than one using only user-preference information for that purpose, and partially solves the cold-start problem. Second, recommendations are made using the pre-composed item and user clusters and the inter-cluster preferences. In this phase, a list of items is built for a user by examining the item clusters in descending order of the inter-cluster preference of the user's cluster, then selecting and ranking items according to predicted or recorded user-preference information. With this method, the model-building phase bears the highest load of the recommender system, minimizing the run-time load, so a large-scale recommender system can run highly reliable collaborative filtering without the scalability problem. Third, missing user-preference information is predicted using the item and user clusters, mitigating the low density of the user-preference matrix. Existing studies used either item-based or user-based prediction; this paper improves on Hao Ji's idea of using both, combining the two predicted values under the conditions of the recommendation model to improve the reliability of the recommendation service. Predicting user preferences from the item or user clusters also reduces prediction time and allows missing preferences to be predicted at run-time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback: normalized feedback is applied to the item and user feature vectors, as in the sketch after this abstract.
This mitigates the problems that come with the content-based filtering concepts, namely feature vectors built from user profiles and item properties, whose weakness is the difficulty of quantifying qualitative features of items and users. The elements of the user and item feature vectors are therefore matched one to one, and when feedback on a particular item is obtained from a user, it is applied to the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques on two measures: MAE (mean absolute error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommender system; by response time, it was found suitable for a large-scale recommender system. This paper thus suggests an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, but it has some limitations: because the technique focused on reducing time complexity, a large improvement in reliability was not expected. The next step will be to improve the technique with rule-based filtering.
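
A minimal sketch of the cluster-level recommendation idea, assuming synthetic ratings and feature vectors and scikit-learn's KMeans; the paper's feedback-learning step (the fourth method) is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Sparse synthetic ratings: 0 means "not rated".
R = rng.choice([0, 1, 2, 3, 4, 5], size=(200, 80),
               p=[.6, .08, .08, .08, .08, .08])
user_f = rng.normal(size=(200, 10))  # user feature vectors
item_f = rng.normal(size=(80, 10))   # item feature vectors

uc = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(user_f)
ic = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(item_f)

# Inter-cluster preference: mean observed rating between each cluster pair.
P = np.zeros((8, 6))
for u_id in range(8):
    for i_id in range(6):
        block = R[np.ix_(uc == u_id, ic == i_id)]
        rated = block[block > 0]
        P[u_id, i_id] = rated.mean() if rated.size else 0.0

def recommend(user: int, top_n: int = 5):
    """Rank item clusters by inter-cluster preference, then pick
    unrated items inside them in that order."""
    order = np.argsort(-P[uc[user]])  # best item clusters first
    items = [i for c in order for i in np.where(ic == c)[0]
             if R[user, i] == 0]
    return items[:top_n]

print(recommend(0))
```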

A Study of Factors Associated with Software Developers' Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.191-204 / 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio for software (SW) developers in South Korea was 25% in the 2012 fiscal year, and for highly qualified SW developers it reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the development of the national IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers, which has made it easier to find beginning-level developers. However, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming a SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. This study therefore surveyed the job continuity intentions of SW developers and analyzed the associated factors. We carried out a survey from September to October 2014, targeting 130 SW developers working in IT industries in South Korea, and gathered demographic information and characteristics of the respondents, work environments in the SW industry, and the social position of SW developers. We then analyzed the data with regression analysis and a decision tree method, two widely used and mutually complementary data mining techniques with explanatory power. We first performed linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' up to which one can work as a SW developer was the most significant factor. We suppose the major cause of this is the structural problem of the IT industry in South Korea, which pushes SW developers from development into management as they are promoted. The 'motivation' to become a SW developer and the 'personality (introverted tendency)' of a developer were also highly important factors. Next, the decision tree method, using the well-known C4.5 algorithm, was performed to extract the characteristics of highly and lowly motivated developers. The results showed that 'motivation', 'personality', and 'expected age' were again important factors, similar to the regression analysis. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity: a person with a high ability to learn new technology tends to work as a SW developer for a longer period. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors, while the 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method.
In this research, we measured the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the associated factors. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts, and they can help in building SW developer fostering policy and in solving the problem of unfilled SW developer recruitment in South Korea.
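
For readers who want to reproduce the style of analysis, a small Python sketch follows, on synthetic data; scikit-learn's CART tree with the entropy criterion is used as a stand-in for C4.5, which scikit-learn does not implement, and the variables and effect sizes are illustrative, not the survey's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 130  # matches the survey size; the data here is synthetic
X = np.column_stack([
    rng.normal(45, 8, n),   # expected age to keep developing
    rng.uniform(1, 5, n),   # motivation
    rng.uniform(1, 5, n),   # introverted tendency
    rng.uniform(1, 5, n),   # ability to learn new technology
])
# Synthetic continuity intention driven mainly by expected age and motivation.
y = 0.08 * X[:, 0] + 0.6 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.5, n)

reg = LinearRegression().fit(X, y)
print("regression coefficients:", reg.coef_.round(3))

tree = DecisionTreeClassifier(criterion="entropy", max_depth=3)
tree.fit(X, y > y.mean())  # classify high vs. low continuity intention
print(export_text(tree, feature_names=["exp_age", "motivation",
                                       "introvert", "learn"]))
```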

Usefulness of Applying Macro Programs for Brain SPECT Processing (Brain SPECT Processing에 있어서 Macro Program 사용의 유용성)

  • Kim, Gye-Hwan;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyeon-Joo
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.35-39 / 2009
  • Purpose: Diagnostic and functional imaging software in nuclear medicine has developed significantly, but some tasks still have limitations, such as taking a lot of time. In this article, we introduce the basic concept of macros to aid understanding, and their application to brain SPECT processing; we adopted macro software for the SPM processing and PACS verification steps of brain SPECT processing. Materials and Methods: In brain SPECT, we chose SPM processing and two PACS tasks that occupy a large portion of the work. SPM is a software package for analyzing neuroimaging data, used for quantitative analysis between groups; its results are produced by a complicated process of realignment, normalization, smoothing, and mapping. We simplified this process with a macro program. After sending images to PACS, we directly input mouse coordinates with a simple macro program for color mapping, gray-scale adjustment, copy, cut, and match. We then compared the time to produce results by hand against the macro program, and applied these times to the number of studies in 2007. Results: In 2007, there were 115 SPM studies and 834 PACS studies for the Diamox protocol. SPM work took 10 to 15 minutes by hand, depending on expertise, while the macro uniformly took 5.5 minutes. Applied to the number of studies, manual SPM work required 1,150 to 1,725 minutes (19 to 29 hours) per year versus 632 minutes (about 11 hours) with the macro. PACS work took 2 to 3 minutes by hand versus 45 seconds with the macro; over all studies, that is 1,668 to 2,502 minutes (28 to 42 hours) by hand versus 625 minutes (about 10 hours) with the macro, saving 1,043 to 1,877 minutes (17 to 31 hours). In total, we could save 45 to 63% on SPM work, 62 to 75% on PACS work, and 55 to 70% on overall brain SPECT processing in 2007. Conclusions: Based on the number of studies, applying macros to brain SPECT processing saved significant time, and even a task that takes little time individually can yield large savings across many studies. The time returned to the technologist allows more concentration on patients and reduces the probability of mistakes. Applying macros to brain SPECT processing helps both radiological technologists and patients and contributes to improving the quality of hospital service.
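
A minimal sketch of the coordinate-replay idea, assuming the pyautogui library; the click coordinates, delays, and step order are placeholders, since the paper's macro tool and the PACS screen layout are site-specific.

```python
import time
import pyautogui

# Hypothetical screen positions for the PACS steps named in the abstract.
STEPS = [
    (420, 310),  # select color-mapping palette
    (640, 355),  # adjust gray scale
    (800, 600),  # copy region
    (950, 600),  # paste/match into report
]

def run_pacs_macro(delay_s: float = 0.8) -> None:
    """Replay a fixed click sequence so each study is processed identically."""
    for x, y in STEPS:
        pyautogui.click(x, y)
        time.sleep(delay_s)  # give the PACS viewer time to respond

if __name__ == "__main__":
    time.sleep(3)  # switch focus to the PACS window first
    run_pacs_macro()
```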


The Relationship between Internet Search Volumes and Stock Price Changes: An Empirical Study on KOSDAQ Market (개별 기업에 대한 인터넷 검색량과 주가변동성의 관계: 국내 코스닥시장에서의 산업별 실증분석)

  • Jeon, Saemi;Chung, Yeojin;Lee, Dongyoup
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.81-96 / 2016
  • As the internet has become widespread and accessible everywhere, it is common for people to search for information via online search engines such as Google and Naver in everyday life. Recent studies have used the online search volume of specific keywords as a measure of internet users' attention in order to predict disease outbreaks such as flu and cancer, the unemployment rate, indices of national economic conditions, and so on. For stock traders, web search is also a major resource for information about individual stock items, so the search volume for a stock can reflect the amount of investor attention it receives. Investor attention has been regarded as a crucial factor influencing stock prices, but it has been measured through indirect proxies such as market capitalization, trading volume, and advertising expense. It has been theoretically and empirically shown that an increase in investor attention to a stock brings a temporary increase in its price, which recovers in the long run. The recent internet environment makes it possible to measure investor attention directly through the internet search volume for an individual stock, which has been used to demonstrate attention-induced price pressure. Previous studies focus mainly on the Dow Jones and NASDAQ markets in the United States. In this paper, we investigate the relationship between individual investors' attention, measured by internet search volume, and the price changes of individual stocks in the KOSDAQ market in Korea, where individual investors account for about 90% of total trades; we also examine how the influence of investor attention on stock returns differs across industries. Weekly internet search volumes for stocks were gathered from the "Naver Trend" service between January 2007 and June 2015. A regression model with an AR(1) covariance structure for the error term is used, since the weekly prices of a stock are serially correlated; market capitalization, trading volume, the increment of trading volume, and the month of each trade are included as control variables. The fitted model shows that an abnormal increase in the search volume of a stock has a positive influence on its return, and the size of the influence varies by industry: stocks in the IT software, construction, and distribution industries are more influenced by abnormally large search volume than the cross-industry average, while stocks in the IT hardware, manufacturing, entertainment, finance, and communication industries are less influenced. To verify attention-driven price pressure in KOSDAQ, the current week's stock return is modeled using the abnormal search volume observed one to four weeks earlier. On average, an abnormally large increase in search volume raised the stock return in the current week and one week later, and lowered it two and three weeks later, with no significant relationship after four weeks. This relationship also differs by industry: abnormal search volume brings particularly severe price reversal for stocks in the IT software industry, which are often targets of irrational investment by individual investors.
Abnormal search volume caused less severe price reversal for stocks in the manufacturing and IT hardware industries than the cross-industry average, and no price reversal was observed in the communication, finance, entertainment, and transportation industries, which are known to be influenced largely by macro-economic factors such as oil prices and currency exchange rates. The results of this study can be used to build an intelligent trading system based on big data gathered from web search engines, social network services, and internet communities. In particular, the differences in the price-reversal effect across industries may provide useful information for constructing a portfolio and building an investment strategy.
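
A minimal sketch of a regression with AR(1) errors, on synthetic weekly data; statsmodels' GLSAR stands in for the paper's estimation, and the variable names and effect sizes are illustrative rather than taken from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400                               # weekly observations
abn_search = rng.normal(size=n)       # abnormal search volume (standardized)
log_mcap = rng.normal(10, 1, size=n)  # control: market capitalization
d_volume = rng.normal(size=n)         # control: increment of trading volume

# Returns with a positive attention effect and AR(1) noise.
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.4 * e[t - 1] + rng.normal(scale=0.02)
ret = 0.01 * abn_search + 0.001 * d_volume + e

X = sm.add_constant(np.column_stack([abn_search, log_mcap, d_volume]))
model = sm.GLSAR(ret, X, rho=1)           # rho=1 -> AR(1) error structure
result = model.iterative_fit(maxiter=8)   # alternate rho and beta estimation
print(result.params.round(4), "estimated rho:", round(model.rho[0], 3))
```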

Permanent Preservation and Use of Historical Archives: Preservation Strategy and Digitization of Historical Collections (역사기록물(Archives)의 항구적인 보존화 이용 : 보존전략과 디지털정보화)

  • Lee, Sang-min
    • The Korean Journal of Archival Studies / no.1 / pp.23-76 / 2000
  • In this paper, I examined what has been researched and determined about preservation strategy and the selection of preservation media in the Western archival community. Archivists have primarily been concerned with the 'preservation' and 'use' of archival materials worth preserving permanently. In the new information era, the preservation and use of archival materials face new challenges: the life expectancy of paper records has been shortened by the acidification and brittleness of modern papers, and the emergence of information technology affects the traditional ways of preserving and using archival materials. User expectations have become so technology-oriented and complicated that archivists must act like information managers using computer technology rather than practitioners of traditional archival handicraft. Preservation strategy plays an important role in archival management as well as information management; for the cost-effective management of archives and archival institutions, a preservation strategy is a must. It encompasses all aspects of the archival preservation process and practice, from the selection of archives, appraisal, inventorying, arrangement, description, and conservation to microfilming or digitization, archival buildings, and access services, and these archival functions should be considered in relation to each other to ensure the proper preservation of archival materials. In an integrated preservation strategy, 'preservation' and 'use' should be combined and fulfilled without sacrificing either. Preservation strategy planning is essential for determining the policies by which archives keep their holdings safe while providing people with maximum access in the most effective ways. Preservation microfilming ensures the permanent preservation of the information held in important archival materials; to this end, detailed standards have been developed to guarantee the permanence of microfilm as well as its product quality. Silver gelatin film can last up to 500 years in an optimum storage environment and is the most viable option for a permanent preservation medium. ISO and ANSI have developed standards for the quality of microfilm and microfilming technology, and preservation microfilming guidelines have been developed to ensure effective archival management and the picture quality of microfilms. It is essential to assess the need for preservation microfilming, since limited resources always constrain preservation management, and the appraisal (and selection) of what to preserve is the most important part of preservation microfilming. In addition, microfilm of standard quality can be scanned to produce quality digital images for instant use through the internet. As information technology develops, archivists have begun to use it to make preservation easier and more economical, and to promote the use of archival materials through computer communication networks. Digitization was introduced to provide easy and universal access to unique archives, and its large capacity for preserving archival data seems very promising. However, digitization, i.e., transferring images of records to electronic codes, still needs to be standardized. Digitized data are electronic records, and at present electronic records are very unstable and cannot be preserved permanently. Digital media, including optical disk materials, have not been proven reliable for permanent preservation.
Because of their chemical coatings and their physical reliance on light, they are not stable and can be preserved for at most 100 years in an optimum storage environment; most CD-Rs last only 20 years. Furthermore, the obsolescence of hardware and software makes it hard to reproduce digital images made with earlier versions. Even when reformatting is possible, refreshing or upgrading digital images is very expensive and must be done at least every five to ten years, and no standard for this obsolescence of hardware and software exists yet. In short, digital permanence is not a fact but an uncertain possibility. Archivists must weigh in their preservation planning both the risk of introducing new technology and its promise. In planning the digitization of historical materials, archivists should also plan for maintaining the digitized images and reformatting them for coming generations of new applications; without such comprehensive planning, future use of the expensive digital images will become unavailable, which is a loss of information and a final failure of both the 'preservation' and the 'use' of archival materials. As Peter Adelstein said, it is wise to be conservative when considerations of conservation are involved.