• Title/Summary/Keyword: real-time systems


Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving by analyzing openly available or directly collected data. In Korea, various companies and individuals are attempting big data analysis, but it is difficult even at the initial stage due to limited big data disclosure and collection difficulties. System improvements for big data activation and big data disclosure services are being carried out both in Korea and abroad, chiefly services for opening public data such as the domestic Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the scarcity of shared data. Moreover, big traffic problems can occur because the entire dataset must be downloaded and examined in order to grasp the attributes of, and simple information about, the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper to solve the problem of sharing big data; it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by providing information for grasping the properties and characteristics of big data when a data user searches for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when original data are disclosed can be avoided, enabling big data sharing between data provider and data user.
Second, it is necessary to quickly generate appropriate preprocessing results according to the disclosure level or network status of the raw data and to deliver those results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time. When preprocessing the data requested by a user, the data must be preprocessed to a size the current network can handle before transmission, so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis. This method is expected to produce a low traffic volume compared with the conventional method of sharing only raw data across many systems. We describe how to solve problems that occur when big data are released and used, and how to facilitate their sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests. The model consists of a Server Agent and a Client Agent, deployed on the server and client sides, respectively. The Server Agent, required by the data provider, performs preliminary analysis of big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data. In addition, it performs fast and efficient big data preprocessing through distributed big data processing and continuously monitors network traffic. The Client Agent is the agent placed on the data user side. It can search for big data through the Data Descriptor, the result of the pre-analysis, and find data quickly. The desired data can then be requested from the server and downloaded according to the level of disclosure. The Server Agent and Client Agent are separated so that data published by a provider can be used by users.
In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem; we construct the detailed modules of the client-server model and present the design method of each module. In a system designed on the basis of the proposed model, a user who acquires data can analyze it in a desired direction or preprocess it into new data. By publishing the newly processed data through the Server Agent, the data user takes on the role of data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user, performing new analysis on the sample data. In this way, raw data are processed and the processed big data are utilized by users, naturally forming a shared environment. The roles of data provider and data user are not fixed, yielding an ideal shared service in which everyone can be both a provider and a user. The client-server model thus solves the problem of sharing big data, provides a free sharing environment for secure big data disclosure, and offers an ideal shared service for easily finding big data.
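The pre-analysis step described in this abstract can be sketched in plain Python: instead of sharing raw data, the provider publishes a Data Descriptor holding summary and sample data. The paper performs this with Spark; the function name, field names, and record layout below are illustrative assumptions, not the authors' API.

```python
import random
import statistics

def pre_analyze(raw_rows, sample_size=3):
    """Hypothetical pre-analysis: build a Data Descriptor (summary + sample)
    so the raw data need not be downloaded just to inspect its properties.
    The paper does this with Spark; this is a plain-Python sketch."""
    values = [r["value"] for r in raw_rows]
    summary = {
        "count": len(values),
        "mean": statistics.mean(values),
        "min": min(values),
        "max": max(values),
    }
    # share only a small sample, avoiding disclosure of the full raw data
    sample = random.sample(raw_rows, min(sample_size, len(raw_rows)))
    return {"summary": summary, "sample": sample}

rows = [{"id": i, "value": i * 2} for i in range(100)]
descriptor = pre_analyze(rows)
print(descriptor["summary"]["count"])  # 100
```

A data user would search such descriptors first and request the raw data only at a disclosure level the provider permits.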

Analyzing Contextual Polarity of Unstructured Data for Measuring Subjective Well-Being (주관적 웰빙 상태 측정을 위한 비정형 데이터의 상황기반 긍부정성 분석 방법)

  • Choi, Sukjae;Song, Yeongeun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.83-105
    • /
    • 2016
  • Measuring an individual's subjective wellbeing in an accurate, unobtrusive, and cost-effective manner is a core success factor of the wellbeing support system, which is a type of medical IT service. However, although measurements with a self-report questionnaire and wearable sensors are very accurate, they are cost-intensive and obtrusive when the wellbeing support system must run in real time. Recently, inferring the state of subjective wellbeing from conventional sentiment analysis of unstructured data has been proposed as an alternative that resolves the drawbacks of the self-report questionnaire and wearable sensors. However, this approach does not consider contextual polarity, which results in lower measurement accuracy. Moreover, no sentiment wordnet or ontology exists for the subjective wellbeing domain. Hence, this paper proposes a method to extract keywords and their contextual polarity representing the subjective wellbeing state from unstructured text on websites, in order to improve the reasoning accuracy of the sentiment analysis. The proposed method is as follows. First, a set of general sentiment words is prepared. SentiWordNet, the most widely used dictionary, was adopted; it contains about 100,000 words, including nouns, verbs, adjectives, and adverbs, with polarities from -1.0 (extremely negative) to 1.0 (extremely positive). Second, corpora on subjective wellbeing (SWB corpora) were obtained by crawling online text. A survey was conducted to prepare a learning dataset that includes an individual's opinion and the level of self-reported wellness, such as stress and depression. The participants were asked to respond with their feelings about online news on two topics. Next, three data sources were extracted from the SWB corpora: demographic information, psychographic information, and the structural characteristics of the text (e.g., the number of words used in the text, simple statistics on the special characters used).
These were considered to adjust the level of a specific SWB factor. Finally, a set of reasoning rules was generated for each wellbeing factor to estimate the SWB of an individual based on the text written by that individual. The experimental results suggest that using contextual polarity for each SWB factor (e.g., stress, depression) significantly improved the estimation accuracy compared to conventional sentiment analysis methods incorporating SentiWordNet. Although literature is available on Korean sentiment analysis, such studies used only a limited set of sentiment words. Because of the small number of words, many sentences are overlooked and ignored when estimating the level of sentiment. The proposed method, however, can identify multiple sentiment-neutral words as sentiment words in the context of a specific SWB factor. The results also suggest that a specific type of sentiment-word dictionary containing contextual polarity needs to be constructed alongside a dictionary based on common sense such as SenticNet. These efforts will enrich and enlarge the application area of sentic computing. The study is helpful to practitioners and managers of wellness services in that several characteristics of unstructured text have been identified for improving SWB measurement. Consistent with the literature, the results showed that gender and age affect the SWB state when the individual is exposed to an identical cue from the online text. In addition, the length of the textual response and the usage pattern of special characters were found to indicate the individual's SWB. These findings imply that better SWB measurement should involve collecting the textual structure and the individual's demographic conditions. In the future, the proposed method should be improved by automated identification of contextual polarity in order to enlarge the vocabulary in a cost-effective manner.
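The role of contextual polarity can be illustrated with a small sketch: a base SentiWordNet-style lexicon is overridden by factor-specific polarities, so a sentiment-neutral word (here the made-up entry "deadline") becomes negative in the context of the "stress" factor. All words and scores below are invented for illustration, not taken from the study.

```python
# base polarity, SentiWordNet-style (values are illustrative)
base_lexicon = {"happy": 0.8, "tired": -0.4, "deadline": 0.0}

# hypothetical contextual polarity per SWB factor: in a "stress" context,
# the neutral word "deadline" carries negative polarity
contextual = {"stress": {"deadline": -0.6}}

def swb_score(tokens, factor):
    """Average polarity of known words, with factor-specific overrides."""
    lex = {**base_lexicon, **contextual.get(factor, {})}
    hits = [lex[t] for t in tokens if t in lex]
    return sum(hits) / len(hits) if hits else 0.0

text = ["happy", "deadline", "deadline"]
print(swb_score(text, "stress"))  # (0.8 - 0.6 - 0.6) / 3, about -0.133
```

Without the contextual override, the same text would score mildly positive, which is the measurement error the paper targets.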

An Energy Efficient Cluster Management Method based on Autonomous Learning in a Server Cluster Environment (서버 클러스터 환경에서 자율학습기반의 에너지 효율적인 클러스터 관리 기법)

  • Cho, Sungchul;Kwak, Hukeun;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.4 no.6
    • /
    • pp.185-196
    • /
    • 2015
  • Energy-aware server clusters aim to reduce power consumption as much as possible while maintaining QoS (Quality of Service) compared to energy non-aware server clusters. They adjust the power mode of each server at fixed or variable time intervals so that only the minimum number of servers needed to handle current user requests is kept ON. Previous studies on energy-aware server clusters have put effort into reducing power consumption further or maintaining QoS, but they do not adequately consider energy efficiency. In this paper, we propose an energy-efficient cluster management method based on autonomous learning for energy-aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance with respect to power consumption. Our method repeats the following procedure to adjust the power modes of servers. First, according to the current load and traffic pattern, it classifies the current workload into a predetermined pattern type. Second, it searches a learning table to check whether learning has been performed for that workload pattern type in the past. If so, it uses the already-stored parameters. Otherwise, it performs learning for that workload pattern type to find the best parameters in terms of energy efficiency and stores the optimized parameters. Third, it adjusts server power modes with those parameters. We implemented the proposed method and performed experiments with a cluster of 16 servers using three different kinds of load patterns. Experimental results show that the proposed method is better than the existing methods in terms of energy efficiency: the number of good responses per unit of power consumed by the proposed method is 99.8%, 107.5%, and 141.8% of that of the existing static method, and 102.0%, 107.0%, and 106.8% of that of the existing prediction method, for the banking load pattern, real load pattern, and virtual load pattern, respectively.
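The learn-once, reuse-thereafter procedure above can be sketched as follows. The pattern classifier and the learned thresholds are placeholders for the paper's autonomous-learning search, which is not specified in the abstract; all names and values here are illustrative assumptions.

```python
learning_table = {}  # workload pattern type -> optimized parameters

def classify_pattern(load_series):
    # toy classifier: rising, falling, or flat traffic (illustrative only)
    if load_series[-1] > load_series[0] * 1.1:
        return "rising"
    if load_series[-1] < load_series[0] * 0.9:
        return "falling"
    return "flat"

def learn_parameters(pattern):
    # placeholder for the autonomous-learning search that would find the
    # most energy-efficient parameters for this pattern type
    table = {"rising": {"on_threshold": 0.6},
             "falling": {"on_threshold": 0.8},
             "flat": {"on_threshold": 0.7}}
    return table[pattern]

def adjust_power_mode(load_series):
    """Classify the workload, learn parameters once per pattern type,
    and reuse stored parameters on later occurrences."""
    pattern = classify_pattern(load_series)
    if pattern not in learning_table:
        learning_table[pattern] = learn_parameters(pattern)
    return learning_table[pattern]
```

The returned parameters would then drive which servers are switched ON or OFF; that power-mode actuation is omitted here.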

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.39-70
    • /
    • 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In existing data-based survey methods, the sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the words used frequently in the collected text documents. To research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative sense) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive or negative meaning toward the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want to see a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want more useful information regarding other aspects such as 'car quality', 'car performance', and 'car service'. Such information will enable customers to make good choices when purchasing brand-new vehicles.
In addition, automobile makers will be able to figure out the preferences and positive/negative points for new models on the market. In the near future, sentiment analysis will help improve the weak points of such models. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings and is of limited use in real applications. Its main disadvantages are as follows. (1) The main aspects (e.g., car design, quality, performance, and service) of a product (e.g., Hyundai Sonata) are not considered. With sentiment analysis that ignores aspects, only a summary note containing the positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores in the entire corpus is reported to customers and car makers. This is not enough; the main aspects of the target product need to be considered in the sentiment analysis. (2) In general, since the same word has different meanings across different domains, a sentiment lexicon appropriate to each domain needs to be constructed. An efficient way to construct the sentiment lexicon per domain is required because sentiment lexicon construction is labor intensive and time consuming. To address these problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly.
Furthermore, by reinforcing topic semantics, we can improve the accuracy of the product reputation mining algorithm well beyond that of the existing approach. In the experiments, we collected a large set of review documents for domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; showed top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.
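The aspect-level positive/negative ratio computation in step (3) can be sketched as below, assuming sentences have already been tagged with a latent-topic-derived aspect and a sentence-level polarity (the topic modeling itself is omitted). The aspects, polarities, and counts are invented for illustration.

```python
from collections import defaultdict

# hypothetical review sentences, each tagged with an aspect mined from
# latent topics and a polarity (+1 positive, -1 negative)
sentences = [
    ("design", +1), ("design", +1), ("design", -1),
    ("performance", -1), ("performance", -1), ("service", +1),
]

def aspect_ratios(tagged):
    """Per-aspect positive/negative ratios from polarity-tagged sentences."""
    counts = defaultdict(lambda: [0, 0])  # aspect -> [pos, neg]
    for aspect, polarity in tagged:
        counts[aspect][0 if polarity > 0 else 1] += 1
    return {a: {"positive": p / (p + n), "negative": n / (p + n)}
            for a, (p, n) in counts.items()}

ratios = aspect_ratios(sentences)
```

A digest per aspect would then list the highest-scoring sentences alongside these ratios.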

THE CURRENT STATUS OF BIOMEDICAL ENGINEERING IN THE USA

  • Webster, John G.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1992 no.05
    • /
    • pp.27-47
    • /
    • 1992
  • Engineers have developed new instruments that aid in diagnosis and therapy. Ultrasonic imaging has provided a nondamaging method of imaging internal organs. A complex transducer emits ultrasonic waves at many angles and reconstructs a map of internal anatomy and also velocities of blood in vessels. Fast computed tomography permits reconstruction of the 3-dimensional anatomy and perfusion of the heart at 20-Hz rates. Positron emission tomography uses certain isotopes that produce positrons that react with electrons to simultaneously emit two gamma rays in opposite directions. It locates the region of origin by using a ring of discrete scintillation detectors, each in electronic coincidence with an opposing detector. In magnetic resonance imaging, the patient is placed in a very strong magnetic field. The precessing of the hydrogen atoms is perturbed by an interrogating field to yield two-dimensional images of soft tissue having exceptional clarity. As an alternative to radiology image processing, film archiving, and retrieval, picture archiving and communication systems (PACS) are being implemented. Images from computed radiography, magnetic resonance imaging (MRI), nuclear medicine, and ultrasound are digitized, transmitted, and stored in computers for retrieval at distributed workstations. In electrical impedance tomography, electrodes are placed around the thorax. A 50-kHz current is injected between two electrodes and voltages are measured on all other electrodes. A computer processes the data to yield an image of the resistivity of a 2-dimensional slice of the thorax. During fetal monitoring, a corkscrew electrode is screwed into the fetal scalp to measure the fetal electrocardiogram. Correlations with uterine contractions yield information on the status of the fetus during delivery. To measure cardiac output by thermodilution, cold saline is injected into the right atrium.
A thermistor in the right pulmonary artery yields temperature measurements, from which we can calculate cardiac output. In impedance cardiography, we measure the changes in electrical impedance as the heart ejects blood into the arteries. Motion artifacts are large, so signal averaging is useful during monitoring. An intraarterial blood gas monitoring system permits monitoring in real time. Light is sent down optical fibers inserted into the radial artery, where it is absorbed by dyes, which reemit the light at a different wavelength. The emitted light travels up optical fibers where an external instrument determines O2, CO2, and pH. Therapeutic devices include the electrosurgical unit. A high-frequency electric arc is drawn between the knife and the tissue. The arc cuts and the heat coagulates, thus preventing blood loss. Hyperthermia has demonstrated antitumor effects in patients in whom all conventional modes of therapy have failed. Methods of raising tumor temperature include focused ultrasound, radio-frequency power through needles, or microwaves. When the heart stops pumping, we use the defibrillator to restore normal pumping. A brief, high-current pulse through the heart synchronizes all cardiac fibers to restore normal rhythm. When the cardiac rhythm is too slow, we implant the cardiac pacemaker. An electrode within the heart stimulates the cardiac muscle to contract at the normal rate. When the cardiac valves are narrowed or leak, we implant an artificial valve. Silicone rubber and Teflon are used for biocompatibility. Artificial hearts powered by pneumatic hoses have been implanted in humans. However, the quality of life gradually degrades, and death ensues. When kidney stones develop, lithotripsy is used. A spark creates a pressure wave, which is focused on the stone and fragments it. The pieces pass out normally. When kidneys fail, the blood is cleansed during hemodialysis. 
Urea passes through a porous membrane to a dialysate bath to lower its concentration in the blood. The blind are able to read by scanning the Optacon with their fingertips. A camera scans letters and converts them to an array of vibrating pins. The deaf are able to hear using a cochlear implant. A microphone detects sound and divides it into frequency bands. Twenty-two electrodes within the cochlea stimulate the acoustic nerve to provide sound patterns. For those who have lost muscle function in the limbs, researchers are implanting electrodes to stimulate the muscle. Sensors in the legs and arms feed back signals to a computer that coordinates the stimulators to provide limb motion. For those with high spinal cord injury, a puff-and-sip switch can control a computer and permit the disabled person to operate the computer and communicate with the outside world.


Patients Setup Verification Tool for RT (PSVTS) : DRR, Simulation, Portal and Digital images (방사선치료 시 환자자세 검증을 위한 분석용 도구 개발)

  • Lee Suk;Seong Jinsil;Kwon Soo Il;Chu Sung Sil;Lee Chang Geol;Suh Chang Ok
    • Radiation Oncology Journal
    • /
    • v.21 no.1
    • /
    • pp.100-106
    • /
    • 2003
  • Purpose: To develop a patients' setup verification tool (PSVT) to verify the alignment of the machine and target isocenters and the reproducibility of patients' setup for three-dimensional conformal radiotherapy (3DCRT) and intensity modulated radiotherapy (IMRT). The utilization of this system was evaluated through phantom and patient case studies. Materials and methods: We developed and clinically tested a new method for patients' setup verification using digitally reconstructed radiography (DRR), simulation, portal, and digital images. The PSVT system was networked to a Pentium PC for transmission of the acquired images for analysis. To verify the alignment of the machine and target isocenters, orthogonal pairs of simulation images were used as verification images. Errors in the isocenter alignment were measured by comparing the verification images with DRRs generated from CT images. Orthogonal films were taken of all the patients once a week. These verification films were compared with the DRRs used for the treatment setup. By performing this procedure at every treatment, using humanoid phantom and patient cases, localization errors can be analyzed and adjustments made from the measured translations. The reproducibility of the patients' setup was verified using portal and digital images. Results: The PSVT system was developed to verify the alignment of the machine and target isocenters and the reproducibility of the patients' setup for 3DCRT and IMRT. The results show that the localization errors are 0.8±0.2 mm (AP) and 1.0±0.3 mm (lateral) in cases relating to the brain, and 1.1±0.5 mm (AP) and 1.0±0.6 mm (lateral) in cases relating to the pelvis.
The reproducibility of the patients' setup was verified by visualization using real-time image acquisition, leading to the practical utilization of our software. Conclusions: A PSVT system was developed for verifying the alignment between the machine and target isocenters and the reproducibility of the patients' setup in 3DCRT and IMRT. With adjustment of the completed GUI-based algorithm and a good-quality DRR image, our software may be used for clinical applications.

Cigarette Smoking and Polymorphism of the Paraoxonase 1 Gene as Risk factors for Lung Cancer (폐암발생의 위험인자로서 흡연과 Paraoxonase 1 유전자 다형성)

  • Lee, Chul-Ho;Lee, Kye Young;Hong, Yun-Chul;Choe, Kang-Hyeon;Kim, Yong-Dae;Kang, Jong-Won;Kim, Heon;Hong, Jang Soo
    • Tuberculosis and Respiratory Diseases
    • /
    • v.58 no.5
    • /
    • pp.490-497
    • /
    • 2005
  • Background: The paraoxonase enzyme plays a significant role in the detoxification of various organophosphorous compounds in mammals, and paraoxonase (PON) 1 is one of the endogenous free-radical scavenging systems in the human body. In this study, we investigated the interaction of cigarette smoking and the genetic polymorphism of PON1 with lung cancer in Korean males. Methods: Three hundred thirty-five patients with lung cancer and an equal number of age-matched controls were enrolled in this study. Every subject was asked to complete a questionnaire concerning their smoking and alcohol drinking habits. A 5' exonuclease assay (TaqMan) was used to genotype the PON1 Q192R polymorphism. The effects of smoking habits, drinking habits, the PON1 Q192R polymorphism, and their interactions were statistically analyzed. Results: Cigarette smoking and the Q/Q genotype of PON1 were significant risk factors for lung cancer. Individuals carrying the Q/Q genotype of PON1 were at a higher risk for lung cancer compared with individuals carrying the Q/R or R/R genotype (odds ratio, 2.84; 95% confidence interval, 1.69-4.79). When the groups were further stratified by smoking status, the PON1 Q/Q genotype was associated with lung cancer among current or ex-smokers (odds ratio, 2.56; 95% confidence interval, 1.52-4.31). Current or ex-smokers with the Q/Q genotype showed an elevated risk for lung cancer (odds ratio, 15.50; 95% confidence interval, 6.76-35.54) compared with subjects who never smoked and had the Q/R or R/R genotype. The odds ratios (95% confidence intervals) of smokers with the PON1 Q/Q type compared to nonsmokers with the PON1 Q/R or R/R type were 53.77 (6.55-441.14) for squamous cell carcinoma, 6.25 (1.38-28.32) for adenocarcinoma, and 59.94 (4.66-770.39) for small cell carcinoma; these results were statistically significant.
Conclusion: These results suggest that cigarette smoking and the PON1 Q/Q genotype are risk factors for lung cancer. The combination of cigarette smoking and the PON1 Q/Q genotype significantly increased the lung cancer risk irrespective of the histologic type of cancer.
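For readers unfamiliar with the statistic, an odds ratio such as those reported above is computed from a 2x2 exposure-outcome table. The counts in this sketch are illustrative, not the study's data.

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c).
    Counts are hypothetical examples, not the study's raw data."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

print(odds_ratio(40, 20, 30, 60))  # 4.0
```

An OR above 1 indicates the exposure (e.g., the Q/Q genotype) is associated with higher odds of the outcome; the reported confidence intervals quantify the uncertainty of that estimate.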

Development of Neural Network Based Cycle Length Design Model Minimizing Delay for Traffic Responsive Control (실시간 신호제어를 위한 신경망 적용 지체최소화 주기길이 설계모형 개발)

  • Lee, Jung-Youn;Kim, Jin-Tae;Chang, Myung-Soon
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.3 s.74
    • /
    • pp.145-157
    • /
    • 2004
  • The cycle length design model of the Korean traffic responsive signal control system is devised to vary the cycle length in response to changes in traffic demand in real time, utilizing parameters specified by a system operator and field information such as the degrees of saturation of through phases. Since no explicit guideline is provided to the system operator, the system tends to include ambiguity in terms of system optimization. In addition, the cycle lengths produced by the existing model have not yet been verified to be comparable to those minimizing delay. This paper presents studies conducted (1) to find shortcomings embedded in the existing model by comparing the cycle lengths produced by the model against those minimizing delay, and (2) to propose a new direction for designing a cycle length that minimizes delay and excludes such operator-oriented parameters. It was found that the cycle lengths from the existing model fail to minimize delay and degrade intersection operating conditions when traffic volume is low, due to the changed target operational volume-to-capacity ratio embedded in the model. Sixty-four different neural network based cycle length design models were developed based on simulation data serving as a surrogate for field data. The CORSIM optimal cycle lengths minimizing delay were found through the COST software developed for this study. COST searches for the CORSIM optimal cycle length minimizing delay with a heuristic search method, a hybrid genetic algorithm. Among the 64 models, the one producing cycle lengths closest to the optimal was selected through statistical tests. The verification test showed that the best model designs cycle lengths in a pattern similar to those minimizing delay. The cycle lengths from the proposed model are comparable to those from TRANSYT-7F.
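As a point of reference for delay-minimizing cycle lengths, Webster's classical formula can be computed directly; it is not the paper's neural network model, and the lost time and flow ratios below are example values.

```python
def webster_cycle_length(lost_time_s, flow_ratios):
    """Webster's classic delay-minimizing cycle length,
    C0 = (1.5 * L + 5) / (1 - Y), where L is total lost time per cycle (s)
    and Y is the sum of critical flow ratios over the phases.
    Shown only as a baseline; the paper trains neural networks against
    CORSIM-optimal cycle lengths instead."""
    Y = sum(flow_ratios)
    if Y >= 1.0:
        raise ValueError("intersection oversaturated (Y >= 1)")
    return (1.5 * lost_time_s + 5) / (1 - Y)

# e.g. 12 s lost time, two critical phases with flow ratios 0.30 and 0.25
print(round(webster_cycle_length(12, [0.3, 0.25])))  # 51
```

A comparison of model-produced cycle lengths against such a delay-minimizing baseline is exactly the kind of check the paper performs with CORSIM and COST.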

Limitations on Exclusive Rights of Authors for Library Reprography : A Comparative Examination of the Draft Revision of Korean Copyright Law with the New American Copyright Act of 1976 (저작권법에 준한 도서관봉사에 관한 연구 -미국과 한국의 저자재산권의 제한규정을 중시으로-)

  • 김향신
    • Journal of Korean Library and Information Science Society
    • /
    • v.11
    • /
    • pp.69-99
    • /
    • 1984
  • A dramatic development in the new technology of copying materials has presented us with massive problems in reconciling the conflicts between copyright owners and potential users of copyrighted materials. Adaptation to this changing condition led some countries to revise their copyright laws, as the U.S. did in 1976 and Korea in 1984, with a view to joining international or universal copyright conventions in the future. Copyright, defined as exclusive rights given to copyright owners, aims to secure a fair return for an author's creative labor and to stimulate artistic creativity for the general public good. The exclusive rights in copyrightable matter, generally covering reproduction, preparation of derivative works, public distribution, public performance, and public display, are limited by fair use for scholarship and criticism and by library reproduction for preservation and interlibrary loan. These limitations on the exclusive rights touch all aspects of library services and place a great burden on librarians' daily duty to balance the rights of creators with the needs of library patrons. Fair use, as one of these limitations, has been coupled with the enormous growth of new technology and has extended from xerography to online database systems. The implementation of fair use and library reprography under Korean law in local practice is examined on the basis of the new American copyright act of 1976. Under the draft revision of Korean law, librarians will face many potential problems, as summarized below. 1. Because the new provision of 'lifetime plus 50 years' will tie up substantial bodies of material longer than the old law, librarians will need permissions from the owners until that date and should pay attention to the author's death date. 2. Because the copyright can be sold, distributed, given to heirs, or donated, in whole or in part, librarians must trace the heirs and other subsequent owners.
In the case of a derivative work, this is a real problem. 3. Since a work is protected from the moment of its creation, the coverage of copyrightable matter extends to both published and unpublished works, and librarians' workload will be heavier. Without copyright registration, no one can be certain that a work is in the public domain, so librarians will need to check with an authority. 4. In implementing the limitations on exclusive rights, fair use, and library reproduction for interlibrary loan, there can be no substantial aggregate use and no systematic distribution of multiple copies. Therefore, librarians should not substitute reproductions for subscriptions or purchases. 5. For interlibrary loan by photocopying, librarians should understand the procedure of royalty payment. 6. Compulsory licenses should be understood by librarians. 7. Because the draft revision of Korean law is a reciprocal treaty, librarians should attend to other countries' copyright laws to protect foreign authors under Korean law. To solve the above problems, some suggestions are presented below. 1. That a copyright clearinghouse or central agency be established as a centralized royalty payment mechanism. 2. That the Korean Library Association establish a committee on copyright. 3. That the Korean Library Association propose guidelines for each occasion, e.g., for interlibrary loan, books, periodicals, music, etc. 4. That the Korean government establish a copyright office or an official organization for copyright control other than the copyright committee already organized by the government. 5. That the Korean Library Association establish educational programs on copyright for librarians through seminars or articles in its magazines. 6. That individual libraries provide librarians' copyright kits. 7. That school libraries distribute subject bibliographies on copyright law to teachers.
However, librarians should keep in mind that the limitations on exclusive rights are not a blanket exemption for library reprography but a means of convenient access to library resources.


A standardized procedure on building spectral library for hazardous chemicals mixed in river flow using hyperspectral image (초분광 영상을 활용한 하천수 혼합 유해화학물질 표준 분광라이브러리 구축 방안)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.10
    • /
    • pp.845-859
    • /
    • 2020
  • Climate change and recent heat waves have drawn public attention toward other environmental issues, such as water pollution in the form of algal blooms, chemical leaks, and oil spills. Water pollution by the leakage of chemicals may severely affect human health as well as contaminate the air, water, and soil and cause discoloration or death of crops that come in contact with these chemicals. Chemicals that may spill into water streams are often colorless and water-soluble, which makes it difficult to determine with the naked eye whether the water is polluted. When a chemical spill occurs, it is usually detected through a simple contact detection device, with sensors installed at locations where leakage is likely to occur. The drawback of the contact-sensor approach is that it relies heavily on the skill of field workers; moreover, these sensors are installed at a limited number of locations, so spills cannot be detected in areas without them. Recently, hyperspectral images have been used to identify land cover and vegetation and to determine water quality by analyzing the inherent spectral characteristics of these materials. While hyperspectral sensors can potentially be used to detect chemical substances, there is currently a lack of research on detecting chemicals in water streams with them. Therefore, this study utilized remote sensing techniques and the latest sensor technology to overcome the limitations of contact detection technology in detecting the leakage of hazardous chemicals into aquatic systems. In this study, we aimed to determine whether 18 types of hazardous chemicals could be individually classified using hyperspectral images. To this end, we obtained hyperspectral images of each chemical to establish a spectral library.
We expect that future studies will expand the spectral library database for hazardous chemicals and that verification of its application in water streams will be conducted so that it can be applied to real-time monitoring to facilitate rapid detection and response when a chemical spill has occurred.
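A spectral library of the kind described can be paired with a simple matcher. One common choice (an assumption here, not necessarily the authors' method) is the spectral angle mapper, which classifies a pixel spectrum by its angular distance to each library reference spectrum; the band values below are made up for illustration.

```python
import math

def spectral_angle(a, b):
    """Spectral Angle Mapper distance between two spectra:
    the angle between the band vectors, insensitive to overall brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# chemical -> mean reference spectrum over a few bands (illustrative values)
library = {
    "chem_A": [0.10, 0.40, 0.80, 0.30],
    "chem_B": [0.70, 0.20, 0.10, 0.60],
}

def classify(pixel_spectrum):
    """Assign the library entry with the smallest spectral angle."""
    return min(library, key=lambda c: spectral_angle(pixel_spectrum, library[c]))

print(classify([0.12, 0.38, 0.75, 0.33]))  # chem_A
```

In practice, each library entry would be the mean spectrum of a chemical measured under controlled conditions, and a match threshold on the angle would separate "known chemical present" from "no match".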