• Title/Summary/Keyword: Domain term

Search results: 415

Novel LTE based Channel Estimation Scheme for V2V Environment (LTE 기반 V2V 환경에서 새로운 채널 추정 기법)

  • Chu, Myeonghun;Moon, Sangmi;Kwon, Soonho;Lee, Jihye;Bae, Sara;Kim, Hanjong;Kim, Cheolsung;Kim, Daejin;Hwang, Intae
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.3, pp.3-9, 2017
  • Recently, the 3rd Generation Partnership Project (3GPP) has been actively studying Long Term Evolution (LTE)-based vehicle communication to support transport efficiency, telematics, and infotainment. Because vehicle communication is closely related to safety, it requires reliable links. Since vehicles move much faster than typical users, the radio channel changes rapidly, causing problems such as transmission quality degradation; the channel estimate must therefore be updated continuously. There are five conventional channel estimation schemes. Least Squares (LS) estimation uses pilot symbols known to both transmitter and receiver. Decision Directed Channel Estimation (DDCE) uses the data signal itself for channel estimation. Constructed Data Pilot (CDP) exploits the correlation between two adjacent data symbols. Spectral Temporal Averaging (STA) averages the channel over the frequency-time domain. Smoothing reduces the peak error of data decisions. In this paper, we propose a novel channel estimation scheme for the LTE-based Vehicle-to-Vehicle (V2V) environment. In our Hybrid Reliable Channel Estimation (HRCE) scheme, the DDCE and Smoothing schemes are combined, and the Linear Minimum Mean Square Error (LMMSE) scheme is finally applied to minimize the channel estimation error, enabling reliable data detection. Simulation results show that overall performance improves in terms of Normalized Mean Square Error (NMSE) and Bit Error Rate (BER).
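As an aside, the simplest of the schemes named above, pilot-based LS estimation, can be sketched in a few lines of Python; the pilot spacing, symbol alphabet, and variable names here are illustrative assumptions, not details from the paper:

```python
import numpy as np

def ls_channel_estimate(y_pilot, x_pilot):
    """Least-Squares estimate at pilot subcarriers: H_ls = Y / X."""
    return y_pilot / x_pilot

rng = np.random.default_rng(0)
n_sc = 64  # subcarriers in one OFDM symbol (illustrative)
h_true = rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)
x = np.exp(1j * np.pi / 4 * rng.integers(0, 8, n_sc))  # unit-modulus symbols
y = h_true * x                       # noiseless received symbols
pilots = np.arange(0, n_sc, 8)       # hypothetical comb-type pilot positions

h_ls = ls_channel_estimate(y[pilots], x[pilots])
# With no noise, LS recovers the channel exactly at the pilot positions
print(np.allclose(h_ls, h_true[pilots]))  # True
```

In practice the pilot estimates would be interpolated across data subcarriers, which is where schemes such as DDCE, STA, or LMMSE filtering come in.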

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.22 no.2, pp.33-56, 2016
  • Many researchers have focused on developing bankruptcy prediction models using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM), to secure enhanced performance. Most bankruptcy prediction models in academic studies have used financial ratios as the main input variables. The bankruptcy of firms is associated with both the firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that exploiting only financial ratios has drawbacks. Accounting information, such as financial ratios, is based on past data, and it is usually determined one year before bankruptcy. Thus, a time lag exists between the point of closing financial statements and the point of credit evaluation. In addition, financial ratios do not capture environmental factors, such as external economic situations. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Thus, qualitative information must be added to the conventional bankruptcy prediction model to supplement accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various information sources, previous studies have rarely used such qualitative information. Recently, however, big data analytics, such as text mining techniques, have been drawing much attention in academia and industry, with an increasing amount of unstructured text data available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling.
Nevertheless, the use of qualitative information on the web for business prediction modeling is still at an early stage, restricted to limited applications such as stock prediction and movie revenue prediction. Thus, it is necessary to apply big data analytics techniques, such as text mining, to various business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form, due to the complexity of managing and processing such data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms using both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well the qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as the mechanism for processing qualitative information. A sentiment index is constructed at the industry level by extraction from a large amount of text data, quantifying the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles. The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between an occurring term and the actual economic condition of the industry, rather than the inherent semantics of the term. The experimental results showed that incorporating qualitative information based on big data analytics into the traditional accounting-based bankruptcy prediction model is effective for enhancing predictive performance.
The sentiment variable extracted from economic news articles was predictive of corporate bankruptcy. In particular, a negative sentiment variable improved the accuracy of bankruptcy prediction, because the bankruptcy of construction firms is sensitive to poor economic conditions. The proposed model contributes to the field in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
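A keyword-based sentiment index of the kind described can be sketched as follows; the lexicon terms, weights, and the example sentence are invented for illustration and are not the study's actual domain lexicon:

```python
# Hypothetical domain-specific lexicon: term -> polarity weight
lexicon = {
    "boom": 1.0, "growth": 1.0, "recovery": 1.0,
    "slump": -1.0, "default": -1.0, "downturn": -1.0,
}

def sentiment_index(tokens, lexicon):
    """Average polarity of the lexicon terms found in a tokenized article."""
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

article = "construction orders slump as default risk grows in downturn".split()
print(sentiment_index(article, lexicon))  # -1.0 (uniformly negative terms)
```

Averaging such per-article scores over all articles in a period would yield the industry-level index that the abstract feeds into the prediction model.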

Two Cases of Long-Term Changes in the Retinal Nerve Fiber Layer Thickness after Intravitreal Bevacizumab for Diabetic Papillopathy (당뇨병유두병증에서 유리체강내 베바시주맙 주입술 후 망막시경섬유층 두께의 장기간 변화 2예)

  • Kim, Jong Jin;Im, Jong Chan;Shin, Jae Pil;Kim, In Taek;Park, Dong Ho
    • Journal of The Korean Ophthalmological Society, v.54 no.9, pp.1445-1451, 2013
  • Purpose: To report long-term changes in the average retinal nerve fiber layer (RNFL) thickness in 2 patients who received an intravitreal bevacizumab (IVB) injection for diabetic papillopathy. Case summary: A 36-year-old patient with diabetes complained of decreased visual acuity (20/200) in the right eye. The fundus examination showed optic disc swelling in both eyes. The average RNFL thickness based on optical coherence tomography (OCT) increased to 278 μm, and Goldmann perimetry showed a nasal visual field defect in the right eye. IVB was injected into the right eye. Three weeks after the IVB injection, RNFL thickness decreased to 135 μm and visual acuity improved to 20/25 in the right eye. However, RNFL thickness increased from 126 to 207 μm and visual acuity decreased to 20/32 in the left eye. Thus, IVB was injected into the left eye. In week 3, RNFL thickness decreased to 147 μm and visual acuity improved to 20/20 in the left eye. At 12 months after IVB injection, RNFL thickness was 87 μm in the right eye and 109 μm in the left eye. A 57-year-old patient with diabetes complained of decreased visual acuity (20/200) and showed optic disc swelling in the right eye. The average RNFL thickness increased to 252 μm and Goldmann perimetry showed an enlarged blind spot in the right eye. IVB was injected into the right eye. After 3 weeks, RNFL thickness decreased to 136 μm and visual acuity improved to 20/70 in the right eye. Six months after IVB injection, RNFL thickness was 83 μm in the right eye. Conclusions: Visual acuity progressively improved within 3 weeks, and RNFL thickness measured by spectral-domain OCT showed progressive reduction, in 2 cases of diabetic papillopathy treated with IVB injections.

Long-term Results of Taking Anti-oxidant Nutritional Supplement in Intermediate Age-related Macular Degeneration (중기 나이관련황반변성 환자에서 항산화영양제 복용 후 장기 관찰 결과)

  • Bang, Seul Ki;Kim, Eung Suk;Kim, Jong Woo;Shin, Jae Pil;Lee, Ji Eun;Yu, Hyeong Gon;Huh, Kuhl;Yu, Seung-Young
    • Journal of The Korean Ophthalmological Society, v.59 no.12, pp.1152-1159, 2018
  • Purpose: We prospectively investigated clinical changes and long-term outcomes after administration of the supplements recommended by the Age-Related Eye Disease Study 2 to patients with intermediate age-related macular degeneration (AMD). Methods: This prospective multicenter study enrolled 79 eyes of 55 patients taking lutein and zeaxanthin. The primary endpoint was contrast sensitivity, checked every 12 months for a total of 36 months after treatment commenced. The secondary endpoints were visual acuity, central macular thickness, and drusen volume; the latter two parameters were assessed using spectral-domain optical coherence tomography. Results: The mean patient age was 72.46 ± 7.16 years. Contrast sensitivity gradually improved at both three and six cycles per degree. The corrected visual acuity was 0.13 ± 0.14 logMAR and did not change significantly over the 36 months. Neither central macular thickness nor drusen volume changed significantly. Conclusions: Contrast sensitivity markedly improved after treatment, improving vision and patient satisfaction. Visual acuity, central retinal thickness, and drusen volume did not deteriorate. Therefore, progression of AMD and deterioration of visual function were halted.

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems, v.21 no.1, pp.103-122, 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, extremely tedious and time-consuming and requiring a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating the keywords with the highest similarity scores. Two keyword generation systems implementing IVSM were built: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is embedded in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords by simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect the IVSM proposed in this paper to be applicable to many electronic documents in Web-based communities and digital libraries.
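The five-step assignment process can be sketched directly from the description above; the term weights and keyword sets below are toy values for illustration, not the systems' real data:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    num = sum(w * v[t] for t, w in u.items() if t in v)
    den = (math.sqrt(sum(w * w for w in u.values()))
           * math.sqrt(sum(w * w for w in v.values())))
    return num / den if den else 0.0

def assign_keywords(doc_tf, keyword_sets, top_k=2):
    """Steps (1)-(5): score every keyword set against the document's
    term frequencies and return the highest-scoring keywords."""
    ranked = sorted(keyword_sets,
                    key=lambda k: cosine(doc_tf, keyword_sets[k]),
                    reverse=True)
    return ranked[:top_k]

doc_tf = {"logistics": 3, "supply": 2, "port": 1}  # parsed target document
keyword_sets = {
    "logistics": {"logistics": 1, "supply": 1},
    "port economics": {"port": 1, "shipping": 1},
    "finance": {"bank": 1, "credit": 1},
}
print(assign_keywords(doc_tf, keyword_sets))  # ['logistics', 'port economics']
```

Because the keyword sets, not the document's own terms, are the candidates, this sketch can assign "implicit" keywords that never appear in the document, which is exactly the advantage the abstract claims for the assignment approach.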

Comparison of Cognitive Loads between Koreans and Foreigners in the Reading Process

  • Im, Jung Nam;Min, Seung Nam;Cho, Sung Moon
    • Journal of the Ergonomics Society of Korea, v.35 no.4, pp.293-305, 2016
  • Objective: This study aims to measure cognitive load levels by analyzing the EEG of Koreans and foreigners as they read with care Korean texts selected by level in terms of grammar and vocabulary, and to compare the cognitive load levels through quantitative values. The results can serve as basic data for a more scientific approach when Korean texts or books are developed and evaluation methods are built for foreigners who encounter them for learning or assignments. Background: As of 2014, 84,801 foreign students were studying in Korea, and the number increases annually. Most are from Asia and come to Korea to enter a university or graduate school. Because these students aim to study at Korean universities, they receive Korean language education from the time they prepare to study in Korea. To enter a Korean university, they must obtain grade 4 or higher on the Test of Proficiency in Korean (TOPIK), or complete a designated educational program at a university-affiliated language institution. In such programs, learners of Korean receive text-based instruction in every domain except speaking, and text comprehension can determine their academic achievement after they enter their desired schools (Jeon, 2004). However, many foreigners who finish a short-term language course and must begin university study cannot properly keep up with classes that demand expertise, given only the vocabulary and grammar levels acquired during the language course. Therefore, reading education centered on strategies for understanding university textbooks, the most difficult reading texts for foreigners, is necessary (Kim and Shin, 2015).
This study carried out an experiment from the perspective that quantitative data on readers, the central agents of reading education, and on teaching materials need to be secured to back up the need for reading education for university-bound learners and to approach educational design scientifically. Specifically, this study measured the difficulty of reading through the cognitive loads observed during the reading of each text, dividing teaching-material (book) difficulty into eight levels and readers into Koreans and foreigners. Method: To identify the cognitive loads of Koreans and foreigners reading Korean texts with care, this study recruited 16 participants (eight Koreans and eight foreigners). The foreigners were limited to students in the intermediate-level Korean course at university-affiliated language institutions within the Seoul Metropolitan Area. As participants read texts selected by level from the Korean books (difficulty: eight levels) published by King Sejong Institute (Sejonghakdang.org), EEG sensors were attached over the frontal lobe (Fz) and occipital lobe (Oz). After the experiment, a questionnaire survey measured subjective evaluation, covering comprehension and the difficulty of grammar and vocabulary. To examine the effect of schema, which may affect text comprehension, the Korean texts were controlled while EEG and subjective satisfaction were measured. Results: To identify the brain's cognitive load, the beta band was extracted. Interaction effects between reader group (Korean vs. foreigner) and text difficulty were revealed (Fz: p = 0.48; Oz: p = 0.00). The cognitive loads of Koreans, whose mother tongue is Korean, were lower when reading Korean texts than those of the foreigners, and the foreigners' cognitive loads rose gradually with the difficulty of the texts.
From text four, of intermediate difficulty, marked differences between Koreans and foreigners began to appear relative to the beginner-level texts. In the subjective evaluation, an interaction between reader group and text difficulty was also revealed (p = 0.00), and satisfaction was lower as text difficulty increased. Conclusion: When readers had background knowledge, that is, when schema was formed, comprehension of and satisfaction with the texts were higher, even when the texts contained vocabulary and grammar above the readers' level. For texts whose grammar was rated difficult in the subjective evaluation, the foreigners' cognitive loads were also high, rising in proportion to the increase in difficulty. This means that grammar acts as a stress factor in foreigners' reading comprehension. Application: This study quantitatively evaluated, through EEG, the cognitive loads of Koreans and foreigners reading Korean texts as a function of reader and text difficulty. The results can be used in producing Korean teaching materials and Korean educational content, and in topic selection for foreigners. If the research scope is expanded to the reading process using an eye tracker, reading education programs and evaluation methods for foreigners can be developed on the basis of quantitative values.
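The beta-band extraction step can be illustrated with a minimal FFT band-power computation; the sampling rate and the 13-30 Hz band edges are typical EEG conventions assumed here, not the study's reported settings:

```python
import numpy as np

def bandpower(signal, fs, f_lo=13.0, f_hi=30.0):
    """Total spectral power of `signal` within [f_lo, f_hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum()

fs = 256                      # Hz, a common EEG sampling rate
t = np.arange(fs * 4) / fs    # 4 seconds of samples
beta_like = np.sin(2 * np.pi * 20 * t)   # 20 Hz: inside the beta band
theta_like = np.sin(2 * np.pi * 5 * t)   # 5 Hz: outside the beta band
print(bandpower(beta_like, fs) > bandpower(theta_like, fs))  # True
```

Comparing this beta-band power between reader groups across the eight text levels is, in essence, the quantity behind the reported interaction effects.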

PRC Maritime Operational Capability and the Task for the ROK Military (중국군의 해양작전능력과 한국군의 과제)

  • Kim, Min-Seok
    • Strategy21, s.33, pp.65-112, 2014
  • Recent trends show that the PRC has stepped aside from its "army-centered approach" and placed greater emphasis on its Navy and Air Force for a wider range of operations, reducing its ground force and channeling its economic power and military technology into naval development. The quantitative growth of the PLA Navy itself is no surprise, as it is not a recent phenomenon. Now is the time to pay closer attention to the level of the PRC naval force's performance and the extent of its warfighting capacity in the maritime domain. It is also worth asking what China can do with its widening naval power foundation. In short, it is time to delve into several possible scenarios in which the PRC poses a real threat. With this in mind, Section Two of this paper observes the construction progress of the PRC's naval power and its future prospects up to the year 2020, and categorizes the time frame according to its major force-improvement trends. By analyzing qualitative improvements made over time, such as the scale of investment and the number of ships relative to the increase in displacement (tonnage), the paper attempts to identify salient features in the construction of naval power. Chapter Three sets out a performance evaluation of each type of PRC naval ship, as well as the capabilities of the Navy, Air Force, the Second Artillery (i.e., strategic missile forces), and the satellites that could support maritime warfare. Finally, the concluding chapter estimates the PRC's maritime warfighting capability as anticipated in the respective conflict scenarios, considers its impact on the Korean Peninsula, and proposes the directions the ROK should steer in response.
First of all, since the 1980s the PRC navy has undergone transitions as the focus of its military strategic outlook shifted from ground warfare to maritime warfare, and within 30 years of effort to construct naval power while greatly reducing its ground forces, the PRC has succeeded in building a navy second only to the U.S. in numerical terms, with the acquisition of an aircraft carrier, a Chinese version of the Aegis system, submarines, and so on. The PRC also has great potential to develop its forces qualitatively, with indigenous aircraft carriers, next-generation strategic submarines, next-generation destroyers, and so forth, because it has accumulated independent production capabilities over its 30-year-long effort. Secondly, one could argue that the ROK still has a chance of coping with the PRC in naval power since, despite its continuous efforts, many estimate that the PRC naval force is roughly ten or more years behind superpowers such as the U.S. in areas including radar detection capability, EW capability, C4I and data-link systems, doctrines on force employment, and tactics, and such a gap cannot be easily overcome. The most probable scenarios involving the PRC in the sea areas surrounding the Korean Peninsula are: first, upon the outbreak of war on the peninsula, the PRC may pursue military intervention by sea, thereby undermining ROK-U.S. combined operations; second, ROK-PRC or PRC-Japan conflicts over maritime jurisdiction or ownership of the Senkaku/Diaoyu islands could inflict damage on ROK territorial sovereignty or economic interests. The PRC would likely attempt to resolve such a conflict with blitzkrieg tactics before U.S. forces arrive on the scene, while delaying and denying the access of incoming U.S. forces.
If this proves unattainable, the PRC could adopt "long-term attrition warfare," weakening its enemy's sustainability. All in all, this paper makes three proposals on how the ROK should respond. First, modern warfare, as well as emergent future warfare, demonstrates that the center stage of battle is no longer domestic territory but further away at sea and in space. In this respect, the ROKN should take advantage of the distinct feature of the battle space on the peninsula, which is surrounded by seas, and obtain the capability to intercept more than 50 percent of the enemy's ballistic missiles, including those of North Korea. In tandem with this capacity, employment of a large force of UAV/F carriers for Kill Chain operations should enhance effectiveness, because conditions are more favorable for defending from the sea with respect to accuracy against enemy targets, minimized threat of friendly damage, and cost effectiveness. Second, to maintain readiness for a North Korean crisis in which timely deployment of U.S. forces is not possible, the ROKN ought to obtain the capability to hold an enemy attack at bay while deterring PRC naval intervention. It is also argued that the ROKN should strengthen its power so as to protect national interests in the seas surrounding the peninsula without USN support, should a ROK-PRC or ROK-Japan conflict over maritime jurisdiction arise. Third, the ROK should fortify the infrastructure for independent construction of naval power and expand its R&D efforts; for this purpose, it should make the most of the advantages of the ROK-U.S. alliance, inducing active support from the United States. The rationale is that while it is strategically effective to rely on an alliance or jump on the bandwagon, the ultimate goal is always to acquire as much independent response capability as possible.

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.253-266, 2018
  • This paper proposes a sequence tagging methodology to improve the performance of Named Entity Recognition (NER) in question answering (QA) systems. In order to retrieve the correct answers stored in a database, the user's query must be translated into a database language such as SQL (Structured Query Language), which requires identifying the class or data names contained in the database. Simply looking up the query's words in the database and matching them to objects cannot distinguish homonyms or multi-word phrases, because it does not consider the context of the user's query. If there are multiple search results, all of them are returned, so the query admits many interpretations and the time complexity of the computation becomes large. To overcome this, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address a weakness of neural network models, their inability to identify untrained words, by using ontology knowledge based features. Experiments were conducted on an ontology knowledge base in the music domain and the performance was evaluated. To evaluate the proposed model accurately, we replaced words appearing in the training queries with untrained words, testing whether the model could correctly identify words that are included in the database but were unseen in training. As a result, the model could recognize objects in context and recognize untrained words without retraining the Bidirectional LSTM-CRF model, and the overall performance of object recognition was confirmed to improve.
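One common way to realize an ontology knowledge based feature is a gazetteer lookup that emits a per-token indicator vector, concatenated to the word embedding before the BiLSTM. The sketch below assumes this gazetteer-style design; the ontology entries, class names, and matching rule are invented for illustration and are not the paper's actual feature scheme:

```python
# Hypothetical music-domain gazetteer: surface form -> ontology class
ontology = {"imagine": "SongTitle", "john lennon": "Artist"}

def ontology_features(tokens, ontology, classes=("Artist", "SongTitle")):
    """One-hot ontology-class indicator per token.
    Prefers a bigram gazetteer match over a unigram match."""
    feats = [[0] * len(classes) for _ in tokens]
    for i in range(len(tokens)):
        for n in (2, 1):  # longest-match-first over n-grams
            span = " ".join(tokens[i:i + n]).lower()
            if span in ontology:
                cls = classes.index(ontology[span])
                for j in range(i, min(i + n, len(tokens))):
                    feats[j][cls] = 1
                break
    return feats

tokens = ["play", "imagine", "by", "john", "lennon"]
print(ontology_features(tokens, ontology))
# [[0, 0], [0, 1], [0, 0], [1, 0], [1, 0]]
```

Because the indicator fires for any surface form present in the ontology, the downstream tagger gets a signal even for words absent from its training vocabulary, which is the untrained-word behavior the abstract describes.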

A Quantitative Analysis of Severity Classification and Burn Severity for Large Forest Fire Areas Using the Normalized Burn Ratio of Landsat Imagery (Landsat 영상으로부터 정규탄화지수 추출과 산불피해지역 및 피해강도의 정량적 분석)

  • Won, Myoung-Soo;Koo, Kyo-Sang;Lee, Myung-Bo
    • Journal of the Korean Association of Geographic Information Studies, v.10 no.3, pp.80-92, 2007
  • Forest fire is the dominant large-scale disturbance mechanism in the Korean temperate forest, and it strongly influences forest structure and function. Burn severity incorporates both short- and long-term post-fire effects on the local and regional environment and is defined by the degree to which an ecosystem has changed owing to the fire. Vegetation rehabilitation may vary specifically according to post-fire burn severity. Understanding burn severity and the process of vegetation rehabilitation in a damaged area after a large fire requires a great deal of manpower and budget; however, analyzing burn severity from satellite imagery can rapidly provide more objective results over a large fire area by remote means. Spaceborne and airborne sensors have been used to map burned area, assess characteristics of active fires, and characterize post-fire ecological effects. To classify fire-damaged areas and analyze burn severity for the Samcheok fire of 2000, the Cheongyang fire of 2002, and the Yangyang fire of 2005, we utilized the Normalized Burn Ratio (NBR) technique. The NBR is temporally differenced between pre- and post-fire datasets to determine the extent and degree of change caused by burning. In this paper we use pre- and post-fire Landsat TM and ETM+ imagery to compute the NBR and evaluate large-scale patterns of burn severity at 30 m spatial resolution. 65% of the Samcheok fire area, 91% of the Cheongyang fire area, and 65% of the Yangyang fire area corresponded to burn severity classes of 'High' or above. The use of a remotely sensed Differenced Normalized Burn Ratio (ΔNBR) with RS and GIS therefore allows burn severity to be quantified spatially by mapping the damaged domain and burn severity across a large fire area.
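The NBR arithmetic is simple enough to state in code. The per-pixel reflectances below are illustrative values, and the 0.66 cut-off is an example threshold rather than the study's calibrated severity classes:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared bands."""
    return (nir - swir) / (nir + swir)

# Illustrative per-pixel reflectances (for Landsat TM, NIR = band 4, SWIR = band 7)
nir_pre, swir_pre = np.array([0.50, 0.45]), np.array([0.10, 0.12])
nir_post, swir_post = np.array([0.15, 0.40]), np.array([0.45, 0.15])

# Temporal differencing: pre-fire NBR minus post-fire NBR
dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
high_severity = dnbr > 0.66  # example cut-off for a 'High' severity class
print(high_severity)  # first pixel severely burned, second barely changed
```

Healthy vegetation is bright in NIR and dark in SWIR, so burning drives NBR down; a large positive ΔNBR therefore flags high burn severity, which is what the abstract's severity-class percentages summarize over whole fire areas.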


Study on the Storage of Chestnut (밤 저장(貯藏)에 관(關)한 연구(硏究))

  • Yim, Ho;Kim, Choung-Ok;Shin, Dang-Wha;Suh, Kee- Bong
    • Korean Journal of Food Science and Technology, v.12 no.3, pp.170-175, 1980
  • Mass production of chestnuts necessitates the development of an economical long-term storage method. The main objective of this study was to confirm the technical feasibility of the chestnut storage method developed in a two-year project and to review its commercial application. The chestnuts used for the experiments were separated in brine (5.5-6.0 °Baumé) into matured and unmatured lots and fumigated with CS2 at a rate of 5 lb per 27 m³ for 25-30 hrs. The chestnuts were packed in wooden boxes with sawdust (50% moisture) at a 1:1 ratio by volume. The boxes were stored in a cold room (1 ± 1°C, 85-95% RH) and in a cellar (0-10°C, controlled only by circulating cool night air). The results obtained were as follows: 1. Fully matured chestnuts could be successfully preserved for 8-9 months at a 10% decay level in the cold room and for 4-5 months in the cellar. 2. Immature chestnuts were inferior to matured ones in storage stability; at the maximum storage period, their storage life was two months shorter. 3. The heat transfer equation of chestnuts piled with sawdust can be expressed as T∞ − T = (T∞ − T0)·10^(−t/320), with j and fh values of 1 and 320 min, respectively. 4. Chestnuts in the storage-unit package had a longer shelf life than naked chestnuts during retail distribution at ambient temperature.
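The heating/cooling relation above (with lag factor j = 1 and heating-rate constant fh = 320 min) can be checked numerically; the temperatures below are illustrative, not measured values from the study:

```python
def product_temp(t_min, t_amb, t_init, j=1.0, fh=320.0):
    """Ball-type heating/cooling curve:
    T_amb - T(t) = j * (T_amb - T_init) * 10**(-t/fh)."""
    return t_amb - j * (t_amb - t_init) * 10 ** (-t_min / fh)

# Chestnuts at 20 C moved into a 1 C cold room: after one fh (320 min)
# the remaining temperature difference to ambient shrinks to one tenth,
# i.e. from 19 C down to 1.9 C, so the product sits near 2.9 C.
print(product_temp(320, t_amb=1.0, t_init=20.0))
```

Each additional fh interval cuts the remaining difference by another factor of ten, which is why a single (j, fh) pair summarizes the whole cooling history of the packed boxes.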
