• Title/Summary/Keyword: 데이터 처리량 (data throughput)


Growth and Useful Component of Angelica gigas Nakai under High Temperature Stress (고온 스트레스에 따른 참당귀의 생육 및 유용성분 특성)

  • Jeong, Dae Hui;Kim, Ki Yoon;Park, Sung Hyuk;Jung, Chung Ryul;Jeon, Kwon Seok;Park, Hong Woo
    • Korean Journal of Plant Resources
    • /
    • v.34 no.4
    • /
    • pp.287-296
    • /
    • 2021
  • Recently, global climate change has accelerated tremendously, causing severe damage to crop production. Here, we examined the growth characteristics and useful components of Angelica gigas under extreme heat stress, to provide fundamental data for its efficient cultivation. Plants were exposed to three experimental temperatures (28℃, 34℃, and 40℃), and their growth characteristics and content of useful components were analyzed. At the experimental site, the ambient and soil temperatures were 19.38℃ and 21.34℃, the relative humidity and soil moisture were 81.3% and 0.18 m³/m³, and the solar radiation was 162.05 W/m². The soil was a sandy clay loam (pH 6.65) with 2.66% organic matter, 868.52 mg/kg available phosphate, and 0.14% nitrogen. Most growth characteristics, including survival rate (85%), plant height (38.66 cm), and fresh and dry weight (41.3 g and 14.24 g), were highest at 28℃. Although the highest content of useful components was observed at 34℃ (3.24%), the differences across temperatures were not significant. Growth characteristics varied across temperatures because of the detrimental effects of heat stress, such as accelerated tissue aging, reduced photosynthesis, and delayed growth. The similar content of useful components across temperatures may be due to poor accumulation of anabolic products caused by impaired growth at extremely high temperatures.

Analysis of Soil Changes in Vegetation-type LID Facilities (식생형 LID 시설의 내부 토양 변화 분석)

  • Lee, Seungjae;Yoon, Yeo-jin
    • Journal of Wetlands Research
    • /
    • v.24 no.3
    • /
    • pp.204-212
    • /
    • 2022
  • The LID technique began to be applied in Korea after 2009, and LID facilities have been installed and operated for rainwater management in business districts overseen by bodies such as the Ministry of Environment, the Ministry of Land, Infrastructure and Transport, and LH Corporation, as well as at public institutions, commercial sites, housing, parks, and schools. Compared with other countries, however, domestic application cases and operating periods are still limited, so appropriate design standards and measures for operation and maintenance are lacking. In particular, facilities built with LID techniques need their internal environment maintained, because their hydrological and environmental effects arise from material circulation and energy flow. An LID facility is designed with a treatment capacity planned for its water-circulation target, and its efficiency is sustained by periodically checking maintenance, vegetation, and soil conditions. In other words, the soil placed in an LID facility is a critical design element: LID facilities are expected to reduce water pollution, flooding, and temperature and to secure water resources while increasing storage and infiltration capacity through water-circulation construction. To maintain and manage the functions of LID facilities accurately, the current state of each facility and its replacement and maintenance cycle should be established from quantitative data such as soil contamination, de-icing (snow removal) effects, and vegetation criteria. This study investigated the status of LID facilities installed in Korea from 2009 to 2020 and analyzed soil changes in facilities applied over the past 10 years after collecting samples from their soil layers.
Analyses of soil texture, organic matter, hardness, water content, pH, electrical conductivity, and salinity showed that some vegetation-type LID facilities more than 5 to 7 years after construction fell into the lowest grade of the landscape design standard. Facilities below that grade can be regarded as having reached the point where maintenance is necessary, since problems with soil permeability and vegetation growth may arise. Accordingly, it was found that LID facilities should be managed through periodic soil replacement.

Finite Element Method Modeling for Individual Malocclusions: Development and Application of the Basic Algorithm (유한요소법을 이용한 환자별 교정시스템 구축의 기초 알고리즘 개발과 적용)

  • Shin, Jung-Woog;Nahm, Dong-Seok;Kim, Tae-Woo;Lee, Sung Jae
    • The korean journal of orthodontics
    • /
    • v.27 no.5 s.64
    • /
    • pp.815-824
    • /
    • 1997
  • The purpose of this study is to develop a basic algorithm for finite element modeling of individual malocclusions. Usually, a great deal of time is spent in preprocessing. To reduce the time required, we developed a standardized procedure for measuring the position of each tooth and a program that automates preprocessing. The following procedures were carried out to complete this study. 1. Twenty-eight tooth morphologies were constructed three-dimensionally for finite element analysis and saved as separate files. 2. Standard brackets were attached so that the FA points coincided with the centers of the brackets. 3. A study model of the patient was made. 4. Using the study model, the crown inclination, angulation, and vertical distance from the tip of each tooth were measured with specially designed tools. 5. The arch form was determined from a picture of the model using an image-processing technique. 6. The measured data were input as a rotational matrix. 7. The program produces an output file containing the necessary information on the three-dimensional positions of the teeth, applicable to several commonly used finite element programs. The program for the basic algorithm was written in Turbo-C, and the resulting output file was applied to ANSYS. This standardized measuring procedure and program reduce the time required, especially for preprocessing, and can easily be applied to other malocclusions.
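The core preprocessing step above is a rotation of a standard tooth mesh by the measured angulation and inclination, followed by translation onto the arch form. The sketch below illustrates that idea only; the function names, axis conventions, and angles are illustrative assumptions, not the authors' actual Turbo-C code.

```python
# Hypothetical sketch: place a standard tooth mesh node by rotating it with the
# measured crown angulation (about z) and inclination (about x), then
# translating it to its measured position on the arch form.
import math

def rotation_matrix(angulation_deg, inclination_deg):
    """Combined rotation: angulation about z, then inclination about x (assumed axes)."""
    a = math.radians(angulation_deg)
    i = math.radians(inclination_deg)
    rz = [[math.cos(a), -math.sin(a), 0.0],
          [math.sin(a),  math.cos(a), 0.0],
          [0.0, 0.0, 1.0]]
    rx = [[1.0, 0.0, 0.0],
          [0.0, math.cos(i), -math.sin(i)],
          [0.0, math.sin(i),  math.cos(i)]]
    # Matrix product rz @ rx, written out with plain lists.
    return [[sum(rz[r][k] * rx[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def place_node(node, matrix, arch_position):
    """Rotate one mesh node, then translate it to its position on the arch form."""
    rotated = [sum(matrix[r][c] * node[c] for c in range(3)) for r in range(3)]
    return [rotated[k] + arch_position[k] for k in range(3)]

# Example: a node 8 mm above the FA point, 5 deg angulation, 10 deg inclination,
# placed at an assumed arch-form coordinate (12, 30, 0) mm.
m = rotation_matrix(5.0, 10.0)
print(place_node([0.0, 0.0, 8.0], m, [12.0, 30.0, 0.0]))
```

Applying this per tooth yields the three-dimensional position data that the output file would carry to a finite element program.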


Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.3
    • /
    • pp.192-207
    • /
    • 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems, but it has several problems because of its operational rigidity. First, the SSL protocol places a considerable burden on CPU utilization, lowering the performance of security services in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, the SSL protocol can be vulnerable to cryptanalysis because it uses keys with a fixed set of algorithms. Third, it is difficult to add and use new cryptographic algorithms. Finally, it is difficult for developers to learn and use the cryptography APIs (Application Program Interfaces) required by the SSL protocol. Hence, we need to address these problems and, at the same time, provide a secure and convenient way to operate the SSL protocol and handle data efficiently. In this paper, we propose an SSL component designed and implemented with the CBD (Component Based Development) concept to satisfy these requirements. The SSL component provides not only data encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Further, the SSL component can improve productivity and reduce development cost because it can be reused, and when new algorithms are added or existing ones are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service in the application layer. We first derived the requirements and then designed and implemented the SSL component together with the confidentiality and integrity components on which it depends. All of these components are implemented as EJBs, which allows efficient data handling by encrypting/decrypting only the selected data, and improves usability by letting the user choose the data and mechanism as intended. In conclusion, our tests and evaluation show that the SSL component is more usable and efficient than the existing SSL protocol, because its processing time increases at a lower rate than that of the SSL protocol.
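The abstract's key efficiency idea is protecting only the data the caller selects instead of the whole stream. A minimal sketch of that idea using an integrity component: an HMAC tag covers only the chosen fields, so cost scales with the protected data. The record layout, field names, and key are illustrative assumptions, not the paper's EJB API.

```python
# Sketch of selective protection: only fields listed in `fields` are covered by
# the HMAC tag; unprotected fields may change without invalidating the tag.
import hmac
import hashlib
import json

def protect(record, fields, key):
    """Return the record plus an HMAC-SHA256 tag covering only selected fields."""
    subset = {f: record[f] for f in sorted(fields)}
    payload = json.dumps(subset, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**record, "_tag": tag, "_protected": sorted(fields)}

def verify(record, key):
    """Recompute the tag over the protected fields and compare in constant time."""
    subset = {f: record[f] for f in record["_protected"]}
    payload = json.dumps(subset, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["_tag"])

key = b"demo-shared-secret"                     # illustrative key, not a real secret
msg = protect({"card": "1234-5678", "memo": "hello"}, ["card"], key)
print(verify(msg, key))                         # True: untouched record verifies
msg["card"] = "0000-0000"
print(verify(msg, key))                         # False: protected field was tampered
```

A confidentiality component would follow the same pattern with encryption in place of the HMAC; the point here is only the per-field selection.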

Development of a Small Animal Positron Emission Tomography Using Dual-layer Phoswich Detector and Position Sensitive Photomultiplier Tube: Preliminary Results (두층 섬광결정과 위치민감형광전자증배관을 이용한 소동물 양전자방출단층촬영기 개발: 기초실험 결과)

  • Jeong, Myung-Hwan;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jung, Jin-Ho;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.5
    • /
    • pp.338-343
    • /
    • 2004
  • Purpose: The purpose of this study was to develop a small animal PET scanner using a dual-layer phoswich detector to minimize the parallax error that degrades spatial resolution at the outer part of the field-of-view (FOV). Materials and Methods: The simulation tool GATE (Geant4 Application for Tomographic Emission) was used to derive optimal parameters for the small PET scanner, which was then built with those parameters. Lutetium Oxyorthosilicate (LSO) and Lutetium-Yttrium Aluminate-Perovskite (LuYAP) were used to construct the dual-layer phoswich crystal. 8 × 8 arrays of LSO and LuYAP pixels, each 2 mm × 2 mm × 8 mm in size, were coupled to a 64-channel position sensitive photomultiplier tube. The system consisted of 16 detector modules arranged in a single ring (ring inner diameter 10 cm, FOV of 8 cm). The data from the phoswich detector modules were fed into an ADC board in the data acquisition and preprocessing PC via sockets, a decoder block, an FPGA board, and a bus board, linked to a master PC that stored the event data on hard disk. Results: In a preliminary test of the system, reconstructed images were obtained using a pair of detectors, and sensitivity and spatial resolution were measured. Spatial resolution was 2.3 mm FWHM and sensitivity was 10.9 cps/μCi at the center of the FOV. Conclusion: The radioactivity distribution patterns were accurately represented in the sinograms and images obtained by the PET scanner with a pair of detectors. These preliminary results indicate that developing a high-performance small animal PET scanner is promising.
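The ring geometry quoted above can be checked with a few lines of arithmetic: 16 modules on a 10 cm inner-diameter ring leave about 19.6 mm of arc per module, which comfortably fits the 16 mm-wide 8 × 8 array of 2 mm pixels. This is an illustrative reconstruction from the abstract's numbers, not the authors' acquisition code.

```python
# Geometry sketch of the described scanner: module centers evenly spaced on a
# ring of 10 cm inner diameter, plus the arc length available per module.
import math

def module_centers(n_modules=16, inner_diameter_cm=10.0):
    """(x, y) centers of detector modules evenly spaced around the ring."""
    r = inner_diameter_cm / 2.0
    return [(r * math.cos(2 * math.pi * k / n_modules),
             r * math.sin(2 * math.pi * k / n_modules))
            for k in range(n_modules)]

# Arc length per module at the inner ring surface, in mm:
arc_per_module_mm = math.pi * 100.0 / 16   # inner circumference (mm) / modules
print(round(arc_per_module_mm, 1))         # ~19.6 mm of arc vs. a 16 mm crystal array
```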

Design of a Crowd-Sourced Fingerprint Mapping and Localization System (군중-제공 신호지도 작성 및 위치 추적 시스템의 설계)

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.9
    • /
    • pp.595-602
    • /
    • 2013
  • WiFi fingerprinting is well known as an effective localization technique for indoor environments. However, it requires a large amount of pre-built fingerprint maps over the entire space, and due to environmental changes, these maps have to be rebuilt or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping has attracted much interest from researchers. This approach lets many volunteer users share the WiFi fingerprints they collect in a specific environment, so crowd-sourced fingerprinting can keep fingerprint maps automatically up to date. In most previous systems, however, individual users were asked to enter their positions manually to build their local fingerprint maps. Moreover, those systems have no principled mechanism to keep fingerprint maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization (CMAL) system. The proposed system can not only automatically build and/or update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track their positions using the up-to-date maps. The CMAL system consists of multiple clients running on individual smartphones to collect fingerprints and a central server that maintains a database of fingerprint maps. Each client contains a particle filter-based WiFi SLAM engine that tracks the smartphone user's position and builds a local fingerprint map. The server adopts a Gaussian interpolation-based error filtering algorithm to maintain the integrity of the fingerprint maps. Through various experiments, we show the high performance of our system.
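The server-side error filtering described above can be sketched as a leave-one-out check: each reported RSSI is compared against a Gaussian-kernel-weighted estimate from nearby reports, and large deviations are discarded. The kernel width and rejection threshold here are illustrative assumptions; the paper's exact algorithm may differ.

```python
# Hedged sketch of Gaussian-interpolation error filtering for crowd-sourced
# RSSI fingerprints. Each report is (x, y, rssi_dBm).
import math

def gaussian_estimate(x, y, reports, sigma=2.0, exclude=None):
    """Gaussian-kernel-weighted RSSI estimate at (x, y) from the other reports."""
    num = den = 0.0
    for i, (rx, ry, rssi) in enumerate(reports):
        if i == exclude:
            continue
        w = math.exp(-((rx - x) ** 2 + (ry - y) ** 2) / (2 * sigma ** 2))
        num += w * rssi
        den += w
    return num / den if den > 0 else None

def filter_outliers(reports, sigma=2.0, max_dev_db=15.0):
    """Keep reports whose RSSI is close to their leave-one-out estimate."""
    kept = []
    for i, (x, y, rssi) in enumerate(reports):
        est = gaussian_estimate(x, y, reports, sigma, exclude=i)
        if est is None or abs(rssi - est) <= max_dev_db:
            kept.append((x, y, rssi))
    return kept

# Three consistent reports and one implausible -90 dBm reading nearby:
reports = [(0, 0, -50), (1, 0, -52), (0, 1, -51), (1, 1, -90)]
print(filter_outliers(reports))   # the -90 dBm report is dropped
```

A production server would run this per access point and per map cell; the sketch shows only the filtering rule itself.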

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data is multidimensional time series data, which is difficult to handle because both the characteristics of multidimensional data and those of time series data must be considered. With multidimensional data, correlations between variables should be taken into account, and existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. In addition, time series data is typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis; these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when the data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect anomalies by comparing predicted and actual values; their performance drops when the model is not solid or when the data contains noise or outliers, so they are restricted to training data free of such contamination. An autoencoder built with artificial neural networks is trained to reproduce its input as closely as possible. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that satisfies neither a probability distribution nor a linearity assumption.
In addition, it can be trained without labeled data, i.e., unsupervised. However, autoencoders still have limitations in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly because of the time series preprocessing described above. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to mitigate the limitations of local outlier identification in multidimensional data. Multimodal models are commonly used to learn different types of input, such as voice and images; the different modalities share the autoencoder's bottleneck and thereby learn their correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance over 41 variables was measured for the proposed and comparison models. Restoration performance differs by variable: the loss values for the Memory, Disk, and Network modalities were small in all three autoencoder models, so restoration worked well. The Process modality showed no significant difference across the three models, while the CPU modality showed excellent performance with CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order of CMAE, MAE, and UAE.
In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensional increase they cause can slow down inference. The proposed model avoids these, making it easy to apply to practical tasks in terms of inference speed and model management.
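The conditioning trick the paper describes, feeding time in as a condition instead of widening the input with sliding windows, can be sketched in a few lines: encode hour-of-day cyclically so 23:00 and 00:00 stay close, then append it to the metric vector. The metric names and the (sin, cos) encoding are illustrative assumptions; the paper does not specify its exact time encoding.

```python
# Minimal sketch of time-as-condition input construction for a conditional
# autoencoder: metrics plus a cyclic hour-of-day encoding, no window expansion.
import math

def time_condition(hour):
    """Cyclic (sin, cos) encoding of hour-of-day, so 23:00 and 00:00 are neighbours."""
    angle = 2.0 * math.pi * (hour % 24) / 24.0
    return [math.sin(angle), math.cos(angle)]

def conditioned_input(metrics, hour):
    """Input vector for a conditional autoencoder: metric values + time condition."""
    return list(metrics) + time_condition(hour)

cpu_mem = [0.42, 0.87]                  # e.g. CPU and memory utilisation (assumed)
print(conditioned_input(cpu_mem, 23))   # 4 values: 2 metrics + sin/cos of the hour
```

Note the input stays at (number of metrics + 2) dimensions regardless of how much history the model sees, which is the practical advantage the abstract claims over sliding windows.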

Dynamic Virtual Ontology using Tags with Semantic Relationship on Social-web to Support Effective Search (효율적 자원 탐색을 위한 소셜 웹 태그들을 이용한 동적 가상 온톨로지 생성 연구)

  • Lee, Hyun Jung;Sohn, Mye
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.19-33
    • /
    • 2013
  • In this research, the proposed Dynamic Virtual Ontology using Tags (DyVOT) supports dynamic search of resources according to the user's requirements, using tags from social-web resources. In general, tags are short word annotations attached by social users to information resources such as web pages, images, YouTube videos, and so on; tags therefore characterize and mirror the resources they describe and can act as metadata matched to those resources. Consequently, semantic relationships between tags can be extracted from the dependency relationships between tags as representatives of resources. However, this is limited by the synonyms and homonyms that occur among tags, which are usually short series of words. Research on folksonomies has therefore applied tags to the semantic classification of words, and some studies focus on clustering and/or classifying resources by semantic relationships among tags. These studies are still limited, however, because they focus on semantic hypernym/hyponym relationships or clustering among tags without considering the conceptual associative relationships between the classified or clustered groups, which makes it difficult to search resources effectively according to user requirements. In this research, the proposed DyVOT uses tags and constructs an ontology for effective search. We assume that tags are extracted from user requirements and used to construct multiple sub-ontologies as combinations of some or all of those tags. In addition, DyVOT constructs an ontology based on hierarchical and associative relationships among tags for effective search of a solution. The ontology is composed of a static ontology and a dynamic ontology.
The static ontology defines semantic hypernym/hyponym (hierarchical) relationships among tags, as in (http://semanticcloud.sandra-siegel.de/), with a tree structure. From the static ontology, DyVOT extracts multiple sub-ontologies using sub-tags constructed from parts of the tags; each sub-ontology is built from the hierarchy paths that contain its sub-tag. To create the dynamic ontology, DyVOT must define associative relationships among the sub-ontologies extracted from the hierarchical relationships of the static ontology. An associative relationship is defined by the resources shared between tags linked by the sub-ontologies, and the strength of an association is measured by the degree of shared resources allocated to the tags of the sub-ontologies. If the association value exceeds a threshold, a new associative relationship between the tags is created. These associative relationships are used to merge the sub-ontologies and construct a new hierarchy. To build the dynamic ontology, a new class is defined that links two or more sub-ontologies whose tags have been shown, through shared resources, to be highly associated. This class is used to generate a new hierarchy with the extracted sub-ontologies, becomes part of the dynamic ontology, and anchors new hypernym/hyponym relationships between itself and the tags linked to the sub-ontologies. Finally, DyVOT is completed by the newly defined associative relationships extracted from the hierarchical relationships among tags. Resources are matched against DyVOT, which narrows the search boundary and shortens the search paths.
Whereas a static data catalog (Dean and Ghemawat, 2004; 2008) searches resources statically according to user requirements, the proposed DyVOT searches resources dynamically using multiple sub-ontologies with parallel processing. In this light, DyVOT improves the correctness and agility of search and reduces search effort by shortening the search paths.
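The association step described above, linking two sub-ontology tags when the proportion of resources they share exceeds a threshold, can be sketched directly on sets of resource identifiers. The overlap measure (Jaccard) and the threshold value are illustrative assumptions; the paper defines the degree of shared resources but not this exact formula.

```python
# Hedged sketch of DyVOT's associative-relationship step: tags become linked
# when their shared-resource degree passes a threshold.
def association(resources_a, resources_b):
    """Degree of shared resources between two tags (here: Jaccard overlap)."""
    a, b = set(resources_a), set(resources_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def associative_links(tag_resources, threshold=0.3):
    """All tag pairs whose shared-resource degree reaches the threshold."""
    tags = sorted(tag_resources)
    return [(t1, t2) for i, t1 in enumerate(tags) for t2 in tags[i + 1:]
            if association(tag_resources[t1], tag_resources[t2]) >= threshold]

# Hypothetical tags with the resource IDs they annotate:
tag_resources = {
    "jaguar_car":    {"r1", "r2", "r3"},
    "sports_car":    {"r2", "r3", "r4"},
    "jaguar_animal": {"r9"},
}
print(associative_links(tag_resources))  # only the two car-related tags are linked
```

In the full system, each link above would trigger the creation of a new class merging the two sub-ontologies into the dynamic ontology.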

Test Bed Studies with Highly Efficient Amine CO2 Solvent (KoSol-4) (고효율 습식 아민 CO2 흡수제(KoSol-4)를 적용한 Test bed 성능시험)

  • Lee, Ji Hyun;Kwak, No-Sang;Lee, In Young;Jang, Kyung Ryoung;Jang, Se Gyu;Lee, Kyung Ja;Han, Gwang Su;Oh, Dong-Hun;Shim, Jae-Goo
    • Korean Chemical Engineering Research
    • /
    • v.51 no.2
    • /
    • pp.267-271
    • /
    • 2013
  • Test bed studies with the highly efficient amine CO2 solvent (KoSol-4) developed by KEPCO Research Institute were performed. For the first time in Korea, a post-combustion CO2 capture process treating 2 tons of CO2 per day from a slipstream of the flue gas of a coal-fired power station was evaluated, and the solvent regeneration energy was analyzed to provide reliable performance data for KoSol-4. For this purpose we ran five test campaigns, varying the solvent flow rate and the stripper pressure. The overall results showed that the CO2 removal rate met the technical guideline suggested by IEA-GHG (90% removal) and that the regeneration energy of KoSol-4 was about 3.0~3.2 GJ/tCO2, roughly a 25% reduction compared with the commercial solvent MEA (monoethanolamine). Based on these results we could confirm the good performance of the KoSol-4 solvent and the CO2 capture process developed by KEPCO Research Institute, and it is expected that the cost of CO2 avoided could be reduced drastically if KoSol-4 is applied to a commercial-scale CO2 capture plant.
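The two headline numbers above are mutually consistent, which is worth a quick arithmetic check: if KoSol-4 needs about 3.0~3.2 GJ/tCO2 and that is ~25% below MEA, the implied MEA baseline is about 4.1 GJ/tCO2, in line with values commonly cited for aqueous MEA. This is arithmetic on the abstract's own figures, not new data.

```python
# Consistency check on the reported regeneration-energy figures.
kosol_energy = (3.0 + 3.2) / 2          # GJ per tonne CO2, midpoint of the range
reduction = 0.25                        # reported fractional saving vs. MEA
implied_mea = kosol_energy / (1 - reduction)
print(round(implied_mea, 2))            # implied MEA baseline, GJ/tCO2
```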

The Study of Land Surface Change Detection Using Long-Term SPOT/VEGETATION (장기간 SPOT/VEGETATION 정규화 식생지수를 이용한 지면 변화 탐지 개선에 관한 연구)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, In-Hwan
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.13 no.4
    • /
    • pp.111-124
    • /
    • 2010
  • Monitoring land surface change is an important research field, since the relevant parameters relate to land use, climate change, meteorology, agriculture, the surface energy balance, and the surface environment system. Many change detection methods have been presented to deliver more detailed information, with tools ranging from ground-based measurement to satellite multi-spectral sensors, and using high-resolution satellite data is now considered the most efficient way to monitor extensive land environmental systems at high spatial and temporal resolution. In this study we use satellites at two spatial resolutions: SPOT/VEGETATION, with 1 km resolution, to detect area change coarsely and to determine an objective threshold, and Landsat, with high resolution, to resolve detailed land environmental change. Because of their different spatial resolutions, the two systems have different observation characteristics, such as repeat cycle and global coverage; by correlating them, we can detect land surface change from medium to high resolution. A K-means clustering algorithm is applied to detect changed areas between two images from different dates. When solar spectral bands are used directly, complicated surface reflectance scattering makes change detection difficult and can seriously distort the interpretation of surface characteristics: even when the intrinsic surface reflectance is constant, the measured value can change with the relative positions of the sun and the sensor.
To reduce these effects, this study uses a long-term Normalized Difference Vegetation Index (NDVI) record, computed from SPOT/VEGETATION solar channels after atmospheric and bidirectional correction, to provide an objective threshold for detecting land surface change, since NDVI is less sensitive to solar geometry than a single solar channel. Surface change detection based on the long-term NDVI shows improved results compared with using Landsat alone.
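The two quantitative ideas in this abstract are the index itself, NDVI = (NIR − Red) / (NIR + Red), and a threshold on the NDVI difference between dates to flag changed pixels. A minimal sketch of both follows; the threshold value and the toy reflectance pairs are illustrative assumptions.

```python
# Sketch of NDVI-based change detection between two acquisition dates.
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def changed_pixels(scene_t1, scene_t2, threshold=0.2):
    """Indices of pixels whose NDVI shifted by more than the threshold."""
    return [i for i, ((n1, r1), (n2, r2)) in enumerate(zip(scene_t1, scene_t2))
            if abs(ndvi(n1, r1) - ndvi(n2, r2)) > threshold]

# Each pixel is a (NIR, Red) reflectance pair; pixel 1 loses vegetation cover.
t1 = [(0.5, 0.1), (0.6, 0.1), (0.4, 0.3)]
t2 = [(0.5, 0.1), (0.2, 0.3), (0.4, 0.3)]
print(changed_pixels(t1, t2))  # [1]
```

Because NDVI is a band ratio, much of the sun-sensor geometry effect cancels, which is why the study thresholds NDVI differences rather than raw channel differences.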