• Title/Summary/Keyword: 처리시스템 (processing system)

Search Results: 27,723 (processing time: 0.059 seconds)

Individual Thinking Style leads its Emotional Perception: Development of Web-style Design Evaluation Model and Recommendation Algorithm Depending on Consumer Regulatory Focus (사고가 시각을 바꾼다: 조절 초점에 따른 소비자 감성 기반 웹 스타일 평가 모형 및 추천 알고리즘 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.171-196 / 2018
  • With the development of the web, two-way communication and evaluation became possible and marketing paradigms shifted. To meet the needs of consumers, web design trends continuously respond to consumer feedback. As the web grows more important, both academia and industry study consumer emotion and satisfaction on the web. However, some consumer characteristics are not well considered. Demographic characteristics such as age and sex have been studied extensively, but few studies consider psychological characteristics such as regulatory focus (a promotion- vs. prevention-oriented motivational disposition). In this study, we analyze the effect of web style on consumer emotion. Many studies analyze the relationship between the web and regulatory focus, but most concentrate on the purpose of web use, particularly motivation and information search, rather than on web style and design. The web communicates with users through visual elements. Because the human brain is influenced by all five senses, both design factors and emotional responses matter in the web environment. Therefore, in this study, we examine the relationship between web style and design on the one hand and consumer emotion and satisfaction on the other. Previous studies have considered the effects of web layout, structure, and color on emotions. In this study, however, we excluded these web components and analyzed the relationship between consumer satisfaction and the emotional indexes of web style only. For this analysis, we surveyed 204 consumers, presenting 40 web-style themes; each consumer evaluated four themes. The emotional adjectives evaluated by consumers comprised 18 contrast pairs, and higher-level emotional indexes were extracted through factor analysis. The resulting indexes were 'softness,' 'modernity,' 'clearness,' and 'jam.' 
Hypotheses were established on the assumption that the emotional indexes affect consumer satisfaction differently. In the analysis, hypotheses 1, 2, and 3 were accepted and hypothesis 4 was rejected: the effect of 'jam' on satisfaction was negative rather than positive. In other words, 'softness,' 'modernity,' and 'clearness' have a positive effect on consumer satisfaction; consumers prefer styles that feel soft, emotional, natural, rounded, dynamic, modern, elaborate, unique, bright, pure, and clear. 'Jam' has a negative effect on satisfaction; consumers prefer styles that feel empty, plain, and simple. Regulatory focus produces differences in motivation and propensity across many domains. It matters for organizational behavior and decision making, and it affects not only political, cultural, and ethical judgments and behavior but also broad psychological questions. Regulatory focus also shapes emotional response: promotion focus responds more strongly to positive emotions, whereas prevention focus responds more strongly to negative emotions. Web style is a type of service, and consumer satisfaction is affected not only by cognitive evaluation but also by emotion, and this emotional response depends on whether the consumer expects benefit or harm. It is therefore necessary to examine how consumers' emotional responses to web style differ according to regulatory focus. In the MMR analysis, hypothesis 5.3 was accepted and hypothesis 5.4 was rejected, although the effect for 5.4 was significant in the direction opposite to the hypothesis. Through this validation, we confirmed the mechanism of emotional response according to regulatory focus tendency. 
Using these results, we developed the structure of a web-style recommendation system and recommendation methods based on regulatory focus. We classified consumers into three regulatory-focus groups (promotion, grey, and prevention) and proposed a web-style recommendation method for each group. With further development, we expect that existing regulatory focus theory can be extended beyond motivation to emotional and behavioral responses according to regulatory focus tendency, and that web styles can be recommended according to the regulatory focus and the emotions consumers most prefer.
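The grouping-and-recommendation idea in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' model: the group threshold and index weights are hypothetical, with only the signs of the weights (positive for 'softness,' 'modernity,' 'clearness'; negative for 'jam') taken from the abstract.

```python
def classify_focus(promotion_score, prevention_score, margin=0.5):
    """Split consumers into promotion / grey / prevention groups.
    The margin is an illustrative threshold, not from the paper."""
    diff = promotion_score - prevention_score
    if diff > margin:
        return "promotion"
    if diff < -margin:
        return "prevention"
    return "grey"

# Hypothetical per-group weights; only the signs follow the abstract.
WEIGHTS = {
    "promotion":  {"softness": 1.0, "modernity": 1.2, "clearness": 1.0, "jam": -0.8},
    "grey":       {"softness": 1.0, "modernity": 1.0, "clearness": 1.0, "jam": -1.0},
    "prevention": {"softness": 0.8, "modernity": 0.8, "clearness": 1.2, "jam": -1.4},
}

def recommend(themes, promotion_score, prevention_score, top_n=2):
    """Rank web-style themes by a weighted sum of emotional indexes,
    with weights chosen by the consumer's regulatory-focus group."""
    group = classify_focus(promotion_score, prevention_score)
    w = WEIGHTS[group]
    ranked = sorted(themes,
                    key=lambda t: sum(w[k] * t[k] for k in w),
                    reverse=True)
    return [t["name"] for t in ranked[:top_n]]
```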

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintaining ICT infrastructure and preventing failures through anomaly detection is becoming important. System monitoring data is multidimensional time-series data, which is difficult to handle because the characteristics of multidimensional data and of time-series data must both be considered. For multidimensional data, correlations between variables must be taken into account; existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time-series data, in turn, is typically preprocessed with sliding windows and time-series decomposition for autocorrelation analysis, techniques that further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and machine learning and artificial neural networks are now actively applied. Statistical methods are hard to apply when data is non-homogeneous, and they do not detect local outliers well. Regression-based detection learns a regression model under parametric assumptions and flags anomalies by comparing predicted and actual values; its performance drops when the model is weak or the data contains noise or outliers, so it requires training data free of them. An autoencoder, built from artificial neural networks, is trained to reproduce its input as closely as possible. It has many advantages over probability-based and linear models, cluster analysis, and supervised learning: it can be applied to data that satisfies neither probability-distribution nor linearity assumptions. 
In addition, it can be trained in an unsupervised manner, without labeled data. However, it remains limited in identifying local outliers in multidimensional data, and the characteristics of time-series data greatly increase the dimensionality. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly-detection performance by considering local outliers and time-series characteristics. First, we applied a Multimodal Autoencoder (MAE) to mitigate the local-outlier limitation for multidimensional data. Multimodal networks are commonly used to learn different types of input, such as voice and images; the modalities share the autoencoder's bottleneck and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time-series data effectively without increasing the dimensionality. Conditional inputs usually take categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE was verified against a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was examined for the proposed and comparison models. Reconstruction quality varies by variable: the loss is small for the Memory, Disk, and Network modalities in all three autoencoders, so those are reconstructed well. The Process modality showed no significant difference across the three models, while the CPU modality showed excellent performance under CMAE. ROC curves were prepared to evaluate anomaly-detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On every indicator, performance ranked in the order CMAE, MAE, UAE. 
In particular, recall was 0.9828 for CMAE, confirming that it detects almost all anomalies. Accuracy also improved, to 87.12%, and the F1-score was 0.8883, which we consider suitable for anomaly detection. In practical terms, the proposed model has advantages beyond raw performance: techniques such as time-series decomposition and sliding windows add procedures that must be managed, and the dimensionality they add can slow inference. The proposed model avoids these costs, making it easy to apply in practice with respect to inference speed and model management.
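The detection step described above, flagging a sample when its reconstruction error is large, can be sketched independently of the network. In the sketch below, the periodic time condition and the reconstruction function are stand-ins: the paper's learned CMAE is replaced by an arbitrary `reconstruct` callable, and the sin/cos encoding is one common way to feed a periodic time condition, not necessarily the authors' choice.

```python
import numpy as np

def time_condition(hour):
    # Encode hour-of-day periodically (sin/cos) so 23:00 and 00:00 are
    # close; a stand-in for the paper's conditional time input.
    angle = 2.0 * np.pi * hour / 24.0
    return np.array([np.sin(angle), np.cos(angle)])

def anomaly_scores(x, reconstruct):
    # Mean squared reconstruction error per sample; a trained
    # autoencoder reconstructs normal samples well, so a large error
    # suggests an anomaly.
    x_hat = reconstruct(x)
    return np.mean((x - x_hat) ** 2, axis=1)

def detect(x, reconstruct, threshold):
    # Flag samples whose reconstruction error exceeds the threshold.
    return anomaly_scores(x, reconstruct) > threshold
```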

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1565-1576 / 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from the top-of-atmosphere observations of the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from atmospheric correction is used to estimate marine environmental parameters such as chlorophyll-a concentration, total suspended material concentration, and absorption by dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, since it significantly affects the reliability of all other ocean color products. In clear waters, however, the atmospheric path radiance in the blue wavelengths can exceed the water-leaving radiance by more than a factor of ten, which makes atmospheric correction highly error-sensitive: a 1% error in estimating the atmospheric radiance can cause errors of more than 10% in Rrs. Quality assessment of Rrs after atmospheric correction is therefore essential for reliable ocean-environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) was applied and modified to account for the spectral characteristics of GOCI-II. This method is officially employed in the ocean color satellite data processing system of the National Oceanic and Atmospheric Administration (NOAA). It assigns Rrs a quality score from 0 to 1 and classifies the water into 23 types. When the QA algorithm was applied to early-phase GOCI-II data with limited calibration, the most frequent score was a relatively low 0.625. 
When it was applied to the improved GOCI-II atmospheric correction results with updated calibrations, the most frequent score rose to 0.875. The water-type analysis indicated that parts of the East Sea, the South Sea, and the Northwest Pacific Ocean are primarily relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly highly turbid case-II waters. We expect the QA algorithm to help GOCI-II users not only to statistically identify Rrs retrievals with significant errors but also to achieve more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
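The scoring idea, normalizing the spectral shape, matching it to the nearest reference water type, and scoring the fraction of bands that fall inside that type's envelope, can be sketched as below. The reference tables here are tiny hypothetical stand-ins for the SeaBASS-derived lookup tables, and the cosine-similarity classifier is a simplification of the published QA method.

```python
import numpy as np

def qa_score(rrs, ref_means, ref_lower, ref_upper):
    # Normalize the Rrs spectrum so only its spectral shape matters.
    nrrs = rrs / np.linalg.norm(rrs)
    # Classify to the nearest reference water type by cosine similarity.
    sims = (ref_means @ nrrs) / np.linalg.norm(ref_means, axis=1)
    wtype = int(np.argmax(sims))
    # Score = fraction of bands inside that type's envelope (0..1).
    inside = (nrrs >= ref_lower[wtype]) & (nrrs <= ref_upper[wtype])
    return wtype, float(inside.mean())
```

With 23 reference types and per-band envelopes, scores land on a discrete 0-to-1 grid, which is consistent with the modal scores of 0.625 and 0.875 reported above.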

Smoking Adolescents' Acquisition of Cigarettes and Status of Proof of Age (흡연을 하는 청소년의 담배 구입 경로 및 신분 확인의 유무)

  • Kim, Hee Ra;Kim, Ji Young;Lee, Gee Hyung;Choung, Ji Tae;Park, Sang Hee
    • Clinical and Experimental Pediatrics / v.48 no.4 / pp.363-368 / 2005
  • Purpose : The aim of this study was to identify where and how adolescents acquire cigarettes and how many are asked for identification when purchasing them. Methods : This study was conducted in 2003 with 2,200 middle- and high-school students aged 13 to 18 (1,098 males; 1,102 females) in Ansan, Korea. The questionnaire was anonymous and self-administered in school, and the data were analyzed with the chi-square test for trends. Results : The prevalence of smoking among respondents was about 20 percent; it was higher in males than in females and in older students than in younger ones (P<0.001). The most frequent source of cigarettes was purchase from a store (36.3 percent), and about 29.2 percent of students borrowed them from friends or family members. For both sexes, the main sources were store purchase and borrowing; younger students borrowed cigarettes more often, while older students purchased them from stores more often. Only 48.8 percent were asked for proof of age during their purchase. Of those asked, about 73.3 percent answered that this made it difficult to buy cigarettes (P<0.001), and they found it more difficult when asked for a photo ID than when simply asked their age (P=0.019). Conclusion : So far there has been no systematic prevention of adolescent smoking. It is difficult for minors to purchase cigarettes when asked for proof of age, yet most minors buy cigarettes at stores. Prevention efforts should therefore include educating retailers not to sell cigarettes to minors and enforcing existing laws that require youth to provide proof of age when attempting to buy cigarettes.
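The chi-square test for trend used in the analysis can be sketched as follows. This is the standard Cochran-Armitage form with integer group scores by default; it is a generic implementation, not necessarily the exact variant the authors used.

```python
def chi2_trend(cases, totals, scores=None):
    # Cochran-Armitage chi-square test for trend across ordered groups
    # (e.g., smokers per age group). Returns the 1-df chi-square statistic.
    k = len(cases)
    scores = scores or list(range(k))   # default: equally spaced scores
    N = sum(totals)
    R = sum(cases)
    p = R / N                           # overall proportion
    # Score-weighted deviation of observed cases from expectation.
    t = sum(s * (r - n * p) for s, r, n in zip(scores, cases, totals))
    # Variance of t under the null hypothesis of no trend.
    var = p * (1 - p) * (
        sum(n * s * s for s, n in zip(scores, totals))
        - sum(n * s for s, n in zip(scores, totals)) ** 2 / N
    )
    return t * t / var
```

A flat prevalence across groups gives a statistic near zero; a monotone rise (as with smoking by age here) inflates it.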

Estimation of Optimal Size of the Treatment Facility for Nonpoint Source Pollution due to Watershed Development (비점오염원의 정량화방안에 따른 적정 설계용량결정)

  • Kim, Jin-Kwan
    • Journal of the Korean Society of Hazard Mitigation / v.8 no.6 / pp.149-153 / 2008
  • The pollutant load generated before and after the development of a watershed should be quantitatively estimated and controlled to minimize water contamination. The Ministry of Environment has provided a guideline for the legal management of nonpoint sources since 2006. However, the rational method proposed in the guideline for determining the treatment capacity for nonpoint sources is problematic in field application, because it does not reflect project-specific cases and overestimates the pollutant load to be reduced. We therefore performed a design-rainfall analysis using an analytical probabilistic method to estimate the additional pollutant load generated by a project, and we suggest a methodology for estimating pollutant capacity in place of the simple rational method. The suggested methodology determines a reasonable capacity and efficiency for a treatment facility by estimating the nonpoint-source pollutant load, allowing the watershed to be managed appropriately. We applied it to a housing land development project and a dam construction project. When the treatment capacity is determined by the rational method, without considering the type of project, 90% of the pollutant load generated by the development must be treated, which would require investing about 30% of the total development cost in the treatment facility; this cost is too high to be realistic. With the suggested method, the target pollutant load to be reduced is 10 to 30% of the load generated by the development, requiring about 5 to 10% of the total cost. Nonpoint sources must be controlled for water resources management, but treating 90% of the pollutant load generated by a development is not feasible. The pollutant capacity from nonpoint sources should instead be estimated and controlled according to the project type, which in practice is very important for watershed management. The results of this study should therefore be more reasonable than the rational method proposed by the Ministry of Environment.
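For reference, the simple first-flush capture calculation that underlies rational-method sizing can be written in a few lines. The formula is the standard water-quality-volume relation V = C · i · A; the numbers in the test are illustrative and are not the Ministry of Environment guideline values.

```python
def treatment_volume_m3(area_m2, runoff_coeff, rainfall_mm):
    # Water-quality capture volume V = C * i * A, with the design
    # (first-flush) rainfall depth converted from millimetres to metres.
    return area_m2 * runoff_coeff * rainfall_mm / 1000.0
```

For example, a 1-hectare site with runoff coefficient 0.6 and a 5 mm design depth would need 30 m3 of capture volume under this relation.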

Investigation of the Rice Plant Transfer and the Leaching Characteristics of Copper and Lead for the Stabilization Process with a Pilot Scale Test (논토양 안정화 현장 실증 시험을 통한 납, 구리의 용출 저감 및 벼로의 식물전이 특성 규명)

  • Lee, Ha-Jung;Lee, Min-Hee
    • Economic and Environmental Geology / v.45 no.3 / pp.255-264 / 2012
  • Stabilization of Cu- and Pb-contaminated farmland soils using limestone (CaCO3) and steel-making slag as immobilization amendments was investigated through batch tests, continuous column experiments, and a pilot-scale feasibility study with four testing grounds at the contaminated site. In the batch experiments, the amendment with a mixture of 3% limestone and 2% steel-making slag reduced Cu and Pb leaching by more than 85% compared with unamended soil. An acrylic column (1 m long, 15 cm in diameter) equipped with valves, tubes, and a sprinkler was used for the continuous column experiments. Without amendment, the Pb concentration of the column leachate remained above 0.1 mg/L (the groundwater tolerance limit); with 3% limestone and 2% steel-making slag, the Pb leaching concentration fell by more than 60% within one year and remained below 0.04 mg/L. In the unamended testing ground, the Pb and Cu concentrations of soil water after 60 days of incubation were 0.38 mg/L and 0.69 mg/L, respectively, suggesting continuous leaching of Cu and Pb from the site. In the testing ground amended with the mixture of 3% limestone + 2% steel-making slag, no water-soluble Pb or Cu was detected after 20 days of incubation. In all testing grounds, Pb and Cu transfer to the plant decreased in the order root > leaves (including stem) > rice grain, and the amendment reduced Pb and Cu transfer to the plant by more than 75% compared with no amendment. These results show that amending with a mixture of limestone and steel-making slag decreases not only the leaching of heavy metals but also their transfer from soil to plants.

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.101-107 / 2014
  • With the development of online services, databases have shifted from static structures to dynamic stream structures. Data mining techniques have long served as decision-making tools, for example in marketing strategy and DNA analysis, but areas of recent interest such as sensor networks, robotics, and artificial intelligence require the ability to analyze real-time data quickly. Landmark-window-based frequent pattern mining, one of the stream mining approaches, performs mining over parts of the database, or over each transaction, instead of over all the data. In this paper, we analyze and evaluate two well-known landmark-window-based frequent pattern mining algorithms, Lossy Counting and hMiner. When Lossy Counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, performs a mining operation whenever a new transaction occurs; because it extracts frequent patterns as soon as a transaction arrives, its results always reflect the latest real-time information. For this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy Counting, and the latest one, hMiner. As criteria, we first consider total runtime and average processing time per transaction; to compare the efficiency of their storage structures, we also evaluate maximum memory usage. Lastly, we show how stably the two algorithms mine databases whose number of items gradually increases. 
In mining time and transaction processing, hMiner is faster than Lossy Counting. Because hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy Counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy Counting in maximum memory usage: hMiner must keep the full information of each candidate pattern in its hash buckets, while Lossy Counting's lattice lets items shared by multiple patterns be stored once, making its memory usage more efficient. In the scalability evaluation, however, hMiner is the more efficient, for two reasons: as the number of items grows, fewer items are shared, which weakens Lossy Counting's memory efficiency, and as the number of transactions grows, its pruning becomes less effective. From the experimental results, we conclude that landmark-window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory; their data structures need to be made more efficient before they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
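For concreteness, the primitive algorithm discussed above, Lossy Counting (Manku and Motwani), can be sketched for single items. This is the textbook item-frequency version, not the pattern-mining variant benchmarked in the paper: each bucket spans ceil(1/epsilon) transactions, and at each bucket boundary entries whose count plus error bound fall at or below the bucket number are pruned, so any surviving count underestimates the true frequency by at most epsilon * N.

```python
from math import ceil

def lossy_counting(stream, epsilon):
    # Approximate item frequencies with error at most epsilon * N.
    width = ceil(1.0 / epsilon)      # transactions per bucket
    counts, deltas = {}, {}          # running count, max undercount
    bucket = 1
    for n, item in enumerate(stream, start=1):
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1
        if n % width == 0:           # bucket boundary: prune rare items
            for it in [i for i in counts if counts[i] + deltas[i] <= bucket]:
                del counts[it], deltas[it]
            bucket += 1
    return counts
```

The pruning at bucket boundaries is what keeps memory bounded, and it is also why, as noted above, the pruning effect weakens when many distinct items each appear just often enough to survive.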

Quality Characteristics Influenced by Different Packaging Materials in Washed Potatoes through an Integrated Washing System (통합세척시스템 활용시 포장재 종류별 세척감자의 품질 특성)

  • Kim, Su Jeong;Sohn, Hwang Bae;Mekapogu, Manjulata;Kwon, Oh Keun;Hong, Su Young;Nam, Jung Hwan;Jin, Yong Ik;Chang, Dong Chil;Suh, Jong Taek;Jeong, Jin Cheol;Kim, Yul Ho
    • Korean Journal of Food Science and Technology / v.48 no.3 / pp.247-255 / 2016
  • This study investigated the effect of packaging materials on quality characteristics of washed potatoes, such as Hunter's a value and chlorophyll and potato glycoalkaloid (PGA) content, during 15 days of storage. Five packaging methods were evaluated: no packaging (NP, positive control), paper bag (PB, negative control), onion net (ON), transparent oriented polypropylene without holes (TP), and opaque oriented polypropylene with four holes (OP). After 12 days of storage, Hunter's a values of the washed potatoes were negative in NP and TP, whereas they were positive in PB, ON, and OP. Total chlorophyll content was highest with no packaging at 15 days after storage. PGA content remained low in the flesh (below 5 mg/100 g FW) as well as in the peel (4.5-9.3 mg/100 g FW) in all packagings up to 15 days after storage.

ICT Medical Service Provider's Knowledge and level of recognizing how to cope with fire fighting safety (ICT 의료시설 기반에서 종사자의 소방안전 지식과 대처방법 인식수준)

  • Kim, Ja-Sook;Kim, Ja-Ok;Ahn, Young-Joon
    • The Journal of the Korea institute of electronic communication sciences / v.9 no.1 / pp.51-60 / 2014
  • In this study, ICT medical service providers' knowledge of fire-fighting safety and of methods for coping with fires was investigated in the Gwangju and Jeonnam regions of Korea, to determine the factors affecting these levels and to provide basic information for manuals on coping with fire safety in medical facilities. The data were analyzed using SPSS Win 14.0. Providers scored 7.06 on knowledge of fire-fighting safety (10-point scale) and 6.61 on knowing how to cope with fires (11-point scale). Coping scores differed significantly by gender (t=4.12, p<.001), age (χ²=17.24, p<.001), length of career (χ²=22.76, p<.001), experience of fire-safety education (t=6.10, p<.001), and level of subjective knowledge of fire safety (χ²=53.83, p<.001). To raise ICT medical service providers' understanding of fire safety and coping methods, the following are found to be needed: self-directed learning instead of one-way lecture-based instruction; learning tailored to individuals; hands-on fire-safety education with varied contents emphasizing cooperative learning and patient-triage simulations; a digital anti-fire monitoring system with a multipoint communication protocol; a smoke-detection system using infrared lasers for fire detection in wide spaces; a video-based fire-detection algorithm using a Gaussian mixture model; and an education manual for coping with fire safety through a multi-learning approach in medical facilities.

The Study of Land Surface Change Detection Using Long-Term SPOT/VEGETATION (장기간 SPOT/VEGETATION 정규화 식생지수를 이용한 지면 변화 탐지 개선에 관한 연구)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, In-Hwan
    • Journal of the Korean Association of Geographic Information Studies / v.13 no.4 / pp.111-124 / 2010
  • Monitoring land surface change is an important research field, since the relevant parameters relate to land use, climate change, meteorological study, agricultural modulation, surface energy balance, and the surface environment system. Many change-detection methods have been presented, drawing on tools ranging from ground-based measurement to satellite multispectral sensors, and using high-resolution satellite data is now considered the most efficient way to monitor extensive land environmental systems at high spatial and temporal resolution. In this study we use two satellites with different spatial resolutions: SPOT/VEGETATION, with 1 km resolution, to detect change over large areas and to determine an objective threshold; and Landsat, with high resolution, to identify detailed land environmental change. Because of their different spatial resolutions, the two satellites have different observation characteristics, such as repeat cycle and global coverage; by correlating them, we can detect land surface change from medium to high resolution. The K-means clustering algorithm is applied to detect changed areas between two images from different dates. When solar spectral bands are used, complicated surface-reflectance scattering makes change detection difficult and can seriously mislead the interpretation of surface characteristics: even if the surface reflectance itself is constant, the measured value changes with the relative positions of the sun and the sensor. 
To reduce these effects, this study uses a long-term Normalized Difference Vegetation Index (NDVI), computed from atmospherically and bidirectionally corrected SPOT/VEGETATION solar channels, to provide an objective threshold for detecting land surface change, since NDVI is less sensitive to solar geometry than the individual solar channels. Change detection based on the long-term NDVI shows improved results over using Landsat alone.
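The NDVI computation and a toy version of the K-means change-detection step can be sketched as follows. The one-dimensional two-cluster K-means on the absolute NDVI difference is an illustrative simplification of the paper's clustering; the initialization and iteration count are arbitrary choices, not the authors'.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED).
    return (nir - red) / (nir + red + eps)

def change_mask(ndvi_t1, ndvi_t2, iters=20):
    # Toy two-cluster K-means on |dNDVI|: pixels in the cluster with
    # the larger centroid are labelled as changed.
    d = np.abs(ndvi_t2 - ndvi_t1).ravel()
    centers = np.array([d.min(), d.max()], dtype=float)
    labels = np.zeros(d.size, dtype=int)
    for _ in range(iters):
        labels = np.abs(d[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = d[labels == j].mean()
    changed = labels == int(centers.argmax())
    return changed.reshape(np.shape(ndvi_t1))
```

The "changed" centroid plays the role of the objective threshold mentioned above: a pixel is flagged when its NDVI difference is closer to the high-change centroid than to the low-change one.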