• Title/Summary/Keyword: 확산필터링 (diffusion filtering)


A Study on the Use of Haar Cascade Filtering to check Wearing Masks and Fever Abnormality (Haar Cascade 필터링을 통한 마스크 착용 여부와 발열 체크)

  • Kim, Eui-Jeong; Kim, In-Jung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.474-477 / 2021
  • Recently, in order to prevent the spread of COVID-19, which began in earnest in 2020, an increasing number of places measure body temperature and require masks. However, because mask wearing and temperature are typically checked directly by a person, or by a single individual stationed in front of a machine, standards can vary with the checker's manual measurement method, and workforce is wasted. When a device simply measures the maximum temperature of the face of whoever stands in front of it, the standard for fever is also unclear. Both approaches can create bottlenecks when large numbers of people must be checked. Furthermore, periodic measurement and tracking are impossible because the measuring machines are generally placed only at the entrance. Thus, this study proposes a method for preventing the spread of infectious diseases by automatically identifying and displaying unmasked people and those with fever in real time, using a general camera, a thermal imaging camera, and an artificial intelligence algorithm.

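As an illustration of the kind of pipeline this abstract describes, the following is a minimal sketch using OpenCV's stock Haar cascades. The cascade files, the mouth-visibility heuristic for the mask check, and the 37.5 °C fever cutoff are all assumptions for illustration, not the authors' implementation.

```python
import cv2

# Stock cascades shipped with OpenCV; the paper's actual models and thresholds
# are not given in the abstract, so everything below is an illustrative guess.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mouth_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")  # stands in for a mouth detector

FEVER_C = 37.5  # assumed fever cutoff in degrees Celsius

def screen_frame(frame_bgr, temps_c):
    """temps_c: per-pixel Celsius map from the thermal camera, assumed to be
    registered to frame_bgr (the alignment step is not shown)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    flags = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # A visible mouth in the lower half of the face box suggests no mask.
        lower_half = gray[y + h // 2 : y + h, x : x + w]
        mouth_visible = len(mouth_cascade.detectMultiScale(lower_half, 1.5, 10)) > 0
        # The maximum temperature inside the face box stands in for body temperature.
        max_temp = float(temps_c[y : y + h, x : x + w].max())
        flags.append({"box": (x, y, w, h),
                      "unmasked": mouth_visible,
                      "fever": max_temp >= FEVER_C})
    return flags
```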

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon; Jung, Yeojin; Jung, Jaekwon; Park, Dohyung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.17-35 / 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period of time. Developing precise forecasting models is considered important, since corporations can make strategic decisions about new markets based on the future demand the models estimate. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand when a market is in its early stage. Among them, the Bass model, which explains demand through two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to ensure reliable results. In the beginning of a new market, however, observations are not sufficient for the models to estimate the market's future demand precisely. For this reason, demand inferred from the most adjacent markets is often used as a reference in such cases. Reference markets can be those whose products are developed with the same categorical technologies. A market's demand may be expected to follow a pattern similar to that of a reference market when the adoption pattern of a product in the market is determined mainly by the technology related to the product. However, this process does not always ensure satisfactory results, because the judgment of similarity between markets rests on intuition and/or experience. There are two major drawbacks that human experts cannot effectively handle in this approach. One is the abundance of candidate reference markets to consider, and the other is the difficulty of calculating the similarity between markets. First, there can be too many markets to consider when selecting reference markets. Mostly, markets in the same category of an industrial hierarchy can serve as reference markets, because they are usually based on similar technologies. However, markets can be classified into different categories even if they are based on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Second, even domain experts cannot consistently calculate the similarity between markets with their own qualitative standards. The inconsistency implies missing adjacent reference markets, which may lead to imprecise estimation of future demand. Even when no reference markets are missing, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. For this reason, this study proposes a case-based expert system that helps experts overcome these drawbacks in discovering reference markets. First, this study proposes the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters, and then missing markets with the characteristics of each cluster are searched for. Potential candidate reference markets are extracted and recommended to users. After iterating these steps, definite reference markets are determined according to the user's selection among the candidates, and finally the new market's parameters are estimated from the reference markets. Two techniques are used in this procedure: one is the clustering data mining technique, and the other is the content-based filtering of recommender systems. The proposed system, implemented with these techniques, can determine the most adjacent markets based on whether a user accepts candidate markets.
Experiments were conducted to validate the usefulness of the system, with five ICT experts involved. In the experiments, the experts were given a list of 16 ICT markets whose parameters were to be estimated. For each market, the experts estimated its growth curve parameters first by intuition alone and then with the system. A comparison of the results shows that the experts' estimates are more accurate when they use the system than when they guess without it.
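
The abstract names the Bass model without giving its form. Cumulative adoption at time t is m·F(t) with F(t) = (1 − e^{−(p+q)t}) / (1 + (q/p)e^{−(p+q)t}), where m is the market potential, p the innovation coefficient, and q the imitation coefficient. The sketch below fits these parameters to invented early observations and shows a Euclidean distance between market feature vectors, the similarity measure the paper proposes; the data, starting values, and feature choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adopters m*F(t), with
    F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p)*exp(-(p+q)t))."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Invented early-stage cumulative demand observations (illustration only).
t_obs = np.arange(1, 9, dtype=float)
y_obs = np.array([120, 410, 980, 1900, 3100, 4400, 5600, 6500], dtype=float)

# Fit market potential m, innovation coefficient p, imitation coefficient q.
(m, p, q), _ = curve_fit(bass_cumulative, t_obs, y_obs,
                         p0=[8000.0, 0.03, 0.38], maxfev=10000)

def market_distance(x, y):
    """Euclidean distance between market feature vectors, as proposed for
    grouping markets; which features go in is not specified in the abstract."""
    return float(np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))
```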

The QoS Filtering and Scalable Transmission Scheme of MPEG Data to Adapt Network Bandwidth Variation (통신망 대역폭 변화에 적응하는 MPEG 데이터의 QoS 필터링 기법과 스케일러블 전송 기법)

  • 유우종; 김두현; 유관종
    • Journal of Korea Multimedia Society / v.3 no.5 / pp.479-494 / 2000
  • Although the proliferation of real-time multimedia services over the Internet might indicate its success in dealing with heterogeneous environments, it is obvious, on the other hand, that the Internet now has to cope with a flood of multimedia data, which consumes most network communication channels due to the great number of video and audio streams. Therefore, for efficient and appropriate utilization of network resources, a new scalable transmission technique must be developed and deployed in consideration of each network environment and each client's computing power. Such a technique can also eliminate the wasted storage and transmission overhead incurred when the same video stream is duplicated for each QoS level. The purpose of this paper is to develop a technology that can adjust the amount of data transmitted as an MPEG video stream according to the given communication bandwidth, and a technique that can reflect dynamic bandwidth changes while a video stream is playing. For this purpose, we introduce a scalable media decomposer working on the server side and a scalable media composer working on the client side, and then propose a scalable transmission method with a media sender and a media receiver that take dynamic QoS into consideration. The methods proposed here facilitate effective use of network resources and provide MPEG video services in real time with respect to each client's computing environment.

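The abstract does not detail how the media decomposer sheds data, so the sketch below shows one plausible temporal-filtering policy: drop B-frames first, then P-frames, so the remaining stream still decodes (I-frames carry standalone pictures). The frame types, sizes, and byte budget are invented for illustration.

```python
# Temporal QoS filtering sketch: shed B-frames, then P-frames, never I-frames.
DROP_ORDER = ("B", "P")

def filter_gop(frames, budget_bytes):
    """frames: list of (frame_type, size_bytes) for one GOP.
    Returns the subset to transmit within budget_bytes."""
    kept = list(frames)
    for ftype in DROP_ORDER:
        while sum(size for _, size in kept) > budget_bytes:
            # Drop the last remaining frame of this type in the GOP, if any.
            idx = next((i for i in range(len(kept) - 1, -1, -1)
                        if kept[i][0] == ftype), None)
            if idx is None:
                break
            kept.pop(idx)
    return kept

gop = [("I", 60_000), ("B", 8_000), ("B", 8_000), ("P", 20_000),
       ("B", 8_000), ("B", 8_000), ("P", 20_000)]
print(filter_gop(gop, budget_bytes=100_000))  # keeps the I-frame and both P-frames
```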

Seismic Data Processing For Gas Hydrate using Geobit (Geobit을 이용한 가스 하이드레이트 탐사자료 처리)

  • Jang, Seong-Hyung; Suh, Sang-Yong; Chung, Bu-Heung; Ryu, Byung-Jae
    • Geophysics and Geophysical Exploration / v.2 no.4 / pp.184-190 / 1999
  • Gas hydrate is a subject of worldwide interest as a potential energy source. Seismic surveys for gas hydrate have been performed over the East Sea by KIGAM since 1997. General indicators of natural submarine gas hydrates in seismic data are commonly inferred from the BSR (Bottom Simulating Reflection), which occurs parallel to the sea floor: an amplitude decrease at the top of the BSR, amplitude blanking at the bottom of the BSR, a decrease in interval velocity, and a reflection phase reversal at the BSR. Seismic data processing for detecting gas hydrate indicators therefore requires true amplitude recovery, accurate velocity analysis, and AVO (Amplitude Variation with Offset) analysis. In this paper, we processed field data acquired over the East Sea in 1998 to detect gas hydrate indicators. The applied processing modules were spherical divergence correction, band-pass filtering, CDP sorting, and accurate velocity analysis. AVO analysis was excluded, since the field data had too short an offset for it. The velocity analysis was performed with XVA (X-window based Velocity Analysis), a method that calculates the velocity spectrum iteratively and interactively; with XVA, we could determine an accurate stacking velocity. Geobit 2.9.5, developed by KIGAM, was used for processing the data. The processing results show a BSR parallel to the sea floor at depths of 367~477 m below the sea floor (two-way travel time of about 1,800 ms) through shot points 1650-1900, an interval velocity decrease around the BSR, and a reflection phase reversal relative to the sea-floor reflection.

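Of the modules listed (spherical divergence correction, band-pass filtering, CDP sorting, velocity analysis), two are easy to sketch. This is not Geobit code; the corner frequencies, filter order, and constant velocity are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_trace(trace, fs_hz, low_hz=10.0, high_hz=60.0, order=4):
    """Zero-phase Butterworth band-pass filter for one seismic trace.
    The corner frequencies and order are illustrative, not the survey's band."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs_hz,
                 output="sos")
    return sosfiltfilt(sos, trace)

def spherical_divergence_gain(trace, dt_s, v_mps=1500.0):
    """Simple t*v^2 geometrical-spreading correction with a constant velocity
    assumed; production flows use a time-varying velocity function."""
    t = np.arange(len(trace)) * dt_s
    return trace * t * v_mps ** 2
```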

A personalized recommendation procedure with contextual information (상황 정보를 이용한 개인화 추천 방법 개발)

  • Moon, Hyun Sil; Choi, Il Young; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.15-28 / 2015
  • As personal devices and pervasive technologies for interacting with networked objects continue to proliferate, there is an unprecedented world of scattered pieces of contextualized information available. However, the explosive growth and variety of information ironically lead users and service providers to make poor decisions. In this situation, recommender systems may be a valuable alternative for dealing with this information overload, but they have failed to utilize various types of contextual information. In this study, we suggest a methodology for context-aware recommender systems based on the concept of a contextual boundary. First, by suggesting contextual boundary-based profiling, which reflects contextual data with proper interpretation and structure, we attempt to solve the complexity problem in context-aware recommender systems. Second, by forming neighborhoods with contextual information, our methodology can be expected to solve the sparsity and cold-start problems of traditional recommender systems. Finally, we suggest a methodology for context support score-based recommendation generation. Consequently, our methodology can be a first step toward expanding the application of research on recommender systems. Moreover, as it is a flexible model that takes new technological developments into consideration, it should show high performance regardless of domain. We therefore expect that marketers or service providers can adopt it easily, according to their technical environment.
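
The abstract introduces a "context support score" without a formula, so the following is only a guess at its spirit: the share of an item's past consumption events whose context lies inside the target user's contextual boundary. Both the boundary representation and the scoring rule below are assumptions.

```python
def context_support(item_events, boundary):
    """Fraction of an item's consumption events whose context falls inside a
    contextual boundary. 'boundary' maps each context dimension (e.g. time,
    place) to the set of values treated as equivalent. This scoring rule is
    an assumption; the abstract does not spell out the formula."""
    if not item_events:
        return 0.0
    hits = sum(all(ev.get(dim) in ok for dim, ok in boundary.items())
               for ev in item_events)
    return hits / len(item_events)

# Toy data: two past consumption events for one item.
events = [{"time": "evening", "place": "home"},
          {"time": "morning", "place": "commute"}]
boundary = {"time": {"evening", "night"}, "place": {"home"}}
print(context_support(events, boundary))  # 0.5
```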

Geographical Name Denoising by Machine Learning of Event Detection Based on Twitter (트위터 기반 이벤트 탐지에서의 기계학습을 통한 지명 노이즈제거)

  • Woo, Seungmin; Hwang, Byung-Yeon
    • KIPS Transactions on Software and Data Engineering / v.4 no.10 / pp.447-454 / 2015
  • This paper proposes geographical-name denoising by machine learning for event detection based on Twitter. Recently, the increasing number of smartphone users has driven the growth of SNS use. In particular, the short-message format (under 140 characters) and the follow service give Twitter the power to convey and diffuse information quickly. These characteristics, together with its mobile-optimized features, give Twitter a fast information-conveying speed that can play a role in reporting disasters or events. Related research has used individual Twitter users as sensors to detect events that occur in reality, employing geographical names as keywords on the basis that an event occurs in a specific place. However, that work ignored the noise arising from homographs of geographical names, which became an important factor lowering the accuracy of event detection. In this paper, we apply two denoising methods, removal and forecasting. First, after a filtering step built on a noise-related database, we determine whether a term is a genuine geographical name using Naive Bayesian classification. Finally, using the experimental data, we obtained the probability values of the machine learning. On the basis of the forecasting technique proposed in this paper, the reliability of the denoising technique turned out to be 89.6%.
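
As a hedged illustration of the Naive Bayesian classification step, the sketch below trains a bag-of-words classifier to decide whether a place-name token is used as a real location or as a homograph (for example, part of a song title). The toy tweets, labels, and features are invented; the paper's corpus and noise database are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training tweets: label 1 = token used as a real place name,
# label 0 = homograph noise.
tweets = ["heavy smoke near Gangnam station right now",
          "traffic stopped on the bridge in Gangnam",
          "that Gangnam style dance video is everywhere",
          "singing Gangnam style at karaoke tonight"]
labels = [1, 1, 0, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(tweets, labels)
print(clf.predict(["fire trucks heading to Gangnam intersection"]))  # likely [1]
```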

Detection for Region of Volcanic Ash Fall Deposits Using NIR Channels of the GOCI (GOCI 근적외선 채널을 활용한 화산재 퇴적지역 탐지)

  • Sun, Jongsun; Lee, Won-Jin; Park, Sun-Cheon; Lee, Duk Kee
    • Korean Journal of Remote Sensing / v.34 no.6_4 / pp.1519-1529 / 2018
  • Volcanic ash can spread over hundreds of kilometers in the case of a large volcanic eruption. The deposition of volcanic ash may damage urban areas and transportation facilities. In order to respond to volcanic hazards, it is necessary to estimate the diffusion area of volcanic ash efficiently. The purpose of this study is to compare in-situ volcanic ash deposition with satellite images for a volcanic eruption case. In this study, we used Near-Infrared (NIR) channels 7 and 8 of Geostationary Ocean Color Imager (GOCI) images for the Mt. Aso eruption at 16:40 (UTC) on October 7, 2016. To delineate the deposit area clearly, we applied Principal Component Analysis (PCA) and a series of morphology filters (erosion, opening, dilation, and closing). In addition, we compared our results with the field data in the Japan Meteorological Agency (JMA) report on the 2016 Aso eruption. From the results, we could extract a volcanic ash deposition area of about 380 km². Traditionally, the ash deposition area has been estimated through human activity such as direct measurement and hearsay evidence, which is inefficient and time-consuming. Our results suggest that satellite imagery is one of the most powerful tools for surface-change mapping in the case of a large volcanic eruption.
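
The morphology sequence named in the abstract (erosion, opening, dilation, closing) and a two-channel PCA can be sketched directly. The kernel size and the thresholding that would produce the binary mask beforehand are assumptions.

```python
import cv2
import numpy as np

def pca_first_component(ch7, ch8):
    """First principal component of stacked GOCI NIR channels 7 and 8."""
    x = np.stack([ch7.ravel(), ch8.ravel()], axis=1).astype(float)
    x -= x.mean(axis=0)                      # center before the SVD
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return (x @ vt[0]).reshape(ch7.shape)    # PC1 scores, image-shaped

def clean_ash_mask(binary_mask, ksize=3):
    """Erosion -> opening -> dilation -> closing, the sequence named in the
    abstract; the 3x3 rectangular kernel is an assumed choice."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    m = cv2.erode(binary_mask, kernel)
    m = cv2.morphologyEx(m, cv2.MORPH_OPEN, kernel)
    m = cv2.dilate(m, kernel)
    return cv2.morphologyEx(m, cv2.MORPH_CLOSE, kernel)
```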

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui; Baek, Sung Ha; Lee, Soon Jo; Bae, Hae Young
    • Spatial Information Research / v.20 no.5 / pp.99-109 / 2012
  • Recently, with the growth of social media and the spread of smartphones, the amount of data has increased considerably through the heavy use of SNS (Social Network Services). Accordingly, the concept of big data has come up, and many researchers are seeking solutions to make the best use of it. To maximize the creative value of the big data held by many companies, it is necessary to combine it with existing data, and the physical and logical storage structures of data sources are so different that a system which can integrate and manage them is needed. To process big data, MapReduce was developed as a system whose advantage is processing data fast in a distributed manner. However, it is difficult to construct and store such a system for all keywords, and because data must pass through storage before search, real-time processing is difficult to some extent. Processing complex events without a structure for handling heterogeneous data also incurs extra expense. To solve this problem, the existing complex event processing (CEP) approach can be used: a CEP system takes data from different sources and combines them with each other, making possible complex event processing that is useful for real-time work, especially on stream data. Nevertheless, unstructured data based on the text of SNS posts and Internet articles is managed as a text type, and strings must be compared every time query processing is done, which results in poor performance. Therefore, we make it possible to manage unstructured data and process queries fast in a complex event processing system. We extend the data-combining function to give strings a logical schema, which is accomplished by changing string keywords into integers through filtering with a keyword set. In addition, by using the complex event processing system and processing stream data in real time in memory, we reduce the time spent reading data for query processing after it has been stored on disk.
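
The abstract's core idea, replacing string keywords with integers through keyword-set filtering, can be sketched as follows. The dictionary-based encoding is an assumption consistent with the description, not the paper's exact scheme.

```python
# Keyword-set filtering sketch: map each registered keyword to an integer once,
# so stream tuples carry ints and query predicates compare ints, not strings.
keyword_ids = {}  # keyword string -> int id

def register_keywords(keywords):
    for kw in keywords:
        keyword_ids.setdefault(kw, len(keyword_ids))

def encode_event(text):
    """Keep only registered keywords, as integer ids (filtering + encoding)."""
    return [keyword_ids[tok] for tok in text.lower().split() if tok in keyword_ids]

register_keywords(["earthquake", "flood", "outage"])
print(encode_event("Power outage after the earthquake downtown"))  # [2, 0]
```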

The Research on Recommender for New Customers Using Collaborative Filtering and Social Network Analysis (협력필터링과 사회연결망을 이용한 신규고객 추천방법에 대한 연구)

  • Shin, Chang-Hoon; Lee, Ji-Won; Yang, Han-Na; Choi, Il Young
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.19-42 / 2012
  • Consumer consumption patterns are shifting rapidly as buyers migrate from offline markets to e-commerce routes, such as shopping channels on TV and Internet shopping malls. In offline markets, consumers go shopping, see the items, and choose among them; recently, consumers tend to buy at shopping sites free from constraints of time and place. However, as e-commerce markets continue to expand, customers are complaining that shopping online is becoming a bigger hassle. In online shopping, shoppers have very limited information about the products, and the delivered products can differ from what they wanted, which leads to purchase cancellations. Because this happens frequently, shoppers are likely to refer to consumer reviews, and companies should be concerned about the consumer's voice. E-commerce is a very important marketing tool for suppliers: it can recommend products to customers and connect them directly with suppliers at the click of a button. Recommender systems are being studied in various ways; some of the more prominent approaches include recommendation based on best-sellers and demographics, content filtering, and collaborative filtering. However, these systems all share two weaknesses: they cannot recommend products to consumers on a personal level, and they cannot recommend products to new consumers with no buying history. To fix these problems, one can use information collected from questionnaires about demographics and preference ratings, but consumers feel such questionnaires are a burden and are unlikely to provide correct information. This study investigates combining collaborative filtering with the centrality measures of social network analysis. Centrality provides the information needed to infer the preferences of new consumers from the shopping histories of existing and previous ones. While past research focused on existing consumers with similar shopping patterns, this study tries to improve recommendation accuracy by using all shopping information, including not only similar shopping patterns but also dissimilar ones. The data used in this study, the MovieLens data, was created by the GroupLens Research Project team at the University of Minnesota to study movie recommendation with collaborative filtering. It was built from the questionnaires of 943 respondents, who gave preference ratings on 1,684 movies. The total of 100,000 ratings was ordered by time, with the first 50,000 treated as existing customers and the latter 50,000 as new customers. The proposed recommender system consists of three systems: the [+] group recommender system, the [-] group recommender system, and the integrated recommender system. The [+] group recommender system regards customers with similar buying patterns as 'neighbors', whereas the [-] group recommender system regards customers with opposite buying patterns as 'contraries'. The integrated recommender system uses both of the aforementioned systems and recommends the movies that both pick. Studying the three systems allows us to find the most suitable recommender system to optimize accuracy and customer satisfaction. Our analysis showed that the integrated recommender system is the best solution among the three, followed by the [-] group recommender system and the [+] group recommender system. This result conforms to the intuition that recommendation accuracy can be improved by using all the relevant information.
We provided contour maps and graphs to compare the accuracy of each recommender system easily. Although we saw an improvement in accuracy with the integrated recommender system, we must remember that this research is based on static data with no live customers; in other words, consumers did not see the movies actually recommended by the system. Also, this recommendation system may not work well with products other than movies. Thus, it is important to note that recommendation systems need particular calibration for specific product/customer types.
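
A minimal sketch of how [+] neighbors and [-] contraries could be combined in one prediction: positive correlations weight similar users, while negative correlations let opposite users contribute through inverted deviations. The rating layout and the blending rule are assumptions; the paper's exact integrated method is not reproduced here.

```python
import numpy as np

def pearson(a, b):
    """Correlation over co-rated items; rating vectors use np.nan for
    unrated items (this data layout is an assumption)."""
    m = ~np.isnan(a) & ~np.isnan(b)
    if m.sum() < 2:
        return 0.0
    r = np.corrcoef(a[m], b[m])[0, 1]
    return 0.0 if np.isnan(r) else float(r)

def predict(target, others, item, k=5):
    """Blend the k most similar users ([+] neighbors) with the k most
    opposite ones ([-] contraries); a negative weight flips a contrary's
    deviation, so opposite tastes still carry information."""
    mu_t = np.nanmean(target)
    scored = [(pearson(target, u), u[item] - np.nanmean(u))
              for u in others if not np.isnan(u[item])]
    pos = sorted([s for s in scored if s[0] > 0], reverse=True)[:k]
    neg = sorted([s for s in scored if s[0] < 0])[:k]
    den = sum(abs(s) for s, _ in pos + neg)
    return mu_t + sum(s * d for s, d in pos + neg) / den if den else np.nan
```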