• Title/Summary/Keyword: smart mining

Search Result 261, Processing Time 0.023 seconds

LSTM-based Power Load Prediction System Design for Store Energy Saving (매장 에너지 절감을 위한 LSTM 기반의 전력부하 예측 시스템 설계)

  • Choi, Jongseok;Shin, Yongtae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.4
    • /
    • pp.307-313
    • /
    • 2021
  • Most stores run by small business owners use a large number of electrical devices, and many in particular rely on cold-storage systems. In severe cases, the power load on the store becomes so heavy that the power supply is cut off, causing losses to the assets in the store. Accordingly, this paper designs an LSTM-based power load prediction system to measure the energy demand of stores and to save energy. Because it can serve as a data-based power-saving system for small and medium-sized stores, it is expected to be used as a data-based power demand prediction system for small businesses in the future and in preventing damage caused by power overload.
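The windowing step that such an LSTM forecaster consumes can be sketched in a few lines. This is a generic illustration, not the paper's implementation; the 24-step window and the synthetic sine series are assumptions standing in for real hourly load measurements:

```python
import numpy as np

def make_sequences(load, window=24):
    """Slice a load series into (window -> next value) pairs,
    the supervised form an LSTM forecaster is trained on."""
    X, y = [], []
    for i in range(len(load) - window):
        X.append(load[i:i + window])
        y.append(load[i + window])
    # shape (samples, timesteps, features=1), as recurrent layers expect
    return np.array(X)[..., None], np.array(y)

series = np.sin(np.linspace(0, 20, 200))  # stand-in for measured load
X, y = make_sequences(series, window=24)
print(X.shape, y.shape)  # (176, 24, 1) (176,)
```

Each row of `X` would then be fed to the recurrent model, with `y` as the one-step-ahead target.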

An Analysis of Changes in Perception of Metaverse through Big Data - Comparing Before and After COVID-19 - (빅데이터 분석을 통한 메타버스에 대한 인식 변화 분석 - 코로나19 발생 전후 비교를 중심으로 -)

  • Kang, Yu Rim;Kim, Mun Young
    • Fashion & Textile Research Journal
    • /
    • v.24 no.5
    • /
    • pp.593-604
    • /
    • 2022
  • The purpose of this study is to analyze, through big data analysis, how perception of the metaverse changed before and after COVID-19. Using Textom, all data mentioning the metaverse were collected for two years before COVID-19 (2018.1.1~2019.11.30) and after the COVID-19 outbreak (2020.1.11~2021.12.31), with Naver and Google as the collection channels. The collected data were processed with text mining, and word frequency, TF-IDF, word cloud, network, and sentiment analyses were conducted. As a result, first, hotels, weddings, and glades were commonly extracted as social issues related to the metaverse both before and after COVID-19, and keywords such as robots and launches were derived, with keywords related to hotels and weddings appearing frequently. Second, the pre-COVID-19 metaverse keyword clusters were platform-oriented, content-oriented, economy-oriented, and online-promotion-oriented, while the post-COVID-19 clusters were event-oriented, ontact-sales-oriented, stock-oriented, and new-business-oriented. Third, positive keywords such as likes, interest, and joy ranked high before COVID-19, and positive keywords such as likes, joy, and interest remained high after COVID-19. In conclusion, this study found that the metaverse has firmly established itself as a new platform business model usable in various fields drawing on smart technology, such as tourism, travel, festivals, and education.
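The TF-IDF weighting used above can be illustrated with a minimal, dependency-free sketch. This is not the Textom pipeline itself; the tokenized toy documents and the `covid` term are invented for illustration:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Plain TF-IDF over pre-tokenized documents: term frequency within a
    document times log inverse document frequency across the corpus."""
    df = Counter()
    for d in docs:
        df.update(set(d))                     # document frequency per term
    n = len(docs)
    return [{t: c / len(d) * math.log(n / df[t]) for t, c in Counter(d).items()}
            for d in docs]

docs = [["metaverse", "platform", "covid"],   # toy tokenized posts
        ["metaverse", "stock", "covid"],
        ["platform", "stock", "covid"]]
weights = tf_idf(docs)
print(weights[0]["covid"])  # 0.0 -- a term in every document carries no weight
```

Terms that appear everywhere get zero weight, which is why TF-IDF surfaces period-specific keywords better than raw frequency.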

Optimised neural network prediction of interface bond strength for GFRP tendon reinforced cemented soil

  • Zhang, Genbao;Chen, Changfu;Zhang, Yuhao;Zhao, Hongchao;Wang, Yufei;Wang, Xiangyu
    • Geomechanics and Engineering
    • /
    • v.28 no.6
    • /
    • pp.599-611
    • /
    • 2022
  • Tendon reinforced cemented soil is applied extensively in foundation stabilisation and improvement, especially in areas with soft clay. To solve the deterioration problem caused by steel corrosion, the glass fiber-reinforced polymer (GFRP) tendon is introduced to substitute the traditional steel tendon. The interface bond strength between the cemented soil matrix and the GFRP tendon demonstrates the outstanding mechanical property of this composite. However, the lack of research on the relationship between the influencing factors and bond strength hinders its application. To evaluate these factors, a back-propagation neural network (BPNN) is applied to predict the relationship between them and bond strength. Since adjusting BPNN parameters manually is time-consuming and laborious, the particle swarm optimisation (PSO) algorithm is adopted. This study evaluated the influence of water content, cement content, curing time, and slip distance on the bond performance of GFRP tendon-reinforced cemented soils (GTRCS). The results showed that the ultimate and residual bond strengths were both in positive proportion to cement content and negative proportion to water content. The sample cured for 28 days with 30% water content and 50% cement content had the largest ultimate strength (3879.40 kPa). The PSO-BPNN model was tuned with 3 neurons in the input layer, 10 in the hidden layer, and 1 in the output layer. It showed outstanding performance on a large database comprising 405 testing results, achieving a higher correlation coefficient (0.908) and a lower root-mean-square error (239.11 kPa) than multiple linear regression (MLR) and logistic regression (LR). In addition, a sensitivity analysis was applied to rank the input variables. The results illustrated that cement content had the strongest influence on bond strength, followed by water content and slip displacement.
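The PSO step used to tune the BPNN can be sketched generically. The quadratic stand-in loss below is an assumption made for a self-contained example; in the study the objective would be the BPNN's prediction error on the bond-strength data:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(loss, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser of the kind used to tune
    network weights: each particle tracks its personal best, and the
    swarm is pulled toward the global best."""
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([loss(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# stand-in loss; in the paper this would be BPNN prediction error
best, err = pso(lambda p: np.sum((p - 0.5) ** 2), dim=4)
```

Swapping the stand-in loss for a cross-validated network error gives the PSO-BPNN tuning loop the abstract describes.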

Advanced Information Data-interactive Learning System Effect for Creative Design Project

  • Park, Sangwoo;Lee, Inseop;Lee, Junseok;Sul, Sanghun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.8
    • /
    • pp.2831-2845
    • /
    • 2022
  • Compared with the substantial body of project-based learning research, data-driven design project-based learning has not reached a meaningful consensus on the most valid and reliable method for assessing design creativity. This article proposes an advanced information data-interactive learning system for creative design, using a service design process combined with design thinking. We propose a service framework that improves the convergence design process between students and advanced information data analysis, allowing students to participate actively in data visualization and research using patent data. By solving a design problem through a discovery and interpretation process, the advanced information-interactive learning framework allows students to verify the value of a creative idea or to ideate new factors and the associated feasible solutions. Students can analyze the patent data on a business intelligence platform. Most new ideas for solving design projects are evaluated through complete patent data analysis and visualization at the beginning of the service design process. In this article, we propose adapting advanced information data to teach the service design process, allowing students to evaluate their own ideas and define the problems iteratively until satisfied. Quantitative evaluation results show that the advanced information data-driven learning approach can improve design project-based learning outcomes in terms of design creativity. Our findings can contribute to data-driven project-based learning and to related standards and other linked smart educational fields in which advanced information data play a crucial role in convergence design.

Web crawler Improvement and Dynamic process Design and Implementation for Effective Data Collection (효과적인 데이터 수집을 위한 웹 크롤러 개선 및 동적 프로세스 설계 및 구현)

  • Wang, Tae-su;Song, JaeBaek;Son, Dayeon;Kim, Minyoung;Choi, Donggyu;Jang, Jongwook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.11
    • /
    • pp.1729-1740
    • /
    • 2022
  • Recently, a large amount of data has been generated as information has diversified and its use has grown; the importance of big data analysis, which collects, stores, processes, and predicts from data, has increased, and the ability to collect only the necessary information is required. More than half of the web consists of text, and a great deal of data is generated through the organic interaction of users. Crawling is a representative technique for collecting text data, but many crawlers are developed without considering web servers or their administrators, because they focus only on obtaining data. In this paper, after examining the problems that can occur during crawling and the precautions to be taken, we design and implement an improved dynamic web crawler that fetches data efficiently. The crawler, which fixes the problems of the existing crawler, was designed as a multi-process system, reducing working time by a factor of four on average.

Big Data News Analysis in Healthcare Using Topic Modeling and Time Series Regression Analysis (토픽모델링과 시계열 회귀분석을 활용한 헬스케어 분야의 뉴스 빅데이터 분석 연구)

  • Eun-Jung Kim;Suk-Gwon Chang;Sang-Yong Tom Lee
    • Information Systems Review
    • /
    • v.25 no.3
    • /
    • pp.163-177
    • /
    • 2023
  • This research aims to identify key initiatives and a policy approach to support the industrialization of the healthcare sector. We collected a total of 91,873 news items relating to healthcare from 2013 to 2022. Twenty topics were derived through topic modeling, and time series regression analysis identified four statistically significant hot topics (healthcare, biopharmaceuticals, corporate outlook·sales, government·policy) and three cold topics (smart devices, stocks·investment, urban development·construction). The findings will serve as an important data source for government institutions engaged in formulating and implementing Korea's healthcare policies.
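The hot/cold classification by time series regression can be sketched as a per-topic linear trend fit. This is a simplified illustration without the significance testing the study performs, and the yearly topic shares are invented:

```python
import numpy as np

def classify_topics(weights):
    """Given per-year topic proportions (years x topics), fit a linear
    trend per topic; a rising share marks a 'hot' topic, a falling
    share a 'cold' one."""
    years = np.arange(weights.shape[0])
    labels = []
    for k in range(weights.shape[1]):
        slope = np.polyfit(years, weights[:, k], 1)[0]
        labels.append("hot" if slope > 0 else "cold")
    return labels

w = np.array([[0.10, 0.30],
              [0.15, 0.25],
              [0.20, 0.20],
              [0.25, 0.15]])  # toy yearly topic shares for two topics
print(classify_topics(w))  # ['hot', 'cold']
```

In the study, the per-document topic proportions would come from the topic model, and a slope would only count as hot or cold when statistically significant.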

Measuring the Economic Impact of Item Descriptions on Sales Performance (온라인 상품 판매 성과에 영향을 미치는 상품 소개글 효과 측정 기법)

  • Lee, Dongwon;Park, Sung-Hyuk;Moon, Songchun
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.1-17
    • /
    • 2012
  • Personalized smart devices such as smartphones and smart pads are widely used. Unlike traditional feature phones, these smart devices allow users to choose from a variety of functions that support not only daily life but also business operations. In fact, a huge number of applications are accessible to smart device users in online and mobile application markets. Users can choose apps that fit their own tastes and needs, which is impossible for conventional phone users. With the increase in app demand, the tastes and needs of app users are becoming more diverse, and numerous apps with diverse functions are being released to meet them, leading to fierce competition. Unlike offline markets, online markets have the limitation that purchasing decisions must be made without experiencing the items. Therefore, online customers rely more on the item-related information shown on the item page, where online markets commonly provide details about each item. Customers gain confidence in the quality of an item through this online information and decide whether to purchase it. The same is true of online app markets. To win the sales competition against other apps with similar functions, app developers need to write app descriptions that attract the attention of customers. If we can measure the effect of app descriptions on sales, apart from an app's price and quality, the descriptions that facilitate sales can be identified. This study provides such a quantitative result for app developers who want to promote their apps. For this purpose, we collected app details, including descriptions written in Korean, from one of the largest app markets in Korea, extracted keywords from the descriptions, and measured the impact of the keywords on sales performance through our econometric model.
Through this analysis, we were able to measure the impact of each keyword itself, apart from that of design or quality. The keywords, comprising the attribute and evaluation of each app, were extracted by a morpheme analyzer. Our model, with the keywords as input variables, was established to analyze their impact on sales performance. A regression analysis was conducted for each category of apps, because the keywords emphasized in app descriptions differ from category to category. The analysis, conducted for both free and paid apps, showed which keywords have more impact on sales performance for each type of app. For paid apps in the education category, keywords such as 'search+easy' and 'words+abundant' showed higher effectiveness. In the same category, free apps whose keywords emphasized quality showed higher sales performance. One interesting finding is that keywords describing not only the app but also the need for it have a significant impact: language learning apps, whether free or paid, showed higher sales performance when their descriptions included the keywords 'foreign language study+important'. This shows that purchase motivation affected sales. While item reviews are widely researched in online markets, item descriptions are not actively studied. In mobile app markets, newly introduced apps may have few reviews because of low sales volume; in such cases, item descriptions become more important when customers decide whether to purchase. This study is the first attempt to quantitatively analyze the relationship between an item description and its impact on sales performance. The results show that our research framework successfully provides a list of the most effective sales key terms along with estimates of their effectiveness.
Although this study is performed for a specified type of item (i.e., mobile apps), our model can be applied to almost all of the items traded in online markets.
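The keyword-effect estimation can be illustrated as an ordinary least squares regression on keyword indicator variables. The design matrix and sales figures below are toy data; the keyword names are taken from the abstract for illustration only:

```python
import numpy as np

# Rows = apps, columns = keyword indicators extracted by a morpheme
# analyzer ('search+easy', 'words+abundant'); all numbers are invented.
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 1], [0, 0]], float)
X = np.hstack([np.ones((len(X), 1)), X])          # add an intercept column
sales = np.array([120.0, 180.0, 90.0, 40.0, 200.0, 35.0])

beta, *_ = np.linalg.lstsq(X, sales, rcond=None)  # OLS keyword effects
# beta[1], beta[2] estimate the sales lift associated with each keyword
```

A positive coefficient identifies a keyword associated with higher sales, holding the other keywords fixed, which is the kind of per-keyword effect the study's econometric model estimates per category.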

Issue tracking and voting rate prediction for 19th Korean president election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.199-219
    • /
    • 2018
  • With everyday use of the Internet and the spread of various smart devices, users can now communicate in real time, and existing communication styles have changed. As the Internet changed who produces information, data grew massive, giving rise to so-called big data. This big data is seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Since text data exists in many places, such as newspapers, books, and the web, it is diverse and abundant and thus well suited to understanding social reality. In recent years there have been increasing attempts to analyze texts from the web, such as SNS posts and blogs, where the public can communicate freely. This is recognized as a useful way to grasp public opinion immediately, so it can be used for political, social, and cultural research. Text mining has received much attention as a means of investigating the public's view of candidates and of predicting voting rates in place of polls, because many people question the credibility of surveys and tend to refuse to answer or to conceal their real intentions when polled. This study collected comments from the largest Internet portal site in Korea and studied the 19th Korean presidential election of 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a span that includes the period just before election day when publishing opinion polls is prohibited. We analyzed word frequencies, associated emotional words, topic emotions, and candidate voting rates. Frequency analysis identified the words that were the most important issues each day; in particular, after each presidential debate, the candidate who became an issue appeared at the top of the frequency analysis.
By analyzing associated emotional words, we identified the issues most relevant to each candidate. Topic emotion analysis was used to identify each candidate's topics and the public's emotions about them. Finally, we estimated the voting rate by combining comment volume and sentiment score. In this way, we explored the issues for each candidate and predicted the voting rate. The analysis showed that news comments are an effective tool for tracking the issues around presidential candidates and for predicting the voting rate. In particular, this study presented daily issues and a quantitative sentiment index, and its predicted voting rates precisely matched the ranking of the top five candidates. Each candidate can thus objectively grasp public opinion and reflect it in election strategy: positive issues can be used more actively in campaigns, and negative issues can be corrected. Candidates should also be aware that a moral problem can severely damage their reputation. Voters can objectively examine the issues and public opinion about each candidate and make more informed voting decisions; by referring to results like these before voting, they can see public opinion drawn from big data and choose a candidate from a more objective perspective. If candidates campaign with reference to big data analysis, the public, recognizing that their wants are being reflected, will be more active on the web, where political views can be expressed in many places. This can contribute to political participation by the people.
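The volume-times-sentiment combination for estimating vote shares can be sketched as follows; the weighting scheme and all figures are illustrative assumptions, not the paper's exact formula:

```python
def predict_shares(comment_counts, sentiment):
    """Combine comment volume with a sentiment score in [-1, 1] per
    candidate, then normalise to a predicted vote share."""
    raw = {c: n * (1 + sentiment[c]) for c, n in comment_counts.items()}
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}

counts = {"A": 90000, "B": 70000, "C": 40000}   # toy comment volumes
senti = {"A": 0.20, "B": 0.05, "C": -0.10}      # toy sentiment scores
shares = predict_shares(counts, senti)
print(sorted(shares, key=shares.get, reverse=True))  # ['A', 'B', 'C']
```

Even this simple scheme preserves ranking information: a candidate with high volume but negative sentiment can fall below one with fewer, more positive comments, which is why the combined index can outperform raw comment counts.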

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has driven vigorous development of related technologies and tools. In addition, the development of IT and the growing penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis keep increasing; big data analysis will only become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each client of the analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually falling and the technology is spreading, so big data analysis is expected to be performed by the clients of the analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with much attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques applied for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is regarded as very useful in that it reflects the semantic elements of documents.
Traditional topic modeling is based on the distribution of key terms across the entire document set, so the entire set must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to many documents, and it creates a scalability problem: processing time grows steeply with the number of documents analyzed. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome it, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method allows topic modeling over a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, because documents can be analyzed where they reside without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear: local topics can be identified for each unit, but global topics cannot. Second, a method for measuring the accuracy of the approach must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured. Owing to these difficulties, this approach has been studied far less than other topic modeling methods. In this paper, we propose a topic modeling approach that solves these two problems.
First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirmed that the proposed methodology can provide results similar to topic modeling over the entire set, and we also propose a reasonable method for comparing the results of the two approaches.
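The local-to-global topic mapping can be sketched as nearest-neighbour matching of topic-term distributions. The toy distributions below are invented, and cosine similarity is one plausible matching measure, not necessarily the one the paper uses:

```python
import numpy as np

def map_local_to_global(local_topics, global_topics):
    """Assign each local topic to the closest global (RGS) topic by
    cosine similarity of their term distributions."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return [int(np.argmax([cos(l, g) for g in global_topics]))
            for l in local_topics]

g = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.1, 0.8]])          # global topic-term distributions
l = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.1, 0.7],
              [0.8, 0.1, 0.1]])          # local topics from one sub-cluster
print(map_local_to_global(l, g))  # [0, 1, 0]
```

Once each local topic is mapped to a global index, documents clustered locally inherit a global label, which is what makes the divide-and-conquer results comparable to whole-corpus topic modeling.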

Analysis of Emerging Geo-technologies and Markets Focusing on Digital Twin and Environmental Monitoring in Response to Digital and Green New Deal (디지털 트윈, 환경 모니터링 등 디지털·그린 뉴딜 정책 관련 지질자원 유망기술·시장 분석)

  • Ahn, Eun-Young;Lee, Jaewook;Bae, Junhee;Kim, Jung-Min
    • Economic and Environmental Geology
    • /
    • v.53 no.5
    • /
    • pp.609-617
    • /
    • 2020
  • After introducing its Industry 4.0 policy, the Korean government announced the 'Digital New Deal' and 'Green New Deal' as parts of the 'Korean New Deal' in 2020. We analyzed the Korea Institute of Geoscience and Mineral Resources (KIGAM)'s research projects related to this policy and conducted a market analysis focused on digital twin and environmental monitoring technologies. Regarding the 'Data Dam' policy, we suggested digital geo-contents with Augmented Reality (AR) & Virtual Reality (VR) and a public geo-data collection & sharing system. It is necessary to expand and support smart mining and digital oil field research for the '5th generation mobile communication (5G) and artificial intelligence (AI) convergence into all industries' policy. The Korean government is proposing downtown 3D maps for its 'Digital Twin' policy; KIGAM can provide 3D geological maps and Internet of Things (IoT) systems for social overhead capital (SOC) management. The 'Green New Deal' proposed developing technologies for green industries, including resource circulation, Carbon Capture Utilization and Storage (CCUS), and electric & hydrogen vehicles. KIGAM has carried out related research projects and currently conducts research on domestic energy storage minerals. Oil and gas are presented as representative application industries for the digital twin. Much progress has been made in mining automation and digital mapping, and Digital Twin Earth (DTE) is an emerging research subject. These emerging subjects are deeply related to data analysis, simulation, AI, and the IoT; KIGAM should therefore collaborate with sensor and computing software & system companies.