

Survey on a Disposal Method of Contact Lenses after Use (콘택트렌즈 사용 후 폐기처분에 대한 실태 조사)

  • Park, Il-nam;Kwon, Min-sun;Park, Ji-woong;Lee, Ki-Seok;Jung, Mi-A;Lee, Hae-Jung
    • The Korean Journal of Vision Science
    • /
    • v.20 no.4
    • /
    • pp.553-560
    • /
    • 2018
  • Purpose: To investigate how domestic contact lens wearers dispose of used contact lenses and their recognition of the environmental pollution by microplastics that incorrect disposal may cause. Methods: Two hundred sixty-one adults (124 males, 137 females; mean age 21.48 ± 3.14 years) participated in this study. They were given a questionnaire on contact lens purchasing place, type of contact lenses, duration of contact lens wear, disposal method, and recognition of the resulting environmental pollution. Results: Eyeglass shops (50.0%) and contact lens shops (48.3%) were the main purchasing places, and the most common types of contact lenses were disposable lenses (38.5%) and daily wearing lenses (52.5%). On the duration of wear, respondents answered more than 5 years (29.3%), less than 1 year (26.0%), and 1 year to less than 3 years (26.0%); on days of wear per week, they answered 1-2 days (32.0%), 1 week (28.0%), 5-6 days (22.4%), and 3-4 days (17.6%). Asked whether they had received information or education about disposal methods at the place where the lenses were purchased, 78.3% answered "no" and 21.7% "yes"; asked whether they had received such information from schools, public institutions, or public media such as the internet, 87.5% answered "no" and 12.5% "yes". The reported disposal methods were, in order, landfill waste (45.6%), recycled garbage (29.6%), and drainage (16.8%) through the sink or toilet. Although men were more educated and informed about disposal than women (t = 3.63189, p < 0.00001), women were more aware of the environmental pollution (t = 2.44269, p = 0.01605).
Conclusion: To reduce the environmental pollution caused by contact lenses, which do not decompose at sewage treatment facilities and become microplastics, it is urgent to provide information about correct disposal methods and to educate contact lens wearers.
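
The gender comparisons above are reported as t statistics. As a minimal sketch of where such values come from (using made-up summary numbers, not the paper's data), Welch's two-sample t statistic can be computed from group means, standard deviations, and sizes:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's two-sample t statistic from group summary statistics."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)  # standard error of the mean difference
    return (mean1 - mean2) / se
```

A statistics package would also supply the degrees of freedom and the p-value; this only illustrates the test statistic itself.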

Appraisal or Re-Appraisal of the Japanese Colonial Archives and the Colonial City Planning Archives in Korea: Theoretical Issues and Practice (일제시기 총독부 기록과 도시계획 기록의 평가 혹은 재평가 - 이론적 쟁점과 평가의 실제 -)

  • Lee, Sang-Min
    • The Korean Journal of Archival Studies
    • /
    • no.14
    • /
    • pp.3-51
    • /
    • 2006
  • In this paper, I applied known theories of appraisal and re-appraisal to the Japanese colonial archives and the colonial city planning archives in Korea. The purpose of applying them to sample archives was to develop a useful and effective approach to appraising archives that were never appraised before being designated "permanent" by the Japanese colonial officials. The colonial archives have lost their context and "chain of custody." A large portion of their volume has also disappeared; only thirty thousand volumes survive. The appraisal theories and related issues applied to and tested on these archives are: the "original natures" of archives defined by Sir Hilary Jenkinson; Schellenberg's informational value appraisal theory; the re-appraisal theory based on the economy of preservation and the prospect of use; function-based appraisal theory and documentation theory; the special nature of the archives as unique, old, and rare colonial archives; the intrinsic value of the archives, especially the city planning maps and drawings; and, finally, the determination of the city planning archives as permanent archives according to the contemporary and modern disposal authority. The colonial archives tested are not self-evidently authentic and trustworthy records, as many other archives are. They have lost their chain of custody, and they do not guarantee the authenticity and sincerity of their producers. They need to be examined and reviewed critically before they are used as historical evidence or as material documenting the contemporary society. Rapport's re-appraisal theory simply does not fit these rare historical archives. The colonial archives have intrinsic values. Though these archives represent some aspects of colonial society, they cannot document it fully, since they are merely the surviving remains, a small part of the whole body of archives created.
The functions and the structure of the Government General of Korea (朝鮮總督府) have not yet been fully studied and can hardly be used to determine the archival values of records created in parts of the colonial apparatus. The appraisal method that proved effective in the case of the colonial archives was Schellenberg's informational value appraisal theory. The contextual and content information of the colonial archives was analysed and reconstructed. The appraisal work also resulted in full descriptions of the colonial archives, which had never before been described in terms of archival principles.

A Silk Road Hero: King Chashtana

  • ELMALI, MURAT
    • Acta Via Serica
    • /
    • v.3 no.2
    • /
    • pp.91-106
    • /
    • 2018
  • During the Old Uighur period, many works were translated into Old Uighur under the influence of Buddhism. Among these works, literary works such as the Daśakarmapathāvadānamālā hold an important place. These works were usually translated from Pali into Sanskrit, from Sanskrit into Sogdian, Tocharian, and Chinese, and from these languages into Old Uighur. The works added to the Old Uighur repertoire by translation indicate that different peoples along the ancient Silk Road had deep linguistic interactions with one another. Aside from these works, other narratives have been discovered that we have so far been unable to determine to be translations, adaptations, or original works. The Tale of King Chashtana, found in the work titled Daśakarmapathāvadānamālā, is one of the tales we have been unable to classify as a translation or an original work. This tale has never been discovered with this title or this content in the languages of any of the peoples that were exposed to Buddhism along the Silk Road. On the other hand, the person after whom the protagonist of this tale was named has a very important place in the history of India, one of the countries that the Silk Road passes through. Saka Mahakshatrapa Chashtana (or Cashtana), a contemporary of Nahapana, declared himself king in Gujarat. A short time later, having invaded Ujjain and Maharashtra, Chashtana established a powerful Saka kingdom in the west of India. His descendants reigned in the region for a long time. Another important fact about Chashtana is that coinage minted in his name was used all along the Silk Road. Chashtana, who became a significant historical figure in north-western India, inspired the name of the protagonist of a tale in Old Uighur.
The probability that the tale of King Chashtana is an original Old Uighur tale, not found in any other language of the Silk Road, raises some questions: Who is Chashtana, the hero of the story? Is he related to the Saka king Chashtana in any way? What sort of influence did Chashtana have on the Silk Road and its languages? If this tale, which we have never encountered in any other language of the Silk Road, is indeed an original tale, why did the Old Uighurs use the name of an important Saka ruler? Do tales of this kind point to Saka-Uighur contact? Given that coinage minted in his name was used along the Silk Road, what can we say about the historical and cultural geography of the Silk Road? In this study, I attempt to answer these questions and share the information we have gleaned about Chashtana the hero of the tale and the Saka king Chashtana. One of the main aims of this study is to reveal the relationship between the narrative hero Chashtana and the Saka king Chashtana on the basis of this information. Another aim is to understand the history of the Saka, the Uighur, and the Silk Road, and to reveal the relationships between these three important subjects of history. Understanding these relations again underlines the importance of the Silk Road. In this way, new information is put forward about Chashtana, an important name in the history of India and the Silk Road, and the history of the Sakas is viewed from a different perspective through the Old Uighur Buddhist story.

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.201-220
    • /
    • 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization. Second, we created a summarized-news-based detection model. Finally, we compared our model with the full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance; however, for DT (Decision Tree), the full-text-based model performed somewhat better. In the case of LR (Logistic Regression), our model exhibited superior performance. Nonetheless, the results did not show a statistically significant difference between our model and the full-text-based model. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model suggests the possibility of performance improvement.
This study features an experimental application of extractive summarization in fake news detection research by employing various machine-learning algorithms. The study's limitations are, essentially, the relatively small amount of data and the lack of comparison between various summarization technologies. Therefore, an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
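
The extractive summarization step can be illustrated with a minimal frequency-based sentence scorer; this is a generic simplification of the technique, not the study's actual summarizer:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Extractive summarization: score each sentence by the summed document-wide
    frequency of its words and keep the top-n sentences in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = sorted(range(len(sentences)),
                    key=lambda i: -sum(freq[w] for w in
                                       re.findall(r'\w+', sentences[i].lower())))
    keep = sorted(scored[:n_sentences])  # restore original sentence order
    return ' '.join(sentences[i] for i in keep)
```

Real systems would use TF-IDF weighting or graph-based scoring such as TextRank, but the input/output contract is the same: full article in, a subset of its own sentences out.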

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we use the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results.
In particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or a near-optimal solution can be found. GA has been popularly applied to searching for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating, covering specific debt issues and other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e., the credit rating, was labeled in four classes: 1 (A1), 2 (A2), 3 (A3), and 4 (B and C). Eighty percent of the data for each class was used for training, and the remaining 20 percent for validation. To mitigate the small sample size, we applied five-fold cross-validation to the dataset. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial GA package. The comparative models were run using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competing models.
In addition, the model was found to use fewer independent variables yet show higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activity), were found to be the most important factors in predicting corporate credit ratings. The values of the finally selected kernel parameters were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
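
The GA loop described above can be sketched in miniature. This toy version (my illustration, not the paper's implementation) encodes log-scaled kernel parameters plus a binary feature mask in one chromosome and evolves the population by elitist selection, one-point crossover, and bit-flip mutation; in the paper the fitness would be cross-validated MSVM accuracy, so here it is a caller-supplied callable:

```python
import random

def genetic_search(fitness, n_features, generations=30, pop_size=20, seed=0):
    """Toy GA: a chromosome is (log2 C, log2 gamma, feature mask).
    Elitist selection, one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)

    def random_chrom():
        return (rng.uniform(-5, 15),                 # log2 of the SVM cost C
                rng.uniform(-15, 3),                 # log2 of the RBF gamma
                tuple(rng.randint(0, 1) for _ in range(n_features)))

    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)       # one-point crossover on the mask
            mask = a[2][:cut] + b[2][cut:]
            # bit-flip mutation keeps the search from stalling in local optima
            mask = tuple(bit ^ (rng.random() < 0.1) for bit in mask)
            children.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, mask))
        pop = parents + children
    return max(pop, key=fitness)
```

Because fitness is pluggable, the same loop optimizes any wrapped classifier; the chromosome layout is the essential idea borrowed from GAMSVM.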

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest in and efforts toward analyzing and using transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions evolve into various patterns by taking advantage of information technology. To reflect this evolution, there are many efforts on fraud detection methods and advanced application systems in order to improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are designated as auction-exception products when total volumes are relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the products. However, the auction-exception products policy creates several problems for the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real agricultural product trade transaction data from 2008 to 2010 in the market are analyzed, comprising more than 1 million transactions and more than 1 billion US dollars in transaction volume. Agricultural transaction data has unique characteristics such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first trial to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules are generated using an outlier detection approach.
We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items. Quarterly average unit prices of the product items of specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability when we approximate the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction-exception agricultural products the Z-values are influenced by the outlier fraudulent transactions themselves, the number of wholesalers being small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is importing transaction databases. The next important functions set up the fraud detection parameters; by changing them, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the given parameters. The potential fraud transactions can be viewed on screen or exported as files. This study is an initial trial to identify fraudulent transactions in auction-exception agricultural products.
Many research topics on this issue remain. First, the scope of the analysis data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud transaction detection should be extended to fishery products. There are also many possibilities for applying different data mining techniques to fraud detection; for example, a time series approach is a potential technique for this problem. Finally, although outlier transactions are detected here based on the unit prices of transactions, it is also possible to derive fraud detection rules based on transaction volumes.
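
The Self-Eliminated Z-score idea can be sketched directly: when one transaction is checked, its own unit price is excluded from the mean and standard deviation, so an extreme price cannot dilute the very statistics used to flag it. This is a minimal sketch with toy prices, not the system's code:

```python
import math

def self_eliminated_z(prices, i):
    """Z-score of prices[i] against the mean and standard deviation of the
    OTHER transactions only, so an outlier cannot mask itself by inflating
    the statistics it is measured against."""
    rest = prices[:i] + prices[i + 1:]
    mean = sum(rest) / len(rest)
    var = sum((p - mean) ** 2 for p in rest) / (len(rest) - 1)  # sample variance
    return (prices[i] - mean) / math.sqrt(var)
```

With few wholesalers a single fraudulent price can dominate an ordinary Z-score's denominator; leaving it out restores the contrast between the suspect price and the rest.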

SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul;Cho, Moon Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.87-109
    • /
    • 2020
  • Currently, thanks to major strides in wired and wireless communication technology, a variety of IT services are available on land. This trend is leading to an increasing demand for IT services on vessels on the water as well, and the demand for various IT services such as two-way digital data transmission, the Web, and apps is expected to rise to the level available on land. However, while a high-speed information communication network is easily accessible on land because it is based on fixed infrastructure such as APs and base stations, this is not the case on the water. As a result, a radio-communication-network-based voice service is usually used at sea. To solve this problem, an additional frequency for digital data exchange was allocated, and a ship ad-hoc network (SANET) that utilizes this frequency was proposed. Instead of satellite communication, which is costly to install and use, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is important in SANET: to have this connection, a ship must be a member of the network with an IP address assigned. This paper proposes a SANET-CC protocol that allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP address pools through the entire network, from land base stations to ships, in the form of a tree. Ships obtain their own IP addresses through the exchange of simple request and response messages with land base stations or M-ships that can allocate IP addresses. Therefore, SANET-CC can eliminate the IP collision prevention (Duplicate Address Detection) process and the process of network separation or integration caused by ship movement. Various simulations were performed to verify the applicability of this protocol to SANET, with the following outcomes.
First, using SANET-CC, about 91% of the ships in the network were able to receive IP addresses under any circumstances, 6% higher than in existing studies, and the results suggest that adjusting the variables to each port's environment may yield further improvement. Second, all vessels took an average of 10 seconds to receive IP addresses regardless of conditions, a 50% decrease from the average of 20 seconds in the previous study. Besides, considering that existing studies covered 50 to 200 vessels while this study covers 100 to 400 vessels, the efficiency can be much higher. Third, existing studies have not been able to derive optimal values for the variables, because their results show no consistent pattern as the variables change, which means optimal variable values cannot be set for each port under diverse environments. This paper, however, shows that the result values exhibit a consistent pattern across the variables, which is significant in that the protocol can be adapted to each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, the IP allocation ratio was most efficient, at about 96 percent, when the waiting time after the IP request was 75 ms, and that the tree structure could maintain a stable network configuration when the number of IPs was over 30,000. Fourth, this study can be used to design a network for supporting intelligent maritime control systems and services offshore, instead of satellite communication, and if LTE-M is deployed, it can serve various intelligent services.
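
The collision-free allocation can be sketched as recursive pool splitting: a base station (or M-ship) keeps a reserved slice of its contiguous address pool and hands each child a disjoint sub-pool, so duplicate address detection is unnecessary by construction. This is my illustration of the tree idea only, not the protocol's actual message format:

```python
def split_pool(pool, n_children, reserve=1):
    """Split a contiguous address pool (lo, hi) into `reserve` addresses kept
    by this node plus `n_children` non-overlapping sub-pools handed down the
    tree.  Disjointness by construction removes the need for DAD."""
    lo, hi = pool
    own, rest = (lo, lo + reserve - 1), (lo + reserve, hi)
    size = (rest[1] - rest[0] + 1) // n_children
    children = [(rest[0] + k * size,
                 rest[0] + (k + 1) * size - 1 if k < n_children - 1 else rest[1])
                for k in range(n_children)]  # last child absorbs the remainder
    return own, children
```

Applying `split_pool` again to each child's pool yields the tree-shaped propagation from land base stations down to individual ships.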

A Methodology of Multimodal Public Transportation Network Building and Path Searching Using Transportation Card Data (교통카드 기반자료를 활용한 복합대중교통망 구축 및 경로탐색 방안 연구)

  • Cheon, Seung-Hoon;Shin, Seong-Il;Lee, Young-Ihn;Lee, Chang-Ju
    • Journal of Korean Society of Transportation
    • /
    • v.26 no.3
    • /
    • pp.233-243
    • /
    • 2008
  • Recognition of the importance and roles of public transportation is increasing because of traffic problems in many cities. In spite of this paradigm change, previous research on public transportation trip assignment has limits in some respects. Especially in the case of multimodal public transportation networks, many characteristics should be considered, such as transfers, operational time schedules, waiting time, and travel cost. After the metropolitan integrated transfer discount system was introduced, transfer trips between modes increased, changing users' route choices. Moreover, with the advent of the high-technology public transportation card called the smart card, public transportation users' travel information can be recorded automatically, giving researchers a new analytical methodology for multimodal public transportation networks. In this paper, we suggest a methodology for building new multimodal public transportation networks with computer programming methods using transportation card data. First, we propose a method of building integrated transportation networks based on bus and urban railroad stations in order to make full use of the travel information in transportation card data. Second, we show how to connect the broken transfer links with computer-based programming techniques, which is very helpful for solving the transfer problems of existing transportation networks. Lastly, we give a methodology for users' path finding and network building across modes in multimodal public transportation networks. Using the proposed methodology, it becomes easy to build multimodal public transportation networks with existing bus and urban railroad station coordinates, and, without extra work such as transfer link connection, it is possible to build large-scale multimodal public transportation networks.
In the end, this study can contribute to solving the problem of users' path finding across modes, which is regarded as an unsolved issue in existing transportation networks.
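
The multimodal path search, with transfers priced explicitly, can be sketched as Dijkstra's algorithm over (stop, mode) states, where an edge that changes mode at a stop pays an extra transfer cost. A minimal sketch; the stop IDs, mode names, and penalty value are illustrative, not from the paper:

```python
import heapq

def shortest_path(graph, source, target_stop, transfer_penalty=5):
    """Dijkstra over (stop, mode) states; edges that change mode at a stop
    pay a transfer penalty on top of their travel time.
    graph: {(stop, mode): [((next_stop, next_mode), travel_time), ...]}"""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, (stop, mode) = heapq.heappop(pq)
        if stop == target_stop:
            return d                     # first pop of the target is optimal
        if d > dist.get((stop, mode), float('inf')):
            continue                     # stale queue entry
        for (nxt_stop, nxt_mode), time in graph.get((stop, mode), []):
            cost = d + time + (transfer_penalty if nxt_mode != mode else 0)
            if cost < dist.get((nxt_stop, nxt_mode), float('inf')):
                dist[(nxt_stop, nxt_mode)] = cost
                heapq.heappush(pq, (cost, (nxt_stop, nxt_mode)))
    return None
```

Because the state carries the current mode, the same physical stop appears once per mode, which is exactly how explicit transfer links between bus and urban railroad stations are modeled.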

A Study on the Current Rotation System of Hunting Ground (현행(現行) 순환수렵장(循環狩獵場) 제도(制度)에 관(關)한 연구(硏究))

  • Byun, Woo Hyuk
    • Journal of Korean Society of Forest Science
    • /
    • v.74 no.1
    • /
    • pp.47-55
    • /
    • 1986
  • During the past 4 years, I have made a careful analysis of the present rotating system of hunting areas, on the one hand by asking a group of hunters to fill out a questionnaire, and on the other by referring to the written documents on the subject. As a result, it is concluded that this system, by varying the hunting grounds each year, contains several problems, as follows. 1. Hunters find it quite inconvenient to use a different hunting ground year after year, and they also complain that the present hunting ground charge is more than it is worth; nevertheless, it is expected that the number of hunters will increase explosively in the future as hunting conditions improve. 2. Hunters have almost no information about game and are, as a whole, lacking in the ethics of hunting. 3. The allotment of time in hunting training courses is so insufficient that it is next to impossible to improve the quality of hunters. 4. As a rule, the population density of wildlife is so sparse that it falls short of the proper standard. 5. The present hunting system does not seem to contribute to the advancement of tourism. 6. It is absolutely necessary to make a general survey of the situation of wildlife for its legal protection. Besides, the interests of hunters are so closely tied up with those of farmers and foresters that drastic measures should be taken to settle their conflicting differences. For the purpose of solving the above-mentioned problems and, at the same time, of developing sound hunting practices in the long run, I hereby make two suggestions. 1. The establishment of a hunting license test system: it is desirable to issue a license to a prospective hunter after he has met a special qualification and then passed a test, so that he may have the information needed for his hunting activities. 2. The introduction of the Revier system: the fundamental concept of this system is based on the assumption that the private landowner should reserve a right to the pursuit of game and take responsibility for wildlife management.


Natural Language Processing Model for Data Visualization Interaction in Chatbot Environment (챗봇 환경에서 데이터 시각화 인터랙션을 위한 자연어처리 모델)

  • Oh, Sang Heon;Hur, Su Jin;Kim, Sung-Hee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.11
    • /
    • pp.281-290
    • /
    • 2020
  • With the spread of smartphones, services that use personalized data are increasing. In particular, healthcare-related services deal with a variety of data, and data visualization techniques are used to present it effectively. As data visualization techniques are used, interactions with visualizations are also naturally emphasized. In the PC environment, interaction with a data visualization is performed with a mouse, so various kinds of data filtering are provided. In a mobile environment, on the other hand, the screen is small and it is difficult to recognize whether interaction is even possible, so only the limited visualizations provided by an app are available through button touches. To overcome this limitation of interaction in the mobile environment, we enable data visualization interactions through conversations with a chatbot, so that users can examine their individual data through various visualizations. To do this, it is necessary to convert the user's natural-language request into a database query and to retrieve the result data through that query from a database that stores data periodically. There are many studies on converting natural language into queries, but research on converting user requests into queries driven by a visualization has not yet been done. Therefore, in this paper, we focus on query generation in a situation where the data visualization technique has been determined in advance. The supported interactions are filtering on x-axis values and comparison between two groups. The test scenario used step count data: filtering over an x-axis period was shown as a bar graph, and a comparison between two groups was shown as a line graph. To develop a natural language processing model that can receive the requested information through visualization, about 15,800 training examples were collected through a survey of 1,000 people.
As a result of algorithm development and performance evaluation, the classification model achieved about 89% accuracy and the query generation model about 99% accuracy.
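
The two supported interactions (period filtering as a bar graph, group comparison as a line graph) can be illustrated with a tiny rule-based stand-in for the trained model; the regex patterns, the table name `steps`, and the column names below are my assumptions, not the study's schema:

```python
import re

def to_query(utterance):
    """Rule-based sketch mapping a chatbot request about step counts to a
    (chart_type, SQL-like query) pair.  Two intents are handled: filtering
    a date range (bar chart) and comparing two groups (line chart)."""
    m = re.search(r'from (\S+) to (\S+)', utterance)
    if m:
        start, end = m.groups()
        return ('bar',
                f"SELECT date, steps FROM steps "
                f"WHERE date BETWEEN '{start}' AND '{end}'")
    m = re.search(r'compare (\w+) (?:and|with) (\w+)', utterance)
    if m:
        g1, g2 = m.groups()
        return ('line',
                f"SELECT date, grp, AVG(steps) FROM steps "
                f"WHERE grp IN ('{g1}', '{g2}') GROUP BY date, grp")
    return (None, None)   # intent not recognized
```

The study replaces these hand-written rules with a classifier and a query generation model trained on the collected examples, but the input/output contract is the same.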