• Title/Summary/Keyword: Research Information Systems


Prospects & Issues of NFT Art Contents in Blockchain Technology (블록체인 NFT 문화예술콘텐츠의 현황과 과제)

  • Jong-Guk Kim
    • Journal of Information Technology Applications and Management / v.30 no.1 / pp.115-126 / 2023
  • In various fields such as art, design, music, film, sports, games, and fashion, NFTs (Non-Fungible Tokens) are creating new economic value through trading platforms dedicated to NFT art and content. In this article, I analyze the current state of blockchain technology and NFT art content in the context of an expanding market for blockchain-based NFT art content in the metaverse. I also propose several tasks based on the economic and industrial logic of technological innovation. The first task proposed is to integrate cultural arts on blockchain, metaverse, and NFT platforms through digital innovation, instead of separating or distinguishing between creative production and consumption. Before the COVID-19 pandemic, there was a clear separation between creators and consumers. However, with the rise of Web 3.0 platforms, any user can now create and own their own content. Therefore, it is important to promote a collaborative and integrated approach to cultural arts production and consumption in the blockchain and metaverse ecosystem. The second task proposed is to align the legal framework with blockchain-based technological innovation. The enactment and revision of relevant laws should focus on promoting the development of the NFT trading platform ecosystem, rather than merely regulating it for user protection. As blockchain-based technology continues to evolve, it is important that legal systems adapt to support and promote innovation in the space. This shift in focus can help create a more conducive environment for the growth of blockchain-based NFT platforms. The third task proposed is to integrate education on digital arts, including metaverse and NFT art contents, into the current curriculum. This education should focus on convergence and consilience, rather than merely mixing together humanities, technology, and arts. By integrating digital arts education into the curriculum, students can gain a more comprehensive understanding of the potential of blockchain-based technologies and NFT art. This article examines digital technological innovations such as blockchain, the metaverse, and NFTs from an economic and industrial point of view. As a limitation of this research, critical perspectives on technological innovation, such as philosophical reflection or social criticism, are left for future work.

Comparison and Analysis for the Topology of Bladeless Wind Power Generator (블레이드리스 풍력발전기의 토폴로지에 관한 비교·분석)

  • Junhyuk Min; Sungin Jeong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.4 / pp.147-154 / 2024
  • This study focuses on the modeling and analysis of a linear generator for bladeless wind power generation to overcome the limitations and drawbacks of conventional wind turbines. A bladeless wind power generation system has the advantages of a smaller land requirement for installation and lower maintenance cost compared to a bladed wind turbine. Nevertheless, questions concerning the generator topology have not been satisfactorily answered. The goal of this research is to compare and analyze the characteristics of horizontal and vertical structures of linear generators for bladeless wind power systems. The proposed topologies are analyzed in terms of magnetic energy using the equivalent magnetic circuit method and then compared and evaluated by the finite element method. The results of this work provide detailed information about new generator structures for wind power systems and offer insights into the characteristics of bladeless wind power generation.
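
To make the comparison concrete, here is a minimal Python sketch of the kind of reasoning the equivalent magnetic circuit method supports: each layout is reduced to a series reluctance network and ranked by stored magnetic energy. The two-segment network, dimensions, MMF, and material constants are illustrative assumptions, not the topologies or parameters studied in the paper.

```python
# Minimal sketch: comparing two hypothetical linear-generator layouts by the
# magnetic energy stored in a simple series reluctance network.
# All dimensions and material constants below are illustrative assumptions.
import math

MU0 = 4e-7 * math.pi  # permeability of free space [H/m]

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of one path segment: R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

def stored_energy(mmf_at, *segments):
    """Energy in a series magnetic circuit driven by MMF F: W = F^2 / (2 * R_total)."""
    r_total = sum(reluctance(*seg) for seg in segments)
    return mmf_at ** 2 / (2.0 * r_total)

# Hypothetical horizontal layout: longer air gap, larger pole area.
w_horizontal = stored_energy(
    500.0,                   # coil MMF [A-turns]
    (1.5e-3, 4.0e-4),        # air gap: length [m], cross-section area [m^2]
    (0.12, 4.0e-4, 2000.0),  # core: length, area, relative permeability
)

# Hypothetical vertical layout: shorter air gap, smaller pole area.
w_vertical = stored_energy(
    500.0,
    (1.0e-3, 2.5e-4),
    (0.18, 2.5e-4, 2000.0),
)

print(f"horizontal layout: {w_horizontal:.3f} J, vertical layout: {w_vertical:.3f} J")
```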

Data Efficient Image Classification for Retinal Disease Diagnosis (데이터 효율적 이미지 분류를 통한 안질환 진단)

  • Honggu Kang; Huigyu Yang; Moonseong Kim; Hyunseung Choo
    • Journal of Internet Computing and Services / v.25 no.3 / pp.19-25 / 2024
  • The worldwide aging population trend is causing an increase in the incidence of major retinal diseases that can lead to blindness, including glaucoma, cataract, and macular degeneration. In the field of ophthalmology, there is a focused interest in diagnosing diseases that are difficult to prevent in order to reduce the rate of blindness. This study proposes a deep learning approach to accurately diagnose ocular diseases in fundus photographs using less data than traditional methods. For this, Convolutional Neural Network (CNN) models capable of effective learning with limited data were selected to classify Conventional Fundus Images (CFI) from various ocular disease patients. The chosen CNN models demonstrated exceptional performance, achieving high Accuracy, Precision, Recall, and F1-score values. This approach reduces manual analysis by ophthalmologists, shortens consultation times, and provides consistent diagnostic results, making it an efficient and accurate diagnostic tool in the medical field.
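
As a rough illustration of data-efficient classification of fundus images, the sketch below fine-tunes only the final layer of a pretrained CNN in PyTorch, a common strategy when labeled data are limited. The dataset path, folder layout, and hyperparameters are placeholders; the paper does not specify this particular pipeline.

```python
# Minimal transfer-learning sketch for small fundus-image datasets (assumption:
# images arranged as data/fundus_train/<class_name>/*.jpg); not the authors' pipeline.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/fundus_train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():   # freeze the pretrained backbone so that only
    p.requires_grad = False    # a small classification head is fit on limited data
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```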

Evaluation of Non-Point Pollution Loads in Corn-Autumn Kimchi Cabbage Cultivation Areas by Fertilizer Application Levels Using the APEX Model (APEX 모델을 이용한 옥수수-가을배추 재배지의 시비 수준별 비점오염 부하량 평가)

  • Lee, Jong-Mun; Yeob, So-Jin; Jun, Sang-Min; Lee, Byungmo; Yang, Yerin; Choi, Soon-Kun
    • Journal of The Korean Society of Agricultural Engineers / v.66 no.5 / pp.15-27 / 2024
  • Agriculture is recognized as an important anthropogenic cause of non-point source loads. An improved understanding of non-point source loads under different fertilization practices can support the mitigation of climate change and eutrophication. Thus, this study evaluated the impact of conventional and standard fertilization practices on non-point pollution (NPP) loads in a dual-cropping system, utilizing the Agricultural Policy/Environmental eXtender (APEX) model. Our research objectives were twofold: firstly, to calibrate and validate the APEX model with observed data through experiments from 2018 to 2023; and secondly, to compare the NPP loads under conventional and standard fertilization practices. The model calibration and validation showed satisfactory performance in simulating nitrogen (N) and phosphorus (P) loads, illustrating the model's applicability in a Korean agricultural setting. The simulation results under conventional fertilization practices revealed significantly higher NPP loads compared to standard fertilization, with P loads under conventional practices being notably higher. Our findings emphasize the crucial role of recommended fertilization practices in reducing non-point source pollution. By providing a quantitative assessment of NPP loads under different fertilization practices, this study contributes valuable information to sustainable nutrient management in agricultural systems facing the dual challenges of climate change and environmental conservation.
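
For readers unfamiliar with how "satisfactory performance" in calibration and validation is typically quantified for models such as APEX, the sketch below computes two commonly used goodness-of-fit statistics, Nash-Sutcliffe efficiency and percent bias, on made-up observed and simulated nutrient loads; the values are not taken from this study.

```python
# Illustrative goodness-of-fit statistics of the kind commonly used to judge
# calibration/validation of watershed models such as APEX; the observed and
# simulated load values below are made-up placeholders, not study data.
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """PBIAS = 100 * sum(obs - sim) / sum(obs); negative values mean overestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

observed_n = [2.1, 3.4, 1.8, 4.0, 2.7]   # e.g., event N loads [kg/ha] (placeholders)
simulated_n = [2.3, 3.1, 1.6, 4.4, 2.5]

print(f"NSE   = {nash_sutcliffe(observed_n, simulated_n):.2f}")
print(f"PBIAS = {percent_bias(observed_n, simulated_n):+.1f} %")
```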

Enhancing Automated Multi-Object Tracking with Long-Term Occlusions across Consecutive Frames for Heavy Construction Equipment

  • Seongkyun AHN; Seungwon SEO; Choongwan KOO
    • International conference on construction engineering and project management / 2024.07a / pp.1311-1311 / 2024
  • Recent advances in artificial intelligence technology have led to active research aimed at systematically managing the productivity and environmental impact of major management targets such as heavy equipment at construction sites. However, challenges arise due to phenomena like partial occlusions, resulting from the dynamic working environment of construction sites (e.g., equipment overlapping, obstruction by structures), which impose practical constraints on precisely monitoring heavy equipment. To address these challenges, this study aims to enhance automated multi-object tracking (MOT) in scenarios involving long-term occlusions across consecutive frames for heavy construction equipment. To achieve this, two methodologies are employed to address long-term occlusions at construction sites: (i) tracking-by-detection and (ii) video inpainting with generative adversarial networks (GANs). Firstly, this study proposes integrating FairMOT with a tracking-by-detection algorithm like ByteTrack or SMILEtrack, demonstrating the robustness of re-identification (Re-ID) in occlusion scenarios. This method maintains previously assigned IDs when heavy equipment is temporarily obscured and then reappears, analyzing location, appearance, or motion characteristics across consecutive frames. Secondly, adopting video inpainting with GAN algorithms such as ProPainter is proposed, demonstrating robustness in removing objects other than the target object (e.g., excavator) during the video preprocessing and filling removed areas using information from surrounding pixels or other frames. This approach addresses long-term occlusion issues by focusing on a single object rather than multiple objects. Through these proposed approaches, improvements in the efficiency and accuracy of detection, tracking, and activity recognition for multiple heavy equipment are expected, mitigating MOT challenges caused by occlusions in dynamic construction site environments. Consequently, these approaches are anticipated to play a significant role in systematically managing heavy equipment productivity, environmental impact, and worker safety through the development of advanced construction and management systems.
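
The ID-preservation idea behind tracking-by-detection under occlusion can be illustrated with a deliberately simplified tracker: lost tracks are kept alive for a fixed number of frames and reappearing detections are re-associated by IoU. This sketch is a stand-in for illustration only; it is not the FairMOT, ByteTrack, or SMILEtrack implementation, which additionally use appearance and motion cues.

```python
# Simplified sketch of keeping IDs across temporary occlusions in
# tracking-by-detection: lost tracks stay alive for `max_age` frames and are
# re-associated to new detections by IoU. Illustration only, not FairMOT/ByteTrack.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class SimpleTracker:
    def __init__(self, iou_threshold=0.3, max_age=60):
        self.iou_threshold = iou_threshold
        self.max_age = max_age          # frames a track may stay unmatched
        self.tracks = {}                # track id -> {"box": ..., "age": ...}
        self.next_id = 1

    def update(self, detections):
        """Greedily assign detections (boxes) to tracks; returns box -> id."""
        assigned = {}
        for det in detections:
            # Match against all live tracks, including temporarily occluded ones.
            best_id, best_iou = None, self.iou_threshold
            for tid, trk in self.tracks.items():
                score = iou(det, trk["box"])
                if score > best_iou and tid not in assigned.values():
                    best_id, best_iou = tid, score
            if best_id is None:          # unmatched detection -> new ID
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = {"box": det, "age": 0}
            assigned[tuple(det)] = best_id
        # Age unmatched tracks; drop them only after a long occlusion.
        for tid in list(self.tracks):
            if tid not in assigned.values():
                self.tracks[tid]["age"] += 1
                if self.tracks[tid]["age"] > self.max_age:
                    del self.tracks[tid]
        return assigned
```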

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae; Oh, Wonseok; Lim, Geunwon; Cha, Eunwoo; Shin, Minyoung; Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.1-23 / 2018
  • Since the beginning of the 21st century, various high-quality services have emerged along with the growth of the Internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, has been growing explosively. As e-commerce grows, customers can easily find what they want to buy while comparing various products, because more products are registered at online shopping malls. However, a problem has arisen with this growth: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search for a desired product with a general keyword, too many products appear in the results; conversely, few products are found if customers type in product details, because concrete product attributes are rarely registered. In this situation, automatically recognizing text in images can be a solution. Because the bulk of product details is presented in catalogs in image format, most product information cannot be retrieved by text input in current text-based search systems. If the information in these images can be converted to text, customers can search for products by product details, which makes shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are difficult to apply to catalogs because they fail under certain conditions, for example when text is too small or fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning, which has been the state of the art in image recognition since the 2010s. The Single Shot Multibox Detector (SSD), a model credited for its object-detection performance, can be used with its structure redesigned to account for the differences between text and generic objects. However, because deep learning models must be trained by supervised learning, the SSD model requires a large amount of labeled training data. One option is to label the locations and classes of text in catalogs manually, but manual collection raises several problems: some keywords would be missed because humans make mistakes while labeling, collecting the required amount of data would be too time-consuming, hiring many workers to shorten the time would be costly, and if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures and saves the location information of the keywords at the same time. With this program, data can be collected efficiently and the performance of the SSD model improves: the SSD model recorded a recognition rate of 81.99% with 20,000 images created by the program. Moreover, this research tested the efficiency of the SSD model with respect to differences in the data in order to analyze which features of the data influence text-recognition performance. As a result, the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spacing among keywords, and differences in background images were found to be related to the performance of the SSD model. This test can lead to performance improvements of the SSD model, or of other deep-learning-based text recognizers, through higher-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the product details written in catalogs.
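
The automatic training-data generation described above can be illustrated with a short Pillow script that renders keywords onto a catalog-like background and records each keyword's bounding box as a label. The file paths, font, keyword list, and JSON label format are assumptions for illustration, not the program developed in the paper.

```python
# Minimal sketch of generating labeled training images for text detection:
# render keywords onto a catalog-like background with Pillow and record each
# keyword's bounding box. File paths, font, and keywords are placeholders.
import json
import random
from PIL import Image, ImageDraw, ImageFont

keywords = ["cotton", "waterproof", "free shipping"]   # illustrative keywords
background = Image.open("backgrounds/catalog_01.jpg").convert("RGB")
font = ImageFont.truetype("fonts/NanumGothic.ttf", size=28)

draw = ImageDraw.Draw(background)
labels = []
for word in keywords:
    # Place each keyword at a random position and record its bounding box.
    x = random.randint(0, background.width - 200)
    y = random.randint(0, background.height - 40)
    draw.text((x, y), word, fill="black", font=font)
    x0, y0, x1, y1 = draw.textbbox((x, y), word, font=font)
    labels.append({"keyword": word, "bbox": [x0, y0, x1, y1]})

background.save("train/sample_0001.jpg")
with open("train/sample_0001.json", "w", encoding="utf-8") as f:
    json.dump(labels, f, ensure_ascii=False, indent=2)
```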

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil; Sung, David; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.21-41 / 2019
  • Thanks to the rapid development of information technologies, the data available on the Internet have grown rapidly. In this era of big data, many studies have attempted to offer insights and demonstrate the effects of data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than for other types of media. However, there are some limitations to the improvements in service quality that can be made based on opinions on social media platforms. Users on social media platforms express their opinions as text, images, and so on, so the raw data sets from these reviews are unstructured. Moreover, these data sets are too large for new information and hidden knowledge to be extracted by human effort alone. To use them for business intelligence and analytics applications, proper big data techniques such as natural language processing and data mining are needed. This study suggests an analytical approach to directly yield insights from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining to extract the topics contained in the reviews and decision tree modeling to explain the relationship between topics and ratings. Topic mining refers to a method for finding groups of words in a collection of documents that represent each document. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most widely used. However, LDA alone is not enough to find insights that can improve service quality because it cannot find the relationship between topics and ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique. Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. Therefore, this study presents an analytical approach for improving hotel service quality from unstructured review data sets. Through experiments on four hotels in Hong Kong, we identify the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews in particular, we find what these hotels should maintain for service quality; for example, compared with the other hotels, one hotel's good location and room condition are extracted from its positive reviews. In contrast, we also find from negative reviews what they should modify in their services; for example, one hotel should improve room conditions related to soundproofing. These results show that our approach is useful for finding insights into the service quality of hotels. That is, from an enormous volume of review data, our approach can provide practical suggestions for hotel managers to improve their service quality. In the past, studies for improving service quality relied on surveys or interviews of customers. However, these methods are often costly and time-consuming, and the results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through a form of big data analysis. It can therefore be a more useful tool for overcoming the limitations of surveys and interviews. Moreover, our approach can easily obtain service quality information for other hotels or services in the tourism industry because it requires only open online reviews and ratings as input data. Furthermore, the performance of our approach will improve if other structured and unstructured data sources are added.
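
The LDA-plus-CART pipeline described above can be sketched with scikit-learn: LDA turns each review into topic proportions, and a regression tree relates those proportions to ratings. The toy reviews, ratings, and parameter choices below are placeholders, not the study's Hong Kong hotel data or settings.

```python
# Minimal sketch of the topic-mining-plus-decision-tree idea using scikit-learn:
# LDA extracts topic proportions from reviews, and a regression tree relates
# those proportions to ratings. Reviews and ratings below are placeholders.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeRegressor, export_text

reviews = [
    "great location near the station, clean room",
    "thin walls, noisy at night, poor soundproofing",
    "friendly staff and fast check in",
    "small room but excellent breakfast buffet",
]
ratings = [5, 2, 4, 4]

counts = CountVectorizer(stop_words="english")
doc_term = counts.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(doc_term)       # per-review topic proportions

tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(doc_topics, ratings)

# Inspect which topic proportions drive ratings up or down.
print(export_text(tree, feature_names=[f"topic_{i}" for i in range(3)]))
```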

Roles of Cancer Registries in Enhancing Oncology Drug Access in the Asia-Pacific Region

  • Soon, Swee-Sung; Lim, Hwee-Yong; Lopes, Gilberto; Ahn, Jeonghoon; Hu, Min; Ibrahim, Hishamshah Mohd; Jha, Anand; Ko, Bor-Sheng; Lee, Pak Wai; MacDonell, Diana; Sirachainan, Ekaphop; Wee, Hwee-Lin
    • Asian Pacific Journal of Cancer Prevention / v.14 no.4 / pp.2159-2165 / 2013
  • Cancer registries help to establish and maintain cancer incidence reporting systems, serve as a resource for investigation of cancer and its causes, and provide information for planning and evaluation of preventive and control programs. However, their wider role in directly enhancing oncology drug access has not been fully explored. We examined the value of cancer registries for oncology drug access in the Asia-Pacific region on three levels: (1) specific registry variable types; (2) macroscopic strategies on the national level; and (3) a regional cancer registry network. Using a literature search and proceedings from an expert forum, this paper covers recent cancer registry developments in eight economies in the Asia-Pacific region - Australia, China, Hong Kong, Malaysia, Singapore, South Korea, Taiwan, and Thailand - and the ways they can contribute to oncology drug access. Specific registry variables relating to demographics, tumor characteristics, initial treatment plans, prognostic markers, risk factors, and mortality help to anticipate drug needs, identify high-priority research areas, and design access programs. On a national level, linking registry data with clinical, drug safety, financial, or drug utilization databases allows analyses of associations between utilization and outcomes. Concurrent efforts should also be channeled into developing and implementing data integrity and stewardship policies, and providing clear avenues to make data available. Less mature registry systems can employ modeling techniques and ad-hoc surveys while increasing coverage. Beyond local settings, a cancer registry network for the Asia-Pacific region would offer cross-learning and research opportunities that can exert leverage through the experiences and capabilities of a highly diverse region.

Performance Analysis of TCAM-based Jumping Window Algorithm for Snort 2.9.0 (Snort 2.9.0 환경을 위한 TCAM 기반 점핑 윈도우 알고리즘의 성능 분석)

  • Lee, Sung-Yun; Ryu, Ki-Yeol
    • Journal of Internet Computing and Services / v.13 no.2 / pp.41-49 / 2012
  • Wireless network support and the extended mobile network environment, together with the exponential growth of smartphone users, allow us to use the network anytime and anywhere. Malicious attacks over high-speed networks, such as distributed DoS, Internet worms, and e-mail viruses, are increasing, and the number of detection patterns is growing dramatically along with network traffic driven by these developments in Internet technology. To detect these patterns in intrusion detection systems, an earlier study proposed an efficient scheme called the jumping window algorithm and used it to analyze approximately 2,000 patterns in Snort 2.1.0, the most widely used intrusion detection system. However, because the jumping window algorithm is affected by the number of patterns and by pattern length, the results of that study are no longer appropriate, in terms of the number of TCAM lookups and TCAM memory efficiency, for the current environment (Snort 2.9.0), which has more and longer patterns. In this paper, we simulate the number of TCAM lookups and the required TCAM size for the jumping window algorithm with approximately 8,100 patterns from the Snort 2.9.0 rules and analyze the simulation results. Whereas the previous research showed that Snort 2.1.0 performs best with a 16-byte window and a 9-Mb TCAM, for the Snort 2.9.0 environment we suggest a 16-byte window and four cascaded 18-Mb TCAMs.
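
The trade-off the paper quantifies can be illustrated with a back-of-the-envelope counting model: a w-byte jumping window inspects a payload in roughly ceil(len/w) TCAM lookups, while each pattern must be stored in w-byte segments for each possible alignment. The sketch below uses this simplified model with illustrative pattern lengths; it is not the paper's exact TCAM sizing method or its Snort rule statistics.

```python
# Back-of-the-envelope sketch of the jumping-window trade-off: lookups per
# packet fall as the window grows, while the number of stored TCAM entries
# grows with pattern length and pattern count. This is a simplified counting
# model for illustration, not the paper's exact TCAM sizing method.
import math

def lookups_per_packet(payload_len, window):
    """Fixed w-byte jumps over the payload -> ceil(len / w) lookups."""
    return math.ceil(payload_len / window)

def tcam_entries_for_pattern(pattern_len, window):
    """Rough estimate: one set of ceil(len / w) segments for each of the
    `window` possible alignment offsets (simplified count)."""
    return window * math.ceil(pattern_len / window)

window = 16                     # 16-byte window, as suggested in the paper
pattern_lengths = [8, 24, 40]   # illustrative pattern lengths, not Snort data

print("lookups for a 1500-byte payload:", lookups_per_packet(1500, window))
for plen in pattern_lengths:
    print(f"{plen}-byte pattern -> about "
          f"{tcam_entries_for_pattern(plen, window)} TCAM entries")
```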

Precision monitoring of radial growth of trees and micro-climate at a Korean Fir (Abies koreana Wilson) forest at 10 minutes interval in 2016 on Mt. Hallasan National Park, Jeju Island, Korea

  • Kim, Eun-Shik; Cho, Hong-Bum; Heo, Daeyoung; Kim, Nae-Soo; Kim, Young-Sun; Lee, Kyeseon; Lee, Sung-Hoon; Ryu, Jaehong
    • Journal of Ecology and Environment / v.43 no.2 / pp.226-245 / 2019
  • To understand the dynamics of radial tree growth and micro-climate at a Korean fir (Abies koreana Wilson) forest site in a high-altitude area of Mt. Hallasan National Park, Jeju Island, Korea, high-precision dendrometers were installed on the stems of Korean fir trees, and sensors measuring the micro-climate of the forest at 10-minute intervals were also installed at the site. Data from the sensors were sent to nodes, collected wirelessly at a gateway, and transmitted to a data server over a mobile phone communication system. By analyzing the radial growth data for the trees during the 2016 growing season, we estimate that the radial growth of the Korean fir trees began in late April to early May and ceased in late August to early September, indicating that the radial growth period in 2016 was about four months. It is interesting to observe that the daily ambient temperature and the daily soil temperature at a depth of 20 cm both stood at about 10 ℃ when radial growth began in 2016. When radial growth ceased, the ambient temperature and the soil temperature had fallen below about 15 ℃ and 16 ℃, respectively. While the ambient temperature and the soil temperature are judged to be good indicators of the initiation and cessation of radial growth, it also became clear that the stem radii showed diurnal growth patterns driven by the diurnal change in ambient temperature. In addition, the wetting and drying of the stem surfaces caused by precipitation were additional factors affecting the expansion and shrinkage of the tree stems at the forest site. While it is interesting to note that the interrelationships among the micro-climatic factors at the forest site were well explained through this study, it should also be recognized that this precision monitoring was made possible by applying high-resolution sensors to measure the radial increment, combined with 10-minute-interval observation supported by information and communication technology, in ecosystem observation.
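
As an illustration of working with such 10-minute records, the pandas sketch below resamples dendrometer and temperature readings to daily values, computes daily radial increments, and flags the first day the daily mean air temperature reaches about 10 ℃. The CSV path and column names are assumptions, not the study's actual data format.

```python
# Minimal sketch of aggregating 10-minute sensor records to daily values and
# flagging when the daily mean ambient temperature first reaches about 10 °C.
# The CSV path and column names ("timestamp", "radius_um", "air_temp_c") are
# assumptions for illustration, not the study's actual data format.
import pandas as pd

records = pd.read_csv("dendrometer_2016.csv", parse_dates=["timestamp"])
records = records.set_index("timestamp").sort_index()

daily = pd.DataFrame({
    "radius_um": records["radius_um"].resample("D").mean(),
    "air_temp_c": records["air_temp_c"].resample("D").mean(),
})
daily["radial_increment_um"] = daily["radius_um"].diff()  # day-to-day change

warm_days = daily.index[daily["air_temp_c"] >= 10.0]
if len(warm_days) > 0:
    print("first day with daily mean air temperature >= 10 °C:", warm_days[0].date())
print(daily[["radial_increment_um", "air_temp_c"]].describe())
```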