• Title/Summary/Keyword: Processing Software

Effect of Cognitive Affordance of Interactive Media Art Content on the Interaction and Interest of Audience (인터랙티브 미디어아트 콘텐츠의 인지적 어포던스가 관람자의 인터랙션과 흥미에 미치는 영향)

  • Lee, Gangso;Choi, Yoo-Joo
    • KIPS Transactions on Software and Data Engineering / v.5 no.9 / pp.441-450 / 2016
  • In this study, we investigate how the level of cognitive affordance, which conveys an explicit interaction method, affects viewer interest. A viewer's recognition of the interaction method is a matter of cognitive affordance: the visual-perceptual exposure of the input device and the viewer's cognition of it. The ultimate goal of research on affordance is to enhance audience participation rather than merely smooth interaction. Many interactive media artworks have been designed to hide explicit explanations of the interaction method, out of concern that such explanations may hinder the impressions that lead the viewer to an aesthetic experience and sustained interest. In this context, we set up two hypotheses on cognitive affordance. First, the more explicit the explanation of the interaction method, the better the viewer's understanding of it. Second, the more explicit the explanation of the interaction method, the lower the viewer's interest. An interactive media artwork was produced in three versions that vary in the degree of visual-perceptual information provided, and we analyzed audience participation and interest for each version. The experiments showed that the version with the most explicit interaction cues yielded longer viewing times and higher participation and interest, whereas the version with an unexplicit interaction method yielded low interest and satisfaction. Therefore, the hypothesis that a more explicit explanation of interaction would lower the viewer's curiosity and exploratory interest was rejected. We confirmed that improving cognitive affordance raised both interaction with the artwork and viewer interest in the proposed interactive content. This study implies that interactive media artworks should be designed with the awareness that audience interaction and interest can drop when cognitive affordance is low.
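
As an illustration only, here is a minimal sketch of how interest ratings from the three artwork versions could be compared; the one-way ANOVA choice and all scores below are assumptions, not the paper's reported analysis.

```python
# Hypothetical interest ratings (e.g., 5-point scale) from viewers of each
# version: high / medium / low explicitness of the interaction method.
from scipy import stats

high_explicit = [4.5, 4.2, 4.8, 4.0, 4.6]
mid_explicit = [3.8, 3.5, 4.0, 3.6, 3.9]
low_explicit = [2.9, 3.1, 2.7, 3.3, 3.0]

# One-way ANOVA: does mean interest differ across the three versions?
f_stat, p_value = stats.f_oneway(high_explicit, mid_explicit, low_explicit)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```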

Enhanced Production of Carboxymethylcellulase by a Newly Isolated Marine Microorganism Bacillus atrophaeus LBH-18 Using Rice Bran, a Byproduct from the Rice Processing Industry (미강을 이용한 해양미생물 Bacillus atrophaeus LBH-18 유래의 carboxymethylcellulase 생산의 최적화)

  • Kim, Yi-Joon;Cao, Wa;Lee, Yu-Jeong;Lee, Sang-Un;Jeong, Jeong-Han;Lee, Jin-Woo
    • Journal of Life Science / v.22 no.10 / pp.1295-1306 / 2012
  • A microorganism producing carboxymethylcellulase (CMCase) was isolated from seawater and identified as Bacillus atrophaeus; the strain was designated B. atrophaeus LBH-18 based on its evolutionary distance and on the phylogenetic tree obtained from 16S rDNA sequencing with the neighbor-joining method. The optimal concentrations of rice bran (68.1 g/l) and peptone (9.1 g/l) and the optimal initial pH (7.0) of the medium for cell growth were determined with Design Expert software based on the response surface method; the corresponding conditions for CMCase production were 55.2 g/l, 6.6 g/l, and pH 7.1, respectively. The optimal temperature for both cell growth and CMCase production by B. atrophaeus LBH-18 was 30°C. In a 7-l bioreactor, the optimal agitation speed and aeration rate were 324 rpm and 0.9 vvm for cell growth, and 343 rpm and 0.6 vvm for CMCase production. The optimal inner pressure for cell growth and CMCase production in a 100-l bioreactor was 0.06 MPa. Maximal production of CMCase under optimal conditions in the 100-l bioreactor was 127.5 U/ml, 1.32 times higher than that without inner pressure. In this study, rice bran was developed as a carbon source for industrial-scale production of CMCase by B. atrophaeus LBH-18. Shortening the production time of CMCase from 7-10 days to 3 days by using a bacterial strain under submerged fermentation also increased CMCase productivity and reduced its production cost.
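
For readers unfamiliar with the response surface method used above, a minimal sketch follows; the experimental runs, yield values, and ranges are invented, and sklearn/scipy stand in for the Design Expert software the authors actually used.

```python
# Fit a second-order response surface to hypothetical runs, then locate
# the predicted optimum inside the tested factor ranges.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Hypothetical runs: [rice bran g/l, peptone g/l, initial pH]
X = np.array([
    [40, 4, 6.5], [40, 8, 7.5], [60, 4, 7.5], [60, 8, 6.5],
    [50, 6, 7.0], [70, 6, 7.0], [30, 6, 7.0], [50, 10, 7.0],
    [50, 2, 7.0], [50, 6, 8.0], [50, 6, 6.0], [55, 7, 7.0],
])
y = np.array([70, 88, 95, 102, 110, 108, 75, 92, 78, 90, 85, 115])  # U/ml

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Maximize the fitted surface (minimize its negative) within the ranges.
res = minimize(lambda x: -model.predict(poly.transform([x]))[0],
               x0=[50, 6, 7.0],
               bounds=[(30, 70), (2, 10), (6.0, 8.0)])
print("predicted optimum [rice bran, peptone, pH]:", res.x.round(2))
```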

Prediction of Correct Answer Rate and Identification of Significant Factors for CSAT English Test Based on Data Mining Techniques (데이터마이닝 기법을 활용한 대학수학능력시험 영어영역 정답률 예측 및 주요 요인 분석)

  • Park, Hee Jin;Jang, Kyoung Ye;Lee, Youn Ho;Kim, Woo Je;Kang, Pil Sung
    • KIPS Transactions on Software and Data Engineering / v.4 no.11 / pp.509-520 / 2015
  • The College Scholastic Ability Test (CSAT) is the primary test for evaluating the academic achievement of high-school students and is used by most universities in South Korea for admission decisions. Because its level of difficulty is a significant issue for both students and universities, the government makes a huge effort to keep the difficulty level consistent every year. Nevertheless, the actual levels of difficulty have fluctuated significantly, which causes many problems with university admission. In this paper, we build two types of data-driven prediction models that predict the correct answer rate and identify significant factors for the CSAT English test from accumulated test data, unlike traditional methods that depend on experts' judgments. First, we derive candidate question-specific factors that can influence the correct answer rate, such as position, EBS relation, and readability, from ten years of annual CSAT practice tests and CSAT exams. In addition, we derive context-specific factors by employing topic modeling, which identifies the underlying topics in the question texts. The correct answer rate is then predicted by multiple linear regression, and the level of difficulty is predicted by a classification tree. The experimental results show that the difficulty (difficult/easy) classification model achieves 90% accuracy, while the error rate for the correct answer rate is below 16%. Points and problem category are found to be critical for predicting the correct answer rate, which is also influenced by some of the topics discovered by topic modeling. Based on our study, it becomes possible to predict the range of the expected correct answer rate at both the question level and the whole-test level, which will help CSAT examiners control the level of difficulty.
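
A minimal sketch of the two model types described above: multiple linear regression for the correct answer rate and a classification tree for difficulty. The features and data are simulated, not the paper's actual CSAT factors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical columns: points, position in test, readability, topic-1 weight
X = rng.random((200, 4))
rate = 0.9 - 0.5 * X[:, 0] - 0.2 * X[:, 2] + rng.normal(0, 0.05, 200)

# Regression: predict each question's correct answer rate.
reg = LinearRegression().fit(X, rate)
print("coefficients:", reg.coef_.round(3))

# Classification: label a question 'difficult' when its rate is below 0.5.
clf = DecisionTreeClassifier(max_depth=3).fit(X, rate < 0.5)
print("train accuracy:", clf.score(X, rate < 0.5).round(3))
```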

Research on the influence of web celebrity live broadcast on consumer's purchase intention - Adjusting effect of web celebrity live broadcast contextualization

  • Zou, Ji-Kai;Guo, Han-Wen;Liu, Zi-Yang
    • Journal of the Korea Society of Computer and Information / v.25 no.6 / pp.239-250 / 2020
  • The purpose of this paper is to explore how the "contextualization" effect of web celebrity live broadcasts on e-commerce platforms influences consumers' perception of product value and risk and their purchase intention. Taking Taobao shopping consumers as the research object, we adopted a questionnaire survey distributed online. To further validate the actual influence of the contextualization effect of web celebrity live broadcasts on consumer purchase intention, the questionnaire was designed with a seven-point Likert scale, and the collected responses were processed and analyzed with the statistical software SPSS 23.0 and AMOS 22.0. After confirming the reliability and validity of the questionnaire, exploratory factor analysis was used to verify the hypotheses and to estimate the actual moderating effect of contextualization on consumers' purchase intention. The research results are summarized as follows: (1) consumers' perceived value of products significantly and positively affects their purchase intention, while perceived risk has a significantly negative impact on it; (2) consumers' trust in products and their purchase intention are moderated by the contextualization of the live broadcast. Specifically, for web celebrity live broadcasts with a good contextualization effect, the perceived value of products has a stronger positive impact on product trust than for broadcasts with a poor contextualization effect. In resolving consumers' perceived risks, broadcasts with good contextualization are also significantly better than those with poor contextualization. Based on this empirical analysis, the paper concludes that web celebrity live broadcasting will become a new breakthrough for the sustainable growth of the e-commerce industry, and puts forward suggestions on e-commerce marketing modes and the transformation of the web celebrity live broadcasting industry.
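
A minimal sketch of a moderation test in the spirit of the analysis above, using an interaction term in ordinary least squares; the data are simulated, and the authors' actual SPSS/AMOS procedure is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "perceived_value": rng.normal(4, 1, n),      # 7-point Likert style
    "contextualization": rng.normal(4, 1, n),
})
df["purchase_intention"] = (
    0.5 * df["perceived_value"]
    + 0.2 * df["contextualization"]
    + 0.15 * df["perceived_value"] * df["contextualization"]
    + rng.normal(0, 0.5, n)
)

# The '*' in the formula expands to both main effects plus the interaction;
# a significant interaction coefficient indicates a moderating effect.
model = smf.ols("purchase_intention ~ perceived_value * contextualization",
                data=df).fit()
print(model.summary().tables[1])
```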

Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.271-278 / 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems have difficulty detecting intelligent attacks that deviate from the stored patterns. To address this, deep learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based systems depending on where they are installed. Unlike a network-based system, a host-based intrusion detection system has the disadvantage of having to observe the entire inside and outside of the system, but it has the advantage of being able to detect intrusions that a network-based system cannot. Therefore, this study focuses on a host-based intrusion detection system. To evaluate and improve the performance of the model, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS), published in 2018. For the performance evaluation, the 1D vector data are converted into 3D image data so that the similarity of samples can be assessed and each sample can be identified as normal or abnormal. Deep learning models also have the drawback of needing to be retrained whenever a new cyber attack method appears, which is inefficient because learning from a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN), a few-shot learning approach that performs well while learning from only a small amount of data. Siamese-CNN determines whether attacks are of the same type from the similarity score between samples of cyber attacks converted into images. The accuracy was calculated using the few-shot learning technique, and the performance of a Vanilla Convolutional Neural Network (Vanilla-CNN) and Siamese-CNN was compared. Measuring accuracy, precision, recall, and F1-score confirmed that the recall of the Siamese-CNN model proposed in this study was about 6% higher than that of the Vanilla-CNN model.
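
A minimal sketch of a Siamese CNN of the kind the paper proposes, assuming system-call traces already converted to one-channel images; the layer sizes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SiameseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # A single encoder whose weights are shared by both inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64),
        )

    def forward(self, a, b):
        ea, eb = self.encoder(a), self.encoder(b)
        # Similarity in (0, 1]: 1 means identical embeddings (same attack type).
        return torch.exp(-torch.norm(ea - eb, dim=1))

model = SiameseCNN()
x1 = torch.randn(4, 1, 32, 32)  # batch of trace images
x2 = torch.randn(4, 1, 32, 32)
print(model(x1, x2))            # pairwise similarity scores
```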

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.431-438 / 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour as follows. The first step is to obtain the Q image from the transform of RGB into YIQ. The second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying a threshold to the Q image. For each candidate contour, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first concerns the lip corner points: the variance calculation depends on many skin pixels, so accuracy decreases, which affects the split of the Q image. Second, no color systems other than YIQ were analyzed; YIQ works well, but other systems such as HSV, CIELUV, and YCbCr should also be considered. The final problem concerns the selection of the optimal contour: the maximum of the average feature variance of the pixels near the contour points is used, which shrinks the extracted contour relative to the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, yielding a 30% performance increase. For the second problem, the HSV, CIELUV, and YCbCr coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted instead of the maximum of the average feature variance, yielding a 46% performance increase. Combining all these solutions, the proposed method achieves twice the accuracy and stability of the conventional method.
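
A minimal sketch of the Q-channel thresholding step the method builds on; the RGB-to-YIQ coefficients are the standard NTSC values, and the threshold sweep is a stand-in for the paper's multi-threshold candidate generation.

```python
import numpy as np

def rgb_to_q(rgb):
    """Return the Q component of the YIQ transform for an RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.211 * r - 0.523 * g + 0.312 * b

rgb = np.random.rand(64, 64, 3)      # stand-in for a mouth-region crop
q = rgb_to_q(rgb)

# Sweep several thresholds; each binary mask is one candidate lip region,
# from which a candidate contour could be traced and scored.
for t in np.linspace(q.min(), q.max(), 5)[1:-1]:
    mask = q > t
    print(f"threshold {t:+.3f}: {mask.sum()} lip-candidate pixels")
```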

Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.41-50 / 2022
  • In this paper, a method to directly calculate the major components of skin color, such as melanin and hemoglobin, from the RGB signal of a camera is proposed. The major components of skin color are typically obtained by measuring spectral reflectance with dedicated equipment and recombining the values at certain wavelengths of the measured light. Quantities calculated in this way include the melanin index and the erythema index, and they require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct calculation of such components from a general digital camera is hard to find; instead, a method of indirectly calculating the concentrations of melanin and hemoglobin using independent component analysis has been proposed. That method targets a region of an RGB image, extracts characteristic vectors of melanin and hemoglobin, and calculates the concentrations in a manner similar to principal component analysis. Its disadvantages are that per-pixel calculation is difficult, because a group of pixels in a region is used as input, and that the extracted feature vector, obtained by an optimization method, tends to differ each time it is computed. The final output is determined as an image representing the melanin and hemoglobin components, produced by converting back to the RGB coordinate system without using the feature vector itself. To overcome these disadvantages, the proposed method calculates the component values of melanin and hemoglobin in a feature space, rather than in the RGB coordinate system, using the feature vector; it then calculates the spectral reflectance corresponding to the skin color captured with a general digital camera, and from the spectral reflectance calculates the detailed components constituting skin pigments, such as melanin, oxidized hemoglobin, deoxidized hemoglobin, and carotenoid. The proposed method does not require special equipment such as a spectral reflectance measuring device or a multi-spectral camera; unlike the existing method, it allows direct per-pixel calculation and gives the same results on repeated runs. The standard deviation of the estimated melanin and hemoglobin densities with the proposed method was 15% of that of the conventional method, making it about six times more stable.
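
A minimal sketch of the indirect ICA approach the paper improves upon: separating two pigment components from mixed per-pixel signals. The mixing matrix and data are simulated, not taken from real skin images.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Simulate two hidden pigment densities mixed into three channels,
# mimicking log-RGB absorbance of skin pixels.
melanin = rng.random(1000)
hemoglobin = rng.random(1000)
mixing = np.array([[0.8, 0.3], [0.6, 0.7], [0.4, 0.9]])  # per-channel weights
signals = np.column_stack([melanin, hemoglobin]) @ mixing.T
signals += rng.normal(0, 0.01, signals.shape)            # small sensor noise

# ICA recovers the two independent pigment components from the mixture.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(signals)   # estimated pigment densities
print("estimated mixing matrix:\n", ica.mixing_.round(2))
```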

Analysis of Social Trends for Electric Scooters Using Dynamic Topic Modeling and Sentiment Analysis (동적 토픽 모델링과 감성 분석을 활용한 전동킥보드에 대한 사회적 동향 분석)

  • Kyoungok, Kim;Yerang, Shin
    • KIPS Transactions on Software and Data Engineering / v.12 no.1 / pp.19-30 / 2023
  • The electric scooter (e-scooter), a popular micro-mobility vehicle, has seen rapidly increasing use in many cities. In South Korea, the use of e-scooters has grown greatly since some companies launched e-scooter sharing services in a few large cities, starting with Seoul in 2018. However, e-scooter use is still controversial because of issues such as parking and safety. Since the perception of a means of transportation affects mode choice, it is necessary to track trends regarding e-scooters in order to make their use more active. Hence, this study aimed to analyze trends related to e-scooters. For this purpose, we analyzed news articles related to e-scooters published from 2014 to 2020, using dynamic topic modeling to extract issues and sentiment analysis to investigate how the degree of positive and negative opinion in news articles changed. The topic modeling extracted three topics, related to micro-mobility technologies, shared e-scooter services, and regulations for micro-mobility, and the proportion of the regulation topic increased as shared e-scooter services grew in recent years. In addition, the top positive words included quick, enjoyable, and easy, whereas the top negative words included threat, complaint, and illegal, which implies that people are satisfied with the convenience of e-scooters and e-scooter sharing services, but that safety and parking issues must be addressed for micro-mobility services to become more active. In conclusion, this study shows how issues and social trends related to e-scooters have changed and identifies the issues that need to be addressed. Moreover, the research framework combining dynamic topic modeling and sentiment analysis is expected to be helpful for determining social trends in various areas.
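
A minimal sketch in the spirit of the pipeline above, using per-year LDA as a simplified stand-in for dynamic topic modeling and a toy lexicon count for sentiment; the documents and word lists are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs_by_year = {
    2019: ["shared e-scooter service launched downtown",
           "e-scooter rides quick and easy for commuters"],
    2020: ["complaint over e-scooter parking on sidewalks",
           "new regulations proposed for e-scooter safety"],
}
positive = {"quick", "easy", "enjoyable"}
negative = {"complaint", "threat", "illegal"}

for year, docs in docs_by_year.items():
    # Fit a small topic model for this time slice.
    X = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    # Count lexicon hits as a crude sentiment signal.
    tokens = " ".join(docs).split()
    pos = sum(t in positive for t in tokens)
    neg = sum(t in negative for t in tokens)
    print(year, "topic-word matrix:", lda.components_.shape,
          "| positive:", pos, "negative:", neg)
```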

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea: Journal of the Korean Society of Oceanography / v.27 no.3 / pp.127-143 / 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These features facilitate ocean modeling experiments on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting an appropriate system, as this can help to minimize execution time and the amount of resources utilized. The effect of cache memory is large in the processing structure of an ocean numerical model, which performs input/output of data in multidimensional array structures, and network speed is important because of the communication patterns through which large amounts of data move. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking software package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems, to provide information for the transition of other ocean models into cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a huge amount of memory, and that memory latency is also important. Increasing the number of cores to reduce the running time is more effective with large grid sizes than with small ones. Our analysis results will be a helpful reference for constructing the best cloud computing system to minimize the time and cost of numerical ocean modeling.
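
A minimal sketch of a STREAM-style triad in NumPy, the kind of memory-bandwidth probe the study's STREAM benchmark performs; it is a rough stand-in for comparing cloud instances, not the official benchmark.

```python
import time
import numpy as np

n = 20_000_000                      # ~160 MB per float64 array
b = np.random.rand(n)
c = np.random.rand(n)
s = 3.0

start = time.perf_counter()
a = b + s * c                       # triad: reads b and c, writes a
elapsed = time.perf_counter() - start

# Roughly 3 arrays of 8 bytes per element move through memory.
print(f"triad bandwidth: {3 * 8 * n / elapsed / 1e9:.1f} GB/s")
```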

Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.11 / pp.471-480 / 2023
  • The construction order volume in South Korea grew significantly, from 91.3 trillion won in public orders in 2013 to a total of 212 trillion won in 2021, particularly in the private sector. As the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process following an EPC project award is limited, and reviewing all the risk terms in an ITB document is extremely challenging because of manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but data-related problems, such as the limited availability of labeled data and class imbalance, restricted practical use. Therefore, this study aims to develop an AI model that categorizes contract terms in detail based on the FIDIC Yellow 2017 (Federation Internationale des Ingenieurs-Conseils) contract-term standard, rather than defining and classifying risk terms as in previous research. A multi-class text classification capability is necessary because the contract terms that need detailed review vary with the scale and type of the project. To enhance the performance of the classification model, we developed an ELECTRA PLM (Pre-trained Language Model) capable of efficiently learning the context of text data from the pre-training stage, and conducted a four-step experiment to validate the model's performance. As a result, an ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted average F1-score of 76% in the classification of 57 contract terms.
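
A minimal sketch of the soft-voting idea behind the ITB-ELECTRA / Legal-BERT ensemble: averaging the two models' class probabilities over the 57 contract-term classes. The logits here are random placeholders; in practice each would come from a fine-tuned PLM's classification head.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_clauses, n_classes = 4, 57
rng = np.random.default_rng(0)
logits_electra = rng.normal(size=(n_clauses, n_classes))  # model 1 output
logits_bert = rng.normal(size=(n_clauses, n_classes))     # model 2 output

# Soft voting: average the probabilities, then take the argmax class.
probs = (softmax(logits_electra) + softmax(logits_bert)) / 2
print("predicted contract-term class per clause:", probs.argmax(axis=1))
```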