• Title/Summary/Keyword: Mechanical Characteristics Analysis (기계적 특성해석)

Search Results: 1,190, Processing Time: 0.028 seconds

Exploring Factors to Minimize Hallucination Phenomena in Generative AI - Focusing on Consumer Emotion and Experience Analysis - (생성형AI의 환각현상 최소화를 위한 요인 탐색 연구 - 소비자의 감성·경험 분석을 중심으로-)

  • Jinho Ahn;Wookwhan Jung
    • Journal of Service Research and Studies
    • /
    • v.14 no.1
    • /
    • pp.77-90
    • /
    • 2024
  • This research aims to investigate methods of leveraging generative artificial intelligence in service sectors where consumer sentiment and experience are paramount, focusing on minimizing hallucination phenomena during usage and developing strategic services tailored to consumer sentiment and experiences. To this end, the study examined both mechanical approaches and user-generated prompts, experimenting with factors such as business item definition, provision of persona characteristics, examples and context-specific imperative verbs, and the specification of output formats and tone concepts. The research explores how generative AI can contribute to enhancing the accuracy of personalized content and user satisfaction. Moreover, these approaches play a crucial role in addressing issues related to hallucination phenomena that may arise when applying generative AI in real services, contributing to consumer service innovation through generative AI. The findings demonstrate the significant role generative AI can play in richly interpreting consumer sentiment and experiences, broadening the potential for application across various industry sectors and suggesting new directions for consumer sentiment and experience strategies beyond technological advancements. However, as this research is based on the relatively novel field of generative AI technology, there are many areas where it falls short. Future studies need to explore the generalizability of research factors and the conditional effects in more diverse industrial settings. Additionally, with the rapid advancement of AI technology, continuous research into new forms of hallucination symptoms and the development of new strategies to address them will be necessary.
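The prompt factors the abstract lists (business item definition, persona characteristics, examples, context-specific imperative verbs, and output format/tone) can be sketched as a template. This is an illustrative sketch only; the study does not publish its prompts, and every field name and string below is a hypothetical example of those factor categories:

```python
# Illustrative sketch only: the study's actual prompts are not published, so all
# wording here is hypothetical, matching its factor categories (business item,
# persona, grounding examples, imperative constraints, output format and tone).

def build_prompt(business_item, persona, examples, output_format, tone):
    """Assemble a prompt that constrains a generative model using the
    factor categories examined in the study."""
    example_lines = "\n".join(f"- {e}" for e in examples)
    return (
        f"Business item: {business_item}\n"          # business item definition
        f"You are {persona}.\n"                      # persona characteristics
        f"Reference examples:\n{example_lines}\n"    # grounding examples
        f"Answer ONLY in {output_format}, "          # output format specification
        f"in a {tone} tone. "                        # tone concept
        "Cite the example you relied on; if none applies, say 'unknown'."
    )

prompt = build_prompt(
    business_item="premium coffee subscription service",
    persona="a customer-experience analyst",
    examples=["Subscriber churn rose 4% in Q2", "Top complaint: delivery delays"],
    output_format="a 3-bullet summary",
    tone="empathetic",
)
```

Constraining the model to grounded examples and an explicit "say 'unknown'" escape hatch is one common way such prompt factors are hypothesized to reduce hallucination.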

Analysis of Fluidization in a Fluidized Bed External Heat Exchanger using Barracuda Simulation (바라쿠다 시뮬레이션을 이용한 유동층 외부 열교환기의 유동해석)

  • Lee, Jongmin;Kim, Dongwon;Park, Kyoungil;Lee, Gyuhwa
    • Korean Chemical Engineering Research
    • /
    • v.58 no.4
    • /
    • pp.642-650
    • /
    • 2020
  • In general, the circulation path of the fluidized particles in a CFB (Circulating Fluidized Bed) boiler is such that the particles entrained from the combustor are collected by a cyclone and recirculated to the combustor via a sealpot, one of the non-mechanical valves. However, when a fluidized bed heat exchanger (FBHE) is installed to additionally absorb heat from the fluidized particles, some particles in the sealpot pass through the FBHE and then flow into the combustor. In the FBHE, operated in the bubbling fluidization regime, if the heat flow is not evenly distributed due to poor mixing of the hot particles (800~950 ℃) flowing in from the sealpot, the heat exchanger tubes can be locally heated and damaged, and particle agglomeration can also occur through the formation of hot spots. This may affect the stable operation of the circulating fluidized bed. In this study, the unevenness of heat flow arising from structural problems in the FBHE of a domestic D-CFB boiler was identified through operating data analysis and CPFD (Computational Particle Fluid Dynamics) simulation using Barracuda VR. The temperature of the heat exchanger tubes in the FBHE showed the closest correlation with the change in particle temperature of the sealpot. It was also found that the non-uniformity of the heat flow was caused by channeling of hot particles flowing in from the sealpot, and that this non-uniformity was difficult to eliminate even when the fluidizing velocity of the FBHE was increased enough to fluidize the hot particles vigorously. When a premixing zone for hot particles flowing in from the sealpot was installed, and when the structure was changed by symmetrizing the FBHE discharge line for particles reflowing into the combustor, particle mixing and the uniformity of heat flow increased considerably.
Therefore, it could be suggested that the structural modification of the FBHE, related to premixing and symmetric flow of hot particles, is an alternative to reduce the non-uniformity of the heat flow and to minimize the poor particle mixing.

Use of Chicken Meat and Processing Technologies (가금육의 이용과 가공기술)

  • Ahn, Dong-Uk
    • Proceedings of the Korea Society of Poultry Science Conference
    • /
    • 2003.07b
    • /
    • pp.67-88
    • /
    • 2003
  • The consumption of poultry meat (chicken and turkey) grew the most during the past few decades due to several contributing factors such as low price, product research and development, favorable meat characteristics, responsiveness to consumer needs, vertical integration and industry consolidation, new processing equipment and technology, and aggressive marketing. The major processing technologies developed and used in chicken processing include forming/restructuring, tumbling, curing, smoking, massaging, injection, marination, emulsifying, breading, battering, shredding, dicing, and individual quick freezing. These processing technologies were applied to various parts of the chicken, including the whole carcass. Product development using breast, thigh, and mechanically separated chicken meat greatly increased the utilization of poultry meat. Chicken breast became a symbol of healthy food, which made chicken one of the most frequent menu items in restaurants. However, the use of and product development for dark meat, which includes thigh, drum, and wings, were rather limited due to the comparatively high fat content of dark meat. The majority of chicken is currently sold in further-processed, ready-to-cook or ready-to-eat forms. Major quality issues in chicken meat include pink color problems in uncured cooked breast, lipid oxidation and off-flavor, tenderness, PSE breast, and food safety. Research and development to ensure the safety and quality of raw and cooked chicken meat using new processing technologies will remain the major issues in the future. In particular, the application of irradiation to raw and cooked chicken meat products is expected to increase dramatically within the next 5 years. The market share of ready-to-eat cooked meat products will increase. More portion-controlled finished products, dark meat products, and organic and ethnic products with various packaging approaches will also be introduced.


A Study on the Value-Relevance of Intangible Expenditure: compare high-technology firms to low-technology firms (첨단산업과 비첨단산업의 무형자산성 지출의 가치관련성에 대한 비교연구)

  • Lee, Chae Ri
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.9 no.1
    • /
    • pp.153-164
    • /
    • 2014
  • This study investigates the effects of intangible expenditures such as research & development, education & training, and advertising on the firm values of high-technology and low-technology firms listed in the KOSDAQ market, and analyzes the relationship between audit quality and the value relevance of intangible expenditures. Tobin's Q is used as a proxy for firm value. The sample period for the empirical analysis is from 2003 to 2008, and the sample consists of KOSDAQ-listed manufacturing companies with December fiscal year-ends, excluding financial firms. In total, data from about 305 companies are used in the analysis. The results are as follows. First, research & development and education & training expenditures of high-technology firms affect firm value, and education & training expenditures of low-technology firms affect firm value. Second, we find that audit quality (BIG4) increases the value relevance of R&D expenditures of high-technology firms and of education & training expenditures of low-technology firms. This paper is meaningful in that it verified the value relevance of intangible expenditures by comparing high-technology firms with low-technology firms.
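The abstract names Tobin's Q as the proxy for firm value. Its exact specification in the paper is not given, so the sketch below uses the common approximation (market value of equity + book value of debt) / book value of total assets, with hypothetical numbers:

```python
# Hedged sketch: the paper's exact Tobin's Q specification is not stated in the
# abstract; this is the common approximation with hypothetical inputs.

def tobins_q(market_cap, book_debt, total_assets):
    """Approximate Tobin's Q; values above 1 suggest the market prices
    intangibles (R&D, training, advertising) beyond book assets."""
    return (market_cap + book_debt) / total_assets

# A hypothetical high-technology firm whose market value exceeds book assets:
q = tobins_q(market_cap=500.0, book_debt=200.0, total_assets=550.0)
```

Here q ≈ 1.27 > 1, the kind of premium over book value that intangible expenditures are hypothesized to explain.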


Development Status of MIRIS, the Main Payload of the Science and Technology Satellite 3 (과학기술위성 3호 주탑재체 MIRIS 개발 현황)

  • Han, Won-Yong;Lee, Dae-Hui;Park, Yeong-Sik;Jeong, Ung-Seop;Lee, Chang-Hui;Mun, Bong-Gon;Park, Seong-Jun;Cha, Sang-Mok;Pyo, Jeong-Hyeon;Ga, Neung-Hyeon;Lee, Deok-Haeng;Park, Jang-Hyeon;Seon, Gwang-Il;Nam, Uk-Won;Yang, Sun-Cheol;Lee, Seung-U;Park, Jong-O;Lee, Hyeong-Mok;Toshio, Matsumoto
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.35 no.2
    • /
    • pp.55.2-55.2
    • /
    • 2010
  • The Korea Astronomy and Space Science Institute (KASI) is developing the Multipurpose Infra-Red Imaging System (MIRIS), the main payload of the Science and Technology Satellite 3. This R&D project was finally selected from among some ten candidate payload proposals in the 2007 main-payload competition for the satellite held by the Ministry of Education, Science and Technology; after a three-year development period, the flight model (FM) is now under development, with launch targeted for 2011. MIRIS is the second space telescope developed domestically in Korea, following the Far-ultraviolet IMaging Spectroscope (FIMS), the main payload of the Science and Technology Satellite 1, which KASI developed and successfully launched in 2003. MIRIS will observe the Paschen-alpha emission line and the broadband I and H bands in the 0.9~2 micron infrared region from space. Its main scientific missions are to study the origin of the warm ionized medium (WIM) distributed within our Galaxy, which is still not well understood by the international astronomical community, as well as the characteristics of Galactic interstellar turbulence and the large-scale structure of the cosmic infrared background (CIB). In particular, because MIRIS will observe from space at a cryogenic temperature (77 K, about -200℃), related technologies with a weak domestic research base are being developed, including cryogenic optics and mechanical design, cryogenic cooling and thermal analysis/design, infrared sensor technology, and data processing. Based on these core technologies, the development of an infrared space telescope, never before attempted in Korea, is expected to serve as a fundamental source technology for related space technology fields in Korea.


Numerical Study on the Effect of Diesel Injection Parameters on Combustion and Emission Characteristics in RCCI Engine (RCCI 엔진의 디젤 분사 파라미터에 따른 연소 및 배출가스 특성에 대한 수치적 연구)

  • Ham, Yun-Young;Min, Sunki
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.6
    • /
    • pp.75-82
    • /
    • 2021
  • Low-temperature combustion (LTC) strategies, such as HCCI (Homogeneous Charge Compression Ignition), PCCI (Premixed Charge Compression Ignition), and RCCI (Reactivity Controlled Compression Ignition), have been developed to effectively reduce NOx and PM while increasing the thermal efficiency of diesel engines. Through numerical analysis, this study examined the effects of the injection timing and two-stage injection ratio of diesel, the highly reactive fuel, on the performance and exhaust gas of an RCCI engine using gasoline as the low-reactivity fuel. In the case of two-stage injection, combustion slows down if the first injection timing is too advanced; the combustion temperature decreases, resulting in lower combustion performance and an increase in HC and CO. A first injection timing of approximately -60°ATDC is considered optimal, considering combustion performance, exhaust gas, and the maximum pressure rise rate. When the second injection timing was varied, approximately -30°ATDC was judged optimal by the same criteria. In the case of two-stage injection, the optimal result was obtained when the first injection amount was set to approximately 60%. Overall, two-stage injection rather than single injection was considered more effective for combustion performance and exhaust gas.

Numerical Modeling of Hydrogen Embrittlement-induced Ductile Fracture Using a Gurson-Cohesive Model (GCM) and Hydrogen Diffusion (Gurson-Cohesive Model(GCM)과 수소 확산 모델을 결합한 수소 취화 파괴 해석 기법)

  • Jihyuk Park;Nam-Su Huh;Kyoungsoo Park
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.4
    • /
    • pp.267-274
    • /
    • 2024
  • Hydrogen embrittlement fracture poses a challenge in ensuring the structural integrity of materials exposed to hydrogen-rich environments. This study advances our comprehension of hydrogen-induced fracture through an integrated numerical modeling approach that couples a ductile fracture model, the Gurson-cohesive model (GCM), with hydrogen diffusion analysis. The GCM combines the Gurson model, which describes continuum damage evolution, with the cohesive zone model, which describes crack-surface discontinuity and softening behavior; porosity and stress triaxiality are considered as crack initiation criteria. A hydrogen diffusion analysis is integrated with the GCM to account for the hydrogen-enhanced decohesion (HEDE) mechanism and its impact on crack initiation and propagation. The framework considers the influence of hydrogen on the softening behavior of the traction-separation relationship on the discontinuous crack surface. Parametric studies explore the sensitivity to diffusion properties and hydrogen-induced fracture properties. By combining numerical models of hydrogen diffusion and the ductile fracture model, this study provides an understanding of hydrogen-induced fracture and thereby contributes to ongoing efforts to design materials resilient to hydrogen embrittlement in practical engineering applications.
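For context, the Gurson part of such a model is conventionally written in the Gurson–Tvergaard–Needleman (GTN) form. The abstract does not give the paper's exact formulation or calibration, so the following is the standard textbook yield function, not necessarily the authors' variant:

```latex
% Standard GTN yield function (textbook form; q_i calibrations are model-specific)
\Phi = \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_y}\right)^{2}
     + 2\,q_1 f^{*} \cosh\!\left(\frac{3\,q_2\,\sigma_m}{2\,\sigma_y}\right)
     - \left(1 + q_3 {f^{*}}^{2}\right) = 0
```

where $\sigma_{\mathrm{eq}}$ is the von Mises equivalent stress, $\sigma_m$ the mean stress, $\sigma_y$ the matrix yield stress, and $f^{*}$ the effective porosity; the stress triaxiality used as a crack-initiation criterion is the ratio $\sigma_m/\sigma_{\mathrm{eq}}$.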

Prediction of Key Variables Affecting NBA Playoffs Advancement: Focusing on 3 Points and Turnover Features (미국 프로농구(NBA)의 플레이오프 진출에 영향을 미치는 주요 변수 예측: 3점과 턴오버 속성을 중심으로)

  • An, Sehwan;Kim, Youngmin
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.263-286
    • /
    • 2022
  • This study acquires NBA statistical information for a total of 32 years, from 1990 to 2022, using web crawling, observes variables of interest through exploratory data analysis, and generates related derived variables. Unused variables were removed through a purification process on the input data, and correlation analysis, t-tests, and ANOVA were performed on the remaining variables. For each variable of interest, the difference in means between the groups that did and did not advance to the playoffs was tested, and to corroborate this, the mean differences among three groups (upper/middle/lower) based on ranking were reconfirmed. Of the input data, only this year's season data was used as the test set, and 5-fold cross-validation was performed by dividing the remainder into training and validation sets for model training. The overfitting problem was addressed by comparing the cross-validation results with the final results on the test set and confirming that there was no difference in the performance metrics. Because the quality of the raw data is high and the statistical assumptions are satisfied, most of the models showed good results despite the small data set. This study not only predicts NBA game results and classifies playoff advancement using machine learning, but also examines whether the variables of interest rank among the major variables by assessing the importance of input attributes. Visualizing SHAP values made it possible to overcome the limitation of interpreting results through feature importance alone, and to compensate for the inconsistency of importance calculations across variable entry/removal. A number of variables related to three points and turnovers, classified as subjects of interest in this study, were found among the major variables affecting playoff advancement in the NBA.
Although this study is similar to existing sports data analysis work in covering topics such as match results, playoffs, and championship prediction, and in comparatively analyzing several machine learning models, it differs in that the features of interest were set in advance and statistically verified before being compared with the machine learning results. It is also differentiated from existing studies by presenting explanatory visualization results using SHAP, one of the XAI techniques.
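The 5-fold split described above can be sketched in pure Python. This is a minimal illustration only; it does not reproduce the study's actual pipeline (its models, SHAP analysis, or the held-out current season as the test set):

```python
# Minimal sketch of a 5-fold cross-validation split in pure Python; each fold
# serves once as the validation set while the rest form the training set.

def k_fold_indices(n_samples, k=5):
    """Partition sample indices into k disjoint validation folds; for each
    fold, the remaining indices form the training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        valid = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((sorted(train), sorted(valid)))
    return splits

splits = k_fold_indices(10, k=5)
# Each of the 5 splits holds out 2 of the 10 samples; the folds cover all
# samples exactly once, so every observation is validated on exactly once.
```

Comparing the average validation score across the five folds against the test-set score, as the study does, is a standard check that the model is not overfitting.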

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.259-278
    • /
    • 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services. At the same time, problems related to reliability and unexpected social controversy are also occurring due to the nature of data-based machine learning. Based on this background, this study aimed to contribute to improving trust in AI-based financial services by proposing a checklist to secure fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone could enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through literature research, we divided the entire fairness-related operation process into three areas: data, algorithms, and users. For each area, we constructed four detailed considerations for evaluation, resulting in 12 checklist items. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP), using three groups representing the full range of financial stakeholders: financial field workers, artificial intelligence field workers, and general users. The three groups were classified and analyzed according to the importance each assigned, and from a practical perspective, specific checks were identified, such as feasibility verification for using learning data and non-financial information, and monitoring newly inflowing data. Moreover, general financial consumers were found to place high importance on the accuracy of result analysis and bias checks. We expect these results to contribute to the design and operation of fair AI-based financial services.
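The AHP step mentioned above can be sketched with the common geometric-mean approximation of the priority vector. The pairwise judgments below are hypothetical, not the study's survey results:

```python
# Illustrative AHP sketch using the geometric-mean (approximate eigenvector)
# method on a reciprocal pairwise comparison matrix. The matrix values are
# hypothetical and do not come from the study's stakeholder surveys.
import math

def ahp_weights(pairwise):
    """Derive priority weights from a reciprocal pairwise comparison matrix."""
    n = len(pairwise)
    # The geometric mean of each row approximates the principal eigenvector.
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical comparisons among the three checklist areas (data, algorithm,
# user): data judged 2x as important as algorithm, 4x as important as user.
matrix = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]
weights = ahp_weights(matrix)  # → [4/7, 2/7, 1/7] ≈ [0.571, 0.286, 0.143]
```

A full AHP study would also compute a consistency ratio on each respondent's matrix; that check is omitted here for brevity.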

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many studies have been conducted in academia on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid growth of online business, companies carry out various types of campaigns at a level that cannot be compared to the past. However, customers tend to perceive campaigns as spam as fatigue from duplicate exposure increases. From a corporate standpoint, the effectiveness of campaigns is also decreasing while the cost of investing in them rises, which leads to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system's ultimate purpose is to increase the success rate of campaigns by collecting and analyzing various customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign response using machine learning. Selecting appropriate features is very important because of the many features in campaign data. If all of the input data are used to classify a large amount of data, learning time grows as the classification task expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary process for analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when there are many features, these methods suffer from poor classification-prediction performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in the process of searching for the feature subsets that underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are first derived and features with a negative effect are removed; the sequential method is then applied to increase search efficiency and enable generalized prediction. The proposed model was confirmed to show better search and prediction performance than the traditional greedy algorithm. Campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to help analyze and interpret prediction results by providing the importance of the derived features. These included features already known statistically to be important, such as age, customer rating, and sales.
Unexpectedly, features that campaign planners had rarely used to select campaign targets, such as the bundled product name, the average three-month data consumption rate, and wireless data usage over the last three months, were also selected as important features for campaign response. It was confirmed that base attributes can be very important features depending on the campaign type, making it possible to analyze and understand the important characteristics of each campaign type.
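The plain SFS baseline that the proposed algorithm improves on can be sketched as follows. The score function here is a hypothetical stand-in for the cross-validated model performance used in practice, and the feature values are toy data:

```python
# Sketch of plain SFS (Sequential Forward Selection): greedily add the feature
# that most improves the score until k features are selected. The score
# function is a stand-in for cross-validated model performance.

def sfs(features, score, k):
    """Return k features chosen by greedy forward selection."""
    selected = []
    while len(selected) < k:
        best_feature, best_score = None, float("-inf")
        for f in features:
            if f in selected:
                continue
            s = score(selected + [f])  # evaluate the candidate subset
            if s > best_score:
                best_feature, best_score = f, s
        selected.append(best_feature)
    return selected

# Toy score: each feature contributes an individual value; noise features hurt.
values = {"age": 3.0, "rating": 2.5, "sales": 2.0, "noise": -1.0}
chosen = sfs(list(values), lambda subset: sum(values[f] for f in subset), k=3)
# → ['age', 'rating', 'sales']; the noise feature is never picked.
```

SFFS extends this by allowing conditional backward steps (removing a previously selected feature when doing so improves the score), which is the sequential search the study refines with statistical pre-screening.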