• Title/Summary/Keyword: Product machine

Application of deep learning technique for battery lead tab welding error detection (배터리 리드탭 압흔 오류 검출의 딥러닝 기법 적용)

  • Kim, YunHo;Kim, ByeongMan
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.27 no.2
    • /
    • pp.71-82
    • /
    • 2022
  • In order to replace the sampling tensile test of products produced in the tab welding process, one of the automotive battery manufacturing processes, vision inspectors are currently being developed and used. However, vision inspection suffers from inspection position errors and the cost of correcting them. To solve these problems, deep learning technology has recently been applied. As one such case, this paper examines the usefulness of applying Faster R-CNN, one of the deep learning technologies, to existing product inspection. Images acquired through the existing vision inspection machine are used as training data and trained with the Faster R-CNN ResNet101 V1 1024x1024 model. The results of the conventional vision test and the Faster R-CNN test are compared and analyzed against test standards of 0% non-detection and 10% over-detection. The non-detection rate is 34.5% for the conventional vision test and 0% for the Faster R-CNN test; the over-detection rate is 100% for the conventional vision test and 6.9% for Faster R-CNN. These results confirm that deep learning technology is very useful for detecting welding errors in the lead tabs of automotive batteries.
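The non-detection and over-detection criteria used above can be computed mechanically once predicted and ground-truth defect boxes are matched by intersection-over-union. A minimal pure-Python sketch (the function names and the 0.5 IoU threshold are illustrative assumptions, not taken from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_rates(truths, preds, thr=0.5):
    """Non-detection rate: share of true defects matched by no prediction.
    Over-detection rate: share of predictions matching no true defect."""
    missed = sum(1 for t in truths if all(iou(t, p) < thr for p in preds))
    spurious = sum(1 for p in preds if all(iou(p, t) < thr for t in truths))
    non_det = missed / len(truths) if truths else 0.0
    over_det = spurious / len(preds) if preds else 0.0
    return non_det, over_det
```

With one of two defects found and one spurious box, both rates come out at 50%, mirroring how the paper's 34.5%/100% and 0%/6.9% figures would be tallied.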

Interfacial Properties of Propylene Oxide Adducted Sodium Laureth Sulfate Anionic Surfactant (프로필렌 옥사이드를 부가한 소듐 라우레스 설페이트 음이온 계면활성제의 계면 특성에 관한 연구)

  • Jeong Min Lee;Ki Ho Park;Hee Dong Shin;Woo Jin Jeong;Jong Choo Lim
    • Applied Chemistry for Engineering
    • /
    • v.34 no.3
    • /
    • pp.264-271
    • /
    • 2023
  • In this study, ASCO SLES-430 surfactant was synthesized by adducting 3 moles of ethylene oxide and 1 mole of propylene oxide to lauryl alcohol, followed by a sulfation process, and the structure of the synthesized ASCO SLES-430 was elucidated by FT-IR, 1H-NMR and 13C-NMR analyses. Interfacial properties such as critical micelle concentration, static surface tension, emulsification index, and contact angle were measured, and environmental compatibility indices such as oral toxicity and skin irritation were also estimated for ASCO SLES-430. The results were compared with the SLES surfactants ASCO SLES-226 and ASCO SLES-328, which possess 2 and 3 moles of ethylene oxide, respectively. In particular, foaming ability and foam stability were evaluated for ASCO SLES-430 and compared with ASCO SLES-226 and ASCO SLES-328, which have been widely used in detergent products, in order to test the potential applicability of ASCO SLES-430 in detergent formulations for small-capacity built-in washing machines.

A Study on the Design Preference Survey for Development of Auxiliary Therapy Products Utilizing Music of Mild Cognitive Impairment (경도인지장애인의 음악을 활용한 보조 치료기기 제품개발을 위한 디자인 선호도 조사에 관한 연구)

  • Lee, Hae Goo
    • Korea Science and Art Forum
    • /
    • v.31
    • /
    • pp.355-365
    • /
    • 2017
  • Korea's elderly population is expected to become the second largest share in the world by 2060, owing to low fertility and the rapid development of medical technology, so society's burden of supporting the elderly is expected to increase steadily. Aging first appears as functional decline: memory and cognitive abilities weaken, activities become limited, economic productivity falls sharply, and self-sufficiency becomes difficult, so the elderly need economic and social support, and products for them require research and development. Before dementia, the elderly pass through a Mild Cognitive Impairment (MCI) stage of reduced cognitive ability, for which it is effective to combine pharmacological and non-pharmacological treatment methods; the effectiveness of music-based non-pharmacological treatment has been demonstrated. This paper studies, from the viewpoint of design, the external appearance of auxiliary devices that apply music therapy techniques through digital convergence. For this study, we investigated the appearance preferences of people with mild cognitive impairment through two rounds of surveys. As a result, the design of a home-care product for the mildly cognitively impaired was found to differ from that of a conventional game machine or set-top box: it should be designed around the users' special circumstances, namely their memory and cognitive abilities, and products that accommodate their physical and mental changes must be developed.

The Prediction of Cryptocurrency Prices Using eXplainable Artificial Intelligence based on Deep Learning (설명 가능한 인공지능과 CNN을 활용한 암호화폐 가격 등락 예측모형)

  • Taeho Hong;Jonggwan Won;Eunmi Kim;Minsu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.129-148
    • /
    • 2023
  • Bitcoin is a blockchain technology-based digital currency that has been recognized as a representative cryptocurrency and a financial investment asset. Due to its highly volatile nature, Bitcoin has gained a lot of attention from investors and the public. Based on this popularity, numerous studies have been conducted on price and trend prediction using machine learning and deep learning. This study employed LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) models, which have shown strong predictive performance in the finance domain, to enhance classification accuracy in Bitcoin price trend prediction. XAI (eXplainable Artificial Intelligence) techniques were applied to the predictive model to enhance its explainability and interpretability by providing a comprehensive explanation of the model. In the empirical experiment, a CNN was applied to technical indicators and Google Trends data to build a Bitcoin price trend prediction model, and the CNN model using both technical indicators and Google Trends data clearly outperformed the other models using neural networks, SVM, and LSTM. Then SHAP (SHapley Additive exPlanations) was applied to the predictive model to obtain explanations of its output values. Important prediction drivers among the input variables were extracted through global interpretation, and the interpretation of the model's decision process for each instance was provided through local interpretation. The results show that the proposed research framework achieves both improved classification accuracy and explainability by using CNN, Google Trends data, and SHAP.
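As a concrete illustration of the kind of technical-indicator inputs and up/down labels such a trend-prediction model consumes, here is a small pure-Python sketch (the window size and function names are my own assumptions, not taken from the paper):

```python
def sma(prices, window):
    """Simple moving average, a basic technical indicator;
    None until the window has filled."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def trend_labels(prices):
    """Binary classification target: 1 if the next close is up, else 0."""
    return [1 if prices[i + 1] > prices[i] else 0
            for i in range(len(prices) - 1)]
```

Indicator columns like this, stacked with Google Trends counts, form the input matrix a CNN would be trained on.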

Establishment of Bank Channel Strategy using Correspondence Analysis : Based on the Customer's Choice Factors of Bank Channel (대응분석을 이용한 은행 채널전략 수립연구 : 고객의 은행채널 선택요인을 바탕으로)

  • Park, Un Hak;Park, Young Bae
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.6
    • /
    • pp.151-171
    • /
    • 2023
  • For the efficient establishment of a channel strategy for banks, this study proposes a channel model by classifying channels into types and carrying out a correspondence analysis per type. A survey of bankers was conducted to visualize the categorical data and create a positioning map. As a result, first, 12 banking channels were classified into 4 types based on business processing subjects and places, which were then further grouped into the categories of full-banking and self-banking. Second, a correspondence analysis of the classified types found that the branch-type is suitable for product description and customer management, while the banking-type is suitable for efficient business processing without time and space constraints. The analysis also showed that the machine-type and banking-type are inappropriate for customer management, and that the mobility-type demonstrates low operational effectiveness due to a lack of awareness. These findings suggest the need for a hybrid convergence channel that reflects the characteristics of banking tasks and fills the gaps between the different channels. Third, a channel model was derived by adding a common area to the 2×2 model consisting of business processing subjects and places. This study is meaningful in that it examines channel diversification and the division of roles by channel type based on customers' banking channel selection factors, and presents basic findings for future channel strategy establishment and efficient channel operation.
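Correspondence analysis of a channel-by-task contingency table starts from row profiles and chi-square distances between them; the positioning map then comes from an SVD of the standardized residuals. A minimal sketch of that first step (the example table below is hypothetical, not the study's survey data):

```python
def row_profiles(table):
    """Normalize each row of a contingency table to proportions."""
    return [[v / sum(row) for v in row] for row in table]

def chi2_distance(p, q, col_mass):
    """Chi-square distance between two row profiles,
    weighting each column by its overall mass (column total / grand total)."""
    return sum((pi - qi) ** 2 / m
               for pi, qi, m in zip(p, q, col_mass)) ** 0.5
```

Channel types whose profiles are close in this metric end up near each other on the positioning map, which is how "branch-type suits customer management" is read off the plot.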

Utilizing deep learning algorithm and high-resolution precipitation product to predict water level variability (고해상도 강우자료와 딥러닝 알고리즘을 활용한 수위 변동성 예측)

  • Han, Heechan;Kang, Narae;Yoon, Jungsoo;Hwang, Seokhwan
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.7
    • /
    • pp.471-479
    • /
    • 2024
  • Flood damage is becoming more serious due to heavy rainfall caused by climate change. Physically based hydrological models have been used to predict stream water level variability and provide flood forecasting. Recently, hydrological simulations using machine learning and deep learning algorithms, which exploit nonlinear relationships between hydrological data, have been getting attention. In this study, the Long Short-Term Memory (LSTM) algorithm is used to predict the water level of the Seomjin River watershed. In addition, Climate Prediction Center morphing method (CMORPH)-based gridded precipitation data is applied as input to the algorithm to overcome the limitations of ground data. The water level predictions of the LSTM algorithm coupled with the CMORPH data showed a mean CC of 0.98, an RMSE of 0.07 m, and an NSE of 0.97. It is expected that deep learning and remotely sensed data can be used together to overcome the shortcomings of ground observation data and to obtain reliable predictions.
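The three goodness-of-fit scores reported above (CC, RMSE, NSE) are standard hydrological metrics and easy to reproduce; a plain-Python sketch (function names are mine):

```python
def rmse(obs, sim):
    """Root mean square error between observed and simulated levels."""
    n = len(obs)
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / n) ** 0.5

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit,
    0 means no better than predicting the observed mean."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1 - num / den

def cc(obs, sim):
    """Pearson correlation coefficient."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    ss = sum((s - ms) ** 2 for s in sim) ** 0.5
    return cov / (so * ss)
```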

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • Predicting the success of customer campaigns has long been studied in academia, and prediction models applying various techniques are still being developed. Recently, as campaign channels have expanded due to the rapid growth of online business, companies carry out various types of campaigns at a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, campaign effectiveness is also decreasing: the cost of investing in campaigns rises while the actual success rate remains low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. The campaign system's ultimate purpose is to increase the success rate of campaigns by collecting and analyzing customer-related data and using it for targeting. In particular, recent attempts have been made to predict campaign response using machine learning. Because campaign data has many features, selecting appropriate ones is very important. If all input data are used when classifying a large amount of data, learning time grows as the classification task expands, so a minimal input data set must be extracted from the whole. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step when analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when there are many features, these methods suffer from poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the search for feature subsets, which underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied to increase search efficiency and enable generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm. Campaign success prediction was more accurate than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to help in analyzing and interpreting prediction results by providing the importance of the derived features. These include features whose importance was already known statistically, such as age, customer rating, and sales.
Unexpectedly, features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average three-month data consumption rate, and wireless data usage over the last three months, were also selected as important for campaign response. It was confirmed that base attributes can be very important features depending on the campaign type. Through this, it is possible to analyze and understand the important characteristics of each campaign type.
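The baseline the authors improve on, plain sequential forward selection, can be sketched in a few lines; the floating variant (SFFS) additionally tries conditional removals after each addition. The `score` callback and the stopping rule here are illustrative assumptions, not the paper's algorithm:

```python
def forward_select(features, score, k):
    """Greedy SFS: repeatedly add the feature that most improves `score`,
    a callable mapping a feature subset to a quality value (higher is
    better), stopping at k features or when no addition helps."""
    selected = []
    while len(selected) < k:
        remaining = [f for f in features if f not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # SFFS would also attempt removals at this point
        selected.append(best)
    return selected
```

With a toy additive score, the two highest-weight features are picked in order, which is exactly the greedy behavior the proposed algorithm refines.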

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies, which generally make the best use of information technology, have tended to give high returns to investors, and for this reason many are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on the KOSDAQ market of the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency rating for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies; it has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity for classification tasks, resulting in numerous applications in many areas of business. SVM is basically the algorithm that finds the maximum margin hyperplane, the one with maximum separation between classes; support vectors are the points closest to this hyperplane. When the classes cannot be separated linearly, a kernel function can be used: for nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, i.e., the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel of the SVM. For multi-class SVM, we adopted the one-against-one binary classification approach and the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' 2005 financial information from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-class problems such as efficiency ratings of venture businesses, when the exact class is hard to determine in the actual market, it is very useful for investors to know predictions that fall within one class of the true rating. We therefore also report accuracy within 1-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the kernel parameter selection, the generalization, and the sample size for multi-class problems.
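The one-against-one scheme trains one binary classifier per class pair and predicts by majority vote, and the "within 1-class" accuracy is a simple tolerance count. A toy sketch of both ideas (the threshold classifiers here are hypothetical stand-ins for trained SVMs):

```python
def one_vs_one_predict(x, classifiers):
    """Majority vote over pairwise binary classifiers.
    `classifiers` maps a class pair (a, b) to a function of x
    that returns either a or b."""
    votes = {}
    for pair, clf in classifiers.items():
        winner = clf(x)
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)  # ties broken arbitrarily

def accuracy_within_one(truth, pred):
    """Share of predictions within one ordinal class of the truth,
    as in the paper's 1-class-error accuracy."""
    hits = sum(1 for t, p in zip(truth, pred) if abs(t - p) <= 1)
    return hits / len(truth)
```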

THE EFFECTS OF SURFACE CONTAMINATION BY HEMOSTATIC AGENTS ON THE SHEAR BOND STRENGTH OF COMPOMER (지혈제 오염이 콤포머의 전단결합강도에 미치는 영향)

  • Heo, Jeong-Moo;Kwak, Ju-Seog;Lee, Hwang;Lee, Su-Jong;Im, Mi-Kyung
    • Restorative Dentistry and Endodontics
    • /
    • v.27 no.2
    • /
    • pp.150-157
    • /
    • 2002
  • Two of the latest concepts in bonding are "total etch", in which both enamel and dentin are etched with an acid to remove the smear layers, and "wet dentin", in which the dentin is left moist rather than dry before application of the bonding primer. Ideally, the application of a bonding agent to tooth structure should be insensitive to minor contamination from oral fluids. Clinically, contaminants such as saliva, gingival fluid, blood and handpiece lubricant are often encountered by dentists during cavity preparation. The aim of this study was to evaluate the effect of contamination by hemostatic agents on the shear bond strength of compomer restorations. One hundred and ten extracted human maxillary and mandibular molar teeth were collected. Soft tissue remnants and debris were removed, and the teeth were stored in physiologic solution until use. A small flat dentin area on the buccal surface was wet-ground serially with 400-, 800- and 1200-grit abrasive papers on an automatic polishing machine. The teeth were randomly divided into 11 groups, each conditioned as follows. Group 1: Dentin surface was not etched and not contaminated by hemostatic agents. Group 2: Dentin surface was not etched but was contaminated by Astringedent® (Ultradent Products Inc., Utah, U.S.A.). Group 3: Dentin surface was not etched but was contaminated by Bosmin® (Jeil Pharm, Korea). Group 4: Dentin surface was not etched but was contaminated by Epri-dent® (Epr Industries, NJ, U.S.A.). Group 5: Dentin surface was etched and not contaminated by hemostatic agents. Group 6: Dentin surface was etched and contaminated by Astringedent®. Group 7: Dentin surface was etched and contaminated by Bosmin®. Group 8: Dentin surface was etched and contaminated by Epri-dent®. Group 9: Dentin surface was contaminated by Astringedent®; the contaminated surface was rinsed with water and dried with compressed air.
Group 10: Dentin surface was contaminated by Bosmin®; the contaminated surface was rinsed with water and dried with compressed air. Group 11: Dentin surface was contaminated by Epri-dent®; the contaminated surface was rinsed with water and dried with compressed air. After surface conditioning, F2000® was applied to the conditioned dentin surface. The teeth were thermocycled in distilled water at 5 °C and 55 °C for 1,000 cycles. The samples were placed on the binder with the bonded compomer-dentin interface parallel to the knife-edge shearing rod of a universal testing machine (Zwick Z020, Zwick Co., Germany) running at a crosshead speed of 1.0 mm/min. Group 2 showed a significant decrease in shear bond strength compared with group 1, and group 6 showed a significant decrease compared with group 5. There were no significant differences in shear bond strength between group 5 and groups 9, 10 and 11.
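Shear bond strength from such a test is simply the recorded failure load divided by the bonded area. A one-line conversion for a circular bond site, for illustration only (the load and diameter values in the test are hypothetical, not the study's measurements):

```python
import math

def shear_bond_strength_mpa(load_n, bond_diameter_mm):
    """Failure load (N) divided by the circular bonded area (mm^2)
    gives the shear bond strength in MPa."""
    area = math.pi * (bond_diameter_mm / 2) ** 2
    return load_n / area
```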

The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.233-253
    • /
    • 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are largely driven by new information rather than by present and past prices; since new information is unpredictable, the stock market will follow a random walk. Despite these theories, Schumaker [2010] observed that people keep trying to predict the stock market using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations and wavelet transforms to model future prices. Artificial intelligence approaches that deal with optimization and machine learning include genetic algorithms, Support Vector Machines (SVM) and neural networks. Statistical approaches typically predict the future using past stock market data. Recently, financial engineers have started to predict stock price movement patterns using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a subfield of data mining that extracts public opinions expressed in SNS. There have been many studies on opinion mining from Web sources such as product reviews, forum posts and blogs. In relation to this literature, we try to understand the effects of firms' SNS exposure on stock prices in Korea. Similarly to Bollen et al. [2011], we empirically analyze the impact of SNS exposure on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions in Twitter and blogs using natural language processing and analysis tools. It collects sentences circulating in Twitter in real time, breaks them down into word units, and extracts keywords. In this study, we classify firms' SNS exposure into two groups: positive and negative. To test the correlation and causation between SNS exposure and stock price returns, we first collected 252 firms' stock prices and the KRX100 index from the Korea Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gathered the public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We conducted a regression analysis between stock prices and the number of SNS exposures. Having checked the correlation between the two variables, we performed a Granger causality test to determine the direction of causation. The result is that the number of total SNS exposures is positively related to stock market returns. The number of positive mentions also has a positive relationship with stock market returns. Contrarily, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant, which means the impact of positive mentions is statistically bigger than that of negative mentions. We also investigated whether the impacts are moderated by industry type and firm size, and found that the SNS exposure impacts are bigger for IT firms than for non-IT firms, and bigger for small firms than for large firms. The Granger causality test shows that changes in stock price returns are caused by SNS exposures, while causation in the other direction is not significant. Therefore the relationship between SNS exposures and stock prices has a uni-directional causality: the more a firm is exposed in SNS, the more its stock price is likely to increase, while stock price changes may not cause more SNS mentions.
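The correlation-then-causality procedure above can be approximated in miniature: first correlate the same-day series, then correlate lagged mention counts with next-day returns, which is the direction the Granger test found significant. A pure-Python sketch (the series and the one-day lag are illustrative assumptions; a real Granger test fits lagged regressions and an F-test):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def lagged_correlation(mentions, returns, lag=1):
    """Correlate earlier mention counts with later returns -
    the SNS-to-price direction."""
    return pearson(mentions[:-lag], returns[lag:])
```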