• Title/Summary/Keyword: Input Output Analysis

Search Results: 2,373

Analysis of Sawmill Productivity and Optimum Combination of Production Factors (제재생산성(製材生産性)과 적정생산요소투입량(適正生産要素投入量) 계측(計測))

  • Cho, Woong Hyuk
    • Journal of Korean Society of Forest Science, v.32 no.1, pp.29-35, 1976
  • In order to estimate sawmill productivities, rates of technical change, and the optimum combination of production factors, Cobb-Douglas production functions were derived using data obtained from 96 sample mills in the Busan-Incheon, southwestern, and northeastern areas. The results may be summarized as follows: 1. There is a tendency toward expanding average sawmill size in all areas. Horse-power holdings per mill increased at rates of 91 percent in Busan-Incheon, 7.7 percent in the southwestern, and 16.9 percent in the northeastern areas. This implies that the mills around log-importing ports have developed rapidly compared with those in forest regions. 2. The regression coefficients (production elasticities) of the functions for 1967 in the three areas are quite similar to each other, but significant differences are found in the production functions of 1975. In other words, sawmill productivity was mainly restricted by capital deficiencies in all areas in 1967, but by 1975 this was true only of the N-E area. The sum of the regression coefficients ranges from 1.0437 to 1.4214, which indicates increasing returns to scale. 3. The annual rates of technical change in the B-I, S-W, and N-E areas over the observed period are 17.6, 7.6, and 2.2 percent, respectively. Busan-Incheon is the only area where labor productivity is higher than that of capital. 4. The best combination of production factors for maximizing the firm's profit depends on changes in input and output prices. Under certain assumptions about prices and costs, the optimum levels of power and labor input in the B-I, S-W, and N-E areas are 57:17, 427:94, and 192:27, respectively.
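
As a point of reference for the quantities reported above (production elasticities, their sum as a returns-to-scale measure, and an annual rate of technical change), a standard Cobb-Douglas specification with Hicks-neutral technical change is sketched below; the paper's exact functional form is not given in the abstract, so treat this as an assumption.

```latex
% Standard Cobb-Douglas form with a technical-change term (an assumed specification,
% not necessarily the exact one estimated in the paper).
Y_t = A_0\, e^{\lambda t}\, K_t^{\alpha} L_t^{\beta},
\qquad
\ln Y_t = \ln A_0 + \lambda t + \alpha \ln K_t + \beta \ln L_t .
```

Here α and β are the production elasticities of capital (horsepower) and labor estimated by regression, λ is the annual rate of technical change, and α + β > 1 corresponds to increasing returns to scale, consistent with the reported range of 1.0437 to 1.4214.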

Forward Osmotic Pressure-Free (△𝜋≤0) Reverse Osmosis and Osmotic Pressure Approximation of Concentrated NaCl Solutions (정삼투-무삼투압차(△𝜋≤0) 법 역삼투 해수 담수화 및 고농도 NaCl 용액의 삼투압 근사식)

  • Chang, Ho Nam;Choi, Kyung-Rok;Jung, Kwonsu;Park, Gwon Woo;Kim, Yeu-Chun;Suh, Charles;Kim, Nakjong;Kim, Do Hyun;Kim, Beom Su;Kim, Han Min;Chang, Yoon-Seok;Kim, Nam Uk;Kim, In Ho;Kim, Kunwoo;Lee, Habit;Qiang, Fei
    • Membrane Journal, v.32 no.4, pp.235-252, 2022
  • Forward osmotic pressure-free reverse osmosis (Δ𝜋=0 RO) was invented in 2013, and the first patent (US 9,950,297 B2) was registered on April 18, 2018. The "Osmotic Pressure of Concentrated Solutions" paper in JACS (1908) by G.N. Lewis of MIT was used for the estimation. Chang's RO system differs from conventional RO (C-RO) in that it is a two-chamber system consisting of an osmotic pressure equalizer and a low-pressure RO unit, while C-RO is based on a single chamber. Chang claimed that all aqueous solutions, including salt water, regardless of their osmotic pressure, can be separated into water and salt. The second patent (US 10,953,367 B2, March 23, 2021) showed that low-pressure reverse osmosis is possible for a 3.0% input at a Δ𝜋 of 10 to 12 bar. Singularity ZERO reverse osmosis, from his third patent (Korea patent 10-22322755, US-PCT/KR202003595), claims, for a 3.0% NaCl input, 50% more water recovery, use of 1/3 of the RO membrane area, and 1/5th of the theoretical energy. These numbers come from Chang's laboratory experiments and theoretical analysis. The relative residence time (RRT) of the feed and OE chambers drives Δ𝜋 to zero or negative by recycling the enriched feed flow. The construction cost of S-ZERO was estimated at around 50~60% of that of the current RO system.
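
For orientation only, the ideal van't Hoff relation below gives a rough baseline for the osmotic pressure of a 3.0% NaCl feed; it is not the approximation developed in the paper, which relies on Lewis's 1908 data precisely because concentrated solutions deviate from this ideal behavior.

```latex
% Ideal (dilute-solution) baseline; the paper's approximation for concentrated NaCl differs.
\pi \approx i\,c\,R\,T
\approx 2 \times 0.51\,\mathrm{mol\,L^{-1}} \times 0.0831\,\mathrm{L\,bar\,mol^{-1}\,K^{-1}} \times 298\,\mathrm{K}
\approx 25\,\mathrm{bar}.
```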

GWB: An Integrated Software System for Managing and Analyzing Genomic Sequences (GWB: 유전자 서열 데이터의 관리와 분석을 위한 통합 소프트웨어 시스템)

  • Kim In-Cheol;Jin Hoon
    • Journal of Internet Computing and Services, v.5 no.5, pp.1-15, 2004
  • In this paper, we explain the design and implementation of GWB (Gene WorkBench), a web-based, integrated system for efficiently managing and analyzing genomic sequences. Most existing software systems handling genomic sequences rarely provide both managing and analyzing facilities. The analysis programs also tend to be unit programs that include just a single function or some part of the required functions. Moreover, these programs are widely distributed over the Internet and require different execution environments. As a lot of manual and conversion work is required to use these programs together, many life science researchers suffer great inconvenience. In order to overcome the problems of existing systems and provide a more convenient one that supports genomic research effectively, this paper integrates both managing and analyzing facilities into a single system called GWB. The most important issues in the design of GWB are how to integrate many different analysis programs into a single software system, and how to provide the data or databases of different formats required to run these programs. In order to address these issues, GWB integrates different analysis programs by using common input/output interfaces called wrappers, suggests a common format for genomic sequence data, organizes local databases consisting of a relational database and an indexed sequential file, and provides facilities for converting data among several well-known formats and exporting local databases into XML files.
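
A minimal sketch of the wrapper idea described in the abstract is given below: each analysis program sits behind a common input/output interface that consumes and produces a shared sequence-record format. All class and method names here are illustrative assumptions, not GWB's actual API, and the example tool is a trivial stand-in for an external program.

```python
# Hypothetical sketch: external analysis programs hidden behind a common wrapper interface.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SeqRecord:
    """A minimal common format for a genomic sequence."""
    seq_id: str
    sequence: str


class AnalysisWrapper(ABC):
    """Common interface: every tool consumes and produces the common record format."""

    @abstractmethod
    def run(self, records: list[SeqRecord]) -> dict:
        ...


class GCContentWrapper(AnalysisWrapper):
    """Stand-in for an external analysis program, wrapped behind the common interface."""

    def run(self, records: list[SeqRecord]) -> dict:
        return {
            r.seq_id: (r.sequence.count("G") + r.sequence.count("C")) / len(r.sequence)
            for r in records
        }


records = [SeqRecord("seq1", "ATGCGC"), SeqRecord("seq2", "ATATAT")]
print(GCContentWrapper().run(records))  # {'seq1': 0.666..., 'seq2': 0.0}
```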

A Study of Textured Image Segmentation using Phase Information (페이즈 정보를 이용한 텍스처 영상 분할 연구)

  • Oh, Suk
    • Journal of the Korea Society of Computer and Information, v.16 no.2, pp.249-256, 2011
  • Finding a new set of features representing textured images is one of the most important problems in textured image analysis. This is because it is impossible to construct a perfect set of features representing every textured image, so it is inevitable to choose some relevant features that are efficient for the image processing job at hand. This paper intends to find relevant features that are efficient for textured image segmentation. In this regard, this paper presents a different method for the segmentation of textured images based on the Gabor filter. The Gabor filter is known to be a very efficient and effective tool that models the human visual system for texture analysis. Filtering a real-valued input image with the Gabor filter results in complex-valued output data defined in the spatial frequency domain. This complex value, as usual, gives the modulus and the phase. This paper focuses its attention on the phase information rather than the modulus information. In fact, the modulus information is considered very useful for region analysis in texture, while the phase information has been considered almost of no use. But this paper shows that the phase information can also be fully useful and effective for region analysis in texture, once a good method is introduced. We propose the "phase derivated method", an efficient and effective way to compute useful phase information directly from the filtered values. This new method effectively reduces the computing burden and widens the range of applicable textured images.
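
The following sketch, assuming scikit-image and NumPy are available, shows how a Gabor filter's complex response separates into the modulus and phase discussed above; the frequency and orientation are illustrative, and the final line is only a naive phase derivative, not the paper's "phase derivated method".

```python
# Gabor filtering: split the complex response into modulus (magnitude) and phase.
import numpy as np
from skimage import data
from skimage.filters import gabor

image = data.camera()                               # any grayscale test image
real, imag = gabor(image, frequency=0.2, theta=0)   # real and imaginary filter responses
modulus = np.hypot(real, imag)                      # magnitude, commonly used for texture energy
phase = np.arctan2(imag, real)                      # phase information, the focus of the paper
phase_dx, phase_dy = np.gradient(phase)             # naive phase derivatives (illustration only)
```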

An Empirical Comparison and Verification Study on the Seaport Clustering Measurement Using Meta-Frontier DEA and Integer Programming Models (메타프론티어 DEA모형과 정수계획모형을 이용한 항만클러스터링 측정에 대한 실증적 비교 및 검증연구)

  • Park, Ro-Kyung
    • Journal of Korea Port Economic Association, v.33 no.2, pp.53-82, 2017
  • The purpose of this study is to show the clustering trend and compare empirical results, as well as to choose the clustering ports for 3 Korean ports (Busan, Incheon, and Gwangyang), by using meta-frontier DEA (Data Envelopment Analysis) and integer models on 38 Asian container ports over the period 2005-2014. The models consider 4 input variables (berth length, depth, total area, and number of cranes) and 1 output variable (container TEU). The main empirical results of the study are as follows. First, the meta-frontier DEA for Chinese seaports identifies as most efficient ports (in decreasing order) Shanghai, Hongkong, Ningbo, Qingdao, and Guangzhou, while the efficient Korean seaports are Busan, Incheon, and Gwangyang. Second, the clustering results of the integer model show that the Busan port should cluster with Dubai, Hongkong, Shanghai, Guangzhou, Ningbo, Qingdao, Singapore, and Kaohsiung, while Incheon and Gwangyang should cluster with the Shahid Rajaee, Haifa, Khor Fakkan, Tanjung Perak, Osaka, Keelung, and Bangkok ports. Third, clustering through the integer model sharply increases the group efficiency of Incheon (401.84%) and Gwangyang (354.25%), but not that of the Busan port. Fourth, the efficiency ranking comparison between the two models before and after clustering, using the Wilcoxon signed-rank test, is consistent with the average level of group efficiency (57.88%) and the technology gap ratio (80.93%). The policy implication of this study is that Korean port policy planners should employ meta-frontier DEA as well as integer models when clustering is needed among Asian container ports for enhancing efficiency. In addition, Korean seaport managers and port authorities should introduce port development and management plans accounting for the reference and clustered seaports after careful analysis.
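
As a rough illustration of the DEA machinery mentioned above, the sketch below solves a plain input-oriented CCR efficiency problem with scipy.optimize.linprog on random placeholder data; it is not the meta-frontier or integer-programming clustering model used in the paper, and the variable counts merely mirror the 4-input/1-output setup described.

```python
# Plain input-oriented CCR DEA (envelopment form) as a baseline illustration.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_dmu, n_in, n_out = 10, 4, 1            # e.g. 4 inputs (berth length, depth, area, cranes), 1 output (TEU)
X = rng.uniform(1, 10, (n_in, n_dmu))    # inputs, one column per port (placeholder data)
Y = rng.uniform(1, 10, (n_out, n_dmu))   # outputs


def ccr_efficiency(o: int) -> float:
    """Efficiency of DMU o: minimise theta s.t. X@lam <= theta*x_o and Y@lam >= y_o."""
    c = np.r_[1.0, np.zeros(n_dmu)]                   # decision variables: [theta, lambda_1..lambda_n]
    A_in = np.hstack([-X[:, [o]], X])                 # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((n_out, 1)), -Y])     # -Y@lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(n_in), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n_dmu     # theta free, lambdas non-negative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]


scores = [ccr_efficiency(o) for o in range(n_dmu)]
print(np.round(scores, 3))
```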

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.141-154, 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are easy to collect openly and directly affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys. Depending on whether a website's posts are positive or negative, the customer response is reflected in sales, and companies try to identify this information. However, many reviews on a website are not always good and are difficult to identify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity analysis of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles order well because it takes the time information of the data into account, but it suffers from long-term dependency problems; LSTM is used to solve the problem of long-term dependence. For the comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although there are many parameters for these algorithms, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN and LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to model long-term, sequential dependencies. Furthermore, when LSTM is used after the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be modeled simultaneously. With the combined CNN-LSTM, 90.33% accuracy was measured. This is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved when training the kernel step by step. CNN-LSTM can improve on the weaknesses of each model, and there is the advantage of improving learning layer by layer using the end-to-end structure of LSTM. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
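
A minimal sketch of a combined CNN-LSTM sentiment classifier on the IMDB data set, assuming TensorFlow/Keras is available, is shown below; layer sizes and training settings are illustrative assumptions and are not taken from the paper.

```python
# CNN-LSTM sentiment classifier sketch: Conv1D extracts local n-gram features,
# the pooled feature maps feed an LSTM that models their order.
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dropout, Dense

vocab_size, max_len = 20000, 200
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

model = Sequential([
    Embedding(vocab_size, 128),            # word embedding layer
    Conv1D(64, 5, activation="relu"),      # CNN: local feature extraction
    MaxPooling1D(4),                       # pooled features go to the LSTM
    LSTM(64),                              # LSTM: sequential dependencies
    Dropout(0.5),
    Dense(1, activation="sigmoid"),        # binary positive/negative output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_test, y_test))
```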

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.95-108, 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a huge victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move paths is greater than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. The deep learning technique is already being applied to many problems. It shows especially good performance in the image recognition field, and also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance using existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for the binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They have input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account. In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network model. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted under restricted settings on the number of hidden layers, the number of neurons in the hidden layers, the number of output feature maps (filters), and the application conditions of the dropout technique. The F1 score was used to evaluate the performance of the models, to show how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values from a specific value and recognizes features, but how close one business data field is to another does not matter, because each field is usually independent. In this experiment, we set the filter size of the CNN algorithm to the number of fields so as to learn the overall characteristics of the data at once, and added a hidden layer to make decisions based on the additional features. For the model having two LSTM layers, the input direction of the second layer is reversed with respect to the first layer in order to reduce the influence of the position of each field. In the case of the dropout technique, we set neurons to drop out with a probability of 0.5 for each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. From the experiments we obtained several findings. First, models using the dropout technique make slightly more conservative predictions than those without it, and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because CNN has shown good performance in binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems to be unsuitable for binary classification problems because the training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
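
The sketch below illustrates the idea described above of setting the CNN filter size to the number of fields so that one convolution spans all input fields at once, followed by an extra hidden layer, dropout with probability 0.5, and F1 evaluation; the data are random placeholders rather than the Portuguese bank data, and all layer sizes are assumptions.

```python
# 1-D CNN over tabular fields with the kernel spanning all fields, plus dropout and F1 scoring.
import numpy as np
from sklearn.metrics import f1_score
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense, Dropout

n_fields = 16                                    # number of input fields (placeholder)
X_train = np.random.rand(1000, n_fields, 1)      # tabular rows reshaped to (samples, fields, 1)
y_train = np.random.randint(0, 2, 1000)          # binary target (placeholder)

model = Sequential([
    Conv1D(32, kernel_size=n_fields, activation="relu", input_shape=(n_fields, 1)),  # one filter spans all fields
    Flatten(),
    Dense(32, activation="relu"),                # extra hidden layer for decisions on the learned features
    Dropout(0.5),                                # dropout with probability 0.5, as in the abstract
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=3, batch_size=32, verbose=0)

pred = (model.predict(X_train) > 0.5).astype(int).ravel()
print("F1:", f1_score(y_train, pred))
```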

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.139-157, 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the selected feature subsets for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other to avoid overfitting. The prediction accuracy against this dataset was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of the base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and the random subspace ensemble model.
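
A minimal sketch of a KNN random subspace ensemble using scikit-learn is given below; the genetic-algorithm search over the k parameters and feature subsets described in the abstract is not shown, and the data and parameter values are placeholders rather than the Korean bankruptcy dataset.

```python
# KNN random subspace ensemble: each base KNN is trained on a random feature subset.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1800, n_features=24, random_state=0)  # placeholder data

ensemble = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=5),  # base classifier; k would be GA-optimised per member in the paper
    n_estimators=30,
    max_samples=1.0, bootstrap=False,     # use all samples (no bagging), only feature subsampling
    max_features=0.5,                     # random subspace: half of the features per ensemble member
    random_state=0,
)
print(cross_val_score(ensemble, X, y, cv=10, scoring="accuracy").mean())
```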

A Comparative Study on the Aesthetic Aspect of Design Preferred Between Countries Centering Around the Analysis on the Aesthetic Aspect of Mobile Phone Preferred by Korean and Chinese Consumers - (국가 간 선호 디자인의 심미성요소 비교연구 - 한.중 소비자 선호휴대폰의 심미성요소 분석을 중심으로 -)

  • Jeong Su-Kyoung;Hong Jung-Pyo
    • Science of Emotion and Sensibility, v.9 no.1, pp.49-61, 2006
  • The mobile phone industry now has a significant effect on the domestic economy and has taken root as a core item expected to lead the Korean economy for a considerable period of time. As the mobile phone market grows, mobile phones are being used by people in a broader age bracket, and the functions and designs preferred by people of various ages are becoming more diverse. As the mobile phone has a greater effect on and meaning in our daily lives, consumers have growing expectations of it. The core function of voice communication is no longer a great concern to consumers; instead, more convenient and friendly information input and output, processing and storage, and designs that are more sophisticated and optimized for the user environment are being demanded, not just simple voice communication. As modern design becomes more similar to the objects of traditional high art consumed by consumers every day, the aesthetic aspect of design can play an important role, as a factor that differentiates the product, in creating new value that forms the spiritual and emotional value of human beings and improves the quality of living; in addition, consumers' willingness to buy is determined by the design they prefer the most. Thus, a new mobile phone design on a new dimension, preferred by consumers the most, urgently needs to be developed by shedding light on the factors related to consumer preference on the basis of an analysis of the aesthetic aspect, which can be said to be the most critical factor in the design process. Therefore, this study aims to identify the common preferences and differing factors of aesthetic aspects through an analysis of the aesthetic aspects of mobile phones preferred by users in different countries, and to figure out the formative artistic factors of aesthetic aspects that are considered important, in order to propose a guideline on the aesthetic aspects of mobile phones that can be applied practically to mobile phone design.

Analysis of Industrial Linkage Effects for Farm Land Base Development Project -With respect to the Hwangrak Benefited Area with Reservoir - (농업생산기반 정비사업의 산업연관효과분석 -황락 저수지지구를 중심으로-)

  • Lim, Jae Hwan;Han, Seok Ho
    • Korean Journal of Agricultural Science, v.26 no.2, pp.77-93, 1999
  • This study aims at identifying the forward and backward linkage effects of the farmland base development project. The Korean Government has continuously carried out farmland base development projects, including integrated agricultural development projects, large and medium scale irrigation projects, and the comprehensive development of the four big river basins, including tidal land reclamation and estuary dam construction for all-weather farming, since 1962, the starting year of the five-year economic development plans. Consequently, the irrigation rate of paddy fields in Korea reached 75% in 1998, and to escalate the irrigation rate further the Government procured heavy investment funds from the IBRD, IMF, and OECF, etc. To cope with agricultural problems such as trade liberalization in accordance with WTO policy, the government has tried to address issues such as new farmland base development policy, preservation of farmland, and expansion of farmland to meet the self-sufficiency of foods in the future. In particular, farmland base development projects have been challenged by environmental and ecological problems in evaluating economic benefits and costs, because the value of non-market goods has not been included. To date, in evaluating the benefits and costs of the projects, farmland base development projects have been confined to the direct incremental value of farm products and the related costs. Therefore the projects' efficiency as a decision-making criterion has shown a low level of economic efficiency. Studies estimating the economic efficiencies of the projects using Leontief's input-output analysis could not be found in Korea at present. Accordingly, this study is aimed at achieving the following objectives: (1) To identify the problems related to the financial supports of the Government in implementing the proposed projects. (2) To estimate the backward and forward linkage effects of the proposed project from the viewpoint of the national economy as a whole. To achieve these objectives, the Hwangrak benefited area with reservoir, located in the Seosan-Haemi districts of Chungnam Province, was selected as a case study. The main results of the study are summarized as follows: a. The present value of investment and O&M costs amounted to 3,510 million won and the present value of the value added in related industries was estimated at 5,913 million won over the economic life of 70 years. b. The total discounted value of farm products in the concerned industries derived from the project was estimated at 10,495 million won, and the forward and backward linkage effects of the project amounted to 6,760 and 5,126 million won, respectively. c. The total number of employment opportunities derived from the related industries over the project life was 3,136 man-years. d. Farmland base development projects showed that the backward linkage effects estimated by the index of sensitivity of dispersion were larger than the forward linkage effects estimated by the index of the power of dispersion. On the other hand, the forward linkage effect of rice production value during the project life was larger than the backward linkage effect. e. The rate of creation of new job opportunities through implementing civil engineering works was shown to be high in itself rather than in any other field, and the linkage effects of production from the project investment were mainly derived from the metal and non-metal fields. f. According to the industrial linkage effect analysis, farmland base development projects were identified as economically feasible from the viewpoint of the national economy as a whole, even though the economic efficiency of the project decreased markedly owing to delays in the construction period and increases in project costs.
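
For reference, the backward and forward linkage effects mentioned in result (d) are conventionally derived from the Leontief inverse; a standard textbook formulation is sketched below, with the caveat that the paper's exact normalization is an assumption.

```latex
% Leontief model and the usual dispersion indices (standard definitions).
x = (I - A)^{-1} f = L f, \qquad L = [\,l_{ij}\,],
\qquad
P_j = \frac{\tfrac{1}{n}\sum_{i} l_{ij}}{\tfrac{1}{n^{2}}\sum_{i}\sum_{j} l_{ij}},
\qquad
S_i = \frac{\tfrac{1}{n}\sum_{j} l_{ij}}{\tfrac{1}{n^{2}}\sum_{i}\sum_{j} l_{ij}},
```

where A is the input coefficient matrix, f the final demand vector, P_j the index of the power of dispersion (column sums of the Leontief inverse), and S_i the index of sensitivity of dispersion (row sums).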
