• Title/Summary/Keyword: Business Layer


4-Branch Waveguide Thermo-Optic Switch With Unequal Width Heaters (크기가 다른 전극폭을 갖는 4분기 광도파로형 열광학스위치)

  • Song, Hyun-Chae;Rhee Tae-Hyung;Shin, Sang-Yung
    • Journal of the Institute of Electronics Engineers of Korea SD / v.37 no.6 / pp.57-63 / 2000
  • A multi-branch thermo-optic switch has the problem that the driving powers of its switching states differ from each other; the power consumption for the inner output port is more than twice that for the outer output port. In this paper, to solve this problem, unequal-width heaters and a waveguide structure with a thin overcladding layer are proposed for a four-branch thermo-optic switch. The proposed structure is fabricated with high-index-difference polymer materials, Teflon and polyimides. The fabricated device was measured at a wavelength of 1550 nm. The measured characteristics exhibit a smaller difference in power consumption between the switching states and a lower driving power than the previous four-branch thermo-optic switch with equal-width heaters. As for device performance, the crosstalk is better than -16 dB at about 310~390 mW, the insertion loss is 4.7 dB, and the switching time is less than 1 ms.


A Study on Survey of The Actual Use of Bedding - Comparisons Between Taegu and Other Areas - (침구류(寢具類)의 사용(使用) 실태(實態)에 관(關)한 조사(調査) 연구(硏究) - 대구지역(大邱地域)과 전국(全國) 8대(大) 지역(地域)의 비교(比較) -)

  • Jung, Yean;Sung, Su-Kwong
    • Journal of Fashion Business / v.1 no.4 / pp.10-18 / 1997
  • Sleep is important for human beings in that it greatly affects their daily activities. Considering that an average adult sleeps seven to eight hours, a third of our life is spent in bed. The present study intends to make consumers aware of the importance of bedding science and to help manufacturers develop efficient, up-to-date bedding. For these two purposes, consumers' perceptions and opinions on the features of bedding were surveyed. A questionnaire consisting of various items on the features of bedding was designed and distributed to consumers in Taegu and eight other regions of Korea to survey the patterns in which they buy, use, and maintain their bedding. The results of the study are as follows. Air conditioners are widely used in Taegu because of the hot weather. People in Taegu mostly purchase their bedding from markets, every five to six years. The points considered important for bedding purchases were humidity absorption, air permeability, and light weight for summer comforters; thermal insulation, flexibility, and color/figure design for winter comforters; and humidity absorption, flexibility, and color/figure design for mats. As summer bedding, single-layer quilts and rush mats were most popular, which reflects Taegu's hot and humid climate. Winter bedding was used in the order of cotton, silk, and wool quilts, and silk quilts showed a higher level of use than in other areas. Dissatisfactions with summer quilts concerned humidity absorption, air permeability, and heaviness; with winter quilts, thermal insulation, heaviness, and flattening. As for bedding care, people in Taegu most frequently disinfected their bedding in sunlight about once a month, and refurbished it every five years.


Prediction of Baltic Dry Index by Applications of Long Short-Term Memory (Long Short-Term Memory를 활용한 건화물운임지수 예측)

  • HAN, Minsoo;YU, Song-Jin
    • Journal of Korean Society for Quality Management / v.47 no.3 / pp.497-508 / 2019
  • Purpose: The purpose of this study is to overcome the limitations of conventional studies that predict the Baltic Dry Index (BDI). The study applies an artificial neural network (ANN) architecture, Long Short-Term Memory (LSTM), to predict the BDI. Methods: The BDI time-series prediction was carried out using eight variables related to the dry bulk market, in two steps. The first step identified the goodness of fit of specific ANN models for the BDI time series and determined the network structures to be used in the next step. Exploiting the ANNs' generalization capability, the structures determined in the first step were then used in the empirical prediction step, where the sliding-window method was applied to make a daily (one-day-ahead) prediction. Results: At the empirical prediction step, it was possible to predict the variable y (the BDI time series) at time t from the eight dry-bulk-market variables x at time t-1. LSTM, known to be good at learning over long periods, showed the best performance, with higher predictive accuracy than the Multi-Layer Perceptron (MLP) and the Recurrent Neural Network (RNN). Conclusion: Applying this study to real business would require long-term predictions using more detailed forecasting techniques. We hope that the research can provide a point of reference in the dry bulk market and, furthermore, in future decision-making and investment in the shipping business as a whole.
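The abstract's sliding-window setup, pairing the eight market variables at time t-1 with the BDI at time t, can be sketched as follows. This is a minimal illustration, not the authors' code; the variable names and toy values are invented.

```python
# Sketch: building sliding-window samples for one-day-ahead prediction.
# Each input window holds the feature vectors from the preceding `window`
# days; the target is the BDI value on the current day.

def make_windows(series, features, window=1):
    """Pair feature vectors at times t-window..t-1 with the target at t."""
    X, y = [], []
    for t in range(window, len(series)):
        X.append([features[i] for i in range(t - window, t)])
        y.append(series[t])
    return X, y

bdi = [1200, 1210, 1195, 1230, 1250]                     # toy BDI values
feats = [[0.1] * 8, [0.2] * 8, [0.3] * 8, [0.4] * 8, [0.5] * 8]  # 8 market variables per day
X, y = make_windows(bdi, feats, window=1)
# X[k] holds the 8 features at day t-1; y[k] is the BDI at day t
```

Each (X[k], y[k]) pair would then be fed to the LSTM; shifting the window forward one day at a time reproduces the daily prediction scheme described above.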

A case Study on Application of Granular Compaction Pile in Fly Ash Landfill Area (Fly ash로 매립된 지역에서 쇄석다짐말뚝 적용에 관한 사례연구)

  • Lee, Jun-Eui;Lee, Seung-Joo;Hong, Jong-Chul;Lee, Jong-Young;Han, Jung-Geun
    • Journal of the Korean Geosynthetics Society / v.18 no.3 / pp.1-9 / 2019
  • In this study, the effect of ground improvement by granular compaction piles was verified on ground reclaimed with fly ash. The depth and strength parameters of the fly-ash layer were determined using a ground investigation and cone penetration tests, and the STONE C program was used to predict the strength parameters, bearing capacity, and settlement of the improved ground. In the plate bearing test, the bearing capacity of the improved ground was higher than the design load and the settlement was smaller than the reference value. After construction, the improvement effect was confirmed by cone penetration tests: the cone penetration resistance ($q_c$) increased by 250% to 500%, showing an excellent improvement effect.

The Analysis of the Maison Margiela's Design Code -Focusing on the Checklist Method- (메종 마르지엘라의 디자인 코드 분석 -체크리스트법을 중심으로-)

  • Mok, So-Ri;Cho, Jean-Suk
    • Journal of Fashion Business / v.19 no.4 / pp.135-152 / 2015
  • The purpose of this study is to explore the role of deconstruction and creative destruction in Maison Margiela's fashion design code, which has opened the door to a new wave of innovative design since 1988. Using a combination of theoretical analysis, precedent research, and a review of existing literature, five years of Maison Margiela's works (2010-2014) were evaluated. The analysis shows that Maison Margiela's design code consists of the following key elements: Addition, Extension, Asymmetry, Elimination, Deconstruction, Complanation, and Inversion. Addition refers to the act of attaching additional pieces of fabric or clothing to an existing piece. Extension refers to the act of extending design elements, such as their position or features. Asymmetry means the irregular positioning of left-to-right and front-to-back lengths and features. Deconstruction could be seen in intentionally frayed sleeves, open seams, and tears in the cloth. Elimination was evident in the removal of key pieces of clothing such as a coat, pants, a blouse, or a jacket. Complanation refers to the reversion to a two-dimensional treatment of the human form rather than the more obvious three-dimensional form. Finally, Inversion was used by displaying an inner layer of clothing on the outside or exposing seams or zippers in a way that people are not accustomed to seeing. It also meant that the order of wearing clothes was sometimes inverted, so external layers would be worn within clothing that is traditionally underneath. Maison Margiela's creations represented a break

An Anomalous Sequence Detection Method Based on An Extended LSTM Autoencoder (확장된 LSTM 오토인코더 기반 이상 시퀀스 탐지 기법)

  • Lee, Jooyeon;Lee, Ki Yong
    • The Journal of Society for e-Business Studies / v.26 no.1 / pp.127-140 / 2021
  • Recently, sequence data containing time information, such as sensor measurement data and purchase histories, has been generated in various applications. Many methods have been proposed for finding sequences that are significantly different from the others in a given set, but most of them are limited in that they consider only the order of elements in the sequences. Therefore, in this paper, we propose a new anomalous-sequence detection method that considers both the order of elements and the time intervals between them. The proposed method uses an extended LSTM autoencoder model with an additional layer that converts a sequence into a form that helps the model effectively learn both the order of elements and the time intervals between them. The method learns the features of the given sequences with the extended LSTM autoencoder and then flags sequences that the model does not reconstruct well as anomalous. In experiments on synthetic data containing both normal and anomalous sequences, the proposed method achieves an accuracy close to 100%, outperforming the method that uses only the traditional LSTM autoencoder.
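The conversion step the abstract describes, turning a timestamped sequence into a form carrying both element order and inter-element gaps, can be sketched as a simple preprocessing function. This is a hypothetical analogue of the paper's extra layer, not the authors' implementation.

```python
# Sketch: converting a timestamped event sequence into (element, time-gap)
# pairs, so a downstream model sees both the order of elements and the
# time interval between consecutive elements.

def with_intervals(events):
    """events: list of (timestamp, value) -> list of (value, gap_to_previous)."""
    out = []
    prev_t = None
    for t, v in events:
        gap = 0.0 if prev_t is None else t - prev_t
        out.append((v, gap))
        prev_t = t
    return out

seq = [(0.0, "login"), (2.5, "search"), (3.0, "buy")]
print(with_intervals(seq))   # [('login', 0.0), ('search', 2.5), ('buy', 0.5)]
```

An autoencoder trained on such pairs can then penalize reconstruction error in either component, so a sequence with normal element order but unusual timing would still stand out.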

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To learn a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras, based on Theano.
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and outputs consisting of the following 21st character. In total, 1,023,411 input-output vector pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest to train for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not improved significantly, and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
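The phoneme (jamo) decomposition that produces the paper's input units can be sketched using the standard Unicode arithmetic for precomposed Hangul syllables. This is a generic illustration of the decomposition step, not the authors' preprocessing code; the function name is invented.

```python
# Sketch: decomposing Hangul syllables into phonemes (jamo), the smallest
# unit the phoneme-level LSTM language model operates on. Precomposed
# syllables occupy U+AC00..U+D7A3 and encode (lead, vowel, tail) indices.

LEADS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")          # 19 leading consonants
VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")     # 21 vowels
TAILS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 tails + empty

def to_phonemes(text):
    """Split each precomposed Hangul syllable into its jamo; pass other
    characters (punctuation, Latin letters, ...) through unchanged."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                    # precomposed Hangul syllable
            lead, rest = divmod(code, 21 * 28)
            vowel, tail = divmod(rest, 28)
            out.append(LEADS[lead])
            out.append(VOWELS[vowel])
            if tail:                             # tail 0 means no final consonant
                out.append(TAILS[tail])
        else:
            out.append(ch)
    return out

print(to_phonemes("한국"))   # ['ㅎ', 'ㅏ', 'ㄴ', 'ㄱ', 'ㅜ', 'ㄱ']
```

Sliding a 20-phoneme window over such a stream, with the 21st phoneme as the target, reproduces the input-output construction described in the abstract.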

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.221-241 / 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). A CNN is characterized by dividing the input image into small sections, recognizing partial features, and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing; the use of deep learning techniques for business problems is still at an early research stage. If their performance is proved, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a very meaningful experiment to assess the possibility of solving business problems with deep learning, using the case of online shopping companies, which have big data, relatively easy-to-identify customer behavior, and high utilization value. In online shopping companies especially, the competitive environment is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important. In this study, we propose a 'CNN model of heterogeneous information integration' as a way to improve the prediction of customer behavior in online shopping enterprises.
The proposed model learns from both structured and unstructured information by combining a convolutional neural network with a multi-layer perceptron structure. To optimize its performance, we design and evaluate three architectural components, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC (voice of customer) data of a specific online shopping company in Korea. The data cover 47,947 customers who registered at least one VOC in January 2011 (one month), including their customer profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month. The experiment is divided into two stages. In the first stage, we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters; in the second, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNNs can be applied to business problems as well as to image recognition and natural language processing.
The experiments confirm that the CNN is effective in understanding and interpreting the meaning of context in textual VOC data. It is also significant that this empirical research, based on the actual data of an e-commerce company, can extract very meaningful information for customer behavior prediction from VOC data written in text form directly by customers. Finally, through the various experiments, the proposed model provides useful information for future research on parameter selection and performance.
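The "heterogeneous information integration" idea, joining a customer's structured profile with features derived from unstructured VOC text before classification, can be sketched as a simple feature-concatenation step. The vocabulary and bag-of-words text branch here are hypothetical stand-ins for the paper's CNN text branch.

```python
# Sketch: integrating structured customer features with a vector derived
# from unstructured VOC text, by concatenation, before classification.

VOCAB = ["refund", "delivery", "discount", "late"]   # toy vocabulary

def text_vector(voc_text):
    """Bag-of-words counts over a tiny fixed vocabulary (stand-in for the
    CNN text branch that converts unstructured text into a vector)."""
    tokens = voc_text.lower().split()
    return [tokens.count(w) for w in VOCAB]

def integrate(profile, voc_text):
    """Concatenate structured features with the text-derived vector."""
    return profile + text_vector(voc_text)

# e.g. profile = [purchase count, total spend]; VOC = complaint text
x = integrate([3, 120.0], "late delivery and refund refund")
print(x)   # [3, 120.0, 2, 1, 0, 1]
```

The combined vector `x` would then feed the multi-layer perceptron that produces each of the six binary predictions (re-purchaser, churner, and so on).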

Development of Software Education Support System using Learning Analysis Technique (학습분석 기법을 적용한 소프트웨어교육 지원 시스템 개발)

  • Jeon, In-seong;Song, Ki-Sang
    • Journal of The Korean Association of Information Education / v.24 no.2 / pp.157-165 / 2020
  • As interest in software education has increased, discussions on its teaching, learning, and evaluation methods have also been active. One problem with software education teaching methods is that the instructor cannot grasp, in real time, the code being written on each learner's computer, and is therefore limited in providing timely feedback to learners. To overcome this problem, we developed a software education support system that applies learning analytics to capture each learner's coding activity in real time in a block-based programming environment, delivers it to the instructor, and visualizes the data collected during learning through a Hadoop system. The system comprises a presentation layer that teachers and learners access, a business layer that analyzes and structures code, and a DB layer that stores class information, account information, and learning information. The instructor can set the content to be learned in advance in the system, and can compare and analyze learners' achievement through a computational-thinking-components rubric, based on data comparing the stored reference code with the students' code.
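The business-layer role described above, comparing a learner's block code against a stored reference and scoring a rubric, can be sketched as follows. The block names and the single-number score are illustrative assumptions, not the system's actual rubric.

```python
# Sketch: a business-layer analysis step that checks which reference
# concepts (block types) appear in a learner's block code and returns
# per-concept coverage plus an overall score.

REFERENCE = {"loop", "condition", "variable", "event"}   # instructor-set concepts

def analyze(learner_blocks):
    """Return per-concept coverage and an overall score in [0, 1]."""
    used = set(learner_blocks) & REFERENCE
    coverage = {concept: concept in used for concept in sorted(REFERENCE)}
    return coverage, len(used) / len(REFERENCE)

coverage, score = analyze(["event", "loop", "loop", "say"])
print(score)   # 0.5  (2 of the 4 reference concepts used)
```

In the described architecture, the presentation layer would stream the learner's current blocks to this function and the result would be stored in the DB layer for the instructor's dashboard.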

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised models such as deep belief networks, owing to their successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling.
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each local receptive field; this means that all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. What pooling layers do is simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of unstable gradient problems such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
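The pooling step described in the CNN discussion above, simplifying a convolutional layer's output, can be sketched as a 2x2 max-pooling pass. This is a generic pure-Python illustration of the standard operation, not code from any particular library.

```python
# Sketch: 2x2 max pooling over a 2D feature map. Each 2x2 block of the
# convolutional layer's output is replaced by its maximum, halving each
# dimension while keeping the strongest feature activations.

def max_pool_2x2(fmap):
    """Downsample a 2D feature map (even dimensions) by 2x2 max pooling."""
    rows, cols = len(fmap), len(fmap[0])
    return [
        [max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
         for c in range(0, cols, 2)]
        for r in range(0, rows, 2)
    ]

fmap = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 3],
]
print(max_pool_2x2(fmap))   # [[4, 2], [2, 7]]
```

Because only the maximum of each block survives, the pooled map is insensitive to small shifts of a feature within its 2x2 region, which is part of why pooling helps image classification.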