• Title/Summary/Keyword: data complexity


A Study on the Emotional Reaction to the Interior Design - Focusing on the Worship Space in the Church Buildings - (실내공간 구성요소에 의한 감성반응 연구 - 기독교 예배공간 강단부를 중심으로 -)

  • Lee, Hyun-Jeong;Lee, Gyoo-Baek
    • Archives of design research, v.18 no.4 s.62, pp.257-266, 2005
  • The purpose of this study is to investigate the psychological reactions to images of the worship space in church buildings, to quantify the contribution of the stimulation elements causing such reactions, and finally to suggest basic data for realizing emotionally satisfying worship spaces in church architecture. For this, 143 Christians were surveyed to analyze the relationship between 23 emotional expressions extracted from the worship space and 32 images of the worship space. The combined data were described as a two-dimensional dispersion using quantification theory III. The analysis found that 'simplicity-complexity' of the image constituted the horizontal axis (the x-axis) and 'creativity' the vertical axis (the y-axis). In addition, to quantitatively extract the causal relationship between the emotional reaction and its stimulation elements, the author identified four emotional word groups based on similarity by cluster analysis: simple and sublime on the x-axis, and typical and creative on the y-axis. Quantification theory I was then applied, with the total value of equivalent emotional words as the criterion variable and the emotional stimulation elements of the worship space as the independent variables. Nine specific emotional stimulation elements were selected, including colors and shapes of the wall and ceiling, shapes and finishes of the floor materials, window shapes, and the use of symbolic elements. Furthermore, 31 subcategories were also chosen to analyze their contribution to the emotional reaction. As a result, the color and finish of the wall were found to be the most effective elements on the subjects' emotional reaction, while the symbolic elements and the color of the wall were found to be the least effective. The present study is expected to be helpful in increasing the emotional satisfaction of users and in approaching spatial design through satisfying the types and purposes of the space.
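Quantification theory I, used in the abstract above, is in effect least-squares regression on dummy-coded categorical predictors. The sketch below uses hypothetical wall-color and ceiling-shape categories and made-up scores, not the study's actual 9 elements or 31 subcategories:

```python
import numpy as np

# Quantification theory I: regress a numeric response on dummy-coded
# categorical predictors; the fitted weights are the "category scores".
# All data below are hypothetical.
wall_color = ["white", "wood", "white", "gray", "wood", "gray"]
ceiling    = ["flat", "vault", "vault", "flat", "flat", "vault"]
score      = np.array([3.2, 4.5, 4.1, 2.8, 3.6, 4.0])  # emotional-reaction score

def dummy_code(values):
    """One-hot encode a categorical variable, dropping the first level."""
    levels = sorted(set(values))
    return np.array([[1.0 if v == lvl else 0.0 for lvl in levels[1:]]
                     for v in values])

X = np.hstack([np.ones((len(score), 1)),      # intercept
               dummy_code(wall_color),        # 2 dummies (3 levels)
               dummy_code(ceiling)])          # 1 dummy  (2 levels)

# Least-squares category weights
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(coef.round(3))
```

The relative magnitude of each category weight plays the role of the "contribution" the abstract quantifies for each stimulation element.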


Performance Evaluation of Machine Learning and Deep Learning Algorithms in Crop Classification: Impact of Hyper-parameters and Training Sample Size (작물분류에서 기계학습 및 딥러닝 알고리즘의 분류 성능 평가: 하이퍼파라미터와 훈련자료 크기의 영향 분석)

  • Kim, Yeseul;Kwak, Geun-Ho;Lee, Kyung-Do;Na, Sang-Il;Park, Chan-Won;Park, No-Wook
    • Korean Journal of Remote Sensing, v.34 no.5, pp.811-827, 2018
  • The purpose of this study is to compare a machine learning algorithm and a deep learning algorithm for crop classification using multi-temporal remote sensing data. For this, the impacts of (1) hyper-parameters and (2) training sample size on machine learning and deep learning algorithms were compared and analyzed for Haenam-gun, Korea and Illinois State, USA. In the comparison experiment, support vector machine (SVM) was applied as the machine learning algorithm and convolutional neural network (CNN) as the deep learning algorithm. In particular, both 2D-CNN, which considers 2-dimensional spatial information, and 3D-CNN, which extends 2D-CNN with a time dimension, were applied. As a result of the experiment, the optimal hyper-parameter values of CNN, among the various hyper-parameters considered, were more similar across the two study areas than those of SVM. Based on this result, although it takes much time to optimize a CNN model, it is considered possible to apply transfer learning to extend an optimized CNN model to other regions. In the experiments with various training sample sizes, the impact of sample size on CNN was larger than on SVM. In particular, this impact was more pronounced in Illinois State, which has heterogeneous spatial patterns. In addition, 3D-CNN showed the lowest classification performance in Illinois State, which is considered to be due to over-fitting caused by the complexity of the model. That is, although the training accuracy of the 3D-CNN model was high, its classification performance was relatively degraded by the heterogeneous patterns and the noise of the input data. This result implies that a proper classification algorithm should be selected considering the spatial characteristics of the study area. Also, a large number of training samples is necessary to guarantee higher classification performance in CNN, particularly in 3D-CNN.
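The study's comparison of training-sample-size effects can be sketched in miniature as a learning-curve loop over an SVM, here on synthetic data rather than the multi-temporal imagery used in the paper, and with illustrative (not the paper's) hyper-parameter values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic 4-class problem standing in for a crop-classification task.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

accs = {}
for n in (50, 200, 800):                          # increasing training sample size
    clf = SVC(kernel="rbf", C=10, gamma="scale")  # hyper-parameters to be tuned
    clf.fit(X_train[:n], y_train[:n])
    accs[n] = clf.score(X_test, y_test)
    print(n, round(accs[n], 3))
```

The same loop wrapped around a CNN would typically show a steeper accuracy drop at small `n`, which is the sensitivity the abstract reports.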

Validation of the Korean Version of the Trauma Symptom Checklist-40 among Psychiatric Outpatients (정신건강의학과 외래환자 대상 한국판 외상 증상 체크리스트(Trauma Symptom Checklist-40)의 타당도 연구)

  • Park, Jin;Kim, Daeho;Kim, Eunkyung;Kim, Seokhyun;Yun, Mirim
    • Korean Journal of Psychosomatic Medicine, v.26 no.1, pp.35-43, 2018
  • Objectives : The effects of multiple trauma are complex and extend beyond core PTSD symptoms. However, few psychological instruments for trauma assessment address this issue of symptom complexity. The Trauma Symptom Checklist-40 (TSC-40) is a self-report scale that assesses a wide range of symptoms associated with childhood or adult traumatic experience. The purpose of the present study was to evaluate the validity of the Korean version of the TSC-40 in a sample of psychiatric outpatients. Methods : Data from 367 treatment-seeking patients with DSM-IV diagnoses were obtained from the outpatient department of a psychiatric unit at a university hospital. The diagnoses included anxiety disorder, posttraumatic stress disorder, depressive disorder, adjustment disorder and others. The psychometric data included the TSC-40, the Life Events Checklist, the Impact of Event Scale-Revised, the Zung Self-Rating Depression Scale, and the Zung Self-Rating Anxiety Scale. Cronbach's ${\alpha}$ was calculated for internal consistency. Convergent and concurrent validity were assessed through correlations between the TSC-40 and the other scales (PTSD, anxiety and depression). Results : Exploratory factor analysis of the Korean version of the TSC-40 extracted a seven-factor structure accounting for 59.55% of the total variance, contextually similar to the six-factor and five-factor structures of the original English version. The Korean version of the TSC-40 demonstrated a high level of internal consistency (Cronbach's ${\alpha}=0.94$) and good concurrent and convergent validity with another PTSD scale and with anxiety and depression scales. Conclusions : This study demonstrated excellent construct validity of the Korean version of the TSC-40. The subtle differences in factor structure may reflect cultural issues and sample characteristics such as the heterogeneous clinical population (including non-trauma-related disorders) and outpatient status. Overall, this study demonstrated that the Korean version of the TSC-40 is psychometrically sound and can be used for the Korean clinical population.
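Cronbach's ${\alpha}$, the internal-consistency statistic reported above, is straightforward to compute from an item-score matrix. A minimal sketch on hypothetical Likert responses (not the TSC-40 data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-respondent, 4-item Likert data
scores = [[3, 3, 2, 3],
          [1, 1, 1, 2],
          [4, 4, 3, 4],
          [2, 3, 2, 2],
          [3, 4, 4, 3]]
print(round(cronbach_alpha(scores), 3))  # → 0.938
```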

An Empirical Study on the Importance of Psychological Contract Commitment in Information Systems Outsourcing (정보시스템 아웃소싱에서 심리적 계약 커미트먼트의 중요성에 대한 연구)

  • Kim, Hyung-Jin;Lee, Sang-Hoon;Lee, Ho-Geun
    • Asia pacific journal of information systems, v.17 no.2, pp.49-81, 2007
  • Research in IS (Information Systems) outsourcing has focused on the importance of legal contracts and partnerships between vendors and clients. Without detailed legal contracts, there is no guarantee that an outsourcing vendor will not indulge in self-serving behavior. In addition, partnerships can supplement legal contracts in managing the relationship between clients and vendors, because legal contracts by themselves cannot deal with all the complexity and ambiguity involved in IS outsourcing relationships. In this paper, we introduce the psychological contract (between client and vendor) as an important variable for IS outsourcing success. A psychological contract refers to an individual's mental beliefs about his or her mutual obligations in a contractual relationship (Rousseau, 1995). A psychological contract emerges when one party believes that a promise of future returns has been made, a contribution has been given, and thus an obligation has been created to provide future benefits (Rousseau, 1989). An employment psychological contract, a widespread concept in psychology, refers to employer and employee expectations of the employment relationship, i.e. mutual obligations, values, expectations and aspirations that operate over and above the formal contract of employment (Smithson and Lewis, 2003). Similar to the psychological contract between an employer and employee, IS outsourcing involves a contract and a set of mutual obligations between client and vendor (Ho et al., 2003). Given the lack of prior research on psychological contracts in the IS outsourcing context, we extend such studies and provide insights by investigating the role of psychological contracts between client and vendor.
Psychological contract theory offers a highly relevant and sound theoretical lens for studying IS outsourcing management because of its six distinctive principles: (1) it focuses on mutual (rather than one-sided) obligations between contractual parties, (2) it is more comprehensive than the concept of a legal contract, (3) it is an individual-level construct, (4) it changes over time, (5) it affects organizational behaviors, and (6) it is susceptible to organizational factors (Koh et al., 2004; Rousseau, 1996; Coyle-Shapiro, 2000). The aim of this paper is to put the concept of psychological contract commitment (PCC) under the spotlight by examining its mediating effects between legal contracts/partnerships and IS outsourcing success. Our interest is in psychological contract commitment (PCC), or commitment to psychological contracts, which is the extent to which a partner is consistently and deeply concerned with what the counter-party believes to be obligations during the IS project. The basic premise for the hypothesized relationship between PCC and success is that for outsourcing success, client and vendor should continually commit to the mutual obligations in which both parties believe, rather than only to explicit obligations. Psychological contract commitment plays a pivotal role in evaluating a counter-party because it reflects what one party really expects from the other. If one party consistently shows high commitment to psychological contracts, the other party will evaluate it positively. This will increase the positive reciprocation efforts of the other party, thus leading to successful outsourcing outcomes (McNeeley and Meglino, 1994). We used matched sample data for this research.
We collected three responses from each matched set of a client and a vendor firm: a project manager of the client firm, a project member from the vendor firm with whom the project manager cooperated, and an end-user of the client company who actually used the outsourced information systems. Special caution was taken during the data collection process to avoid any bias in responses. We first sent three types of questionnaires (A, B and C) to each project manager of the client firm, asking him/her to answer the first type of questionnaire (A).
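The hypothesized mediation (legal contract/partnership → PCC → outsourcing success) can be illustrated with a toy indirect-effect calculation on synthetic data; the variable names, effect sizes, and the simple OLS approach are all illustrative assumptions, not the study's actual structural model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic illustration of mediation: partnership -> PCC -> success.
partnership = rng.normal(size=n)
pcc = 0.6 * partnership + rng.normal(scale=0.5, size=n)      # path a (true 0.6)
success = 0.7 * pcc + rng.normal(scale=0.5, size=n)          # path b (true 0.7)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones(n), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(partnership, pcc)                       # X -> mediator
Xb = np.column_stack([np.ones(n), pcc, partnership])
b, c_prime = np.linalg.lstsq(Xb, success, rcond=None)[0][1:]
indirect = a * b                                  # mediated effect
print(round(indirect, 2), round(c_prime, 2))
```

With full mediation, as simulated here, the indirect effect `a*b` is substantial while the direct effect `c_prime` is near zero.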

Pollutant Loading Estimate from Yongdam Watershed Using BASINS/HSPF (BASINS/HSPF를 이용한 용담댐 유역의 오염부하량 산정)

  • Jang, Jae-Ho;Jung, Kwang-Wook;Jeon, Ji-Hong;Yoon, Chun-Gyeong
    • Korean Journal of Ecology and Environment, v.39 no.2 s.116, pp.187-197, 2006
  • A mathematical modeling program called Hydrological Simulation Program-FORTRAN (HSPF), developed by the United States Environmental Protection Agency (EPA), was applied to the Yongdam watershed to examine its applicability to loading estimates at the watershed scale. It was run under the BASINS (Better Assessment Science for Integrating point and Nonpoint Sources) program, and the model was validated using monitoring data of 2002${\sim}$2003. The model efficiency for runoff was high in the comparison between simulated and observed data, while it was relatively low for the water quality parameters. Still, its reliability and performance were within expectation considering the complexity of the watershed, its pollutant sources, and the intermixed land uses. The estimated pollutant loads from the Yongdam watershed for BOD, T-N and T-P were 1,290,804 kg $yr^{-1}$, 3,753,750 kg $yr^{-1}$ and 77,404 kg $yr^{-1}$, respectively. The non-point source (NPS) contribution was high, amounting to 57.2% (BOD), 92.0% (T-N) and 60.2% (T-P) of the total annual loading in the study area. The NPS loading during the monsoon rainy season (June to September) was about 55${\sim}$72% of the total NPS loading, and the runoff volume was in a similar range (69%). However, water quality concentrations were not necessarily high during the rainy season, and showed a decreasing trend with increasing water flow. Overall, BASINS/HSPF was applied to the Yongdam watershed successfully without difficulty, and it was found that the model can be used conveniently to assess watershed characteristics and to estimate pollutant loading at the watershed scale.
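The point-source/non-point-source split implied by the reported totals can be recovered by simple arithmetic from the figures in the abstract:

```python
# Annual loads and NPS contribution fractions as reported in the abstract.
total = {"BOD": 1_290_804, "T-N": 3_753_750, "T-P": 77_404}   # kg/yr
nps_fraction = {"BOD": 0.572, "T-N": 0.920, "T-P": 0.602}

for pollutant, load in total.items():
    nps = load * nps_fraction[pollutant]      # non-point-source share
    point = load - nps                        # remainder is point-source
    print(f"{pollutant}: NPS {nps:,.0f} kg/yr, point {point:,.0f} kg/yr")
```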

Numerical and Experimental Study on the Coal Reaction in an Entrained Flow Gasifier (습식분류층 석탄가스화기 수치해석 및 실험적 연구)

  • Kim, Hey-Suk;Choi, Seung-Hee;Hwang, Min-Jung;Song, Woo-Young;Shin, Mi-Soo;Jang, Dong-Soon;Yun, Sang-June;Choi, Young-Chan;Lee, Gae-Goo
    • Journal of Korean Society of Environmental Engineers, v.32 no.2, pp.165-174, 2010
  • This study presents numerical modeling of the coal gasification reactions occurring in an entrained flow coal gasifier. The purpose of this study is to develop a reliable evaluation method for the coal gasifier, not only for basic design but also for further optimization of system operation, using a CFD (Computational Fluid Dynamics) method. The coal gasification reaction consists of a series of processes such as water evaporation, coal devolatilization, heterogeneous char reactions, and gas-phase reactions of the coal off-gases in a two-phase, turbulent, radiation-participating medium. Both numerical and experimental studies were made for the 1.0 ton/day entrained flow coal gasifier installed at the Korea Institute of Energy Research (KIER). The comprehensive computer program in this study was built on a commercial CFD program by implementing several subroutines necessary for the gasification process, including the Eddy-Breakup model together with the harmonic mean approach for turbulent reaction. Further, a Lagrangian approach to particle trajectories was adopted, with consideration of the turbulent effect caused by the non-linearity of the drag force, etc. The program developed was successfully evaluated against experimental data such as profiles of temperature and gaseous species concentrations together with the cold gas efficiency. Further intensive investigation was made into the size distribution of the pulverized coal particles, the slurry concentration, and the design parameters of the gasifier. These parameters were compared and evaluated through the calculated syngas production rate and cold gas efficiency, and appear to directly affect gasification performance. Considering the complexity of entrained coal gasification, even though the results of this study look physically reasonable and consistent in the parametric study, more elaborate modeling efforts together with systematic evaluation against experimental data are necessary for the development of a reliable CFD-based design tool.
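Cold gas efficiency, the performance metric evaluated above, is the ratio of the chemical energy in the cold syngas to the chemical energy in the coal feed. A sketch with hypothetical operating numbers (not the KIER measurements):

```python
def cold_gas_efficiency(syngas_flow_nm3_h, syngas_lhv_mj_nm3,
                        coal_feed_kg_h, coal_lhv_mj_kg):
    """CGE = syngas chemical energy / coal chemical energy (both in MJ/h)."""
    syngas_energy = syngas_flow_nm3_h * syngas_lhv_mj_nm3
    coal_energy = coal_feed_kg_h * coal_lhv_mj_kg
    return syngas_energy / coal_energy

# Hypothetical 1 ton/day-scale operating point:
# 75 Nm3/h syngas at LHV 10 MJ/Nm3, 41.7 kg/h coal at LHV 25 MJ/kg
cge = cold_gas_efficiency(75.0, 10.0, 41.7, 25.0)
print(round(cge, 3))  # → 0.719
```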

Characteristics of Pollution Loading from Kyongan Stream Watershed by BASINS/SWAT. (BASINS/SWAT 모델을 이용한 경안천 유역의 오염부하 배출 특성)

  • Jang, Jae-Ho;Yoon, Chun-Gyeong;Jung, Kwang-Wook;Lee, Sae-Bom
    • Korean Journal of Ecology and Environment, v.42 no.2, pp.200-211, 2009
  • A mathematical modeling program called the Soil and Water Assessment Tool (SWAT), developed by the USDA, was applied to the Kyongan stream watershed. It was run under the BASINS (Better Assessment Science for Integrating point and Non-point Sources) program, and the model was calibrated and validated using KTMDL monitoring data of 2004${\sim}$2008. The model efficiency for flow ranged from very good to fair in the comparison between simulated and observed data, and it was good for the water quality parameters, similar to the flow results. The model's reliability and performance were within expectation considering the complexity of the watershed and its pollutant sources. In the yearly (2004${\sim}$2008) pollutant load estimation, loadings in 2006 were higher than in the other years because of high precipitation and flow. Average non-point source (NPS) pollution rates were 30.4%, 45.3% and 28.1% for SS, TN and TP, respectively. The NPS pollutant loading for SS, TN and TP during the monsoon rainy season (June to September) was about 61.8${\sim}$88.7% of the total NPS pollutant loading, and the flow volume was in a similar range. SS concentration depended on precipitation and pollution loading patterns, but TN and TP concentrations were not necessarily high during the rainy season, and showed a decreasing trend with increasing water flow. SWAT under BASINS was applied to the Kyongan stream watershed successfully without difficulty, and it was found that the model can be used conveniently to assess watershed characteristics and to estimate pollutant loading, including point and non-point sources, at the watershed scale.
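The "model efficiency" criterion used in watershed-model calibration is commonly the Nash-Sutcliffe efficiency; the abstract does not state which formula was used, so the following is only an illustration of that standard choice, on hypothetical flows:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, <= 0 = no better than the mean."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical daily flows (m3/s), not the Kyongan calibration data
obs = [12.0, 30.0, 55.0, 21.0, 9.0, 14.0]
sim = [10.0, 33.0, 50.0, 25.0, 11.0, 12.0]
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.958
```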

A Study of the Influencing Factors for Decision Making on Construction Contract Types : Focused on DoD Construction Acquisitions with Firm Fixed Price and Cost Reimbursable in FAR (건설공사 대가지급방식의 의사결정 영향요인에 관한 연구 - 미국 연방조달규정에 따른 미국 국방성의 정액계약과 실비정산계약을 중심으로 -)

  • Son, Young-Hoon;Kim, Kyung-Rai
    • Korean Journal of Construction Engineering and Management, v.25 no.2, pp.23-35, 2024
  • This study analyzed the correlation between each of the 12 influencing factors in FAR 16.04 and the decision-making process for construction contract types, using data from a total of 2,406 DoD construction acquisitions spanning 2008 to 2022. The study considered 12 independent variables, grouped into 4 characteristics with 3 factors each. All contract types were categorized into two types, Firm-Fixed-Price (FFP) and Cost-Reimbursement Contract (CRC), which served as the dependent variable. The findings revealed that FFP contracts significantly dominated in terms of acquisition volume. In line with prevailing beliefs, logistic data analysis and an Analytical Hierarchy Process (AHP) analysis of relative weights from an experts' survey demonstrated that independent variables such as Uncertainty of the Scope of Work and Complexity increase the likelihood of selecting CRC. The number of contractors in the market also influences the contract decision between CRC and FFP. Meanwhile, the p-values of the top 3 influencing factors on CRC from the AHP analysis, namely Appropriateness of CAS, Project Urgency, and Cost Analysis, exceeded 0.05 in the binomial regression results, rendering it inconclusive whether they significantly influenced the construction contract type decision, particularly with respect to payment methods. This outcome partly results from the fact that a majority of respondents possessed specific experiences related to the USFK relocation project. Furthermore, influencing factors in construction projects behave differently than common beliefs suggest. As a result, it is imperative to consider the 12 influencing factors categorized into the 4 characteristics areas before establishing acquisition strategies for targeted construction projects.
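The binomial (logistic) regression in the study relates the contract-type choice to influencing factors. A sketch on synthetic data with two hypothetical predictors standing in for Uncertainty of the Scope of Work and Complexity (the coefficients and sample are illustrative, not the study's 2,406 acquisitions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: probability of a cost-reimbursement contract
# (CRC = 1) vs firm-fixed-price (FFP = 0) rising with scope uncertainty
# and complexity.
rng = np.random.default_rng(42)
n = 500
uncertainty = rng.uniform(0, 1, n)
complexity = rng.uniform(0, 1, n)
logit = -3.0 + 2.5 * uncertainty + 2.0 * complexity   # assumed true model
p_crc = 1 / (1 + np.exp(-logit))
y = rng.binomial(1, p_crc)

model = LogisticRegression().fit(np.column_stack([uncertainty, complexity]), y)
print(model.coef_.round(2))   # both fitted coefficients come out positive
```

Positive fitted coefficients correspond to the study's finding that these factors increase the likelihood of selecting CRC.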

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.71-88, 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used for neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependency between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean texts. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done on Old Testament texts using the deep learning package Keras based on Theano.
After pre-processing the texts, the dataset included 74 unique characters including vowels, consonants, and punctuation marks. We then constructed an input vector of 20 consecutive characters and an output of the following 21st character. In total, 1,023,411 input-output vector pairs were included in the dataset, and we divided them into training, validation and test sets in a 70:15:15 proportion. All the simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest to train for both the 3- and 4-LSTM models. On average, the 4-LSTM layer model took 69% longer to train than the 3-LSTM layer model, yet its validation loss and perplexity were not significantly improved, and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for the processing of Korean in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
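The windowing step described above (20 consecutive characters in, the 21st out) can be sketched as follows, on a toy English string rather than the Old Testament corpus; the arrays produced would feed the LSTM input and target layers:

```python
import numpy as np

# Next-character prediction framing: slide a 20-character window over the
# text; each window is an input sample, the following character its target.
text = "the quick brown fox jumps over the lazy dog"  # toy corpus
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

window = 20
X, y = [], []
for i in range(len(text) - window):
    X.append([char_to_idx[c] for c in text[i:i + window]])  # 20-char input
    y.append(char_to_idx[text[i + window]])                 # 21st-char target
X, y = np.array(X), np.array(y)
print(X.shape, y.shape)  # one (input, target) pair per sliding position
```

Applied to the 1,023,411-character-window corpus in the paper, the same loop yields the 70:15:15-split training, validation and test sets.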

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.101-124, 2018
  • Recently, most technologies have been developed in various forms through the advancement of a single technology or through interaction with other technologies. In particular, these technologies are characterized by the convergence caused by interaction between two or more techniques. In addition, efforts to respond to technological changes in advance are continuously increasing, through forecasting the promising convergence technologies that will emerge in the near future. Accordingly, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology has the characteristics of various technologies by the principle of its generation. Therefore, forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been confirmed in attempts to forecast promising technologies using big data analysis and social network analysis. Studies of convergence technology through data analysis are actively conducted with the themes of discovering new convergence technologies and analyzing their trends, so information about new convergence technologies is being provided more abundantly than in the past. However, existing methods for analyzing convergence technology have some limitations. First, most studies dealing with convergence technology analyze data through predefined technology classifications. Recently emerging technologies tend to have convergence characteristics and thus consist of technologies from various fields; in other words, a new convergence technology may not belong to any defined classification. Therefore, the existing methods do not properly reflect the dynamic change of the convergence phenomenon.
Second, to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators, which does not fully utilize the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; based on them, it can grow into an independent field or disappear rapidly, according to changes in the technologies it depends on. In existing analyses, the potential growth of a convergence technology is judged through traditional, general-purpose indicators. However, these indicators do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies affect the creation of other technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models for forecasting promising convergence technologies. In studies of convergence technology, research on forecasting promising technologies has been relatively scarce due to the complexity of the field, so it is difficult to find a method to evaluate the accuracy of such forecasting models. To activate the field of forecasting promising convergence technologies, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study. To overcome these limitations, we propose a new method for the analysis of convergence technologies. First of all, through topic modeling, we derive a new technology classification in terms of text content, reflecting the dynamic change of the actual technology market rather than an existing fixed classification standard.
In addition, we identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. We also devise a centrality indicator (PGC, potential growth centrality) to forecast the future growth of a technology by utilizing the centrality information of each technology; it reflects the convergence characteristics of each technology according to technology maturity and the interdependence between technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. In this paper, we conduct experiments with 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. As a result, it is confirmed that the forecasting model based on the proposed centrality indicator achieves forecast accuracy up to about 2.88 times higher than that of models based on currently used network indicators.
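The exact definition of the PGC indicator is not given in the abstract, so the following is only a hedged sketch of the underlying idea: a technology's potential growth rises with the maturity of the technologies it depends on, weighted by dependency strength. All names, scores, and the scoring rule itself are hypothetical:

```python
# Hypothetical technology maturity scores
maturity = {"A": 0.9, "B": 0.7, "C": 0.3}
# Hypothetical directed dependency edges: (source tech, dependent tech, weight)
edges = [("A", "C", 0.8), ("B", "C", 0.5), ("A", "B", 0.3)]

def potential_growth(tech):
    """Weighted sum of the maturities of the technologies `tech` builds on."""
    return sum(w * maturity[src] for src, dst, w in edges if dst == tech)

scores = {t: round(potential_growth(t), 2) for t in maturity}
print(scores)  # C draws on two mature technologies, so it scores highest
```

Tracking how such a score changes between periods is the kind of growth-rate measurement the paper uses to evaluate forecast accuracy.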