• Title/Summary/Keyword: Big 5 Model

Search Result 444

Evolution of Business Model: From Plug To Platform - Dawon DNS Business Case- (비즈니스 모델의 진화: 플러그에서 플랫폼으로 -다원 DNS IoT 기술의 사례-)

  • Park, MinHyuk; Yeo, Unnam; Lee, Jungwoo
    • Journal of Information Technology Services / v.20 no.5 / pp.105-118 / 2021
  • As we enter the era of the 4th industrial revolution, information and communication technologies, including artificial intelligence and big data, are converging throughout society. In particular, as the importance of hyper-connection as a social foundation grows, the social influence of IoT, a network connecting objects, people, and various other entities, is also gradually expanding. In addition, as the COVID-19 pandemic continues, interest in contactless ("untact") technology and service development is greater than ever, and each company is trying to establish a core-competency strategy to gain a competitive edge in a changing society. This study is a case study of Dawon DNS, a company that provides an IoT-based AI smart plug platform. Dawon DNS is broadening its services while developing products that apply advanced technologies, and this study investigates the core competencies behind that business evolution. The results offer implications for companies seeking to become more competitive by suggesting the attitudes and strategies that startups should adopt in a transforming business environment.

A study of an Architecture of Digital Twin Ship with Mixed Reality

  • Lee, Eun-Joo; Kim, Geo-Hwa; Jang, Hwa-Sup
    • Journal of Navigation and Port Research / v.46 no.5 / pp.458-470 / 2022
  • As the 4th industrial revolution progresses, the application of several cutting-edge technologies, such as the Internet of Things, big data, and mixed reality (MR), to autonomous ships is being considered in the maritime logistics field. The aim of this study was to apply to a ship the concept of a digital twin model based on Human Machine Interaction (HMI), covering both the digital twin model and the role of the operator. The role of the digital twin is divided into information provision, support, decision, and implementation; the role of the operator is divided into operation, decision-making, supervision, and standby. The systems constituting the ship and the digital twin systems applicable to it were investigated. The cloud-based digital twin system architecture was divided into ship data collection (part 1), the cloud system (part 2), the analysis system/application (part 3), and the MR/mobile system (part 4). A mixed reality device, the HoloLens, was used as the HMI equipment in a simulation test of the digital twin system on an 8 m battery-based electric propulsion ship.

Direct Finite Element Model Generation using 3 Dimensional Scan Data (3D SCAN DATA 를 이용한 직접유한요소모델 생성)

  • Lee Su-Young; Kim Sung-Jin; Jeong Jae-Young; Park Jong-Sik; Lee Seong-Beom
    • Journal of the Korean Society for Precision Engineering / v.23 no.5 s.182 / pp.143-148 / 2006
  • It is still very difficult to generate geometry models and finite element models that have many complex free surfaces, even when 3D CAD solutions are applied. Furthermore, in the medical field, a big growth area in recent years, there are no drawings. For these reasons, creating a geometry model for finite element analysis is very difficult. To resolve these problems, and to satisfy the need to create a 3D digital file for objects where none existed before, new technologies have appeared in recent years. Among them, there is growing interest in the availability of fast, affordable optical laser range scanning. The development of 3D laser scanning to obtain 3D point cloud data has made it possible to generate 3D models of complex objects. Surface reconstruction applications have been widely used to generate CAD and finite element models from point cloud data. Early applications had many difficulties, such as data handling and model-creation time, and recently developed point-based surface generation applications only partly resolve them: for large and complex scanned objects, generating CAD and finite element models still takes a significant amount of time and effort. Hence, we developed a direct finite element model generation method that uses the location coordinate values of the point cloud, to save working time and obtain an accurate finite element model.
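
The "direct" idea above, mapping point cloud coordinates straight to finite element nodes without an intermediate CAD surface, can be sketched by snapping scan points to a regular grid so that near-coincident points merge into shared node IDs. This is a minimal illustration under assumed conventions, not the paper's actual algorithm; the function name, voxel size, and data points are hypothetical.

```python
from collections import OrderedDict

def points_to_fe_nodes(points, voxel_size):
    """Snap scan points to a regular grid and assign unique node IDs.

    Returns (nodes, labels): nodes maps node_id -> node coordinate,
    labels gives the node ID each input point was merged into.
    """
    grid_to_id = OrderedDict()
    labels = []
    for x, y, z in points:
        # Quantize each coordinate so nearby scan points share one FE node.
        key = (round(x / voxel_size), round(y / voxel_size), round(z / voxel_size))
        if key not in grid_to_id:
            grid_to_id[key] = len(grid_to_id) + 1  # FE node IDs start at 1
        labels.append(grid_to_id[key])
    nodes = {nid: (kx * voxel_size, ky * voxel_size, kz * voxel_size)
             for (kx, ky, kz), nid in grid_to_id.items()}
    return nodes, labels

# Four hypothetical scan points; the first two collapse into one node.
pts = [(0.01, 0.0, 0.0), (0.02, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.01, 0.0)]
nodes, labels = points_to_fe_nodes(pts, voxel_size=0.5)
print(len(nodes), labels)
```

A real pipeline would follow this node-merging step with element connectivity generation; the sketch only shows how raw coordinates become a deduplicated node table.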

A Hybrid Efficient Feature Selection Model for High Dimensional Data Set based on KNHNAES (2013~2015) (KNHNAES (2013~2015) 에 기반한 대형 특징 공간 데이터집 혼합형 효율적인 특징 선택 모델)

  • Kwon, Tae il; Li, Dingkun; Park, Hyun Woo; Ryu, Kwang Sun; Kim, Eui Tak; Piao, Minghao
    • Journal of Digital Contents Society / v.19 no.4 / pp.739-747 / 2018
  • With large feature-space data, feature selection has become an extremely important step in the data mining process, and traditional single-stage feature selection methods may no longer be adequate. In this paper, we propose a hybrid efficient feature selection model for high dimensional data. We applied our model to the KNHNAES data set; the results show that our model outperforms many existing methods by at least 5% in accuracy.
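
A hybrid pipeline of the kind described, a fast filter stage followed by a slower wrapper stage, can be sketched as follows. The paper's actual stages are not specified here; this toy combines a correlation filter with greedy forward selection scored by leave-one-out 1-NN accuracy, and all names and data are hypothetical.

```python
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def loo_1nn_accuracy(X, y, feats):
    """Leave-one-out 1-nearest-neighbor accuracy on a feature subset."""
    hits = 0
    for i in range(len(X)):
        best_label, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best_d:
                best_label, best_d = y[j], d
        hits += best_label == y[i]
    return hits / len(X)

def hybrid_select(X, y, keep_filter=3, keep_wrapper=2):
    n_feats = len(X[0])
    # Stage 1 (filter): rank features by |correlation| with the label.
    ranked = sorted(range(n_feats),
                    key=lambda f: -abs(pearson([r[f] for r in X], y)))
    candidates = ranked[:keep_filter]
    # Stage 2 (wrapper): greedy forward selection on the survivors.
    selected = []
    while len(selected) < keep_wrapper:
        f = max((c for c in candidates if c not in selected),
                key=lambda c: loo_1nn_accuracy(X, y, selected + [c]))
        selected.append(f)
    return selected

random.seed(0)
# Synthetic data: feature 0 carries the signal, features 1-3 are noise.
y = [i % 2 for i in range(40)]
X = [[yi + random.gauss(0, 0.2)] + [random.gauss(0, 1) for _ in range(3)]
     for yi in y]
selected = hybrid_select(X, y)
print(selected)
```

The filter stage keeps the wrapper's expensive search tractable on high-dimensional data, which is the motivation for hybrid designs.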

An Estimating Model for Job-Site Overhead Costs according to Progress Rate (공정률에 따른 아파트 건설공사 현장관리비 산정모델)

  • Jeong, Kichang; Lee, Jaeseob
    • Korean Journal of Construction Engineering and Management / v.19 no.5 / pp.43-52 / 2018
  • Research on construction cost has generally focused on direct cost, so models of indirect cost have received little attention. This research introduces a model for predicting the on-site overhead cost of apartment construction projects, which constitute a large share of the Korean construction industry. Using actual on-site expense data from multiple projects, we devised a ninth-degree polynomial via curve fitting that calculates the per-day on-site overhead cost at each progress rate. We demonstrate the model's prospective usage by applying it to construction projects worth about 30 billion won. Whereas previous studies could not capture changes in the pattern of total on-site overhead cost, this model is valuable for its convenience and thoroughness, and it provides a reasonable basis for predicting the on-site overhead cost of apartment construction projects.
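
The curve-fitting step can be illustrated with a ninth-degree polynomial fit of the kind the paper uses. The cost profile below is invented for illustration (the paper fits actual site expense data), and the numbers carry no claim about real projects.

```python
import numpy as np

# Hypothetical per-day overhead cost (million won/day) over the progress
# rate: low during mobilization and close-out, peaking mid-project.
progress = np.linspace(0.0, 1.0, 21)          # progress rate, 0..1
daily_cost = 1.0 + 4.0 * np.exp(-((progress - 0.5) / 0.2) ** 2)

# Fit a ninth-degree polynomial, as in the paper's curve-fitting step.
coeffs = np.polyfit(progress, daily_cost, deg=9)
model = np.poly1d(coeffs)

# The fitted curve then gives the per-day overhead at any progress rate.
max_residual = float(np.max(np.abs(model(progress) - daily_cost)))
peak = float(model(0.5))
print(round(max_residual, 3), round(peak, 2))
```

Normalizing progress to 0..1 before fitting keeps the high-degree fit numerically well conditioned.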

Development of a model to analyze the relationship between smart pig-farm environmental data and daily weight increase based on decision tree (의사결정트리를 이용한 돈사 환경데이터와 일당증체 간의 연관성 분석 모델 개발)

  • Han, KangHwi; Lee, Woongsup; Sung, Kil-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.12 / pp.2348-2354 / 2016
  • In recent years, IoT (Internet of Things) technology has been widely used in agriculture, enabling the collection of environmental and biometric data into databases. The availability of big data on agriculture has led to an increase in machine-learning-based analysis, which makes it possible to forecast agricultural production and livestock diseases, thus supporting efficient decision making in smart farm management. Herein, we use the environmental and biometric data of a smart pig farm to derive an accurate model of the relationship between environmental information and the daily weight gain of swine, and we verify the accuracy of the derived model. To this end, we applied the M5P tree algorithm, a machine learning method, which reveals that wind speed is the major factor affecting the daily weight gain of swine.
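
M5P (a Weka algorithm) grows a model tree with linear regressions in its leaves; as a minimal stand-in, the core split-selection idea can be sketched with a single CART-style regression split. The farm records below are invented, and this is not the paper's implementation.

```python
def best_split(X, y, feature):
    """Exhaustively search one feature for the threshold that minimizes
    the summed squared error of the two resulting leaf means."""
    pairs = sorted(zip((row[feature] for row in X), y))
    best_sse, best_thr = float("inf"), None
    for i in range(1, len(pairs)):
        left = [v for _, v in pairs[:i]]
        right = [v for _, v in pairs[i:]]
        mean_l, mean_r = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - mean_l) ** 2 for v in left)
               + sum((v - mean_r) ** 2 for v in right))
        if sse < best_sse:
            best_sse = sse
            best_thr = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_sse, best_thr

# Hypothetical pig-house records: [wind speed (m/s), temperature (C)]
# against daily weight gain (kg). Numbers invented for illustration.
X = [[0.2, 22], [0.3, 25], [0.4, 21], [1.5, 23], [1.8, 24], [2.0, 22]]
y = [0.85, 0.83, 0.86, 0.62, 0.60, 0.58]

# The root split lands on the feature that explains the most variation;
# an M5P-style analysis reads that feature off as the dominant factor.
sse_by_feature = {f: best_split(X, y, f)[0] for f in range(2)}
root_feature = min(sse_by_feature, key=sse_by_feature.get)
print(root_feature)
```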

Assessing the adoption potential of a smart greenhouse farming system for tomatoes and strawberries using the TOA-MD model

  • Lee, Won Seok; Kim, Hyun Seok
    • Korean Journal of Agricultural Science / v.47 no.4 / pp.743-752 / 2020
  • The purpose of this study was to evaluate the economics of smart farm investment for tomatoes and strawberries and to derive the potential adoption rate of smart farms under different scenarios. The economic evaluation used the net present value (NPV) method, and the adoption potential was estimated with the trade-off analysis, minimum data (TOA-MD) model. The NPV analysis shows that smart farm investment for the two crops is economically feasible, with minimum prices of 1,179 won/kg for tomatoes and 3,797 won/kg for strawberries needed to secure sufficient economic feasibility. The TOA-MD analysis showed that, with a 50% support ratio for smart farm adoption and price increase rates of -5, -2.5, 0, 2.5, and 5%, the conversion rates of tomato farms to smart farms were 0.97, 1.78, 3.05, 4.91, and 7.47%, while those of strawberry farms were 0.12, 0.29, 0.65, 1.33, and 2.53%, respectively. This study has some limitations, but it provides useful information for decision making about smart farm adoption and can contribute to government policies on smart farms.
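
The NPV criterion used in the evaluation can be sketched directly. The cash flows and discount rate below are hypothetical, not the paper's figures.

```python
def npv(cash_flows, rate):
    """Net present value; cash_flows[t] is the cash flow in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical smart-farm investment: an up-front cost followed by five
# years of extra net revenue, discounted at 5%. Figures are illustrative.
flows = [-100_000_000] + [25_000_000] * 5  # won
result = npv(flows, 0.05)
print(round(result))
```

A positive NPV is the feasibility criterion; the paper additionally inverts this calculation to find the minimum crop price at which the NPV stays non-negative.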

Hologram Based QSAR Analysis of CXCR-2 Inhibitors

  • Sathya, B.
    • Journal of Integrative Natural Science / v.10 no.2 / pp.78-84 / 2017
  • CXC chemokine receptor 2 (CXCR2) is a prominent chemokine receptor on neutrophils. A CXCR2 antagonist may reduce neutrophil chemotaxis and alter the inflammatory response, because neutrophilic inflammation in lung diseases is largely regulated through the CXCR2 receptor. Hence, in the present study, a hologram-based quantitative structure-activity relationship (HQSAR) study was performed on a series of CXCR2 antagonists, pyrimidine-5-carbonitrile-6-alkyl derivatives. The best HQSAR model was obtained using atoms, bonds, and chirality as fragment distinction parameters, with a hologram length of 151, 6 components, and a fragment size of minimum 4 and maximum 7. A significant cross-validated correlation coefficient ($q^2=0.774$) and non-cross-validated correlation coefficient ($r^2=0.977$) were obtained. The model was then used to evaluate six external test compounds, and its $r^2_{pred}$ was found to be 0.614. The contribution map shows that the presence of a cyclopropyl ring and its bulkier substituents contributes substantially to improving the biological activities of the compounds. We hope that our HQSAR model and analysis will be helpful for the future design of novel, structurally related CXCR2 antagonists.
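
The "hologram" in HQSAR is a fixed-length vector of fragment counts produced by hashing molecular fragments (via a CRC) into bins; hologram length and fragment size are exactly the parameters tuned above. A toy sketch of that counting idea, hashing linear substrings of a SMILES string rather than the true molecular subgraphs HQSAR enumerates:

```python
import zlib

def molecular_hologram(smiles, length=151, fmin=4, fmax=7):
    """Hash every linear fragment (substring) of fmin..fmax characters
    into a fixed-length count vector: a toy 'molecular hologram'.
    Real HQSAR hashes molecular subgraphs, not SMILES substrings."""
    holo = [0] * length
    for size in range(fmin, fmax + 1):
        for i in range(len(smiles) - size + 1):
            fragment = smiles[i:i + size]
            holo[zlib.crc32(fragment.encode()) % length] += 1
    return holo

# Paracetamol SMILES, used purely as an example string (18 characters).
holo = molecular_hologram("CC(=O)Nc1ccc(O)cc1")
print(len(holo), sum(holo))
```

The regression step of HQSAR (partial least squares over these count vectors) is omitted here; the sketch only shows how fragment counts fold into a hologram of the stated length.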

Recent R&D Trends for 3D Deep Learning (3D 딥러닝 기술 동향)

  • Lee, S.W.; Hwang, B.W.; Lim, S.J.; Yoon, S.U.; Kim, T.J.; Choi, J.S.; Park, C.J.
    • Electronics and Telecommunications Trends / v.33 no.5 / pp.103-110 / 2018
  • Studies on artificial intelligence have developed over the past couple of decades. After a few periods of prosperity and recession, a new machine learning method, so-called deep learning, was introduced as a result of high-quality big data, an increase in computing power, and the development of new algorithms. The main targets of deep learning have been 1D audio and 2D images, and the application domain is being extended from discriminative models, such as classification/segmentation, to generative models. Currently, deep learning is also used for processing 3D data. However, unlike 2D data, 3D learning data is not easy to acquire. Although low-cost 3D data acquisition sensors have become more popular owing to advances in 3D vision technology, the generation and acquisition of 3D data remains a very difficult problem. Moreover, it is not easy to directly apply existing network models, such as convolution networks, owing to the variety of 3D data representations. In this paper, we summarize the 3D deep learning technologies that have started to be developed within the last two years.

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If one can discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time series graph of a topic over the duration of a month; (3) convey the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time series graph of a keyword via keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL databases, which are alternatives to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing to thousands of machines.
Furthermore, we use MongoDB, a NoSQL database. MongoDB is an open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly; therefore, TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction between data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, which provides pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed with these libraries and can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
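
Function (1) above, the daily topic-keyword ranking, can be sketched in miniature. TITS performs topic modeling over a Hadoop cluster; this stand-in only ranks keywords per day by term frequency after stop-word removal, using a few invented tweets.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "is", "in", "of", "to", "and", "on"}

def daily_keyword_ranking(tweets_by_day, top_n=3):
    """Rank keywords per day by term frequency after stop-word removal.
    A toy stand-in for TITS's topic-model-based daily ranking."""
    ranking = {}
    for day, tweets in tweets_by_day.items():
        counts = Counter(
            word
            for tweet in tweets
            for word in tweet.lower().split()
            if word not in STOP_WORDS
        )
        ranking[day] = [w for w, _ in counts.most_common(top_n)]
    return ranking

# Invented tweets keyed by date, mimicking the system's daily input.
tweets = {
    "2013-03-01": ["the subway fare is rising", "subway fare protest",
                   "fare hike on the subway"],
    "2013-03-02": ["big data conference in seoul", "data conference talks"],
}
ranking = daily_keyword_ranking(tweets)
print(ranking["2013-03-01"])
```

The real pipeline adds noun extraction, topic modeling, and time series scoring on top of this counting step before the d3.js front end visualizes the results.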