• Title/Summary/Keyword: Smart structures

Towards high-accuracy data modelling, uncertainty quantification and correlation analysis for SHM measurements during typhoon events using an improved most likely heteroscedastic Gaussian process

  • Qi-Ang Wang;Hao-Bo Wang;Zhan-Guo Ma;Yi-Qing Ni;Zhi-Jun Liu;Jian Jiang;Rui Sun;Hao-Wei Zhu
    • Smart Structures and Systems
    • /
    • v.32 no.4
    • /
    • pp.267-279
    • /
    • 2023
  • Data modelling and interpretation for structural health monitoring (SHM) field data are critical for evaluating structural performance and quantifying the vulnerability of infrastructure systems. To improve data modelling accuracy and to extend the application range from data regression analysis to out-of-sample forecasting, an improved most likely heteroscedastic Gaussian process (iMLHGP) methodology is proposed in this study by incorporating an out-of-sample forecasting algorithm. The proposed iMLHGP method overcomes the constant-variance limitation of the standard Gaussian process (GP) and can be used to estimate non-stationary, typhoon-induced response statistics with high volatility. A first attempt at performing data regression and forecasting analysis on structural responses using the proposed iMLHGP method is presented by applying it to real-world field SHM data from an instrumented cable-stayed bridge during typhoon events. Uncertainty quantification and correlation analysis were also carried out to investigate the influence of typhoons on bridge strain data. Results show that the iMLHGP method achieves high accuracy in both regression and out-of-sample forecasting. The iMLHGP framework accounts for data heteroscedasticity while replacing the full analytical treatment of the noise variance with a point estimate at its most likely value, thereby avoiding intensive computational effort. According to the uncertainty quantification and correlation analysis results, the uncertainties of strain measurements are affected by both traffic and wind speed. The overall change of bridge strain is driven by temperature, while local fluctuations are strongly affected by wind speed under typhoon conditions.
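
As a rough illustration of the baseline that iMLHGP improves on, the sketch below runs the classic most-likely-heteroscedastic-GP iteration (fit a mean GP, fit a second GP to the log residual variances, refit the mean GP with per-point noise) using scikit-learn. The kernels, iteration count, and variable names are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch, assuming scikit-learn, of the most likely heteroscedastic GP
# iteration that iMLHGP builds on; kernels and iteration count are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def mlhgp_fit(X, y, n_iter=3):
    # Step 1: fit an ordinary (homoscedastic) GP as the initial mean model.
    gp_mean = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
    gp_noise = None
    for _ in range(n_iter):
        # Step 2: a second GP models the log of the empirical noise variance,
        # estimated here from squared residuals at the training inputs.
        resid2 = (y - gp_mean.predict(X)) ** 2
        gp_noise = GaussianProcessRegressor(kernel=RBF()).fit(X, np.log(resid2 + 1e-8))
        noise_var = np.exp(gp_noise.predict(X))          # input-dependent variance
        # Step 3: refit the mean GP with heteroscedastic noise via per-sample alpha.
        gp_mean = GaussianProcessRegressor(kernel=RBF(), alpha=noise_var).fit(X, y)
    return gp_mean, gp_noise

# Toy usage: volatility grows with x, which a constant-variance GP cannot capture.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
y = np.sin(X).ravel() + (0.05 + 0.3 * X.ravel() / 10.0) * rng.normal(size=200)
gp_mean, gp_noise = mlhgp_fit(X, y)
```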

A Study on the Digital Construction Information Structure for the Implementing Digital Twin of Road Construction Sites (도로 건설현장의 디지털트윈 구현을 위한 디지털 건설정보구조에 관한 연구)

  • Taewon Chung;Hyon Wook Ji;Jin Hoon Bok
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.1
    • /
    • pp.153-166
    • /
    • 2024
  • The digitalization of tasks for smart construction requires the smooth exchange of digital data among stakeholders to be effective, but there is a lack of digital data standardization and utilization methods. This paper proposes a digital construction information structure to transform information from road construction sites into digital formats. The study targets include significant tasks, such as work planning, scheduling, safety management, and quality control. The key to the construction information structure is separating construction information into objects and activities, defining unit works by combining these two types of information to ensure flexibility in representing and modifying construction information. The objects and activities have their respective hierarchical structures, which are defined flexibly to match the actual content. This structure achieves both efficiency and detail. The pilot structure was applied to highway construction projects and implemented digitally using general formats. This study enables the digitalization of road construction processes that closely resemble reality, accelerating the digital transformation of the civil engineering industry by developing a digital twin of the entire road construction lifecycle.
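
The following is a hypothetical sketch of the core idea described above: objects and activities carry their own hierarchical codes, and a unit work is the flexible pairing of one object with one activity. All class and field names are illustrative, not the paper's actual schema.

```python
# Hypothetical sketch of separating construction information into objects and
# activities, then combining them into unit works; names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectNode:
    code: str        # hierarchical object code, e.g. "earthwork.embankment.layer1"
    name: str

@dataclass(frozen=True)
class ActivityNode:
    code: str        # hierarchical activity code, e.g. "compact.roller"
    name: str

@dataclass
class UnitWork:
    obj: ObjectNode
    act: ActivityNode
    def key(self) -> str:
        # A unit work is defined by pairing one object with one activity,
        # so either side can be modified without rewriting the other hierarchy.
        return f"{self.obj.code}::{self.act.code}"

uw = UnitWork(ObjectNode("embankment.layer1", "Embankment layer 1"),
              ActivityNode("compact.roller", "Roller compaction"))
print(uw.key())   # embankment.layer1::compact.roller
```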

Micropatterning of Polyimide and Liquid Crystal Elastomer Bilayer for Smart Actuator (스마트 액추에이터를 위한 폴리이미드 및 액정 엘라스토머 이중층의 미세패터닝)

  • Yerin Sung;Hyun Seung Choi;Wonseong Song;Vanessa;Yuri Kim;Yeonhae Ryu;Youngjin Kim;Jaemin Im;Dae Seok Kim;Hyun Ho Choi
    • Journal of Adhesion and Interface
    • /
    • v.25 no.1
    • /
    • pp.169-274
    • /
    • 2024
  • Recent attention has been drawn to materials that undergo reversible expansion and contraction in response to external stimuli, leading to morphological changes. These materials hold potential applications in various fields including soft robotics, sensors, and artificial muscles. In this study, a novel material capable of responding to high temperatures for protection or encapsulation is proposed. To achieve this, a liquid crystal elastomer (LCE) with nematic-isotropic transition properties and a polyimide (PI) with high mechanical strength and thermal stability were utilized. To enable a solution process, a dope solution was synthesized and used in micro-printing techniques to develop a two-dimensional pattern of LCE/PI bilayer structures with sub-millimeter widths. The honeycomb-patterned LCE/PI bilayer mesh combined the mechanical strength of PI with the high-temperature contraction behavior of LCE, and selective printing of LCE enabled deformation in desired directions at high temperatures. Consequently, the functionality of selectively and reversibly encapsulating specific high-temperature materials was achieved. This study suggests potential applications in various actuator fields where functionality can be implemented across different temperature ranges without electrical energy input, contingent upon molecular changes in the LCE.

A Study of the Beauty Commerce Customer Segment Classification and Application based on Machine Learning: Focusing on Untact Service (머신러닝 기반의 뷰티 커머스 고객 세그먼트 분류 및 활용 방안: 언택트 서비스 중심으로)

  • Sang-Hyeak Yoon;Yoon-Jin Choi;So-Hyun Lee;Hee-Woong Kim
    • Information Systems Review
    • /
    • v.22 no.4
    • /
    • pp.75-92
    • /
    • 2020
  • As population and generational structures change, more and more customers tend to avoid face-to-face interactions, owing to the development of information technology and the spread of smartphones. This phenomenon is consistent with the efficiency and immediacy favored by modern customers accustomed to information technology, so offline, network-oriented distribution companies are actively trying to switch their sales and services to untact patterns. Untact services have recently expanded in various fields, but beauty products are difficult to recommend through untact services because of the many options that depend on skin types and conditions. There have been many studies on recommendations and recommendation systems in the online beauty field, but most develop recommendation algorithms using survey or social data. In other words, few studies classify segments based on user information such as skin type and product preference. Therefore, this study classifies customer segments by applying the K-prototypes algorithm, a machine learning technique, to customer information and search-log data from a mobile application, one of the untact services in the beauty field; on this basis, an untact marketing strategy is suggested. This study expands the scope of the previous literature by classifying customer segments with a machine learning technique. It is practically meaningful in that it classifies customer segments by reflecting the new consumption trend of untact services and, based on this, suggests a specific plan that can be used in untact services in the beauty field.
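
As an illustration of the clustering step, the sketch below segments mixed numeric and categorical customer attributes with the K-prototypes algorithm via the open-source kmodes package; the feature columns and cluster count are invented stand-ins for the paper's customer-information and search-log features.

```python
# A minimal sketch of customer segmentation with K-prototypes (mixed numeric +
# categorical features) using the `kmodes` package; columns are illustrative.
import numpy as np
from kmodes.kprototypes import KPrototypes

# Columns: [searches_per_week, purchases, skin_type, preferred_category]
X = np.array([
    [12, 3, "dry",         "skincare"],
    [2,  0, "oily",        "makeup"],
    [8,  5, "dry",         "skincare"],
    [1,  1, "combination", "haircare"],
], dtype=object)

kproto = KPrototypes(n_clusters=2, init="Cao", random_state=0)
labels = kproto.fit_predict(X, categorical=[2, 3])  # indices of categorical columns
print(labels)   # cluster assignment per customer, e.g. [0 1 0 1]
```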

Enhancing Project Integration and Interoperability of GIS and BIM Based on IFC (IFC 기반 GIS와 BIM 프로젝트 통합관리 및 상호 운용성 강화)

  • Kim, Tae-Hee;Kim, Tae-Hyun;Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX
    • /
    • v.54 no.1
    • /
    • pp.89-102
    • /
    • 2024
  • The recent advancements in Smart City and Digital Twin technologies have highlighted the critical role of integrating GIS and BIM in urban planning and construction projects. This integration ensures the consistency and accuracy of information, facilitating smooth information exchange. However, achieving interoperability requires standardization and effective project integration management strategies. This study proposes interoperability solutions for the integration of GIS and BIM for managing various projects. The research involves an in-depth analysis of the IFC schema and data structures based on the latest IFC4 version and proposes methods to ensure the consistency of reference-point coordinates and coordinate systems. The study was conducted by setting the EPSG:5186 coordinate system, used by the National Geographic Information Institute's digital topographic map, and applying virtual shifted origin coordinates. Using BIMvision, the movement of the model shape and of the error-check coordinates in the BIM model was reviewed, confirming that the error-check coordinates moved consistently with the reference-point coordinates. Additionally, it was verified that even when the coordinate system was changed to EPSG:5179, used by Naver Map and road-name addresses, or EPSG:5181, used by Kakao Map, the BIM model's shape and coordinates remained consistent. Notably, by inputting the EPSG code information into the IFC file, the potential for coordinate-system interoperability between projects was confirmed. Therefore, this study presents an integrated and systematic management approach for information sharing, automated processes, enhanced collaboration, and sustainable development of GIS and BIM. This is expected to improve compatibility across various software platforms, enhancing information consistency and efficiency across multiple projects.
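
A minimal sketch of the coordinate-system checks described above, assuming the pyproj library: a hypothetical EPSG:5186 reference point is converted to the EPSG:5179 and EPSG:5181 systems mentioned in the abstract. The sample coordinates are illustrative only.

```python
# Convert a hypothetical EPSG:5186 reference point into the other Korean CRSs
# named in the abstract, to verify consistent reference-point handling.
from pyproj import Transformer

ref_x, ref_y = 200000.0, 550000.0   # hypothetical EPSG:5186 reference point

for target in ("EPSG:5179", "EPSG:5181"):
    t = Transformer.from_crs("EPSG:5186", target, always_xy=True)
    x, y = t.transform(ref_x, ref_y)
    print(f"{target}: ({x:.2f}, {y:.2f})")
```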

Effect of Social Relation on Digital Device Usage: A Social Capital Perspective (개인의 사회적 관계가 디지털 기기 활용에 미치는 영향에 대한 연구: 사회적 자본 관점)

  • Yunmo Koo;Joohyun Oh
    • Information Systems Review
    • /
    • v.21 no.3
    • /
    • pp.131-149
    • /
    • 2019
  • As smartphones, tablets, and other digital devices become more pervasive, the theoretical debate around the digital divide, which previously focused on "access," is now expanding to how effectively users "utilize," actively "produce," and "share" information. Such discussion is significant because the impact on interpersonal and social networks depends on how digital devices are used, which can recreate or exacerbate structures of social inequality. This study examines the effect of an individual's social relations and two types of social capital (bonding and bridging) on the economic and socio-participatory usage of digital devices. An empirical analysis of a dataset of 740 surveys reveals that the more horizontal an individual's social relations, the more both bonding and bridging social capital increase. However, rather than directly influencing the two types of digital device usage, an individual's social relations have an indirect effect on both economic and socio-participatory usage. In particular, both bonding and bridging social capital mediate the economic usage of digital devices, whereas for socio-participatory usage only bonding social capital has a mediating effect. We discuss the role of social capital in digital device usage and present theoretical and practical implications.
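
The mediation logic described above can be illustrated with a simple percentile bootstrap of the indirect (a x b) effect; the sketch below uses simulated data and statsmodels, so the variable names and effect sizes are assumptions, not the study's survey data.

```python
# A minimal sketch of testing an indirect (mediated) effect: social relations ->
# social capital -> device usage. Data are simulated; only the method is real.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 740                                    # survey size reported in the abstract
relations = rng.normal(size=n)             # horizontality of social relations
capital = 0.5 * relations + rng.normal(size=n)                   # mediator
usage = 0.4 * capital + 0.05 * relations + rng.normal(size=n)    # outcome

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # m -> y | x
    return a * b                           # product-of-coefficients indirect path

# Percentile bootstrap for the indirect effect.
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(relations[idx], capital[idx], usage[idx]))
print(np.percentile(boot, [2.5, 97.5]))    # 95% CI excluding 0 supports mediation
```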

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.293-299
    • /
    • 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvature, and other potential causes of gas explosions. There are two major data access patterns apparent when an analyzer accesses the pipeline signal data. The first is the sequential pattern, where an analyst reads the sensor data only once, in sequential fashion. The second is the repetitive pattern, where an analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, by treating pipeline sensor data as multiple time-series and efficiently caching the time-series data in T-Cache. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of the signal cache line as a caching unit, which is a set of time-series signal data for a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that does not use any caching, indicating that the caching overhead in T-Cache is negligible.
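
Below is a hypothetical sketch of the signal-cache-line idea: the client caches fixed-distance chunks of time-series signal data and evicts lines LRU-fashion, so repetitive reads within a range hit memory instead of the server. The granularity, names, and fetch interface are illustrative, not T-Cache's actual structures.

```python
# Hypothetical sketch of a client-side cache keyed by fixed-distance "signal
# cache lines" with LRU eviction; not the paper's actual implementation.
from collections import OrderedDict

LINE_LEN = 100.0   # metres of pipeline per cache line (assumed granularity)

class TCache:
    def __init__(self, capacity, fetch_from_server):
        self.capacity = capacity
        self.fetch = fetch_from_server      # callable: (start, end) -> signal data
        self.lines = OrderedDict()          # line index -> cached signal chunk

    def read(self, distance):
        idx = int(distance // LINE_LEN)
        if idx in self.lines:
            self.lines.move_to_end(idx)     # LRU: mark line as most recently used
        else:
            self.lines[idx] = self.fetch(idx * LINE_LEN, (idx + 1) * LINE_LEN)
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)   # evict least recently used line
        return self.lines[idx]

cache = TCache(capacity=64, fetch_from_server=lambda s, e: f"signal[{s}:{e}]")
print(cache.read(250.0))   # repeated reads in this range now hit the cache
```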

Analyzing animation techniques used in webtoons and their potential issues (웹툰 연출의 애니메이션 기법활용과 문제점 분석)

  • Kim, Yu-mi
    • Cartoon and Animation Studies
    • /
    • s.46
    • /
    • pp.85-106
    • /
    • 2017
  • With the media's shift into the digital era in the 2000s, comic book publishers attempted a transition into the new medium by establishing a distribution structure using internet networks, but that effort stopped short of escaping the parallel-page reading structure of traditional comics. Webtoons, on the other hand, show diverse changes by redesigning the structure of traditional sequential art media: they tend to separate and allot spaces according to the vertical-scroll reading method of the internet browser and include animation, sound effects, and background music. This trend is also in accordance with the preferences of modern readers. Modern society has complicated social structures shaped by the development of various media; the public is therefore exposed to different stimuli and shows differentiated modes of perception. In other words, webtoons display more engaging and entertaining characteristics by inserting sounds and using moving text and characters in specific frames, whereas traditional comics, like other published media, require appreciation through withdrawal and immersion. Motion in webtoons is applied selectively, for dramatic tension or to create an effective expression of action; for example, hand-drawn animation is adopted to express motion by dividing motion images into many layers. Sound is also utilized, such as background music with episode-related lyrics, melodies, ambient sounds, and motion-related sound effects. In addition, webtoons provide readers with new amusement by giving tactile stimuli via the vibration of a smartphone. As stated above, the vertical orientation, the time-based nature of animated motion, and the tactile stimuli used in webtoons differentiate them from published comics. However, webtoons' utilization of these innovative techniques has not yet reached its full potential. Beyond the operational complexity of the software used for webtoon effects, this is a transitional phenomenon, since technical understanding of animation and sound application is still lacking. For example, a sound might be programmed to play when a specific frame scrolls into view on the monitor, but if the frame is scrolled faster or slower than the author intended, the sound can end before or after the reader sees the whole image. The motion of each frame is programmed to start in a similar fashion, so a reader's scroll speed determines the motion's speed; motions can therefore miss their intended timing and appear unnatural because they play out of context. Finished sound effects can also disturb readers' concentration. These problems stem from a shortage of continuity; to solve them, naturally activated consecutive sounds and animations, like the simple rotation of joints when a character moves, are required.

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui;Baek, Sung Ha;Lee, Soon Jo;Bae, Hae Young
    • Spatial Information Research
    • /
    • v.20 no.5
    • /
    • pp.99-109
    • /
    • 2012
  • Recently, with the growth of social media and the spread of smartphones, the amount of data has increased considerably through the widespread use of SNS (Social Network Services). Accordingly, the Big Data concept has emerged, and many researchers are seeking solutions to make the best use of big data. To maximize the creative value of the big data held by many companies, it must be combined with existing data; because the physical and logical storage structures of data sources differ so much, a system that can integrate and manage them is needed. MapReduce was developed to process big data, with the advantage of fast processing through distribution. However, it is difficult to construct and store a system for all keywords, and because of its store-then-search workflow, real-time processing is difficult to some extent; moreover, processing complex events incurs extra cost without a structure for handling heterogeneous data. To solve this problem, existing complex event processing (CEP) systems can be used. A CEP system takes data from different sources and combines them, making complex event processing possible; this is useful for real-time processing, especially of stream data. Nevertheless, unstructured data based on text from SNS and internet articles is managed as a text type, requiring string comparisons every time query processing is performed, which results in poor performance. Therefore, we enable a CEP system to manage unstructured data and process queries quickly. We extend the complex-event function to give a logical schema to strings, achieved by converting string keywords into integer types through filtering against a keyword set. In addition, by using the CEP system and processing stream data in memory in real time, we reduce query processing time compared to reading data after it has been stored on disk.
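
A minimal sketch of the keyword-to-integer conversion described above, under the assumption of a pre-registered keyword set: incoming text events are filtered and mapped to integer codes so that the CEP engine compares integers rather than strings. All names and the keyword set are illustrative.

```python
# Sketch of string-to-integer keyword encoding for a CEP pipeline: only events
# containing registered keywords enter the engine, as integer codes.
KEYWORDS = {"typhoon": 1, "flood": 2, "earthquake": 3}   # hypothetical keyword set

def encode_event(text):
    """Return integer codes for registered keywords found in a raw text event."""
    tokens = text.lower().split()
    return [KEYWORDS[t] for t in tokens if t in KEYWORDS]

stream = ["Typhoon warning issued", "flood levels rising", "market news today"]
for event in stream:
    codes = encode_event(event)
    if codes:                       # keyword-free events are filtered out early
        print(codes)                # e.g. [1] -> integer comparison downstream
```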

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Owing to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects. This knowledge is created through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences appropriate for extracting triples, and selecting values and transforming them into RDF triple structures. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
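
To make the final pipeline step concrete, the sketch below converts BIO-tagged tokens into an RDF-style triple, with the CRF/Bi-LSTM-CRF tagger abstracted away; the tag names and example are illustrative, not the paper's actual label scheme.

```python
# A minimal sketch of the third pipeline step: collecting BIO-tagged tokens
# into the object value of an RDF triple. Tags/examples are illustrative.
def bio_to_triple(subject, relation, tokens, tags):
    """Join B-/I- tagged tokens into the object slot of (subject, relation, object)."""
    value_tokens = [tok for tok, tag in zip(tokens, tags) if tag in ("B-VAL", "I-VAL")]
    if not value_tokens:
        return None                      # sentence carried no extractable value
    return (subject, relation, " ".join(value_tokens))

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea"]
tags   = ["O",     "O",  "O",  "O",       "O",  "B-VAL", "I-VAL"]
print(bio_to_triple("dbr:Seoul", "dbo:country", tokens, tags))
# ('dbr:Seoul', 'dbo:country', 'South Korea')
```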