• Title/Summary/Keyword: Modeling Methods


Slope design optimization framework for road cross section using genetic algorithm based on BIM

  • Ke DAI;Shuhan YANG;Zeru LIU;Jung In KIM;Min Jae SUH
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.558-565
    • /
    • 2024
  • This paper presents the development of an optimization framework for road slope design. Recognizing the limitations of current manual stability analysis methods, which are time-consuming, error-prone, and prone to data mismatches, this study proposes a systematic approach to improve efficiency, reduce costs, and ensure the safety of infrastructure projects. The framework addresses the subjectivity inherent in engineers' decision-making process by formalizing decision variables, constraints, and objective functions to minimize costs while ensuring safety and environmental considerations. The necessity of this framework is underscored by a review of the existing literature, which reveals a trend toward specialization within sub-disciplines of road design; however, a gap remains in addressing the complexities of road slope design through an integrated optimization approach. A genetic algorithm (GA) is employed as the fundamental optimization tool due to its well-established mechanisms of selection, crossover, and mutation, which are suitable for evolving road slope designs toward optimal solutions. An automated batch analysis process supports the GA, demonstrating the potential of the proposed framework. Although the framework focuses on the design optimization of single cross-section road slopes, the implications extend to broader applications in civil engineering practices. Future research directions include refining the GA, expanding the decision variables, and empirically validating the framework in real-world scenarios. Ultimately, this research lays the groundwork for more comprehensive optimization models that could consider multiple cross-sections and contribute to safer and more cost-effective road slope designs.
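
The GA loop described above maps naturally onto a few lines of code. Below is a minimal sketch, assuming a hypothetical two-variable design (slope angle and berm width) and placeholder cost and factor-of-safety models; the paper's actual decision variables, constraints, and objective function are richer than this.

```python
# Minimal GA sketch (selection, crossover, mutation); the decision variables,
# cost model, and safety check are hypothetical placeholders, not the authors'
# actual formulation.
import random

def fitness(angle_deg, berm_m):
    # Hypothetical: earthwork cost grows as the slope flattens (more cut
    # volume); a large penalty enforces a minimum factor of safety.
    cost = (90 - angle_deg) * 100 + berm_m * 50
    factor_of_safety = 0.02 * (90 - angle_deg) + 0.1 * berm_m  # placeholder model
    penalty = 1e6 if factor_of_safety < 1.3 else 0.0
    return cost + penalty

def evolve(pop_size=50, generations=100):
    pop = [(random.uniform(20, 80), random.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind))      # selection: keep the fitter half
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            angle = random.choice([a[0], b[0]])      # uniform crossover
            berm = random.choice([a[1], b[1]])
            if random.random() < 0.1:                # mutation
                angle += random.gauss(0, 2.0)
                berm += random.gauss(0, 0.2)
            children.append((min(max(angle, 20), 80), min(max(berm, 0), 5)))
        pop = parents + children
    return min(pop, key=lambda ind: fitness(*ind))

print(evolve())  # best (slope angle, berm width) found
```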

Multi-objective Generative Design Based on Outdoor Environmental Factors: An Educational Complex Design Case Study

  • Kamyar FATEMIFAR;Qinghao ZENG;Ali TAYEFEH-YARAGHBAFHA;Pardis PISHDAD
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.585-594
    • /
    • 2024
  • In recent years, the construction industry has rapidly adopted offsite manufacturing and distributed construction methods. This change brings a variety of challenges requiring innovative solutions, such as the utilization of AI-driven and generative design. Numerous studies have explored the concept of multi-objective generative design with genetic algorithms in construction. However, this paper highlights the challenges and proposes a solution for combining generative design with distributed construction to address the need for agility in design. To achieve this goal, the research develops a multi-objective generative design optimization using a weighted genetic algorithm based on simulated annealing. The specific design case adopted is an educational complex. The proposed process strives for scalable economic viability, environmental comfort, and operational efficiency by optimizing modular configurations of architectural spaces, facilitating affordable, scalable, and optimized construction. The Rhino-Grasshopper and Galapagos design tools are used to create a virtual environment capable of generating architectural configurations within defined boundaries. Optimization factors include adherence to urban regulations, acoustic comfort, and sunlight exposure. A normalized scoring approach is also presented to prioritize design preferences, enabling systematic and data-driven design decision-making. Building Information Modeling (BIM) tools are then used to transform the optimization results into tangible architectural elements and visualize the outcome. The resulting process contributes to both practice and academia: practitioners in the AEC industry can benefit by adopting the process and adapting its features to the unique characteristics of various construction projects, while educators and future researchers can modify and enhance it to meet new requirements.
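
The normalized scoring approach lends itself to a compact illustration. The sketch below shows one plausible way to combine normalized factor scores with preference weights; the factor names, weights, and raw values are illustrative assumptions, not data from the study.

```python
# A minimal sketch of normalized, weighted scoring for ranking generated
# configurations; factor names, weights, and raw values are illustrative.
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

candidates = {  # hypothetical raw simulation outputs per design option
    "option_A": {"sunlight_hours": 5.2, "acoustic_dB": 48, "setback_m": 6.0},
    "option_B": {"sunlight_hours": 6.1, "acoustic_dB": 55, "setback_m": 4.5},
    "option_C": {"sunlight_hours": 4.8, "acoustic_dB": 42, "setback_m": 7.0},
}
weights = {"sunlight_hours": 0.4, "acoustic_dB": 0.3, "setback_m": 0.3}
higher_is_better = {"sunlight_hours": True, "acoustic_dB": False, "setback_m": True}

names = list(candidates)
scores = {n: 0.0 for n in names}
for factor, w in weights.items():
    norm = normalize([candidates[n][factor] for n in names])
    for n, v in zip(names, norm):
        # invert the score for factors where lower raw values are preferred
        scores[n] += w * (v if higher_is_better[factor] else 1.0 - v)

print(max(scores, key=scores.get))  # best-ranked configuration
```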

System-level measurements based force identification (시스템 레벨의 응답을 이용한 가진력 추정)

  • Seung-Hwan Do;Min-Ho Pak;Seunghun Baek
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.5
    • /
    • pp.547-556
    • /
    • 2024
  • To predict the response of dynamic systems through analysis, it is essential to accurately estimate the system's stiffness and apply it to the analytical model. However, directly measuring the stiffness of actual mechanical systems is challenging. Many existing methods involve decomposing the system into components, obtaining the frequency response for each component, and then reassembling them to determine the overall system response. This process can be cumbersome, and variations in coupling conditions between components can increase errors. In this study, a new method is proposed to estimate system stiffness indirectly through experiments without decomposing the system into components. The approach combines response measurements from the entire system with a theoretical model for analysis. It simplifies the stiffness source into a lumped mass model and constructs the equations of motion based on a reduced-order model of the entire system. Subsequently, the stiffness is quantified by calculating the interface forces between the stiffness source and the receiver using vibration measurements obtained at arbitrary positions through experimentation.
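
The core idea, recovering the interface force from system-level response measurements and a reduced-order model, can be illustrated in a few lines. The sketch below uses a hypothetical 2-DOF lumped model and a frequency-domain pseudo-inverse; the paper's actual reduced-order formulation and measurement setup differ.

```python
# Minimal frequency-domain sketch of indirect force identification: measured
# responses x at arbitrary points are combined with a reduced-order model's
# receptance matrix H(w) to recover the interface force, F = pinv(H) x.
# The 2-DOF lumped model below is an illustrative assumption.
import numpy as np

M = np.diag([1.0, 0.5])                       # lumped masses (kg)
K = np.array([[3e4, -1e4], [-1e4, 1e4]])      # stiffness matrix (N/m)
C = 1e-4 * K                                  # proportional damping

def frf(omega):
    # receptance matrix H(w) = (K - w^2 M + j w C)^-1
    return np.linalg.inv(K - omega**2 * M + 1j * omega * C)

omega = 2 * np.pi * 40.0                      # excitation frequency (rad/s)
f_true = np.array([1.0 + 0.0j, 0.0])          # force acts at DOF 0
x_meas = frf(omega) @ f_true                  # "measured" responses

f_est = np.linalg.pinv(frf(omega)) @ x_meas   # identified interface force
print(np.round(f_est, 6))                     # ~ [1, 0]
```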

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS generate enormous amounts of data, and the portion of unstructured data represented as text has grown geometrically. Because it is difficult to examine all of this text, it is important to access the data rapidly and grasp the key points of each document. Driven by this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, called "automatic summarization," have recently been proposed to generate summaries objectively and effectively. However, most text summarization methods proposed to date construct summaries focused on the frequency of contents in the original documents. Such summaries struggle to include low-weight subjects that are mentioned less often in the original text. If a summary covers only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents contain. To avoid this bias, one can summarize with a balance between the topics of a document so that every subject can be ascertained, but an unbalanced distribution among those subjects still remains. To retain subject balance in a summary, it is necessary to consider the proportion of each subject in the original documents and to allocate portions of the summary accordingly, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that secures balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation metrics, "completeness" and "succinctness": completeness means that the summary should fully include the contents of the original documents, and succinctness means that the summary should have minimal internal duplication. The proposed method consists of three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for every topic can be identified, and the subjects of documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; we call these "seed terms." However, these terms are too few to explain each subject adequately, so additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion, finding terms similar to the seed terms: word vectors are created by Word2Vec modeling, and from those vectors the similarity between all terms is derived using cosine similarity, where a higher cosine similarity between two terms indicates a stronger relationship. Terms with high similarity to each subject's seed terms are selected, and after filtering these expanded terms the subject dictionary is finally constructed. The next phase allocates a subject to every sentence in the original documents. To grasp the contents of all sentences, frequency analysis is first conducted on the terms composing the subject dictionaries. The TF-IDF weight of each subject is then calculated, indicating how strongly each sentence relates to each subject. However, TF-IDF weights can grow without bound, so the TF-IDF weights of each sentence's subjects are normalized to values between 0 and 1. Each sentence is then assigned to the subject with the maximum TF-IDF weight, and sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix, and by repeatedly selecting sentences it is possible to generate a summary that fully covers the contents of the original documents while minimizing duplication within the summary itself. To evaluate the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between summaries from the proposed method and frequency-based summaries verified that the proposed method better retains the balance of subjects that the documents originally have.
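
The allocation and selection phases can be sketched compactly. The toy example below uses hypothetical subject dictionaries and a normalized term-overlap score in place of the normalized TF-IDF weight, then allocates the summary budget in proportion to each subject's share; the paper's pipeline additionally builds the dictionaries with topic modeling plus Word2Vec expansion and ranks sentences with Sen2Vec similarity.

```python
# Toy sketch of subject allocation and proportional selection; dictionaries
# and sentences are illustrative, and a normalized term-overlap score stands
# in for the normalized TF-IDF weight used in the paper.
from collections import defaultdict

subject_dict = {  # hypothetical expanded seed-term dictionaries
    "room":    {"room", "bed", "clean", "view"},
    "service": {"staff", "friendly", "helpful", "desk"},
    "food":    {"breakfast", "buffet", "restaurant"},
}
sentences = [
    "The room was clean with a great view.",
    "Staff at the front desk were friendly and helpful.",
    "Breakfast buffet had limited options.",
    "The bed was comfortable and the room quiet.",
]

groups = defaultdict(list)
for s in sentences:
    tokens = set(s.lower().replace(".", "").split())
    # normalized overlap plays the role of the normalized TF-IDF weight
    scores = {subj: len(tokens & terms) / len(terms)
              for subj, terms in subject_dict.items()}
    groups[max(scores, key=scores.get)].append(s)  # assign to strongest subject

budget = 3  # target summary length; allocate sentences proportionally
total = sum(len(g) for g in groups.values())
summary = []
for subj, g in groups.items():
    quota = max(1, round(budget * len(g) / total))  # minor subjects keep >= 1
    summary.extend(g[:quota])
print(summary)
```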

Visual Media Education in Visual Arts Education (미술교육에 있어서 시각적 미디어를 통한 조형교육에 관한 연구)

  • Park Ji-Sook
    • Journal of Science of Art and Design
    • /
    • v.7
    • /
    • pp.64-104
    • /
    • 2005
  • Visual media transmit images and information reproduced in large quantities, such as photography, film, television, video, advertisement, and computer images. To correspond to students' future reception and recognition of culture, arrangements must be made for the field of visual culture studies. 'Visual culture' refers to the cultural phenomena of visual images via visual media, which include not only the categories of traditional arts such as painting, sculpture, print, and design, but also performance arts such as fashion shows and carnival parades, and mass and electronic media such as photography, film, television, video, advertisement, cartoon, animation, and computer images. In the world of visual media, the 'image' functions as an essential medium of communication; therefore, today's culture is called an 'era of image culture', converted from an alphabet-centered era to an image-centered one. The image, via visual media, has become a dominant means of communication in large parts of human life, so we can designate the 'image' as a typical aspect of visual culture today, playing an important role in contemporary society. Digital images are produced in two ways. One is the conversion of an analogue image, such as an actual picture, photograph, or film, into a digital one through digitalization by a digital camera or scanner acting as an analogue/digital converter. The other is processing with computer drawing or the modeling of objects, which is suited to the production of pictorial and surreal images. Digital images produced in the latter way can be divided into the 'pixel' form and the 'vector' form. A vector is a line linking a point of departure to a point of end, which organizes information; the computer stores each line's reference location and the locations of lines relative to one another. The digital image shows far more 'perfectness' than any other visual medium. Digital images have been evolving in diverse directions, such as geometrical or organic image compositing, interactive art, multimedia art, and web art, applying the computer as an extended tool of painting. The digitalized copy, with its endless reproduction of the original, is often interpreted even as an extension of the print. Visual art is no longer a simple activity of representation by a painter or sculptor, but is now intimately associated with the application of media. There are, however, problems with images via visual media. First, the image via media does not reflect reality as it is, but reflects an artificially manipulated world, that is, a virtual reality. Second, the introduction of digital effects and the development of image-processing technology have enhanced the spectacle of destructive and violent scenes. Third, children tend to recognize the interactive images of computer games and virtual reality as reality, or truth. Education needs not only to point out the ill effects of mass media and prevent the younger generation from being damaged by them, but also to offer the knowledge and know-how to cope actively with social and cultural circumstances. Visual media education is one such essential method for contemporary and future human beings amid the overflow of image information. The fostering of 'visual literacy' can be considered the very purpose of visual media education; it is a way to lead individuals, as far as possible, to become discerning, active consumers and producers of visual media in life. The elements of visual literacy can be divided into a faculty of recognition related to visual media, a faculty of critical reception, a faculty of appropriate application, a faculty of active work, and a faculty of creative modeling, all of which are promoted at the same time by visual literacy education. In conclusion, visual literacy education guides students to comprehend and discriminate visual image media carefully, receive them critically, apply them properly, and produce them creatively and voluntarily; moreover, it leads to artistic activity by means of new media. This education can be approached and enhanced through connection and integration with real life. The visual arts and their education play an important role in a digital era dependent on visual communication via image information. Visual media today function as an essential element both in daily life and in the arts. Students can soundly understand today's visual phenomena by means of visual media and apply them as expressive tools of life culture as well. A new recognition and valuation of visual image and media education is required to cultivate the capability to deal actively and uprightly with the changes in the history of civilization. 1) Visual media education helps cultivate a sensibility for images that reacts to and deals with circumstances. 2) It helps students comprehend contemporary arts and culture via new media. 3) It gives students a chance to experience visual modeling by means of new media. 4) It offers educational opportunities with images of temporality and spatiality, thereby fostering more discerning people. 5) Modeling activity via new media keeps students continuously interested in the study and production of plastic arts. 6) It raises the ability of visual communication needed in an image-information society. 7) Education in digital images is also significant for cultivating talent for the future image-information society. To correspond to changing and developing social and cultural circumstances, and to the forms in which students receive and recognize them, visual arts education must establish a field of study of the new visual culture. In addition, a more systematic and active program for visual media education needs to be developed. Educational content should be extended to the media of visual images, that is, photography, film, television, video, computer graphics, animation, music videos, computer games, and multimedia. Each medium must be approached separately, because each maintains its own modes and peculiarities according to the form in which it conveys its message. Concrete and systematic teaching methods and the quality of education must be researched and developed, centering on the development of a course of study. Teachers' foundational teaching capability should be cultivated for visual media education. Here, attention must be paid to the fact that the technological level of media is secondary, because school education does not intend to train expert, skillful producers, but to lay stress on essential aesthetic education with visual media in its social and cultural context, with respect to consumers, including people of culture.


GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is to achieve an ultra-high aspect ratio contact (UHARC) profile without anomalous behaviors such as sidewall bowing and twisting. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have been used commonly with numerous additives to obtain ideal etch profiles. However, they still suffer from formidable challenges, such as tight limits on sidewall bowing and controlling randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies to overcome these process limitations, including novel plasma chemistries and plasma sources. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a Quadrupole Mass Spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray Photoelectron Spectroscopy. To measure plasma parameters, a self-cleaned RF Langmuir probe was used because of the polymer-depositing environment on the probe tip, and the results were double-checked with the cutoff probe, which is known to be a precise diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we proposed a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. With the above studies of the plasma surface reaction, we developed a 3D topography simulator using a multi-layer level-set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required to treat these ballistic transports realistically. To address this issue, the related computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time could be improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.
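
The ballistic transport step that dominates the computation can be illustrated with a toy Monte Carlo, shown below in plain NumPy (the paper parallelizes the equivalent computation on a GPU). The hole geometry and ion angular distribution are illustrative assumptions.

```python
# A minimal 2D Monte Carlo sketch of ballistic ion transport into a contact
# hole; geometry and angular spread are illustrative, and the real simulator
# couples this flux to a level-set surface evolution on a GPU.
import numpy as np

rng = np.random.default_rng(0)
n_ions = 100_000
hole_width, hole_depth = 1.0, 20.0            # aspect ratio 20 (arbitrary units)

x0 = rng.uniform(0.0, hole_width, n_ions)     # entry position across the opening
theta = rng.normal(0.0, 0.03, n_ions)         # near-vertical ions, narrow spread

# straight-line (ballistic) flight: lateral position on reaching the bottom
x_bottom = x0 + hole_depth * np.tan(theta)
reached = (x_bottom >= 0.0) & (x_bottom <= hole_width)  # others hit a sidewall

print(f"fraction of ions reaching the bottom: {reached.mean():.3f}")
```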


Closed Integral Form Expansion for the Highly Efficient Analysis of Fiber Raman Amplifier (라만증폭기의 효율적인 성능분석을 위한 라만방정식의 적분형 전개와 수치해석 알고리즘)

  • Choi, Lark-Kwon;Park, Jae-Hyoung;Kim, Pil-Han;Park, Jong-Han;Park, Nam-Kyoo
    • Korean Journal of Optics and Photonics
    • /
    • v.16 no.3
    • /
    • pp.182-190
    • /
    • 2005
  • The fiber Raman amplifier (FRA) is a distinctly advantageous technology. Owing to its wide, flexible gain bandwidth and intrinsically low noise characteristics, the FRA has become an indispensable technology today. Various FRA modeling methods, with different levels of convergence speed and accuracy, have been proposed in order to gain valuable insights into FRA dynamics and optimum design before real implementation. Still, all of these approaches share the common platform of coupled ordinary differential equations (ODEs) for the Raman equation set, which must be solved along the long fiber propagation axis. The ODE platform has classically set the bar for achievable convergence speed, resulting in exhaustive calculation efforts. In this work, we propose an alternative, highly efficient framework for FRA analysis. Treating the Raman gain as a perturbation factor in an adiabatic process, we implement the algorithm by deriving a recursive relation for the integrals of power inside the fiber in terms of the effective length, and by constructing a matrix formalism for the solution of the given FRA problem. Finally, by adiabatically turning on the Raman process in the fiber as the order of iterations increases, the FRA solution can be obtained along the iteration axis for the whole length of fiber, rather than along the fiber propagation axis, enabling faster convergence at an accuracy equivalent to that of methods based on coupled ODEs. Performance comparisons in co-, counter-, and bi-directionally pumped multi-channel FRAs show convergence more than 10^2 times faster than the average power method at the same level of accuracy (relative deviation < 0.03 dB).
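
A minimal sketch of the integral-form idea is shown below for a single co-propagating pump and signal: each wave's power is written as an exponential of cumulative integrals of the other wave's power, and the pair is iterated until convergence, turning the Raman coupling on along the iteration axis. The fiber parameters are illustrative assumptions, and the paper's full matrix formalism for multi-channel, multi-direction pumping is not reproduced.

```python
# Schematic iteration in closed integral form for one pump and one signal,
# co-pumped; parameter values are illustrative, not from the paper.
import numpy as np

def cumint(f, z):
    # cumulative trapezoidal integral of f along z
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(z))))

L = 20e3                                  # fiber length (m)
z = np.linspace(0.0, L, 2001)
alpha_s, alpha_p = 0.046e-3, 0.058e-3     # ~0.2 / 0.25 dB/km in 1/m
g = 0.4e-3                                # Raman gain efficiency (1/(W*m))
ratio = 1550.0 / 1455.0                   # photon-energy ratio (pump ~1455 nm, signal ~1550 nm)
Pp0, Ps0 = 0.5, 1e-3                      # launch powers (W)

Pp = Pp0 * np.exp(-alpha_p * z)           # iteration 0: undepleted pump
Ps = Ps0 * np.exp(-alpha_s * z)
for _ in range(20):                       # Raman coupling turned on iteratively
    Ps_new = Ps0 * np.exp(-alpha_s * z + g * cumint(Pp, z))
    Pp_new = Pp0 * np.exp(-alpha_p * z - ratio * g * cumint(Ps, z))
    converged = (np.max(np.abs(Ps_new - Ps)) < 1e-9
                 and np.max(np.abs(Pp_new - Pp)) < 1e-9)
    Ps, Pp = Ps_new, Pp_new
    if converged:
        break

gain_dB = 10 * np.log10(Ps[-1] / (Ps0 * np.exp(-alpha_s * L)))
print(f"on-off Raman gain: {gain_dB:.2f} dB")
```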

Real data-based active sonar signal synthesis method (실데이터 기반 능동 소나 신호 합성 방법론)

  • Yunsu Kim;Juho Kim;Jongwon Seok;Jungpyo Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.1
    • /
    • pp.9-18
    • /
    • 2024
  • The importance of active sonar systems is growing due to the quietness of underwater targets and the increase in ambient noise caused by growing maritime traffic. However, the low signal-to-noise ratio of the echo signal, due to multipath propagation, various clutter, ambient noise, and reverberation, makes it difficult to identify underwater targets using active sonar. Attempts have been made to apply data-based methods such as machine learning and deep learning to improve the performance of underwater target recognition systems, but it is difficult to collect enough training data because of the nature of sonar datasets. Methods based on mathematical modeling have mainly been used to compensate for insufficient active sonar data; however, such methodologies have limitations in accurately simulating complex underwater phenomena. Therefore, in this paper, we propose a sonar signal synthesis method based on a deep neural network. To apply the neural network to sonar signal synthesis, the proposed method appropriately adapts the attention-based encoder and decoder, the main modules of the Tacotron model widely used in speech synthesis, to sonar signals. A signal more similar to the actual signal can be synthesized by training the proposed model on a dataset collected by deploying a simulated target in a real marine environment. To verify the performance of the proposed method, a perceptual evaluation of audio quality test was conducted, and the score difference from the actual signal remained within -2.3 in a total of four different environments. These results demonstrate that the active sonar signal generated by the proposed method approximates the actual signal.
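
The adapted architecture can be sketched as a small attention-based encoder-decoder. The PyTorch toy below generates spectrogram-like frames autoregressively from a conditioning sequence; all dimensions and the conditioning input are illustrative assumptions, and the paper's sonar-specific modifications to Tacotron are not reproduced.

```python
# Toy Tacotron-style attention encoder-decoder; shapes and conditioning are
# illustrative assumptions, not the paper's actual model.
import torch
import torch.nn as nn

class AttnSeq2Seq(nn.Module):
    def __init__(self, in_dim=8, hid=128, out_dim=80):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hid, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hid, num_heads=4, batch_first=True)
        self.decoder = nn.LSTMCell(2 * hid + out_dim, 2 * hid)
        self.proj = nn.Linear(2 * hid, out_dim)

    def forward(self, cond, n_frames):
        memory, _ = self.encoder(cond)                 # (B, T_in, 2*hid)
        B = cond.size(0)
        h = torch.zeros(B, memory.size(-1))
        c = torch.zeros_like(h)
        frame = torch.zeros(B, self.proj.out_features) # "go" frame
        outputs = []
        for _ in range(n_frames):                      # autoregressive decoding
            query = h.unsqueeze(1)                     # (B, 1, 2*hid)
            ctx, _ = self.attn(query, memory, memory)  # attend over encoder states
            h, c = self.decoder(torch.cat([ctx.squeeze(1), frame], dim=-1), (h, c))
            frame = self.proj(h)                       # next spectrogram-like frame
            outputs.append(frame)
        return torch.stack(outputs, dim=1)             # (B, n_frames, out_dim)

model = AttnSeq2Seq()
spec = model(torch.randn(2, 16, 8), n_frames=50)       # toy conditioning sequence
print(spec.shape)                                      # torch.Size([2, 50, 80])
```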

Measuring Consumer Preferences Using Multi-Attribute Utility Theory (다속성 효용이론을 활용한 소비자 선호조사)

  • Ahn, Jae-Hyeon;Bang, Young-Sok;Han, Sang-Pil
    • Asia pacific journal of information systems
    • /
    • v.18 no.3
    • /
    • pp.1-20
    • /
    • 2008
  • Based on the multi-attribute utility theory (MAUT), we present a survey method to measure consumer preferences. Multi-attribute utility theory has been used for decision-making in the OR/MS field; here, we show that the method can also be used effectively to estimate the demand for new services by measuring individual-level utility functions. Because the conjoint method has been widely used to measure consumer preferences for new products and services, we compare the pros and cons of the two preference survey methods. Further, we illustrate how the swing weighting method can be effectively used to elicit customer preferences, especially for new telecommunications services. Multi-attribute utility theory is a compositional approach for modeling customer preference, in which researchers calculate overall service utility by summing up the evaluation results for each attribute. In contrast, the conjoint method is a decompositional approach, which requires holistic evaluations of profiles: part-worths for each attribute are derived or estimated from those evaluations, and consumer preferences for each profile are then calculated. However, if the profiles are quite new and unfamiliar to the survey respondents, they will find it very difficult to evaluate the profiles accurately. We believe the MAUT-based survey method is more appropriate than the conjoint method, because respondents only need to assess attribute-level preferences rather than make holistic assessments. Among the many weight assessment methods in multi-attribute utility theory, we chose the swing weighting method because it is designed to be simple and fast. As illustrated in Clemen and Reilly (2001), the first step in assessing swing weights is to create the worst possible outcome as a benchmark by setting the worst level on each of the attributes. Each of the succeeding rows then "swings" one of the attributes from worst to best. Upon constructing the swing table, respondents rank-order the outcomes (rows). The next step is to rate the outcomes: the rating for the benchmark is set to 0, the rating for the best outcome to 100, and the ratings for the other outcomes are set in the range between 0 and 100. To calculate the weight for each attribute, the ratings are normalized by the total sum of all ratings. To demonstrate the applicability of the approach, we elicited and analyzed individual-level customer preferences for the new telecommunication services WiBro and HSDPA. We began with 800 randomly selected interviewees and reduced the sample to 432 because the remainder did not show strong intention to subscribe to the new telecommunications services. For each combination of content and handset, the numbers of responses favoring WiBro and HSDPA were counted. It was assumed that an interviewee favors a specific service when its expected utility is greater than that of the competing service(s). The market share of each service was then calculated by normalizing the total number of responses that preferred each service. Holistic evaluation of new and unfamiliar services is a tough challenge for survey respondents; we have developed a simple and easy method to assess individual-level preference by estimating the weight of each attribute, applying the swing method for this purpose. We believe that estimating individual-level preferences can be used flexibly to predict the market performance of new services in many different business environments.
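
The swing-weighting arithmetic described above reduces to a short calculation. The sketch below uses hypothetical attributes and ratings to show how normalized ratings become weights and how a profile's overall utility is composed.

```python
# Worked sketch of swing weighting: the benchmark (all-worst) outcome is
# rated 0, the most important swing 100, and normalized ratings become
# attribute weights.  Attributes and ratings are illustrative, not survey data.
ratings = {              # hypothetical respondent ratings of each "swing"
    "benchmark": 0,      # all attributes at their worst level
    "price":     100,    # swinging price worst->best matters most here
    "speed":     70,
    "coverage":  40,
}
total = sum(v for k, v in ratings.items() if k != "benchmark")
weights = {k: v / total for k, v in ratings.items() if k != "benchmark"}
print(weights)           # {'price': 0.476..., 'speed': 0.333..., 'coverage': 0.190...}

# Overall utility of a service profile: weighted sum of single-attribute
# utilities (each scaled 0..1), the compositional step of MAUT.
profile_utils = {"price": 0.6, "speed": 0.9, "coverage": 0.5}   # hypothetical
overall = sum(weights[a] * u for a, u in profile_utils.items())
print(round(overall, 3))
```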

Smart Ship Container With M2M Technology (M2M 기술을 이용한 스마트 선박 컨테이너)

  • Sharma, Ronesh;Lee, Seong Ro
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.3
    • /
    • pp.278-287
    • /
    • 2013
  • Modern information technologies continue to provide industries with new and improved methods. With the rapid development of Machine-to-Machine (M2M) communication, a smart container supply chain management system is formed based on high-performance sensors, computer vision, Global Positioning System (GPS) satellites, and Global System for Mobile (GSM) communication. Existing supply chain management has limitations in real-time container tracking. This paper focuses on the study and implementation of real-time container supply chain management, with the development of a container identification system and an automatic alert system for interrupts and for normal periodic alerts. The concept and methods of smart container modeling are introduced, and the structure is explained, prior to the implementation of the smart container tracking alert system. First, the paper introduces the container code identification and recognition algorithm, implemented in Visual Studio 2010 with OpenCV (a computer vision library) and Tesseract (an OCR engine) for real-time operation. Second, it discusses the current automatic alert systems provided for real-time container tracking and their limitations. Finally, the paper summarizes the challenges and possibilities for future work on real-time container tracking solutions using ubiquitous mobile and satellite networks together with high-performance sensors and computer vision. All of these components combine to provide excellent supply chain management with outstanding operation and security.
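
The container-code recognition step can be sketched with the same libraries the paper names. The snippet below (using the pytesseract wrapper around Tesseract) shows one plausible preprocessing-plus-OCR pass; the file name, thresholding choice, and Tesseract options are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal container-code OCR sketch with OpenCV + Tesseract (via pytesseract);
# the input file and preprocessing choices are illustrative assumptions.
import cv2
import pytesseract

img = cv2.imread("container.jpg")                       # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# binarize so the stencilled owner/serial code stands out for the OCR engine
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# ISO 6346 codes use uppercase letters and digits, so restrict the character set
config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
code = pytesseract.image_to_string(binary, config=config).strip()
print(code)                                             # e.g. "MSKU1234565"
```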