• Title/Summary/Keyword: large integration time-step

Application of computer methods in music composition using smart nanobeams

  • Ying Shi;Maryam Shokravi;X. Chen
    • Advances in nano research
    • /
    • v.17 no.3
    • /
    • pp.285-291
    • /
    • 2024
  • The paper considers one of the new applications of computer methods in music composition: smart nanobeams, an integration of advanced computational techniques with specially designed materials for enhanced performance in music composition. The research exploits particular properties of smart nanobeams embedded with piezoelectric materials that modulate and control sound vibrations in real time. Using numerical simulations and optimization algorithms, the study determines how changes in nanobeam length and thickness and in the applied voltage affect the acoustical properties and tone quality of musical instruments. The governing equations of the nanobeam system are derived by means of piezo-elasticity theory and solved numerically to predict the dynamic behavior of the system under different conditions. Results show that manipulating these parameters allows great control over the pitch, timbre, and resonance of the instrument; such a system offers new ways for composers and performers to create music. The research also validates the computational model against available theoretical data, confirming its accuracy and potential applications. The work thus marks a significant step toward the intersection of music composition and smart-material technology; with further development, smart nanobeams could revolutionize how music is composed and performed on such instruments.
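
To make the parameter study concrete, here is a minimal sketch of how length, thickness, and applied voltage shift a beam's fundamental frequency, assuming a simply supported Euler-Bernoulli beam with a voltage-induced axial preload N ≈ e31·b·V. The material constants, the preload model, and all names are illustrative assumptions, not the paper's piezo-elasticity formulation:

```python
import numpy as np

def fundamental_frequency(L, b, h, E, rho, e31, V):
    """Fundamental frequency (Hz) of a simply supported piezoelectric beam
    under a voltage-induced axial force (assumed model: N ~ e31 * b * V)."""
    A = b * h                      # cross-section area
    I = b * h**3 / 12.0            # second moment of area
    N = e31 * b * V                # axial preload from the applied voltage
    # Euler-Bernoulli beam with axial load; a compressive N lowers the pitch,
    # and the beam buckles when the bracketed term reaches -1 (Euler load).
    omega1 = (np.pi / L)**2 * np.sqrt(E * I / (rho * A)) \
             * np.sqrt(1.0 + N * L**2 / (np.pi**2 * E * I))
    return omega1 / (2.0 * np.pi)

# Sweep length and voltage to see their effect on pitch (illustrative values).
for L in (50e-9, 100e-9):
    for V in (-0.2, 0.0, 0.2):
        f = fundamental_frequency(L, b=10e-9, h=5e-9, E=169e9,
                                  rho=2330.0, e31=-4.1, V=V)
        print(f"L={L*1e9:5.0f} nm  V={V:+.1f} V  f1={f/1e9:8.3f} GHz")
```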

Determination of the Critical Buckling Loads of Shallow Arches Using Nonlinear Analysis of Motion (비선형 운동해석에 의한 낮은 아치의 동적 임계좌굴하중의 결정)

  • Kim, Yun Tae;Huh, Taik Nyung;Kim, Moon Kyum;Hwang, Hak Joo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.12 no.2
    • /
    • pp.43-54
    • /
    • 1992
  • For shallow arches under large dynamic loading, linear analysis is neither practical nor accurate. In this study, a method is presented for the dynamic analysis of shallow arches in which geometric nonlinearity must be considered. A program is developed for analyzing the nonlinear dynamic behavior and for evaluating the critical buckling loads of shallow arches. Geometric nonlinearity is modeled using a Lagrangian description of the motion. The finite element procedure is used to solve the dynamic equation of motion, and the Newmark method is adopted for the time integration. A shallow arch subjected to radial step loads is analyzed, and the results are compared with those of other studies to verify the developed program. The behavior of arches is analyzed using non-dimensional time, load, and shape parameters. It is shown that geometric nonlinearity should be considered in the analysis of shallow arches and that the probability of buckling failure increases as arches become shallower. It is confirmed that arches with the same shape parameter have the same deflection ratio at the same time parameter when loaded with the same parametric load. In addition, it is shown that buckling of arches with the same shape parameter occurs at the same load parameter. Circular arches under a single or uniform normal load are analyzed for comparison, and a parabolic arch under radial step load is also analyzed. It is verified that the developed program is applicable to these problems.
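
Since the abstract names the Newmark method for time integration, here is a minimal linear sketch of it (constant-average-acceleration variant, beta = 1/4, gamma = 1/2, which is unconditionally stable for linear problems and hence tolerates a large integration time step). The paper's program additionally iterates within each step for geometric nonlinearity; this sketch omits that:

```python
import numpy as np

def newmark_linear(M, C, K, F, u0, v0, dt, beta=0.25, gamma=0.5):
    """Newmark time integration for M u'' + C u' + K u = F(t).
    F has one column per time step; returns displacement, velocity,
    acceleration histories."""
    n, nt = len(u0), F.shape[1]
    u = np.zeros((n, nt)); v = np.zeros((n, nt)); a = np.zeros((n, nt))
    u[:, 0], v[:, 0] = u0, v0
    a[:, 0] = np.linalg.solve(M, F[:, 0] - C @ v0 - K @ u0)
    # Effective stiffness is assembled once for a constant dt.
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    for i in range(nt - 1):
        rhs = (F[:, i + 1]
               + M @ (u[:, i] / (beta * dt**2) + v[:, i] / (beta * dt)
                      + (0.5 / beta - 1.0) * a[:, i])
               + C @ (gamma / (beta * dt) * u[:, i]
                      + (gamma / beta - 1.0) * v[:, i]
                      + dt * (0.5 * gamma / beta - 1.0) * a[:, i]))
        u[:, i + 1] = np.linalg.solve(Keff, rhs)
        a[:, i + 1] = ((u[:, i + 1] - u[:, i]) / (beta * dt**2)
                       - v[:, i] / (beta * dt) - (0.5 / beta - 1.0) * a[:, i])
        v[:, i + 1] = v[:, i] + dt * ((1 - gamma) * a[:, i] + gamma * a[:, i + 1])
    return u, v, a

# Single-DOF step load: the static (steady) response is F/K = 1.0.
M = np.array([[1.0]]); C = np.array([[0.1]]); K = np.array([[4.0]])
F = np.full((1, 2000), 4.0)
u, v, a = newmark_linear(M, C, K, F, np.zeros(1), np.zeros(1), dt=0.05)
print(u[0, -1])   # settles near 1.0
```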

Validation of underwater explosion response analysis for airbag inflator using a fluid-structure interaction algorithm

  • Lee, Sang-Gab;Lee, Jae-Seok;Chung, Hyun;Na, Yangsup;Park, Kyung-Hoon
    • International Journal of Naval Architecture and Ocean Engineering
    • /
    • v.12 no.1
    • /
    • pp.988-995
    • /
    • 2020
  • Air gun shock systems are commonly used as alternative explosion energy sources for underwater explosion (UNDEX) shock tests owing to their low cost and low environmental impact. The airbag inflator of automotive airbag systems is also very useful for generating extremely rapid underwater gas release in lab-scale tests. To overcome the restriction to a very small computational time step imposed by the very fine fluid mesh around the nozzle hole in the explicit integration algorithm, and also the absence of a commercial solver and software for the gas UNDEX of an airbag inflator, an idealized airbag inflator and fluid mesh modeling technique was developed using nozzle holes of relatively large size and several small TNT charges in place of the gas inside the airbag inflator. The objective of this study is to validate the results of an UNDEX response analysis of one and two idealized airbag inflators by comparison with the results of shock tests in a small water tank. The comparison was performed using the multi-material Arbitrary Lagrangian-Eulerian formulation and a fluid-structure interaction algorithm. The number, size, vertical distance from the nozzle outlet, detonation velocity, and lighting times of the small TNT charges were determined. Through mesh-size convergence tests, the UNDEX response analysis and the idealized airbag inflator modeling were validated.
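
The "very small computational time step" the abstract attributes to the fine fluid mesh is the usual Courant-type stability limit of explicit integration. A minimal sketch of the estimate, assuming an acoustic wave-speed bound c = sqrt(K/rho); production hydrocodes refine this per element type:

```python
import math

def explicit_stable_dt(min_element_size, bulk_modulus, density, safety=0.9):
    """Courant-type stable time-step estimate for explicit integration:
    dt <= safety * l_min / c, with acoustic speed c = sqrt(K / rho)."""
    c = math.sqrt(bulk_modulus / density)
    return safety * min_element_size / c

# Water: K ~ 2.2 GPa, rho ~ 1000 kg/m^3  ->  c ~ 1483 m/s.
# Refining the mesh near the nozzle by 100x shrinks the step by 100x,
# which is why the study enlarges the nozzle holes in the idealized model.
for l_min in (1e-2, 1e-3, 1e-4):
    dt = explicit_stable_dt(l_min, 2.2e9, 1000.0)
    print(f"l_min = {l_min:6.0e} m  ->  dt <= {dt:.2e} s")
```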

A Study on Performance Improvement and Development of Integrity Verification Software of TCP/IP output data of VCS Correlation Block (VCS 상관블록의 TCP/IP 출력데이터의 무결성 검사 소프트웨어의 개발과 성능개선에 관한 연구)

  • Yeom, Jae-Hwan;Roh, Duk-Gyoo;Oh, Chung-Sik;Jung, Jin-Seung;Chung, Dong-Kyu;Oh, Se-Jin
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.13 no.4
    • /
    • pp.211-219
    • /
    • 2012
  • In this paper, we describe the development of software for verifying the integrity of the TCP/IP output data of the VLBI Correlation Subsystem (VCS) correlation block and propose a performance improvement method to prevent loss of the correlation output. The VCS correlation results are saved to the data archive system through TCP/IP packet transmission. The integrity verification software was developed to confirm the integrity of the correlation results saved at the data archive system using the TCP/IP packet information of the VCS. A 3-step integrity verification process is proposed using the developed software, and its effectiveness was confirmed through correlation experiments. TCP/IP packet transmission must be completed within the minimum integration period; however, the large number of packets and the volume of data produced during a short integration time cause not only TCP/IP packet loss but also integrity problems in the correlation results. In this paper, the cause of the TCP/IP packet loss is analyzed, and modifications to the FPGA (Field Programmable Gate Array) design of the VCS are proposed to solve the integrity problem of the correlation results.
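
As an illustration of the kind of 3-step check the abstract describes (parse packet headers, detect loss from sequence gaps, verify payload checksums), here is a minimal sketch. The packet layout, field sizes, and use of CRC32 are hypothetical; the actual VCS packet format is not given in the abstract:

```python
import struct
import zlib

# Hypothetical layout: 4-byte sequence number, 4-byte payload length,
# payload bytes, then 4-byte CRC32 of the payload.
HEADER = struct.Struct(">II")

def make_packet(seq, payload):
    return (HEADER.pack(seq, len(payload)) + payload
            + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF))

def verify_stream(packets):
    """Return (lost_sequence_numbers, corrupt_sequence_numbers)."""
    expected, lost, corrupt = None, [], []
    for raw in packets:
        seq, length = HEADER.unpack_from(raw, 0)          # step 1: parse header
        payload = raw[HEADER.size:HEADER.size + length]
        (crc,) = struct.unpack_from(">I", raw, HEADER.size + length)
        if expected is not None and seq != expected:
            lost.extend(range(expected, seq))             # step 2: gap => dropped
        if zlib.crc32(payload) & 0xFFFFFFFF != crc:
            corrupt.append(seq)                           # step 3: payload damaged
        expected = seq + 1
    return lost, corrupt

pkts = [make_packet(i, b"corr-data-%d" % i) for i in (0, 1, 3)]  # packet 2 dropped
print(verify_stream(pkts))   # -> ([2], [])
```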

Developing Experimental Method of Real-time Data Transfer and Imaging using Astronomical Observations for Scientific Inquiry Activities (과학탐구활동을 위한 천문 관측 자료의 실시간 전송 및 영상 구현 실험 방법 개발)

  • Kim, Soon-Wook
    • Journal of the Korean earth science society
    • /
    • v.33 no.2
    • /
    • pp.183-199
    • /
    • 2012
  • Previous Earth Science textbooks have mostly lacked the latest astronomical phenomena frequently reported in mass media such as popular science magazines. One of the main directions in the revision of the 2009 National Curriculum of Korea is to actively include those phenomena. Furthermore, despite the close link between astronomy and physics, the concepts of modern physics have not been actively introduced in Earth Science textbooks, and the linkage of physics to astronomy has rarely been studied in physics textbooks. The concept of integration among different fields of science is therefore emphasized in the new National Curriculum; transient phenomena in high-energy astrophysical objects are examples that reflect this issue. The purpose of this study is to introduce real-time data transfer and imaging of astronomical observations using e-Science. As a first step, we performed the first experiment on large astronomical data transfer between Korea and Japan using KOREN, a National Research and Education Test Network. We introduce actively ongoing fields of e-Science in observational astronomy and astrophysics and their close interrelationship with scientific inquiry activities and public outreach activities. We discuss the scientific and educational aspects of our experiment as an early e-Science activity in the Korean astronomical community and, in turn, provide a prospective view of its application to scientific inquiry activities and public outreach in the upcoming commercial Gbps-level internet environment.

Pre/Post processor for structural analysis simulation integration with open source solver (Calculix, Code_Aster) (오픈소스 솔버(Calculix, Code_Aster)를 통합한 구조해석 시뮬레이션 전·후처리기 개발)

  • Seo, Dong-Woo;Kim, Jae-Sung;Kim, Myung-Il
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.9
    • /
    • pp.425-435
    • /
    • 2017
  • Structural analysis is a necessary procedure not only for large enterprises but also for small and medium-sized ones, both for strengthening the certification process for product delivery and for shortening the time from concept design to detailed design. Open-source solvers, which can be used at low cost, differ from commercial solvers: if there is a problem with the input data, such as the grid, errors or failures can occur in the calculation step. In this paper, we propose a pre- and post-processor that can be easily applied to the analysis of mechanical structural problems using existing open-source structural analysis solvers (Calculix, Code_Aster). In particular, we propose algorithms for analyzing the different data types used by the open-source solvers in order to extract and generate accurate information such as 3D models, grids, and simulation conditions, and we develop and apply this information analysis. In addition, to improve the accuracy of the open-source solvers and to prevent errors, we generate grids that match the solver characteristics and develop an automatic healing function for the grid model. Finally, to verify the accuracy of the system, the verification and utilization results are compared with those of the software in use.
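
A minimal sketch of how such a pre/post-processor might drive the CalculiX solver externally, assuming the common `ccx` command-line convention (job name passed without the `.inp` extension, results written to `.frd`); the executable name and error reporting may differ per installation, and the paper's mesh-healing and validation steps are not shown:

```python
import subprocess
from pathlib import Path

def run_calculix(job: Path, ccx: str = "ccx") -> Path:
    """Run the CalculiX solver on `job`.inp and return the .frd result path.
    Sketch only: the paper's tool heals the grid and validates the input
    deck before reaching this step."""
    if not job.with_suffix(".inp").exists():
        raise FileNotFoundError(f"missing input deck: {job}.inp")
    # CalculiX conventionally takes the job name without the extension.
    result = subprocess.run([ccx, str(job)], capture_output=True, text=True)
    if result.returncode != 0 or "*ERROR" in result.stdout:
        # Grid/input problems surface here instead of failing downstream.
        raise RuntimeError(f"solver failed:\n{result.stdout[-2000:]}")
    return job.with_suffix(".frd")

# Example (assumes bracket_static.inp exists in the working directory):
# frd = run_calculix(Path("bracket_static"))
```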

A Semantic-Based Mashup Development Tool Supporting Various Open API Types (다양한 Open API 타입들을 지원하는 시맨틱 기반 매쉬업 개발 툴)

  • Lee, Yong-Ju
    • Journal of Internet Computing and Services
    • /
    • v.13 no.3
    • /
    • pp.115-126
    • /
    • 2012
  • Mashups have become very popular over the last few years, and their use also varies across IT convergence services. In spite of their popularity, there are several challenging issues in combining Open APIs into mashups. First, since portal sites may have a large number of APIs available for mashups, manually searching for and finding compatible APIs can be a tedious and time-consuming task. Second, none of the existing portal sites provides a way to leverage the semantic techniques that have been developed to assist users in locating and integrating APIs, like those seen in traditional SOAP-based web services. Third, even after suitable APIs have been discovered, integrating them requires in-depth programming knowledge. To solve these issues, we first show that existing techniques and algorithms used for finding and matching SOAP-based web services can be reused with only minor changes. Next, we show how the characteristics of APIs can be syntactically defined and semantically described, and how these syntactic and semantic descriptions aid the discovery and composition of Open APIs. Finally, we propose a goal-directed interactive approach for the dynamic composition of APIs, in which the final mashup is generated gradually by forward chaining of APIs: at each step, a new API is added to the composition.
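
A minimal sketch of goal-directed forward chaining over Open APIs, where each API is reduced to the semantic types it consumes and produces. The API names and types are hypothetical, and the paper's tool uses richer semantic matching than the exact set containment shown here:

```python
# Each API: (input types it requires, output types it produces).
APIS = {
    "geocode":     ({"address"}, {"lat", "lon"}),
    "weather":     ({"lat", "lon"}, {"forecast"}),
    "map_overlay": ({"lat", "lon", "forecast"}, {"weather_map"}),
}

def compose(available, goal):
    """Forward chaining: repeatedly fire any API whose inputs are already
    available until the goal type is produced; return the API plan."""
    plan, facts = [], set(available)
    while goal not in facts:
        step = next((name for name, (ins, outs) in APIS.items()
                     if name not in plan and ins <= facts), None)
        if step is None:
            return None              # no API can fire: composition fails
        plan.append(step)
        facts |= APIS[step][1]       # chain the API's outputs forward
    return plan

print(compose({"address"}, "weather_map"))
# -> ['geocode', 'weather', 'map_overlay']
```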

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS); it stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format "IF A and B and C and D THEN Do E and Do F"; with this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format "IF A and B THEN Do E" using the same datapath; with this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment; high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach of the kind developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so modifying a microprocessor to create an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time of 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference, and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences  125 s                  49 s                        0.0038 s
    1 inference      20.8 ms                8.2 ms                      6.4 µs
    FLIPS            48                     122                         156,250
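
For concreteness, here is a minimal sketch of the max-min compositional (Mamdani) inference the chips implement, with each fuzzy set stored as a 64-element array as in Table I; the membership functions and inputs are illustrative, not the chip's rule set. The inner min/max operations are exactly what the proposed R3000 min/max instructions would accelerate:

```python
import numpy as np

# Universe discretized into 64 points, matching the chip's fuzzy sets.
x = np.linspace(0.0, 1.0, 64)

def tri(a, b, c):
    """Triangular membership function with peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Two rules: IF A1 and B1 THEN E1;  IF A2 and B2 THEN E2 (illustrative sets).
A = [tri(0.0, 0.2, 0.5), tri(0.3, 0.6, 0.9)]
B = [tri(0.1, 0.3, 0.6), tri(0.4, 0.7, 1.0)]
E = [tri(0.0, 0.25, 0.5), tri(0.5, 0.75, 1.0)]

def mamdani(a_in, b_in):
    """Max-min inference: rule strength = min of antecedent memberships
    at the crisp inputs; consequents are clipped (min) by strength and
    aggregated with max; a centroid yields the crisp output."""
    i = np.searchsorted(x, [a_in, b_in])          # grid indices of the inputs
    agg = np.zeros_like(x)
    for Ai, Bi, Ei in zip(A, B, E):
        w = min(Ai[i[0]], Bi[i[1]])               # min = fuzzy AND
        agg = np.maximum(agg, np.minimum(w, Ei))  # max = aggregation
    return np.sum(agg * x) / np.sum(agg)          # centroid defuzzification

print(f"output = {mamdani(0.25, 0.35):.3f}")
```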

An Examination of Knowledge Sourcing Strategies Effects on Corporate Performance in Small Enterprises (소규모 기업에 있어서 지식소싱 전략이 기업성과에 미치는 영향 고찰)

  • Choi, Byoung-Gu
    • Asia pacific journal of information systems
    • /
    • v.18 no.4
    • /
    • pp.57-81
    • /
    • 2008
  • Knowledge is an essential strategic weapon for sustaining competitive advantage and a key determinant of organizational growth. When knowledge is shared and disseminated throughout the organization, it increases an organization's value by providing the ability to respond to new and unusual situations. The growing importance of knowledge as a critical resource has forced executives to pay attention to their organizational knowledge. Organizations are increasingly undertaking knowledge management initiatives and making significant investments. Knowledge sourcing is considered the first important step in effective knowledge management, and most firms make an effort to realize the benefits of knowledge management by using various knowledge sources effectively. Appropriate knowledge sourcing strategies enable organizations to create, acquire, and access knowledge in a timely manner by reducing search and transfer costs, which results in better firm performance. In response, the knowledge management literature has devoted substantial attention to the analysis of knowledge sourcing strategies. Many studies have categorized knowledge sourcing strategies as internal- or external-oriented. An internal-oriented sourcing strategy attempts to increase firm performance by integrating knowledge within the boundary of the firm. By contrast, an external-oriented strategy attempts to bring knowledge in from outside sources, via either acquisition or imitation, and then to transfer that knowledge across the organization. However, the extant literature on knowledge sourcing strategies focuses primarily on large organizations. Although many studies have clearly highlighted major differences between large and small firms and the need to adopt different strategies for different firm sizes, scant attention has been given to how knowledge sourcing strategies affect firm performance in small firms and to the differences between small and large firms in the patterns of knowledge sourcing strategy adoption. This study attempts to advance the current literature by examining the impact of knowledge sourcing strategies on small firm performance from a holistic perspective. Drawing on knowledge-based theory from organization science and complementarity theory from the economics literature, this paper is motivated by the following questions: (1) what are the adoption patterns of different knowledge sourcing strategies in small firms (i.e., which sourcing strategies should be adopted, and which work well together in small firms)?; and (2) what are the performance implications of these adoption patterns? To answer these questions, this study developed three hypotheses. The first, based on knowledge-based theory, is that internal-oriented knowledge sourcing is positively associated with small firm performance. The second, also based on knowledge-based theory, is that external-oriented knowledge sourcing is positively associated with small firm performance. The third, based on complementarity theory, is that pursuing both internal- and external-oriented knowledge sourcing simultaneously is negatively, or less positively, associated with small firm performance. As a sampling frame, 700 firms were identified from the Annual Corporation Report in Korea. Survey questionnaires were mailed to the owners or executives most knowledgeable about each firm's knowledge sourcing strategies and performance.
A total of 188 companies replied, yielding a response rate of 26.8%. Due to incomplete data, 12 responses were eliminated, leaving 176 responses for the final analysis. Since all independent variables were measured as continuous variables, a supermodularity function was used to test the hypotheses based on the cross-partial derivative of the payoff function. The results indicated no significant impact of the internal-oriented sourcing strategy but a positive impact of the external-oriented sourcing strategy on small firm performance. This intriguing result can be explained by the various resource and capital constraints of small firms: they typically have restricted financial and human resources and do not have enough assets to always develop knowledge internally. Another possible explanation is competency traps or core rigidities: building up a knowledge base from internal knowledge creates core competences, but excessive internally focused knowledge exploration leads to behaviors blind to other knowledge. Interestingly, this study found that internal- and external-oriented knowledge sourcing strategies had a substitutive relationship, which is inconsistent with previous studies that suggested a complementary relationship between them. This result might be explained by organizational identification theory: internal organizational members may perceive external knowledge as a threat and tend to ignore knowledge from external sources because they prefer to maintain their own knowledge, legitimacy, and homogeneous attitudes. Therefore, integrating knowledge from internal and external sources might not be effective, resulting in a failure to improve firm performance. Another possible explanation is small firms' resource and capital constraints and their lack of management expertise and absorptive capacity: although the integration of different knowledge sources is critical, high levels of knowledge sourcing in many areas are quite expensive and so are often unrealistic for small enterprises. This study provides several implications for research as well as practice. First, it extends existing knowledge by examining the substitutability (and complementarity) of knowledge sourcing strategies; most prior studies have investigated the independent effects of these strategies on performance without considering their combined impact. Furthermore, this study tests complementarity using the productivity approach, which has been considered a definitive test for complementarity. Second, this study sheds new light on knowledge management research by identifying the relationship between knowledge sourcing strategies and small firm performance: most of the current literature has asserted a complementary relationship between knowledge sourcing strategies on the basis of data from large firms, whereas, contrary to this conventional wisdom, this study identifies a substitutive relationship using data from small firms. Third, for practice, the findings highlight that managers of small firms should focus on knowledge sourcing through external-oriented strategies; moreover, adopting both sourcing strategies simultaneously impedes small firm performance.
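
A discrete illustration of the complementarity test mentioned above: supermodularity requires the performance gain from adopting one strategy to be at least as large when the other strategy is already adopted. The payoff values below are placeholders, not the study's estimates (which used continuous measures and the cross-partial derivative of the payoff function):

```python
def supermodularity_gap(f00, f10, f01, f11):
    """Discrete analogue of the cross-partial complementarity test.
    f_ij = firm performance with internal strategy i and external
    strategy j adopted (0/1). Positive gap -> the strategies are
    complements; negative gap -> substitutes."""
    return (f11 - f10) - (f01 - f00)

# Illustrative placeholder values only, not the paper's data.
gap = supermodularity_gap(f00=3.0, f10=3.2, f01=3.9, f11=3.8)
print("complements" if gap >= 0 else "substitutes", f"(gap={gap:+.2f})")
# With these placeholder numbers the gap is negative -> substitutes,
# the pattern the study reports for small firms.
```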