• Title/Summary/Keyword: PERFORM-3D

Search Result 844

Development of Balanced-Type Oscillating Mole Drainer (IV)(V) (평행식 진동탄환 암거 천공기의 연구 (IV)(V) - 실기 설계 제작 및 보장실험)

  • Kim, Yong-Hwan;Lee, Seung-Kyu;Seo, Sang-Yong
    • Journal of Biosystems Engineering
    • /
    • v.2 no.1
    • /
    • pp.7-24
    • /
    • 1977
  • This paper is the fourth and fifth in a study on the balanced-type oscillating mole drainer. In light of the results of the previous model tests, design criteria were established and a prototype machine was built for experimental purposes. The motion characteristics and function of each part of the machine were checked and analyzed, after which performance tests of the prototype were carried out in the field. The results are summarized as follows: 1. A bullet diameter of ten centimeters was chosen so that the machine could be attached to tractors of 30 PS to 40 PS capacity. 2. To maintain the balance between the moments of the front and rear shanks, the oscillating amplitude of the rear bullet was made larger than that of the front bullet, and the oscillating direction of the rear bullet was inclined by ten to thirty degrees. 3. An octagonal dynamo transducer (O.D.T.) was developed for measuring the compressive force on the upper link in measuring the draft force of the machine. An acceptable linear relationship between forces and strain responses of the O.D.T. was obtained. 4. From an analysis of the balancing mechanism of the acting part of the machine, the total draft force of the machine was found to equal the difference between the sum of the draft forces derived from the right and left bending moments of the lower drawbar and the compressive force on the upper link. 5. Acceptable linear relationships were found between strain and the twisting moment of the driving shaft, and between strain and shank moment. These results made the field experiments with the prototype machine possible. 6. When the test machine was used in the field, the oscillating acceleration was reduced by forty percent on average compared with a single-bullet mole drainer. 7.
When the test machine was operated under oscillating conditions, the draft force was reduced by 27 to 59 percent compared with the same machine under non-oscillating conditions, while it was increased by 7 to 20 percent compared with a mole drainer having a single oscillating bullet. This was attributed to the additional resistance of the rear shank and bullet. 8. As the amplitude and frequency of the bullet increased, the torque increased accordingly; this tendency varied with the characteristics of the given soil. The larger the frequency and amplitude, the more oscillating power was required and the smaller the draft force became, and the draft force increased as the operating velocity increased. 9. When the amplitude of the rear bullet was designed to be larger than that of the front bullet, the minimum value of the moment was lowered and the oscillating acceleration was reduced. When the oscillating direction of the rear bullet was declined backwards, the oscillating acceleration increased with the angle of declination. When the test machine was operated at high speed, the difference between the maximum and minimum moments narrowed; the varying magnitude of the moments on the moment oscillogram appears to be correlated with the oscillating acceleration and draft force. 10. The analysis of variance showed that frequency, amplitude, and operating velocity significantly affected the oscillating acceleration, draft resistance, torque, moment, and total power required, and that the interaction between frequency and amplitude affected the oscillating acceleration. 11.
Within the conditions of this study, the most preferable operating conditions of the test machine were an oscillating frequency of 7 Hz, an operating velocity of 0.54 m/sec, and an oscillating amplitude of 39.1 mm for the front and rear bullets. However, the proper frequency and magnitude of oscillation should be selected according to the soil properties of the field in which the balanced-type oscillating mole drainer is used. 12. It is recommended that a comparative study be performed in the near future between this machine, with its two separately balanced oscillating bullets, and other designs, such as one operated by oscillating a movable bullet in a single cylinder, or a balanced type with a single oscillating bullet and a spring, damper, or balancing weight. To expand the practical applicability of the balanced-type oscillating mole drainer, it is also suggested that a new mechanism be developed that performs mole drainage with vinyl pipe or filling material such as rice hulls.
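The draft-force balance described in item 4 can be expressed as a one-line relation. The following is a minimal sketch; the function name and all numbers are illustrative assumptions, not measurements from the paper.

```python
# Hypothetical sketch of the balance relation in item 4: the total draft force
# equals the sum of the draft forces derived from the left and right bending
# moments of the lower drawbar, minus the compressive force on the upper link.
# All names and values here are illustrative only.

def total_draft_force(draft_left, draft_right, upper_link_compression):
    """Net draft force of the machine (same unit as the inputs)."""
    return (draft_left + draft_right) - upper_link_compression

# Illustrative values only:
print(total_draft_force(180.0, 170.0, 50.0))  # 300.0
```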

  • PDF

Seismic Data Processing and Inversion for Characterization of CO2 Storage Prospect in Ulleung Basin, East Sea (동해 울릉분지 CO2 저장소 특성 분석을 위한 탄성파 자료처리 및 역산)

  • Lee, Ho Yong;Kim, Min Jun;Park, Myong-Ho
    • Economic and Environmental Geology
    • /
    • v.48 no.1
    • /
    • pp.25-39
    • /
    • 2015
  • CO2 geological storage plays an important role in reducing greenhouse gas emissions, but research for CCS demonstration is still lacking. To achieve the goal of CCS, storing CO2 safely and permanently in underground geological formations, it is essential to understand their characteristics, such as total storage capacity and stability, and to establish an injection strategy. We perform impedance inversion on seismic data acquired from the Ulleung Basin in 2012. To review the possibility of CO2 storage, we also construct porosity models and extract attributes of the prospects from the seismic data. To improve the quality of the seismic data, amplitude-preserving processing methods, namely SWD (Shallow Water Demultiple), SRME (Surface Related Multiple Elimination), and Radon demultiple, are applied. Three well logs are also analysed; the log correlations of each well are 0.648, 0.574, and 0.342, respectively. All wells are used in building the low-frequency model to generate a more robust initial model. Simultaneous pre-stack inversion is performed on all of the 2D profiles, and inverted P-impedance, S-impedance, and Vp/Vs ratio are generated from the inversion process. With the porosity profiles generated from the seismic inversion, porous and non-porous zones can be identified for the purpose of the CO2 sequestration initiative. More detailed characterization of the geological storage and simulation of CO2 migration are essential for CCS demonstration.
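The elastic attributes named in this abstract, P-impedance, S-impedance, and the Vp/Vs ratio, are simple products and ratios of velocity and density. A minimal sketch, not the authors' inversion workflow, with purely illustrative input values:

```python
# P-impedance Ip = rho * Vp and S-impedance Is = rho * Vs, plus the Vp/Vs
# ratio used to discriminate lithology and fluids. Inputs: velocities in m/s,
# bulk density in kg/m^3. The sample values below are illustrative only.

def elastic_attributes(vp, vs, rho):
    """Return (P-impedance, S-impedance, Vp/Vs) for one depth sample."""
    return rho * vp, rho * vs, vp / vs

ip, isr, ratio = elastic_attributes(vp=3000.0, vs=1500.0, rho=2400.0)
print(ip, isr, ratio)  # 7200000.0 3600000.0 2.0
```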

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which have involved two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM).
On-chip fuzzification by a table-lookup method. On-chip defuzzification by a centroid method. A reconfigurable architecture for processing two rule formats. RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D Then Do E, and Then Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B Then Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the applications programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality; many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, and such instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single or a few fixed programs, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required to perform a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                   MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences  125 s                  49 s                        0.0038 s
  1 inference      20.8 ms                8.2 ms                      6.4 us
  FLIPS            48                     122                         156,250
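The max-min compositional rule with Mamdani implication and centroid defuzzification that the chips implement can be sketched in software. The sketch below is illustrative only, not the chip's datapath or table layout; it mirrors the simpler "IF A and B Then Do E" rule format, and the triangular membership functions and values are assumptions.

```python
# Max-min Mamdani inference over rules "IF A and B THEN E":
#   AND is min, implication clips the consequent by min, rules are
#   aggregated with max, and the crisp output is the centroid of the
#   aggregated set over a discretized universe. Illustrative sketch only.

def mamdani_infer(rules, a_val, b_val, universe):
    """rules: list of (mu_A, mu_B, mu_E) membership functions;
    a_val, b_val: crisp inputs; universe: discretized output domain."""
    aggregated = [0.0] * len(universe)
    for mu_a, mu_b, mu_e in rules:
        strength = min(mu_a(a_val), mu_b(b_val))         # AND as min
        for i, y in enumerate(universe):
            clipped = min(strength, mu_e(y))             # implication as min
            aggregated[i] = max(aggregated[i], clipped)  # aggregation as max
    num = sum(mu * y for mu, y in zip(aggregated, universe))
    den = sum(aggregated)
    return num / den if den else 0.0                     # centroid defuzzification

# One illustrative triangular rule on a coarse universe (values assumed):
tri = lambda c, w: (lambda x: max(0.0, 1.0 - abs(x - c) / w))
universe = [i / 10 for i in range(11)]                   # 0.0 .. 1.0
rules = [(tri(0.3, 0.3), tri(0.7, 0.3), tri(0.8, 0.2))]
print(round(mamdani_infer(rules, 0.3, 0.7, universe), 2))  # 0.8
```

A hardware implementation replaces the membership-function calls with table lookups, which is exactly the on-chip fuzzification the abstract describes.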

  • PDF

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many companies in information and communication technology have made their internally developed AI technologies public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, there is a lack of studies that help develop or use deep learning open source software in industry. This study thus attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, together with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study analysis results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support the use of deep learning frameworks by research developers through collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the availability of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adoption of the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five success factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.