• Title/Summary/Keyword: MATLAB

A Study on Optimum Takeoff Time of the Hybrid Electric Powered Systems for a Middle Size UAV (중형무인기용 하이브리드 전기동력시스템의 최적 이륙시간에 관한 연구)

  • Lee, Bohwa;Park, Poomin;Kim, Keunbae;Cha, Bongjun
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.40 no.11 / pp.940-947 / 2012
  • The target system is a middle-size UAV: a low-speed, long-endurance UAV with a weight of 18 kg and a wingspan of 6.4 m. Three electric power sources are considered: solar cells, a fuel cell, and a battery. The optimal takeoff time is determined so as to maximize endurance, because the energy generated by the solar cells depends heavily on it. Each power source is modeled in MATLAB/Simulink, and the component models are verified against component test data. The component models are integrated into a power system model that is used for power simulations. When takeoff is at 6 pm and at 2 am, the system can supply power for 37.5 hours and 27.6 hours, respectively. In addition, a thermostat control simulation for the fuel cell demonstrates that it yields a greater power supply and more efficient power distribution.
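
A minimal MATLAB sketch of the takeoff-time study described above: it sweeps the takeoff hour and computes endurance for a constant power demand supplied by solar power plus a fixed stored-energy budget. All parameter values and the half-sine daylight model are illustrative assumptions, not figures from the paper.

```matlab
% Illustrative endurance-vs-takeoff-hour sweep (all numbers are placeholders)
P_demand   = 200;     % required electrical power [W]
P_solar_pk = 150;     % peak solar power at noon [W]
E_stored   = 4000;    % usable fuel-cell + battery energy [Wh]
dt         = 0.1;     % simulation step [h]

takeoff_hours = 0:23;
endurance = zeros(size(takeoff_hours));

for k = 1:numel(takeoff_hours)
    t0 = takeoff_hours(k);
    E  = E_stored;                      % remaining stored energy [Wh]
    t  = 0;                             % elapsed flight time [h]
    while E > 0
        hour  = mod(t0 + t, 24);        % local time of day
        % simple daylight model: half-sine solar power between 6h and 18h
        P_sun = P_solar_pk * max(0, sin(pi*(hour - 6)/12));
        P_net = P_demand - P_sun;       % power drawn from storage
        E     = E - max(P_net, 0)*dt;   % surplus solar is not stored here
        t     = t + dt;
    end
    endurance(k) = t;
end

[best_endurance, idx] = max(endurance);
fprintf('Best takeoff hour: %d:00, endurance %.1f h\n', ...
        takeoff_hours(idx), best_endurance);
```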

Reconstruction of internal structures and numerical simulation for concrete composites at mesoscale

  • Du, Chengbin;Jiang, Shouyan;Qin, Wu;Xu, Hairong;Lei, Dong
    • Computers and Concrete / v.10 no.2 / pp.135-147 / 2012
  • At the mesoscale, concrete is considered a three-phase composite material consisting of aggregate particles, the cement matrix, and the interfacial transition zone (ITZ). Reconstructing the internal structures of concrete composites requires identifying the boundaries of the aggregate particles and the cement matrix using digital imaging technology, followed by post-processing in MATLAB. A parameter study covering the subsection transformation, median filtering, and the morphological open and close operations on the digital image samples is performed to obtain the optimal parameters for the image processing. The subsection transformation is performed using the grey histogram of the digital image samples with a threshold window of [120, 210], followed by median filtering with a 16×16 square module sized according to the dimensions of the aggregate particles and their internal impurities. We then select a "disk" structuring element with a specific radius, with which the open and close operations are performed on the images. The edges of the aggregate particles (similar to those in the original digital images) are obtained using the Canny edge detection method. A finite element model at the mesoscale can then be established using the proposed image processing technology. The crack location determined by the numerical method is identical to the experimental result, and the numerically determined load-displacement curve is in close agreement with the experimental results. Comparisons of the numerical and experimental results show that the proposed image processing technology is highly effective in reconstructing the internal structures of concrete composites.
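
The image-processing chain in the abstract maps naturally onto Image Processing Toolbox calls. The sketch below follows the stated steps (grey-level windowing in [120, 210], a 16×16 median module, open/close with a disk structuring element, Canny edge detection); the input file name and the disk radius are placeholders, not values from the paper.

```matlab
% Mesoscale image-processing chain (placeholder file name and disk radius)
I = imread('concrete_section.png');     % digital image of the specimen
G = im2gray(I);                         % grey-level image

% Subsection (piecewise grey-level) transformation: keep the window [120, 210]
BW = G >= 120 & G <= 210;

% Median filtering with a 16x16 square module (cast to uint8 for medfilt2)
BW = medfilt2(uint8(BW), [16 16]) > 0;

% Morphological opening and closing with a "disk" structuring element
se = strel('disk', 5);                  % radius is an assumption
BW = imclose(imopen(BW, se), se);

% Canny edge detection recovers the aggregate boundaries
edges = edge(double(BW), 'canny');
imshow(edges)
```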

Assessment of liquefaction potential of the Erzincan, Eastern Turkey

  • Duman, Esra Subasi;Ikizler, Sabriye Banu;Angin, Zekai;Demir, Gokhan
    • Geomechanics and Engineering / v.7 no.6 / pp.589-612 / 2014
  • This study determines the liquefaction potential in the Erzincan city center. Erzincan Province is situated within the first-degree earthquake zone on the earthquake map of Turkey. In this context, earthquake scenarios were produced using empirical expressions, and the liquefaction potential was determined for different earthquake magnitudes (6.0, 6.5, 7.0). Liquefaction potential was investigated using the Standard Penetration Test (SPT). The liquefaction potential analyses comprise two steps: geotechnical investigations and calculations. In the first step, boreholes were drilled to obtain disturbed and undisturbed soil samples and SPT values, and laboratory tests were performed to identify the geotechnical properties of the soil samples. In the second step, the liquefaction potential was examined using two methods, namely Seed and Idriss (1971) and Iwasaki et al. (1981). The liquefaction potential is broadly classified into three categories: non-liquefiable, marginally liquefiable, and liquefiable regions. Additionally, the liquefaction potential index is classified into four categories: non-liquefiable, low, high, and very high. A MATLAB program was prepared so that the liquefaction analyses could be completed within a short time. Following the analyses, the liquefaction potential index was investigated using the Iwasaki et al. (1982) method. At the final stage of this study, liquefaction potential maps and liquefaction potential index maps of the whole study area were prepared for different earthquake magnitudes and depths using the IDW (inverse distance weighted) interpolation method in the Geostatistical Analyst module of ArcGIS 10.0. The results of soil liquefaction potential were evaluated in ArcGIS to map the distribution of drillings with liquefaction potential. The maps show a spatial variability in the results that makes it difficult to clearly separate regional areas of high and low potential to liquefy. However, this study indicates that the presence of ground water and sandy-silty soils, together with the seismic features of the region, increases the liquefaction potential.
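
As a hedged illustration of the Seed and Idriss (1971) simplified procedure mentioned above, the sketch below evaluates the cyclic stress ratio and a factor of safety at a single SPT depth. The soil parameters, the CRR value, and the magnitude scaling factor are illustrative assumptions, not data from the Erzincan study.

```matlab
% One-depth liquefaction check, Seed & Idriss simplified procedure (illustrative)
Mw      = 7.0;        % scenario earthquake magnitude
amax_g  = 0.40;       % peak ground acceleration / g (illustrative)
z       = 6.0;        % depth of the SPT sample [m]
sigma_v = 110;        % total vertical stress [kPa] (illustrative)
sigma_e = 75;         % effective vertical stress [kPa] (illustrative)

% Stress-reduction coefficient (linear approximation for z < 9.15 m)
rd = 1 - 0.00765*z;

% Cyclic stress ratio induced by the earthquake
CSR = 0.65 * amax_g * (sigma_v/sigma_e) * rd;

% Cyclic resistance ratio for Mw = 7.5 (placeholder; normally read from
% the (N1)60 liquefaction chart) and a common magnitude scaling factor
CRR75 = 0.18;
MSF   = 10^2.24 / Mw^2.56;
FS    = (CRR75 * MSF) / CSR;   % factor of safety against liquefaction

if FS < 1
    verdict = 'liquefiable';
else
    verdict = 'non-liquefiable';
end
fprintf('CSR = %.3f, FS = %.2f -> %s\n', CSR, FS, verdict);
```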

Comparing of the effects of scaled and real earthquake records on structural response

  • Ergun, Mustafa;Ates, Sevket
    • Earthquakes and Structures / v.6 no.4 / pp.375-392 / 2014
  • Time history analyses have become a common way to determine the earthquake performance of structures in recent years, as advances in computer technology and structural analysis have made them widely accessible. Eurocode 8 allows the use of real earthquake records as input for linear and nonlinear time history analyses of structures. However, real earthquake records with the desired characteristics, for example for a given soil class, sometimes cannot be found; in this case artificial and synthetic earthquake records can be used for seismic analyses instead of real records. Selected earthquake records should be scaled to a code design spectrum to reduce record-to-record variability in the structural responses of the considered structures, so scaling of earthquake records is one of the most important procedures of time history analysis. In this paper, four real earthquake records are scaled to Eurocode 8 design spectra using SESCAP (Selection and Scaling Program), a MATLAB GUI application based on the time-domain scaling method, and the scaled and real earthquake records are then used for linear time history analyses of a six-storey building. The building is modeled as a spatial (three-dimensional) structure in SAP2000. The objectives of this study are to summarize the basic procedures and criteria for selecting and scaling earthquake records, and to compare the effects of scaled and real earthquake records on structural response in terms of record-to-record variability. The seismic analysis results for the building show that the record-to-record variability of structural response caused by the scaled earthquake records is lower than that caused by the real records.
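
A minimal sketch of time-domain scaling of the kind SESCAP performs: a single factor is chosen so that the record's response spectrum best matches the target spectrum in a least-squares sense over a period range. The spectra and the matching range below are synthetic placeholders, not the Eurocode 8 spectra or records used in the paper.

```matlab
% Time-domain scaling: one scale factor from a least-squares spectral match
T = (0.05:0.05:4)';                 % periods of interest [s]

% Placeholder spectra (illustrative shapes only): in practice Sa_record is the
% 5%-damped response spectrum of the record and Sa_target is the EC8 spectrum.
Sa_target = 0.5 ./ max(T, 0.5);
Sa_record = 0.8 * Sa_target .* (1 + 0.2*sin(4*T));

% Least-squares scale factor over the matching range (spectral ordinates scale
% linearly with the ground-motion amplitude).
idx   = T >= 0.2 & T <= 2.0;        % matching period range is an assumption
alpha = sum(Sa_target(idx).*Sa_record(idx)) / sum(Sa_record(idx).^2);

fprintf('Scale factor = %.3f\n', alpha);  % the record a(t) is then multiplied by alpha
```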

The Optimized Detection Range of RFID-based Positioning System using k-Nearest Neighbor Algorithm

  • Kim, Jung-Hwan;Heo, Joon;Han, Soo-Hee;Kim, Sang-Min
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2008.10a / pp.297-302 / 2008
  • Positioning technology for moving objects is an important and essential component of ubiquitous computing environments and applications, for which Radio Frequency Identification (RFID) has been considered a core technology. An RFID-based positioning system calculates the position of a moving object with a k-nearest neighbor (k-nn) algorithm using the k detected tags, which have known coordinates; k is determined by the detection range of the RFID system. In this paper, the RFID-based positioning system determines the position of the moving object not by using a weight factor that depends on received signal strength, but by assuming that all tags within the detection range always respond and carry the same weight, because the latter system is much more economical than the former. The tag geometries were chosen with large buildings such as office buildings, shopping malls, and warehouses in mind: a line in 1-dimensional space and a square grid in 2-dimensional space. In 1-dimensional space, the optimal detection range is determined to be 125% of the tag spacing distance through both an analytical and a numerical approach, where the analytical approach means a mathematical proof and the numerical approach means a simulation in MATLAB. The analytical approach is very difficult in 2-dimensional space, so the optimal detection range there is determined numerically to be 134% of the tag spacing distance. This result can be used as a fundamental study for designing RFID-based positioning systems.
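
A small MATLAB sketch of the equal-weight k-nn estimate described above: every tag detected within range contributes the same weight, so the position estimate is simply the centroid of the detected tags. The grid spacing, the detection radius (taken as 134% of the spacing, per the 2-D result), and the object position are illustrative.

```matlab
% Equal-weight k-nn positioning on a square reference-tag grid (illustrative)
d = 1.0;                              % tag spacing [m]
r = 1.34 * d;                         % detection range (2-D optimum from the study)

% Square grid of reference tags with known coordinates
[gx, gy] = meshgrid(0:d:10, 0:d:10);
tags = [gx(:), gy(:)];

p_true = [4.3, 6.7];                  % unknown position of the moving object

% Tags whose distance to the object is within the detection range
in_range = vecnorm(tags - p_true, 2, 2) <= r;

% Equal weights: estimate = mean of the detected tag coordinates
p_est = mean(tags(in_range, :), 1);
fprintf('Estimated position: (%.2f, %.2f), error %.2f m\n', ...
        p_est(1), p_est(2), norm(p_est - p_true));
```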

The Optimized Detection Range of RFID-based Positioning System using k-Nearest Neighbor Algorithm

  • Kim, Jung-Hwan;Heo, Joon;Han, Soo-Hee;Kim, Sang-Min
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2008.10a / pp.270-271 / 2008
  • Positioning technology for moving objects is an important and essential component of ubiquitous computing environments and applications, for which Radio Frequency Identification (RFID) has been considered a core technology for ubiquitous wireless communication. An RFID-based positioning system calculates the position of a moving object with a k-nearest neighbor (k-nn) algorithm using the k detected tags, which have known coordinates; k is determined by the detection range of the RFID system. In this paper, the RFID-based positioning system determines the position of the moving object not by using a weight factor that depends on received signal strength, but by assuming that all tags within the detection range always respond and carry the same weight, because the latter system is much more economical than the former. The tag geometries were chosen with large buildings such as office buildings, shopping malls, and warehouses in mind: a line in 1-dimensional space, a square grid in 2-dimensional space, and a cubic grid in 3-dimensional space. In 1-dimensional space, the optimal detection range is determined to be 125% of the tag spacing distance through both an analytical and a numerical approach, where the analytical approach means a mathematical proof and the numerical approach means a simulation in MATLAB. The analytical approach is very difficult in 2- and 3-dimensional space, so the optimal detection range is determined numerically to be 134% of the tag spacing distance in 2-dimensional space and 143% in 3-dimensional space. This result can be used as a fundamental study for designing RFID-based positioning systems.

Performance Improvement on MPLS On-line Routing Algorithm for Dynamic Unbalanced Traffic Load

  • Sa-Ngiamsak, Wisitsak;Sombatsakulkit, Ekanun;Varakulsiripunth, Ruttikorn
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1846-1850 / 2005
  • This paper presents a constraint-based routing (CBR) algorithm, called the Dynamic Possible Path per Link (D-PPL) routing algorithm, for MultiProtocol Label Switching (MPLS) networks. In MPLS on-line routing, future traffic is unknown and network resources are limited. Therefore many routing algorithms, such as the Minimum Hop Algorithm (MHA), Widest Shortest Path (WSP), Dynamic Link Weight (DLW), Minimum Interference Routing Algorithm (MIRA), Profile-Based Routing (PBR), Possible Path per Link (PPL), and Residual bandwidth integrated Possible Path per Link (R-PPL), have been proposed to improve network throughput and reduce rejection probability. MIRA is the first algorithm to introduce avoidance of interference between source-destination node pairs by integrating topology information, namely the addresses of the source-destination node pairs, into the routing calculation. Its results show a lower rejection probability, but MIRA suffers from high routing complexity, which can be considered an NP-complete problem. In PBR, the complexity of on-line routing is reduced compared to MIRA because link weights are calculated off-line from a statistical profile of past traffic; however, because of the dynamic nature of traffic, PBR may be unsuitable for MPLS on-line routing. The PPL and R-PPL routing algorithms we formerly proposed reduce the interference level among source-destination node pairs, the rejection probability, and the routing complexity, but they also do not take the dynamic nature of the traffic load into account. Future traffic is unknown, but the amount of past traffic over each link can be measured; this is the motivation for the proposed D-PPL algorithm. D-PPL improves on the R-PPL routing algorithm by integrating traffic-per-link parameters that are periodically updated and change dynamically with the current incoming traffic. D-PPL tries to reserve residual bandwidth for future requests by avoiding routes through links with high traffic-per-link values. We have developed an extensive MATLAB simulator to evaluate the performance of D-PPL. The simulation results show that D-PPL improves the performance of MPLS on-line routing in terms of rejection probability and total throughput.
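
The sketch below illustrates the general on-line CBR idea behind D-PPL, not the exact D-PPL formulation: link weights combine residual bandwidth with a measured traffic-per-link counter so that heavily used links are avoided, and the counters are updated as LSP requests are routed. The topology, the demands, and the weighting constant are assumptions.

```matlab
% On-line CBR with traffic-aware link weights (illustrative, not exact D-PPL)
links = [1 2; 2 4; 1 3; 3 4; 2 3];   % undirected links of a small example topology
resid = [10; 10; 10; 10; 10];        % residual bandwidth per link
traf  = zeros(5,1);                  % measured traffic per link (periodically updated)
beta  = 0.1;                         % weight of the traffic term (assumption)
BIG   = 1e9;                         % effectively prunes infeasible links

requests = [1 4 2; 1 4 3; 1 4 4];    % rows: [ingress egress bandwidth-demand]
for k = 1:size(requests,1)
    s = requests(k,1); t = requests(k,2); b = requests(k,3);
    w = 1./resid + beta*traf;        % prefer roomy, lightly loaded links
    w(resid < b) = BIG;              % links that cannot carry the demand
    G = graph(links(:,1), links(:,2), w);
    [path, dist] = shortestpath(G, s, t);
    if isempty(path) || dist >= BIG
        fprintf('Request %d rejected\n', k);
        continue
    end
    for e = 1:numel(path)-1          % reserve bandwidth and update traffic counters
        u = path(e); v = path(e+1);
        hit = (links(:,1)==u & links(:,2)==v) | (links(:,1)==v & links(:,2)==u);
        resid(hit) = resid(hit) - b;
        traf(hit)  = traf(hit)  + b;
    end
    fprintf('Request %d routed via %s\n', k, mat2str(path));
end
```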

Power Fluctuation Reduction of Pitch-Regulated MW-Class PMSG based WTG System by Controlling Kinetic Energy

  • Howlader, Abdul Motin;Urasaki, Naomitsu;Yona, Atsushi;Senjyu, Tomonobu;Saber, Ahmed Yousuf
    • Journal of International Conference on Electrical Machines and Systems / v.1 no.2 / pp.116-124 / 2012
  • Wind is an abundant source of natural energy which can be utilized to generate power. Wind velocity does not remain constant, so the output power of wind turbine generators (WTGs) fluctuates. To reduce the fluctuation, different approaches have been proposed, such as energy storage devices, electric double layer capacitors, and flywheels. These methods are effective but require significant extra cost for installation and maintenance. This paper proposes to reduce output power fluctuation by controlling the kinetic energy of the WTG system. A MW-class pitch-regulated permanent magnet synchronous generator (PMSG) is introduced to apply the power fluctuation reduction method. The major advantage of the proposed method is that no additional energy storage system is required to control the power fluctuation. Additionally, the proposed method can mitigate shaft stress of the WTG system, which is reflected in enhanced reliability of the wind turbine. Moreover, the proposed method can be changed into the maximum power point tracking (MPPT) control method by adjusting the averaging time. The proposed power smoothing control is compared with the MPPT control method and verified in the MATLAB/Simulink environment.
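
A hedged sketch of the smoothing idea: the output power reference is a moving average of the available (MPPT) power, so the difference between the two is buffered as rotor kinetic energy, and shrinking the averaging time recovers MPPT operation. The wind profile and turbine parameters below are illustrative, not those of the MW-class PMSG in the paper.

```matlab
% Power smoothing by averaging the MPPT power (all signals are illustrative)
dt     = 0.1;                         % sample time [s]
t      = (0:dt:600)';                 % 10 minutes
v      = 10 + 1.5*sin(2*pi*t/60) + 0.8*randn(size(t));    % wind speed [m/s]
P_mppt = 0.5*1.225*pi*40^2*0.45*v.^3/1e6;                  % available power [MW]

T_avg  = 60;                          % averaging time [s]; larger -> smoother,
N      = round(T_avg/dt);             % smaller -> approaches MPPT operation
P_ref  = movmean(P_mppt, N);          % smoothed output power reference

% Energy buffered in the rotor (appears as rotor speed variation) [MJ]
dE = cumtrapz(t, P_mppt - P_ref);

plot(t, [P_mppt, P_ref]); xlabel('time [s]'); ylabel('power [MW]');
legend('MPPT power', 'smoothed reference');
```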

A Study on System Availability Analysis Utilizing Markov Process (마르코프 프로세스를 활용한 시스템 가용도 분석 방법 고찰)

  • Kim, Bohyeon;Kim, Seongkyung;Pagulayan, Dhominick;Hur, Jangwook
    • Journal of Applied Reliability / v.16 no.4 / pp.295-304 / 2016
  • Purpose: This paper presents an application of the Markov process to reliability and availability analysis. For the analysis, we set up a specific case of a tablet PC and its usage scenario; the case includes spares as well as maintenance and repair processes. Methods: The different configurations of the tablet PC, as well as their functions, are defined. The system configuration and the calculated failure rates of components are modeled in Windchill Quality Solution. Two models, one without a spare and one with a spare, are created and compared using the Markov process. MATLAB numerical analysis is used to simulate and show the change of state with time. The availability of the system is computed by determining the time the system stays in the different states. Results: The mission availability and the steady-state availability for the mission are compared, and the system with spares shows improved availability over the system without spares. The simulated data show that considering spares increases system uptime, which results in greater availability. Conclusion: There are many techniques and methods for reliability and availability analysis, and most rely on time-independent assumptions. The Markov process, in addition to its steady-state and ergodic properties, can perform time-dependent analysis over any given time period.
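
A minimal two-state (up/down) Markov sketch of the kind of analysis described: the state probabilities follow dP/dt = P·Q, point availability is the probability of the up state, and the steady-state availability is μ/(λ+μ). The failure and repair rates are illustrative, not the Windchill-derived rates from the study.

```matlab
% Two-state Markov availability model (illustrative rates)
lambda = 1/500;                 % failure rate [1/h]
mu     = 1/24;                  % repair rate  [1/h]

Q = [-lambda  lambda;           % generator (transition-rate) matrix
      mu     -mu    ];

P0 = [1 0];                     % start in the "up" state
t  = 0:1:2000;                  % mission time [h]
A  = zeros(size(t));
for k = 1:numel(t)
    P    = P0 * expm(Q*t(k));   % state probabilities at time t(k)
    A(k) = P(1);                % instantaneous (point) availability
end

A_ss = mu/(lambda + mu);        % steady-state availability
fprintf('Steady-state availability: %.4f\n', A_ss);
plot(t, A); xlabel('time [h]'); ylabel('availability');
```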

Delamination evaluation on basalt FRP composite pipe by electrical potential change

  • Altabey, Wael A.
    • Advances in aircraft and spacecraft science / v.4 no.5 / pp.515-528 / 2017
  • Since composite structures are widely used in structural engineering, delamination in such structures is an important research issue; delamination is one of the principal causes of failure in composites. In this study the electrical potential (EP) technique is applied to detect and locate delamination in a basalt fiber reinforced polymer (FRP) laminate composite pipe using an electrical capacitance sensor (ECS). The proposed EP method can identify and localize hidden delamination inside the composite layers, without relying on data accumulated from other methods, and achieves an overall identification of the delamination location/size in the composite with high accuracy, ease, and low cost. Twelve electrodes are mounted on the outer surface of the pipe. Afterwards, delamination is introduced between the layers of the three-layer (0°/90°/0°)s laminate pipe, split into twelve scenarios. The change in the dielectric properties of the basalt FRP pipe is measured before and after delamination using arrays of electrical contacts, and the variations in capacitance values, capacitance change, and node potential distribution are analyzed. Using these changes in electrical potential due to delamination, a finite element simulation model for delamination location/size detection is generated in ANSYS and MATLAB, which are combined to simulate the sensor characteristics. The response surface method (RSM) is adopted as a tool for solving the inverse problem of estimating the delamination location/size from the measured electrical potential changes of all segments between electrodes. The results show good convergence between the finite element model (FEM) and the estimated results, and indicate that the proposed method successfully assesses the delamination location/size for basalt FRP laminate composite pipes. The results are in excellent agreement with the experimental results available in the literature, thus validating the accuracy and reliability of the proposed technique.
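
As a hedged illustration of the RSM inverse step, the sketch below fits quadratic response surfaces that map measured capacitance/potential changes to delamination location and size, then evaluates them for a new measurement. The training data are synthetic placeholders, not the ECS measurements from the paper.

```matlab
% Response-surface inverse estimation of delamination location/size (synthetic data)
rng(1);
loc = rand(40,1)*360;                         % delamination location [deg], training set
sz  = 5 + rand(40,1)*20;                      % delamination size [mm]
dC  = 0.8*loc/360 + 0.02*sz + 0.05*randn(40,1);    % capacitance change (synthetic)
dV  = 0.5*sz/25  + 0.3*loc/360 + 0.05*randn(40,1); % potential change  (synthetic)

X = [dC dV];
mdl_loc = fitlm(X, loc, 'quadratic');         % quadratic response surface for location
mdl_sz  = fitlm(X, sz,  'quadratic');         % quadratic response surface for size

x_new = [0.55 0.45];                          % new measurement (placeholder)
fprintf('Estimated location: %.1f deg, size: %.1f mm\n', ...
        predict(mdl_loc, x_new), predict(mdl_sz, x_new));
```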