• Title/Summary/Keyword: software development cost


Development of Global Fishing Application to Build Big Data on Fish Resources (어자원 빅데이터 구축을 위한 글로벌 낚시 앱 개발)

  • Pi, Su-Young;Lee, Jung-A;Yang, Jae-Hyuck
    • Journal of Digital Convergence
    • /
    • v.20 no.3
    • /
    • pp.333-341
    • /
    • 2022
  • Despite the rapidly increasing demand for fishing, studies and information related to fishing are scarce, and there are limits to obtaining data on the global distribution of fish resources. Since the existing method of investigating fish resource distribution collects information by visiting the survey area and using a throwing net, it is almost impossible to collect nation-wide data covering streams, rivers, and seas. In addition, the existing method of measuring the length of fish relied on a tape measure; in this study, FishingTAG's smart measure was developed instead. When recording a picture with the smart measure, the length of the fish and the environmental data at the time of the catch are collected automatically, and there is no need to carry a tape measure, which increases user convenience. With the development of a global fishing application using FishingTAG's smart measure, first, fish resource samples can be collected continuously and in real time over a wide area around the world. Second, the enormous cost of collecting fish resource data can be reduced, and the distribution and spread of alien fish species disturbing the ecosystem can be monitored. Third, by visualizing global fish resource information through Google Maps, users can obtain information on fish resources according to their location. Since the application provides fish resource data collected in real time, it is expected to be of great help to various studies and to the establishment of policies.

On the Development of Safety Requirements Based on Functional Analysis of LRT Stations in Concept Development Stage (경전철 역사 개념설계 단계에서 기능분석 결과를 활용한 안전요구사항의 생성방법에 관한 연구)

  • Kim, Joo-Uk;Jung, Ho-Jeon;Park, Kee-Jun;Kim, Joorak;Han, Seok Youn;Lee, Jae-Chon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.4
    • /
    • pp.382-391
    • /
    • 2016
  • For safety-critical systems including railways, there has been a growing need for effective and systematic safety management processes. The outcomes of efforts in this area are international safety standards, such as IEC 61508, 62278, and ISO 26262. One of the principal activities in the safety process is hazard analysis. For this reason, considerable efforts have been directed toward methods of hazard analysis. On the other hand, the hazard analysis methods reported thus far appear to be unclear in terms of their relationship with the system design process. In addition, in some cases, the methods appear to rely heavily on information regarding the hardware and software components, the number of which is increasing. These aspects can become troublesome when design changes are necessary. To improve the situation, in this paper, hazard analysis was carried out using the result of functional analysis early in the concept development stage for a safety-critical system design. Because hazard analysis is carried out at the system level and the result is then used to develop the safety requirements, improvements can be expected in terms of the development time and cost when design changes are required due to changes in the requirements. As a case study, the generation of safety requirements for the development of light rail transit stations is presented.

A study on Design of Generation Capacity for Offshore Wind Power Plant : The Case of Chonnam Province in Korea (해상풍력 발전용량 설계에 관한 연구 : 전남사례를 중심으로)

  • Jeong, Moon-Seon;Moon, Chae-Joo;Chang, Young-Hak;Lee, Soo-Hyoung;Lee, Sook-Hee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.13 no.3
    • /
    • pp.547-554
    • /
    • 2018
  • Wind energy is widely recognized as one of the cheapest forms of clean, renewable energy; in several countries, wind energy has achieved cost parity with fossil-fuel-based sources for new electricity generation plants. Offshore wind energy development promises to be a significant domestic renewable energy source for the target of the Korean government's 3020 plan. A pivotal activity during the development phase of a wind project is wind resource assessment, whose approaches can be categorized into three basic scales or stages: preliminary area identification, area wind resource evaluation, and micrositing. This study estimates the wind power capacity of the Chonnam Province offshore area using these three stages, based on data from six meteorological masts. WindPRO, a well-known wind energy prediction program built on more than 25 years of experience in developing software tools for wind energy projects, was used. The designed offshore wind power generation capacity is calculated as a total of 2.52 GW across six wind farms in the Chonnam offshore area.

Timing Verification of AUTOSAR-compliant Diesel Engine Management System Using Measurement-based Worst-case Execution Time Analysis (측정기반 최악실행시간 분석 기법을 이용한 AUTOSAR 호환 승용디젤엔진제어기의 실시간 성능 검증에 관한 연구)

  • Park, Inseok;Kang, Eunhwan;Chung, Jaesung;Sohn, Jeongwon;Sunwoo, Myoungho;Lee, Kangseok;Lee, Wootaik;Youn, Jeamyoung;Won, Donghoon
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.5
    • /
    • pp.91-101
    • /
    • 2014
  • In this study, we present a timing verification method for a passenger-car diesel engine management system (EMS) using measurement-based worst-case execution time (WCET) analysis. To cope with the AUTOSAR-compliant software architecture, a development process model is proposed in which a runnable is regarded as a test unit and its temporal behavior (i.e., maximum observed execution time, MOET) is obtained along with on-target functionality evaluation results during online unit testing. Furthermore, a cost-effective framework for online unit testing is proposed: because the runtime environment layer and the standard calibration environment are utilized to implement the test interface, additional resource consumption on the target processor is minimized. Using the proposed development process model and unit test framework, the MOETs of 86 runnables of the diesel EMS are obtained with 213 unit test cases. From the obtained MOETs of the runnables, the WCETs of the tasks are estimated and schedulability is evaluated. From the schedulability analysis results, problems in the initially designed schedule table are identified and fixed by redesigning the runnable mapping and task offsets. The proposed method is validated through various test scenarios.
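The task-level check the abstract describes (summing runnable MOETs into task WCET estimates, then testing schedulability) can be sketched roughly as follows. This is an illustrative sketch only: the 1.2 safety margin, the runnable-to-task mapping, and the use of the Liu-Layland rate-monotonic utilization bound are assumptions for the example, not details taken from the paper.

```python
# Illustrative sketch: margin factor and the rate-monotonic utilization
# test are assumptions, not the paper's actual method.

def task_wcet_us(runnable_moets_us, margin=1.2):
    """Estimate a task's WCET as the sum of the MOETs of its mapped
    runnables, inflated by a (hypothetical) safety margin."""
    return sum(runnable_moets_us) * margin

def rm_schedulable(tasks):
    """tasks: list of (wcet_us, period_us) pairs.
    Liu & Layland utilization bound for rate-monotonic scheduling."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# Made-up MOETs: a 1 ms task with two runnables, a 5 ms task with two more.
tasks = [(task_wcet_us([120, 80]), 1000),
         (task_wcet_us([300, 200]), 5000)]
print(rm_schedulable(tasks))
```

A failed check of this kind is what would prompt the redesign of the runnable mapping or task offsets mentioned above.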

Case Study of UML(Unified Modeling Language) Design for Web-based Forest Fire Hazard Index Presentation System (웹 기반 산불위험지수 표출시스템에서의 UML(Unified Modeling Language) 설계 사례)

  • Jo, Myung-Hee;Jo, Yun-Won;Ahn, Seung-Seup
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.5 no.1
    • /
    • pp.58-68
    • /
    • 2002
  • Recently, as awareness of the need to prevent natural disasters has grown, one of the most important duties of government officials is to provide disaster-prevention information through the Web and to raise public awareness of disaster prevention. In particular, if the daily forest fire hazard index is visualized on the Web, people have more chances to understand forest fires, and damage from large-scale forest fires can be reduced. The forest fire hazard index presentation system developed in this paper presents the daily forest fire hazard index visually on a map and also provides the related information in text format. To develop this system, a CBDP (Component Based Development Process) is proposed. This development process emphasizes reusability, with a lifecycle that starts from requirement and domain analysis and finishes with component generation. Moreover, the process reflects component-based methods, which are currently a major topic in the software field. In the future, the components developed in this paper may be reused in other Web GIS applications with similar functions, reducing the cost and time needed to develop similar systems.


The Development of 1G-PON Reach Extender based on Wavelength Division Multiplexing for Reduction of Optical Core (국사 광역화와 광코어 절감을 위한 파장분할다중 기반의 1기가급 수동 광가입자망 Reach Extender 효율 극대화 기술 개발)

  • Lee, Kyu-Man;Kwon, Taek-Won
    • Journal of Digital Convergence
    • /
    • v.17 no.8
    • /
    • pp.229-235
    • /
    • 2019
  • As demand for broadband multimedia, including the Internet, increases explosively, upgrading the subscriber network has become the biggest issue in the telecommunication industry due to the surge in data traffic caused by new services such as smartphones, IPTV, VoIP, VOD, and cloud services. In this paper, we developed a WDM (Wavelength Division Multiplexing)-PON (Passive Optical Network) based on the 1-Gigabit Reach Extender (RE) technique to reduce the number of optical cores. In particular, to strengthen market competitiveness, we considered low cost, miniaturization, integration, and low power consumption of the optical parts. In addition, we developed an integrated system combining all the techniques for reliability and remote management, extending the transmission distance and increasing the capacity of the optical line by applying RE technology to the existing PON network. Based on system interworking with existing commercial 1G PON devices, the developed system can contribute to central-office consolidation and optical core reduction. Building on these results, we are studying the development of 10G PON technology.

Development of Industrial Embedded System Platform (산업용 임베디드 시스템 플랫폼 개발)

  • Kim, Dae-Nam;Kim, Kyo-Sun
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.5
    • /
    • pp.50-60
    • /
    • 2010
  • For the last half-century, the personal computer and software industries have prospered thanks to the incessant evolution of computer systems. In the 21st century, the embedded system market has grown greatly as the market shifted to mobile gadgets. While many multimedia gadgets such as mobile phones, navigation systems, and PMPs are pouring into the market, most industrial control systems still rely on 8-bit micro-controllers and simple application software techniques. Unfortunately, the technological barrier, which requires additional investment and higher-quality manpower to overcome, and the business risks arising from the uncertainty of market growth and the competitiveness of the resulting products have prevented companies in the industry from taking advantage of such advanced technologies. However, high-performance, low-power, and low-cost hardware and software platforms will enable their high-technology products to be developed and recognized by potential clients in the future. This paper presents such a platform for industrial embedded systems. The platform was designed around the Telechips TCC8300 multimedia processor, which embeds a variety of parallel hardware for implementing multimedia functions, and open-source Embedded Linux, TinyX, and GTK+ are used to implement the GUI and minimize technology costs. To estimate the expected performance and power consumption, the performance improvement and the power consumption due to each enabled hardware sub-system, including the YUV2RGB frame converter, were measured. An analytic model was devised to check the feasibility of a new application and trade off its performance and power consumption; the validity of the model has been confirmed by implementing a real target system. The cost can be further mitigated by using hardware parts that are already used in mass-production products, mostly in the cell-phone market.
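An Amdahl-style analytic model of the kind described, in which each enabled hardware sub-system accelerates part of the workload at the cost of extra power, can be sketched as below. The abstract does not give the paper's actual model or coefficients; every function and number here is an illustrative assumption.

```python
# Hypothetical Amdahl-style feasibility model: each enabled sub-system
# accelerates a fraction of the workload and draws extra power.
# All numbers are illustrative, not measurements from the paper.

def run_time_s(base_time_s, blocks):
    """blocks: list of (workload_fraction, speedup) per enabled sub-system;
    the accelerated fractions are assumed disjoint."""
    t = base_time_s
    for frac, speedup in blocks:
        t -= base_time_s * frac * (1.0 - 1.0 / speedup)
    return t

def energy_j(time_s, base_power_w, extra_powers_w):
    """Energy = (baseline power + sub-system power overheads) * run time."""
    return time_s * (base_power_w + sum(extra_powers_w))

# e.g. a YUV2RGB-style block speeding up half the workload 4x for +0.3 W:
t = run_time_s(10.0, [(0.5, 4.0)])
e = energy_j(t, 1.0, [0.3])
```

Comparing the energy with and without a block enabled is one way such a model supports the performance/power trade-off mentioned above.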

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, involving two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip in a VMEbus environment. High-level C-language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so creating an embedded processor tailored for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference, and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                      MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences   125 s                  49 s                        0.0038 s
    1 inference       20.8 ms                8.2 ms                      6.4 ㎲
    FLIPS             48                     122                         156,250
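The max-min inference mechanism the chips implement (min for rule firing strength and clipping, max for aggregation, centroid defuzzification) can be sketched in software as below. The rule and membership values are made-up examples, and the 64-element universe from the abstract is shortened for brevity.

```python
# Sketch of Mamdani max-min inference over discretized fuzzy sets.
# The rule sets used in the example test are invented, not the chip's
# actual rule memory.

def infer(rules, inputs):
    """rules: list of (antecedent_sets, consequent_set); each fuzzy set is a
    list of membership grades over a shared discretized universe.
    inputs: crisp input indices, one per antecedent."""
    n = len(rules[0][1])
    aggregated = [0.0] * n
    for antecedents, consequent in rules:
        # firing strength: min of the matched antecedent memberships
        w = min(a[i] for a, i in zip(antecedents, inputs))
        # clip the consequent by w (min), aggregate across rules by max
        for k in range(n):
            aggregated[k] = max(aggregated[k], min(w, consequent[k]))
    return aggregated

def centroid(fuzzy_set):
    """Defuzzify by the centroid (center of gravity) of the aggregated set."""
    total = sum(fuzzy_set)
    return sum(i * m for i, m in enumerate(fuzzy_set)) / total if total else 0.0
```

The inner min/max loops are exactly the operations the proposed specialized instructions would accelerate on a RISC processor.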


Determinants Affecting Organizational Open Source Software Switch and the Moderating Effects of Managers' Willingness to Secure SW Competitiveness (조직의 오픈소스 소프트웨어 전환에 영향을 미치는 요인과 관리자의 SW 경쟁력 확보의지의 조절효과)

  • Sanghyun Kim;Hyunsun Park
    • Information Systems Review
    • /
    • v.21 no.4
    • /
    • pp.99-123
    • /
    • 2019
  • The software industry is a high value-added industry in the knowledge-information age, and its importance is growing as it not only plays a key role in knowledge creation and utilization but also secures global competitiveness. Among the various software available in today's business environment, open source software (OSS) is rapidly expanding its reach by not only leading software development but also integrating with new information technology. The purpose of this research is therefore to empirically examine and analyze the factors affecting the switch to OSS. To accomplish this, we propose a research model based on the push-pull-mooring framework and empirically examine two categories of antecedents of switching behavior toward OSS. A survey was conducted among employees at various firms that had already switched to OSS; a total of 268 responses were collected and analyzed using structural equation modeling. The results are as follows. First, continuous maintenance cost, vendor dependency, functional indifference, and SW resource inefficiency are significantly related to the switch to OSS. Second, network-oriented support, testability, and strategic flexibility are significantly related to the switch to OSS. Finally, the willingness to secure SW competitiveness moderates the relationships between the push factors and the pull factors (with the exception of improved knowledge) and the switch to OSS. The results of this study will contribute to fields related to OSS both theoretically and practically.

Life Prediction of Failure Mechanisms of the CubeSat Mission Board using Sherlock of Reliability and Life Prediction Tools (신뢰성 수명예측 도구 Sherlock을 이용한 큐브위성용 임무보드의 고장 메커니즘별 수명예측)

  • Jeon, Su-Hyeon;Kwon, Yae-Ha;Kwon, Hyeong-Ahn;Lee, Yong-Geun;Lim, In-OK;Oh, Hyun-Ung
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.44 no.2
    • /
    • pp.172-180
    • /
    • 2016
  • A cubesat, classified as a pico-satellite, typically uses commercial-grade components that satisfy the vibration and thermal environmental specifications and goes into mission orbit after undergoing only minimal environmental tests, owing to its low cost and short development period. However, its reliability when exposed to physical environments such as the on-orbit thermal vacuum for long periods cannot be assured under such a minimal test criterion. In this paper, we analyze the reliability and predict the life of the cubesat mission board for each failure mechanism over its service life under launch and on-orbit environments, using the Sherlock software, which has been widely used in the automotive field to predict the reliability of electronic devices.
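Per-mechanism life prediction of this kind typically rests on empirical fatigue laws; a minimal sketch of a Coffin-Manson-style thermal-cycling estimate follows. The functional form is a common textbook simplification, and the constants, temperature swing, and cycle rate are illustrative placeholders, not values from the paper or from Sherlock.

```python
# Illustrative only: C and m are placeholder material constants, and the
# orbital thermal-cycle count is a made-up example, not mission data.

def cycles_to_failure(delta_t_c, c=1.0e6, m=2.0):
    """Simplified Coffin-Manson-style law: N_f = C * (dT)^-m,
    where dT is the thermal-cycle temperature swing in degC."""
    return c * delta_t_c ** -m

def service_life_years(delta_t_c, thermal_cycles_per_day):
    """Convert predicted cycles-to-failure into calendar life."""
    return cycles_to_failure(delta_t_c) / (thermal_cycles_per_day * 365.25)

# e.g. a 20 degC on-orbit swing at ~15 eclipse cycles per day (LEO-like):
life = service_life_years(20.0, 15)
```

Comparing such a predicted life against the mission duration is the kind of per-mechanism check the paper performs with Sherlock for launch and on-orbit loads.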