• Title/Summary/Keyword: exploration methodology


A Preliminary Exploration on Component Based Software Engineering

  • Basha, N Md Jubair;Ganapathy, Gopinath;Moulana, Mohammed
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.143-148 / 2022
  • Component-based software development (CBD) is a methodology that the software industry has embraced to accelerate development, reduce costs and timelines, minimize testing requirements, and improve quality and output. Compared with the conventional software development approach, it allows systems to be completed more quickly. Component-based software engineering (CBSE) contributes significantly to the software development process through the selection of components, the identification of systems, and the evaluation of those systems. The objective of CBSE is to codify and standardize all disciplines that support CBD-related operations. A comparison between component-based and scripting technologies reveals that, in terms of qualitative performance, component-based technologies scale more effectively. Further study and application of CBSE are directly tied to the success of the CBD approach. This paper explores introductory concepts and a comparative analysis related to component-based software engineering, which has been around for a while, and also addresses the issues arising from the still-lacking adoption of CBSE.
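
To make the component idea above a little more tangible, here is a minimal sketch (not drawn from the paper; the Storage and NoteSystem names are invented for illustration): a component is selected for a system because it satisfies a declared interface, and the system is assembled by composition rather than by modifying component internals.

```python
from abc import ABC, abstractmethod

# A component is specified by the interface it provides, not by its implementation.
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One reusable component that satisfies the Storage contract."""
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data[key]

class NoteSystem:
    """A system assembled by selecting any component that matches the interface."""
    def __init__(self, storage: Storage):
        self.storage = storage

    def add_note(self, name, text):
        self.storage.save(name, text)

if __name__ == "__main__":
    system = NoteSystem(InMemoryStorage())   # component selection at composition time
    system.add_note("todo", "evaluate candidate components")
    print(system.storage.load("todo"))
```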

Matrix Formation in Univariate and Multivariate General Linear Models

  • Arwa A. Alkhalaf
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.44-50 / 2024
  • This paper offers an overview of matrix formation and calculation techniques within the framework of General Linear Models (GLMs). It takes a sequential approach, beginning with a detailed exploration of matrix formation and calculation methods in regression analysis and univariate analysis of variance (ANOVA), and then extends the discussion to multivariate analysis of variance (MANOVA). The primary objective of this study was to provide a clear and accessible explanation of the underlying matrices that play a crucial role in GLMs, linking essentially different statistical methods through the fundamental principles and algebraic foundations that underpin GLM estimation. The insights presented here aim to assist researchers, statisticians, and data analysts in deepening their understanding of GLMs and their practical implementation in diverse research domains. The paper thereby contributes to a better comprehension of the matrix-based techniques that can be extended to GLMs.
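
To make the matrix formation concrete, the following minimal NumPy sketch (illustrative only, not taken from the paper) builds a regression design matrix and applies the same normal-equation estimate first to a univariate response vector and then to a multivariate response matrix, which is essentially the step that carries the machinery from regression and ANOVA over to MANOVA.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Design matrix X: an intercept column plus one predictor (regression);
# a one-way ANOVA would instead use dummy-coded group columns here.
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])            # shape (n, 2)

# Univariate GLM: y = X b + e, estimated from the normal equations (X'X) b = X'y.
y = 2.0 + 1.5 * x + rng.normal(scale=0.1, size=n)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print("univariate estimates:", beta_hat)

# Multivariate GLM (MANOVA setting): Y holds one column per response variable,
# and the identical normal equations now yield a coefficient matrix B.
Y = np.column_stack([y, -1.0 + 0.5 * x + rng.normal(scale=0.1, size=n)])
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)       # shape (2, 2)
print("multivariate estimates:\n", B_hat)
```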

The Role of Economic Democratization in Economic Development

  • PanJin KIM
    • East Asian Journal of Business Economics (EAJBE) / v.12 no.2 / pp.29-34 / 2024
  • Purpose: The primary objective of this study was to examine the influence of economic democratization on economic development from diverse perspectives. Research design, data, and methodology: Justifying the qualitative literature methods used in this study is essential, as extensive descriptions, justifications, and explanations of the methods allow researchers to increase the reliability of their studies for specific audiences. Initially, the concept and principal attributes of economic democratization were scrutinized, followed by an exploration of its manifold effects on economic development. Results: This study thereby provides a comprehensive understanding of how economic democratization fosters economic growth and advancement in contemporary society. Additionally, the study deliberates on the constraints and hurdles of economic democratization and proposes policy recommendations for mitigating them in the future. Conclusion: In conclusion, this study is anticipated to furnish foundational data on regional economic development to both academia and policymakers. It achieves this by thoroughly evaluating the impact of economic democratization on economic development and delving into the dynamic interaction between democracy and economic progress.

A Preliminary Exploration on Component Based Software Engineering

  • N Md Jubair Basha;Gopinath Ganapathy;Mohammed Moulana
    • International Journal of Computer Science & Network Security / v.24 no.8 / pp.119-124 / 2024
  • Component-based software development (CBD) is a methodology that the software industry has embraced to accelerate development, reduce costs and timelines, minimize testing requirements, and improve quality and output. Compared with the conventional software development approach, it allows systems to be completed more quickly. Component-based software engineering (CBSE) contributes significantly to the software development process through the selection of components, the identification of systems, and the evaluation of those systems. The objective of CBSE is to codify and standardize all disciplines that support CBD-related operations. A comparison between component-based and scripting technologies reveals that, in terms of qualitative performance, component-based technologies scale more effectively. Further study and application of CBSE are directly tied to the success of the CBD approach. This paper explores introductory concepts and a comparative analysis related to component-based software engineering, which has been around for a while, and also addresses the issues arising from the still-lacking adoption of CBSE.

Model Validation of a Fast Ethernet Controller for Performance Evaluation of Network Processors (네트워크 프로세서의 성능 예측을 위한 고속 이더넷 제어기의 상위 레벨 모델 검증)

  • Lee Myeong-jin
    • Journal of KIISE:Computing Practices and Letters / v.11 no.1 / pp.92-99 / 2005
  • In this paper, we present a high-level design methodology applied to a network system-on-a-chip (SOC) using SystemC. The main target of our approach is to find the optimum performance parameters for high network address translation (NAT) throughput. The Fast Ethernet media access controller (MAC) and its direct memory access (DMA) controller are modeled with SystemC at the transaction level. They are calibrated through cycle-based measurements of the operation of the real Verilog register transfer level (RTL) design. The NAT throughput of the model is within ±10% of the output of the real evaluation board. The simulation speed of the model is more than 100 times faster than that of the RTL. The validated models are used for intensive architecture exploration to find the performance bottleneck in the NAT router.
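
The abstract describes transaction-level models whose timings are calibrated against cycle measurements of the RTL. As a rough analogue of how such a calibrated model turns into a throughput figure (a sketch only; the clock rate and per-stage cycle counts below are invented placeholders, not the paper's calibrated values):

```python
# Hypothetical per-packet cycle budgets, standing in for values that would be
# calibrated against cycle-accurate measurements of the Verilog RTL.
CLOCK_HZ = 100e6            # assumed core clock
PACKET_BYTES = 64           # minimum-size frames, the usual worst case for NAT
CYCLES_PER_PACKET = {
    "mac_rx": 120,          # MAC receive handling (assumed)
    "dma_in": 80,           # DMA transfer into memory (assumed)
    "nat_lookup": 150,      # address-translation table lookup (assumed)
    "dma_out": 80,
    "mac_tx": 120,
}

def estimated_nat_throughput():
    """Packets/s and Mbit/s for one in-order pass through the modeled stages."""
    cycles = sum(CYCLES_PER_PACKET.values())
    pps = CLOCK_HZ / cycles
    return pps, pps * PACKET_BYTES * 8 / 1e6

if __name__ == "__main__":
    pps, mbps = estimated_nat_throughput()
    print(f"estimated NAT throughput: {pps:,.0f} packets/s ({mbps:.1f} Mbit/s)")
```

Sweeping the stage budgets (for example, the lookup cost) in such a model is what makes it useful for the kind of architecture exploration the paper performs with the validated SystemC models.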

Stretch-free Normal Moveout Correction (Stretch가 없는 수직 시간차 보정)

  • Pyun, Sukjoon
    • Geophysics and Geophysical Exploration / v.20 no.4 / pp.232-240 / 2017
  • Normal moveout (NMO) correction is one of the main procedures of seismic reflection data processing and a crucial pre-processing step for AVO analysis. Unfortunately, the stretch phenomenon, an intrinsic problem of NMO correction, degrades the quality of the stack section and the reliability of AVO analysis. Although muting is applied to resolve this problem, it sacrifices far-offset traces, which makes it all the more worthwhile to develop an advanced NMO correction technique without stretch. In this paper, easy and detailed explanations are provided of the definition and methodology of NMO correction, and then the cause of stretch is explained along with its characteristics. A graphical explanation of NMO correction is given for an intuitive understanding of the stretch phenomenon, and the theoretical formulation is derived to understand NMO correction quantitatively. By examining the muting process used to remove NMO stretch, the limitations of conventional methods are investigated and the need for a new solution is discussed. Among the various stretch-free NMO corrections, we describe one based on inverse theory. Finally, the stretch-free NMO correction is verified on a synthetic example and real data.
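
For readers who want the mechanics spelled out: conventional NMO correction maps each zero-offset time t0 to the hyperbolic arrival time t(x) = sqrt(t0^2 + x^2/v^2) and resamples the trace there; because the moveout shift t(x) - t0 shrinks as t0 grows, shallow samples are pulled apart, which is the stretch. Below is a minimal NumPy sketch of the conventional correction (not the paper's inverse-theory method), assuming a single trace and a constant velocity.

```python
import numpy as np

def nmo_correct(trace, dt, offset, velocity):
    """Conventional NMO correction of one trace by time-shift interpolation.

    trace    : amplitudes sampled every dt seconds
    offset   : source-receiver offset (m)
    velocity : constant NMO velocity (m/s), an assumption for this sketch
    """
    n = len(trace)
    t0 = np.arange(n) * dt                           # output (zero-offset) times
    tx = np.sqrt(t0**2 + (offset / velocity)**2)     # input times on the hyperbola
    t_in = np.arange(n) * dt
    # Resample the recorded trace at tx; queries beyond the record are zeroed.
    # The varying shift tx - t0 is exactly what lengthens (stretches) wavelets.
    return np.interp(tx, t_in, trace, left=0.0, right=0.0)

if __name__ == "__main__":
    dt, n = 0.004, 500
    trace = np.zeros(n)
    trace[200] = trace[400] = 1.0                    # two spike "reflections"
    out = nmo_correct(trace, dt, offset=1000.0, velocity=2000.0)
    print("corrected spike samples:", np.nonzero(out > 0.5)[0])
```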

A Study in Seismic Signal Analysis for the First Arrival Picking (초동발췌를 위한 탄성파 신호분석연구)

  • Lee, Doo-Sung
    • Geophysics and Geophysical Exploration / v.10 no.2 / pp.131-137 / 2007
  • In consideration of first-arrival picking methodology and the errors inherent in the picking process, I propose, from the computerization point of view, a practical algorithm for picking and error computation. The proposed picking procedure consists of two steps: 1) pick the first coherent peak or trough event, and 2) derive a line that approximates the record in the interval immediately before that pick and set the intercept time of the line as the first break. The length of the fitting interval used in the experiment is a few samples less than one quarter of the width of the arriving wavelet. A quantitative measure of the error involved in first-arrival picking is defined as the time length needed to determine whether an event is the first arrival; this time length is expressed as a function of the frequency bandwidth of the signal and the S/N ratio. For three sets of cross-well seismic data, first breaks are picked twice, manually and by the proposed method, and the error bound for each trace is computed at the same time. The experimental results show the good performance of the proposed picking method and the usefulness of the quantitative error measure in pick-quality evaluation.
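
A compact sketch of the two-step procedure summarized above is given below; the detection threshold, fitting length, and synthetic trace are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def pick_first_break(trace, dt, fit_len, threshold):
    """Two-step pick: (1) locate the first coherent peak or trough,
    (2) fit a line to the samples leading up to it and take the line's
    zero-amplitude intercept time as the first break."""
    # Step 1: first sample whose amplitude exceeds the threshold, then walk
    # forward to the local extremum (the first coherent peak/trough).
    above = np.nonzero(np.abs(trace) > threshold)[0]
    if len(above) == 0:
        return None
    i = above[0]
    while i + 1 < len(trace) and abs(trace[i + 1]) > abs(trace[i]):
        i += 1
    # Step 2: least-squares line over the fit_len samples before the extremum.
    lo = max(0, i - fit_len)
    t = np.arange(lo, i + 1) * dt
    slope, intercept = np.polyfit(t, trace[lo:i + 1], 1)
    return -intercept / slope if slope != 0 else t[0]

if __name__ == "__main__":
    dt = 0.001
    t = np.arange(0, 0.3, dt)
    onset = 0.1                                      # true first arrival at 100 ms
    trace = np.where(t >= onset, np.sin(2 * np.pi * 30 * (t - onset)), 0.0)
    # fit_len of 5 samples is a few samples less than a quarter period of 30 Hz.
    print("picked first break (s):", pick_first_break(trace, dt, fit_len=5, threshold=0.2))
```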

Multimedia Extension Instructions and Optimal Many-core Processor Architecture Exploration for Portable Ultrasonic Image Processing (휴대용 초음파 영상처리를 위한 멀티미디어 확장 명령어 및 최적의 매니코어 프로세서 구조 탐색)

  • Kang, Sung-Mo;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information / v.17 no.8 / pp.1-10 / 2012
  • This paper proposes a design space exploration methodology for many-core processors that include multimedia-specific instructions, to support high-performance, low-power ultrasound imaging on portable devices. To explore the impact of the multimedia instructions, we compare programs using them against baseline programs on the same many-core processor in terms of execution time, energy efficiency, and area efficiency. Experimental results using a 256×256 ultrasound image indicate that the programs using multimedia instructions improve execution time by a factor of 3.16, energy efficiency by a factor of 8.13, and area efficiency by a factor of 3.16 over the baseline programs. Likewise, the programs using multimedia instructions outperform the baseline programs on a 240×320 image (2.16 times in execution time, 4.04 times in energy efficiency, 2.16 times in area efficiency) as well as on a 240×400 image (2.25 times in execution time, 4.34 times in energy efficiency, 2.25 times in area efficiency). In addition, we explore the optimal PE architecture of many-core processors including the multimedia instructions by varying the number of PEs and the memory size.
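
As a generic illustration of this kind of design space sweep (the cost model and every constant below are invented placeholders, not the paper's simulation results), one can vary the number of processing elements (PEs) and rank the configurations by execution time, energy efficiency, and area efficiency:

```python
# All parameters are hypothetical stand-ins for numbers that would come from
# cycle-accurate simulation of the many-core processor.
WORK_OPS = 256 * 256 * 50        # assumed operations per 256x256 ultrasound frame
PARALLEL_FRACTION = 0.97         # assumed Amdahl-style parallel fraction
CLOCK_HZ = 500e6
PE_AREA_MM2 = 0.08               # assumed area per PE
PE_POWER_W = 0.015               # assumed dynamic power per active PE

def evaluate(num_pes):
    serial = WORK_OPS * (1 - PARALLEL_FRACTION)
    parallel = WORK_OPS * PARALLEL_FRACTION / num_pes
    exec_time = (serial + parallel) / CLOCK_HZ          # seconds per frame
    energy = PE_POWER_W * num_pes * exec_time           # joules per frame
    area = PE_AREA_MM2 * num_pes                        # mm^2
    return {
        "pes": num_pes,
        "time_ms": round(exec_time * 1e3, 3),
        "energy_eff": round(1.0 / energy, 1),           # frames per joule
        "area_eff": round(1.0 / (exec_time * area), 1), # frames/s per mm^2
    }

if __name__ == "__main__":
    for cfg in sorted((evaluate(n) for n in (16, 64, 256, 1024)),
                      key=lambda c: c["area_eff"], reverse=True):
        print(cfg)
```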

Using a H/W ADL-based Compiler for Fixed-point Audio Codec Optimization thru Application Specific Instructions (응용프로그램에 특화된 명령어를 통한 고정 소수점 오디오 코덱 최적화를 위한 ADL 기반 컴파일러 사용)

  • Ahn Min-Wook;Paek Yun-Heung;Cho Jeong-Hun
    • The KIPS Transactions:PartA / v.13A no.4 s.101 / pp.275-288 / 2006
  • Rapid design space exploration is crucial to customizing an embedded system design to exploit the application's behavior. As time-to-market becomes a key design concern, the approach based on an application specific instruction-set processor (ASIP) is being considered more seriously as an alternative design methodology. In this approach, the instruction set architecture (ISA) of the target processor is frequently modified to best fit the application with regard to code size and speed. The two goals of this paper are to introduce our new retargetable compiler and to show how it has been used in ASIP-based design space exploration for a popular digital signal processing (DSP) application. The newly developed retargetable compiler provides not only the functionality of previous retargetable compilers but also visualizes and profiles the features of the application program, helping architecture designers and application programmers insert new application specific instructions into the target architecture to increase performance. Given an initial RISC-style ISA for the target processor, we characterized the application code and incrementally updated the ISA with more application specific instructions to give the compiler a better chance to optimize the assembly code for the application. We obtain a 32% performance increase and a 20% program size reduction using six audio-codec-specific instructions identified with the retargetable compiler. Our experimental results offer evidence that a highly retargetable compiler is essential to rapidly prototype a new ASIP for a specific application.
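
To give a flavor of how profile data can suggest candidate application-specific instructions (a toy sketch; the operation trace, cycle costs, and fused-instruction latency are all invented, not the compiler's actual output), the snippet below counts the hottest adjacent operation pair in a dynamic profile and estimates the cycles saved if that pair were fused into one custom instruction:

```python
from collections import Counter

# Toy dynamic profile of a fixed-point audio inner loop (invented for illustration).
profile = ["load", "mul", "shr", "add", "mul", "shr", "add", "store"] * 10_000

# Count adjacent operation pairs; a hot pair such as (mul, shr) is a natural
# fusion candidate, e.g. a single fixed-point multiply-with-rescale instruction.
pairs = Counter(zip(profile, profile[1:]))
candidate, occurrences = pairs.most_common(1)[0]

BASE_CYCLES = {"load": 2, "store": 2, "mul": 2, "shr": 1, "add": 1}  # assumed costs
FUSED_CYCLES = 2                                                     # assumed latency

saved = occurrences * (BASE_CYCLES[candidate[0]] + BASE_CYCLES[candidate[1]] - FUSED_CYCLES)
total = sum(BASE_CYCLES[op] for op in profile)
print(f"hot pair {candidate}: {occurrences} occurrences, "
      f"~{100 * saved / total:.1f}% cycle reduction if fused")
```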

A Study on the Information Modeling of Defense R&D Process Using IDEF Methodology (IDEF 방법론을 이용한 국방 연구개발 프로세스의 정보모델링 연구)

  • Kim, Chul-Whan
    • The Journal of Society for e-Business Studies / v.10 no.1 / pp.41-60 / 2005
  • IDEF (Integrated Definition), a standard methodology for CALS process modelling, was applied to the weapon system R&D process to provide an information model by analysing the goals, inputs, outputs, and constraints of the R&D process. The information to be managed in R&D institutes was identified using SmartER, an automation program for IDEF1/IDEF1X, and an information model for the TO-BE model was obtained. The work process of weapon system R&D consists of the concept study phase, the exploration development phase, the system development phase, the prototype manufacturing phase, and the R&D results report writing phase. The information model of weapon system R&D represents the R&D work process with information sharing by means of the IWSDB. Since IDEF is suitable for large-scale system development such as weapon system R&D, further studies on IDEF would be required to achieve the goal of defense CALS.
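
As a small illustration of the kind of information such a model captures (the activity names and flows below are hypothetical, not the study's TO-BE model), each IDEF0 activity can be recorded with its inputs, controls, outputs, and mechanisms (ICOM) and chained into a process flow:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One IDEF0 box: the inputs/controls/outputs/mechanisms (ICOM) of an activity."""
    name: str
    inputs: list = field(default_factory=list)      # things transformed by the activity
    controls: list = field(default_factory=list)    # constraints and requirements
    outputs: list = field(default_factory=list)     # results passed downstream
    mechanisms: list = field(default_factory=list)  # people, tools, and systems used

exploration_development = Activity(
    name="Exploration development",
    inputs=["concept study report"],
    controls=["operational requirements", "budget constraints"],
    outputs=["exploration development report"],
    mechanisms=["R&D institute", "IWSDB"],
)

# Chaining one phase's outputs to the next phase's inputs sketches the process flow.
system_development = Activity(
    name="System development",
    inputs=list(exploration_development.outputs),
    controls=["development specifications"],
    outputs=["system design data"],
    mechanisms=["R&D institute", "IWSDB"],
)
print(system_development)
```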
