Title/Summary/Keyword: design and analysis of algorithms


A study on the design of an efficient hardware and software mixed-mode image processing system for detecting patient movement (환자움직임 감지를 위한 효율적인 하드웨어 및 소프트웨어 혼성 모드 영상처리시스템설계에 관한 연구)

  • Seungmin Jung;Euisung Jung;Myeonghwan Kim
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.29-37
    • /
    • 2024
  • In this paper, we propose an efficient image processing system to detect and track the movement of specific objects such as patients. The proposed system extracts the outline area of an object from a binarized difference image by applying a thinning algorithm that enables more precise detection than previous algorithms and is well suited to mixed-mode design. The binarization and thinning steps, which require heavy computation, are designed at RTL (Register Transfer Level) and replaced with optimized hardware blocks through logic circuit synthesis. The designed binarization and thinning block was synthesized into a logic circuit using a standard 180 nm CMOS library, and its operation was verified through simulation. For comparison with software-based performance, the binarization and thinning operations were also analyzed on sample images of 640 × 360 resolution in a 32-bit FPGA embedded system environment. Verification confirmed that the mixed-mode design improves the processing speed of the binarization and thinning stages by 93.8% compared to the previous software-only implementation. The proposed mixed-mode object recognition system is expected to monitor patient movements efficiently even in edge computing environments where artificial intelligence networks are not deployed.
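
The paper accelerates its binarization and thinning stages in hardware, but the same pipeline can be prototyped in a few lines of software. Below is a minimal sketch, assuming OpenCV and scikit-image are available and that `prev_frame` and `curr_frame` are consecutive 8-bit grayscale frames; the threshold value is an arbitrary placeholder, not a parameter from the paper.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def motion_outline(prev_frame, curr_frame, thresh=30):
    """Binarize a frame-difference image and thin it to a 1-pixel outline.
    thresh is a placeholder binarization level, not taken from the paper."""
    # Difference image between consecutive frames
    diff = cv2.absdiff(curr_frame, prev_frame)
    # Binarization: pixels that changed by more than `thresh` become foreground
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Thinning: reduce the moving region to a skeleton-like outline
    skeleton = skeletonize(binary > 0)
    return skeleton.astype(np.uint8) * 255
```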

Capacity determination for a rainfall harvesting unit using an optimization method (최적화 기법을 이용한 빗물이용시설의 저류 용량 결정)

  • Jin, Youngkyu;Kang, Taeuk;Lee, Sangho;Jeong, Taekmun
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.9
    • /
    • pp.681-690
    • /
    • 2020
  • Generally, the design capacity of a rainwater harvesting unit is determined by a trial-and-error method that repeatedly evaluates analysis scenarios over capacity, reliability, rainwater utilization ratio, and so on. This method not only takes a long time but also involves many calculations, so analysis errors may occur. To solve this problem, this study proposed a way to directly determine the minimum capacity that meets arbitrary target reliabilities using a global optimization method. The method was implemented as a simulation model with a particle swarm optimization (PSO) algorithm written in Python. The optimizer was pyswarm, an open-source Python package that can explore a global optimum while handling constraints. The developed program was applied to the design data of the rainwater harvesting unit constructed in Cheongna district 1 in Incheon to verify the efficiency, stability, and accuracy of the analysis. The capacity determination method presented in this study is considered to be of practical value because it can improve the current level of analytical technology.
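
Since the abstract names pyswarm as the optimizer, a minimal usage sketch is easy to give. The objective, constraint, bounds, and target reliability below are hypothetical stand-ins; in particular, `simulate_reliability` is a toy curve replacing the paper's rainfall-runoff simulation.

```python
import math
from pyswarm import pso

TARGET_RELIABILITY = 0.9  # arbitrary target, not from the paper

def simulate_reliability(capacity_m3):
    # Toy saturating curve standing in for the real rainfall simulation
    return 1.0 - math.exp(-capacity_m3 / 2000.0)

def objective(x):
    # Minimize the storage capacity itself
    return x[0]

def reliability_constraint(x):
    # pyswarm treats f_ieqcons(x) >= 0 as feasible: achieved
    # reliability must meet or exceed the target
    return [simulate_reliability(x[0]) - TARGET_RELIABILITY]

# Search capacities between 0 and 10,000 m^3 (placeholder bounds)
xopt, fopt = pso(objective, lb=[0.0], ub=[10000.0],
                 f_ieqcons=reliability_constraint,
                 swarmsize=100, maxiter=200)
print(f"minimum capacity meeting the target: {xopt[0]:.1f} m^3")
```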

The Analysis and Design of Advanced Neurofuzzy Polynomial Networks (고급 뉴로퍼지 다항식 네트워크의 해석과 설계)

  • Park, Byeong-Jun;O, Seong-Gwon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.39 no.3
    • /
    • pp.18-31
    • /
    • 2002
  • In this study, we introduce the concept of advanced neurofuzzy polynomial networks (ANFPN), a hybrid modeling architecture combining neurofuzzy networks (NFN) and polynomial neural networks (PNN). These networks are highly nonlinear rule-based models. The development of the ANFPN dwells on the technologies of Computational Intelligence (CI), namely fuzzy sets, neural networks, and genetic algorithms. NFN contributes to the formation of the premise part of the rule-based structure of the ANFPN. The consequence part of the ANFPN is designed using PNN. At the premise part, NFN uses both simplified fuzzy inference and the error back-propagation learning rule. The parameters of the membership functions, learning rates, and momentum coefficients are adjusted through genetic optimization. As the consequence structure of the ANFPN, PNN is a flexible network architecture whose structure (topology) is developed through learning. In particular, the number of layers and nodes of the PNN are not fixed in advance but are generated in a dynamic way. In this study, we introduce two kinds of ANFPN architectures, the basic and the modified one, which differ in the number of input variables and the order of the polynomial in each layer of the PNN structure. Owing to the specific features of the two combined architectures, it is possible to capture the nonlinear characteristics of a process system and to obtain better output performance with superb predictive ability. The availability and feasibility of the ANFPN are discussed and illustrated with the aid of two representative numerical examples. The results show that the proposed ANFPN can produce models with higher accuracy and predictive ability than any other method presented previously.
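
The PNN consequence part grows its topology from simple polynomial nodes fit to data rather than fixed in advance. As a rough illustration of one such building block, here is a second-order two-input node fit by least squares; this follows the common GMDH-style node form and is only an assumption about the polynomial order, not the paper's exact design.

```python
import numpy as np

def fit_polynomial_node(x1, x2, y):
    """Fit a second-order two-input polynomial node
        y ~ a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2
    by least squares; returns the coefficients and a predictor."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(u1, u2):
        return (coef[0] + coef[1] * u1 + coef[2] * u2
                + coef[3] * u1**2 + coef[4] * u2**2 + coef[5] * u1 * u2)

    return coef, predict

# Layers of such nodes are stacked, keeping only the best-performing
# nodes at each layer, so the network topology emerges dynamically.
```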

Research on APC Verification for Disaster Victims and Vulnerable Facilities (재난약자 및 취약시설에 대한 APC실증에 관한 연구)

  • Seungyong Kim;Incheol Hwang;Dongsik Kim;Jungjae Shin;Seunggap Yong
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.199-205
    • /
    • 2024
  • Purpose: This study aims to improve the recognition rate of Auto People Counting (APC) in accurately identifying and providing information on remaining evacuees in disaster-vulnerable facilities such as nursing homes to firefighting and other response agencies in the event of a disaster. Methods: In this study, a baseline model was established using CNN (Convolutional Neural Network) models to improve the algorithm for recognizing images of incoming and outgoing individuals through cameras installed in actual disaster-vulnerable facilities operating APC systems. Various algorithms were analyzed, and the top seven candidates were selected. The research then used transfer learning models to select the optimal algorithm with the best performance. Results: Experiments confirmed the precision and recall of the DenseNet201 and ResNet152V2 models, which exhibited the best performance in terms of time and accuracy. Both models demonstrated 100% accuracy for all labels, with the DenseNet201 model showing superior performance. Conclusion: The optimal algorithm applicable to APC among various artificial intelligence algorithms was selected. Further research on algorithm analysis and learning is required to accurately identify incoming and outgoing individuals in disaster-vulnerable facilities in various disaster situations, such as emergencies, in the future.
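
The best-performing candidate was a transfer-learning model built on DenseNet201. A minimal sketch of that setup in Keras, assuming a binary in/out label and a 224 × 224 input; these and the other hyperparameters are placeholders, not values reported in the paper.

```python
import tensorflow as tf

# Pretrained DenseNet201 backbone with frozen ImageNet features
base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # in vs. out
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```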

Automated Composition System of Web Services by Semantic and Workflow based Hybrid Techniques (시맨틱과 워크플로우 혼합기법에 의한 자동화된 웹 서비스 조합시스템)

  • Lee, Yong-Ju
    • The KIPS Transactions:PartD
    • /
    • v.14D no.2
    • /
    • pp.265-272
    • /
    • 2007
  • In this paper, we implement an automated composition system for web services using hybrid techniques that merge the benefits of BPEL with the advantages of OWL-S. BPEL techniques have practical capabilities that fulfill the needs of the business environment, such as fault handling and transaction management. However, their main shortcoming is the static composition approach, where service selection and flow management are done a priori and manually. In contrast, OWL-S techniques use ontologies to describe web service functionality in machine-understandable form, making it possible to discover and integrate web services automatically. This allows the dynamic integration of compatible web services, possibly discovered at run time, into the composition schema. However, the development of these approaches is still in its infancy and has been largely detached from the BPEL composition effort. In this work, we describe the design of the SemanticBPEL architecture, a hybrid system of BPEL4WS and OWL-S, and propose algorithms for web service search and integration. In particular, SemanticBPEL has been implemented based on open-source tools. The proposed system is compared with existing BPEL systems through functional analysis, and these comparisons show that our system outperforms them.
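
At the core of OWL-S-style discovery is a capability match: a candidate service qualifies when its advertised output concept is compatible with the requested one under the ontology. A toy sketch of such a matcher is shown below, with a hand-coded concept hierarchy standing in for a real OWL reasoner; all names are illustrative and not taken from SemanticBPEL.

```python
# Toy concept hierarchy: child -> parent
# (a real system would query an OWL reasoner instead)
IS_A = {"CreditCardPayment": "Payment",
        "BankTransfer": "Payment",
        "Payment": "FinancialService"}

def subsumes(general, specific):
    """True if `specific` equals `general` or is one of its descendants."""
    while specific is not None:
        if specific == general:
            return True
        specific = IS_A.get(specific)
    return False

def match_services(requested_output, services):
    """Rank advertised services: exact concept matches first, then
    plug-in matches whose output is more specific than the request."""
    exact = [n for n, out in services.items() if out == requested_output]
    plugin = [n for n, out in services.items()
              if out != requested_output and subsumes(requested_output, out)]
    return exact + plugin

services = {"PayCorp": "CreditCardPayment", "WireCo": "BankTransfer"}
print(match_services("Payment", services))  # ['PayCorp', 'WireCo']
```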

Distributed Throughput-Maximization Using the Up- and Downlink Duality in Wireless Networks (무선망에서의 상하향 링크 쌍대성 성질을 활용한 분산적 수율 최대화 기법)

  • Park, Jung-Min;Kim, Seong-Lyun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.11A
    • /
    • pp.878-891
    • /
    • 2011
  • We consider the throughput-maximization problem for both the up- and downlink in a wireless network with interference channels. For this purpose, we design an iterative and distributed uplink algorithm based on Lagrangian relaxation. Using the uplink power prices and network duality, we achieve throughput-maximization in the dual downlink, which has a symmetric channel and an equal power budget compared to the uplink. The network duality we prove here is a generalized version of previous research [10], [11]. Computational tests show that the up- and downlink throughput of our algorithms is close to the optimal value for channel orthogonality factors $\theta \in (0.5, 1]$. On the other hand, when the channels are slightly orthogonal ($\theta \in (0, 0.5]$), we observe some throughput degradation in the downlink. We have extended our analysis to the real downlink, which has a nonsymmetric channel and an unequal power budget compared to the uplink, and show that the modified duality-based approach applies to it as well. Considering the complexity of the algorithms in [6] and [18], we conclude that these results are quite encouraging in terms of both performance and practical applicability of the generalized duality theorem.
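
For orientation, the underlying problem is the standard sum-throughput maximization over an interference channel. A hedged reconstruction of the formulation, where the orthogonality factor $\theta$ weights the cross-link interference (the paper's exact notation and convention for $\theta$ may differ):

$$\max_{0 \le p_i \le p_i^{\max}} \; \sum_{i=1}^{N} \log_2\!\left(1 + \frac{g_{ii}\,p_i}{\theta \sum_{j \neq i} g_{ij}\,p_j + \sigma^2}\right)$$

Here $p_i$ is link $i$'s transmit power, $g_{ij}$ the channel gain from transmitter $j$ to receiver $i$, and $\sigma^2$ the noise power. Lagrangian relaxation of the coupling interference terms yields per-link subproblems that can be solved distributively, with the multipliers acting as the "power prices" the abstract mentions.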

Development of an Edge-based Point Correlation Algorithm Avoiding Full Point Search in Visual Inspection System (전탐색 회피에 의한 고속 에지기반 점 상관 알고리즘의 개발)

  • Kang, Dong-Joong;Kim, Mun-Jo;Kim, Min-Sung;Lee, Eung-Joo
    • The KIPS Transactions:PartB
    • /
    • v.11B no.3
    • /
    • pp.327-336
    • /
    • 2004
  • For visual inspection systems in real industrial environments, designing a fast and stable pattern matching algorithm is one of the most important tasks. This paper presents an edge-based point correlation algorithm that avoids full search in a visual inspection system. Conventional algorithms based on NGC (normalized gray-level correlation) have to overcome some difficulties before they can be applied to automated inspection systems in factory environments. First of all, NGC algorithms have high time complexity and thus need high-performance hardware to run in real time. In addition, lighting conditions in realistic factory environments are not stable, and the intensity variation from uncontrolled lights causes many problems when NGC is applied directly as a pattern matching algorithm. In this paper, we propose an algorithm that solves these problems by using thinned and binarized edge data and by skipping full point search through edge-map analysis. A point correlation algorithm on the thinned edges is combined with an image pyramid technique to reduce time complexity. Matching edges instead of original gray-level pixel data overcomes the NGC problems, and the edge pyramid also provides fast and stable processing. All proposed methods are validated through experiments on real images.
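
The two speed levers the abstract describes, matching on thinned edge maps instead of gray levels and searching coarse-to-fine, can be sketched with off-the-shelf OpenCV primitives. The sketch below uses Canny edges and normalized cross-correlation as stand-ins; the paper's specific point-correlation measure and edge-map-based search skipping are not reproduced.

```python
import cv2

def pyramid_edge_match(image, template, levels=2):
    """Coarse-to-fine template matching on binary edge maps.
    Canny + normalized correlation are stand-ins for the paper's
    thinned-edge point correlation."""
    # Edge maps stand in for gray-level pixels (robust to lighting changes)
    img_e = cv2.Canny(image, 50, 150)
    tpl_e = cv2.Canny(template, 50, 150)

    # 1) Coarse search on a downscaled pyramid level
    s = 2 ** levels
    res = cv2.matchTemplate(
        cv2.resize(img_e, (img_e.shape[1] // s, img_e.shape[0] // s)),
        cv2.resize(tpl_e, (tpl_e.shape[1] // s, tpl_e.shape[0] // s)),
        cv2.TM_CCORR_NORMED)
    _, _, _, (cx, cy) = cv2.minMaxLoc(res)

    # 2) Fine search restricted to a window around the coarse hit,
    #    avoiding a full-resolution full point search
    x, y, pad = cx * s, cy * s, 2 * s
    h, w = tpl_e.shape
    y0, x0 = max(0, y - pad), max(0, x - pad)
    roi = img_e[y0:y0 + h + 2 * pad, x0:x0 + w + 2 * pad]
    res = cv2.matchTemplate(roi, tpl_e, cv2.TM_CCORR_NORMED)
    _, score, _, (fx, fy) = cv2.minMaxLoc(res)
    return (x0 + fx, y0 + fy), score
```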

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, and more broadly in high-dimensional data such as voice, images, and natural language, where it was difficult to achieve good performance with existing machine learning techniques. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network model. However, since not all network design alternatives can be tested given the nature of artificial neural networks, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate how well the models classify the class of interest, instead of overall accuracy. The detailed methods for applying each deep learning technique are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but for business data the distance between fields does not matter because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole characteristics of the data are learned at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of each field's position. In the case of the dropout technique, neurons are dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best model was the MLP model with two hidden layers using dropout. From the experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because CNNs performed well in binary classification problems, to which they have rarely been applied, as well as in the fields where their effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
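
A minimal sketch of the comparison's best non-CNN variant, an MLP with two hidden layers and 0.5 dropout, evaluated by F1 score. The two-hidden-layer structure and the 0.5 dropout rate come from the abstract; the layer widths, optimizer, and training settings are placeholders.

```python
import tensorflow as tf
from sklearn.metrics import f1_score

def build_mlp(n_features, units=64):
    """MLP with two hidden layers and 0.5 dropout after each
    (structure per the abstract; unit counts are placeholders)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dropout(0.5),   # neurons dropped with p = 0.5
        tf.keras.layers.Dense(units, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# Usage with the bank telemarketing data (X_*, y_* assumed prepared):
# model = build_mlp(X_train.shape[1])
# model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(X_train, y_train, epochs=20, batch_size=32)
# y_pred = (model.predict(X_test) > 0.5).astype("int32")
# print("F1:", f1_score(y_test, y_pred))
```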

Application of Texture Feature Analysis Algorithm used the Statistical Characteristics in the Computed Tomography (CT): A base on the Hepatocellular Carcinoma (HCC) (전산화단층촬영 영상에서 통계적 특징을 이용한 질감특징분석 알고리즘의 적용: 간세포암 중심으로)

  • Yoo, Jueun;Jun, Taesung;Kwon, Jina;Jeong, Juyoung;Im, Inchul;Lee, Jaeseung;Park, Hyonghu;Kwak, Byungjoon;Yu, Yunsik
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.1
    • /
    • pp.9-15
    • /
    • 2013
  • In this study, we propose a texture feature analysis (TFA) algorithm for automatic recognition of liver disease in computed tomography (CT) images, and apply the algorithm to the design of computer-aided diagnosis (CAD) for hepatocellular carcinoma (HCC). The performance of each algorithm was compared and evaluated. In the HCC image, a region of analysis (ROA, window size $40 \times 40$ pixels) was set, and the HCC recognition rate was calculated for six TFA parameters (average gray level, average contrast, measure of smoothness, skewness, measure of uniformity, and entropy). As a result, TFA was found to be significant as a measure of HCC recognition. The measure of uniformity gave the highest recognition rate; average contrast, measure of smoothness, and skewness were relatively high; and average gray level and entropy showed relatively low recognition rates among the parameters. The high-recognition algorithms (a maximum of 97.14%, a minimum of 82.86%) can be used to identify HCC lesions in imaging and to assist early clinical diagnosis, improving diagnostic efficiency over current practice. In future work, effective and quantitative analysis and criteria for generalized disease recognition need to be considered.
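
The six parameters are the standard first-order statistical texture measures computed from a region's gray-level histogram. A sketch of their computation over an 8-bit ROA, following the usual textbook definitions; the paper's exact normalization may differ.

```python
import numpy as np

def texture_features(roa, levels=256):
    """First-order statistical texture measures of an 8-bit grayscale ROA
    (e.g., a 40x40 window). Standard textbook definitions; the paper's
    exact normalization may differ."""
    hist = np.bincount(roa.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    z = np.arange(levels, dtype=float)

    mean = (z * p).sum()                       # average gray level
    var = ((z - mean) ** 2 * p).sum()
    std = np.sqrt(var)                         # average contrast
    smoothness = 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2)
    skewness = ((z - mean) ** 3 * p).sum() / (levels - 1) ** 2  # 3rd moment
    uniformity = (p ** 2).sum()                # measure of uniformity
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean, std, smoothness, skewness, uniformity, entropy
```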

Performance Analysis of the IEEE 802.11 Broadcast Scheme in a Wireless Data Network (무선 데이터 망에서 IEEE 802.11 브로드캐스트 기법의 성능 분석)

  • Park, Jae-Sung;Lim, Yu-Jin;Ahn, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.46 no.5
    • /
    • pp.56-63
    • /
    • 2009
  • The IEEE 802.11 standard has been used for wireless data networks such as wireless LANs, ad-hoc networks, and vehicular ad-hoc networks. Thus, the performance analysis of the IEEE 802.11 specification has been one of the hottest issues in network optimization and resource management. Most analysis studies have focused on the data plane of IEEE 802.11 unicast. However, IEEE 802.11 broadcast is widely used for topology management, path management, and data dissemination, so understanding the performance of the broadcast scheme is important for designing efficient wireless data networks. In this context, we analyze the IEEE 802.11 broadcast scheme in terms of the broadcast frame reception probability as a function of the distance from the sending node. Unlike other works, our analysis framework includes not only the system parameters of the IEEE 802.11 specification, such as transmission range, data rate, and minimum contention window, but also the networking environment, such as the number of nodes, network load, and radio propagation conditions. Therefore, our framework is expected to be useful for developing protocols and algorithms in dynamic wireless data networks.
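
A hedged sketch of the kind of computation such a framework supports: combine the MAC-level collision probability of saturated 802.11 broadcast (no retransmissions and a fixed contention window, so each node transmits in a slot with probability $\tau = 2/(W+1)$, a standard result for broadcast) with a distance-dependent channel term. The channel curve below is an illustrative placeholder, not the propagation model used in the paper.

```python
def broadcast_reception_prob(n_nodes, cw_min, distance_m,
                             tx_range_m=250.0, alpha=3.0):
    """Probability that a broadcast frame is received at a given distance.

    MAC part: saturated broadcast with a fixed contention window has
    per-slot transmission probability tau = 2 / (CWmin + 1); the frame
    survives if none of the other n-1 nodes picks the same slot.
    Channel part: a toy success curve decaying with (d / range)^alpha,
    a placeholder for the paper's radio propagation model."""
    tau = 2.0 / (cw_min + 1.0)
    p_no_collision = (1.0 - tau) ** (n_nodes - 1)
    p_channel = max(0.0, 1.0 - (distance_m / tx_range_m) ** alpha)
    return p_no_collision * p_channel

# Reception probability vs. distance for 20 nodes, CWmin = 31
for d in (50, 100, 150, 200):
    print(d, round(broadcast_reception_prob(20, 31, d), 3))
```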