• Title/Summary/Keyword: Security metric

Mapping landuse change and major food crops production in Nepal: Applications for forest resource management

  • Panta, Menaka;Kim, Kye-Hyun;Neupane, Hari Sharma;Joshi, Chudamani
    • Proceedings of the KSRS Conference
    • /
    • 2008.10a
    • /
    • pp.390-393
    • /
    • 2008
  • We analyzed land-use change, quantified the area covered by the major food crops (paddy, wheat, barley, maize, millet, and potato), and examined their productivity trends in Terai, Nepal from 1987 to 2006. We used time series of cropped area and production for each crop published by the Government of Nepal, Central Bureau of Statistics and Ministry of Agriculture and Cooperatives. Our results indicate that agricultural land increased by about 47% while forest decreased by 32% between 1964 and 2001 in Terai, and the total cropped area increased by 19% between 1987 and 2006. The largest incremental change was observed in potato (234%), followed by wheat (31%), maize (20%), and paddy (12%). However, the data revealed very low crop productivity, at less than half of its potential except for potato. The average food crop yield per hectare per year over the last 20 years was only 3.094 metric tons; only potato achieved a high average yield, at 10.34 metric tons, while the other crops yielded far less. A 3-period moving average showed that the productivity trends of barley and millet were stagnant, while the other crops fluctuated slightly and increased steadily over time. Further study is needed to understand how food productivity links to the present food supply, demand, and food security system in Terai, Nepal.
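
The 20-year average yield and the 3-period moving average mentioned above are simple arithmetic summaries; the sketch below shows how such a moving average is computed. The yield values are illustrative placeholders, not the paper's data (only the 3.094 t/ha average is reported).

```python
# 3-period moving average of a yearly yield series (values are hypothetical).
def moving_average_3(series):
    """Return the 3-period moving average of a yearly series."""
    return [sum(series[i - 2:i + 1]) / 3 for i in range(2, len(series))]

yearly_yield_t_per_ha = [2.8, 3.0, 2.9, 3.1, 3.3, 3.2, 3.4]  # placeholder data
print(moving_average_3(yearly_yield_t_per_ha))
```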

Lifesaver: Android-based Application for Human Emergency Falling State Recognition

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.267-275
    • /
    • 2021
  • In this paper, a smart application (Lifesaver) is developed on the Android platform to automatically determine a human emergency state using the various sensors of a mobile device. In practice, Lifesaver has many applications and can easily be combined with other applications to determine a human emergency. For example, if an elderly person falls for medical reasons, the application automatically detects that state and calls a person from the emergency contact list. Similarly, if a car crashes, the Lifesaver application helps by calling a person on the emergency contact list to save a life. The main objective of this project is therefore to develop an application that can save human lives. The proposed Lifesaver application assists a person in getting immediate attention when no help is present in four different situations. To develop the Lifesaver system, GPS is integrated to obtain the exact location of a person in an emergency, and a list of emergency contacts and authorities is maintained. To test and evaluate the Lifesaver system, data were collected from 50 different people in the 40-70 age range, and the performance of the Lifesaver application was evaluated and compared with other state-of-the-art applications. On average, the Lifesaver system achieved 95.5% detection accuracy and a value of 91.5 on the emergency index metric, outperforming other applications in this domain.
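
The abstract does not give the detection algorithm; the sketch below only illustrates, under assumed thresholds, the kind of accelerometer check a fall-detection app typically performs (a free-fall dip followed by an impact spike). It is not the paper's Android implementation.

```python
import math

FREE_FALL_G = 0.4   # assumed free-fall threshold, in g (illustrative)
IMPACT_G = 2.5      # assumed impact threshold, in g (illustrative)

def detect_fall(samples):
    """samples: iterable of (ax, ay, az) accelerometer readings in g."""
    saw_free_fall = False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude < FREE_FALL_G:
            saw_free_fall = True
        elif saw_free_fall and magnitude > IMPACT_G:
            return True  # free fall followed by an impact spike -> likely fall
    return False

if detect_fall([(0.0, 0.1, 1.0), (0.1, 0.1, 0.2), (1.8, 1.5, 1.9)]):
    print("Fall detected: notify an emergency contact with the GPS location")
```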

Evaluation and Comparative Analysis of Scalability and Fault Tolerance for Practical Byzantine Fault Tolerant based Blockchain (프랙티컬 비잔틴 장애 허용 기반 블록체인의 확장성과 내결함성 평가 및 비교분석)

  • Lee, Eun-Young;Kim, Nam-Ryeong;Han, Chae-Rim;Lee, Il-Gu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.2
    • /
    • pp.271-277
    • /
    • 2022
  • PBFT (Practical Byzantine Fault Tolerance) is a consensus algorithm that can reach consensus despite unintentional and intentional faults in a distributed network environment and can guarantee high performance and absolute finality. However, as the size of the network grows, the network load also increases because of the message broadcasting that occurs repeatedly during the consensus process. Owing to these characteristics, PBFT is suitable for small or private blockchains, but there are limits to its application to large or public blockchains. Because PBFT affects the performance of blockchain networks, industry needs to test whether PBFT is suitable for its products and services, and academia needs a unified evaluation metric and supporting technology for research on PBFT performance improvement. In this paper, quantitative evaluation metrics and an evaluation framework for PBFT-family consensus algorithms are studied. In addition, the throughput, latency, and fault tolerance of PBFT are evaluated using the proposed PBFT evaluation framework.
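
As a rough illustration of the quantitative metrics evaluated here, the sketch below computes throughput and average latency from per-round commit times, along with the standard PBFT fault-tolerance bound (n >= 3f + 1). The round records are made-up numbers, not measurements from the paper.

```python
# Illustrative consensus-round log: transactions per round and commit times (s).
rounds = [
    {"txs": 200, "start": 0.00, "commit": 0.85},
    {"txs": 200, "start": 0.85, "commit": 1.75},
    {"txs": 200, "start": 1.75, "commit": 2.70},
]

total_txs = sum(r["txs"] for r in rounds)
elapsed = rounds[-1]["commit"] - rounds[0]["start"]
throughput_tps = total_txs / elapsed                                   # tx per second
avg_latency = sum(r["commit"] - r["start"] for r in rounds) / len(rounds)

n_nodes = 16
max_faulty = (n_nodes - 1) // 3   # PBFT tolerates f Byzantine nodes when n >= 3f + 1

print(f"throughput: {throughput_tps:.1f} tx/s, "
      f"avg latency: {avg_latency:.2f} s, tolerated faults: {max_faulty}")
```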

Image Analysis Fuzzy System

  • Abdelwahed Motwakel;Adnan Shaout;Anwer Mustafa Hilal;Manar Ahmed Hamza
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.1
    • /
    • pp.163-177
    • /
    • 2024
  • Fingerprint image quality depends on how clearly ridges are separated by valleys and on the uniformity of that separation. The condition of the skin still dominates the overall quality of the fingerprint, and the identification performance of such a system is very sensitive to the quality of the captured fingerprint image. Fingerprint image quality analysis and enhancement are therefore useful for improving the performance of fingerprint identification systems. This paper introduces a fuzzy technique for both fingerprint image quality analysis and enhancement. First, quality analysis is performed by extracting four features from a fingerprint image: the local clarity score (LCS), global clarity score (GCS), ridge_valley thickness ratio (RVTR), and Global Contrast Factor (GCF). A fuzzy logic technique using the Mamdani fuzzy rule model is designed, and the fuzzy inference system is able to analyse and determine the fingerprint image type (oily, dry, or neutral) from the extracted feature values and the fuzzy inference rules. The test percentages of the fuzzy inference system for each type are 81.33% for dry fingerprints, 54.75% for oily, and 68.48% for neutral. Secondly, fuzzy morphology is applied to enhance dry and oily fingerprint images. The fuzzy morphology method improves the quality of a fingerprint image and thus significantly improves the performance of the fingerprint identification system. All experimental work for both quality analysis and image enhancement was done using the DB_ITS_2009 database, a private database collected by the Department of Electrical Engineering, Institute of Technology Sepuluh Nopember, Surabaya, Indonesia. Performance was evaluated using the Feature Similarity Index (FSIM), an image quality assessment (IQA) metric that uses computational models to measure image quality consistently with subjective evaluations. The proposed system outperformed the classical system by 900% for dry fingerprint images and 14% for oily fingerprint images.
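
The sketch below gives only the flavor of the Mamdani-style fuzzification step: one feature (the ridge_valley thickness ratio, RVTR) is mapped to dry/neutral/oily membership degrees and the highest degree wins. The paper's system fuzzifies four features (LCS, GCS, RVTR, GCF); the membership breakpoints here are assumptions for illustration.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_rvtr(rvtr):
    memberships = {
        "dry":     triangular(rvtr, 0.0, 0.2, 0.4),   # thin ridges, broken contact
        "neutral": triangular(rvtr, 0.3, 0.5, 0.7),
        "oily":    triangular(rvtr, 0.6, 0.8, 1.0),   # ridges merge, valleys vanish
    }
    return max(memberships, key=memberships.get), memberships

print(classify_rvtr(0.33))
```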

Software Metric for CBSE Model

  • Iyyappan. M;Sultan Ahmad;Shoney Sebastian;Jabeen Nazeer;A.E.M. Eljialy
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.12
    • /
    • pp.187-193
    • /
    • 2023
  • Large software systems are being produced with a noticeably higher level of quality through component-based software engineering (CBSE), which places a strong emphasis on decomposing engineered systems into logical or functional components with clearly defined interfaces for inter-component communication. Component-based software engineering is applicable to commercial products and open-source software alike. Software metrics play a major role in application development, improving the quantitative measurement used in analyzing, scheduling, and iterating on software modules. This methodology yields better results in the process, with higher quality and greater reuse in software development. A major concern is software complexity, which affects both the development and deployment of software. Software metrics provide accurate measures of software quality, risk, reliability, functionality, and component reusability. The proposed metrics are used to assess many aspects of the process, including efficiency, reusability, product interaction, and process complexity. The paper gives a detailed description of the various software quality metrics found in the software engineering literature and explores their advantages and disadvantages. Component-based software engineering is discussed along with metrics for software quality, object-oriented metrics, and improved performance.
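
The abstract does not define the proposed metrics, so the sketch below only shows the general shape of a component-level measurement: a simple interface-based coupling ratio per component. The formula and the component data are illustrative assumptions, not the paper's metrics.

```python
# Hypothetical components with counts of provided and required interfaces.
components = {
    "PaymentService": {"provided": 3, "required": 5},
    "UserStore":      {"provided": 4, "required": 1},
}

for name, ifaces in components.items():
    total = ifaces["provided"] + ifaces["required"]
    coupling = ifaces["required"] / total if total else 0.0   # higher = more coupled
    print(f"{name}: coupling ratio = {coupling:.2f}")
```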

Evil-Twin Detection Scheme Using SVM with Multi-Factors (다중 요소를 가지는 SVM을 이용한 이블 트윈 탐지 방법)

  • Kang, SungBae;Nyang, DaeHun;Lee, KyungHee
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.2
    • /
    • pp.334-348
    • /
    • 2015
  • The widespread use of smart devices has been accompanied by an increase in the use of access points (APs), which enable connection to wireless networks. If appropriate security is not provided when a user connects to a wireless network through an AP, various security problems can arise from rogue APs. In this paper, we examine the threat posed by the evil twin, a kind of rogue AP. Most recent research on detecting rogue APs utilizes measured time differences, such as the round trip time (RTT), between the evil twin and authorized APs. These methods, however, suffer from a low detection rate under network congestion. For this reason, we suggest a new factor, the packet inter-arrival time (PIAT), for detecting evil twins. Using both RTT and PIAT as learning features for a support vector machine (SVM), we determine a non-linear metric to classify evil twins and authorized APs. As a result, we can detect evil twins with a probability of up to 96.5%, and of at least 89.75% even when the network is congested.
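
A minimal sketch of the two-feature idea follows: train a non-linear SVM on (RTT, PIAT) pairs labeled as authorized AP or evil twin. The feature values are synthetic placeholders, not the paper's measurements, and scikit-learn's SVC stands in for whatever SVM implementation the authors used.

```python
from sklearn.svm import SVC

# Features: [round-trip time (ms), packet inter-arrival time (ms)] - synthetic.
X = [[2.1, 0.8], [2.3, 0.9], [2.0, 0.7],    # authorized AP
     [5.4, 2.6], [6.1, 3.0], [5.8, 2.8]]    # evil twin (extra relay hop)
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[2.2, 0.85], [5.9, 2.7]]))   # expected: [0 1]
```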

ON EQUIVALENT NORMS TO BLOCH NORM IN ℂⁿ

  • Choi, Ki Seong
    • Journal of the Chungcheong Mathematical Society
    • /
    • v.19 no.4
    • /
    • pp.325-334
    • /
    • 2006
  • For $f \in L^2(B, d\nu)$, $\|f\|_{BMO} = \widetilde{|f|^2}(z) - |\tilde{f}(z)|^2$. For $f$ continuous on $B$, $\|f\|_{BO} = \sup\{\omega(f)(z) : z \in B\}$ where $\omega(f)(z) = \sup\{|f(z) - f(w)| : \beta(z, w) \leq 1\}$. In this paper, we show that if $f \in BMO$, then $\|f\|_{BO} \leq M\|f\|_{BMO}$, and that if $f \in BO$, then $\|f\|_{BMO} \leq M\|f\|_{BO}^2$. A holomorphic function $f : B \rightarrow \mathbb{C}$ is called a Bloch function ($f \in \mathcal{B}$) if $\|f\|_{\mathcal{B}} = \sup_{z \in B} Qf(z) < \infty$. We also show that if $f \in \mathcal{B}$, then $\|f\|_{BO} \leq \|f\|_{\mathcal{B}}$, and that if $f \in BMO$ and $f$ is holomorphic, then $\|f\|_{\mathcal{B}}^2 \leq M\|f\|_{BMO}$.
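
Taken together (a sketch only; the constant $M$ need not be the same in each step), the stated results chain for a holomorphic $f$ as $\|f\|_{\mathcal{B}}^2 \leq M\|f\|_{BMO} \leq M^2\|f\|_{BO}^2 \leq M^2\|f\|_{\mathcal{B}}^2$, which is the sense in which the three quantities control one another up to constants.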

Development of Evaluation System for Defense Informatization Level

  • Sim, Seungbae;Lee, Sangho
    • Journal of Multimedia Information System
    • /
    • v.6 no.4
    • /
    • pp.271-282
    • /
    • 2019
  • It is often said that you cannot manage what you do not measure. The Korea Ministry of National Defense (MND) conducts evaluations in various fields to obtain meaningful effects from IT investments, and divides the evaluation of the defense informatization sector into defense informatization policy evaluation and defense informatization project evaluation. The defense informatization level evaluation can measure the informatization level of the MND and the armed forces or organizations. Since the evaluation system currently studied for measuring the level of defense informatization is composed mainly of qualitative metrics, it needs to be reconstructed around quantitative metrics that can guarantee objectivity. In addition, to manage the level of change for each evaluation object, the evaluation system should be designed with a focus on the stability (homeostasis) of the metrics so that they can be measured periodically, and the metrics should be framed in terms of performance against targets. To this end, this study proposes to measure the level of defense informatization across six areas, namely the defense information network, computer systems, interoperability and standardization, information security, information environment, and information system use, and suggests metrics for each.
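
The paper's actual metrics are not reproduced in the abstract; the sketch below only illustrates one plausible way to roll the six areas into a single level score as a weighted performance-against-target average. The weights, scores, and targets are hypothetical values, not the proposed evaluation system.

```python
# area: (measured score, target score, weight) - all values hypothetical.
areas = {
    "defense information network":          (78, 90, 0.20),
    "computer systems":                     (85, 90, 0.15),
    "interoperability and standardization": (70, 85, 0.20),
    "information security":                 (88, 95, 0.20),
    "information environment":              (75, 85, 0.10),
    "information system use":               (80, 90, 0.15),
}

level = sum(w * (score / target) for score, target, w in areas.values())
print(f"overall informatization level (performance vs. targets): {level:.2%}")
```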

A Secure Face Cryptography for Identity Document Based on Distance Measures

  • Arshad, Nasim;Moon, Kwang-Seok;Kim, Jong-Nam
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.10
    • /
    • pp.1156-1162
    • /
    • 2013
  • Face verification has been widely studied during the past two decades. One of the challenges is the rising concern about the security and privacy of the template database. In this paper, we propose a secure face verification system which generates a unique secure cryptographic key from a face template. The face images are processed to produce face templates or codes to be used for the encryption and decryption tasks, and the resulting identity data are encrypted using the Advanced Encryption Standard (AES). Distance metrics, namely the Hamming distance and the Euclidean distance, are used for template matching in the identification process, where template matching is a process used in pattern recognition. The proposed system is tested on the ORL, Yale, and PKNU face databases, which contain 360, 135, and 54 training images respectively. We employ Principal Component Analysis (PCA) to determine the most discriminating features among the face images. The experimental results show that the proposed distance measure is among the most promising with respect to the different characteristics of biometric systems; using the proposed method, fewer images needed to be extracted to achieve 100% cumulative recognition than with any other tested distance measure.
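
For reference, the two distance measures named above are straightforward to compute; the sketch below shows both on toy data. The binary codes, feature vectors, and their lengths are placeholders, not templates from the ORL, Yale, or PKNU databases.

```python
import math

def hamming(code_a, code_b):
    """Number of differing bits between two equal-length binary face codes."""
    return sum(b1 != b2 for b1, b2 in zip(code_a, code_b))

def euclidean(vec_a, vec_b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(vec_a, vec_b)))

enrolled_code = [1, 0, 1, 1, 0, 0, 1, 0]          # toy binary face code
probe_code    = [1, 0, 1, 0, 0, 0, 1, 0]
print("Hamming distance:", hamming(enrolled_code, probe_code))

enrolled_vec = [0.12, 0.40, 0.33]                 # e.g., a PCA projection (toy)
probe_vec    = [0.10, 0.42, 0.30]
print("Euclidean distance:", round(euclidean(enrolled_vec, probe_vec), 4))
```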

A comparative study of machine learning methods for automated identification of radioisotopes using NaI gamma-ray spectra

  • Galib, S.M.;Bhowmik, P.K.;Avachat, A.V.;Lee, H.K.
    • Nuclear Engineering and Technology
    • /
    • v.53 no.12
    • /
    • pp.4072-4079
    • /
    • 2021
  • This article presents a study of state-of-the-art methods for automated radioactive material detection and identification using gamma-ray spectra and modern machine learning methods. The work was inspired by recent developments in deep learning algorithms, and the proposed method provides better performance than the current state-of-the-art models. Machine learning models such as fully connected networks, recurrent networks, convolutional networks, and gradient boosted decision trees are applied under a wide variety of testing conditions, and their advantages and disadvantages are discussed. Furthermore, a hybrid model is developed by combining the fully connected and convolutional neural networks, which shows the best performance among the different machine learning models. These improvements are reflected in the model's test performance metric (F1 score) of 93.33%, an improvement of 2%-12% over the state-of-the-art model under various conditions. The experimental results show that a fusion of classical neural networks and modern deep learning architectures is a suitable choice for interpreting gamma spectra data where real-time and remote detection is necessary.
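
The abstract does not spell out the hybrid architecture, so the sketch below is only a plausible shape for a fully-connected plus 1D-convolutional spectrum classifier in Keras. The layer sizes, the 1024-channel spectrum length, and the 8 isotope classes are assumptions, not the authors' configuration.

```python
from tensorflow.keras import layers, models

NUM_CHANNELS = 1024   # gamma-ray spectrum length (assumed)
NUM_CLASSES = 8       # number of radioisotope classes (assumed)

spectrum = layers.Input(shape=(NUM_CHANNELS, 1), name="spectrum")

# Convolutional branch: local photopeak/shape features.
conv = layers.Conv1D(16, 9, activation="relu", padding="same")(spectrum)
conv = layers.MaxPooling1D(4)(conv)
conv = layers.Conv1D(32, 9, activation="relu", padding="same")(conv)
conv = layers.GlobalAveragePooling1D()(conv)

# Fully connected branch: global spectrum statistics.
dense = layers.Flatten()(spectrum)
dense = layers.Dense(128, activation="relu")(dense)

# Fuse both branches and classify.
merged = layers.concatenate([conv, dense])
output = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = models.Model(spectrum, output)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```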