• Title/Summary/Keyword: Software capability


Effects of variety, region and season on near infrared reflectance spectroscopic analysis of quality parameters in red wine grapes

  • Esler, Michael B.; Gishen, Mark; Francis, I. Leigh; Dambergs, Robert G.; Kambouris, Ambrosias; Cynkar, Wies U.; Boehm, David R.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1523-1523 / 2001
  • The wine industry requires practical methods for objectively measuring the composition of red wine grapes, both on the vine, to determine the optimal harvest time, and after harvest, to allocate fruit efficiently to winery process streams for particular red wine products and to determine payment of contract grapegrowers. To be practical for industry application these methods must be rapid, inexpensive and accurate. In most cases this restricts the analyses available to measurement of TSS (total soluble solids, predominantly sugars) by refractometry and pH by electropotentiometry. These two parameters, however, do not provide a comprehensive compositional characterization for the purpose of winemaking. The concentration of anthocyanin pigment in red wine grapes is an accepted indicator of potential wine quality and price. However, routine analysis for total anthocyanins is not considered a practical option by the wider wine industry because of the high cost and slow turnaround time of this multi-step wet-chemical laboratory analysis. Recent work by this group$^{1,2}$ has established the capability of near infrared (NIR) spectroscopy to provide rapid, accurate and simultaneous measurement of total anthocyanins, TSS and pH in red wine grapes. The analyses may be carried out equally well using either research-grade scanning spectrometers or much simpler, reduced-spectral-range, portable diode-array based instrumentation. We have recently expanded on this work by collecting thousands of red wine grape samples in Australia. The sample set spans two vintages (1999 and 2000), five distinct geographical winegrowing regions and the three main red wine grape varieties used in Australia (Cabernet Sauvignon, Shiraz and Merlot). Homogenized grape samples were scanned in diffuse reflectance mode on a FOSS NIRSystems 6500 spectrometer and subjected to laboratory analysis by the traditional methods for total anthocyanins, TSS and pH. We report here an analysis of the correlations between the NIR spectra and the laboratory data using standard chemometric algorithms within The Unscrambler software package. In particular, various subsets of the total data set are considered in turn to elucidate the effects of vintage, geographical area and grape variety on the measurement of grape composition by NIR spectroscopy. The relative ability of discrete calibrations to predict within and across these differences is considered. The results are then used to propose an optimal calibration strategy for red wine grape analysis.
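
As a rough illustration of the kind of chemometric calibration described in this abstract (not the authors' actual Unscrambler workflow), the sketch below fits a partial least squares (PLS) regression that predicts a laboratory reference value such as total anthocyanins from NIR spectra. The data, column layout and number of latent variables are assumptions made for the example.

```python
# Hedged sketch: PLS calibration of NIR spectra against a lab-measured
# reference value (e.g. total anthocyanins). The data here are random
# placeholders standing in for real spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 500 homogenized samples x 700 wavelength points.
X = rng.normal(size=(500, 700))                               # NIR spectra
y = X[:, 100:110].mean(axis=1) + 0.1 * rng.normal(size=500)   # reference values

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)   # number of latent variables is a guess
pls.fit(X_cal, y_cal)

y_pred = pls.predict(X_val).ravel()
sep = np.sqrt(np.mean((y_pred - y_val) ** 2))   # standard error of prediction
r2 = np.corrcoef(y_pred, y_val)[0, 1] ** 2
print(f"SEP = {sep:.3f}, r^2 = {r2:.3f}")
```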

Safety Evaluation on Real Time Operating Systems for Safety-Critical Systems (안전필수(Safety-Critical) 시스템의 실시간 운영체제에 대한 안전성 평가)

  • Kang, Young-Doo; Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.10 / pp.3885-3892 / 2010
  • Safety-critical systems, such as the Plant Protection Systems in nuclear power plants, play a key role in ensuring that facilities can be operated without undue risk to the health and safety of the public and the environment, and such systems shall be designed, fabricated, installed, and tested to quality standards commensurate with the importance of the functions to be performed. The computer-based instrumentation and control systems that perform these safety-critical functions rely on real-time operating systems to control and monitor the subsystems and to execute the application software. A safety-critical real-time operating system shall be designed, analyzed, tested and evaluated so that it maintains high integrity and quality. However, domestic nuclear power plants have applied real-time operating systems to safety-critical systems through the Commercial Grade Item Dedication method, which has left a lack of detailed methodology for assessing the safety of real-time operating systems, especially newly developed ones. This paper presents a methodology for, and experience with, the safety evaluation of safety-critical real-time operating systems based upon design requirements. It may also be useful for developing and evaluating safety-critical real-time operating systems in other industries, to ensure the safety of the public and the environment.

On Flexibility Analysis of Real-Time Control System Using Processor Utilization Function (프로세서 활용도 함수를 이용한 실시간 제어시스템 유연성 분석)

  • Chae Jung-Wha; Yoo Cheol-Jung
    • The KIPS Transactions: Part A / v.12A no.1 s.91 / pp.53-58 / 2005
  • The use of computers for the control and monitoring of industrial processes has expanded greatly in recent years. The computer used in such applications is shared between a number of time-critical control and monitoring functions and non-time-critical batch processing job streams. Embedded systems encompass a variety of hardware and software components that perform specific functions within a host system. Many embedded systems must respond to external events under timing constraints, and failure to respond to certain events on time may seriously degrade system performance or even result in a catastrophe. In the design of a real-time embedded system, decisions made at the architectural design phase greatly affect the final implementation and performance of the system. Flexibility indicates how much perturbation a particular system architecture can tolerate while still satisfying its real-time requirements; the degree of flexibility of a real-time system architecture thus indicates the capability of the system to tolerate perturbations in timing-related specifications. Given a degree of flexibility, one may compare and rank different implementations: a system with a higher degree of flexibility is more desirable. Flexibility is also an important factor in trade-off studies between cost and performance. This paper identifies the need for a flexibility function based on processor utilization, shows that existing real-time analysis results can be applied effectively, and motivates the use of flexibility for the efficient analysis of potential design candidates during the architectural design exploration of real-time embedded systems.
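
The paper's flexibility function is not given in the abstract, so as a hedged sketch of the underlying idea the snippet below computes the classic rate-monotonic processor utilization U = sum(C_i/T_i) and the Liu-Layland bound n(2^(1/n) - 1), and treats the remaining margin as a crude indicator of how much timing perturbation a task set could tolerate. The task-set values are invented for illustration.

```python
# Hedged sketch: utilization-based slack as a crude "flexibility" indicator.
# The actual flexibility function of the paper is not reproduced here.
from math import inf

def utilization(tasks):
    """tasks: list of (C, T) pairs = (worst-case execution time, period)."""
    return sum(c / t for c, t in tasks)

def liu_layland_bound(n):
    """Sufficient rate-monotonic schedulability bound for n periodic tasks."""
    return n * (2 ** (1 / n) - 1) if n > 0 else inf

# Invented example task set: (C, T) in milliseconds.
tasks = [(1, 10), (2, 20), (5, 50)]

u = utilization(tasks)
bound = liu_layland_bound(len(tasks))
margin = bound - u   # positive margin -> room to tolerate timing perturbations

print(f"U = {u:.3f}, bound = {bound:.3f}, margin = {margin:.3f}")
```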

An Effective Face Authentication Method for Resource - Constrained Devices (제한된 자원을 갖는 장치에서 효과적인 얼굴 인증 방법)

  • Lee Kyunghee; Byun Hyeran
    • Journal of KIISE: Software and Applications / v.31 no.9 / pp.1233-1245 / 2004
  • Although biometric authentication is a good tool in terms of security and convenience, typical biometric authentication algorithms may not be executable on resource-constrained devices such as smart cards. Thus, to execute biometric processing on resource-constrained devices, it is desirable to develop a lightweight authentication algorithm that requires only a small amount of memory and computation. Also, among biological features, the face is one of the most acceptable biometrics, because humans use it in their visual interactions and acquiring face images is non-intrusive. We present a new face authentication algorithm in this paper. Our contribution is two-fold. One is a face authentication algorithm with a low memory requirement, which uses support vector machines (SVM) with a feature set selected by genetic algorithms (GA). The other is a method to further reduce, if needed, the amount of memory required for authentication, at the expense of verification rate, by changing a controllable system parameter that determines the feature set size. Given a pre-defined amount of memory, this capability is quite effective for deploying our algorithm on memory-constrained devices. Experimental results on various databases show that our SVM-based face authentication algorithm, whose input vectors consist of discriminating features selected by the GA, performs much better than the same algorithm without the GA feature selection process, in terms of both accuracy and memory requirement. The experiments also show that the number of features to be selected is controllable by a system parameter.
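
As a minimal sketch of the approach described in this abstract, the code below couples a GA-style search for a fixed-size feature subset with an SVM verifier using scikit-learn. The dataset, GA parameters (population size, mutation rate) and the fitness function (plain cross-validated accuracy) are assumptions for illustration, not the paper's actual setup.

```python
# Hedged sketch: GA-style selection of a fixed-size feature subset for an SVM.
# Dataset, GA parameters and fitness are placeholders, not the paper's setup.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           random_state=1)

N_FEATURES = 12          # controllable feature-set size (memory/accuracy knob)
POP, GENS, MUT = 20, 15, 0.1

def fitness(mask):
    clf = SVC(kernel="rbf", gamma="scale")
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def random_mask():
    return rng.choice(X.shape[1], size=N_FEATURES, replace=False)

def mutate(mask):
    mask = mask.copy()
    if rng.random() < MUT:                       # swap one feature for another
        out = rng.integers(len(mask))
        candidates = np.setdiff1d(np.arange(X.shape[1]), mask)
        mask[out] = rng.choice(candidates)
    return mask

population = [random_mask() for _ in range(POP)]
for _ in range(GENS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:POP // 2]                  # truncation selection
    children = [mutate(p) for p in parents]
    population = parents + children

best = max(population, key=fitness)
print("selected features:", np.sort(best), "cv accuracy:", round(fitness(best), 3))
```

Shrinking N_FEATURES trades verification accuracy for a smaller model, which is the controllable memory/accuracy knob the abstract describes.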

A Cellular Learning Strategy for Local Search in Hybrid Genetic Algorithms (복합 유전자 알고리즘에서의 국부 탐색을 위한 셀룰러 학습 전략)

  • Ko, Myung-Sook; Gil, Joon-Min
    • Journal of KIISE: Software and Applications / v.28 no.9 / pp.669-680 / 2001
  • Genetic algorithms are optimization algorithms that mimic biological evolution to solve optimization problems. They provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex fitness landscapes. A hybrid genetic algorithm, combined with a local search referred to as learning, can sustain the balance between exploration and exploitation. The traits that each individual in the population learns are transferred back to the next generation, and when this learning is combined with a genetic algorithm an improvement in search speed can be expected. This paper proposes a cellular learning strategy with accelerated learning capability for genetic-algorithm-based function optimization. The proposed cellular learning strategy is based on the periodic and convergent behaviors of cellular automata and on the idea of transmitting to offspring the knowledge and experience that organisms acquire in their lifetimes. We compared the search efficiency of the cellular learning strategy with those of Lamarckian evolution and the Baldwin effect in a hybrid genetic algorithm. Experiments on various test-bed functions showed that local improvement by cellular learning enhances global performance and that the proposed learning strategy finds better global optima than the conventional methods.
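
The cellular-automaton learning rule itself is not specified in the abstract, so the sketch below only shows the general hybrid-GA skeleton it is compared against: a Lamarckian variant in which a hill-climbing local search improves each individual and writes the result back into its genome. The test function and parameters are placeholders.

```python
# Hedged sketch: hybrid GA with Lamarckian local search (the baseline the
# cellular learning strategy is compared against); the cellular-automaton
# learning rule itself is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
DIM, POP, GENS = 10, 30, 100

def sphere(x):                       # simple test-bed function to minimize
    return float(np.sum(x ** 2))

def local_search(x, steps=10, step=0.1):
    """Hill climbing; the improved point replaces the genome (Lamarckian)."""
    best, best_f = x, sphere(x)
    for _ in range(steps):
        cand = best + rng.normal(scale=step, size=DIM)
        f = sphere(cand)
        if f < best_f:
            best, best_f = cand, f
    return best

pop = rng.uniform(-5, 5, size=(POP, DIM))
for _ in range(GENS):
    pop = np.array([local_search(ind) for ind in pop])    # learning phase
    fit = np.array([sphere(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]             # keep the fittest half
    # uniform crossover + Gaussian mutation to refill the population
    idx = rng.integers(len(parents), size=(POP - len(parents), 2))
    mask = rng.random((POP - len(parents), DIM)) < 0.5
    children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
    children += rng.normal(scale=0.05, size=children.shape)
    pop = np.vstack([parents, children])

print("best fitness:", min(sphere(ind) for ind in pop))
```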

Spatial-temporal Assessment and Mapping of the Air Quality and Noise Pollution in a Sub-area Local Environment inside the Center of a Latin American Megacity: Universidad Nacional de Colombia - Bogotá Campus

  • Fredy Alejandro, Guevara Luna; Marco Andres, Guevara Luna; Nestor Yezid, Rojas Roa
    • Asian Journal of Atmospheric Environment / v.12 no.3 / pp.232-243 / 2018
  • The construction, development and maintenance of an economically, environmentally and socially sustainable campus involves the integration of measurement tools and technical information that invites and encourages the community to understand the actual state of the campus and to take positive actions to reduce negative impacts on the local environment. At the Universidad Nacional de Colombia - Bogotá Campus, a public area with daily traffic of more than 25,000 people, the Environmental Management Bureau has committed to monitoring noise pollution and air quality in support of campaigns aiming to reduce the pollutant emissions associated with student activities and campus operation. This study implements mobile air quality and sound-level monitoring equipment and maps the current air quality and noise pollution inside the university campus, as a novel methodology for a sub-area inside a megacity. The resulting maps are proposed as a planning tool for the institution's administrative sections. A mobile Kunak® Air & OPC air monitoring station, capable of measuring particulate matter (PM10 and PM2.5), ozone (O3), sulfur dioxide (SO2), carbon monoxide (CO) and nitrogen dioxide (NO2), as well as temperature, relative humidity and latitude-longitude coordinates for georeferencing the data, and a Cirrus® 162B Class 2 sound level meter were used to perform the measurements. Measurements were taken both during periods of academic activity and outside them, with the aim of identifying the impacts generated by campus operation. Using the open-source geographic information system QGIS® 2.18, maps of each measured variable were developed, and the impacts generated by the operation of the campus were identified qualitatively and quantitatively. For the measured variables, an increase of around 21% in the LAeq noise level and around 80% to 90% in air pollution was detected during the operation period.
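
As a minimal sketch of the spatial interpolation that underlies this kind of pollution map (a simple inverse-distance-weighted gridding of point measurements), the snippet below uses NumPy with invented coordinates and concentrations; the authors' QGIS workflow is not reproduced.

```python
# Hedged sketch: inverse-distance-weighted (IDW) gridding of point
# measurements, the kind of interpolation behind a pollution map.
# Coordinates and PM2.5 values below are invented placeholders.
import numpy as np

# Monitoring points: (x, y) in metres on a local campus grid, values in ug/m3.
points = np.array([[100., 200.], [350., 120.], [220., 400.], [480., 380.]])
values = np.array([18.0, 35.0, 22.0, 41.0])            # e.g. PM2.5

# Regular 50 m grid covering the area of interest.
gx, gy = np.meshgrid(np.arange(0, 501, 50), np.arange(0, 501, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)   # distances
w = 1.0 / np.maximum(d, 1e-6) ** 2                                  # IDW weights
z = (w * values).sum(axis=1) / w.sum(axis=1)

pm_map = z.reshape(gx.shape)    # raster that a GIS could render as a map layer
print("interpolated PM2.5 range:", pm_map.min().round(1), "-", pm_map.max().round(1))
```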

A Study on the Effect of Engineering Computer Programming Instruction Using Project Learning (프로젝트 학습을 적용한 공학컴퓨터프로그래밍 수업 효과 연구)

  • Chae Su-Jin; Hwang Sung-Ho
    • Journal of Engineering Education Research / v.8 no.3 / pp.57-68 / 2005
  • The purpose of this study was to analyze the effect of engineering computer programming instruction using project learning, to find ways to solve the problems that were revealed, and to improve the instruction. Unlike traditional lecture courses, students are encouraged to cultivate problem-solving and teamwork skills through the programming project. In order to examine the effect of project learning, a survey was conducted with 49 students. The questionnaire consisted of 20 items, each on a 5-point scale, covering learning value, workload, skill acquisition, assignments and comments. The statistical analysis software SPSS was used to compute statistics such as ANOVA, correlations and means. The results of this study showed that (1) project learning was more efficient for acquiring problem-solving and teamwork skills than lecture-based learning, (2) there was a significant correlation between self-directed learning skill and information-collecting skill, and (3) the cyber education system (i-campus) was helpful for students' self-learning. However, the results also showed that (4) students did not give high scores on items concerning the workload or the difficulty of assignments. We can therefore conclude that projects suited to students' capabilities need to be developed to make project learning more effective.
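
As a small sketch of the kind of analysis reported here, the snippet below runs a one-way ANOVA and a Pearson correlation with SciPy on fabricated placeholder scores; it is not the study's survey data or its SPSS output.

```python
# Hedged sketch: the kind of ANOVA / correlation analysis run in SPSS,
# reproduced with SciPy on fabricated placeholder scores (5-point scale).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Placeholder item scores for two instruction formats (not the study's data).
project_group = rng.integers(3, 6, size=25)   # tends to score 3-5
lecture_group = rng.integers(2, 5, size=24)   # tends to score 2-4

f_stat, p_val = stats.f_oneway(project_group, lecture_group)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Correlation between two self-reported skills (again placeholder data).
self_directed = rng.normal(3.8, 0.6, size=49)
info_collecting = self_directed * 0.7 + rng.normal(0, 0.4, size=49)
r, p = stats.pearsonr(self_directed, info_collecting)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```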

Technology Analysis on Automatic Detection and Defense of SW Vulnerabilities (SW 보안 취약점 자동 탐색 및 대응 기술 분석)

  • Oh, Sang-Hwan; Kim, Tae-Eun; Kim, HwanKuk
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.11 / pp.94-103 / 2017
  • As automatic hacking tools and techniques have improved, the number of new vulnerabilities has increased. The CVEs registered from 2010 to 2015 numbered about 80,000, and it is expected that even more vulnerabilities will be reported. In most cases, patching a vulnerability depends on the developers' capability, and most patching techniques are based on manual analysis, which requires nine months on average. The techniques consist of finding the vulnerability, analyzing it based on the source code, and writing new code for the patch. Zero-day vulnerabilities are critical because, as mentioned, the time gap between first discovery and taking action is too long. To solve this problem, techniques for automatically detecting and analyzing software (SW) vulnerabilities have recently been proposed. The Cyber Grand Challenge (CGC), held in 2016, was the first competition to create automatic defensive systems capable of reasoning over flaws in binaries and formulating patches without experts' direct analysis. Darktrace and Cylance are similar projects for managing SW automatically with artificial intelligence and machine learning. Although many foreign companies and academic institutions run projects for automatic binary analysis, the domestic level of technology is much lower. This paper studies the development of automatic detection of SW vulnerabilities and defenses against them. We analyze and compare related works and tools, and suggest optimal techniques for automatic analysis.
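
The surveyed systems are far more sophisticated, but as a toy illustration of the detection half of the problem, the sketch below mutates a seed input and feeds it to a target function, recording inputs that crash it. The target is a deliberately buggy placeholder, not a real piece of software.

```python
# Hedged sketch: a toy mutation fuzzer illustrating automatic crash discovery.
# Real systems (e.g. CGC finalists) combine fuzzing with symbolic execution
# and automatic patching; none of that is reproduced here.
import random

def target_parser(data: bytes) -> None:
    """Deliberately buggy placeholder standing in for the software under test."""
    if len(data) > 4 and data[0] == 0x7F and data[1:4] == b"ELF":
        raise IndexError("out-of-bounds read")    # simulated memory-safety bug

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):         # flip a few random bytes
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

random.seed(4)
seed = b"\x7fELF\x02\x01\x01\x00"
crashes = []
for i in range(10_000):
    candidate = mutate(seed)
    try:
        target_parser(candidate)
    except Exception as exc:           # a crash marks a potential vulnerability
        crashes.append((i, candidate, type(exc).__name__))

print(f"found {len(crashes)} crashing inputs out of 10000 mutations")
```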

Techniques to Transform EJB 2.1 Components to EJB 3.0 for Performance Improvement and Component Reusability (컴포넌트의 성능향상과 재사용을 위한 EJB 2.1 컴포넌트에서 EJB 3.0로의 변환기법)

  • Lee, Hoo-Jae; Kim, Ji-Hyeok; Rhew, Sung-Yul
    • Journal of KIISE: Software and Applications / v.36 no.4 / pp.261-272 / 2009
  • The EJB 3.0 specification, which improves on EJB 2.1 in terms of performance and ease of development, was recently announced. Accordingly, for the EJB 3.0 application environment, developers generally prefer the gradual transformation of components whose performance must be improved over the complete transformation of all EJB 2.1 components into EJB 3.0 components. Previous studies, however, did not consider the services of the application and did not ensure the compatibility and reusability of the components in a full replacement with EJB 3.0, because the transformation crosses different specifications. This study proposes three transformation techniques that take into account the services supported by the existing application and that ensure the compatibility and reusability of components in the case of a full replacement with EJB 3.0. The proposed techniques support gradual transformation: direct transformation, which directly connects components; indirect transformation, which uses an EJB connector; and indirect template transformation, in which the template pattern is applied to the indirect transformation. The proposed transformation techniques were verified by comparing the reusability of the components and their processing capability per second, and criteria for selecting a technique are provided based on the characteristics of the transformation into EJB 3.0 found in this study.

Training Method of Artificial Neural Networks for Implementation of Automatic Composition Systems (자동작곡시스템 구현을 위한 인공신경망의 학습방법)

  • Cho, Jae-Min; Ryu, Eun Mi; Oh, Jin-Woo; Jung, Sung Hoon
    • KIPS Transactions on Software and Data Engineering / v.3 no.8 / pp.315-320 / 2014
  • Composition is a creative activity in which a composer expresses his or her emotions as melody, based on experience. However, it is very hard to implement an automatic composition program whose composition process is the same as a composer's. On the premise that creative activity can arise from imitation, we propose a method to implement an automatic composition system using the learning capability of artificial neural networks (ANNs). First, we devise a method to convert a melody into a time series that an ANN can be trained on, and then a method to learn repeated melody, using melody bars, for correct training of the ANN. After training the ANN on the time series, we feed a new time series into the ANN; the ANN then produces a complete new time series, which is converted into a new melody. However, post-processing is necessary because the produced melody does not fit the tempo and harmony rules of music theory. In this paper we applied tempo post-processing using a tempo post-processing program, but the harmony post-processing was done by a human because it is difficult to implement. We will realize a harmony post-processing program in future work.
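
As a minimal sketch of the melody-as-time-series idea described above, the code below encodes pitches as numbers, trains a small MLP to predict the next pitch from a sliding window, and then generates a continuation by feeding predictions back in. The encoding, window size and network size are assumptions, and no tempo or harmony post-processing is shown.

```python
# Hedged sketch: melody -> numeric time series -> train a small MLP to predict
# the next pitch from a sliding window, then generate a continuation.
# Encoding, window size and network size are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy melody as MIDI pitch numbers (C major scale up and down, repeated).
melody = [60, 62, 64, 65, 67, 69, 71, 72, 71, 69, 67, 65, 64, 62] * 8

WINDOW = 4
X = np.array([melody[i:i + WINDOW] for i in range(len(melody) - WINDOW)])
y = np.array([melody[i + WINDOW] for i in range(len(melody) - WINDOW)])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, y)

# Generate 16 new notes by feeding predictions back into the sliding window.
window = list(melody[-WINDOW:])
generated = []
for _ in range(16):
    nxt = int(round(net.predict(np.array([window]))[0]))
    generated.append(nxt)
    window = window[1:] + [nxt]

print("generated pitches:", generated)
```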