• Title/Summary/Keyword: Design of Information Systems


Fabrication of Portable Self-Powered Wireless Data Transmitting and Receiving System for User Environment Monitoring (사용자 환경 모니터링을 위한 소형 자가발전 무선 데이터 송수신 시스템 개발)

  • Jang, Sunmin; Cho, Sumin; Joung, Yoonsu; Kim, Jaehyoung; Kim, Hyeonsu; Jang, Dayeon; Ra, Yoonsang; Lee, Donghan; La, Moonwoo; Choi, Dongwhi
    • Korean Chemical Engineering Research / v.60 no.2 / pp.249-254 / 2022
  • With the rapid advance of semiconductor and information and communication technologies, remote environment monitoring technology, which detects and analyzes surrounding environmental conditions with various types of sensors and wireless communication, is drawing attention. However, because conventional remote environmental monitoring systems require external power supplies, their use is constrained in both time and space. In this study, we propose the concept of a self-powered remote environmental monitoring system powered by a levitation-electromagnetic generator (L-EMG), which is rationally designed to harvest biomechanical energy effectively. Specifically, the L-EMG uses a movable center magnet so that it responds effectively to external vibration with the relatively low frequency and high amplitude characteristic of biomechanical motion; based on this fragile force equilibrium, the L-EMG can generate high-quality electrical energy to supply power. Additionally, the environmental sensor and wireless transmission module are controlled by a micro control unit (MCU) that applies a sleep mode to minimize the power required for device operation, thereby extending the operating time. Finally, to maximize user convenience, a mobile phone application was built to enable easy monitoring of the surrounding environment. The proposed concept not only verifies the feasibility of a self-powered remote environmental monitoring system using biomechanical energy but also suggests a design guideline.
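
As a rough illustration of the sleep-mode duty cycling this abstract describes, the following is a minimal MicroPython-style sketch assuming an ESP32-class MCU; the sensor and radio functions (read_environment, send_packet) are hypothetical placeholders, not the authors' firmware.

```python
# Minimal MicroPython-style sketch of a sleep-mode duty cycle. The sensor and
# radio drivers below are hypothetical placeholders, not the paper's firmware.
import machine

SLEEP_MS = 60_000  # wake once per minute; tune to the harvested energy budget

def read_environment():
    # Placeholder: return a dict of sensor readings (e.g., temperature, humidity).
    return {"temp_c": 24.1, "rh_pct": 41.0}

def send_packet(payload):
    # Placeholder: push the reading to the phone app over the wireless module.
    print("TX:", payload)

def main():
    reading = read_environment()   # brief active window
    send_packet(reading)
    # Deep sleep draws far less current than active mode, so the L-EMG's
    # harvested energy can sustain long-term operation.
    machine.deepsleep(SLEEP_MS)

main()
```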

ICT Medical Service Provider's Knowledge and level of recognizing how to cope with fire fighting safety (ICT 의료시설 기반에서 종사자의 소방안전 지식과 대처방법 인식수준)

  • Kim, Ja-Sook; Kim, Ja-Ok; Ahn, Young-Joon
    • The Journal of the Korea institute of electronic communication sciences / v.9 no.1 / pp.51-60 / 2014
  • In this study, ICT medical service providers' knowledge of fire fighting safety and their recognition of how to cope with fires were investigated in the Gwangju and Jeonnam regions of Korea, in order to identify the factors affecting these levels and to provide basic data for education manuals on coping with fire fighting safety in medical facilities. The data were analyzed using SPSS Win 14.0. The mean knowledge score was 7.06 (on a 10-point scale), and the mean coping-method recognition score was 6.61 (on an 11-point scale). Recognition of coping methods differed significantly by gender (t=4.12, p<.001), age (χ²=17.24, p<.001), length of career (χ²=22.76, p<.001), experience of fire fighting safety education (t=6.10, p<.001), and level of subjective knowledge of fire fighting safety (χ²=53.83, p<.001). To enhance ICT medical service providers' understanding of fire fighting safety and coping methods, the following are found to be required: self-directed learning instead of lecture-based knowledge delivery; learning tailored to individuals; hands-on fire fighting education with diverse contents; cooperative learning that deploys patients by triage classification using simulations; studies on digital anti-fire monitoring systems with multipoint communication protocols, smoke detection systems using infrared lasers for fire detection in wide spaces, and video-based fire detection algorithms using Gaussian mixture models; and an education manual for coping with fire fighting safety developed through a multi-learning approach in medical facilities.

Development of a Feasibility Evaluation Model for Apartment Remodeling with the Number of Households Increasing at the Preliminary Stage (노후공동주택 세대수증가형 리모델링 사업의 기획단계 사업성평가 모델 개발)

  • Koh, Won-kyung; Yoon, Jong-sik; Yu, Il-han; Shin, Dong-woo; Jung, Dae-woon
    • Korean Journal of Construction Engineering and Management / v.20 no.4 / pp.22-33 / 2019
  • The government has steadily revised and developed laws and systems to activate the remodeling of apartments in response to the problems of aging apartment stock. Despite such efforts, however, remodeling has yet to take off. Among the many reasons, this study notes that there is no tool for reasonable profitability judgments and decision making at the preliminary stage of a remodeling project, and therefore develops a feasibility evaluation model. Profitability judgments are generally made after the conceptual design, but the decision to pursue a remodeling project is made at the preliminary stage, so a feasibility evaluation model is required at that stage. Accordingly, in this study, a feasibility evaluation model was developed for determining preliminary-stage profitability. Construction costs, business expenses, financial expenses, and sales revenue were calculated using the initially available information and remodeling variables derived from existing cases. Through this process, we developed an algorithm that gives an overview of the return on investment. The model was then applied to three cases to verify its applicability; in all three, the difference between the model's forecast and the actual values was less than 5%, indicating high applicability. If more cases are added in the future, it will be a useful tool for practical work. The feasibility evaluation model developed in this study will support decision making by union members, and if applied in different regions, it is expected to help local governments gauge the scale of possible remodeling projects.
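
The core arithmetic behind such a preliminary-stage evaluation can be illustrated with a short sketch. The cost breakdown follows the abstract (construction costs, business expenses, financial expenses, sales revenue), but the function and the example figures are hypothetical, not the paper's actual algorithm.

```python
# Illustrative sketch of a preliminary-stage ROI calculation. The cost
# components mirror the abstract; the example figures are hypothetical.

def roi(sales_revenue, construction_cost, business_expense, financial_expense):
    """Return on investment = (revenue - total cost) / total cost."""
    total_cost = construction_cost + business_expense + financial_expense
    return (sales_revenue - total_cost) / total_cost

# Hypothetical example (units: 100 million KRW)
print(f"ROI: {roi(950, 700, 120, 60):.1%}")  # -> ROI: 8.0%
```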

PST Member Behavior Analysis Based on Three-Dimensional Finite Element Analysis According to Load Combination and Thickness of Grouting Layer (하중조합과 충전층 두께에 따른 3차원 유한요소 해석에 의한 PST 부재의 거동 분석)

  • Seo, Hyun-Su; Kim, Jin-Sup; Kwon, Min-Ho
    • Journal of the Korea institute for structural maintenance and inspection / v.22 no.6 / pp.53-62 / 2018
  • Following the accelerating speed-up of trains and the rising demand for large-volume transport capacity, not only in Korea but around the world, track structures for trains have been improving consistently. Precast concrete slab track (PST), a concrete track structure, was developed as a system that can fulfil new safety and economic requirements for railroad traffic. The purpose of this study is to provide the information required for the future development and design of the system by analyzing the behavior of each structural member of the PST system. The stress distributions for combinations of appropriate loads according to the KRL-2012 train load and the KRC code were analyzed by three-dimensional finite element analysis, and results for different thicknesses of the grouting layer are also presented. Among the structural members, the largest stress occurred in the grouting layer, and this stress changed sensitively with the layer thickness and the load combination. Compared with applying only the vertical KRL-2012 load, adding the starting load and the temperature load increased the stress by 3.3 times on the concrete panel and 14.1 times on the HSB. When the thickness of the grouting layer increased from 20 mm to 80 mm, the stress on the concrete panel decreased by 4%, while the stress on the grouting layer increased by 24%. As for cracking, tensile cracking occurred locally in the grouting layer. These results indicate that, when developing PST systems, more attention should be paid to flexural and tensile behavior under horizontal loads than under vertical loads. In addition, the safety of each structural member should be ensured by keeping the grouting layer at least 40 mm thick.

Privilege and Immunity of Information and Data from Aviation Safety Program in United States (미국 항공안전데이터 프로그램의 비공개 특권과 제재 면제에 관한 연구)

  • Moon, Joon-Jo
    • The Korean Journal of Air & Space Law and Policy / v.23 no.2 / pp.137-172 / 2008
  • The earliest safety data programs, the FDR and CVR, were electronic reporting systems that generated data "automatically." The FDR program, originally instituted in 1958, had no publicly available restrictions for protection against sanctions by the FAA or an airline, although there are agreements and union contracts forbidding the use of FDR data for FAA enforcement actions. This FDR program still has the least formalized protections. With the advent of the CVR program in 1966, the precursor to the current FAR 91.25 was already in place, having been promulgated in 1964. It stated that the FAA would not use CVR data for enforcement actions. In 1982, Congress began restricting the disclosure of CVR tapes and transcripts, and added further clarification of their availability in civil litigation discovery in 1994. Thus, CVR data have more definitive protections in place than FDR data. The ASRS was the first non-automatic reporting system, and built into its original design in 1975 was a promise of limited protection from enforcement sanctions, further codified in an FAR in 1979. As with the CVR, from its inception the ASRS had some protections built in for the person who might have had a safety problem. However, the program did not (and to this day does not) explicitly deal with issues of use by airlines, litigants, or the public media, although it appears that airlines will either take a non-punitive stance if an ASRS report is filed, or ignore the fact that it has been filed at all. The FAA worked with several U.S. airlines in the early 1990s on developing ASAP programs, and issued an Advisory Circular about the program in 1997. From its inception, the ASAP program contained some FAA enforcement protections and company discipline protections, although protection against litigation disclosure and public disclosure was not added until 2003, when FAA Order 8000.82 was promulgated, placing the program under the protections of FAR 193, which had been added in 2001. The FOQA program, when it was first instituted through a demonstration program in 1995, did not contain protections against sanctions. Now, however, the FAA cannot take enforcement action based on FOQA safety data, and an airline is limited to "corrective action" under the program. Union contracts can exclude FOQA from the realm of disciplinary action, although airline practice may be to require retraining if there is no contract in place forbidding it. The data are protected against disclosure for litigation and public media purposes by FAA Order 8000.81, issued in 2003, which placed FOQA under the protections of FAR 193.


Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung; Kiwon Kwon; Kyoungwon Park; Byoungchul Song
    • Journal of Internet Computing and Services / v.25 no.4 / pp.121-130 / 2024
  • As information and communication technology (ICT) improves rapidly, the types and amount of available data also increase. Although data analysis, including statistics, is essential for utilizing this large amount of data, there are inevitable limits to processing diverse and complex data in conventional ways. Meanwhile, with the enhancement of computational performance and the growing demand for autonomous systems, there have been many attempts to apply machine learning (ML) in various fields. In particular, processing the data for model input and designing the model to solve the objective function are critical to achieving good model performance. Data processing methods suited to different data types and properties have been presented in many studies, and ML performance varies greatly depending on the method. Nevertheless, as the types and characteristics of data become more diverse, it is difficult to decide which data processing method to use; in particular, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data processing scheme for ML-aided data analysis using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input variable filtering based on statistical analysis and normalization according to data properties, and we analyze the data structure using visualization. Lastly, we design an ML model, train it with the proposed multi-variate data processing applied, and analyze the trained model's passenger survival prediction performance. We expect that the proposed multi-variate data processing and visualization can be extended to various environments for ML-based analysis.
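
A minimal sketch of this kind of pipeline on the public Kaggle Titanic training file (assumed saved locally as train.csv) follows; the variable filtering, imputation, and normalization choices mirror common practice and are not necessarily the paper's exact scheme.

```python
# Sketch of a multi-variate tabular pipeline on the Kaggle Titanic data:
# variable filtering, imputation, encoding, normalization, and a simple model.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("train.csv")  # assumed local copy of the Kaggle training file

# Variable filtering: drop identifiers and high-cardinality text fields.
df = df.drop(columns=["PassengerId", "Name", "Ticket", "Cabin"])

# Simple imputation and categorical encoding.
df["Age"] = df["Age"].fillna(df["Age"].median())
df["Embarked"] = df["Embarked"].fillna(df["Embarked"].mode()[0])
df = pd.get_dummies(df, columns=["Sex", "Embarked"], drop_first=True)

# Normalization of the continuous variables.
X = df.drop(columns=["Survived"])
y = df["Survived"]
X[["Age", "Fare"]] = StandardScaler().fit_transform(X[["Age", "Fare"]])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```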

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki; Shin, Taeksoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.111-124 / 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be divided into studies designing models for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease predominate; domestic studies are broadly similar but have focused mainly on hypertension and diabetes. Since hyperlipidemia is a chronic disease of high importance alongside hypertension and diabetes, this study selected hyperlipidemia as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. For this purpose, we used the Korea Health Panel 2012 dataset. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the inpatient, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression: among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group, and six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, C4.5, a decision tree algorithm, was used; the significant input variables were age, smoking status, and education level. Third, genetic algorithms were used: for the SVM, the selected input variables were six (age, marital status, education level, economic activity, smoking period, and physical activity status), while for the artificial neural network they were three (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, using the TP rate and precision to compare classification performance. The main results are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, classification models using the input variables selected by the stepwise method were slightly more accurate than models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as input. With the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%.
Finally, this study showed that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and MLP as the input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases, using SVM and meta-learning algorithms, which are known to be highly accurate. As a result, the classification accuracy of stacking as a meta-learner was higher than that of the other meta-learning algorithms, although the predictive performance of the proposed meta-learning algorithm equals that of the best single model, the SVM (88.6%). The limitations of this study are as follows. Various variable selection methods were tried, but most variables used in the study were categorical dummy variables; with many categorical variables, the results may differ if continuous variables are used, because models such as decision trees can be better suited to categorical variables than general models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and in improving model accuracy by applying various variable selection techniques. We expect the proposed model to be effective for the prevention and management of hyperlipidemia.
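
The stacking arrangement the abstract describes, SVM and MLP base learners whose predicted outputs feed an SVM meta-classifier, can be sketched with scikit-learn as below; the Korea Health Panel data is not reproduced here, so random placeholder data stands in.

```python
# Sketch of stacking with SVM and MLP base learners and an SVM meta-classifier,
# as described in the abstract. Random data stands in for the Korea Health Panel.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2176, 6))        # 6 selected variables, 2,176 subjects
y = rng.integers(0, 2, size=2176)     # hyperlipidemia yes/no (placeholder)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("mlp", MLPClassifier(max_iter=500))],
    final_estimator=SVC(),            # meta classifier, per the abstract
)
print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```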

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui; Kim, Uihyun; Cho, Sinhee; Kim, Sansung; Yi, Mun Yong; Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As the demand for nuclear power plant equipment grows worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, the preadjudication (or prescreening, for short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of such experts, and it takes a long time to develop one. Because human experts must manually evaluate all documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the reliance on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs such a system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method used TF-IDF, the widely used de facto standard for representative keyword extraction in text mining: TF (term frequency) counts how often a term occurs within a document, showing how important the term is to that document, while IDF (inverse document frequency) is based on the infrequency of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on the collaboration of machine and human, is the most effective solution, regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in deriving the final score (γ) for deciding whether the presented case involves strategic material. The final score (γ) represents the document similarity between past cases and the new case; it is induced not only by conventional TF-IDF but also by a nuclear-system similarity score that takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents in the case base that are most similar to the new case and provides them with a degree of credibility. With the final score and the credibility score, it becomes easier for a user to see which documents in the case base are most worth looking up, so that the user can make a proper decision at relatively low cost. The evaluation of the system was conducted by developing a prototype and testing it with field data.
The system workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can serve as a meaningful example of a knowledge service application.
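
A hedged sketch of the retrieval step follows: TF-IDF cosine similarity supplies α, a placeholder array stands in for the domain-specific β, and a simple weighted sum (an assumption, not the paper's actual combination rule) yields γ, from which the top-3 cases are retrieved.

```python
# Sketch of case retrieval: TF-IDF similarity between a new case and stored
# cases (alpha), combined with a domain-specific nuclear-system score (beta)
# into a final score gamma. The weight w and the beta values are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_base = ["reactor coolant pump specification for export review",
             "steam generator tube material export request",
             "valve actuator for conventional thermal plant"]
new_case = "export request for reactor coolant pump seals"

vec = TfidfVectorizer()
tfidf = vec.fit_transform(case_base + [new_case])
alpha = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()  # doc-to-doc

beta = np.array([0.9, 0.7, 0.2])   # placeholder doc-to-nuclear-system scores
w = 0.6                            # assumed mixing weight
gamma = w * alpha + (1 - w) * beta

top3 = np.argsort(gamma)[::-1][:3]  # retrieve the 3 most similar cases
for i in top3:
    print(f"case {i}: gamma={gamma[i]:.3f}")
```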

Hybrid Scheme of Data Cache Design for Reducing Energy Consumption in High Performance Embedded Processor (고성능 내장형 프로세서의 에너지 소비 감소를 위한 데이타 캐쉬 통합 설계 방법)

  • Shim, Sung-Hoon; Kim, Cheol-Hong; Jhang, Seong-Tae; Jhon, Chu-Shik
    • Journal of KIISE: Computer Systems and Theory / v.33 no.3 / pp.166-177 / 2006
  • Cache sizes tend to grow in embedded processors as technology scales to smaller transistors and lower supply voltages. However, a larger cache demands more energy, so the ratio of cache energy consumption to total processor energy is growing. Many schemes have been proposed to reduce cache energy consumption, but each of these previous schemes addresses only one side of the problem: either dynamic cache energy or static cache energy. In this paper, we propose a hybrid scheme that reduces dynamic and static cache energy simultaneously. For this hybrid scheme, we adopt two existing techniques: the drowsy cache technique to reduce static cache energy, and the way-prediction technique to reduce dynamic cache energy. Additionally, we propose an early wake-up technique based on the program counter to reduce the penalty caused by the drowsy cache technique. We focus on the level-1 data cache. The hybrid scheme reduces static and dynamic cache energy consumption simultaneously, and our early wake-up scheme reduces the extra program execution cycles the hybrid scheme would otherwise incur.
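
As a rough illustration of how a PC-indexed early wake-up can hide the drowsy-cache penalty, here is a toy Python simulation; the line count, latencies, and predictor structure are illustrative assumptions, and the way-prediction side of the hybrid is not modeled.

```python
# Toy simulation of the early wake-up idea: drowsy lines cost an extra wake-up
# cycle on access, and a PC-indexed table pre-wakes the predicted line to hide
# that penalty. Sizes and latencies are illustrative, not the paper's.
DROWSY, AWAKE = 0, 1
NUM_LINES = 64
lines = [DROWSY] * NUM_LINES            # all lines start in low-leakage mode
predictor = {}                          # PC -> last line index touched

def access(pc, line_idx):
    """Return cycles spent on the access, and train the predictor."""
    cycles = 1
    if lines[line_idx] == DROWSY:       # wake-up penalty was not hidden
        lines[line_idx] = AWAKE
        cycles += 1
    predictor[pc] = line_idx            # learn for next time
    return cycles

def early_wakeup(next_pc):
    """Before the next instruction executes, pre-wake its predicted line."""
    idx = predictor.get(next_pc)
    if idx is not None:
        lines[idx] = AWAKE              # penalty hidden if prediction is right

# Example: the second access at the same PC hits an already-awake line.
print(access(0x400, 5))   # 2 cycles (wake-up penalty paid)
lines[5] = DROWSY          # lines periodically fall back to drowsy mode
early_wakeup(0x400)
print(access(0x400, 5))   # 1 cycle (early wake-up hid the penalty)
```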

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae; Lee, Bomi; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Go-playing artificial intelligence program, won a landmark victory against Lee Sedol. Many people had thought that a machine could not beat a human at Go because, unlike chess, the number of possible moves is greater than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems; it shows especially good performance in image recognition, and in high-dimensional domains such as voice, image, and natural language, where existing machine learning techniques struggled to perform well. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate how well the models classify the class of interest, instead of overall accuracy. The deep learning techniques were applied in the experiment as follows. The CNN algorithm recognizes features by reading adjacent values around a specific value, but in business data the distance between fields does not matter because the fields are usually independent; in this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole record is read at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of each field's position.
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because the CNN performed well on a binary classification problem to which it has rarely been applied, as well as in the fields where its effectiveness has been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve binary classification problems in business.
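
The CNN variant described above (one convolutional layer whose kernel spans all input fields at once, dropout of 0.5, F1-based evaluation) can be sketched with Keras as follows; the Portuguese bank data is stood in for by random values, and the layer sizes are illustrative assumptions.

```python
# Sketch of the CNN variant described above: a Conv1D layer whose kernel spans
# all fields at once, dropout of 0.5, and F1-based evaluation. Random values
# stand in for the Portuguese bank telemarketing data.
import numpy as np
from tensorflow import keras
from sklearn.metrics import f1_score

n_fields = 16                                   # number of input variables
X = np.random.rand(1000, n_fields, 1)           # (samples, fields, channels)
y = np.random.randint(0, 2, size=1000)          # opened an account or not

model = keras.Sequential([
    keras.layers.Input(shape=(n_fields, 1)),
    # kernel_size = n_fields reads the whole record at once, as described.
    keras.layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(16, activation="relu"),  # extra hidden decision layer
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y, pred))
```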