• Title/Summary/Keyword: Information input algorithm


ICT inspection System for Flexible PCB using Pin-driver and Ground Guarding Method (핀 드라이버와 접지가딩 기법을 적용한 모바일 디스플레이용 연성회로기판의 ICT검사 시스템)

  • Han, Joo-Dong;Choi, Kyung-Jin;Lee, Young-Hyun;Kim, Dong-Han
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.6 / pp.97-104 / 2010
  • In this paper, an ICT (in-circuit tester) inspection system and inspection algorithm are proposed to detect whether defects exist in the devices mounted on the flexible PCBs of cell phones and mobile display devices. The system is composed of a PD (pin-driver) and GGM (ground guarding method). The structural characteristics of these flexible PCBs, which determine how test signals are input and output, are analyzed. The test signal used to investigate the characteristics of passive components is generated from a modified circuit diagram and the proposed inspection algorithm. The PM (pin-map) is determined from the circuit diagram and holds information about the kind of test signal to be applied and the pad number to which the signal is connected. The PD is designed to drive the proper test signal onto a specific pad and is adjusted according to the PM so that the reconstructed circuit has the minimum number of nodes and meshes. The proposed ICT inspection system is realized using the PD and GGM. Using the system, experiments on individual passive components are performed to investigate the measurement accuracy of the developed system, and an experiment on a real flexible PCB model is performed to verify its effectiveness.
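
The abstract does not give the paper's data structures; purely as a rough illustration, a pin-map can be thought of as a table mapping each component under test to the test-signal type and the pads it is probed and guarded on. All field names and values below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PinMapEntry:
    component: str      # reference designator of the passive component, e.g. "R12"
    signal_type: str    # kind of test signal to apply, e.g. "DC" or "AC_1kHz"
    source_pad: int     # pad number the pin-driver applies the signal to
    sense_pad: int      # pad number the measurement is taken from
    guard_pads: tuple   # pads tied to ground to guard adjacent current paths

# Hypothetical pin-map for two components on a flexible PCB
pin_map = [
    PinMapEntry("R12", "DC", source_pad=3, sense_pad=7, guard_pads=(4, 5)),
    PinMapEntry("C03", "AC_1kHz", source_pad=8, sense_pad=9, guard_pads=(10,)),
]

for entry in pin_map:
    print(f"{entry.component}: drive {entry.signal_type} on pad {entry.source_pad}, "
          f"sense on pad {entry.sense_pad}, guard pads {entry.guard_pads}")
```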

Skin Region Detection Using Histogram Approximation Based Mean Shift Algorithm (Mean Shift 알고리즘 기반의 히스토그램 근사화를 이용한 피부 영역 검출)

  • Byun, Ki-Won;Joo, Jae-Heum;Nam, Ki-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.21-29 / 2011
  • In existing skin detection methods that use skin color information defined from prior knowledge, the threshold used to separate the background from the skin region is chosen subjectively through experiments. Moreover, in these methods the threshold is selected manually according to the background and illumination conditions, so their performance depends entirely on a threshold estimated through repeated experiments. To overcome this drawback, this paper proposes a skin region detection method that uses a histogram approximation based on the mean shift algorithm. The proposed method separates the background region and the skin region by applying mean shift to the histogram of the skin map of the input image, which is generated by comparing the similarity to a standard skin color in the CbCr color space, and by adaptively finding the brightness level at which the procedure converges to a maximum. Since the histogram is a discontinuous function accumulated over the brightness values of the pixels, it is approximated as a Gaussian Mixture Model (GMM) using the Bezier curve method. Thus, instead of relying on a manually selected threshold as existing methods do, the proposed method detects the skin region by using mean shift to adaptively find the maximum that becomes the dividing point. Experiments show that the method detects the skin region effectively with high performance.
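
The paper's exact formulation is not given in the abstract; as a minimal sketch of the core idea, the snippet below runs a 1-D mean shift over a brightness histogram to locate a mode. The Gaussian kernel and fixed bandwidth are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def mean_shift_mode(hist, start, bandwidth=8.0, iters=50, tol=1e-3):
    """Shift an initial brightness level toward a local mode of a 1-D histogram."""
    levels = np.arange(len(hist), dtype=float)   # brightness levels 0..255
    x = float(start)
    for _ in range(iters):
        w = hist * np.exp(-0.5 * ((levels - x) / bandwidth) ** 2)  # kernel-weighted counts
        x_new = np.sum(w * levels) / (np.sum(w) + 1e-12)           # weighted mean = shift step
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy skin-map histogram with two clusters (background vs. skin brightness)
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 3000)])
hist, _ = np.histogram(samples, bins=256, range=(0, 256))

print("mode near background:", mean_shift_mode(hist, start=80))
print("mode near skin      :", mean_shift_mode(hist, start=150))
```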

A Program Transformational Approach for Rule-Based Hangul Automatic Programming (규칙기반 한글 자동 프로그램을 위한 프로그램 변형기법)

  • Hong, Seong-Su;Lee, Sang-Rak;Sim, Jae-Hong
    • The Transactions of the Korea Information Processing Society / v.1 no.1 / pp.114-128 / 1994
  • It is very difficult for a nonprofessional programmer in Korea to write a program in a very-high-level language such as V, REFINE, GIST, or SETL, because the semantic primitives of these languages are based on predicate calculus, sets, mappings, or a restricted natural language, and it takes time to become familiar with them. In this paper, we suggest a method to reduce these difficulties by programming with declarative, procedural, and aggregate constructs, and we design and implement an experimental knowledge-based automatic programming system called HAPS (Hangul Automatic Program System). The input to HAPS is a specification such as a Hangul abstract algorithm and data type or Hangul procedural constructs, and its output is a C program. Its operation is based on rule-based program transformation techniques, and the problem area is general. The control structure of HAPS accepts the program specification, transforms it according to the appropriate rule in the rule base, and stores the transformed specification in the global database. HAPS repeats this procedure until the target C program is fully constructed.
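
The rule formalism of HAPS is not shown in the abstract; purely as an illustration of rule-based program transformation, the sketch below repeatedly applies pattern-based rewrite rules to a specification string until no rule fires. The rules and the toy specification are invented for the example and use English keywords rather than Hangul constructs.

```python
import re

# Hypothetical rewrite rules: (pattern, replacement), applied until a fixed point is reached.
rules = [
    (r"repeat (\w+) from (\d+) to (\d+):", r"for (int \1 = \2; \1 <= \3; \1++) {"),
    (r"print (\w+)", r'printf("%d\\n", \1);'),
    (r"end repeat", r"}"),
]

def transform(spec: str) -> str:
    changed = True
    while changed:                      # keep rewriting until no rule matches anymore
        changed = False
        for pattern, repl in rules:
            spec, n = re.subn(pattern, repl, spec)
            if n:
                changed = True
    return spec

spec = "repeat i from 1 to 10:\n    print i\nend repeat"
print(transform(spec))                  # prints a small C-like loop built from the spec
```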


Fast Median Filtering Algorithms for Real-Valued 2-dimensional Data (실수형 2차원 데이터를 위한 고속 미디언 필터링 알고리즘)

  • Cho, Tai-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.11 / pp.2715-2720 / 2014
  • Median filtering is very effective at removing impulse-type noise, so it has been widely used in many signal processing applications. However, because of the time complexity caused by its non-linearity, median filtering is usually applied with a small filter window. A lot of work has been done on fast median filtering algorithms, but most of them are efficient only for input data with finite integer values, such as images, and little work has addressed fast 2-D median filtering for real-valued 2-D data. In this paper, a fast and simple 2-D median filter is presented, and its performance is compared with Matlab's 2-D median filter and a heap-based 2-D median filter. The proposed algorithm is shown to be much faster than Matlab's 2-D median filter and consistently faster than the heap-based algorithm, which is much more complicated than the proposed one. In addition, a more efficient median filtering scheme for 2-D real-valued data with a finite range of values is presented, which uses higher-bit integer 2-D median filtering with negligible quantization error.
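
The paper's own fast filter is not described in the abstract; the sketch below only illustrates the quantization idea mentioned in the last sentence: map real values with a known range onto higher-bit integers, median filter the integers, and map back. It uses scipy.ndimage.median_filter as a stand-in filter, which is an assumption, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def quantized_median_filter(data, size=5, bits=16):
    """Median-filter real-valued 2-D data via higher-bit integer quantization."""
    lo, hi = data.min(), data.max()
    levels = (1 << bits) - 1
    scale = levels / (hi - lo) if hi > lo else 1.0
    q = np.round((data - lo) * scale).astype(np.uint16)   # real values -> 16-bit integers
    filtered_q = median_filter(q, size=size)               # integer 2-D median filter
    return filtered_q.astype(np.float64) / scale + lo      # back to the real-valued range

# Toy example: real-valued surface corrupted by impulse noise
rng = np.random.default_rng(1)
img = np.outer(np.linspace(0.0, 1.0, 128), np.linspace(0.0, 1.0, 128))
noisy = img.copy()
impulses = rng.random(img.shape) < 0.05
noisy[impulses] = rng.uniform(0.0, 1.0, impulses.sum())

denoised = quantized_median_filter(noisy, size=5)
print("max deviation from direct float median filtering:",
      np.max(np.abs(denoised - median_filter(noisy, size=5))))
```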

Generative optical flow based abnormal object detection method using a spatio-temporal translation network

  • Lim, Hyunseok;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.11-19 / 2021
  • An abnormal object is a person, object, or mechanical device that behaves in an abnormal or unusual way and therefore needs observation or supervision. To detect such objects with an artificial intelligence algorithm, without continuous human intervention, methods that observe the distinctiveness of temporal features using optical flow are widely used. In this study, an abnormal situation is identified by training an algorithm that translates an input image frame into an optical flow image using a Generative Adversarial Network (GAN). In particular, to improve the model's ability to identify abnormal behavior, we propose improvements to the pre-processing step, which excludes unnecessary outliers, and to the post-processing step, which increases identification accuracy on the test dataset after training. UCSD Pedestrian and UMN Unusual Crowd Activity were used as training datasets for detecting abnormal behavior. On the UCSD Ped2 dataset the proposed method achieved a frame-level AUC of 0.9450 and an EER of 0.1317, an improvement over the models of previous studies.
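
The abstract reports frame-level AUC and EER; as a small hedged sketch rather than the paper's evaluation code, the snippet below computes both metrics from per-frame anomaly scores and ground-truth labels using scikit-learn. The scores and labels are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def frame_level_auc_eer(scores, labels):
    """AUC and equal error rate from per-frame anomaly scores (higher = more abnormal)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    roc_auc = auc(fpr, tpr)
    fnr = 1.0 - tpr
    eer_idx = np.argmin(np.abs(fnr - fpr))    # operating point where FNR is closest to FPR
    eer = (fpr[eer_idx] + fnr[eer_idx]) / 2.0
    return roc_auc, eer

# Toy example: 200 frames, abnormal frames tend to receive higher anomaly scores
rng = np.random.default_rng(2)
labels = np.concatenate([np.zeros(150, dtype=int), np.ones(50, dtype=int)])
scores = np.concatenate([rng.normal(0.3, 0.1, 150), rng.normal(0.7, 0.15, 50)])

roc_auc, eer = frame_level_auc_eer(scores, labels)
print(f"frame-level AUC = {roc_auc:.4f}, EER = {eer:.4f}")
```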

A Study on the Utilization of Drilling Investigation Information (시추조사 정보 활용방안에 관한 연구)

  • Jinhwan Kim;Yong Baek;Jong-Hyun Lee;Gyuphil Lee;Woo-Seok Kim
    • The Journal of Engineering Geology / v.33 no.4 / pp.531-541 / 2023
  • Digital data is the most important resource in the era of the fourth industrial revolution, AI, and smart construction. In the civil engineering field, basic data begins with ground investigation. The Ministry of Land, Infrastructure and Transport operates the Geotechnical Information Database Center to manage ground survey data, including drilling, but its focus is on data distribution. This study seeks to devise a plan for long-term use of the results of drilling investigations conducted for the design and construction of various construction projects. For this purpose, a pilot area was set up and a 'geotechnical design parameters digital map' was created from some of the geotechnical design parameters in the drilling investigation data. Using the developed algorithm, digital maps of the friction angle and permeability coefficient of the hard rock stratum in the pilot area were produced. The geotechnical design parameters digital map can show the overall condition of the ground, but its reliability needs to be improved because of the limited amount of input data. With additional research, it will be possible to produce a more complete geotechnical design parameters digital map.
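
The paper's mapping algorithm is not described in the abstract; as an illustrative sketch only, the snippet below interpolates scattered borehole measurements of a design parameter (here, friction angle) onto a grid with inverse-distance weighting, one common way to build such a parameter map. All coordinates and values are made up.

```python
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of borehole values onto a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(pts[:, None, :] - xy[None, :, :], axis=2)  # grid-to-borehole distances
    d = np.maximum(d, 1e-9)                                       # avoid division by zero
    w = 1.0 / d ** power
    z = (w * values).sum(axis=1) / w.sum(axis=1)
    return z.reshape(gx.shape)

# Hypothetical boreholes: (x, y) positions and friction angle [deg] of the hard rock stratum
boreholes = np.array([[10.0, 20.0], [40.0, 15.0], [70.0, 60.0], [25.0, 80.0]])
friction_angle = np.array([33.0, 36.5, 31.0, 38.0])

grid = idw_grid(boreholes, friction_angle, np.linspace(0, 100, 50), np.linspace(0, 100, 50))
print("interpolated friction angle range:", grid.min().round(2), "to", grid.max().round(2))
```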

A Study on Unsupervised Learning Method of RAM-based Neural Net (RAM 기반 신경망의 비지도 학습에 관한 연구)

  • Park, Sang-Moo;Kim, Seong-Jin;Lee, Dong-Hyung;Lee, Soo-Dong;Ock, Cheol-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.1 / pp.31-38 / 2011
  • A RAM-based neural net is a weightless neural network built on a binary neural network. The 3-D neural network used in this paper is a binary neural network with multiple information bits that stores training counts. Its recognition method, based on the MRD technique, relies on supervised learning, so the network by itself cannot distinguish between categories, and good separation can be achieved only when labeled training data with well-separated categories are provided. In this paper, an unsupervised learning algorithm is proposed that trains the existing 3-D neural network without class labels, distinguishing categories from the input training patterns alone. The training data for the proposed unsupervised learning were taken from the NIST/MNIST handwritten digit database, which consists of patterns for the digits 0 to 9, presented in random order as training patterns. Through experiments, the network determines the number of discriminators, each of which can be interpreted as representing one form of handwritten digit.
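
This is not the paper's 3-D network; as a generic sketch of a weightless (RAM-based) discriminator, the snippet below groups input bits into tuples that address small RAM nodes, each of which counts how often an address was seen during training, and the discriminator's response is the number of RAM nodes that recognize a pattern. The tuple size and input size are arbitrary choices for illustration.

```python
import numpy as np

class RAMDiscriminator:
    """Generic weightless (RAM-based) discriminator with training counts per RAM address."""
    def __init__(self, n_bits, tuple_size, rng):
        self.order = rng.permutation(n_bits)                 # random input-to-RAM mapping
        self.tuples = self.order.reshape(-1, tuple_size)     # each row addresses one RAM node
        self.rams = [dict() for _ in self.tuples]            # address -> training count

    def train(self, bits):
        for ram, idx in zip(self.rams, self.tuples):
            addr = tuple(bits[idx])
            ram[addr] = ram.get(addr, 0) + 1

    def response(self, bits):
        # number of RAM nodes whose addressed location was written during training
        return sum(1 for ram, idx in zip(self.rams, self.tuples) if tuple(bits[idx]) in ram)

rng = np.random.default_rng(3)
disc = RAMDiscriminator(n_bits=64, tuple_size=4, rng=rng)

pattern = (rng.random(64) > 0.5).astype(int)   # toy binary pattern standing in for a digit image
disc.train(pattern)
print("response to trained pattern :", disc.response(pattern))
print("response to random pattern  :", disc.response((rng.random(64) > 0.5).astype(int)))
```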

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that a machine could not beat a human at Go, because unlike chess the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems; it performs especially well in image recognition and, more broadly, on high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared the performance of models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network. Because not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading values adjacent to a given value, but in business data the distance between fields carries no meaning because the fields are usually independent; in this experiment we therefore set the CNN filter size to the number of fields, so that the whole record is read at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed with respect to the first layer in order to reduce the influence of field position.
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNN has rarely been applied to binary classification problems, in contrast to the fields where its effectiveness has already been proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance gain. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
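
The paper's exact architectures are not specified in the abstract; as a hedged sketch of the described setup (a convolution filter spanning all fields, dropout probability 0.5, F1 evaluation), the snippet below builds a 1-D CNN binary classifier in Keras. Layer sizes and the synthetic data are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

n_fields = 16                                   # number of input variables (assumed)
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, n_fields, 1)).astype("float32")   # stand-in for tabular records
y = (X[:, :8, 0].sum(axis=1) > 0).astype("int32")            # synthetic binary target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_fields, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=n_fields, activation="relu"),  # filter spans all fields
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),                # dropout probability 0.5, as in the abstract
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X[:1600], y[:1600], epochs=5, batch_size=64, verbose=0)

pred = (model.predict(X[1600:], verbose=0).ravel() > 0.5).astype(int)
print("F1 score on held-out split:", round(f1_score(y[1600:], pred), 3))
```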

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore venture companies' success factors and unique features in order to identify the sources of their competitive advantage over rivals. Venture companies tend to give high returns to investors, generally by making the best use of information technology, and for this reason many of them are keen on attracting avid investors. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, the credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on concerns such as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting empirical results based on the financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency rating for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies that generate a high level of profit. Above all, to evaluate a venture firm's efficiency effectively, it is important to understand its major contributing factors. This paper is therefore built on two ideas for classifying which companies are more efficient: (i) constructing a DEA-based multi-class rating for the sample companies and (ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs; recent DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we used DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the DEA results. The SVM method was first developed by Vapnik (1995). As a machine learning technique grounded in statistical theory, SVM has shown good generalization performance in classification tasks, resulting in numerous applications in many areas of business. SVM is essentially an algorithm that finds the maximum-margin hyperplane, the hyperplane with the maximum separation between classes; the support vectors are the points closest to this hyperplane. If the classes cannot be separated directly, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. We used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within a one-class error when the exact class is difficult to determine in the actual market. We therefore also report accuracy within one-class errors, where the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, regardless of the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we see a need to enhance the variable selection process, the kernel parameter selection, the generalization, and the sample size for multi-class classification.
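
This is not the paper's implementation; as a brief sketch under common assumptions, scikit-learn's SVC with an RBF (Gaussian radial basis function) kernel and its built-in one-vs-one multi-class handling can predict an efficiency-rating class from financial variables, and accuracy within a one-class error can be reported alongside exact accuracy. The features and ratings below are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in for financial variables of 154 venture companies and a 4-level DEA rating
rng = np.random.default_rng(5)
X = rng.normal(size=(154, 6))                                   # 6 hypothetical financial variables
y = np.clip((X[:, 0] + 0.5 * X[:, 1]).round().astype(int) + 2, 0, 3)   # ratings 0..3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM; SVC trains one-vs-one binary classifiers internally for multi-class targets
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo"))
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
exact = np.mean(pred == y_te)
within_one = np.mean(np.abs(pred - y_te) <= 1)                  # accuracy allowing a one-class error
print(f"exact accuracy: {exact:.3f}, within-one-class accuracy: {within_one:.3f}")
```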

Feature-based Non-rigid Registration between Pre- and Post-Contrast Lung CT Images (조영 전후의 폐 CT 영상 정합을 위한 특징 기반의 비강체 정합 기법)

  • Lee, Hyun-Joon;Hong, Young-Taek;Shim, Hack-Joon;Kwon, Dong-Jin;Yun, Il-Dong;Lee, Sang-Uk;Kim, Nam-Kug;Seo, Joon-Beom
    • Journal of Biomedical Engineering Research / v.32 no.3 / pp.237-244 / 2011
  • In this paper, a feature-based registration technique is proposed for pre-contrast and post-contrast lung CT images. It uses three-dimensional (3-D) features with their descriptors and estimates feature correspondences by nearest-neighbor matching in the feature space. We design a transformation model between the input image pairs using a free-form deformation (FFD) based on B-splines. Registration is achieved by minimizing an energy function that incorporates the smoothness of the FFD and the correspondence information, using a nonlinear conjugate gradient method. To deal with outliers in feature matching, the energy model integrates a robust estimator that discards outliers effectively by iteratively reducing a radius of confidence during the minimization. Performance was evaluated in terms of accuracy and efficiency on seven pairs of clinical lung CT images. For quantitative assessment, a radiologist specializing in thoracic imaging manually placed landmarks on each CT image pair. In a comparative evaluation against a conventional feature-based registration method, our algorithm showed improved performance in both accuracy and efficiency.
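
The paper's energy model is not given in the abstract; the sketch below only illustrates the outlier-handling idea described there: nearest-neighbor descriptor matching followed by iteratively discarding correspondences whose residual exceeds a shrinking confidence radius. The displacement model, thresholds, and data are invented for the example.

```python
import numpy as np

def match_and_prune(desc_a, desc_b, pts_a, pts_b, radius=30.0, shrink=0.7, rounds=5):
    """Nearest-neighbor matching in descriptor space, then iterative outlier rejection
    by shrinking a confidence radius on the spatial residuals."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn = d.argmin(axis=1)                                 # nearest descriptor in image B
    src, dst = pts_a, pts_b[nn]
    keep = np.ones(len(src), dtype=bool)
    for _ in range(rounds):
        shift = (dst[keep] - src[keep]).mean(axis=0)      # crude global displacement estimate
        resid = np.linalg.norm(dst - (src + shift), axis=1)
        keep = resid < radius                              # drop correspondences outside the radius
        radius *= shrink                                   # tighten the confidence radius
    return src[keep], dst[keep]

rng = np.random.default_rng(6)
pts_a = rng.uniform(0, 100, size=(40, 3))                  # 3-D feature locations, pre-contrast
pts_b = pts_a + np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.3, size=pts_a.shape)
desc_a = rng.normal(size=(40, 8))
desc_b = desc_a + rng.normal(0, 0.05, size=desc_a.shape)   # slightly perturbed descriptors

src, dst = match_and_prune(desc_a, desc_b, pts_a, pts_b)
print("inlier correspondences kept:", len(src))
```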