• Title/Summary/Keyword: Matching Network


A Study on MRD Methods of A RAM-based Neural Net (RAM 기반 신경망의 MRD 기법에 관한 연구)

  • Lee, Dong-Hyung;Kim, Seong-Jin;Park, Sang-Moo;Lee, Soo-Dong;Ock, Cheol-Young
    • Journal of the Korea Society of Computer and Information / v.14 no.9 / pp.11-19 / 2009
  • A RAM-based Neural Network (RBNN) with multiple discriminators is more effective than an RBNN with a single discriminator. The Experience-Sensitive Cumulative Neural Network and the 3-D Neuro System (3DNS), which accumulate feature points, improved the performance of the RBNN by enabling additional and repeated patterns to be trained and a generalized pattern to be extracted. In the recognition process of a neural network with multiple discriminators, the class is selected by the MRD value, which is the accumulated response sum of each class. However, such networks suffer from saturation of their memory cells as the training volume increases, so the MRD decision degrades and the recognition rate drops. In this paper, we propose methods that improve the MRD: the optimum MRD, the matching-ratio prototype for the generalized image, the cumulative filter ratio, and the gap of the prototype-response MRD. We evaluated the performance on the NIST database without preprocessing and compared the model with 3DNS. The proposed MRD methods achieve a higher recognition rate and are more stable under distortion of the input pattern than 3DNS.
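As context for the selection step described in this abstract, here is a minimal sketch of MRD-style class selection in a multi-discriminator RAM network, assuming one discriminator per class whose response is the count of firing RAM nodes. The structure and names are illustrative; the paper's optimum-MRD refinements are not reproduced.

```python
# Minimal sketch: class selection by accumulated discriminator response,
# as in WISARD-style RAM networks. One discriminator per class; a RAM
# node "fires" if it has stored the address presented by the input.

def discriminator_response(ram_nodes: list[set], addresses: list[int]) -> int:
    """Count RAM nodes whose memory contains the addressed bit pattern."""
    return sum(1 for node, addr in zip(ram_nodes, addresses) if addr in node)

def mrd_select(discriminators: dict[str, list[set]], addresses: list[int]) -> str:
    """Return the class whose discriminator accumulates the largest response."""
    responses = {cls: discriminator_response(nodes, addresses)
                 for cls, nodes in discriminators.items()}
    return max(responses, key=responses.get)
```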

An Occupancy based O/D Data Construction Methodology for Expressway Network (고속도로를 대상으로 한 재차인원별 O/D 구축방법론 연구)

  • Choi, Keechoo;Lee, Jungwoo;Yi, Yongju;Baek, Seungkirl
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.6D / pp.569-575 / 2010
  • Occupancy-based O/D data are essential for measuring the efficiency of transportation policies such as HOV/HOT lanes, ramp metering, and public parking stations. There have been many studies on occupancy survey methodology and on O/D estimation using TCS (Toll Collection System) data separately, but occupancy O/D estimation using TCS data has not been attempted thus far. This paper suggests an overall process from the data collection stage to the occupancy O/D estimation stage. A field survey was performed at the northbound Seoul toll station of the Gyeongbu Expressway for two hours in each of the AM peak, PM off-peak, PM peak, and midnight periods of a single day. The TCS data and the field survey data were then matched by tollbooth ID, car type/mode, and arrival time. One typical result is that single-occupancy vehicles bound for Seoul during the AM peak amounted to 60% of the total. With the key output of this study and the O/D estimation methodology suggested, the whole centroid-to-centroid occupancy O/D of the country could be constructed, enabling various applications that require occupancy information.
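The record-matching step in this abstract (field-survey occupancy counts joined to TCS transactions by tollbooth ID, car type, and arrival time) could look roughly like the following pandas sketch; the column names and the 30-second tolerance are assumptions for illustration, not the paper's schema.

```python
import pandas as pd

def match_tcs_survey(tcs: pd.DataFrame, survey: pd.DataFrame) -> pd.DataFrame:
    # Nearest-arrival-time join within a tolerance, with exact matches
    # required on tollbooth ID and vehicle class.
    tcs = tcs.sort_values("arrival_time")
    survey = survey.sort_values("arrival_time")
    return pd.merge_asof(
        survey, tcs,
        on="arrival_time",
        by=["tollbooth_id", "vehicle_class"],
        tolerance=pd.Timedelta("30s"),
        direction="nearest",
    )
```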

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), the speed of data input and output (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, it can serve as an important new source of value creation, because this information covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) present the importance of a topic through a treemap based on a scoring system and frequency; and (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data-processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; interaction with the data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm the utility of storytelling and time-series analysis based on it. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
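A minimal sketch of the daily topic-keyword function (1) above, assuming tweets have already been cleaned (stop words removed, nouns extracted) and using gensim's LDA as a generic stand-in for the paper's topic-modeling pipeline; the scoring and treemap layers are not reproduced.

```python
from gensim import corpora, models

def daily_topics(token_lists, num_topics=10, top_n=5):
    # token_lists: one list of noun tokens per tweet for a given day.
    dictionary = corpora.Dictionary(token_lists)
    corpus = [dictionary.doc2bow(tokens) for tokens in token_lists]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)
    # One ranked keyword set per topic, i.e. the daily topic-keyword sets.
    return [[word for word, _ in lda.show_topic(t, topn=top_n)]
            for t in range(num_topics)]
```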

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems / v.15 no.4 / pp.1-22 / 2009
  • The World Wide Web is transitioning from a mere collection of documents that contain useful information toward a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration both within and across enterprises. To make Web services operational for service-oriented computing, it is important that Web service repositories not only be well structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is widely recognized, the demand for effective Web service discovery mechanisms is growing concomitantly. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed: most existing solutions are either too rudimentary to be useful or too domain-dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specifications in WSDL documents. We believe this is one of the first attempts at applying data mining techniques to the Web service discovery domain. Our proposed approach has several appealing features: (1) it minimizes the prior knowledge required from both service consumers and publishers; (2) it avoids relying on domain-dependent ontologies; and (3) it can visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network, and we have empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report preliminary results demonstrating the efficacy of the proposed approach.
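To make the organizing idea concrete, here is a minimal sketch that tokenizes identifier terms from WSDL documents, vectorizes them, and clusters similar services. The paper uses an unsupervised artificial neural network; scikit-learn's k-means stands in here purely for brevity, and the name-extraction regex is an illustrative assumption.

```python
import re
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def wsdl_terms(wsdl_text: str) -> str:
    # Pull name="..." attributes from the XML and split camelCase terms.
    names = re.findall(r'name="([^"]+)"', wsdl_text)
    tokens = []
    for name in names:
        tokens += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", name)
    return " ".join(t.lower() for t in tokens)

def cluster_services(wsdl_docs: list[str], k: int = 5):
    texts = [wsdl_terms(doc) for doc in wsdl_docs]
    matrix = TfidfVectorizer().fit_transform(texts)
    return KMeans(n_clusters=k, n_init=10).fit_predict(matrix)
```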


A Study On Design of ZigBee Chip Communication Module for Remote Radiation Measurement (원격 방사선 측정을 위한 ZigBee 원칩형 통신 모듈 설계에 대한 연구)

  • Lee, Joo-Hyun;Lee, Seung-Ho
    • Journal of IKEEE / v.18 no.4 / pp.552-558 / 2014
  • This paper suggests how to design a ZigBee-chip-based communication module for remotely measuring radiation levels. Whereas a ZigBee system generally requires two control processors, the suggested module is configured with a single chip that integrates the controller and the ZigBee RF device. The module consists of a wireless communication controller; a sensor and high-voltage generator; a charger and power-supply circuit; a wired communication part; and an RF circuit and antenna. The wireless communication controller controls the ZigBee wireless communication and performs the remote radiation measurement. The sensor and high-voltage generator produces 500 V in two cascaded stages and amplifies and filters the radiation pulses detected by the G-M tube. The charger and power-supply circuit charges the lithium-ion battery and supplies power to the one-chip processor. The wired communication part provides an RS-485/422 interface as well as a USB interface for wired remote communication, PC interfacing, and debugging. The RF circuit and antenna use passive RLC components to implement the balun and the antenna impedance-matching circuit for the chip antenna, enabling wireless communication. After the module was configured, remote measurement tests were conducted: data were successfully transmitted over distances of 10 m and 100 m, and radiation levels were measured remotely. The module enables an economical environment for remote radiation measurement, as it consumes little power and costs little. By securing the linearity of the radiation measuring device and minimizing its size, radiation can be measured reliably and monitored in real time.
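The abstract does not give a packet format, so the following framing sketch is purely hypothetical: it only illustrates how a remote G-M tube reading might be serialized for the wireless link, with header, node ID, counts per minute, battery level, and checksum all being assumed fields.

```python
import struct

FMT = "<BHIHB"  # header, node id, counts-per-minute, battery (mV), checksum

def pack_reading(node_id: int, cpm: int, battery_mv: int) -> bytes:
    body = struct.pack("<BHIH", 0xA5, node_id, cpm, battery_mv)
    return body + bytes([sum(body) & 0xFF])  # simple additive checksum

def unpack_reading(frame: bytes):
    header, node_id, cpm, battery_mv, checksum = struct.unpack(FMT, frame)
    assert header == 0xA5 and checksum == sum(frame[:-1]) & 0xFF
    return node_id, cpm, battery_mv
```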

Compact Orthomode Transducer for Field Experiments of Radar Backscatter at L-band (L-밴드 대역 레이더 후방 산란 측정용 소형 직교 모드 변환기)

  • Hwang, Ji-Hwan;Kwon, Soon-Gu;Joo, Jeong-Myeong;Oh, Yi-Sok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.22 no.7 / pp.711-719 / 2011
  • A study on the miniaturization of an L-band orthomode transducer (OMT) for field experiments of radar backscatter is presented in this paper. Thanks to a newly designed junction structure based on a waveguide taper, the proposed OMT does not require the additional waveguide taper structures usually needed to connect to a standard adaptor. The total length of the L-band OMT is about 1.2 ${\lambda}_o$ (310 mm), which is 60 % of the size of existing OMTs. To improve the matching and isolation of each polarization, two conducting posts are inserted. A bandwidth of 420 MHz and an isolation level of about 40 dB were measured in the operating band. An L-band scatterometer consisting of the manufactured OMT, a horn antenna, and a network analyzer (Agilent 8753E) was calibrated using STCT and 2DTST to analyze the measurement accuracy of radar backscatter. The full-polarimetric RCSs of the test target, a 55 cm trihedral corner reflector, measured by the calibrated scatterometer have errors of -0.2 dB and 0.25 dB for vv- and hh-polarization, respectively. The effective isolation level is about 35.8 dB in the operating band. The horn antenna used in the measurements has a length of 300 mm, an aperture of $450{\times}450\;mm^2$, and HPBWs of $29.5^{\circ}$ and $36.5^{\circ}$ in the principal E- and H-planes.
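For reference, the 55 cm trihedral corner reflector used as the calibration target has a closed-form theoretical maximum RCS. A short worked example, assuming the standard triangular-trihedral formula with edge length $a = 0.55\ \mathrm{m}$ and taking $\lambda_o \approx 310\ \mathrm{mm}/1.2 \approx 258\ \mathrm{mm}$ from the stated OMT length (the exact operating frequency is not quoted in the abstract):

$$\sigma_{\max} = \frac{4\pi a^4}{3\lambda^2} = \frac{4\pi\,(0.55\ \mathrm{m})^4}{3\,(0.258\ \mathrm{m})^2} \approx 5.8\ \mathrm{m}^2 \approx 7.6\ \mathrm{dBsm}$$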

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim, Eun-Mi;Park, Seong-Mi;Kim, Kwang-Hee;Lee, Bae-Ho
    • Journal of KIISE:Software and Applications / v.32 no.11 / pp.1021-1028 / 2005
  • A general solution for classification and regression problems can be found by matching and modifying matrices with real-world information and then learning these matrices in neural networks. This paper treats the primary space as the real world, and the dual space as the space onto which the primary space is matched through a kernel. In practice there are two kinds of problems: complete systems, whose answer can be obtained with an inverse matrix, and ill-posed or singular systems, whose answer cannot be obtained directly from the inverse of the given matrix. Since problems are often of the latter kind, a regularization parameter is needed to turn ill-posed or singular problems into complete systems. This paper compares the performance, on both classification and regression problems, of GCV and L-Curve, which are well-known methods for obtaining the regularization parameter, with kernel methods. Both GCV and L-Curve obtain regularization parameters well, and their performances are similar, although they give slightly different results under different problem conditions. However, these methods are two-step solutions, because the regularization parameter must first be computed and the problem is then handed to another solving method. Compared with GCV and L-Curve, kernel methods are a one-step solution that learns the regularization parameter simultaneously within the learning process of the pattern weights. This paper also suggests dynamic momentum, which is learned under a limited proportional condition between the learning epoch and the performance on the given problem, to increase the performance and precision of regularization. Finally, experiments show that the suggested solution obtains better or equivalent results compared with GCV and L-Curve, using the Iris data, a standard data set for classification; Gaussian data, typical of singular systems; and the Shaw data, a one-dimensional image restoration problem.
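As a concrete reference for the two-step baseline discussed in this abstract, here is a minimal sketch of choosing a Tikhonov regularization parameter by GCV for a linear system; the paper's kernel-based one-step method and dynamic momentum are not reproduced, and the search grid is illustrative.

```python
import numpy as np

def gcv_score(A: np.ndarray, y: np.ndarray, lam: float) -> float:
    # GCV(lambda) = (||(I - H)y||^2 / n) / (trace(I - H) / n)^2,
    # where H is the influence matrix of the regularized solution.
    n = len(y)
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    residual = y - H @ y
    return (residual @ residual / n) / (np.trace(np.eye(n) - H) / n) ** 2

def best_lambda(A, y, grid=np.logspace(-8, 2, 60)) -> float:
    # Two-step use: pick lambda first, then solve the regularized system.
    return min(grid, key=lambda lam: gcv_score(A, y, lam))
```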

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.123-139 / 2019
  • The job classification systems of major job sites differ from site to site and also differ from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against the job offer information of major job sites and the NCS (National Competency Standards). For this purpose, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and these occupations are derived. Using these association rules, we propose a data-based intelligent job classification system that maps the job classification systems of major job sites onto the SQF. First, major job sites were selected to obtain information on the job classification systems of the SW market. We then identified how to collect job information from each site and collected the data through open APIs. Focusing on the relationships in the data, only job postings posted on multiple job sites at the same time were kept; all other job information was deleted. Next, the job classification systems of the job sites were mapped using the association rules derived from the association analysis. After completing this mapping between the sites, we consulted experts, further mapped the SQF, and finally propose the new job classification system. As a result, more than 30,000 job listings were collected in XML format using the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering down to about 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining algorithm. Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class, covering IT consulting and computer system, network, and security related jobs, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering database and system operation related jobs, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology related jobs, consists of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible depth of classification, unlike other existing systems: WORKNET divides jobs into three levels; JOBKOREA divides jobs into two levels and subdivides them into keywords; and saramin likewise divides jobs into two levels with keyword-form subdivisions. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. In the proposed system, some jobs stop at the second classification level, while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. We also combined the rules derived from association analysis of the collected market data with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system that reflects market demand by mapping between occupations based on data through association analysis, rather than on the intuition of a few experts. However, this study has the limitation that it cannot fully reflect market demand as it changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries following its success in the SW industry.
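A minimal sketch of the association-rule step described above: treat each cross-posted job posting as a transaction whose items are the category labels it carries on the different sites, mine frequent itemsets with Apriori via mlxtend, and derive cross-site mapping rules. The thresholds are illustrative assumptions, not the paper's settings.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

def mine_mapping_rules(postings: list[list[str]]) -> pd.DataFrame:
    # One-hot encode transactions: rows = postings, columns = labels.
    labels = sorted({label for posting in postings for label in posting})
    onehot = pd.DataFrame(
        [[label in posting for label in labels] for posting in postings],
        columns=labels,
    )
    frequent = apriori(onehot, min_support=0.01, use_colnames=True)
    return association_rules(frequent, metric="confidence", min_threshold=0.7)
```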