• Title/Abstract/Keyword: binary processing


Reproducing Rhythmic Idioms: A Comparison Between Healthy Older Adults and Older Adults With Mild Cognitive Impairment (리듬꼴에 따른 건강 노인과 경도인지장애 노인의 리듬 재산출 수행력 비교)

  • Chong, Hyun Ju;Lee, Eun Ji
    • Journal of Music and Human Behavior
    • /
    • v.16 no.1
    • /
    • pp.73-88
    • /
    • 2019
  • This research was conducted to compare rhythm reproduction abilities between older adults with and without mild cognitive impairment (MCI) and to analyze those abilities by rhythmic idiom. Participants between 60 and 85 years of age were recruited from senior community centers, dementia prevention centers, and senior welfare centers. A total of 57 participants were included in this study: 27 diagnosed with MCI and 30 healthy older adults (HOA). The experiment was conducted individually in a private room, in which each participant was presented with binary-meter rhythmic idioms in random order and instructed to reproduce them by finger tapping. Each participant's beat production was recorded with the Beat Processing Device (BPD) for iPad, which calculated rhythm reproduction scores from the rhythm ratio and the error among beats. Results showed only marginal differences between the two groups in mean rhythm reproduction scores. In terms of the rhythm ratio among beats, both groups' highest reproduction rate was for <♩ ♩>, and their lowest was for <♩. ♪>. In conclusion, there was no significant difference in rhythm reproduction ability between the HOA and MCI groups. However, the study found an interesting result related to the performance level of rhythmic idioms. This result provides therapeutic insight for formulating rhythm tasks for older adults.
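The rhythm-ratio measure can be illustrated with a short sketch; the function names and the ratio formula below are assumptions for illustration, not the BPD app's documented algorithm:

```python
def inter_tap_intervals(tap_times):
    """Durations between successive taps, in seconds."""
    return [b - a for a, b in zip(tap_times, tap_times[1:])]

def rhythm_ratio(tap_times):
    """Ratio of the first inter-tap interval to the second.
    A <quarter, quarter> idiom targets 1:1; <dotted-quarter, eighth>
    targets 3:1, so deviation from the target quantifies the error."""
    first, second = inter_tap_intervals(tap_times)[:2]
    return first / second

taps = [0.0, 0.75, 1.0]        # three taps producing a perfect 3:1
print(rhythm_ratio(taps))      # 3.0
```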

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in sentences of Korean and Japanese that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is almost identical to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent. An antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in the case of zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique in previous research works is to perform binary classification over all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected.
However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed in antecedent search; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values, indicating whether or not the corresponding noun phrase is the antecedent. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. Experiments showed that our system achieves F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
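Because the label sequence is constrained to carry at most one "antecedent" tag, the inference step of such a labeler reduces to an argmax over positions. A toy sketch of that inference (the feature names and weights are invented, and the structural SVM training via Pegasos is omitted):

```python
def score(features, weights):
    """Linear score of one noun phrase under (hypothetical) learned weights."""
    return sum(weights.get(f, 0.0) for f in features)

def find_antecedent(np_features, weights):
    """np_features: feature sets for the noun phrases preceding the zero
    anaphor, in textual order.  Returns the index labeled as antecedent,
    or None when no phrase scores above zero (the title is then tried)."""
    best_i, best_s = None, 0.0
    for i, feats in enumerate(np_features):
        s = score(feats, weights)
        if s > best_s:
            best_i, best_s = i, s
    return best_i

weights = {"case_match": 1.0, "near_anaphor": 0.5}   # invented features
nps = [{"case_match"}, {"near_anaphor"}, set()]
print(find_antecedent(nps, weights))                 # 0
```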

Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs are compressed or encrypted with various commercial packers to hinder reverse engineering, so malicious code analysts must decompress or decrypt them first. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable file is restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses visited until the OEP appears, and search for the OEP among those addresses. However, instead of pinpointing a single exact OEP, existing unpackers provide a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates altogether. In other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that provides smaller OEP candidate sets by adding two methods exploiting the property that the function call sequence and parameters are the same in the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we added a method to PinDemonium that detects the completion of unpacking by matching the patterns of system functions called in the packed and unpacked programs. The second method is based on parameters, which include not only user-entered inputs but also system inputs. We added a method to PinDemonium that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed by 16 commercial packers. On average, our tool reduces the OEP candidate set by more than 40% compared to PinDemonium, excluding two commercial packers that could not be executed due to their anti-debugging techniques.
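The first idea, matching the compiler runtime's characteristic system-function call sequence in the packed program's call trace, can be sketched as follows; the trace and the startup pattern are illustrative assumptions, not the exact patterns the tool uses:

```python
def match_call_pattern(trace, pattern):
    """Indices in the observed call trace where the known compiler startup
    pattern begins; each hit marks a region where the OEP was likely reached."""
    hits = []
    for i in range(len(trace) - len(pattern) + 1):
        if trace[i:i + len(pattern)] == pattern:
            hits.append(i)
    return hits

# Illustrative startup sequence (MSVC initializes its security cookie with
# calls like these); the unpacking stub's own API calls precede it.
startup = ["GetSystemTimeAsFileTime", "GetCurrentProcessId", "GetCurrentThreadId"]
trace = ["VirtualAlloc", "VirtualProtect",          # unpacking stub
         "GetSystemTimeAsFileTime", "GetCurrentProcessId",
         "GetCurrentThreadId", "GetModuleHandleW"]  # original program
print(match_call_pattern(trace, startup))           # [2]
```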

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). CNN is characterized by dividing the input image into small sections to recognize partial features and combining them for recognition as a whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been largely limited to image recognition and natural language processing; the use of deep learning techniques for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to assess the possibility of solving business problems with deep learning, based on the case of online shopping companies, which have big data, can identify customer behavior relatively easily, and have high utilization value. In online shopping companies especially, the competitive environment is changing rapidly and becoming more intense, so analysis of customer behavior for maximizing profit is becoming more and more important. In this study, we propose a 'CNN model of Heterogeneous Information Integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
The model combines structured and unstructured information and learns through a convolutional neural network combined with a multi-layer perceptron structure. To optimize performance, we examine three architectural components, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', evaluate the performance of each, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC data from a specific online shopping company in Korea. Data extraction criteria were defined for 47,947 customers who registered at least one VOC in January 2011 (one month). The customer profiles of these customers were used, together with a total of 19 months of trading data from September 2010 to March 2012 and the VOCs posted during that month. The experiment has two stages. In the first stage, we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters; we then evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines both structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to business problems as well as image recognition and natural language processing problems.
The experiments confirm that CNN is effective at understanding and interpreting the meaning of context in textual VOC data. It is also significant that this empirical research, based on actual e-commerce data, can extract very meaningful information for customer behavior prediction from VOC data written in text format directly by customers. Finally, the various experiments provide useful information for future research related to parameter selection and model performance.
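A minimal NumPy forward pass can sketch the integration idea: convolve over the unstructured (text) part, max-pool, concatenate with the structured features, and classify with a sigmoid. All shapes, weights, and feature names here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, emb, n_filters, width, n_struct = 20, 8, 4, 3, 5

text = rng.normal(size=(seq_len, emb))        # embedded VOC text (toy)
structured = rng.normal(size=n_struct)        # e.g. purchase counts, tenure

# 1-D convolution over the text, one feature map per filter
conv_w = rng.normal(size=(n_filters, width, emb))
feature_maps = np.array([
    [np.sum(text[i:i + width] * conv_w[f]) for i in range(seq_len - width + 1)]
    for f in range(n_filters)])
pooled = feature_maps.max(axis=1)             # max-over-time pooling

joint = np.concatenate([pooled, structured])  # heterogeneous integration
w_out = rng.normal(size=joint.shape[0])
p = 1.0 / (1.0 + np.exp(-(joint @ w_out)))    # e.g. P(re-purchase)
print(feature_maps.shape)                     # (4, 18)
```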

Patient Setup Aid with Wireless CCTV System in Radiation Therapy (무선 CCTV 시스템을 이용한 환자 고정 보조기술의 개발)

  • Park, Yang-Kyun;Ha, Sung-Whan;Ye, Sung-Joon;Cho, Woong;Park, Jong-Min;Park, Suk-Won;Huh, Soon-Nyung
    • Radiation Oncology Journal
    • /
    • v.24 no.4
    • /
    • pp.300-308
    • /
    • 2006
  • Purpose: To develop a wireless CCTV system in semi-beam's eye view (BEV) to monitor daily patient setup in radiation therapy. Materials and Methods: In order to obtain patient images in semi-BEV, CCTV cameras are installed in a custom-made acrylic applicator below the treatment head of a linear accelerator. The images from the cameras are transmitted via a radio frequency signal (~2.4 GHz, 10 mW RF output). An expected problem with this system is radio frequency interference, which is solved by RF shielding with Cu foils and median filtering software. The images are analyzed by our custom-made software, in which a user indicates three anatomical landmarks on the patient surface; the 3-dimensional structures are then automatically obtained and registered by a localization procedure consisting mainly of a stereo matching algorithm and Gauss-Newton optimization. This algorithm is applied to phantom images to investigate the setup accuracy. A respiratory gating system with real-time image processing is also investigated: a line-laser marker projected on the patient's surface is extracted by binary image processing, and the breathing pattern is calculated and displayed in real time. Results: More than 80% of the camera noise from the linear accelerator is eliminated by wrapping the camera with copper foils. The accuracy of the localization procedure is found to be on the order of 1.5±0.7 mm with a point phantom, and sub-millimeters and sub-degrees with a custom-made head/neck phantom. With the line-laser marker, real-time respiratory monitoring is possible with a delay time of ~0.17 sec. Conclusion: The wireless CCTV camera system is a novel tool for monitoring daily patient setup. A respiratory gating system based on the wireless CCTV also appears feasible.
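The line-laser extraction step can be sketched as simple binary image processing; the per-column centroid rule below is an assumption for illustration, not necessarily the authors' exact algorithm:

```python
import numpy as np

def laser_line_profile(img, thresh):
    """Per-column centroid row of pixels brighter than `thresh`
    (the extracted laser line); NaN where no bright pixel exists.
    Tracking this profile over frames yields the breathing pattern."""
    rows = np.arange(img.shape[0])[:, None]
    mask = img > thresh                      # binarization
    counts = mask.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(counts > 0, (rows * mask).sum(axis=0) / counts, np.nan)

# toy frame: the laser line sits at row 5 and shifts to row 7
frame = np.zeros((10, 6))
frame[5, :3] = frame[7, 3:] = 255
print(laser_line_profile(frame, 128))   # [5. 5. 5. 7. 7. 7.]
```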

Comparison of the wall clock time for extracting remote sensing data in Hierarchical Data Format using Geospatial Data Abstraction Library by operating system and compiler (운영 체제와 컴파일러에 따른 Geospatial Data Abstraction Library의 Hierarchical Data Format 형식 원격 탐사 자료 추출 속도 비교)

  • Yoo, Byoung Hyun;Kim, Kwang Soo;Lee, Jihye
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.1
    • /
    • pp.65-73
    • /
    • 2019
  • The MODIS (Moderate Resolution Imaging Spectroradiometer) data in Hierarchical Data Format (HDF) have been processed using the Geospatial Data Abstraction Library (GDAL). Because of the relatively large data size, it would be preferable to build and install the data analysis tool for the greatest computing performance, which would differ by operating system and by the form of distribution, e.g., source code or binary package. The objective of this study was to examine the performance of GDAL for processing HDF files, to guide the construction of a computer system for remote sensing data analysis. Execution times were compared between the environments under which GDAL was installed. The wall clock time was measured after extracting data for each variable in a MODIS data file using a tool built by linking against GDAL under combinations of operating system (Ubuntu and openSUSE), compiler (GNU and Intel), and distribution form. The MOD07 product, which contains atmosphere data, was processed for eight 2-D variables and two 3-D variables. GDAL compiled with the Intel compiler under Ubuntu had the shortest computation time. Under openSUSE, GDAL compiled with the GNU and Intel compilers performed better for 2-D and 3-D variables, respectively. The wall clock time was considerably longer for GDAL compiled with the "--with-hdf4=no" configuration option or installed via the RPM package manager under openSUSE. These results indicate that the choice of environment under which GDAL is installed, e.g., operating system or compiler, has a considerable impact on the performance of a system for processing remote sensing data. Application of parallel computing approaches would further improve the performance of data processing for HDF files, which merits further evaluation.
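The comparison boils down to wall-clock timing of the same extraction under different builds. A minimal timing harness of the kind such a study might use (the GDAL call is stubbed out here so the sketch is self-contained; in practice the stub would open each HDF4 subdataset via gdal.Open and read it):

```python
import time

def wall_clock(fn, *args, repeats=3):
    """Best-of-N wall-clock time of fn(*args), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

def extract_variable(n):
    """Stand-in for reading one 2-D variable; with GDAL installed this would
    be a gdal.Open(...) on an HDF4 subdataset followed by ReadAsArray()."""
    return sum(range(n))

t = wall_clock(extract_variable, 100000)
print(t >= 0.0)   # True
```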

A Design of Pipelined-parallel CABAC Decoder Adaptive to HEVC Syntax Elements (HEVC 구문요소에 적응적인 파이프라인-병렬 CABAC 복호화기 설계)

  • Bae, Bong-Hee;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.155-164
    • /
    • 2015
  • This paper describes the design and implementation of a CABAC decoder that handles HEVC syntax elements in an adaptively pipelined, parallel manner. Although CABAC offers a high compression rate, its decoding performance is limited by context-based sequential computation, strong data dependencies between context models, and a bin-by-bin decoding procedure. To enhance HEVC CABAC decoding, flag-type syntax elements are adaptively pipelined by precomputing consecutive flag-type elements, and multi-bin syntax elements are decoded by processing up to three bins in parallel. Further, to accelerate the binary arithmetic decoder by reducing the critical path delay, the update and renormalization of the context model are precomputed in parallel for both the LPS and MPS cases, and the context model renewal is then selected according to the preceding decoding result. Simulations show that the new HEVC CABAC architecture achieves a maximum performance of 1.01 bins/cycle, twice as fast as the conventional approach. In an ASIC design with a 65 nm library, the CABAC architecture handles 224 Mbins/sec, enough to decode QFHD HEVC video data in real time.
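The precompute-and-select idea for context model updates can be illustrated in software with a toy state machine; the transition rules below are invented for illustration and are not the HEVC context tables:

```python
# toy context state: (probability_state_index, most_probable_symbol)
def update_mps(state):
    """Transition taken when the decoded bin equals the MPS (toy rule)."""
    p, mps = state
    return (min(p + 1, 62), mps)

def update_lps(state):
    """Transition taken when the decoded bin is the LPS (toy rule);
    at the lowest state the MPS flips, as in real CABAC."""
    p, mps = state
    if p == 0:
        return (0, 1 - mps)
    return (max(p - 3, 0), mps)

def decode_bin_context_update(state, bin_is_mps):
    # Both successor states are precomputed before the bin value is known...
    next_if_mps = update_mps(state)
    next_if_lps = update_lps(state)
    # ...and the real successor is selected afterwards (a mux in hardware),
    # shortening the critical path between consecutive bin decodes.
    return next_if_mps if bin_is_mps else next_if_lps

print(decode_bin_context_update((10, 0), True))   # (11, 0)
```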

Embedded Multi-LED Display System based on Wireless Internet using Otsu Algorithm (오츠 알고리즘을 활용한 무선인터넷 기반 임베디드 다중 LED 전광판 시스템)

  • Jang, Ho-Min;Kim, Eui-Ryong;Oh, Se-Chun;Kim, Sin-Ryeong;Kim, Young-Gon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.6
    • /
    • pp.329-336
    • /
    • 2016
  • Outdoor advertising and industrial sites are seeking to implement LED display board systems based on image processing in order to express a variety of messages in real time. Recently, in various fields, the importance of intuitive communication using images, rather than simple text, has been increasing, so systems that can output real-time image information, rather than merely displaying pre-entered text, are being sought. The proposed system overcomes the limitation of conventional LED displays, which cannot output images, by converting images into a format that can be mapped onto the LED matrix. Using low-power LEDs, it was developed to output messages and images efficiently within limited resources. This paper provides a system capable of managing LED displays over a wireless network. Built from an ATmega2560, a Wi-Fi module, a server, and an Android application client, the system outputs images as well as text, and the image conversion process is managed by the server to reduce the load on the embedded board.
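The Otsu algorithm named in the title selects a global binarization threshold by maximizing the between-class variance of a grayscale histogram. A standard sketch (the convention here puts pixels with value <= t in the background class):

```python
def otsu_threshold(hist):
    """Otsu's method over a 256-bin grayscale histogram (counts).
    Returns the threshold t maximizing between-class variance."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]                 # background weight
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b              # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# two well-separated intensity clusters: the threshold lands between them
hist = [0] * 256
hist[50] = 100     # dark cluster
hist[200] = 100    # bright cluster
print(otsu_threshold(hist))   # 50
```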

License Plate Recognition System Using Hotelling Transform (호텔링 변환을 이용한 자동차 번호판 인식시스템에 관한 연구)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.1
    • /
    • pp.29-35
    • /
    • 2009
  • In this paper, we propose a method for effectively extracting the license plate from an image taken of the rear of a vehicle and recognizing the characters on it. Existing approaches apply edge detection to the entire image and binarize it, then obtain horizontal and vertical lines with the Hough transform and extract the license plate region using the characteristics of the plate. These methods have two problems: the processing time is too long for real-time use, and under irregular illumination, such as the low visual contrast at night, the plate border does not appear, so the license plate region cannot be extracted. The proposed method instead uses characteristics of the plate itself in the rear-view image: the intensity changes within the plate region, the contrast between the background region and the digit region, and the widths of and distances between the digits are used to identify and extract the license plate region. This resolves the failures of existing methods when the plate border is damaged, as well as the processing-time problem, so real-time practical application is possible. In an experiment on 100 sample images of cars, the system failed to extract the license plate in 13% of the images, and character recognition failed in 0.4% of cases.
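The Hotelling (Karhunen-Loève) transform named in the title is equivalent to PCA: data are decorrelated by projecting onto the eigenvectors of their covariance matrix. A minimal generic sketch on 2-D points (how the paper applies it to plate features is not detailed in the abstract, so this is illustrative only):

```python
import numpy as np

def hotelling_transform(X):
    """Rows of X are samples. Returns (projected data, mean, axes),
    with the principal axis (largest eigenvalue) in the first column."""
    mean = X.mean(axis=0)
    C = np.cov((X - mean).T)                # covariance matrix
    vals, vecs = np.linalg.eigh(C)          # eigenvalues ascending
    order = np.argsort(vals)[::-1]          # principal axis first
    A = vecs[:, order]
    return (X - mean) @ A, mean, A

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
Y, mean, A = hotelling_transform(X)
# points on the line y = x carry all their variance on the first axis
print(np.allclose(Y[:, 1], 0.0))   # True
```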


A Region-based Comparison Algorithm of k sets of Trapezoids (k 사다리꼴 셋의 영역 중심 비교 알고리즘)

  • Jung, Hae-Jae
    • The KIPS Transactions:PartA
    • /
    • v.10A no.6
    • /
    • pp.665-670
    • /
    • 2003
  • In applications like automatic mask generation for semiconductor production, a drawing consists of many polygons that are partitioned into trapezoids. The addition/deletion of a polygon to/from the drawing is performed through geometric operations such as insertion, deletion, and search of trapezoids. Depending on the partitioning algorithm used, a polygon can be partitioned differently in terms of shape, size, and so on. It is therefore necessary to devise an algorithm for comparing sets of trapezoids in which each set represents the regions of interest of a drawing. Such a comparison algorithm may be used, for example, to verify a software program handling geometric objects composed of trapezoids. In this paper, given k sets of trapezoids in which each set forms the regions of interest of a drawing, we present how to compare the k sets to see whether all of them represent the same geometric scene. When each input set has the same number n of trapezoids, the proposed algorithm has O(2^(k-2) n^2 (log n + k)) time complexity. It is also shown that the suggested algorithm has the same time complexity, O(n^2 log n), as the sweeping-based algorithm when the number k (<< n) of input sets is small. Furthermore, the proposed algorithm can be kn times faster than the sweeping-based algorithm when all the trapezoids in the k input sets are almost the same.
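A brute-force reference check, useful for testing but not the paper's O(2^(k-2) n^2 (log n + k)) algorithm, is to rasterize each set's union on a sample grid and compare the covered points; the trapezoid encoding with horizontal top/bottom edges is an assumption, though it is a common convention in layout partitioning:

```python
def covers(trap, x, y):
    """Point-in-trapezoid test; trap = (y0, y1, xl0, xr0, xl1, xr1),
    left/right edges interpolated linearly between bottom and top."""
    y0, y1, xl0, xr0, xl1, xr1 = trap
    if not (y0 <= y <= y1):
        return False
    t = 0.0 if y1 == y0 else (y - y0) / (y1 - y0)
    xl = xl0 + t * (xl1 - xl0)
    xr = xr0 + t * (xr1 - xr0)
    return xl <= x <= xr

def region_signature(traps, step=0.25, bound=10):
    """Set of grid points covered by the union of the trapezoids; two
    sets of trapezoids with equal signatures cover (approximately)
    the same region regardless of how it was partitioned."""
    n = int(bound / step)
    return frozenset(
        (i, j)
        for i in range(n + 1) for j in range(n + 1)
        if any(covers(t, i * step, j * step) for t in traps))

# the same square partitioned two different ways
square_a = [(0, 2, 0, 2, 0, 2)]                      # one trapezoid
square_b = [(0, 1, 0, 2, 0, 2), (1, 2, 0, 2, 0, 2)]  # split horizontally
print(region_signature(square_a) == region_signature(square_b))   # True
```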