• Title/Summary/Keyword: Binary Patterns

Real-time Watermarking Algorithm using Multiresolution Statistics for DWT Image Compressor (DWT기반 영상 압축기의 다해상도의 통계적 특성을 이용한 실시간 워터마킹 알고리즘)

  • 최순영;서영호;유지상;김대경;김동욱
    • Journal of the Korea Institute of Information Security & Cryptology / v.13 no.6 / pp.33-43 / 2003
  • In this paper, we propose a real-time watermarking algorithm designed to be combined with, and operate alongside, a DWT (Discrete Wavelet Transform)-based image compressor. To reduce the amount of computation in selecting the watermarking positions, the proposed algorithm uses a pre-established look-up table of critical values, built statistically by computing the correlation according to the energy values of the corresponding wavelet coefficients. That is, the watermark is embedded into the coefficients whose values are greater than the critical value in the look-up table, which is searched on the basis of the energy values of the corresponding level-1 subband coefficients. The proposed algorithm can therefore operate in real time, because the watermarking process runs in parallel with the compression process without affecting the operation of the image compressor. It also mitigates both the loss of the watermark caused by quantization and Huffman coding during image compression and the loss of compression efficiency caused by watermark insertion. Visually recognizable patterns such as binary images were used as watermarks. The experimental results showed that the proposed algorithm satisfies robustness and imperceptibility, the major requirements of watermarking.
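
As a rough illustration of the embedding step, here is a minimal Python sketch assuming the PyWavelets library; the look-up table entries, the choice of subband, and the scaling factor `alpha` are illustrative placeholders rather than the paper's statistically derived values.

```python
# Minimal sketch of LUT-guided watermark embedding, assuming PyWavelets.
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, alpha=0.02):
    # Level-1 2-D DWT: approximation (LL) and detail subbands (LH, HL, HH).
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), 'haar')

    # Hypothetical look-up table: (energy range) -> critical value.
    lut = [(0.0, 1e2, 50.0), (1e2, 1e4, 20.0), (1e4, np.inf, 10.0)]

    coeffs = HL.ravel()                 # view of one detail subband
    band_energy = np.mean(coeffs ** 2)  # energy of level-1 coefficients
    critical = next(c for lo, hi, c in lut if lo <= band_energy < hi)

    # Embed bits only into coefficients exceeding the critical value;
    # zip() stops when the watermark bits run out. Because coeffs is a
    # view, HL is updated in place.
    positions = np.flatnonzero(np.abs(coeffs) > critical)
    for pos, bit in zip(positions, watermark_bits):
        coeffs[pos] *= (1 + alpha) if bit else (1 - alpha)

    return pywt.idwt2((LL, (LH, HL, HH)), 'haar')
```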

Learning acoustic cue weights for Korean stops through L2 perception training (지각 훈련을 통한 한국어 폐쇄음 음향 신호 가중치의 L2 학습)

  • Oh, Eunjin
    • Phonetics and Speech Sciences / v.13 no.4 / pp.9-21 / 2021
  • This study investigated whether Korean learners improve their acoustic cue weights for identifying Korean lenis and aspirated stops in the direction of native values through perception training focused on contrasting the stops in various phonetic contexts. Nineteen native Chinese learners of Korean and two native Korean instructors for the perception training participated in the experiment. Participants were divided into a training group and a non-training group according to pretest results, and only the training group took part in the training for 5 days. To estimate the perceptual weights of the stop cues, a pretest and a posttest were conducted with stimuli whose stop cues (F0 and VOT) were systematically manipulated. Binary logistic regression analyses were performed on each learner's test results to calculate perceptual β coefficients, which estimate the perceptual weights of the acoustic cues used in identifying the stop contrast. The training group showed a statistically significant increase of 0.451 on average in the posttest for the coefficient values of F0, the primary cue for the stop contrast, whereas the non-training group showed a non-significant increase of 0.246. The patterns of change in F0 use after training varied considerably among individual learners.
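
The cue-weight estimation can be sketched as follows, assuming scikit-learn and made-up per-trial data; standardizing F0 and VOT makes the fitted β coefficients directly comparable as perceptual cue weights.

```python
# Sketch of perceptual cue-weight estimation via binary logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Columns: F0 (Hz), VOT (ms); y = 1 if identified as aspirated, 0 if lenis.
X = np.array([[220, 60], [180, 55], [240, 70], [170, 40],
              [230, 65], [175, 45], [215, 58], [185, 50]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# Standardize so the two beta coefficients are comparable in magnitude.
model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)

beta_f0, beta_vot = model.coef_[0]   # larger |beta| = heavier cue reliance
print(f"F0 weight: {beta_f0:.3f}, VOT weight: {beta_vot:.3f}")
```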

A Sanitizer for Detecting Vulnerable Code Patterns in uC/OS-II Operating System-based Firmware for Programmable Logic Controllers (PLC용 uC/OS-II 운영체제 기반 펌웨어에서 발생 가능한 취약점 패턴 탐지 새니타이저)

  • Han, Seungjae;Lee, Keonyong;You, Guenha;Cho, Seong-je
    • Journal of Software Assessment and Valuation / v.16 no.1 / pp.65-79 / 2020
  • Programmable Logic Controllers (PLCs), popular components in industrial control systems (ICS), now incorporate technologies such as micro-controllers, real-time operating systems, and communication capabilities. As the latest PLCs have been connected to the Internet, they have become a main target of cyber threats. This paper proposes two sanitizers that improve the security of uC/OS-II based firmware for a PLC. That is, we devise the BU sanitizer, for detecting out-of-bounds accesses to buffers, and the UaF sanitizer, for fixing use-after-free bugs in the firmware. They can sanitize the binary firmware image generated on a desktop PC before it is downloaded to the PLC. The BU sanitizer can also detect violations of control-flow integrity using both the call graph and the function symbols in the firmware image. We have implemented the two proposed sanitizers as a prototype system on a PLC running uC/OS-II and demonstrated their effectiveness through experiments and comparison with existing sanitizers. These findings can be used to detect and mitigate unintended vulnerabilities during the firmware development phase.
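
As a loose illustration of the ideas behind such sanitizers, here is a toy Python model of redzone-guarded buffers with use-after-free poisoning; it sketches the checks conceptually and does not reflect the paper's actual binary instrumentation of uC/OS-II firmware.

```python
# Toy model of redzone-guarded buffers and use-after-free poisoning.
POISON = object()   # marker for guard slots and freed memory

class GuardedBuffer:
    REDZONE = 4     # guard slots on each side of the buffer

    def __init__(self, size):
        self._mem = ([POISON] * self.REDZONE + [0] * size
                     + [POISON] * self.REDZONE)
        self._size = size
        self._freed = False

    def _check(self, index):
        if self._freed:                      # UaF sanitizer case
            raise RuntimeError("use-after-free detected")
        if not 0 <= index < self._size:      # BU sanitizer case
            raise IndexError("out-of-bounds access detected")

    def __getitem__(self, index):
        self._check(index)
        return self._mem[self.REDZONE + index]

    def __setitem__(self, index, value):
        self._check(index)
        self._mem[self.REDZONE + index] = value

    def free(self):
        self._freed = True                   # poison the whole buffer

buf = GuardedBuffer(8)
buf[0] = 42            # fine
buf.free()
# buf[0]              -> RuntimeError: use-after-free detected
# GuardedBuffer(8)[9] -> IndexError: out-of-bounds access detected
```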

Analyses of Spectators' Expenditure Determinants in a Professional Baseball Team (프로야구 관람객의 소비지출 결정요인 분석)

  • Cho, Woo-Jeong;Choi, Eui-Yul
    • Korean Journal of Physical Education - Humanities and Social Sciences (한국체육학회지인문사회과학편) / v.55 no.1 / pp.457-467 / 2016
  • Understanding professional baseball fans' expenditure is expected to provide fundamental marketing information that helps each team increase its marketing profits and value and produce a better economic impact on its community. In this regard, this study employed a survey method with a total of 372 residents of Changwon. The questionnaire included factors such as demographics, consumption patterns, and perceived socio-psychic effect (PSE), all derived from a literature review. A binary logistic regression was modeled with a dichotomous dependent variable, expenditure (30,000 won or more vs. less than 30,000 won). The following independent variables were entered into the model: gender, marriage, education, occupation, income, location, age, leisure type, distance, companion, transportation, interest, and PSE. The results of the logistic regression analysis are as follows. Overall, the model was statistically significant, χ²(21, N=372)=59.159, p=.000. The Cox and Snell R² and Nagelkerke R² were .147 and .200, respectively, so the model accounted for between 14.7% and 20.0% of the variation in expenditure. Among the independent variables, income, location, companion, and PSE were found to be significant predictors of expenditure. For income, subjects earning 2 million won or less, compared to those earning 4 million won or more, had .38 times the odds of spending 30,000 won or more. For location, subjects in Masan, compared to those in Jinhae, had 3.49 times the odds of spending 30,000 won or more, and subjects in Changwon, compared to those in Jinhae, had 3.05 times the odds. For companion, people visiting the stadium alone, compared to those with friends or colleagues, had .36 times the odds of spending 30,000 won or more. For PSE, the odds of spending 30,000 won or more increased by a factor of 1.37 with each one-unit increase in PSE.
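
The reported quantities relate to the fitted model as sketched below, assuming statsmodels and synthetic stand-in data (the survey data themselves are not reproduced here); the odds ratios are exp(β), and the two pseudo-R² values follow Cox & Snell and Nagelkerke.

```python
# Sketch of the reported statistics, assuming statsmodels; the data below
# are synthetic stand-ins, not the survey data from the paper.
import numpy as np
import statsmodels.api as sm

def pseudo_r2(result, n):
    """Cox & Snell and Nagelkerke pseudo-R^2 from fitted log-likelihoods."""
    ll1, ll0 = result.llf, result.llnull
    cox_snell = 1 - np.exp(2 * (ll0 - ll1) / n)
    nagelkerke = cox_snell / (1 - np.exp(2 * ll0 / n))
    return cox_snell, nagelkerke

rng = np.random.default_rng(0)
X = rng.normal(size=(372, 3))                   # stand-ins: income, PSE, ...
y = (X @ np.array([0.5, 0.3, -0.2])
     + rng.normal(size=372) > 0).astype(int)    # 1 = spent 30,000+ won

result = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(result.params))       # odds ratios exp(beta), cf. 3.49, 0.38
print(pseudo_r2(result, n=len(y)))
```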

Computer Vision Approach for Phenotypic Characterization of Horticultural Crops (컴퓨터 비전을 활용한 토마토, 파프리카, 멜론 및 오이 작물의 표현형 특성화)

  • Seungri Yoon;Minju Shin;Jin Hyun Kim;Ho Jeong Jeong;Junyoung Park;Tae In Ahn
    • Journal of Bio-Environment Control / v.33 no.1 / pp.63-70 / 2024
  • This study explored computer vision methods using the OpenCV open-source library to characterize the phenotypes of various horticultural crops. In the case of tomatoes, image color was examined to assess ripeness, while support vector machine (SVM) and histogram of oriented gradients (HOG) methods effectively identified ripe tomatoes. For sweet pepper, we visualized the color distribution and used a Gaussian mixture model for clustering to analyze its post-harvest color characteristics. For the quality assessment of netted melons, the LAB (lightness, a, b) color space, binary images, and depth mapping were used to measure the melons' net patterns. In addition, a combination of depth and color data proved successful in identifying flowers of different sizes and at different distances in cucumber greenhouses. This study highlights the effectiveness of these computer vision strategies in monitoring the growth, development, ripening, and quality of fruits and vegetables. For broader applications in agriculture, future researchers and developers should combine these techniques with plant physiological indicators to promote their adoption in both research and practical agricultural settings.
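
The melon-net measurement step might look roughly like the following OpenCV sketch; the input file name and threshold parameters are assumptions, not the paper's actual values.

```python
# Sketch of the melon-net measurement, assuming OpenCV.
import cv2
import numpy as np

img = cv2.imread('melon.jpg')                    # placeholder BGR image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)       # LAB color space
L, a, b = cv2.split(lab)

# The raised net is brighter than the rind, so a local threshold on the
# lightness channel yields a binary mask of the net pattern.
net_mask = cv2.adaptiveThreshold(L, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -5)

# Net density = fraction of pixels classified as net.
density = np.count_nonzero(net_mask) / net_mask.size
print(f"net coverage: {density:.1%}")
```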

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.125-141 / 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it has become more important to handle these attacks appropriately, and there is considerable interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or of hackers' abnormal behaviors. However, although they perform very well in normal situations, they cannot handle new or unknown patterns of network attacks. As a result, recent studies on intrusion detection use artificial intelligence techniques, which can respond proactively to unknown threats. Researchers have long adopted and tested various kinds of artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect intrusions on the network. However, most studies have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model is designed to combine the prediction results of four different binary classification models, logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM), which may be complementary to one another. As a tool for finding the optimal combining weights, genetic algorithms (GA) are used. The proposed model is built in two steps. In the first step, the integration model with the lowest prediction error (i.e., misclassification rate) is generated. In the second step, the model searches for the optimal classification threshold for determining intrusions, the one that minimizes the total misclassification cost. To calculate the total misclassification cost of an intrusion detection system, we need to understand its asymmetric error cost scheme. Generally, there are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), which misjudges normal activity as an intrusion and may result in unnecessary remediation. The second is the False-Negative Error (FNE), which misjudges malicious activity as normal. Compared to an FPE, an FNE is more fatal; thus, the total misclassification cost is affected more by FNEs than by FPEs. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We also compared the results from our model with those from single techniques to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and ANN was run using Neuroshell R4.0. For SVM, LIBSVM v2.90, a free tool for training SVM classifiers, was used. Empirical results showed that our proposed GA-based model outperformed all the other comparative models in detecting network intrusions from the accuracy perspective, and from the total misclassification cost perspective as well. Consequently, we expect that our study may contribute to building cost-effective intelligent intrusion detection systems.
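
The two-step procedure can be sketched as follows, with a bare-bones genetic algorithm and synthetic stand-ins for the four classifiers' outputs; the population size, GA operators, and cost values are illustrative, not the paper's settings.

```python
# Sketch of GA-weighted classifier combination plus cost-based thresholding.
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins: intrusion probabilities from LOGIT, DT, ANN, SVM (columns)
# for 1,000 samples, plus ground truth (1 = intrusion).
probs = rng.uniform(size=(1000, 4))
y = (probs.mean(axis=1) + rng.normal(0, 0.2, 1000) > 0.5).astype(int)

def error_rate(w):
    w = np.abs(w) / np.abs(w).sum()           # normalized combining weights
    return np.mean(((probs @ w) > 0.5).astype(int) != y)

# Step 1: GA search for the weights with the lowest classification error.
pop = rng.uniform(size=(30, 4))
for _ in range(50):
    fitness = np.array([error_rate(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]                  # selection
    children = (parents[rng.integers(10, size=20)]
                + parents[rng.integers(10, size=20)]) / 2    # crossover
    children += rng.normal(0, 0.05, children.shape)          # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmin([error_rate(ind) for ind in pop])]
best = np.abs(best) / np.abs(best).sum()

# Step 2: classification threshold minimizing total cost, with a
# false negative (missed intrusion) costing far more than a false alarm.
C_FP, C_FN = 1.0, 10.0
combined = probs @ best
thresholds = np.linspace(0.05, 0.95, 19)
costs = [C_FP * np.sum((combined > t) & (y == 0))
         + C_FN * np.sum((combined <= t) & (y == 1)) for t in thresholds]
print("cost-optimal threshold:", thresholds[int(np.argmin(costs))])
```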

Tracking lead contamination sources of sediments in Lake Andong using lead isotopes (납 동위원소를 이용한 안동호 퇴적물 중의 납 오염 기원)

  • Park, Jin-Ju;Kim, Ki-Joon;Yoo, Suk-Min;Kim, Eun-Hee;Seok, Kwang-Seol;Shin, Hyung Seon;Kim, Young-Hee
    • Analytical Science and Technology / v.25 no.6 / pp.429-434 / 2012
  • The objective of this study was to identify the Pb pollution sources of sediments in Lake Andong. We analysed Pb isotopes in sediments from Lake Andong; in soils and mine tailings from the watershed; and in sludge and wastewater from a zinc smelting facility located upstream of Lake Andong. The Pb isotope ratios ($^{207}Pb/^{206}Pb$ and $^{208}Pb/^{206}Pb$) for the sediments are $0.827{\pm}0.004$ and $2.041{\pm}0.015$, similar to those of the mine tailings, $0.815{\pm}0.002$ and $2.016{\pm}0.006$, respectively. The ratios for soils ranged from 0.756 to 0.881 and from 1.872 to 2.187. For imported zinc ores, the ratios ranged from 0.816 to 0.956 (mean 0.832) and from 2.029 to 2.219 (mean 2.059); these values were similar to those of zinc and lead concentrates originating from Canada and South America. Additionally, the $^{206}Pb/^{204}Pb$, $^{207}Pb/^{204}Pb$, and $^{208}Pb/^{204}Pb$ ratios for sludge and wastewater were $17.515{\pm}0.155$, $15.537{\pm}0.018$, and $37.357{\pm}0.173$, respectively. The Pb isotope ratios of the sediments showed a binary mixing pattern between soil and mine tailings, the latter being similar to Korean Pb ore.
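
For reference, a binary (two-endmember) mixing pattern of this kind is conventionally described by the standard mixing relation below; this is the textbook form, not an equation quoted from the paper:

\[
R_{\mathrm{mix}} \;=\; \frac{f\,C_A\,R_A + (1-f)\,C_B\,R_B}{f\,C_A + (1-f)\,C_B},
\qquad 0 \le f \le 1,
\]

where $R$ is an isotope ratio such as $^{207}Pb/^{206}Pb$, $C_A$ and $C_B$ are the Pb concentrations of the two endmembers (e.g., soil and mine tailings), and $f$ is the mass fraction contributed by endmember A. Sediment samples falling along this curve between the two endmember compositions are consistent with two-source mixing.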

Managing Duplicate Memberships of Websites : An Approach of Social Network Analysis (웹사이트 중복회원 관리 : 소셜 네트워크 분석 접근)

  • Kang, Eun-Young;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.153-169 / 2011
  • Today, the Internet is considered essential for establishing corporate marketing strategy. Companies promote their products and services through various on-line marketing activities, such as providing gifts and points to customers in exchange for participating in events, based on customers' membership data. Since companies can use these membership data to enhance their marketing efforts through various data analyses, appropriate website membership management may play an important role in increasing the effectiveness of on-line marketing campaigns. Despite the growing interest in proper membership management, however, it has been difficult to identify inappropriate members who can weaken on-line marketing effectiveness. In the on-line environment, customers tend not to reveal themselves as clearly as in off-line markets. Customers with malicious intent are able to create duplicate IDs by illegally using others' names or by faking login information when signing up. Since duplicate members are likely to intercept gifts and points that should go to the customers who deserve them, this can result in ineffective marketing efforts. Considering that the number of website members and the related marketing costs are increasing significantly, companies need efficient ways to screen out such duplicate members. With this motivation, this study proposes an approach for managing duplicate memberships based on social network analysis and verifies its effectiveness using membership data gathered from real websites. A social network is a social structure made up of actors, called nodes, which are tied by one or more specific types of interdependency. Social networks represent the relationships between nodes and show the direction and strength of those relationships. Various analytical techniques have been proposed based on these social relationships, such as centrality analysis, structural holes analysis, and structural equivalence analysis. Component analysis, one of the social network analysis techniques, deals with the sub-networks that form meaningful groups within the overall network. We propose a method for managing duplicate memberships using component analysis. The procedure is as follows. The first step is to identify the membership attributes that will be used for analyzing relationship patterns among memberships, such as ID, telephone number, address, posting time, and IP address. The second step is to compose social matrices based on the identified membership attributes and aggregate the values of each social matrix into a combined social matrix, which represents how strongly pairs of nodes are connected; when a pair of nodes is strongly connected, those nodes are likely to be duplicate memberships. The combined social matrix is transformed into a binary matrix with cell values of '0' or '1' using a relationship criterion that determines whether a membership is duplicate or not. The third step is to conduct a component analysis on the combined social matrix in order to identify component nodes and isolated nodes. The fourth step is to identify the number of real memberships and calculate the reliability of the website's membership data based on the component analysis results. The proposed procedure was applied to three real websites operated by a pharmaceutical company. The empirical results showed that the proposed method was superior to the traditional database approach using simple address comparison. In conclusion, this study is expected to shed some light on how social network analysis can enhance reliable on-line marketing performance by efficiently and effectively identifying duplicate memberships of websites.
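
The binarization and component-analysis steps can be sketched as follows, assuming the networkx library and a toy combined social matrix; the attribute counts and the relationship criterion are illustrative.

```python
# Sketch of binarizing a combined social matrix and finding components.
import numpy as np
import networkx as nx

# Combined social matrix: entry (i, j) counts how many attributes
# (phone number, address, IP address, ...) members i and j share.
combined = np.array([[0, 3, 0, 0, 1],
                     [3, 0, 0, 0, 0],
                     [0, 0, 0, 2, 0],
                     [0, 0, 2, 0, 0],
                     [1, 0, 0, 0, 0]])

criterion = 2                                   # relationship criterion
binary = (combined >= criterion).astype(int)    # 1 = treated as duplicates

G = nx.from_numpy_array(binary)                 # edge wherever a cell is 1
components = list(nx.connected_components(G))

# Each component collapses to one real member; isolated nodes stand alone.
print(f"{G.number_of_nodes()} registered IDs -> {len(components)} real members")
print("duplicate groups:", [c for c in components if len(c) > 1])
```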

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings published by professional rating agencies such as Standard and Poor's (S&P) and Moody's Investors Service. Since these agencies generally charge a large fee for the service, and the periodically provided ratings sometimes do not reflect the company's default risk at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper credit rating model. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more rating categories; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in determining credit ratings. In practice, however, a mathematical model that uses companies' financial variables plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include ratios that represent a company's leverage, liquidity, and profitability. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared toward multiclass classifications such as credit ratings. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature. However, only a few types of MSVMs have been tested in prior studies applying MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique, we applied all of them to a real-world case of credit rating in Korea: corporate bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
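
Two of the decomposition schemes examined in the study, One-Against-One and One-Against-All, can be sketched with scikit-learn as follows; the data here are synthetic stand-ins for the financial ratios and rating classes, since the NICE dataset is not reproduced.

```python
# Sketch of two MSVM decompositions, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Stand-in for 1,295 firms x financial ratios, with 5 rating classes.
X, y = make_classification(n_samples=1295, n_features=10, n_informative=6,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("One-Against-One", OneVsOneClassifier(SVC())),
                  ("One-Against-All", OneVsRestClassifier(SVC()))]:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
```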

Analysis of Image Processing Characteristics in Computed Radiography System by Virtual Digital Test Pattern Method (Virtual Digital Test Pattern Method를 이용한 CR 시스템의 영상처리 특성 분석)

  • Choi, In-Seok;Kim, Jung-Min;Oh, Hye-Kyong;Kim, You-Hyun;Lee, Ki-Sung;Jeong, Hoi-Woun;Choi, Seok-Yoon
    • Journal of Radiological Science and Technology / v.33 no.2 / pp.97-107 / 2010
  • The objective of this study is to figure out the undisclosed image processing methods of a commercial CR system. We derived the processing curve of each look-up table (LUT) in the REGIUS 150 CR system by using a virtual digital test pattern method, and also measured the characteristics of the dry imager. First, we generated a virtual digital test pattern file with a binary file editor. This file was used as input data for the CR system (REGIUS 150, KONICA MINOLTA). The DICOM files automatically generated as output by the CR system were used to derive the processing curves of each LUT mode (THX, ST, STM, LUM, BONE, LIN). The gradation curves of the dry imager were also measured to characterize the hard-copy image. From the results for each parameter, we identified the characteristics of the image processing parameters in the CR system. The processing curves measured by the proposed method showed the characteristics of the CR system, and the dry imager was found to be linear in the middle region of the processing curves. From these results, we established the relationships between the curves and each parameter: the G value is related to the slope of the processing curve, and the S value to its shift along the x-axis. In conclusion, the image processing methods of commercial CR systems differ from one another and are not disclosed. The proposed virtual digital test pattern method can measure the characteristics of the parameters of the image processing in a CR system. We expect the proposed method to be useful for inferring the image processing methods not only of this CR system but also of other commercial CR systems.
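
Generating such a virtual digital test pattern can be sketched as follows; the 16-bit step-wedge layout and raw-file format are assumptions for illustration, not the actual input format expected by the REGIUS 150.

```python
# Sketch of a virtual digital test pattern written as a raw binary file.
import numpy as np

width, height = 256, 256
# Step wedge: pixel values sweep the full 16-bit input range, so plotting
# output (DICOM) pixel values against these inputs recovers each LUT's
# processing curve (THX, ST, STM, LUM, BONE, LIN) point by point.
wedge = np.linspace(0, 65535, width, dtype=np.uint16)
pattern = np.tile(wedge, (height, 1))

pattern.tofile('virtual_test_pattern.raw')   # raw binary, native byte order
```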