• Title/Summary/Keyword: Source Code Similarity


Enhancing the performance of code-clone detection tools using code2vec (code2vec을 이용한 유사도 감정 도구의 성능 개선)

  • Um, Taeho;Hong, Sung Moon;Yang, Joon Hyuk;Jang, Hyo Seok;Doh, Kyung-Goo
    • Journal of Software Assessment and Valuation / v.17 no.1 / pp.31-40 / 2021
  • Plagiarism refers to the act of using original material as if it were one's own without revealing the source. Plagiarism of source code causes a variety of problems, including legal disputes. Plagiarism in software projects is usually determined by measuring similarity between every pair of source files across the two projects. However, blindly comparing every pair imposes a huge computational burden, which is a major reason more accurate tools are not used. If we compare only the pairs that are likely to be clones, eliminating pairs that cannot be clones, we can concentrate on improving detection accuracy. In this paper, we propose a method for selecting highly probable clone-pair candidates by pre-classifying suspected source code with a machine-learning model called code2vec (see the sketch below).
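A minimal sketch of the candidate-filtering idea, not the paper's implementation: it assumes each source file has already been embedded into a fixed-length vector by a code2vec-style model, and keeps only the pairs whose cosine similarity clears a threshold. The threshold value and the `candidate_clone_pairs` helper are hypothetical.

```python
# Hypothetical illustration: filter file pairs by cosine similarity of
# code2vec-style embeddings before running a slower, more accurate clone detector.
from itertools import combinations
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def candidate_clone_pairs(embeddings: dict[str, np.ndarray], threshold: float = 0.8):
    """embeddings maps file name -> embedding vector; returns likely clone pairs."""
    pairs = []
    for (name_a, vec_a), (name_b, vec_b) in combinations(embeddings.items(), 2):
        if cosine(vec_a, vec_b) >= threshold:
            pairs.append((name_a, name_b))
    return pairs
```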

A Program Similarity Check by Flow Graphs of Functional Programs (흐름 그래프 형태를 이용한 함수형 프로그램 유사성 비고)

  • Seo Sunae;Han Taisook
    • Journal of KIISE: Software and Applications / v.32 no.4 / pp.290-299 / 2005
  • Stealing the source code of a program is a serious problem, not only morally but also legally. However, it is often unclear whether the code of one program was copied from another. An earlier program similarity checker detected code copying by comparing the syntax trees of programs, but that method cannot detect copying when the attacker deliberately modifies the program's syntax. We propose a program similarity check based on the program control graph, which captures not only syntactic information but also control dependency. Our method can detect code-copy attacks that do not change control dependency. Moreover, we define what code copying means and establish the connection between code copying and similarity of program control graphs: we prove that two programs are related by copy congruence if and only if their program control graphs are equivalent. We implemented our method for a functional programming language, nML. The experimental results show that the proposed method can detect code similarity that is not detected by the existing method (a graph-comparison sketch follows).
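As a rough illustration of graph-based comparison, the sketch below checks whether two control graphs are structurally equivalent using labelled graph isomorphism from networkx. This is a simplified stand-in for the paper's copy-congruence check on nML programs; the graph construction, the "kind" node labels, and the use of plain isomorphism are assumptions.

```python
# Compare two programs by their (pre-extracted) control graphs: one node per
# program point, labelled with its control construct, edges for control flow.
import networkx as nx
from networkx.algorithms import isomorphism

def build_control_graph(edges, labels):
    """edges: list of (src, dst); labels: dict node -> construct kind (e.g. 'if', 'call')."""
    g = nx.DiGraph()
    g.add_edges_from(edges)
    nx.set_node_attributes(g, labels, "kind")
    return g

def control_graphs_equivalent(g1: nx.DiGraph, g2: nx.DiGraph) -> bool:
    node_match = isomorphism.categorical_node_match("kind", None)
    return nx.is_isomorphic(g1, g2, node_match=node_match)
```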

Similarity Evaluation and Analysis of Source Code Materials for SOC System in IoT Devices (사물인터넷 디바이스의 집적회로 목적물과 소스코드의 유사성 분석 및 동일성)

  • Kim, Do-Hyeun;Lee, Kyu-Tae
    • Journal of Software Assessment and Valuation / v.15 no.1 / pp.55-62 / 2019
  • The need for small size and low power consumption in Internet of Things devices is being met with SoC technology, which implements the program on a single chip. Copyright disputes due to piracy are increasing for semiconductor chips as well, arising from disagreements over the chip implementation done by the design house and chip implementations produced through illegal use of the source code. However, since the final chip implementation is made in the design house, the copyright is difficult to protect. In this paper, we deal with analysis methods for extracting similarity and criteria for judging similarity in disputes over source code written in an HDL (hardware description language). In particular, chips manufactured from the same specification are examined in terms of whether they share the same configuration and code type.

Similarity Detection in Object Codes and Design of Its Tool (목적 코드에서 유사도 검출과 그 도구의 설계)

  • Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.16 no.2 / pp.1-8 / 2020
  • Detecting similarity due to plagiarism or duplication of computer programs requires different analysis methods and tools depending on the programming language used in the implementation and the kind of code to be analyzed. In recent years, similarity appraisals of object code in embedded systems, which require considerable resources along with a more complicated procedure and more advanced skill than source-code appraisals, have been increasing. In this study, we describe a method for analyzing the similarity of functional units at the assembly-language level, obtained by converting the object code with reverse-engineering techniques such as disassembly. An instruction and operand table for comparing similarity is generated through syntactic analysis of the assembly code, and a tool for detecting the similarity is designed (see the sketch below).
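The table-and-compare step might look roughly like the following sketch, assuming the object code has already been disassembled into per-function text. The parsing format and the generic sequence-similarity ratio are illustrative assumptions, not the tool described in the paper.

```python
# Parse disassembled output into per-function (mnemonic, operands) tables and
# score each function pair with a sequence-similarity ratio.
from difflib import SequenceMatcher

def parse_assembly(lines):
    """Turn lines like 'MOV R0, #1' into (mnemonic, operand-string) tuples."""
    table = []
    for line in lines:
        parts = line.strip().split(None, 1)
        if parts:
            mnemonic = parts[0].upper()
            operands = parts[1] if len(parts) > 1 else ""
            table.append((mnemonic, operands))
    return table

def function_similarity(func_a_lines, func_b_lines, mnemonics_only=True):
    ta, tb = parse_assembly(func_a_lines), parse_assembly(func_b_lines)
    if mnemonics_only:                       # compare opcode sequences only
        ta = [m for m, _ in ta]
        tb = [m for m, _ in tb]
    return SequenceMatcher(None, ta, tb).ratio()
```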

Plagiarism Detection among Source Codes using Adaptive Methods

  • Lee, Yun-Jung;Lim, Jin-Su;Ji, Jeong-Hoon;Cho, Hwaun-Gue;Woo, Gyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.6 / pp.1627-1648 / 2012
  • We propose an adaptive method for detecting plagiarized pairs from a large set of source code. This method is adaptive in that it uses an adaptive algorithm and provides an adaptive threshold for determining plagiarism. Conventional algorithms are based on greedy string tiling or on local alignments of two code strings. However, most of them are not adaptive; they do not consider the characteristics of the program set, which causes problems for program sets in which all the programs are inherently similar. We propose adaptive local alignment, a variant of local alignment that uses an adaptive similarity matrix. Each entry of this matrix is the logarithm of the probability of a keyword, based on its frequency in the given program set (see the sketch below). We also propose an adaptive threshold based on the local outlier factor (LOF), which represents the likelihood of an entity being an outlier. Experimental results indicate that our method is more sensitive than JPlag, which uses greedy string tiling, for detecting plagiarism-suspected code pairs. Further, the adaptive threshold based on the LOF is shown to be effective, and the detection performance shows high sensitivity with negligible loss of specificity compared with a fixed threshold.
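A minimal sketch of the adaptive scoring idea, assuming each program is tokenized into keywords: the match score of a keyword is the negative logarithm of its relative frequency in the program set, so rare keywords weigh more in an alignment. These scores would feed the substitution scores of a Smith-Waterman-style local alignment; the exact scoring and mismatch penalty used in the paper may differ.

```python
# Build frequency-based match scores over a program set; rare keywords score higher.
import math
from collections import Counter

def adaptive_match_scores(programs: list[list[str]]) -> dict[str, float]:
    """programs: each program given as a list of keyword tokens."""
    counts = Counter(tok for prog in programs for tok in prog)
    total = sum(counts.values())
    # Rare keywords get large positive scores, frequent ones small scores.
    return {tok: -math.log(c / total) for tok, c in counts.items()}

def match_score(a: str, b: str, scores: dict[str, float], mismatch: float = -1.0) -> float:
    return scores.get(a, 0.0) if a == b else mismatch
```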

A Technique to Detect Change-Coupled Files Using the Similarity of Change Types and Commit Time (변경 유형의 유사도 및 커밋 시간을 이용한 파일 변경 결합도)

  • Kim, Jung Il;Lee, Eun Joo
    • KIPS Transactions on Software and Data Engineering / v.3 no.2 / pp.65-72 / 2014
  • Change coupling is a measure of how strongly two entities are related through change. When two source files have frequently been changed together, they are regarded as change-coupled files and will probably be changed together again in the near future. In previous studies, the change coupling between two files was defined by the number of commits in which both files were changed. However, this frequency-based technique has limitations because of 'tangled changes', which frequently occur in development environments with version control systems: several code hunks are changed in the same commit even though they have no relation to each other. In this paper, the change types of the code hunks are used to define change coupling in addition to the common commit times of the target files. First, a frequency vector over change types is built from the extracted change types, and then the similarity of change patterns is calculated using cosine similarity (see the sketch below). We conducted case studies on the open-source projects Eclipse JDT and CDT. The results show the applicability of the proposed method compared with previous studies.
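The change-pattern similarity can be sketched as follows, assuming the change types of each code hunk have already been extracted from the version history. The change-type names and the final weighting are hypothetical, not the paper's exact formulation.

```python
# Represent each file's changes in a commit as a frequency vector over change
# types and combine cosine similarity of those vectors with co-commit frequency.
import math
from collections import Counter

def change_type_vector(change_types: list[str]) -> Counter:
    return Counter(change_types)            # e.g. {'stmt_insert': 3, 'method_rename': 1}

def cosine_similarity(v1: Counter, v2: Counter) -> float:
    keys = set(v1) | set(v2)
    dot = sum(v1[k] * v2[k] for k in keys)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def change_coupling(commits_a, commits_b):
    """commits_a/b: dict commit_id -> list of change types for that file in that commit."""
    shared = set(commits_a) & set(commits_b)
    if not shared:
        return 0.0
    sims = [cosine_similarity(change_type_vector(commits_a[c]),
                              change_type_vector(commits_b[c])) for c in shared]
    # Average pattern similarity, weighted by how often the files were committed together.
    return sum(sims) / len(sims) * len(shared)
```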

Web Page Similarity based on Size and Frequency of Tokens (토큰 크기 및 출현 빈도에 기반한 웹 페이지 유사도)

  • Lee, Eun-Joo;Jung, Woo-Sung
    • Journal of Information Technology Services / v.11 no.4 / pp.263-275 / 2012
  • It is becoming hard to maintain web applications because of the high complexity and duplication of web pages. However, most research on code clones focuses on code hunks, and its targets are limited to specific languages. Thus, we propose GSIM, a language-independent statistical approach to detecting similar pages based on the scarcity and frequency of customized tokens. The tokens, obtained by splitting pages with a set of given separators, are defined as the atomic elements for calculating the similarity between two pages. In this paper, the domain definition for web applications and the algorithms for collecting tokens, constructing matrices, and calculating similarity are given (a token-weighting sketch follows). We also conducted experiments on open-source code with our GSIM tool for evaluation. The results show the applicability of the proposed method and the effects of parameters such as threshold, toughness, and token length on quality and performance.
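A small sketch of the token-based comparison, not GSIM itself: pages are split into tokens by an assumed separator set, tokens are weighted so that scarce ones count more, and two pages are scored by weighted overlap. The separator set and the IDF-style weighting are illustrative assumptions.

```python
# Tokenize pages by separators, weight tokens by scarcity across the corpus,
# and score two pages by their weighted token overlap.
import re
import math
from collections import Counter

SEPARATORS = r"[\s<>=\"'/(){}\[\];,]+"      # assumed separator set

def tokenize(page_text: str) -> Counter:
    return Counter(t for t in re.split(SEPARATORS, page_text) if t)

def page_similarity(page_a: str, page_b: str, corpus: list[str]) -> float:
    docs = [tokenize(p) for p in corpus]
    ta, tb = tokenize(page_a), tokenize(page_b)
    def scarcity(tok):                       # IDF-style weight: rarer token -> heavier
        df = sum(1 for d in docs if tok in d)
        return math.log((1 + len(docs)) / (1 + df)) + 1.0
    shared = set(ta) & set(tb)
    num = sum(scarcity(t) * min(ta[t], tb[t]) for t in shared)
    den = sum(scarcity(t) * c for d in (ta, tb) for t, c in d.items()) / 2
    return num / den if den else 0.0
```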

Java source code Similarity Measurement System (자바소스코드 유사도 측정 시스템)

  • Kim, Eun-Hye;Lee, Song-A;Heo, Jun;Han, Kyung-Sook;Oh, Yong-Chul
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.536-539 / 2007
  • JSMS (Java source code Similarity Measurement System) is a system that measures the similarity of Java source code and visually displays related information about the code. Existing plagiarism-detection systems do not reflect the structural characteristics of source code, so the reliability of their similarity results is low, and most are inconvenient to use because of poor usability and readability. To address these shortcomings, the JSMS proposed in this paper uses function linearization to reflect the structural characteristics of the source code. It also provides convenience through simple operation and improves readability by visually displaying related information and similar regions. With future support for more languages and richer visual information, it can serve as learning material for users and as a tool providing an objective basis for judging source-code plagiarism.


Appraisal Method for Similarity of Large File Transfer Software (대용량 파일 전송 소프트웨어의 동일성 감정 방법)

  • Chun, Byung-Tae
    • Journal of Software Assessment and Valuation / v.17 no.1 / pp.11-16 / 2021
  • The importance of software is increasing with the development of information and communication technology, and software copyright disputes are increasing as well. In this paper, the source code of the submitted programs and the files necessary to execute them were taken as the scope of analysis. The large-capacity file transfer solution analyzed here provides additional functions such as confidentiality, integrity, user authentication, and non-repudiation through digital signatures and data encryption. We analyze programs A, B, and C. To calculate the program similarity rate, we analyze the similarity of the package structure, package names, the source file names within each package, variable names in the source files, function names, function implementation source code, and product environment-variable information, and then compute an overall similarity rate for the program. To check the degree of agreement between package structures and package names, similarity was determined by comparing folder structures; we also measure the extent to which the source file (class) names within each package match (a structure-comparison sketch follows).
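The structure-matching part of such an appraisal can be sketched as below, assuming Java-style source trees: package (folder) paths and source file names are compared with a set-overlap score. A real appraisal would also compare identifiers and function bodies; the helper names and the Jaccard scoring here are hypothetical.

```python
# Compare two source trees by the overlap of their package paths and of their
# source file names; this covers only the structural step of an appraisal.
from pathlib import Path

def package_and_file_sets(root: str):
    root_path = Path(root)
    packages = {str(p.relative_to(root_path)) for p in root_path.rglob("*") if p.is_dir()}
    files = {str(p.relative_to(root_path)) for p in root_path.rglob("*.java")}
    return packages, files

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def structural_similarity(root_a: str, root_b: str) -> dict:
    pkg_a, files_a = package_and_file_sets(root_a)
    pkg_b, files_b = package_and_file_sets(root_b)
    return {"package_structure": jaccard(pkg_a, pkg_b),
            "file_names": jaccard(files_a, files_b)}
```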

A Plagiarism Detection Technique for Source Codes Considering Data Structures (데이터 구조를 고려한 소스코드 표절 검사 기법)

  • Lee, Kihwa;Kim, Yeoneo;Woo, Gyun
    • KIPS Transactions on Computer and Communication Systems / v.3 no.6 / pp.189-196 / 2014
  • Though plagiarism is illegal and should be avoided, it still occurs frequently. In particular, plagiarism of source code is committed more often than other kinds, because its digital nature makes copying easy. To prevent code plagiarism, a variety of studies have been reported. However, previous plagiarism-detection techniques for source code do not consider data structures, even though a source code consists of both data structures and algorithms. In this paper, a plagiarism-detection technique for source code that considers data structures is proposed. Specifically, the data structures of two source codes are represented as sets of trees and compared with each other using the Hungarian method (see the sketch below). To show the usefulness of this technique, an experiment was performed on 126 source codes submitted as homework in an object-oriented programming course. When both the data structures and the algorithms of the source codes are considered, precision and F-measure improve by 22.6% and 19.3%, respectively, over the case where only the algorithms are considered.
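A minimal sketch of the set-of-trees matching step, assuming a pairwise tree-similarity function is available: the Hungarian method (via scipy's linear_sum_assignment) finds the best one-to-one matching between the data-structure trees of two programs. The label-multiset similarity used here is a simplified placeholder for the paper's actual tree comparison.

```python
# Optimal matching between two sets of data-structure trees with the Hungarian method.
from collections import Counter
import numpy as np
from scipy.optimize import linear_sum_assignment

def tree_similarity(tree_a: list[str], tree_b: list[str]) -> float:
    """Trees are represented as multisets of node labels (e.g. field types) for simplicity."""
    ca, cb = Counter(tree_a), Counter(tree_b)
    inter = sum((ca & cb).values())
    union = sum((ca | cb).values())
    return inter / union if union else 1.0

def matched_similarity(trees_a: list[list[str]], trees_b: list[list[str]]) -> float:
    if not trees_a or not trees_b:
        return 0.0
    cost = np.array([[1.0 - tree_similarity(a, b) for b in trees_b] for a in trees_a])
    rows, cols = linear_sum_assignment(cost)            # Hungarian method
    total = sum(1.0 - cost[r, c] for r, c in zip(rows, cols))
    return total / max(len(trees_a), len(trees_b))      # penalize unmatched trees
```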