• Title/Abstract/Keyword: Computer Algorithms

Search Results: 3,809

Mean Teacher Learning Structure Optimization for Semantic Segmentation of Crack Detection (균열 탐지의 의미론적 분할을 위한 Mean Teacher 학습 구조 최적화 )

  • Seungbo Shim
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.5
    • /
    • pp.113-119
    • /
    • 2023
  • Most infrastructure structures were completed during periods of economic growth. The number of structures reaching the end of their service life is increasing, and the proportion of aging structures is gradually growing. The functions and performance these structures had at the time of design may deteriorate, which can even lead to safety accidents. To prevent such outcomes, accurate inspection and appropriate repair are required. To this end, demand is increasing for computer vision and deep learning technology that can accurately detect even minute cracks. However, deep learning algorithms require a large amount of training data; in particular, label images indicating the locations of cracks are required, and securing a large number of such label images consumes considerable labor and time. To reduce these costs as well as increase detection accuracy, this study proposed a learning structure based on the mean teacher method. The structure was trained on 900 labeled images and 3,000 unlabeled images. The crack detection network model was evaluated on more than 300 labeled images, recording a mean intersection over union of 89.23% and an F1 score of 89.12%. This experiment confirmed that detection performance improved compared to supervised learning. The proposed method is expected to be used in the future to reduce the cost of securing label images.
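
As a rough illustration of the mean teacher structure summarized above, the sketch below keeps an exponential-moving-average (EMA) teacher alongside the student segmentation network and combines a supervised loss on labeled crack masks with a consistency loss on unlabeled images. It is a minimal PyTorch-style sketch under assumed names (student, teacher, optimizer) and loss weighting; it is not the paper's actual network or training configuration.

```python
# Minimal mean-teacher sketch (illustrative; not the paper's exact model or losses).
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights follow the student as an exponential moving average."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def train_step(student, teacher, optimizer, labeled_batch, unlabeled_images, lam=1.0):
    images, masks = labeled_batch                       # labeled crack images and masks
    sup_loss = F.cross_entropy(student(images), masks)  # supervised segmentation loss

    with torch.no_grad():                               # teacher provides consistency targets
        teacher_prob = torch.softmax(teacher(unlabeled_images), dim=1)
    student_prob = torch.softmax(student(unlabeled_images), dim=1)
    cons_loss = F.mse_loss(student_prob, teacher_prob)  # consistency loss on unlabeled data

    loss = sup_loss + lam * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                        # update teacher after each step
    return loss.item()

# Setup (assumed): the teacher starts as a frozen deep copy of the student,
# e.g. teacher = copy.deepcopy(student), with requires_grad=False on its parameters.
```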

A Study on Generation Quality Comparison of Concrete Damage Image Using Stable Diffusion Base Models (Stable diffusion의 기저 모델에 따른 콘크리트 손상 영상의 생성 품질 비교 연구)

  • Seung-Bo Shim
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.4
    • /
    • pp.55-61
    • /
    • 2024
  • Recently, the number of aging concrete structures has been steadily increasing, because many of these structures are reaching their expected lifespan. Such structures require accurate inspections and persistent maintenance; otherwise, their original functions and performance may degrade, potentially leading to safety accidents. Therefore, research on objective inspection technologies using deep learning and computer vision is actively being conducted. High-resolution images make it possible to accurately observe not only micro cracks but also spalling and exposed rebar, and deep learning enables automated detection. However, high detection performance in deep learning is only guaranteed with diverse and numerous training datasets, and surface damage to concrete is not commonly captured in images, resulting in a lack of training data. To overcome this limitation, this study proposed a method for generating concrete surface damage images, including cracks, spalling, and exposed rebar, using Stable Diffusion. The method synthesizes new damage images from paired text and image data. For this purpose, a training dataset of 678 images was secured, and fine-tuning was performed through low-rank adaptation. The quality of the generated images was compared across three base models of Stable Diffusion, and as a result, a method to synthesize the most diverse and high-quality concrete damage images was developed. This research is expected to address the issue of data scarcity and contribute to improving the accuracy of deep learning-based damage detection algorithms in the future.
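
The low-rank adaptation (LoRA) step mentioned above can be illustrated with a minimal sketch: a frozen pretrained weight is augmented with a small trainable low-rank update, so only a few parameters are tuned. The class name and layer sizes below are illustrative assumptions, not the actual Stable Diffusion fine-tuning code used in the study.

```python
# Illustrative LoRA wrapper around a frozen linear layer (toy sizes, not SD code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():                        # freeze the pretrained weight
            p.requires_grad_(False)
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)   # down-projection
        self.B = nn.Parameter(torch.zeros(out_f, rank))         # up-projection (starts at zero)
        self.scale = alpha / rank

    def forward(self, x):
        # frozen base path plus scaled low-rank update
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a projection layer and train only A and B.
layer = LoRALinear(nn.Linear(320, 320))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # far fewer trainable parameters than the frozen base weight
```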

Ethical and Legal Implications of AI-based Human Resources Management (인공지능(AI) 기반 인사관리의 윤리적·법적 영향)

  • Jungwoo Lee;Jungsoo Lee;Ji Hun Kwon;Minyi Cha;Kyu Tae Kim
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.25 no.2
    • /
    • pp.100-112
    • /
    • 2024
  • This study investigates the ethical and legal implications of utilizing artificial intelligence (AI) in human resource management, with a particular focus on AI interviews in the recruitment process. AI, defined as the capability of computer programs to perform tasks associated with human intelligence such as reasoning, learning, and adapting, is increasingly being integrated into HR practices. The deployment of AI in recruitment, specifically through AI-driven interviews, promises efficiency and objectivity but also raises significant ethical and legal concerns. These concerns include potential biases in AI algorithms, transparency in AI decision-making processes, data privacy issues, and compliance with existing labor laws and regulations. By analyzing case studies and reviewing relevant literature, this paper aims to provide a comprehensive understanding of these challenges and propose recommendations for ensuring ethical and legal compliance in AI-based HR practices. The findings suggest that while AI can enhance recruitment efficiency, it is imperative to establish robust ethical guidelines and legal frameworks to mitigate risks and ensure fair and transparent hiring practices.

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.39-53
    • /
    • 2013
  • The emergence of internet technology and SNS has increased the flow of information and changed the way people communicate from one-way to two-way communication. Users not only consume and share information, they also create and share it among their friends across social network services. This has also changed social media behavior, making social media one of the most important communication tools, which includes Social TV. Social TV is a form in which people watch a TV program and, at the same time, share information about its content with friends through social media. Social news is gaining popularity and is also known as participatory social media. It influences user interest by representing societal issues on the internet and builds news credibility based on users' reputations. However, conventional news service platforms focus only on the news recommendation domain. Recent developments in SNS have changed this landscape by allowing users to share and disseminate news, yet conventional platforms do not provide any dedicated way to share news. Currently, social news services only allow users to access an entire news item; users cannot access the part of the content related to their interests. For example, when a user is interested in only part of a news item and wants to share that part, it is still hard to do so, and in the worst case users might understand the news in a different context. To solve this, a social news service must provide a method for supplying additional information. For example, Yovisto, an academic video search service, provides time-dependent metadata for videos: users can search and watch part of a video according to the time-dependent metadata, and they can also share content with friends on social media. Yovisto divides and synchronizes a video whenever the slide presentation changes to another page. However, this method cannot be employed for news video, since news video is not accompanied by any slide presentation; a segmentation method is therefore required to divide the news video and to create time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and to provide time-dependent metadata so that users can use context information to communicate with their friends. The transcript of the news is divided using the proposed story segmentation method. A tag is provided to represent the entire content of the news, and sub tags indicate the segmented news items, including the starting time of each segment. The time-dependent metadata helps users track news information and allows them to leave a comment on each segment of the news. Users may also share the news based on the time metadata, either as segmented news or as a whole, which helps recipients understand the shared news. To demonstrate the performance, we evaluate the accuracy of story segmentation as well as tag generation. For this purpose, we measured the accuracy of story segmentation through semantic similarity and compared it to a benchmark algorithm. Experimental results show that the proposed method outperforms the benchmark algorithms in terms of story segmentation accuracy. Sub tag accuracy is particularly important, as sub tags are the part of the proposed framework used to share a specific news context with others. To extract more accurate sub tags, we created a stop word list of terms unrelated to the content of the news, such as the names of anchors or reporters, and applied it to the framework. We analyzed the accuracy of the tags and sub tags that represent the context of the news. The analysis suggests that the proposed framework helps users share their opinions with context information on social media and social news services.
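
As a hedged illustration of the story segmentation step (the paper's own segmentation method and its semantic-similarity evaluation are not reproduced here), the sketch below splits a news transcript into segments wherever the similarity between adjacent sentences drops below a threshold and attaches a start-time "sub tag" to each segment. The TF-IDF representation and the threshold value are assumptions made for the example.

```python
# Illustrative transcript segmentation by adjacent-sentence similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def segment_transcript(sentences, start_times, threshold=0.1):
    """sentences: transcript sentences; start_times: their start times in seconds."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    segments, current = [], [0]
    for i in range(1, len(sentences)):
        sim = cosine_similarity(tfidf[i - 1], tfidf[i])[0, 0]
        if sim < threshold:          # low similarity suggests a story/topic shift
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    # each segment carries a start-time "sub tag" and its text
    return [{"start_time": start_times[idxs[0]],
             "text": " ".join(sentences[i] for i in idxs)} for idxs in segments]
```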

Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.57-71
    • /
    • 2013
  • Over a billion people in the world generate news minute by minute. Some news can be anticipated, but most news arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching a huge amount of news delivered by many media outlets because they want to understand what is happening now, predict what might happen in the near future, and share and discuss the news. People make better daily decisions by obtaining useful information from the news they watch. However, it is difficult for people to choose news suitable to them and to obtain useful information from it, because there are so many news media, such as portal sites and broadcasters, and most news articles consist of gossip and breaking news. User interests change over time, and many people have no interest in outdated news. Therefore, a personalized news service should reflect users' recent interests, which means it should manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide personalized news services. A personalized service necessarily requires the user's personal information, and a social network service is used to extract user information for personalization. The proposed system constructs a dynamic user profile based on recent user information from Facebook, one of the social network services. The user information contains personal information, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their content and connect with people, and Facebook users can add a Facebook Page to express their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics. However, some Pages cannot be directly matched to a news topic, because a Page deals with individual objects and does not provide topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match Pages to news topics by using the hierarchy information of its objects. By using the recent Page information and articles of Facebook users, the proposed system maintains a dynamic user profile. The generated user profile is used to measure user preferences on news. To generate news profiles, the news categories predefined by news media are used, and keywords of news articles are extracted after analyzing the news content, including the title, category, and script. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each news article. The same format is used for user profiles and news profiles to efficiently measure the similarity between user preferences and news. The proposed system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, and hyponyms because they only handle the given words. The proposed system applies WordNet to the similarity calculation to overcome this limitation. The Top-N news articles with the highest similarity values for a target user are recommended to the user. To evaluate the proposed news recommendation system, user profiles are generated from Facebook accounts with the participants' consent, and a web crawler is implemented to extract news information from PBS, a non-profit public broadcasting television network in the United States, and to construct news profiles. We compare the performance of the proposed method with that of benchmark algorithms: a traditional method based on TF-IDF, and the 6Sub-Vectors method, which divides the points for obtaining keywords into six parts. Experimental results demonstrate that, in terms of the prediction error of recommended news, the proposed system provides useful news to users by applying users' social network information and WordNet functions.
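
A minimal sketch of the WordNet-assisted matching described above: user and news profiles are treated as keyword-weight dictionaries, and WordNet similarity lets synonyms and related terms contribute to the score. The function names and the Wu-Palmer measure are illustrative choices, not the paper's exact formulation; the NLTK WordNet corpus is assumed to be available.

```python
# Illustrative WordNet-aware similarity between keyword-weight profiles.
from nltk.corpus import wordnet as wn   # requires the NLTK WordNet corpus

def word_similarity(w1, w2):
    """Best Wu-Palmer similarity between any synsets of the two words."""
    best = 1.0 if w1 == w2 else 0.0
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            sim = s1.wup_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

def profile_similarity(user_profile, news_profile):
    """Profiles are dicts of keyword -> weight; soft matching via WordNet."""
    score, norm = 0.0, 0.0
    for u_word, u_weight in user_profile.items():
        best = max((word_similarity(u_word, n_word) * n_weight
                    for n_word, n_weight in news_profile.items()), default=0.0)
        score += u_weight * best
        norm += u_weight
    return score / norm if norm else 0.0

# Top-N recommendation: rank news profiles by profile_similarity for a target user.
```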

Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.15
    • /
    • pp.225-266
    • /
    • 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem to be appropriate for the Korean language, because the syntax and semantics of Korean differ from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, namely the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, which are the typical subject catalogs in libraries, are investigated, and the most notable subject indexing systems, in particular PRECIS developed by the British National Bibliography, are reviewed and analysed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though many principles of PRECIS are applied to it. The outline of KOSIS is as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows a remarkable affinity with certain patterns of words in ordinary discourse; from that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationships among the index terms. 4) Traditionally, a single-line entry format is used in which a subject heading or index entry is presented as a single sequence of words, consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS, however, employs a two-line entry format which contains three basic positions for the production of index entries: the 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each KOSIS entry is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural language order, and inverted headings are not produced in KOSIS. Consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'See' and 'See also' references. 7) KOSIS was originally developed for a classified catalog system, which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system or acceptable subject headings for a dictionary catalog system. 8) KOSIS is able to maintain the consistency of index entries and cross references by means of routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number is allocated to each new index string and each new index term, respectively. KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods. Thus, detailed algorithms for the machine production of various outputs are provided for institutions that can use computer facilities.
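
Purely as an illustration of the two-line entry format with lead, display, and qualifier positions (the actual KOSIS rules involve 24 role operators and the indexer's intellectual work, which are not modeled here), the sketch below rotates a context-dependent string of terms so that each term takes the lead position in turn.

```python
# Toy rotation of a context-dependent string into two-line index entries.
def make_entries(terms):
    """terms: a context-dependent string of terms, broadest context first."""
    entries = []
    for i, lead in enumerate(terms):
        qualifier = ". ".join(reversed(terms[:i]))   # wider context of the lead
        display = ". ".join(terms[i + 1:])           # terms context-dependent on the lead
        lead_line = f"{lead}. {qualifier}".rstrip(". ")
        entries.append((lead_line, display))
    return entries

for lead_line, display_line in make_entries(["Korea", "Libraries", "Subject catalogs"]):
    print(lead_line)             # first line: lead (and qualifier)
    print("   " + display_line)  # second line: display
```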

A 2kβ Algorithm for Euler function 𝜙(n) Decryption of RSA (RSA의 오일러 함수 𝜙(n) 해독 2kβ 알고리즘)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.7
    • /
    • pp.71-76
    • /
    • 2014
  • In the typical public-key cryptosystem RSA, it is regarded as virtually impossible to recover the very large primes p and q from the composite number n=pq using integer factorization. When the public key e and the composite number n are known but the private key d remains unknown in asymmetric-key RSA, message decryption is carried out by first obtaining $\phi(n)=(p-1)(q-1)=n+1-(p+q)$ and then computing the inverse $d=e^{-1} \pmod{\phi(n)}$. Integer factorization of n into p and q is the most widely used way to produce $\phi(n)$, but it is regarded as mathematically hard. Among the various integer factorization methods, the most popular is the congruence of squares $a^2 \equiv b^2 \pmod{n}$ with $a=(p+q)/2$, $b=(q-p)/2$, which is more commonly used than $n/p=q$ trial division. Despite the availability of a number of congruence of squares methods, however, many of the RSA numbers remain unfactorable. This paper thus proposes an algorithm that directly and immediately obtains $\phi(n)$. The proposed algorithm computes $2^k\beta_j \equiv 2^i \pmod{n}$, $0 \leq i \leq \gamma-1$, $k=1,2,\ldots$, or $2^k\beta_j=2\beta_j$ for $2^j \equiv \beta_j \pmod{n}$, $2^{\gamma-1} < n < 2^{\gamma}$, $j=\gamma-1,\gamma,\gamma+1$, to obtain the solution. It has been found to be capable of finding an arbitrarily located $\phi(n)$ in the range $n-10\lfloor\sqrt{n}\rfloor < \phi(n) \leq n-2\lfloor\sqrt{n}\rfloor$ much more efficiently than conventional algorithms.
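
The quoted range for $\phi(n)$ can be illustrated with a toy brute-force search. This shows only the search interval and the verification step, not the paper's $2^k\beta$ algorithm, and it is feasible only for small moduli: once a candidate $\phi(n)$ is found, $p$ and $q$ are recovered as the roots of $t^2-(n+1-\phi(n))t+n=0$.

```python
# Toy search over the stated range for phi(n); illustrative only (small n).
from math import isqrt

def phi_by_range_search(n):
    # candidate phi lies in  n - 10*floor(sqrt(n)) < phi <= n - 2*floor(sqrt(n))
    for phi in range(n - 2 * isqrt(n), n - 10 * isqrt(n), -1):
        s = n + 1 - phi                  # candidate value of p + q
        disc = s * s - 4 * n             # equals (p - q)^2 if the candidate is correct
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            p = (s - isqrt(disc)) // 2
            q = (s + isqrt(disc)) // 2
            if p * q == n:
                return phi, p, q
    return None

print(phi_by_range_search(187))   # 187 = 11 * 17, so phi(187) = 160 -> (160, 11, 17)
```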

Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In;Yun, Un-Il
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.77-83
    • /
    • 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, is a method for extracting useful pattern information hidden in large data sets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow us to analyze various important characteristics of databases more easily and automatically. However, traditional frequent pattern mining methods, which simply extract all possible frequent patterns whose support values are not smaller than a user-given minimum support threshold, have the following problems. First, traditional approaches have to generate an enormous number of patterns depending on the characteristics of a given database and the threshold setting, and this number can increase in geometric progression. Such work also wastes runtime and memory resources. Furthermore, the excessive number of generated patterns makes analysis of the mining results troublesome. To solve these issues of traditional frequent pattern mining approaches, the concept of representative pattern mining and various related works have been proposed. In contrast to traditional methods that find all possible frequent patterns in databases, representative pattern mining approaches selectively extract a smaller number of patterns that represent the general frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that consider the maximality or closure property of generated frequent patterns, and conduct a comparison and analysis of these techniques. Given a frequent pattern, maximality signifies that every possible superset of the pattern has a support value smaller than the user-specified minimum support threshold; the closure property means that, among all possible supersets, there is no superset whose support is equal to that of the pattern. By mining maximal frequent patterns or closed frequent patterns, we can achieve effective pattern compression and perform mining operations with much smaller time and space resources. In addition, compressed patterns can be converted back into the original frequent pattern forms if necessary; in particular, the closed frequent pattern notation can convert representative patterns back into the original ones without any information loss. That is, we can obtain a complete set of original frequent patterns from closed frequent patterns. Although the maximal frequent pattern notation does not guarantee a complete recovery rate in the process of pattern conversion, it has the advantage of extracting a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance results and characteristics of the aforementioned techniques in terms of pattern generation, runtime, and memory usage by conducting a performance evaluation on various real data sets collected from the real world. For a more exact comparison, the algorithms implementing these techniques are run on the same platform and at the same implementation level.
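
The two condensing notions compared in the paper can be illustrated on an already-mined set of frequent patterns: a pattern is closed if no proper superset has the same support, and maximal if it has no frequent proper superset at all. The toy patterns and supports below are purely illustrative.

```python
# Closed vs. maximal filtering over a toy set of frequent patterns.
def closed_patterns(freq):
    """freq: dict mapping frozenset pattern -> support count."""
    return {p: s for p, s in freq.items()
            if not any(p < q and s == freq[q] for q in freq)}

def maximal_patterns(freq):
    """Patterns with no frequent proper superset."""
    return {p: s for p, s in freq.items()
            if not any(p < q for q in freq)}

freq = {frozenset("a"): 4, frozenset("b"): 3,
        frozenset("ab"): 3, frozenset("abc"): 2}
print(closed_patterns(freq))    # {a}:4, {a,b}:3, {a,b,c}:2  ({b} is absorbed by {a,b})
print(maximal_patterns(freq))   # only {a,b,c}:2
```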

Learning Material Bookmarking Service based on Collective Intelligence (집단지성 기반 학습자료 북마킹 서비스 시스템)

  • Jang, Jincheul;Jung, Sukhwan;Lee, Seulki;Jung, Chihoon;Yoon, Wan Chul;Yi, Mun Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.179-192
    • /
    • 2014
  • Keeping in line with the recent changes in the information technology environment, online learning environments that support multiple users' participation, such as MOOCs (Massive Open Online Courses), have become important. One of the largest professional associations in information technology, the IEEE Computer Society, announced that "Supporting New Learning Styles" is a crucial trend in 2014. Popular MOOC services, Coursera and edX, have continued to build active learning environments with a large number of lectures accessible anywhere using smart devices, and have been used by an increasing number of users. In addition, collaborative web services (e.g., blogs and Wikipedia) also support the creation of various user-uploaded learning materials, resulting in a vast amount of new lectures and learning materials being created every day in the online space. However, it is difficult for an online educational system to maintain a learner's motivation, as learning occurs remotely with limited capability to share knowledge among learners. Thus, it is essential to understand which materials are needed by each learner and how to motivate learners to actively participate in an online learning system. To overcome these issues, leveraging constructivism theory and collective intelligence, we have developed a social bookmarking system called WeStudy, which supports learning material sharing among users and provides personalized learning material recommendations. Constructivism theory argues that knowledge is constructed while learners interact with the world. Collective intelligence can be separated into two types: (1) collaborative collective intelligence, which is built on the basis of direct collaboration among participants (e.g., Wikipedia), and (2) integrative collective intelligence, which produces new forms of knowledge by combining independent and distributed information through highly advanced technologies and algorithms (e.g., Google PageRank, recommender systems). A recommender system, one example of integrative collective intelligence, utilizes users' online activities to recommend what they may be interested in. Our system includes both collaborative and integrative collective intelligence functions. We analyzed well-known web services based on collective intelligence, such as Wikipedia, Slideshare, and Videolectures, to identify the main design factors that support collective intelligence. Based on this analysis, in addition to sharing online resources through social bookmarking, we selected three essential functions for our system: 1) multimodal visualization of learning materials in two forms (list and graph), 2) personalized recommendation of learning materials, and 3) explicit designation of learners of interest. After developing the web-based WeStudy system, we conducted usability testing through the heuristic evaluation method with seven heuristic indices: features and functionality, cognitive page, navigation, search and filtering, control and feedback, forms, and context and text. We recruited 10 experts who majored in human-computer interaction and worked in the same field, and requested both quantitative and qualitative evaluations of the system. The evaluation results show that, relative to the other functions evaluated, the list/graph page produced higher scores on all indices except context and text, for which the learning material page produced the best score. In general, the explicit designation of learners of interest, one of the distinctive functions, received lower scores on all usability indices because its functionality was unfamiliar to the users. In summary, the evaluation results show that our system achieved high usability and good performance, with some minor issues that need to be fully addressed before the public release of the system to large-scale users. The study findings provide practical guidelines for the design and development of various systems that utilize collective intelligence.

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.101-116
    • /
    • 2015
  • Recently, due to the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed towards analyzing unstructured big data to resolve various social problems. Accordingly, the focus of data-driven decision-making is moving from structured data analysis to unstructured data analysis. Also, in the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data has steadily increased in order to improve system performance. Approaches to improving the performance of recommendation systems can be found in two aspects: improving algorithms and acquiring useful, high-quality data. Traditionally, most efforts to improve the performance of recommendation systems took the former approach, while the latter has attracted relatively little attention. In this sense, efforts to utilize unstructured data from various sources are very timely and necessary. In particular, as the interests of users are directly connected with their needs, identifying user interests through unstructured big data analysis can be a clue to improving the performance of recommendation systems. Accordingly, this study proposes a methodology for improving recommendation systems by measuring user interests. Specifically, this study proposes a method to quantify user interests by analyzing their internet usage patterns, and to predict repurchases based on the discovered preferences. There are two important modules in this study. The first module predicts the repurchase probability of each category by analyzing users' purchase histories. We include the first module in our research scope in order to compare the accuracy of a traditional purchase-based prediction model with that of our new model presented in the second module. This procedure extracts users' purchase histories. The core part of our methodology is the second module, which extracts users' interests by analyzing the news articles the users have read. The second module constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles. The module then analyzes users' news access patterns and constructs a correspondence matrix between articles and users. By merging the results of these processes, we obtain a correspondence matrix between users and topics, which describes users' interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. In this paper, we also provide experimental results of our performance evaluation. The outline of the data used in our experiments is as follows. We acquired web transaction data for 5,000 panelists from a company specializing in analyzing the rankings of internet sites. First, we extracted 15,000 URLs of news articles published from July 2012 to June 2013 from the original data and crawled the main contents of the news articles. We then selected 2,615 users who had read at least one of the extracted news articles. Among these 2,615 users, we found that 359 target users had purchased at least one item from our target shopping mall 'G'. In the experiments, we analyzed the purchase histories and news access records of these 359 internet users. From the performance evaluation, we found that our prediction model, which uses both users' interests and purchase histories, outperforms a prediction model using only purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial neural network based models were used. Similarly, our model outperformed the traditional one in the beauty, computer, digital, fashion, food, and furniture categories when decision tree based models were used, although the improvement was very small.
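
A hedged sketch of the interest-extraction step in the second module: given an article-topic matrix from a topic model and a user-article access matrix from browsing logs, their product yields a user-topic matrix that describes interests in a structured form. The matrices below are toy values, not the study's data, and the topic model itself is assumed to have been run beforehand.

```python
# Toy illustration: merging an article-topic matrix with a user-article matrix.
import numpy as np

article_topic = np.array([[0.8, 0.1, 0.1],    # article 0: mostly topic 0
                          [0.1, 0.7, 0.2],    # article 1: mostly topic 1
                          [0.2, 0.2, 0.6]])   # article 2: mostly topic 2

user_article = np.array([[1, 0, 1],           # user 0 read articles 0 and 2
                         [0, 1, 0]])          # user 1 read article 1

user_topic = user_article @ article_topic                         # interest mass per topic
user_topic = user_topic / user_topic.sum(axis=1, keepdims=True)   # normalize per user
print(user_topic)   # structured description of each user's interests
```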