• Title/Summary/Keyword: Computer Language


A Study on Deep Learning Model for Discrimination of Illegal Financial Advertisements on the Internet

  • Kil-Sang Yoo; Jin-Hee Jang;Seong-Ju Kim;Kwang-Yong Gim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.8
    • /
    • pp.21-30
    • /
    • 2023
  • The study proposes a Python-based deep learning text classification model to determine the legality of financial advertising posts on the internet. These posts promote unlawful financial activities, including the trading of bank accounts, credit card fraud, cashing out through mobile payments, and the sale of personal credit information. Despite the efforts of financial regulatory authorities, illegal financial activities persist. The proposed model is intended to aid in identifying and detecting illicit content in internet-based illegal financial advertising, contributing to the ongoing efforts to combat such activities. The study uses convolutional neural networks (CNN) and recurrent neural networks (RNN, LSTM, GRU), which are commonly used text classification techniques. The raw data for the model is based on manually confirmed regulatory judgments. By tuning the hyperparameters of the Korean natural language processing and deep learning models, the study arrived at an optimized model with the best performance. This research is significant in that it presents a deep learning model for discerning illegal financial advertising on the internet, a problem not previously explored. With an accuracy of 91.3% to 93.4%, the model shows promise for practical application in detecting illicit financial advertisements, ultimately contributing to the eradication of such unlawful advertising.
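The RNN variants named in the abstract (LSTM, GRU) share the same gating idea. As an illustration only, not the authors' implementation, a single GRU forward step over a token sequence can be sketched in NumPy; all dimensions and weights here are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU forward step: x is the current input embedding,
    h is the previous hidden state, params holds the weight matrices."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde         # blended new hidden state

# Hypothetical sizes: 8-dim embeddings, 16-dim hidden state.
rng = np.random.default_rng(0)
d_in, d_h = 8, 16
params = [rng.standard_normal((d_h, d)) * 0.1
          for d in (d_in, d_h, d_in, d_h, d_in, d_h)]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):  # a 5-token sequence
    h = gru_step(x, h, params)
```

In a text classifier of the kind described, the final hidden state `h` would feed a dense layer that outputs the legal/illegal decision.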

Applying an Aggregate Function AVG to OLAP Cubes (OLAP 큐브에서의 집계함수 AVG의 적용)

  • Lee, Seung-Hyun;Lee, Duck-Sung;Choi, In-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.1
    • /
    • pp.217-228
    • /
    • 2009
  • Data analysis applications typically aggregate data across many dimensions, looking for unusual patterns. Although such analyses are usually possible with standard structured query language (SQL) queries, the queries may become very complex, and a complex query may require many scans of the base table, leading to poor performance. Because online analytical processing (OLAP) queries are usually complex, a dedicated aggregation operator, called the data cube or simply cube, has been proposed. The data cube supports OLAP tasks such as aggregation and sub-totals. Many aggregate functions can be used to construct a data cube; they are classified into three categories: distributive, algebraic, and holistic. It has been thought that distributive functions such as SUM, COUNT, MAX, and MIN can be used to construct a data cube, and that an algebraic function such as AVG can also be used if it is replaced by an intermediate function. The reasoning is that even though AVG is not distributive, the intermediate function (SUM, COUNT) is distributive, and AVG can certainly be computed from (SUM, COUNT). In this paper, however, it is found that the intermediate function (SUM, COUNT) cannot be applied to OLAP cubes, and consequently the function leads to erroneous conclusions and decisions. The objective of this study is to identify problems in applying the aggregate function AVG to OLAP cubes and to design a process for solving them.
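The standard motivation for the (SUM, COUNT) intermediate function can be shown with a small, hypothetical example: averaging pre-computed cell averages gives the wrong roll-up, while merging (SUM, COUNT) pairs first gives the true average. (The paper argues that subtler problems remain even with this intermediate function in OLAP cubes; this sketch only illustrates the textbook rationale for it.)

```python
# Two hypothetical cube cells, e.g. sales amounts for two regions.
cell_a = [10, 20, 30]        # AVG = 20, (SUM, COUNT) = (60, 3)
cell_b = [40]                # AVG = 40, (SUM, COUNT) = (40, 1)

# Naive roll-up: average the averages -- wrong.
avg_of_avgs = (sum(cell_a) / len(cell_a) + sum(cell_b) / len(cell_b)) / 2

# Distributive roll-up: merge (SUM, COUNT) first, then divide.
total_sum = sum(cell_a) + sum(cell_b)
total_count = len(cell_a) + len(cell_b)
true_avg = total_sum / total_count

print(avg_of_avgs)  # 30.0
print(true_avg)     # 25.0
```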

Design and Implementation of Mediterranean Electronic Cultural Atlas(MECA) for Researchers (연구자 중심의 지중해전자문화지도(MECA) 설계 및 구현)

  • Kang, Ji-Hoon;Lee, Dong-Yul;Yu, Young-Jung;Moon, Sang-Ho
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.6 no.1
    • /
    • pp.57-66
    • /
    • 2016
  • The electronic cultural atlas is a representative methodology of the Digital Humanities. Since an electronic cultural atlas can visualize various cultural information on an electronic map, a form of information visualization, users can gain an intuitive understanding of specific areas. It can therefore be used effectively in Area Studies, and it helps users comprehensively understand information on historical events in a particular area together with time information, because an electronic cultural atlas represents a particular subject and its time information along with geographical information on a map. In other words, the electronic cultural atlas may be considered a specialized Digital Humanities system for studying the Humanities and Area Studies. In this paper, we design and implement the Mediterranean Electronic Cultural Atlas (MECA) for researchers of the Mediterranean area, a region whose cultural hybridity was formed by exchanges of civilization, religion, race, and language. In particular, a 'Digital Humanities Research Support System' is constructed to visualize research outcomes related to the Mediterranean area on the electronic cultural atlas and to support further research.

Comparative analysis of deep learning performance for Python and C# using Keras (Keras를 이용한 Python과 C#의 딥러닝 성능 비교 분석)

  • Lee, Sung-jin;Moon, Sang-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.360-363
    • /
    • 2022
  • According to the 2018 Kaggle ML & DS Survey, TensorFlow and Keras account for 41.82% and 34.09%, respectively, of the frameworks used for machine learning and data science, and about 82% of development is done in Python. A significant number of machine learning and deep learning systems use the Keras framework with Python, but because Python is a script language, distribution and execution are limited to the Python script environment, making it difficult to operate in various environments. This paper implemented a machine learning and deep learning system using C# and Keras running in Visual Studio 2019. Using the MNIST dataset, 100 tests were performed in Python 3.8.2 and C# .NET 5.0 environments. For Python, the minimum time was 1.86 seconds, the maximum 2.38 seconds, and the average 1.98 seconds; for C#, the minimum time was 1.78 seconds, the maximum 2.11 seconds, the average 1.85 seconds, and the total 37.02 seconds. As a result of the experiment, the performance of C# improved by about 6% compared to Python, and since executable files can be extracted, high practical utilization is expected.
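The measurement protocol described (repeated runs, min/max/average wall-clock time) can be sketched in Python. The `train_once` function below is a placeholder workload standing in for the actual Keras/MNIST training step, which is not reproduced here:

```python
import time

def train_once():
    """Placeholder workload standing in for one Keras/MNIST run."""
    return sum(i * i for i in range(100_000))

def benchmark(fn, runs=10):
    """Time `fn` over `runs` repetitions; report min/max/mean/total seconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return {"min": min(times), "max": max(times),
            "mean": sum(times) / len(times), "total": sum(times)}

stats = benchmark(train_once, runs=10)
print(f"min {stats['min']:.3f}s  max {stats['max']:.3f}s  "
      f"mean {stats['mean']:.3f}s  total {stats['total']:.3f}s")
```

`time.perf_counter` is used rather than `time.time` because it is a monotonic clock intended for interval measurement.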


Ensuring the Quality of Higher Education in Ukraine

  • Olha Oseredchuk;Mykola Mykhailichenko;Nataliia Rokosovyk;Olha Komar;Valentyna Bielikova;Oleh Plakhotnik;Oleksandr Kuchai
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.142-148
    • /
    • 2023
  • The National Agency for Quality Assurance in Higher Education plays a crucial role in education in Ukraine: as an independent entity, it creates and ensures quality standards for higher education, which make it possible to properly implement the educational policy of the state and to develop the economy and society as a whole. The purpose of the article is to reveal the crucial role of the National Agency for Quality Assurance in Higher Education in creating quality management in higher education institutions, and to show its mechanism as an independent entity that creates and ensures quality standards of higher education. The mission of the National Agency is to become a catalyst for positive change in higher education and for the formation of a culture of quality. Its strategic goals are implemented in three main areas: the quality of educational services, recognition of the quality of scientific results, and ensuring the systemic impact of the National Agency. The National Agency exercises various powers, which can be divided into regulatory, analytical, accreditation, control, and communication powers. The effectiveness of the National Agency's work for 2020 is demonstrated, and the results of a survey of 183 higher education institutions of Ukraine conducted by the National Agency are shown. Emphasis is placed on the development of the 'Recommendations of the National Agency for Quality Assurance in Higher Education regarding the introduction of an internal quality assurance system.' The international activity and international recognition of the National Agency are also described.

Multifaceted Evaluation Methodology for AI Interview Candidates - Integration of Facial Recognition, Voice Analysis, and Natural Language Processing (AI면접 대상자에 대한 다면적 평가방법론 -얼굴인식, 음성분석, 자연어처리 영역의 융합)

  • Hyunwook Ji;Sangjin Lee;Seongmin Mun;Jaeyeol Lee;Dongeun Lee;kyusang Lim
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.55-58
    • /
    • 2024
  • Recently, the adoption of AI interview systems by companies has been increasing, and there is also considerable controversy over the effectiveness of AI interviews. This paper implements the evaluation of applicants during an AI interview in three areas, vision, voice, and natural language processing, in order to assess the appropriateness of a multifaceted analysis methodology for interview candidates. First, in the visual area, a convolutional neural network (CNN) was used to recognize six emotions from the applicant's face, and whether the applicant was gazing at the camera was derived as a time series; the focus was on analyzing the applicant's attitude toward the interview and, in particular, the emotions revealed in the face. Second, because visual cues alone are insufficient to assess an applicant's attitude, the applicant's voice was converted into frequency-domain features, and a Bidirectional LSTM was trained to extract six emotions from the voice. Third, to grasp the contextual meaning of the applicant's statements, the voice was converted to text using Speech-to-Text (STT), and the frequency of the words used was analyzed to identify the applicant's language habits. In addition, the KoBERT model was applied for sentiment analysis of the applicant's statements, and objective evaluation metrics were developed and applied to assess the applicant's personality, attitude, and understanding of the job. The analysis shows that, regarding the appropriateness of the multifaceted AI interview evaluation system, the accuracy of the visual component was largely verified objectively. In voice-based emotion analysis, because interviewees do not display every type of emotion within the limited time and speak in a similar tone, the frequencies indicating particular emotions were somewhat concentrated. Finally, in the natural language processing area, it was concluded that there is a growing need for an analysis model that can understand the overall context and tone of an interviewee's statements, beyond speech habits and the frequency of particular words.
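The word-frequency step described in the third stage (analyzing STT output to capture language habits) can be sketched with a minimal, hypothetical transcript; the whitespace tokenization here is a crude stand-in for the Korean morphological analysis an actual system would need:

```python
from collections import Counter

# Hypothetical STT transcript of an applicant's answer (not real data).
transcript = "I think teamwork matters and I think communication matters most"

tokens = transcript.lower().split()  # stand-in for real tokenization
freq = Counter(tokens)

# The most frequent words hint at the speaker's verbal habits.
print(freq.most_common(3))  # [('i', 2), ('think', 2), ('matters', 2)]
```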


Users' Attachment Styles and ChatGPT Interaction: Revealing Insights into User Experiences

  • I-Tsen Hsieh;Chang-Hoon Oh
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.3
    • /
    • pp.21-41
    • /
    • 2024
  • This study explores the relationship between users' attachment styles and their interactions with ChatGPT (Chat Generative Pre-trained Transformer), an advanced language model developed by OpenAI. As artificial intelligence (AI) becomes increasingly integrated into everyday life, it is essential to understand how individuals with different attachment styles engage with AI chatbots in order to build a user experience that meets specific user needs and interacts with users in the most suitable way. Grounded in attachment theory from psychology, we explore the influence of attachment style on users' interactions with ChatGPT, bridging a significant gap in the understanding of human-AI interaction. Contrary to expectations, attachment styles did not have a significant impact on ChatGPT usage or on reasons for engagement. Regardless of their attachment styles, users hesitated to fully trust ChatGPT with critical information, emphasizing the need to address trust issues in AI systems. Additionally, this study uncovers complex patterns of attachment styles, demonstrating their influence on interaction patterns between users and ChatGPT. By focusing on the distinctive dynamics between users and ChatGPT, we aim to uncover how attachment styles influence these interactions, guiding the development of AI chatbots for personalized user experiences. The introduction of the Perceived Partner Responsiveness Scale serves as a valuable tool to evaluate users' perceptions of ChatGPT's role, shedding light on the anthropomorphism of AI. This study contributes to the wider discussion on human-AI relationships, emphasizing the significance of incorporating emotional intelligence into AI systems for a user-centered future.

Optimum Size Selection and Machinery Costs Analysis for Farm Machinery Systems - Programming for Personal Computer - (농기계(農機械) 투입모형(投入模型) 설정(設定) 및 기계이용(機械利用) 비용(費用) 분석연구(分析硏究) - PC용(用) 프로그램 개발(開發) -)

  • Lee, W.Y.;Kim, S.R.;Jung, D.H.;Chang, D.I.;Lee, D.H.;Kim, Y.H.
    • Journal of Biosystems Engineering
    • /
    • v.16 no.4
    • /
    • pp.384-398
    • /
    • 1991
  • A computer program was developed to select the optimum size of farm machines and analyze their operating costs under various farming conditions. It was written in the FORTRAN 77 and BASIC languages and can be run on any personal computer supporting the Korean Standard Complete Type and Korean Language Code. The program was developed to be user-friendly, so that users can easily carry out cost analysis for the whole farm operation or for individual operations in rice production, and for plowing, rotary tilling, and pest control on upland fields. The program can analyze up to three different machines simultaneously for plowing and rotary tilling, and two machines for transplanting, pest control, and harvesting. The input data are the sizes of arable lands, the possible working days and number of laborers during the optimum working period, and custom rates, which vary by region and individual farming conditions. The results include the selected optimum combination of farm machines, the surplus or shortage of working days relative to the planned working period, machine capacities, break-even points by custom rate, fixed costs per month, and utilization costs per hectare.
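The break-even point by custom rate mentioned above follows from a standard machinery-cost relation: owning becomes cheaper than custom hire once annual use exceeds the annual fixed cost divided by the margin between the custom rate and the machine's variable cost per hectare. A minimal sketch with made-up figures, not the paper's data:

```python
def break_even_area(fixed_cost, custom_rate, variable_cost):
    """Area (ha/year) above which owning a machine beats custom hire.
    fixed_cost:    annual ownership cost (depreciation, interest, housing)
    custom_rate:   hire charge per hectare
    variable_cost: owner's fuel/labor/repair cost per hectare"""
    if custom_rate <= variable_cost:
        raise ValueError("custom hire is always cheaper per hectare")
    return fixed_cost / (custom_rate - variable_cost)

# Hypothetical figures: 1,200,000 won/year fixed cost, 60,000 won/ha
# custom rate, 20,000 won/ha variable cost -> break-even at 30 ha/year.
area = break_even_area(1_200_000, 60_000, 20_000)
print(area)  # 30.0
```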


Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While in zero anaphora resolution the candidates for the antecedent are only noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification over all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected.
However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values, indicating whether or not the corresponding noun phrase is the antecedent. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient-descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer, and omitted subject or object cases are identified; the performance of our system thus depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which indicates that a state-of-the-art system can be developed with our technique. Future work that enables the system to utilize semantic information is expected to lead to a significant performance improvement.
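The Pegasos algorithm named above is a stochastic subgradient method for the SVM objective. A minimal binary (non-structural) sketch in NumPy, with made-up toy data, illustrates the update rule that the modified structural version builds on:

```python
import numpy as np

def pegasos(X, y, lam=0.1, epochs=50, seed=0):
    """Pegasos: stochastic subgradient descent on the hinge-loss SVM
    objective  lam/2 * ||w||^2 + mean(max(0, 1 - y_i * (w @ x_i)))."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)              # decreasing step size
            if y[i] * (w @ X[i]) < 1:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only the regularizer acts
                w = (1 - eta * lam) * w
    return w

# Toy linearly separable data; the second column is a bias feature.
X = np.array([[2.0, 1.0], [1.5, 1.0], [-2.0, 1.0], [-1.0, 1.0]])
y = np.array([1, 1, -1, -1])
w = pegasos(X, y)
preds = np.sign(X @ w)
```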

Privacy protection of seizure and search system (압수수색과 개인정보 보호의 문제)

  • Kim, Woon-Gon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.5
    • /
    • pp.123-131
    • /
    • 2015
  • The rapid development of information and communication technology has brought our society both convenience and new problems. Unlimited surveillance through electronic equipment is becoming possible, and mismanaging its early errors can lead to serious harm. We live under surveillance: smart phones accompany us from the moment we step through the front door, and closed-circuit television (CCTV) cameras, installed in many places for both public and private use, follow us outside. Meanwhile, as the development of information and communication made personal information easy to collect and store, and as such information became a valuable marketing asset for enterprises, illegal collection increased, and many cases related to this have actually occurred. In this information society, investigative agencies strive to detect even the smallest digital traces left by criminal acts, since such traces can determine the success of an investigation. Accordingly, seizure and search procedures targeting the usage traces of a computer or telephone used by a suspect have become essential, and whether such electronic evidence can be collected has become a decisive factor in the success or failure of an investigation. In many current cases, however, investigative agencies perform seizure and search comprehensively, and there is growing concern that such sweeping seizure may arbitrarily infringe on individuals' informational self-determination.
Consequently, in many nations, anxiety about comprehensive seizure and search of electronic information by investigative agencies has given rise to terms such as 'cyber exile'. This paper reviews whether, and how, the scope of seizure and search of electronic information should be limited in this respect.