• Title/Summary/Keyword: 한국컴퓨터


Improvement of ISMS Certification Components for Virtual Asset Services: Focusing on CCSS Certification Comparison (안전한 가상자산 서비스를 위한 ISMS 인증항목 개선에 관한 연구: CCSS 인증제도 비교를 중심으로)

  • Kim, Eun Ji;Koo, Ja Hwan;Kim, Ung Mo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.8
    • /
    • pp.249-258
    • /
    • 2022
  • Since the advent of Bitcoin, various virtual assets have been actively traded through the services of virtual asset exchanges. Because security incidents have recently occurred frequently at these exchanges, the government now requires them to obtain information security management system (ISMS) certification to strengthen their information protection, and 56 additional specialized items have been established. In this paper, we compared the domain importance of ISMS and the CryptoCurrency Security Standard (CCSS), a set of requirements for all information systems that make use of cryptocurrencies, and analyzed the results after mapping the two to gain insight into the characteristics of each certification system. Improvement items were prioritized into three levels, High, Medium, and Low, and improvements were derived for the four High-level items. These results can provide priorities for virtual asset and information system security, support methodical and systematic decision-making on the improvement of certification items, and contribute to revitalizing virtual asset transactions by enhancing the reliability and safety of virtual asset services.

Performance Evaluation of Octonion Space-Time Coded Physical Layer Security in MIMO Systems (MIMO 시스템에서 옥토니언 시공간 부호를 이용한 물리계층 보안에 대한 성능 분석)

  • Young Ju Kim;BeomGeun Kwak;Seulmin Lim;Cheon Deok Jin
    • Journal of Broadcast Engineering
    • /
    • v.28 no.1
    • /
    • pp.145-148
    • /
    • 2023
  • An open-loop octonion space-time block code for a four-transmit-antenna system is considered, and random phases are applied to the four transmit antennas for physical-layer security. This letter analyzes bit error rate (BER) versus signal-to-noise ratio (SNR) when an illegal hacker estimates the random phases of one to four of the transmit antennas by maximum likelihood (ML). Because the octonion code in the literature [1] is not fully orthogonal, this letter employs a perfectly orthogonal octonion code. When the hacker knows that the random phases are drawn from a 2-PSK constellation but must estimate all four of them, hacking remains impossible up to an SNR of 100 dB. When the hacker already knows some of the random phases, the BER drops to 10^-3, so the transmitted message could be hacked.
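
The intuition behind the random-phase protection can be seen in a much simpler setting. The Monte Carlo sketch below uses a single-antenna BPSK link, not the paper's four-antenna octonion code: when a per-symbol 2-PSK phase is unknown to the eavesdropper, its ML decision stays near a BER of 0.5 at any SNR, while a receiver that knows the phase gets the ordinary BPSK error rate. All parameters here are illustrative.

```python
# Toy illustration: an unknown 2-PSK random phase keeps an eavesdropper's BER
# near 0.5 regardless of SNR, while a receiver that knows the phase does not.
# Single-antenna simplification of the random-phase idea, not the octonion STBC.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
bits = rng.integers(0, 2, n)
sym = 1 - 2 * bits                       # BPSK mapping: 0 -> +1, 1 -> -1
phase = rng.choice([1, -1], n)           # random 2-PSK phase (0 or pi) per symbol

for snr_db in (0, 10, 100):
    sigma = 10 ** (-snr_db / 20)
    y = phase * sym + sigma * rng.standard_normal(n)
    ber_legit = np.mean((y * phase < 0) != (sym < 0))   # knows the phase: undo it first
    ber_eve = np.mean((y < 0) != (sym < 0))             # unknown phase: raw sign decision
    print(f"SNR {snr_db:3d} dB  legitimate BER {ber_legit:.4f}  eavesdropper BER {ber_eve:.4f}")
```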

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Jeong-Hoon, Kim;Jun-Yeong, Kim;Jun, Park;Sung-Wook, Park;Se-Hoon, Jung;Chun-Bo, Sim
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.89-96
    • /
    • 2022
  • Open-domain machine reading comprehension adds a retrieval function because no paragraph is provided for a given question. Document retrieval suffers from lower performance as the number of documents grows, despite abundant research on word-frequency-based TF-IDF. Paragraph selection likewise fails to capture paragraph context, including sentence-level characteristics, despite much research on word-based embeddings. Document reading comprehension suffers from slow training due to the large number of parameters, despite much research on BERT. To address these three issues, this study used BM25, which also takes text length into account, for retrieval and InferSent to obtain sentence context, and proposed an open-domain machine reading comprehension model with ALBERT to reduce the number of parameters. An experiment was conducted on the SQuAD 1.1 dataset. BM25 outperformed TF-IDF in document retrieval by 3.2%. InferSent outperformed the Transformer baseline in paragraph selection by 0.9%. Finally, as the number of paragraphs increased in reading comprehension, ALBERT was 0.4% higher in EM and 0.2% higher in F1.
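
As a minimal sketch of the BM25 retrieval stage, the snippet below uses the rank_bm25 package over a toy corpus; the paper's actual corpus, tokenizer, and retrieval settings are not shown here and the package choice is an assumption.

```python
# Minimal BM25 retrieval sketch with rank_bm25 (corpus and query are toy examples).
from rank_bm25 import BM25Okapi

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "BM25 is a ranking function that accounts for term frequency and document length.",
    "ALBERT reduces BERT's parameter count through factorized embeddings and layer sharing.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query = "which model reduces the number of BERT parameters".lower().split()
scores = bm25.get_scores(query)                 # one relevance score per document
top_docs = bm25.get_top_n(query, corpus, n=2)   # highest-scoring documents first
print(scores)
print(top_docs)
```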

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin, Kim;Chilwoo, Lee
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.65-75
    • /
    • 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of results, such as text, images, and music. In this paper, we propose a method for preprocessing audio data, using the Niko's MIDI Pack sound source files as a dataset, and for generating music with a Bi-LSTM. Based on the generated root note, multiple hidden layers are stacked to create new notes suitable for the composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that most affect the data fed in from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches obtained by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and prediction quality of the MIDI deep-learning process. The trained model generates sound that follows the development of a musical scale, distinct from noise, and we aim to contribute to generating harmonically stable music.
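
A minimal sketch of a Bi-LSTM next-note predictor with attention is shown below. It assumes note events have already been encoded as integer token sequences; the window length, vocabulary size, self-attention placement, and toy training data are illustrative, not the paper's multi-channel architecture or hyperparameters.

```python
# Sketch of a Bi-LSTM next-note predictor with a simple attention layer (Keras).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, VOCAB = 32, 128          # assumed window length and pitch vocabulary size

inputs = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(VOCAB, 64)(inputs)                         # learn note embeddings
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
attn = layers.Attention()([x, x])                               # self-attention over Bi-LSTM states
x = layers.GlobalAveragePooling1D()(attn)
outputs = layers.Dense(VOCAB, activation="softmax")(x)          # next-note distribution

model = models.Model(inputs, outputs)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

# toy data just to show the training call
X = np.random.randint(0, VOCAB, size=(256, SEQ_LEN))
y = np.random.randint(0, VOCAB, size=(256,))
model.fit(X, y, epochs=1, batch_size=32)
```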

The Application Methods of FarmMap Reading in Agricultural Land Using Deep Learning (딥러닝을 이용한 농경지 팜맵 판독 적용 방안)

  • Wee Seong Seung;Jung Nam Su;Lee Won Suk;Shin Yong Tae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.2
    • /
    • pp.77-82
    • /
    • 2023
  • The Ministry of Agriculture, Food and Rural Affairs established the FarmMap, a digital map of agricultural land. In this study, we suggest applying deep learning to FarmMap reading for farmland types such as paddy fields, upland fields, ginseng, fruit trees, facilities, and uncultivated land. The FarmMap digitizes real-world agricultural land using aerial and satellite images and is used as spatial information for planting status and drone operation. A reading manual is prepared and updated every year for demarcating the boundaries of agricultural land and reading its attributes. Human readings of agricultural land differ depending on reading ability and experience, and reading errors are difficult to verify in practice because of budget limitations. Since the FarmMap contains the location and class information of the corresponding objects in the imagery for the five types of farmland attributes, an instance segmentation model based on ResNet50 was tested as a suitable AI technique. The attribute readings of agricultural land produced by deep learning were then compared with attribute readings by humans. If future development focuses on the attribute readings where the two differ, it is expected to play a major role in reducing attribute errors and improving the accuracy of the digital map of agricultural land.
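
The paper names ResNet50 as the backbone of an instance segmentation approach; one common way to set that up, shown as a sketch below, is torchvision's Mask R-CNN with a ResNet50-FPN backbone adapted to the farmland classes. The wrapper choice, class count, and dummy input are assumptions for illustration.

```python
# Sketch: Mask R-CNN with a ResNet50 backbone adapted to farmland attribute classes.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 1 + 5  # assumed: background + five farmland attribute classes (illustrative)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# replace the box classifier head for the new class count
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# replace the mask prediction head as well
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 512, 512)])   # one dummy aerial image tile
print(out[0].keys())                          # boxes, labels, scores, masks
```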

Private Blockchain and Biometric Authentication-based Chronic Disease Management Telemedicine System for Smart Healthcare (스마트 헬스케어를 위한 프라이빗 블록체인과 생체인증기반의 만성질환관리 원격의료시스템)

  • Young-Ae Han;Hyeok Kang;Keun-Ho Lee
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.33-39
    • /
    • 2023
  • As the number of people with chronic diseases increases in an aging society, preventing and managing their diseases is urgent. Although biometric authentication methods and telemedicine systems have been introduced to address these problems, the security of medical information and personal authentication remains difficult to ensure. Since smart healthcare involves subjects' personal medical information, the security of personal information is the most important concern. Therefore, in this paper we propose a telemedicine system that uses ECG from a wristband-type smart wearable device and facial authentication for personal identification in a private blockchain environment. The system targets medical personnel and chronic-disease patients in all regions, and it combines a private blockchain, which can increase data integrity and transparency, with ECG and face authentication, which are difficult to forge or alter and offer strong personal identification, to provide high security and reliability. Through this, we intend to contribute to more efficient chronic disease management by focusing on disease prevention and health management for patients with chronic diseases at home.
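
To make the integrity idea concrete, the toy sketch below anchors digests of ECG measurements to a simple hash-chained ledger. The paper's actual private blockchain platform, data schema, and authentication flow are not specified here; the class, field names, and sample record are all illustrative.

```python
# Toy hash-chained ledger: store only digests of biometric measurements, so tampering
# with any stored record breaks the chain of hashes. Illustrative only.
import hashlib, json, time

class SimpleLedger:
    def __init__(self):
        genesis = {"index": 0, "prev": "0" * 64, "data": "genesis"}
        genesis["hash"] = self._hash(genesis)
        self.blocks = [genesis]

    @staticmethod
    def _hash(block):
        payload = json.dumps({k: block[k] for k in ("index", "prev", "data")}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def add_record(self, patient_id, ecg_summary):
        digest = hashlib.sha256(json.dumps(ecg_summary, sort_keys=True).encode()).hexdigest()
        block = {
            "index": len(self.blocks),
            "prev": self.blocks[-1]["hash"],
            "data": {"patient": patient_id, "ecg_digest": digest, "ts": time.time()},
        }
        block["hash"] = self._hash(block)
        self.blocks.append(block)
        return block

ledger = SimpleLedger()
print(ledger.add_record("patient-001", {"hr": 72, "rr_ms": [820, 805, 812]}))
```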

Quantitative Estimation Method for ML Model Performance Change, Due to Concept Drift (Concept Drift에 의한 ML 모델 성능 변화의 정량적 추정 방법)

  • Soon-Hong An;Hoon-Suk Lee;Seung-Hoon Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.6
    • /
    • pp.259-266
    • /
    • 2023
  • It is very difficult to measure the performance of a machine learning model at the business service stage, so the operations department cannot manage model performance effectively. Academically, various studies have been conducted on concept drift detection to determine whether a model's state is still appropriate. The operations department wants to know the performance of the operating model quantitatively, but concept drift detection can only describe the state of the model relative to the data; it cannot estimate the model's quantitative performance. In this study, we propose a performance prediction model (PPM) that quantitatively estimates precision from concept drift statistics. The proposed approach induces artificial drift in samples drawn from the training data, measures precision on those samples, builds a dataset of drift statistics and precision values, and trains on it. Then, the difference between actual and predicted precision on test data is used to correct the error of the performance prediction model. The proposed PPM was applied to two models usable in real business, a loan underwriting model and a credit card fraud detection model, and it was confirmed that precision was predicted effectively.
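
A minimal sketch of the idea follows: induce artificial drift in sampled data, record (drift statistic, precision) pairs, and fit a regressor that maps the drift statistic to expected precision. The mean-shift drift mechanism, the Kolmogorov-Smirnov statistic, and the linear regressor are stand-ins chosen for illustration, not the paper's exact formulation.

```python
# Sketch of a performance prediction model (PPM): drift statistic -> expected precision.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.metrics import precision_score

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X[:2000], y[:2000])

records = []
rng = np.random.default_rng(0)
for shift in np.linspace(0.0, 2.0, 20):            # increasingly strong artificial drift
    idx = rng.choice(2000, size=1000, replace=False) + 2000
    Xs, ys = X[idx] + shift, y[idx]                # mean shift simulates drifted inputs
    drift_stat = np.mean([ks_2samp(X[:2000, j], Xs[:, j]).statistic for j in range(X.shape[1])])
    prec = precision_score(ys, clf.predict(Xs), zero_division=0)
    records.append((drift_stat, prec))

stats, precs = np.array(records)[:, 0:1], np.array(records)[:, 1]
ppm = LinearRegression().fit(stats, precs)         # maps drift statistic to expected precision
print(ppm.predict([[0.3]]))                        # estimated precision at a given drift level
```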

Fake News Detection on YouTube Using Related Video Information (관련 동영상 정보를 활용한 YouTube 가짜뉴스 탐지 기법)

  • Junho Kim;Yongjun Shin;Hyunchul Ahn
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.19-36
    • /
    • 2023
  • As advances in information and communication technology have made it easier for anyone to produce and disseminate information, a new problem has emerged: fake news, false information intentionally shared to mislead people. Initially spread mainly through text, fake news has gradually evolved and is now distributed in multimedia formats. Since its founding in 2005, YouTube has become the world's leading video platform and is used by most people worldwide, but it has also become a primary source of fake news, causing social problems. Various researchers have worked on detecting fake news on YouTube. Approaches to fake news detection can be content-based or background-information-based, yet content-based approaches dominate both conventional fake news research and YouTube fake news detection research. This study proposes a fake news detection method based on background information rather than content. In detail, we suggest detecting fake news by utilizing related-video information from YouTube: the original video and its related videos are vectorized with Doc2vec, an embedding technique, and the resulting vectors are fed to a CNN, a deep learning network, to detect fake news. The empirical analysis shows that the proposed method has better prediction performance than the existing content-based approach to detecting fake news on YouTube. The proposed method contributes to making our society safer and more reliable by preventing the spread of fake news on YouTube, which is highly contagious.
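
A minimal sketch of the embedding-plus-classifier pipeline is given below: video text is vectorized with gensim's Doc2Vec, the original video's vector is stacked with its related videos' vectors, and a small 1D CNN classifies the result. The toy corpus, the number of related videos, the vector size, and the labels are all assumptions; the paper's actual features and network are not reproduced here.

```python
# Sketch: Doc2Vec vectors of a video and its related videos, classified by a small 1D CNN.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from tensorflow.keras import layers, models

# toy corpus: each "document" is the title/description text of one video
corpus = [
    "breaking news shocking truth revealed",
    "official statement press briefing today",
    "miracle cure doctors hate this trick",
    "weather forecast for the coming week",
]
tagged = [TaggedDocument(words=doc.split(), tags=[i]) for i, doc in enumerate(corpus)]
d2v = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=50)

def video_features(main_text, related_texts, n_related=3):
    vecs = [d2v.infer_vector(main_text.split())]
    vecs += [d2v.infer_vector(t.split()) for t in related_texts[:n_related]]
    while len(vecs) < n_related + 1:              # pad when few related videos exist
        vecs.append(np.zeros(32))
    return np.stack(vecs)                         # shape: (n_related + 1, 32)

X = np.stack([video_features(corpus[0], corpus[1:]),
              video_features(corpus[3], corpus[:3])])
y = np.array([1, 0])                              # 1 = fake, 0 = not fake (toy labels)

cnn = models.Sequential([
    layers.Input(shape=X.shape[1:]),
    layers.Conv1D(16, kernel_size=2, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
cnn.compile(loss="binary_crossentropy", optimizer="adam")
cnn.fit(X, y, epochs=1, verbose=0)
```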

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the speech is recognized and a service is provided, but the interpretation differs from the user's intention. Of these, misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each separation method, the similarity of consecutive utterance pairs was computed using word-embedding and document-embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. The research method involved using real user utterance records to train and develop a detection model based on patterns in the causes of misinterpretation errors. The results revealed that the most significant outcome was obtained with initial-consonant extraction for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods showed that different error types could be observed. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not recognized as errors, the study proposed diverse text separation methods and found a novel one that improved performance remarkably. Second, if this approach is applied to conversational agents or voice recognition services requiring neologism detection, the patterns of errors occurring from the voice recognition stage onward can be specified. The study also proposed and verified that, even when utterances are not categorized as errors, services can be provided according to the results the user desires.
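
A minimal sketch of the Hangul syllable decomposition and initial-consonant (chosung) extraction used as a text separation method is shown below. The paper computes similarity with embedding techniques; here difflib's character ratio stands in purely for illustration, and the example utterances are invented.

```python
# Sketch: extract the initial consonant (chosung) of each Hangul syllable and
# compare consecutive utterances with a simple character-level similarity.
from difflib import SequenceMatcher

CHOSUNG = ["ㄱ","ㄲ","ㄴ","ㄷ","ㄸ","ㄹ","ㅁ","ㅂ","ㅃ","ㅅ","ㅆ",
           "ㅇ","ㅈ","ㅉ","ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]

def chosung(text):
    """Replace each precomposed Hangul syllable with its initial consonant."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                 # precomposed Hangul syllable block
            out.append(CHOSUNG[code // 588])  # 588 = 21 medials * 28 finals
        else:
            out.append(ch)
    return "".join(out)

def pair_similarity(prev_utterance, next_utterance):
    return SequenceMatcher(None, chosung(prev_utterance), chosung(next_utterance)).ratio()

# consecutive utterance pair: the second is a user retry after a misinterpretation
print(chosung("오늘 날씨 알려줘"))                      # -> "ㅇㄴ ㄴㅆ ㅇㄹㅈ"
print(pair_similarity("오늘 날씨 알려줘", "오늘 날씨 알려 줘"))
```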

Analysis of Research Trends in New Drug Development with Artificial Intelligence Using Text Mining (텍스트 마이닝을 이용한 인공지능 활용 신약 개발 연구 동향 분석)

  • Jae Woo Nam;Young Jun Kim
    • Journal of Life Science
    • /
    • v.33 no.8
    • /
    • pp.663-679
    • /
    • 2023
  • This review analyzes research trends in new drug development using artificial intelligence from 2010 to 2022. The abstracts of 2,421 studies were organized into a corpus, and words with high frequency and high connection centrality were extracted after preprocessing. The analysis revealed that the word-frequency trend from 2010 to 2019 was similar to that from 2020 to 2022. In terms of research methods, many studies from 2010 to 2020 used machine learning, and since 2021 research using deep learning has been increasing. Through these studies, we investigated trends in the use of artificial intelligence by field, as well as the strengths, problems, and challenges of related research. We found that since 2021 the application of artificial intelligence has been expanding, for example to drug repositioning, computer-aided development of anticancer drugs, and clinical trials. This article briefly presents the prospects of new drug development research using artificial intelligence. If the reliability and safety of bio and medical data are ensured and artificial intelligence technology continues to develop, new drug development using artificial intelligence is expected to move toward personalized and precision medicine, and we encourage efforts in that field.
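
A minimal sketch of this kind of text-mining pipeline is shown below: term frequencies are computed over a corpus of abstracts, and a word co-occurrence network is built whose degree centrality stands in for the review's "connection centrality". The toy abstracts and the choice of degree centrality are assumptions for illustration.

```python
# Sketch: term frequency plus a word co-occurrence network with degree centrality.
from collections import Counter
from itertools import combinations
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "deep learning model predicts drug target interaction",
    "machine learning screening of anticancer drug candidates",
    "artificial intelligence accelerates clinical trial design",
]

# term frequency across the corpus
vec = CountVectorizer(stop_words="english")
tf = vec.fit_transform(abstracts).sum(axis=0).A1
freq = Counter(dict(zip(vec.get_feature_names_out(), tf)))
print(freq.most_common(5))

# co-occurrence network: words appearing in the same abstract are connected
G = nx.Graph()
analyzer = vec.build_analyzer()
for doc in abstracts:
    for w1, w2 in combinations(set(analyzer(doc)), 2):
        G.add_edge(w1, w2)

centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])
```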