• Title/Summary/Keyword: BIG4

Search Results: 3,612

Time Series and Deep Learning Prediction Study Using Container Throughput at Busan Port (부산항 컨테이너 물동량을 이용한 시계열 및 딥러닝 예측연구)

  • Seung-Pil Lee;Hwan-Seong Kim
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.391-393
    • /
    • 2022
  • In recent years, demand-forecasting technologies based on deep learning and big data have accelerated the smartification of the e-commerce, logistics, and distribution sectors. In particular, ports, which are the centers of global transportation networks and modern intelligent logistics, are responding rapidly to the changes in the global economy and the port environment brought about by the 4th industrial revolution. Port traffic forecasting has an important impact on fields such as new port construction, port expansion, and terminal operation. The purpose of this study is therefore to compare the time series and deep learning analyses commonly used for port traffic prediction and to derive a prediction model suitable for forecasting future container throughput at Busan Port. In addition, external variables related to changes in trade volume were selected by correlation analysis and applied to a multivariate deep learning prediction model. As a result, LSTM showed the lowest error both in the single-variable prediction model using only Busan Port container volume and in the multivariate prediction model using external variables.

  • PDF
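The study contrasts a univariate LSTM input (container volume only) with a multivariate one that adds external trade variables. A minimal numpy sketch of how such supervised windows might be prepared; the window length and the second feature are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def make_windows(series, lookback):
    """Slice a (T, F) series into (X, y) pairs: X holds `lookback`
    past steps, y the next step's first feature (container volume)."""
    X, y = [], []
    for t in range(len(series) - lookback):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback, 0])
    return np.array(X), np.array(y)

# Toy monthly data: column 0 = Busan container volume,
# column 1 = a hypothetical external trade indicator.
data = np.column_stack([np.arange(24, dtype=float),
                        np.arange(24, dtype=float) * 2])

X_uni, y_uni = make_windows(data[:, :1], lookback=6)   # univariate
X_multi, y_multi = make_windows(data, lookback=6)      # multivariate

print(X_uni.shape, X_multi.shape)  # (18, 6, 1) (18, 6, 2)
```

Either tensor can then be fed to an LSTM layer, whose expected input is exactly this (samples, timesteps, features) layout.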

Generative Adversarial Network Model for Generating Yard Stowage Situation in Container Terminal (컨테이너 터미널의 야드 장치 상태 생성을 위한 생성적 적대 신경망 모형)

  • Jae-Young Shin;Yeong-Il Kim;Hyun-Jun Cho
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.383-384
    • /
    • 2022
  • Following the development of technologies such as digital twins, IoT, and AI after the 4th industrial revolution, decision-making problems are increasingly solved through high-dimensional data analysis. This has recently been applied to the port logistics sector, and a number of studies on big data analysis, deep learning prediction, and simulation have been conducted on container terminals to improve port productivity. These high-dimensional analysis techniques generally require a large amount of data. However, the global port environment changed with the COVID-19 pandemic in 2020: data collected before the outbreak no longer reflect the current port environment, while data collected after the outbreak are not yet sufficient for analyses such as deep learning. This study therefore presents a port data augmentation method as one way to address this problem. To this end, we generate yard stowage situations of a container terminal with a generative adversarial network (GAN) model and verify the similarity between real and augmented data through statistical distribution tests.

  • PDF
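The final step described, checking that generated yard states match the real distribution, is commonly done with a two-sample Kolmogorov-Smirnov statistic. A minimal numpy sketch; the feature compared (stack heights) and the sample sizes are illustrative assumptions:

```python
import numpy as np

def ks_statistic(real, synthetic):
    """Maximum absolute gap between the two empirical CDFs."""
    grid = np.sort(np.concatenate([real, synthetic]))
    cdf_real = np.searchsorted(np.sort(real), grid, side="right") / len(real)
    cdf_syn = np.searchsorted(np.sort(synthetic), grid, side="right") / len(synthetic)
    return float(np.max(np.abs(cdf_real - cdf_syn)))

rng = np.random.default_rng(0)
real = rng.normal(50, 10, 500)        # e.g. observed yard stack heights
synthetic = rng.normal(50, 10, 500)   # GAN-generated counterpart

d = ks_statistic(real, synthetic)
print(round(d, 3))  # small values indicate similar distributions
```

A small statistic (below the KS critical value for the sample sizes) supports treating the augmented data as interchangeable with the real data.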

Study of Investment Flow Patterns in New-Product Development (신제품개발시 소요투자비 흐름의 기업특성별 연구)

  • Oh, Nakkyo;Park, Wonkoo
    • Korean small business review
    • /
    • v.40 no.3
    • /
    • pp.1-24
    • /
    • 2018
  • The purpose of this study is to verify, using corporate financial data, that the required investment amount in new product development by start-up companies follows a similar flow pattern over time. In a previous paper, the same authors proposed this flow as the 'New Product Investment Curve (NPIC)'; here the study is extended to various types of companies. The sample consists of accounting data from 462 companies selected among 5,873 Korean companies that completed external audits in 2015. The results are as follows. The average investment period was 3 years for listed companies and 6 years for unlisted companies. The investment payback period was 6 years for listed companies and 17 years for unlisted companies. The payback period of companies supported by a large affiliate (which we call 'greenhouse companies') was 14-15 years, versus 17 years for genuine venture companies. When all companies were divided into four groups by R&D cost and variable cost ratio, the explanatory power of the NPIC was best for the high-R&D, high-variable-cost-ratio group (automobile assembly business). Among the eight investment cost indexes proposed to estimate the investment amount, 'cash 1' ((operating cash flow + fixed assets excluding land & buildings + intangible assets + deferred asset change) / year-end total assets) turned out to be the most effective for estimating investment flow patterns. The conclusion is that the NPIC's explanatory power is somewhat reduced when all companies are estimated together; however, when sample companies are grouped by characteristics such as listed, unlisted, greenhouse, and venture, the proposed NPIC effectively captures the required investment amount pattern.
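The 'cash 1' index quoted above is a ratio of cash-related flows to year-end total assets. A small sketch of the arithmetic; all account values are invented for illustration, not taken from the paper:

```python
def cash1_index(operating_cash_flow, fixed_assets_ex_land_bldg,
                intangible_assets, deferred_asset_change, total_assets):
    """'cash 1' = (operating cash flow + fixed assets excluding land &
    buildings + intangible assets + deferred asset change)
    / year-end total assets."""
    numerator = (operating_cash_flow + fixed_assets_ex_land_bldg
                 + intangible_assets + deferred_asset_change)
    return numerator / total_assets

# Hypothetical year-end figures (units arbitrary, e.g. million KRW)
print(cash1_index(120.0, 300.0, 50.0, 30.0, 1000.0))  # 0.5
```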

Research Trends in Record Management Using Unstructured Text Data Analysis (비정형 텍스트 데이터 분석을 활용한 기록관리 분야 연구동향)

  • Deokyong Hong;Junseok Heo
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.23 no.4
    • /
    • pp.73-89
    • /
    • 2023
  • This study analyzes the frequency of keywords in Korean abstracts, which are unstructured text data in the domestic records management research field, using text mining techniques, and identifies domestic research trends through distance analysis between keywords. To this end, keywords from 1,157 articles, extracted from 77,578 journal records across 7 journals (28 types) retrieved from the institutional statistics of the Korean Citation Index (KCI) by major category (interdisciplinary studies) and middle category (library and information science), were visualized. t-distributed stochastic neighbor embedding (t-SNE) and Scattertext analyses based on Word2vec were performed. The analysis shows, first, that keywords such as "record management" (889 occurrences), "analysis" (888), "archive" (742), "record" (562), and "utilization" (449) were treated as significant topics by researchers. Second, Word2vec produced vector representations of the keywords, whose similarity distances were investigated and visualized using t-SNE and Scattertext. In the visualization, the records management research area divides into two groups: keywords such as "archiving," "national record management," "standardization," "official documents," and "record management systems" occur frequently in the first (past) group, while keywords such as "community," "data," "record information service," "online," and "digital archives" receive substantial focus in the second (current) group.
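The pipeline described, keyword frequency counting followed by vector-similarity comparison, can be sketched in a few lines. The article keyword lists are toy data, and the random vectors stand in for trained Word2vec embeddings:

```python
from collections import Counter
import numpy as np

# Frequency step: count keywords across per-article keyword lists.
articles = [["record management", "archive"],
            ["record management", "digital archives"],
            ["record management", "archive", "data"]]
freq = Counter(k for kws in articles for k in kws)
print(freq.most_common(2))  # [('record management', 3), ('archive', 2)]

# Similarity step: cosine similarity between keyword vectors.
# Random vectors stand in for Word2vec output here.
rng = np.random.default_rng(1)
vecs = {k: rng.normal(size=50) for k in freq}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(vecs["archive"], vecs["data"])
```

In the actual pipeline, the pairwise similarities feed the 2-D t-SNE layout in which the past/current keyword groups become visible.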

An Analysis of the Status of National Research and Development Projects in Records Management (기록관리 분야 국가연구개발사업 현황 분석)

  • Hoemyeong Jeong;Soonhee Kim
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.23 no.4
    • /
    • pp.137-157
    • /
    • 2023
  • The scale of research and development (R&D) investment is increasing to strengthen national competitiveness through technological innovation, and interest in investment efficiency has grown accordingly. In records management, the National Archives of Korea has led national R&D projects since 2008. This study analyzed R&D projects in records management by implementing organization, outcomes, and subjects, targeting 111 contract research projects of the National Archives of Korea from 2008 to 2022. The analysis showed that small and medium-sized enterprises (SMEs) most frequently conducted the research, that the majority of research outcomes were academic publications, and that there were some discrepancies between reported and actual performance. Regarding research subjects, the most common record type was paper or printed documents, and the most common task among the National Archives' functions was establishing an electronic management system. Judging by keyword frequency across the records management process and the research projects, research was conducted mainly on "preservation." Meanwhile, only 10 of the 111 projects (9%) were related to digital transformation, such as utilizing big data or developing intelligent technologies. The effectiveness of R&D projects therefore needs to be improved through follow-up management of results even after a project is completed. In terms of research topics, besides "preservation," studies on "transfer," "classification," "evaluation," and "collection," as well as research responding to digital transformation, are needed.

Comparing Corporate and Public ESG Perceptions Using Text Mining and ChatGPT Analysis: Based on Sustainability Reports and Social Media (텍스트마이닝과 ChatGPT 분석을 활용한 기업과 대중의 ESG 인식 비교: 지속가능경영보고서와 소셜미디어를 기반으로)

  • Jae-Hoon Choi;Sung-Byung Yang;Sang-Hyeak Yoon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.347-373
    • /
    • 2023
  • As the significance of ESG (environmental, social, and governance) management grows for sustainable growth, this study examines and compares ESG trends and interrelationships from both corporate and societal viewpoints. Combining latent Dirichlet allocation (LDA) topic modeling with semantic network analysis (SNA), we analyzed sustainability reports alongside corresponding social media datasets; the social media content was further examined with joint sentiment topic modeling (JST), again enriched by SNA. Complementing the text mining with ChatGPT-assisted analysis, the study identified 25 distinct ESG topics. It highlighted the gap between companies, which aim to avoid risk and build trust, and the general public, whose concerns range from investment options to working conditions. Key terms such as "greenwashing," "serious accidents," and "boycotts" show that many people doubt how companies handle ESG issues. The findings lay the foundation for a plan serving key ESG stakeholders, including businesses, government agencies, customers, and investors, and provide guidance for creating more trustworthy and effective ESG strategies, helping to direct the discussion on ESG effectiveness.
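After an LDA model is fitted, each of the 25 topics is interpreted from its highest-probability terms. A minimal numpy sketch of that read-out step on a toy topic-word matrix; the vocabulary and probabilities are invented, not the study's:

```python
import numpy as np

vocab = np.array(["greenwashing", "emissions", "boycott",
                  "wages", "board", "audit"])
# Toy topic-word probability matrix: 2 topics x 6 terms (rows sum to 1).
topic_word = np.array([[0.40, 0.35, 0.05, 0.05, 0.10, 0.05],
                       [0.05, 0.05, 0.30, 0.35, 0.15, 0.10]])

def top_terms(topic_word, vocab, k=2):
    """Return the k highest-probability terms for each topic row."""
    order = np.argsort(topic_word, axis=1)[:, ::-1][:, :k]
    return [list(vocab[row]) for row in order]

print(top_terms(topic_word, vocab))
# [['greenwashing', 'emissions'], ['wages', 'boycott']]
```

These per-topic term lists are what the semantic network and ChatGPT-assisted labeling steps then work from.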

Factors Affecting Individual Effectiveness in Metaverse Workplaces and Moderating Effect of Metaverse Platforms: A Modified ESP Theory Perspective (메타버스 작업공간의 개인적 효과에 영향 및 메타버스 플랫폼의 조절효과에 대한 연구: 수정된 ESP 이론 관점으로)

  • Jooyeon Jeong;Ohbyung Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.207-228
    • /
    • 2023
  • After COVID-19, organizations widely adopted platforms such as Zoom or developed proprietary online real-time systems for remote work, with recent forays into using the metaverse for meetings and publicity. While ongoing studies investigate how avatar customization, expansive virtual environments, and past virtual experience affect participant satisfaction in virtual reality or metaverse settings, the use of the metaverse as a dedicated workspace is still an evolving area. A notable research gap concerns the factors that influence the performance of the metaverse as a workspace, particularly for non-immersive, work-oriented metaverses: unlike studies of immersive virtual reality that emphasize immersion and presence, most contemporary work-oriented metaverses are non-immersive, so understanding what makes them succeed is crucial. This paper therefore empirically analyzes the factors affecting personal outcomes in non-immersive metaverse workspaces and derives implications from the results. The study adopts the embodied social presence (ESP) model as its theoretical foundation, modifying it into a research model tailored to the non-immersive metaverse workspace. Following interviews with participants working in non-immersive metaverse workplaces (specifically Gather Town and Ifland), a survey was conducted to gather comprehensive insights. The findings validate that the impact of presence on task engagement and task involvement is moderated by the metaverse platform used.

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.71-80
    • /
    • 2023
  • This study aimed to develop an automated stuttering identification and classification method using artificial intelligence, specifically a deep learning identification model based on convolutional neural networks (CNNs) for Korean speakers who stutter. Speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrase level using Google Cloud Speech-to-Text (STT), and labels such as 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used to detect and classify each type of stuttered disfluency. However, only five instances of prolongation were found, so this type was excluded from the classifier model. The accuracy of the CNN classifier was 0.96, and the per-class F1-scores were: 'fluent' 1.00, 'blockage' 0.67, and 'repetition' 0.74. Although the automatic classifier was validated using CNNs to detect stuttered disfluencies, its performance was inadequate, especially for the blockage and prolongation types. Consequently, building a large speech database organized by type of stuttered disfluency was identified as a necessary foundation for improving classification performance.
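Per-class F1 scores like those reported ('fluent' 1.00, 'blockage' 0.67, 'repetition' 0.74) are derived from a confusion matrix. A minimal numpy sketch of the computation; the matrix values are invented, not the study's results:

```python
import numpy as np

classes = ["fluent", "blockage", "repetition"]
# Toy confusion matrix: rows = true class, columns = predicted class.
cm = np.array([[10, 0, 0],
               [1,  4, 1],
               [0,  1, 5]])

def per_class_f1(cm):
    """F1 per class: harmonic mean of column-wise precision
    and row-wise recall."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)
    recall = tp / cm.sum(axis=1)
    return 2 * precision * recall / (precision + recall)

for name, f1 in zip(classes, per_class_f1(cm)):
    print(f"{name}: {f1:.2f}")
```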

Safety Verification Techniques of Privacy Policy Using GPT (GPT를 활용한 개인정보 처리방침 안전성 검증 기법)

  • Hye-Yeon Shim;MinSeo Kweun;DaYoung Yoon;JiYoung Seo;Il-Gu Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.207-216
    • /
    • 2024
  • As big data infrastructure has grown with the 4th Industrial Revolution, personalized services have increased rapidly. As a result, the amount of personal information collected by online services has grown, along with concerns about leakage of users' personal information and privacy infringement. Online service providers publish privacy policies to address these concerns, but because the policies are long and complex, it is difficult for users to identify risky items directly, and the policies are often misused. A method is therefore needed to check automatically whether a privacy policy is safe. However, conventional blacklist- and machine learning-based verification techniques are hard to extend and have low accessibility. In this paper, we propose a safety verification technique for privacy policies using the GPT-3.5 API, a generative artificial intelligence. Classification can be performed even in a new environment, showing that the general public, without expertise, can easily inspect a privacy policy. In the experiment, we measured how accurately the blacklist-based and GPT-based approaches classify safe and unsafe sentences, and the time spent on classification. The proposed technique showed 10.34% higher accuracy on average than the conventional blacklist-based sentence safety verification technique.
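A sketch of how one privacy-policy sentence might be framed for a GPT-style classifier. The prompt wording and the SAFE/UNSAFE labels are assumptions, and the actual API call (e.g. via the openai package against GPT-3.5) is omitted so the sketch stays self-contained:

```python
def build_messages(sentence):
    """Frame one privacy-policy sentence as a chat-completion request
    asking for a binary SAFE/UNSAFE judgment."""
    system = ("You are a privacy-policy auditor. Answer with exactly "
              "one word: SAFE or UNSAFE.")
    user = f'Privacy policy sentence: "{sentence}"'
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

def parse_label(reply):
    """Map the model's free-text reply onto one of the two labels."""
    return "UNSAFE" if "UNSAFE" in reply.upper() else "SAFE"

msgs = build_messages("We may share your data with unspecified third parties.")
print(msgs[1]["content"])
print(parse_label("unsafe"))  # UNSAFE
```

In use, `msgs` would be sent to the chat-completion endpoint and the returned text passed through `parse_label`, which is also where per-sentence classification time would be measured.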

Unsupervised Learning-Based Threat Detection System Using Radio Frequency Signal Characteristic Data (무선 주파수 신호 특성 데이터를 사용한 비지도 학습 기반의 위협 탐지 시스템)

  • Dae-kyeong Park;Woo-jin Lee;Byeong-jin Kim;Jae-yeon Lee
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.147-155
    • /
    • 2024
  • The 4th Industrial Revolution, like earlier revolutions, is bringing great change to humanity; in particular, demand for and use of drones, which combine technologies such as big data, artificial intelligence, and information and communications technology, is increasing. Drones have recently been used to carry out dangerous military operations and missions, as in the Russia-Ukraine war and North Korea's reconnaissance against South Korea, and as their use grows, so do concerns about their safety and security. A variety of research is being conducted on drones, such as detecting wireless communication anomalies and sensor data anomalies, but research on real-time threat detection using radio frequency characteristic data is lacking. In this paper, we therefore collect radio frequency signal characteristic data generated while a drone communicates with its ground control system during a mission in a HITL (hardware-in-the-loop) simulation environment resembling the real one, and determine whether the data represent normal or abnormal signals. We also propose an unsupervised learning-based threat detection system and an optimal threshold that can detect threat signals in real time while a drone performs a mission.
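One common way to realize the unsupervised detection-plus-threshold idea is to fit a statistic on normal RF feature vectors only and flag scores above a percentile cut-off. A minimal numpy sketch using Mahalanobis distance; the feature layout and the 99th-percentile threshold are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy RF feature vectors (e.g. RSSI, SNR, frequency offset) drawn
# from normal drone-to-GCS traffic only.
normal = rng.normal(0.0, 1.0, size=(500, 3))

mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def score(x):
    """Squared Mahalanobis distance from the normal-traffic model."""
    d = x - mean
    return float(d @ cov_inv @ d)

# Threshold: 99th percentile of scores on the normal training data.
threshold = np.percentile([score(x) for x in normal], 99)

anomaly = np.array([8.0, 8.0, 8.0])  # far outside normal traffic
print(score(anomaly) > threshold)    # True
```

Scoring a single vector is a small matrix product, which is what makes this style of detector feasible in real time during a mission.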