• Title/Summary/Keyword: 인공지능산업 (AI industry)

Search Results: 1,062, Processing Time: 0.029 seconds

A Study on Wearable Augmented Reality-Based Experiential Content: Focusing on AR Stone Tower Content (착용형 증강현실 기반 체험형 콘텐츠 연구: AR 돌탑 콘텐츠를 중심으로)

  • Inyoung Choi;Hieyong Jeong;Choonsung Shin
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.114-123
    • /
    • 2024
  • This paper proposes AR stone tower content, experiential content based on wearable augmented reality (AR). Although wearable AR is gaining attention, adoption of the technology is still concentrated in specialized settings such as industrial sites. In contrast, the proposed AR stone tower content is built around the familiar material of the 'stone tower' so that general users can relate to it and easily participate, and it is organized so that users move through a space and find and stack stones with natural hand gestures. The proposed content was implemented in the HoloLens 2 environment and evaluated by general users through a pilot exhibition in a small art museum. Overall satisfaction with the content averaged 3.85, and the appropriateness of the stone tower material was rated very high at 4.15. Users were particularly satisfied with content comprehension and sound, but somewhat less satisfied with object recognition, bodily adaptation, and object control. These evaluations confirm the resonance of and positive response to the material, but also highlight the difficulties average users face in experiencing and interacting with a wearable AR environment.

A Study on the University Education Plan Using ChatGPT for University Students (ChatGPT를 활용한 대학 교육 방안 연구)

  • Hyun-ju Kim;Jinyoung Lee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.71-79
    • /
    • 2024
  • ChatGPT, an interactive artificial intelligence (AI) chatbot developed by OpenAI in the U.S., is gaining popularity with great repercussions around the world. Some academics are concerned that ChatGPT can be used by students for plagiarism, but it is also widely used in positive ways, such as writing marketing or website copy. There is also an opinion that ChatGPT could be a new future for "search," and some analysts say that the focus should be on fostering the technology rather than on excessive regulation. This study analyzed college students' awareness of ChatGPT through a survey of their perceptions of it. A plagiarism inspection system was then paired with ChatGPT to establish an education support model, and on this basis a university education support model using ChatGPT was constructed. The model was organized around text, digital, and art education, with the detailed strategies needed for the era of the 4th industrial revolution arranged beneath each. In addition, after the course instructor determines the allowable range of ChatGPT-generated content according to the learning goals, the ChatGPT detection function provided by the plagiarism inspection system guides students to use ChatGPT within the permitted range. By linking ChatGPT and the plagiarism inspection system in this way, the study expects to prevent situations in which ChatGPT's considerable abilities are abused in education.

Exploring policy measures to prevent lonely deaths in the non-face-to-face "With Corona" era - Focusing on Daegu Metropolitan City's AI and IoT lonely death prevention cases (With 코로나 시대 비대면 고독사 예방정책 방안 모색 - 대구광역시 AI, IOT 고독사 예방 사례를 중심으로)

  • Ha-Yoon Kim;Tai-Hyun Ha
    • Journal of Digital Convergence
    • /
    • v.21 no.3
    • /
    • pp.49-62
    • /
    • 2023
  • Due to social and cultural changes and the growing number of older adults living alone, lonely deaths are steadily increasing, and local governments have begun to define them as a social problem and to establish a legal basis for responding to them. To explore policy measures for preventing lonely deaths, this study examined Daegu Metropolitan City's lonely death prevention policies that use smart digital information technologies (AI, IoT) to deliver services without face-to-face contact. Policies related to lonely deaths fall along two axes: prevention projects and post-discovery support projects. To operate these projects efficiently, non-face-to-face service provision through artificial intelligence and the Internet of Things is being recognized as a new service delivery system, and the importance and necessity of such services are increasing. Multifaceted changes and preparations are needed, such as establishing a system to expand the non-face-to-face industry at the national level. To respond to future national disaster situations, the non-face-to-face smart care system now being extended across welfare policies such as lonely death prevention will have to be activated.

Exploring the Effects of Passive Haptic Factors When Interacting with a Virtual Pet in Immersive VR Environment (몰입형 VR 환경에서 가상 반려동물과 상호작용에 관한 패시브 햅틱 요소의 영향 분석)

  • Donggeun KIM;Dongsik Jo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.125-132
    • /
    • 2024
  • Recently, immersive virtual reality (IVR) technologies have been applied to various services such as education, training, entertainment, industry, healthcare, and remote collaboration. In particular, research on visualizing and interacting with virtual humans is active, and research on virtual pets in IVR is also emerging. For interaction with a virtual pet, as in real-world interaction scenarios, the most important thing is to provide physical contact such as haptics and non-verbal interaction (e.g., gestures). This paper investigates the effects of factors (e.g., shape and texture) of passive haptic feedback delivered by mapping physical props to the virtual pet. Experimental results show significant differences in immersion, co-presence, realism, and friendliness depending on the level of the texture element when interacting with the virtual pet through passive haptic feedback. Additionally, as a main finding from the statistical interaction between the two variables, we found an uncanny valley effect in terms of friendliness. We expect these results to provide guidelines for creating interactive content with virtual pets in immersive VR environments.

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from the unstructured text data that constitutes a large portion of big data. Over the past decades, text mining technologies have been put to practical use in various industries. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific, suitable information difficult to obtain. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training.
We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can meet unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs run by government institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs improvement in accuracy and reliability. The semantics-based word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity.
The product group clustering method could also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, which we expect will further improve the performance of the basic model proposed conceptually in this study.
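The pipeline the abstract describes (embed product names, group names by cosine similarity above a threshold, then sum the group's sales) can be sketched as follows; the vectors, product names, sales figures, and the 0.9 threshold are toy placeholders, not the study's data or settings:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for Word2Vec embeddings of product names (the study
# trained 300-dimensional vectors on Statistics Korea microdata).
embeddings = {
    "smart speaker":   [0.90, 0.10, 0.00],
    "ai speaker":      [0.85, 0.15, 0.05],
    "bluetooth mouse": [0.00, 0.20, 0.95],
}
sales = {"smart speaker": 120.0, "ai speaker": 80.0, "bluetooth mouse": 45.0}

def estimate_market_size(seed, threshold=0.9):
    """Group products whose name vectors are similar to the seed's vector,
    then sum the group's sales as the market-size estimate."""
    seed_vec = embeddings[seed]
    group = [name for name, vec in embeddings.items()
             if cosine(seed_vec, vec) >= threshold]
    return group, sum(sales[name] for name in group)

group, size = estimate_market_size("smart speaker")
```

Raising or lowering `threshold` widens or narrows the product group, which is the adjustability of market-category level that the abstract highlights.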

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To deliver these services, reduced latency and high reliability are critical for real-time applications on top of high data rates. 5G has accordingly paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/㎢. In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, low delay and high reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach, prevent them from penetrating walls, and restrict their use indoors, constraints that are difficult to overcome under existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough to keep it under 1 ms, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. We further assume 5G small cells of 50-250 m in radius and vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes delay.
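To get a feel for the timing constraints, the dwell time of a vehicle in one small cell can be estimated with simple arithmetic from the parameter ranges the abstract states (cell radius 50-250 m, speed 30-200 km/h); this back-of-the-envelope helper is illustrative only and is not the paper's simulation model:

```python
def cell_dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Approximate time (seconds) a vehicle spends in one 5G small cell,
    taking the crossing distance as the cell diameter."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return 2.0 * cell_radius_m / speed_ms

# Extremes of the ranges considered in the abstract:
shortest = cell_dwell_time_s(50, 200)   # fastest car, smallest cell
longest = cell_dwell_time_s(250, 30)    # slowest car, largest cell
```

Even in the shortest case the vehicle spends about 1.8 s per cell, so the sub-millisecond RTD is negligible at this scale; as the abstract notes, the information change cycle and the SDN's processing time are the delay factors that matter.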

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, which makes root causes difficult to identify. Previous studies predicting failures in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since these can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring within servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure can cause failures on other servers, or be triggered by them. In other words, while existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study assumes that failures propagate between servers.
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types were considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure occurred on one piece of equipment, any failure on another piece of equipment within 5 minutes of that time was defined as occurring simultaneously. After configuring sequences for devices that failed at the same time, the 5 devices that most frequently failed together within the configured sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike in the single-server setting, the Hierarchical Attention Network deep learning model structure was used to account for the fact that each server contributes differently to multiple failures. This algorithm improves prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining failure types and selecting the analysis targets. In the first experiment, the same collected data was modeled both as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved the prediction accuracy for the complex-server case by optimizing each server's threshold.
In the first experiment, which assumed a single server and multiple servers in turn, the single-server model predicted that three of the five servers had no failure even though failures actually occurred, whereas the multiple-server model correctly predicted that all five servers had failed. This result supports the hypothesis that servers affect one another: prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
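The core mechanism the abstract describes (weighting each server in proportion to its impact on the failure) is a softmax attention over per-server state vectors. The sketch below shows only that attention step with toy two-dimensional states; in the paper the states would come from per-server LSTMs and the query would be learned, so all values here are illustrative:

```python
from math import exp

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(server_states, query):
    """Score each server's state against the query, normalize the scores
    into attention weights, and return the weighted sum (the 'context'
    vector that a downstream predictor would consume)."""
    scores = [sum(q * s for q, s in zip(query, state))
              for state in server_states]
    weights = softmax(scores)
    dim = len(server_states[0])
    context = [sum(w * state[i] for w, state in zip(weights, server_states))
               for i in range(dim)]
    return weights, context

# Toy hidden states for three servers and a fixed query vector:
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attend(states, query=[1.0, 0.0])
```

Servers whose states align with the query receive larger weights, which is how the model lets higher-impact servers dominate the failure prediction.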

A Study on the Status of Medical Equipment and Radiological Technologists using Big Data for Health Care: Based on Data for 2020-2021 (보건의료 빅데이터를 활용한 의료장비 및 방사선사 인력 현황 연구 : 2020-2021년 자료를 기준으로)

  • Jang, Hyon-Chol
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.5
    • /
    • pp.667-673
    • /
    • 2021
  • As we enter the era of the 4th industrial revolution, the scope of work of radiological technologists is expected to expand further with the innovation and advancement of radiation medical technology. In this study, the current status of medical equipment and radiological technologists was identified to provide basic data for plans to nurture talent in the field of radiation medical technology in the 4th industrial revolution era, as well as for career and employment counseling. Data from the second quarter of 2020 and the second quarter of 2021 were analyzed using health and medical big data. Comparing the status of medical equipment by type in 2021 against 2020, C-Arm X-ray equipment increased by 5.83% to 6,638 units, followed by MRI equipment (1,811 units, up 5.29%), angiography equipment (725 units, up 5.22%), general X-ray equipment (21,557 units, up 3.99%), CT equipment (2,136 units, up 3.03%), and mammography equipment (3,425 units, up 3.00%). The total number of radiological technologists in 2021 was 29,038, an increase of 2.73% over 2020. By region, the increase was highest in Gyeonggi at 5.96%, followed by Gangwon at 5.66% and Chungnam at 3.81%. As the numbers of medical equipment and radiological technologists both grow, universities need to nurture qualified radiological technologists by developing specialized knowledge and practical competency through courses on understanding and applying artificial intelligence and big data in the medical radiation field, and at the association level, active policies are needed to create new jobs and improve employment.
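The year-over-year figures above are simple percent changes; a small helper shows the calculation (the 2020 baseline of 28,266 technologists used here is back-calculated from the abstract's 29,038 and 2.73% figures, so treat it as approximate):

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from an old count to a new count."""
    return (new - old) / old * 100.0

# Approximate 2020 baseline inferred from the reported 2.73% increase
# to 29,038 radiological technologists in 2021:
growth = pct_change(28266, 29038)
```

The same helper reproduces each equipment-growth figure in the abstract when given the corresponding pair of yearly counts.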

The Need and Improvement Direction of New Computer Media Classes in Landscape Architectural Education in University (대학 내 조경전공 교육과정에 있어 새로운 컴퓨터 미디어 수업의 필요와 개선방향)

  • Na, Sungjin
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.1
    • /
    • pp.54-69
    • /
    • 2021
  • In 2020, society's overall lifestyle showed a distinct shift from consumable analog media, such as paper, to digital media, and from wired to wireless media, with the increased penetration of cloud computing. Against these social changes, this study examines whether computer media are being appropriately applied in the field of landscape architecture and gives directions for new computer media classes in landscape architectural education in the 4th Industrial Revolution era. Landscape architecture directly proposes ways of realizing a positive lifestyle and creating living environments, and it is closely connected with social change. However, there is no clear evidence that landscape architectural education is making any visible change, while the digital infrastructure of the 4th Industrial Revolution, such as Artificial Intelligence (AI), Big Data, autonomous vehicles, cloud networks, and the Internet of Things, is changing contemporary society technologically, culturally, and economically. It is therefore necessary to review the current use of computer technology and media in landscape architectural education and to examine alternative directions for the curriculum in the new digital era. First, a basis for discussion was established by studying trends in computational design in modern landscape architecture. Next, the changes in and current status of computer media classes in domestic and overseas landscape architectural education were analyzed based on prior research and curricula. The number and types of computer media classes increased significantly in foreign landscape architecture departments between a 1994 study and the 2020 situation, whereas there were no obvious changes in domestic departments, showing that domestic landscape architectural education is coping passively with the changes of the digital era.
Lastly, based on these discussions, this study examined alternatives for the new curriculum that landscape architecture departments should pursue in the new digital world.

Study on High Sensitivity Metal Oxide Nanoparticle Sensors for HNS Monitoring of Emissions from Marine Industrial Facilities (해양산업시설 배출 HNS 모니터링을 위한 고감도 금속산화물 나노입자 센서에 대한 연구)

  • Changhan Lee;Sangsu An;Yuna Heo;Youngji Cho;Jiho Chang;Sangtae Lee;Sangwoo Oh;Moonjin Lee
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.28 no.spc
    • /
    • pp.30-36
    • /
    • 2022
  • A sensor is needed to continuously and automatically measure changes in HNS concentration at industrial facilities that discharge directly to the sea after water treatment. The basic requirement is the ability to detect ppb levels even at room temperature, so methods for increasing the sensitivity of the existing sensor were studied: increasing the conductivity of a nanoparticle thin film with a conductive carbon-based additive, and increasing ion adsorption on the surface with a catalytic metal. To improve conductivity, carbon black (CB) was selected as the additive for a film of ITO nanoparticles, and the change in sensor performance with additive content was observed. Changes in resistance and response time due to the increased conductivity were observed at a CB content of 5 wt%, and notably, the lower limit of detection was reduced to about 250 ppb in an experiment with organic solvents. In addition, to increase ion adsorption in the liquid, an experiment was conducted on a sample in which a surface catalyst layer was formed by sputtering Au. The response of the sensor increased by more than 20%, and the average lower limit of detection was lowered to 61 ppm. These results confirm that a chemiresistive sensor using metal oxide nanoparticles can detect HNS at several tens of ppb even at room temperature.
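For chemiresistive sensors such as the ITO-nanoparticle device above, "response" is conventionally reported as the relative resistance change on exposure to the analyte; a minimal sketch of that calculation, with placeholder resistance values rather than measurements from the paper:

```python
def response_percent(r_baseline_ohm: float, r_exposed_ohm: float) -> float:
    """Sensor response as relative resistance change |dR|/R0, in percent."""
    return abs(r_exposed_ohm - r_baseline_ohm) / r_baseline_ohm * 100.0

# Hypothetical readings before and during exposure to an HNS analyte:
resp = response_percent(1000.0, 800.0)
```

Under this convention, the paper's reported ">20% response increase" after Au sputtering would mean a correspondingly larger relative resistance change for the same analyte concentration.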