• Title/Summary/Keyword: CLOUD

Search Results: 5,507

Validation of Electronic Foot Function Index in Patients with Foot and Ankle Disease: A Randomized, Prospective Multicenter Study (족부 족관절 질환 환자에서 전자식 족부 기능 지수의 인증: 임의 배정, 전향적, 다기관 연구)

  • Lee, Dong Yeon;Kim, Yu Mi;Lee, Jun Hyung;Kim, Jin;Kim, Ji-Beom;Kim, Bom Soo;Choi, Gi Won;Seo, Sang Gyo;Kim, Jun Beom;Park, Se-Jin;Kim, Yoon-Chung;Choi, Young Rak;Lee, Dong-Oh;Cho, Jae-Ho;Chun, Dong-Il;Kim, Hyong Nyun;Park, Jae-Yong
    • Journal of Korean Foot and Ankle Society
    • /
    • v.23 no.1
    • /
    • pp.24-30
    • /
    • 2019
  • Purpose: To evaluate the efficiency of the electronic foot function index (eFFI) through a prospective, randomized, multi-institutional study. Materials and Methods: The study included 227 patients, aged 20 to 79 years, who visited 15 different institutions for surgery and agreed to participate. The patients were assigned randomly to a paper-based evaluation group (n=113) or a tablet-based evaluation group (n=114). The evaluation was performed on the day of hospital admission; on the second day of surgery, the evaluation method was switched and the evaluation repeated. PADAS 2.0 (https://www.proscore.kr) was used as the electronic evaluation program. Results: There were no differences in age or sex between the two groups. The intraclass correlation coefficient (ICC) for the eFFI was 0.924, showing that the two results were similar. The evaluation time was shorter in the tablet-based group than in the paper-based group (paper vs. tablet, 3.7±3.8 vs. 2.3±1.3 minutes). Thirty-nine patients (17.2%) preferred paper, 131 patients (57.7%) preferred the tablet, and 57 patients (25.1%) found both ways acceptable. Conclusion: The eFFI administered on tablet devices appears to be consistent with the paper-based format. In addition, it required less time, and the patients tended to prefer the tablet-based program. Overall, a tablet- and cloud-based system can be beneficial for clinical studies.
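The agreement statistic reported above can be illustrated with a short sketch: a two-way random-effects, absolute-agreement, single-measure ICC(2,1) computed from paired paper/tablet scores. This is not the study's analysis code, and the demo values are made up.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

    scores: (n_subjects, k_methods) array, e.g. column 0 = paper FFI,
    column 1 = tablet eFFI for the same patients.
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-method means

    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1)
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical paired scores (paper, tablet) for a handful of patients
demo = np.array([[62.0, 60.5], [45.0, 47.0], [78.5, 77.0], [30.0, 31.5]])
print(f"ICC(2,1) = {icc_2_1(demo):.3f}")
```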

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.221-238
    • /
    • 2019
  • Recently, companies and public institutions have been actively introducing chatbot services in the field of customer counseling and response. Introducing a chatbot service not only saves labor costs for companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these chatbot services. Current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning. The advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-oriented chatbots can be inconsistent with what users inherently want, so chatbots need to be addressed in the area of user experience, not just in the area of technology. The Fourth Industrial Revolution highlights the importance of user experience as well as advances in artificial intelligence, big data, cloud, and IoT technologies. The development of IT technology and the growing importance of user experience have provided people with a variety of environments and changed lifestyles. This means that experiences in interactions with people, services (products), and the environment become very important. Therefore, it is time to develop user needs-based services (products) that can provide new experiences and values to people. This study proposes a chatbot development process based on user needs by applying design thinking, a representative methodology in the field of user experience, to chatbot development. The process proposed in this study consists of four steps. The first step, 'Setting up the knowledge domain', establishes the chatbot's area of expertise. The second step, 'Knowledge accumulation and insight identification', accumulates the information corresponding to the configured domain and derives insights. The third step is 'Opportunity development and prototyping', where full-scale development begins. Finally, the 'User feedback' step collects feedback from users on the developed prototype. This creates a 'user needs-based service (product)' that meets the objectives of the process. Beginning with fact gathering through user observation, the process moves through abstraction to derive insights and explore opportunities, and then through concretization to structure the desired information and provide functions that fit the user's mental model, so that the resulting chatbot meets users' needs. In this study, we present an actual construction example for the domestic cosmetics market to confirm the effectiveness of the proposed process. The domestic cosmetics market was chosen because user experience plays a strong role there, so responses from users can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology into chatbot development. It differs from existing chatbot development research in that it focuses on user experience, not technology. It also has practical implications in that it proposes realistic methods that companies or institutions can apply immediately.
In particular, the process proposed in this study can be accessed and utilized by anyone, since 'user needs-based chatbots' can be developed even by non-experts. Because only one domain was examined, further studies are needed. In addition to the cosmetics market, research should be conducted in other fields where user experience is prominent, such as the smartphone and automotive markets. Through this, the proposed process can mature into a general process for developing chatbots centered on user experience rather than technology.

Application of Terrestrial LiDAR for Reconstructing 3D Images of Fault Trench Sites and Web-based Visualization Platform for Large Point Clouds (지상 라이다를 활용한 트렌치 단층 단면 3차원 영상 생성과 웹 기반 대용량 점군 자료 가시화 플랫폼 활용 사례)

  • Lee, Byung Woo;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.54 no.2
    • /
    • pp.177-186
    • /
    • 2021
  • For disaster management and mitigation of earthquakes in the Korean Peninsula, active fault investigation has been conducted for the past five years. In particular, investigation of sediment-covered active faults integrates geomorphological analysis of airborne LiDAR data, surface geological surveys, and geophysical exploration, and unearths subsurface active faults through trench surveys. However, the fault traces revealed by trench surveys are available for investigation only for a limited time before the site is restored to its previous condition. Thus, the geological data describing fault trench sites remain only as qualitative records in research articles and reports. To overcome the limitations imposed by this temporal nature of geological studies, we utilized a terrestrial LiDAR to produce 3D point clouds of the fault trench sites and reconstructed them in a digital space. The terrestrial LiDAR scanning was conducted at two trench sites located near the Yangsan Fault and acquired amplitude and reflectance from the surveyed area, as well as color information, by combining photogrammetry with the LiDAR system. The scanned data were merged to form 3D point clouds with an average geometric error of 0.003 m, which is sufficiently accurate to reconstruct the details of the surveyed trench sites. However, we found that more post-processing of the scanned data would be necessary, because the amplitudes and reflectances of the point clouds varied depending on the scan positions, and the colors of the trench surfaces were captured differently depending on the light exposure available at the time. Such point clouds are very large and can be visualized with only a limited set of software tools, which limits data sharing among researchers. As an alternative, we suggest Potree, an open-source web-based platform, to visualize the point clouds of the trench sites. As a result, we found that terrestrial LiDAR data are practical for increasing the reproducibility of geological field studies and can be made easily accessible to researchers and students in the Earth sciences.
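As a rough illustration of the point-cloud handling described in this entry (not the authors' actual workflow), the sketch below merges co-registered terrestrial LiDAR scans with Open3D, thins them with a voxel grid, and writes a single file that a converter such as PotreeConverter could then prepare for web viewing; the file names and the converter invocation are assumptions.

```python
import open3d as o3d

scan_files = ["scan_position_1.ply", "scan_position_2.ply"]  # hypothetical names

merged = o3d.geometry.PointCloud()
for path in scan_files:
    merged += o3d.io.read_point_cloud(path)  # assumes scans are already co-registered

# Downsample with a 3 mm voxel grid, roughly matching the ~0.003 m geometric
# error reported in the abstract, to keep the web payload manageable.
merged = merged.voxel_down_sample(voxel_size=0.003)

o3d.io.write_point_cloud("trench_merged.ply", merged)
# The merged file could then be converted for Potree, e.g. (flags vary by version):
#   PotreeConverter trench_merged.ply -o ./potree_trench
```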

The Posthuman Queer Body in Ghost in the Shell (1995) (<공각기동대>의 현재성과 포스트휴먼 퀴어 연구)

  • Kim, Soo-Yeon
    • Cross-Cultural Studies
    • /
    • v.40
    • /
    • pp.111-131
    • /
    • 2015
  • An unusual success that engendered loyalty among cult fans in the United States, Mamoru Oshii's 1995 cyberpunk anime Ghost in the Shell (GITS) revolves around a female cyborg assassin named Motoko Kusanagi, a.k.a. "the Major." When the news came out last year that Scarlett Johansson was offered 10 million dollars for the role of the Major in the live-action remake of GITS, frustrated fans accused DreamWorks of "whitewashing" the classic Japanimation and turning it into a PG-13 film. While it would be premature to judge a film yet to be released, it appears timely to revisit the core achievement of Oshii's film, one that is untranslatable into the Hollywood formula. That is, unlike the ultimately heteronormative and humanist sci-fi films produced in Hollywood, such as the Matrix trilogy or Cloud Atlas, GITS defies Hollywoodization by evoking much bafflement in relation to its queer, posthuman characters and settings. This essay homes in on Major Kusanagi's body in order to update prior criticism from the perspectives of posthumanism and queer theory. If the Major's voluptuous cyborg body has been read either as a liberating or as a commodified feminine body, the latest critical work in posthumanism and queer theory causes us to move beyond the moralistic binaries of human/non-human and male/female. This deconstruction of binaries leads to a radical rethinking of "reality" and "identity" in an image-saturated, hypermediated age. Viewed from this perspective, Major Kusanagi's body can be better understood less as a reflection of "real" women than as an embodiment of our anxieties about the loss of self and interiority in an SNS-dominated society. As many posthumanist and queer critics have warned, queer and posthuman components are too often used to reinforce the human. I argue that the Major's hybrid body is neither a mere amalgam of human and machine nor a superficial postmodern blurring of boundaries. Rather, the compelling combination of individuality, animality, and technology embodied in the Major redefines the human as always, already posthuman. This ethical act of revision, shifting the focus from oppressive humanism to queer coexistence, evinces the lasting power of GITS.

Analysis of News Agenda Using Text mining and Semantic Network Analysis: Focused on COVID-19 Emotions (텍스트 마이닝과 의미 네트워크 분석을 활용한 뉴스 의제 분석: 코로나 19 관련 감정을 중심으로)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.47-64
    • /
    • 2021
  • The global spread of COVID-19 has not only affected many parts of our daily lives but has also had a huge impact on many areas, including the economy and society. As the number of confirmed cases and deaths increases, medical staff and the public are said to be experiencing psychological problems such as anxiety, depression, and stress. The collective tragedy that accompanies the epidemic raises fear and anxiety, which is known to cause enormous disruptions to the behavior and psychological well-being of many. Long-term negative emotions can reduce people's immunity and destroy their physical balance, so it is essential to understand the public's psychological state during COVID-19. This study suggests a method of monitoring media news in the current situation, which requires striving not only for physical but also for psychological quarantine during the prolonged COVID-19 pandemic. Moreover, it presents how a relatively simple method of semantic network analysis can be applied to such cases. The aim of this study is to assist health policymakers in fast and complex decision-making processes. News plays a major role in setting the policy agenda. Among the various major media, news headlines are considered important in the field of communication science as a summary of the core content that the media want to convey to the audiences who read them. The news data used in this study were easily collected using "Bigkinds", a service created by integrating big data technology. From the collected news data, keywords were extracted through text mining, and the relationships between words were visualized through semantic network analysis of the keywords. Using the KrKwic program, a Korean semantic network analysis tool, text mining was performed and word frequencies were calculated to easily identify keywords. The frequency of words appearing in the keywords of articles related to COVID-19 emotions was checked and visualized in a word cloud; 'China', 'anxiety', 'situation', 'mind', 'social', and 'health' appeared frequently in relation to COVID-19 emotions. In addition, UCINET, a specialized social network analysis program, was used for connection (degree) centrality and cluster analyses, and the network was visualized as a graph using NetDraw. The centrality analysis showed that the most central keywords in the keyword network were 'psychology', 'COVID-19', 'blue', and 'anxiety'. The co-occurrence frequency network of the keywords appearing in the news headlines was visualized as a graph, with the thickness of each line proportional to the co-occurrence frequency: if two words frequently appear together, they are connected by a thick line. The 'COVID-blue' pair is drawn with the boldest line, and the 'COVID-emotion' and 'COVID-anxiety' pairs with relatively thick lines. 'Blue' in the COVID-19 context refers to depression, confirming that COVID-19 and depression are keywords that deserve attention now. The research methodology used in this study has the advantage of quickly measuring social phenomena and changes while reducing costs. By analyzing news headlines, we were able to identify people's feelings and perceptions on issues related to COVID-19 depression and to identify the main agendas to be analyzed by deriving important keywords.
By presenting and visualizing the topics and important keywords related to COVID-19 emotions at a glance, this approach can provide medical policymakers with a variety of perspectives when identifying and researching the phenomenon. It is expected to serve as basic data for developing support, treatment, and services for psychological quarantine issues related to COVID-19.
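As a generic illustration of the co-occurrence analysis described in this entry (not the KrKwic/UCINET workflow actually used), the sketch below builds a keyword co-occurrence network from tokenized headlines and ranks keywords by degree centrality with networkx; the headline tokens are made up.

```python
from itertools import combinations
from collections import Counter
import networkx as nx

headlines = [  # hypothetical, pre-tokenized headlines
    ["covid", "blue", "psychology"],
    ["covid", "anxiety", "health"],
    ["covid", "blue", "anxiety"],
]

# Count how often each keyword pair appears in the same headline.
pair_counts = Counter()
for tokens in headlines:
    for a, b in combinations(sorted(set(tokens)), 2):
        pair_counts[(a, b)] += 1

G = nx.Graph()
for (a, b), weight in pair_counts.items():
    G.add_edge(a, b, weight=weight)  # edge thickness ~ co-occurrence frequency

# Degree centrality as a simple stand-in for the study's "connection centrality".
for word, score in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{word}: {score:.2f}")
```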

Changes in Meteorological Variables by SO2 Emissions over East Asia using a Linux-based U.K. Earth System Model (리눅스 기반 U.K. 지구시스템모형을 이용한 동아시아 SO2 배출에 따른 기상장 변화)

  • Youn, Daeok;Song, Hyunggyu;Lee, Johan
    • Journal of the Korean earth science society
    • /
    • v.43 no.1
    • /
    • pp.60-76
    • /
    • 2022
  • This study presents a full software setup and subsequent test execution times of the United Kingdom Earth System Model (UKESM) on a Linux cluster, and then compares model results from control and experimental simulations of the UKESM against various observations. Despite its low resolution, the latest version of the UKESM can simulate tropospheric chemistry-aerosol processes and stratospheric ozone chemistry using the United Kingdom Chemistry and Aerosol (UKCA) module. The UKESM with UKCA (UKESM-UKCA) can treat atmospheric chemistry-aerosol-cloud-radiation interactions throughout the whole atmosphere. In addition to the control UKESM run with the default CMIP5 SO2 emission dataset, an experimental run was conducted to evaluate aerosol effects on meteorology by changing the atmospheric SO2 loading with the newest REAS data over East Asia. The simulation period of the two model runs was 28 years, from January 1, 1982 to December 31, 2009. Spatial distributions of monthly mean aerosol optical depth, 2-m temperature, and precipitation intensity from the model simulations and observations over East Asia were compared. The spatial patterns of surface temperature and precipitation from the two model simulations were generally in reasonable agreement with the observations. The simulated ozone concentrations and total column ozone also agreed reasonably well with the ERA5 reanalysis. Comparisons of spatial patterns and linear trends led to the conclusion that the model simulation with the newest SO2 emission dataset over East Asia showed better temporal changes in temperature and precipitation over the western Pacific and inland China. Our results are in line with previous findings that SO2 emissions over East Asia are an important factor for the atmospheric environment and climate change. This study confirms that the UKESM can be installed and operated in a Linux cluster-computing environment. Thus, researchers in various fields would have better access to the UKESM, which can handle the carbon cycle and atmospheric environment on Earth with interactions between the atmosphere, ocean, sea ice, and land.
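As a generic illustration of the kind of run-versus-run comparison described above (not the authors' evaluation code), the sketch below computes monthly-mean 2-m temperature differences between a control and an experimental run over East Asia with xarray; the file names, variable name, coordinate names, and region bounds are assumptions.

```python
import xarray as xr

ctrl = xr.open_dataset("ukesm_control_t2m.nc")   # hypothetical control-run output
expt = xr.open_dataset("ukesm_reas_so2_t2m.nc")  # hypothetical REAS-SO2 experiment output

def east_asia_monthly_mean(ds, var="t2m"):
    """Subset an East Asia box and build a monthly climatology.

    Assumes coordinates named lat/lon with ascending latitude.
    """
    region = ds[var].sel(lat=slice(20, 50), lon=slice(100, 150))
    return region.groupby("time.month").mean("time")

# Experiment-minus-control monthly climatology of 2-m temperature
diff = east_asia_monthly_mean(expt) - east_asia_monthly_mean(ctrl)
print(diff.mean(dim=("lat", "lon")).values)  # area-mean change for each month
```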

Strategies for Increasing the Value and Sustainability of Archaeological Education in the Post-COVID-19 Era (포스트 코로나 시대 고고유산 교육의 가치와 지속가능성을 위한 전략)

  • KIM, Eunkyung
    • Korean Journal of Heritage: History & Science
    • /
    • v.55 no.2
    • /
    • pp.82-100
    • /
    • 2022
  • With the crisis of the COVID-19 pandemic and the era of the 4th industrial revolution, archaeological heritage education has entered a new phase. This article responds to the trends of the post-COVID-19 era, seeking ways to develop archaeological heritage education and the sustainable strategies needed in the era of the 4th industrial revolution. Archaeological heritage education programs required in the era of the 4th industrial revolution must cultivate creative talent, problem-solving ability, and self-efficacy. They should also draw attention to archaeological heritage maker education. Such maker education should be delivered based on constructivism and designed by setting specific learning goals that consider the characteristics of various age groups. Moreover, various ICT-based contents applying VR, AR, cloud, and drone imaging technologies should be developed and expanded, and, above all, 'ontact' digital education (real-time virtual learning) should seek ways to revitalize communities capable of interactive communication in non-face-to-face situations. The development of such archaeological heritage content needs to incorporate AI functions that consider learners' interests, learning abilities, and learning purposes, while producing various convergent contents from the standpoint of a "cultural collage." Online archaeological heritage education should be delivered together with prior or supplementary learning that considers learners' motivation, and with field learning that provides access to the real thing afterwards. Ultimately, archaeological ontact education will be delivered using cutting-edge technologies that reflect current trends. In conjunction with this, continuous efforts are needed toward constructive learning that enables discovery and question-exploration.

An Implementation of OTB Extension to Produce TOA and TOC Reflectance of LANDSAT-8 OLI Images and Its Product Verification Using RadCalNet RVUS Data (Landsat-8 OLI 영상정보의 대기 및 지표반사도 산출을 위한 OTB Extension 구현과 RadCalNet RVUS 자료를 이용한 성과검증)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.449-461
    • /
    • 2021
  • Analysis Ready Data (ARD) for optical satellite images are pre-processed products generated by applying the spectral characteristics and viewing parameters of each sensor. Atmospheric correction is one of the fundamental and complicated topics involved, and it helps to produce Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance from multi-spectral image sets. Most remote sensing software provides algorithms or processing schemes dedicated to these corrections for the Landsat-8 OLI sensor. Furthermore, Google Earth Engine (GEE) provides direct access to Landsat reflectance products, USGS-based ARD (USGS-ARD), in a cloud environment. We implemented an atmospheric correction extension for the Orfeo ToolBox (OTB), an open-source remote sensing software package for manipulating and analyzing high-resolution satellite images. This is the first such tool, since OTB has not provided calibration modules for any Landsat sensor. Using this extension, we conducted absolute atmospheric correction on Landsat-8 OLI images of Railroad Valley, United States (RVUS) and validated the reflectance products using the RVUS reflectance data sets in the RadCalNet portal. The results showed that the reflectance products from the OTB extension for Landsat differed by less than 5% from the RadCalNet RVUS data. In addition, we performed a comparative analysis with reflectance products obtained from other open-source tools, such as the QGIS Semi-Automatic Classification Plugin and SAGA, in addition to the USGS-ARD products. The reflectance products from the OTB extension showed high consistency with those of USGS-ARD, within an acceptable level across the RadCalNet RVUS measurement range, compared with those of the other two open-source tools. In this study, the atmospheric correction processor in the OTB extension was verified, demonstrating its potential applicability to other satellite sensors such as the Compact Advanced Satellite (CAS)-500 or new optical satellites.
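The internals of the OTB extension are not reproduced here, but as background, the standard USGS rescaling for Landsat-8 OLI TOA reflectance (rho' = M_rho * Q_cal + A_rho, followed by division by the sine of the sun elevation) can be sketched as follows; the DN values and MTL coefficients in the demo are illustrative.

```python
import numpy as np

def toa_reflectance(dn: np.ndarray,
                    reflectance_mult: float,   # REFLECTANCE_MULT_BAND_x from the MTL file
                    reflectance_add: float,    # REFLECTANCE_ADD_BAND_x from the MTL file
                    sun_elevation_deg: float) -> np.ndarray:
    """TOA reflectance for one OLI band from digital numbers (DN)."""
    rho_prime = reflectance_mult * dn.astype(np.float64) + reflectance_add
    # Sun-angle correction using the scene-center sun elevation
    return rho_prime / np.sin(np.deg2rad(sun_elevation_deg))

# Hypothetical DN values and typical rescaling coefficients for one band
dn_band4 = np.array([[9500, 10220], [11840, 9975]], dtype=np.uint16)
rho = toa_reflectance(dn_band4, reflectance_mult=2.0e-5,
                      reflectance_add=-0.1, sun_elevation_deg=48.7)
print(rho)
```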

Utilizing the Idle Railway Sites: A Proposal for the Location of Solar Power Plants Using Cluster Analysis (철도 유휴부지 활용방안: 군집분석을 활용한 태양광발전 입지 제안)

  • Eunkyung Kang;Seonuk Yang;Jiyoon Kwon;Sung-Byung Yang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.79-105
    • /
    • 2023
  • Due to unprecedented extreme weather events driven by global warming and climate change, many parts of the world are suffering severe damage, and economic losses are also snowballing. To address these problems, the Paris Agreement was signed in 2016, and an intergovernmental consultative body was formed to keep the rise in the Earth's average temperature below 1.5℃. Korea also declared 'Carbon Neutrality by 2050' to prevent climate catastrophe. In particular, the temperature increase caused by greenhouse gas emissions hurts the environment and society as a whole, as well as the export-dependent economy of Korea. In addition, as the diversification of transportation modes accelerates, changes in the means people choose are also increasing. As the development paradigm of the low-growth era shifts to urban regeneration, interest in idle railway sites is rising due to reduced demand for routes, alignment improvements, and the relocation of urban railways. Meanwhile, it is possible to partially achieve the solar power generation goal of 'Renewable Energy 3020' by utilizing already-developed but idle railway sites, which are largely free from the environmental damage and resident-acceptance issues that surround power plant siting; however, actual use of and plans for such solar power facilities are still lacking. Therefore, in this study, using big data provided by the Korea National Railway and the Renewable Energy Cloud Platform, we develop an algorithm to discover and analyze suitable idle sites where solar power generation facilities can be installed and identify potentially applicable areas considering the conditions desired by users. By searching for and deriving these idle but suitable sites, we intend to devise a plan that saves enormous facility or expansion costs in the early stages of development. This study uses various cluster analyses to develop an optimal algorithm that can derive solar power plant locations on idle railway sites and, as a result, suggests 202 'actively recommended areas'. These results should help decision-makers make rational decisions that simultaneously consider the economy and the environment.
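As a generic illustration of cluster-based siting (not the authors' algorithm or data), the sketch below standardizes a few hypothetical site features and groups candidate idle sites with k-means using scikit-learn; the feature names and values are assumptions.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical site features: usable area (m2), annual solar irradiance (kWh/m2),
# distance to the nearest grid connection (km), terrain slope (degrees)
sites = pd.DataFrame({
    "area_m2":      [1200, 5400, 800, 7600, 3100],
    "irradiance":   [1350, 1420, 1290, 1460, 1380],
    "grid_dist_km": [0.8, 2.5, 0.3, 4.1, 1.2],
    "slope_deg":    [2.0, 1.0, 6.5, 0.5, 3.0],
})

# Standardize features so no single unit dominates the distance metric.
X = StandardScaler().fit_transform(sites)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
sites["cluster"] = kmeans.labels_

# Inspect cluster means to decide which group to recommend for solar siting.
print(sites.groupby("cluster").mean())
```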

Deep Learning OCR based document processing platform and its application in financial domain (금융 특화 딥러닝 광학문자인식 기반 문서 처리 플랫폼 구축 및 금융권 내 활용)

  • Dongyoung Kim;Doohyung Kim;Myungsung Kwak;Hyunsoo Son;Dongwon Sohn;Mingi Lim;Yeji Shin;Hyeonjung Lee;Chandong Park;Mihyang Kim;Dongwon Choi
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.143-174
    • /
    • 2023
  • With the development of deep learning technologies, Artificial Intelligence powered Optical Character Recognition (AI-OCR) has evolved to accurately read multiple languages from various forms of images. For the financial industry, where a large number of diverse documents are processed manually, the potential for using AI-OCR is great. In this study, we present the configuration and design of an AI-OCR modality for use in the financial industry and discuss the platform construction with application cases. Since the use of financial domain data is prohibited under the Personal Information Protection Act, we developed a deep learning-based data generation approach and used it to train the AI-OCR models. The AI-OCR models are trained for image preprocessing, text recognition, and language processing, and are configured as a microservice-architected platform to process a broad variety of documents. We demonstrated the AI-OCR platform by applying it to the financial domain tasks of document sorting, document verification, and typing assistance. The demonstrations confirm improved work efficiency and convenience.
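As a conceptual sketch only (not the platform's actual code), the snippet below chains the three stages named in this entry, image preprocessing, text recognition, and language post-processing, in the way independent services would be composed in a microservice pipeline; all helper names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Document:
    image_bytes: bytes
    text: str = ""

def preprocess(doc: Document) -> Document:
    # Placeholder for image cleanup, e.g. deskewing, denoising, binarization
    return doc

def recognize(doc: Document) -> Document:
    # Placeholder for a trained text-detection/recognition model call
    doc.text = "<recognized text>"
    return doc

def postprocess(doc: Document) -> Document:
    # Placeholder for language-model correction and field extraction
    # used by downstream sorting/verification services
    doc.text = doc.text.strip()
    return doc

def run_pipeline(image_bytes: bytes) -> str:
    doc = Document(image_bytes=image_bytes)
    for stage in (preprocess, recognize, postprocess):
        doc = stage(doc)
    return doc.text

print(run_pipeline(b"fake-scan"))
```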