A Convergent and Combined Activation Plan for Exercise Rehabilitation in the Era of the Fourth Industrial Revolution (4차 산업혁명시대에 운동재활분야의 융·복합적 활성화 방안)
Journal of Korea Entertainment Industry Association, v.14 no.8, pp.407-426, 2020
The purpose of this study was to conduct a convergent and combined analysis of the sport industry and exercise rehabilitation in the New Normal era brought on by the Fourth Industrial Revolution, and to devise a comprehensive plan for future activation. To this end, a literature review was performed, focusing on the environment of the sport industry in the New Normal era and on a convergent and combined analysis of the sport industry, yielding the following activation plan for exercise rehabilitation. First, a strategy is needed to promote exercise rehabilitation in convergent and combined ways at the sport-industry level: developing a convergent exercise rehabilitation-ICT model as well as an exercise rehabilitation-tourism-ICT model through collaboration among ministries, including those responsible for health and sports. Second, a shift toward convergent thinking is needed, along with extended and reinforced educational competitiveness in exercise rehabilitation; the education and training systems should be refined to strengthen the ICT competence of exercise rehabilitation and related majors, and convergent startup education should be provided. Third, diverse research and development should apply practical, convergent skills from the industrial field to exercise rehabilitation and related areas, with efforts to overcome the risks of the New Normal era and to support startups equipped with convergent exercise rehabilitation skills. Fourth, mid- and long-term clusters should be created in which exercise rehabilitation and related businesses can accumulate.
This means building an industrial hub and complex for exercise rehabilitation, and it requires creating an R&D-based cluster through industry-academia-government collaboration, maximizing synergy with local infrastructure, and realizing a self-sustaining profit-generating structure.
Modern scientists are trying to find the basic unit of order, fractal geometry, in the complex systems of the universe. Fractal is a term most often used in mathematics and physics, and it is appropriate as a principle to explain why some models of ultimate reality are represented as multifaceted. Fractals are already widely used in computer graphics and as a commercial principle in the world of science. In this paper, using observations from fractal geometry, I present the embodiment of ultimate reality as understood in Daesoon Thought. Various models of ultimate reality exist in Daesoon Thought, such as Dao (道, the Way), Sangje (上帝, the Supreme God), Sinmyeong (神明, gods), Mugeuk (無極, limitlessness), Taegeuk (太極, the Great Ultimate), and Cheonji (天地, heaven and earth), and these concepts are mutually interrelated. In other words, by revealing that ultimate reality is embodied within fractal geometry, it can be shown that the concordance and transformation of various models of ultimate reality are supported by modern science. However, when the major religions of the world were divided along lines of personality (personal gods) and non-personality (impersonal deities), most religions came to assume that ultimate reality was either transcendental or personal, and they could not postulate a relationship between God and humanity as Yin-Yang (陰陽) fractals (holons). In addition, religions that assume ultimate reality to be an intrinsic and impersonal being differ somewhat in their degree of holon realization, that is, the restitution of all parts and the whole. Daesoon Thought states most directly that gods (deities) and human beings are in a relationship of Yin-Yang fractals: in essence, "deities are Yin, and humanity is Yang," and furthermore, "human beings are divine beings."
Additionally, in Daesoon Thought these models of ultimate reality are presented through various concepts from various viewpoints, and they are revealed as mutually interrelated. Such a view of the universe, in which holarchy becomes the model, is a key idea within perennial philosophy. According to a universalized view of religious phenomena, perennial philosophy was adopted by the world's great spiritual teachers, thinkers, philosophers, and scientists. From this viewpoint, when ultimate reality coincides, human beings and God are no longer different. In other words, the veracity of the theory of ultimate reality that appears in Daesoon Thought can find support in both modern science and perennial philosophy.
DNA barcoding without an assessment of reliability and validity causes taxonomic errors in species identification, which can disrupt conservation efforts and the aquaculture industry. Although DNA barcoding facilitates molecular identification and phylogenetic analysis of species, its applicability to the clariid catfish lineage remains uncertain. In this study, DNA barcoding was developed and validated for clariid catfish. A total of 2,970 barcode sequences from the mitochondrial cytochrome c oxidase I (COI) and cytochrome b (Cytb) genes and D-loop sequences were analyzed for 37 clariid catfish species. The highest intraspecific nearest-neighbor distances were 85.47%, 98.03%, and 89.10% for COI, Cytb, and D-loop sequences, respectively. This suggests that the Cytb gene is the most appropriate for identifying clariid catfish and can serve as a standard region for DNA barcoding. A positive barcoding gap between interspecific and intraspecific sequence divergence was observed in the Cytb dataset but not in the COI and D-loop datasets. Intraspecific variation was typically less than 4.4%, whereas interspecific variation was generally more than 66.9%. However, a species complex was detected in walking catfish, and significant intraspecific sequence divergence was observed in North African catfish. These findings suggest the need to develop a DNA barcoding system for classifying clariid catfish properly and to validate its efficacy for a wider range of clariid catfish. With an enriched database of multiple sequences from a target species and its genus, species identification can be more accurate and biodiversity assessment can be facilitated.
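The barcoding gap described above can be sketched as a simple computation: a positive gap exists when the smallest interspecific distance exceeds the largest intraspecific distance. This is a minimal illustration using p-distances on toy aligned fragments (the sequences and species below are hypothetical, not from the study's dataset).

```python
from itertools import combinations

def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def barcoding_gap(records):
    """records: list of (species, aligned_sequence) pairs.
    Returns (max intraspecific, min interspecific) p-distances;
    the gap is positive when min interspecific > max intraspecific."""
    intra, inter = [], []
    for (sp1, s1), (sp2, s2) in combinations(records, 2):
        (intra if sp1 == sp2 else inter).append(p_distance(s1, s2))
    return max(intra), min(inter)

# Toy aligned fragments (illustrative only)
records = [
    ("C. batrachus", "ATGGCAACCC"),
    ("C. batrachus", "ATGGCAACCT"),   # one site differs within species
    ("C. gariepinus", "ATGACTGCCC"),  # several sites differ between species
]
max_intra, min_inter = barcoding_gap(records)
print(max_intra, min_inter, min_inter > max_intra)
```

In practice the same comparison would be run on the full set of COI, Cytb, and D-loop alignments per marker, which is how a marker such as Cytb can show a positive gap while others do not.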
In line with the government's policy of actively promoting intelligent administrative services through data utilization, the disaster and safety management field is likewise pursuing data-driven policies, building systems to respond efficiently to new and complex disasters and to establish scientific, systematic safety policies. However, it is difficult to grasp the on-site situation quickly and accurately when a disaster occurs, and merely displaying vast amounts of data still falls short of providing the information needed for situation judgment and response. This paper focuses on deriving specific needs for making disaster situation management more intelligent and efficient through intelligent information technology. Through individual interviews with workers at the Central Disaster and Safety Status Control Center, we investigated the scope of disaster situation management work and the main functions and usability of the geographic information system (GIS)-based integrated situation management system as used by practitioners. In addition, the data built into the system were reclassified by purpose and characteristics to check the status of data in the GIS-based integrated situation management system. To derive these needs, three strategies were established: quickly and accurately identifying on-site situations, making data-based situation judgments, and supporting efficient situation management tasks. Implementation tasks were then defined, and task priorities were determined according to importance through analytic hierarchy process (AHP) analysis.
As a result, 24 implementation tasks were derived. The analysis shows that, to make situation management efficient, intelligent information technology is needed for collecting, analyzing, and managing video and sensor data, and for tasks that are time-consuming or error-prone when performed by humans, namely collecting situation-related data and reporting. We conclude that technology development should begin with the strategies that received high importance scores: quickly and accurately identifying on-site situations and supporting efficient situation management work.
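The AHP prioritization step above reduces, at its core, to deriving priority weights from a pairwise comparison matrix. A minimal sketch using the row geometric-mean approximation (the comparison values below are illustrative, not the study's actual judgments among its three strategies):

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison
    matrix using the row geometric-mean method."""
    n = len(M)
    gms = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical comparisons among the three strategies:
# S1 = rapid/accurate on-site identification, S2 = data-based judgment,
# S3 = efficient task support (reciprocal matrix, values illustrative).
M = [
    [1,     3,   2],
    [1 / 3, 1,   1 / 2],
    [1 / 2, 2,   1],
]
w = ahp_weights(M)
print([round(x, 3) for x in w])
```

A full AHP application would also compute the consistency ratio of each matrix before accepting its weights; the geometric-mean method is a common closed-form stand-in for the principal eigenvector.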
While the government policy of fully adopting BIM in the construction sector is being implemented, the construction and utilization of landscape BIM models face challenges such as limitations in BIM authoring tools, difficulties in modeling natural materials, and a shortage of BIM content, including libraries. In particular, plants, the fundamental design elements of landscape architecture, must be included in BIM models, yet they are often omitted during modeling, or the necessary information is not included, which compromises the quality of the BIM data. This study aimed to contribute to the construction and utilization of landscape BIM models by developing a plant library that complies with BIM standards and is applicable to the landscape industry. The plant library of trees and shrubs was developed in Revit by modeling 3D shapes and collecting attribute items. The geometric information was simplified to express the unique characteristics of each plant species at the LOD200, LOD300, and LOD350 levels. The attribute information includes properties for plant species identification, such as species name, specifications, and quantity estimation, as well as ecological attributes and environmental performance information, totaling 24 items. File names were assigned to reveal the hierarchy of landscape objects, with the object name classifying the plant itself. The library's usability was examined by building a landscape BIM model of an apartment complex. The result showed that the plant library facilitated the construction of the landscape BIM model, and that the library operated properly in basic uses of the BIM model such as 2D documentation, quantity takeoff, and design review.
However, the library lacked ground cover, and attributes such as the environmental performance of plants were limited because databases for some materials have not yet been established. Further efforts are needed to develop BIM modeling tools, techniques, and databases for natural materials. Moreover, entities and systems responsible for creating, managing, distributing, and disseminating BIM libraries must be established.
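The idea of a library item carrying both hierarchical file naming and structured attributes can be sketched as follows. The attribute fields and the naming pattern here are hypothetical stand-ins, not the study's actual 24-item schema or its naming convention:

```python
from dataclasses import dataclass

@dataclass
class PlantLibraryItem:
    """Hypothetical subset of a plant library's attribute items."""
    species_name: str        # identification attribute
    scientific_name: str
    height_m: float          # specification used for quantity takeoff
    crown_width_m: float
    lod: int                 # geometric detail level: 200, 300, or 350

def library_file_name(category: str, item: PlantLibraryItem) -> str:
    """Encode the landscape-object hierarchy in the file name:
    discipline _ category _ species _ LOD (scheme is illustrative)."""
    return f"LND_{category}_{item.species_name}_LOD{item.lod}.rfa"

pine = PlantLibraryItem("PinusDensiflora", "Pinus densiflora",
                        height_m=3.5, crown_width_m=1.8, lod=300)
print(library_file_name("Tree", pine))
```

Keeping the hierarchy in the name lets schedules and quantity takeoffs group objects by discipline and category without opening each family.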
Pollinators are organisms that carry out the pollination of plants and include Hymenoptera, Lepidoptera, Diptera, and Coleoptera. Among them, bees not only pollinate plants but also improve urban green spaces damaged by land-use changes, providing habitat and food for birds and insects. Today, however, the number of pollinating plants is decreasing due to issues such as early flowering caused by climate change, fragmentation of green spaces due to urbanization, and pesticide use, which in turn leads to a decline in bee populations. That decline translates directly into problems such as reduced urban biodiversity and decreased food production. Urban beekeeping has been proposed as a strategy to address the decline of bee populations. However, urban beekeeping strategies are often proposed without considering the complex structure of the socio-ecological system formed by bees' foraging and pollination activities, and they are therefore unsustainable. This study therefore aims to structurally analyze the socio-ecological system of honeybees, which are pollinators, using systems thinking, and to propose a green space planning strategy to revitalize urban beekeeping. First, previous studies on the social and ecological system of bees in cities were collected and reviewed to establish the system boundary and derive the main variables for a causal loop diagram. Second, the ecological structure of bees' foraging and pollination activities and the structure of bees' ecological system in the city were analyzed, as was the socio-ecological structure of urban beekeeping, by creating individual causal loop diagrams. Finally, the socio-ecological system structure of honeybees was analyzed from a holistic perspective through an integrated causal loop diagram.
Citizen participation programs, local government investment, and the creation of urban parks and green spaces in idle spaces were suggested as green space planning strategies to revitalize urban beekeeping. This study differs from previous work in that the ecological structure of bees and the social structure of urban beekeeping were analyzed from a holistic perspective using systems thinking, yielding strategies, policy recommendations, and implications for introducing sustainable urban beekeeping.
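A causal loop diagram of the kind described above can be represented as a signed digraph: each causal link carries a polarity, and a feedback loop is reinforcing when the product of its edge polarities is positive and balancing when it is negative. The variables and links below are illustrative stand-ins, not the study's actual diagram:

```python
# Edges carry +1 (change in the same direction) or -1 (opposite).
edges = {
    ("green_space", "forage_plants"): +1,
    ("forage_plants", "bee_population"): +1,
    ("bee_population", "pollination"): +1,
    ("pollination", "forage_plants"): +1,   # closes an ecological loop
    ("bee_population", "citizen_interest"): +1,
    ("citizen_interest", "green_space"): +1,
}

def loop_polarity(cycle):
    """cycle: list of variables [a, b, c] meaning a -> b -> c -> a.
    Returns 'reinforcing' or 'balancing' from the polarity product."""
    product = 1
    for i, src in enumerate(cycle):
        dst = cycle[(i + 1) % len(cycle)]
        product *= edges[(src, dst)]
    return "reinforcing" if product > 0 else "balancing"

print(loop_polarity(["forage_plants", "bee_population", "pollination"]))
```

Classifying every loop in the integrated diagram this way is what lets a planner see which intervention points (e.g. green space investment) strengthen reinforcing loops that support bee populations.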
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in the formation of anastomotic neointimal fibrous hyperplasia (ANFH) and in graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of ANFH in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas cannot expand across nodes when the stored data must be distributed to various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
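The log collector's routing decision described above can be sketched as follows. The in-memory lists stand in for the MySQL and MongoDB modules, and the record fields (`type`, `realtime`) are illustrative assumptions, not the system's actual log schema:

```python
# In-memory stand-ins for the MySQL (real-time) and MongoDB
# (aggregated, schema-free) stores of the proposed system.
mysql_store, mongo_store = [], []

def collect(record: dict) -> None:
    """Classify a log record and route it: records flagged for
    real-time analysis go to the relational store, the rest to
    the document store."""
    if record.get("realtime", False):
        mysql_store.append(record)       # real-time graphing path
    else:
        mongo_store.append(record)       # aggregated analysis path

logs = [
    {"type": "transaction", "client": "A", "realtime": True},
    {"type": "batch", "client": "B", "realtime": False},
    {"type": "error", "client": "C"},    # missing flag -> aggregated
]
for rec in logs:
    collect(rec)
print(len(mysql_store), len(mongo_store))
```

The document store's lack of a fixed schema is what allows the third record, which omits a field entirely, to be stored without any migration step.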
Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in the volatility of stock market returns. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days served as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results are still meaningful because the Korea Exchange introduced a volatility futures contract in November 2014, which traders can now trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for its SVR-based counterpart; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored in search of better performance. We also do not consider costs incurred in trading, including brokerage commissions and slippage.
Moreover, the IVTS trading performance is unrealistic since historical volatility values are used as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give stock market investors better information.
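The volatility forecasts above all rest on the GARCH(1,1) recursion sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]. This sketch only evaluates that recursion for given parameters; it performs no estimation, and the parameter values and return series below are illustrative, not the paper's fitted KOSPI 200 values:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].
    Parameters are assumed already estimated (by MLE or SVR)."""
    # Initialize at the unconditional variance omega / (1 - alpha - beta),
    # which requires alpha + beta < 1 (covariance stationarity).
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Illustrative parameters and a short daily return series
vols = garch11_variance([0.01, -0.02, 0.015, 0.0],
                        omega=1e-6, alpha=0.09, beta=0.90)
print(len(vols), all(v > 0 for v in vols))
```

In the SVR-based variant described in the paper, the mapping from lagged squared returns and lagged variance to current variance is learned by support vector regression instead of being fixed by MLE parameter estimates, but the one-step-ahead forecast is produced from the same inputs.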