• Title/Summary/Keyword: SIMPLE


Usefulness of Troponin-I, Lactate, C-reactive protein as a Prognostic Markers in Critically Ill Non-cardiac Patients (비 순환기계 중환자의 예후 인자로서의 Troponin-I, Lactate, C-reactive protein의 유용성)

  • Cho, Yu Ji;Ham, Hyeon Seok;Kim, Hwi Jong;Kim, Ho Cheol;Lee, Jong Deok;Hwang, Young Sil
    • Tuberculosis and Respiratory Diseases
    • /
    • v.58 no.6
    • /
    • pp.562-569
    • /
    • 2005
  • Background: Severity scoring systems are useful for predicting the outcome of critically ill patients, but they are complicated and cost-ineffective. Simple serologic markers, including troponin-I, lactate, and C-reactive protein (CRP), have been proposed for predicting outcome instead. The aim of this study was to evaluate the prognostic value of troponin-I, lactate, and CRP in critically ill non-cardiac patients. Methods: From September 2003 to June 2004, 139 patients (age 63.3 ± 14.7 years, M:F = 88:51) admitted with non-cardiac critical illness to the MICU at Gyeongsang National University Hospital were enrolled. Severity-of-illness and multi-organ failure scores (Acute Physiologic and Chronic Health Evaluation II, Simplified Acute Physiologic Score II, and Sequential Organ Failure Assessment) were evaluated, and troponin-I, lactate, and CRP were measured within 24 hours of MICU admission. Each value was compared between survivors and non-survivors on days 10 and 30 after ICU admission, and mortality rates were compared between the normal and abnormal groups on days 10 and 30. In addition, the correlations between each value and the severity scores were assessed. Results: Troponin-I and CRP levels, but not lactate, were significantly higher in non-survivors than in survivors on day 10 (4.208 ± 10.23 ng/ml and 137.69 ± 70.18 mg/L vs. 1.018 ± 2.58 ng/ml and 98.48 ± 69.24 mg/L) (p < 0.05). Troponin-I, lactate, and CRP levels were all significantly higher in non-survivors than in survivors on day 30 (3.36 ± 8.74 ng/ml, 15.42 ± 20.57 ng/dl, and 131.28 ± 71.23 mg/L vs. 0.99 ± 2.66 ng/ml, 8.02 ± 9.54 ng/dl, and 96.87 ± 68.83 mg/L) (p < 0.05). The mortality rate was significantly higher in the abnormal troponin-I, lactate, and CRP groups than in the corresponding normal groups on day 10 (28.1%, 31.6%, and 18.9% vs. 11.0%, 15.8%, and 0%) and on day 30 (38.6%, 47.4%, and 25.8% vs. 15.9%, 21.7%, and 14.3%) (p < 0.05). Troponin-I and lactate were significantly correlated with the SAPS II score (r² = 0.254 and 0.365, p < 0.05). Conclusion: Measuring troponin-I, lactate, and CRP levels on admission may be useful for predicting the outcome of critically ill non-cardiac patients.
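As a rough illustration of the kind of correlation analysis reported above (e.g. an admission marker vs. the SAPS II score), the coefficient of determination r² can be computed from the Pearson correlation. The marker values and severity scores below are invented for demonstration and are not taken from the study.

```python
# Toy sketch: Pearson r between an admission marker level and the SAPS II
# severity score, squared to give the coefficient of determination r^2.
# All numbers are hypothetical.

import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

lactate = [1.2, 2.5, 4.0, 8.1, 15.3, 3.3]   # toy admission lactate values
saps2   = [22, 30, 41, 55, 68, 35]          # toy SAPS II scores

r = pearson_r(lactate, saps2)
print(round(r ** 2, 3))
```

In the study itself the reported r² values (0.254 and 0.365) are far weaker than this toy monotone example, which is expected for noisy clinical data.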

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.109-131
    • /
    • 2014
  • As the demand for nuclear power plant equipment grows worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technologies is increasing dramatically, the pre-adjudication (or, more simply, prescreening) of strategic materials has so far been performed by experts with long experience and extensive field knowledge. However, such experts are in severe shortage, and developing an expert takes a long time. Because human experts must manually evaluate every document submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the reliance on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs such a system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). Keyword extraction is an essential component of the case-based reasoning system, as it is used to extract the key features of cases. The fully automatic method used TF-IDF, a widely used de facto standard for representative keyword extraction in text mining.
TF (term frequency) counts how often a term occurs within a document, indicating how important the term is to that document, while IDF (inverse document frequency) is based on how infrequently the term appears across the document set, indicating how uniquely the term represents the document. The results show that the semi-automatic approach, based on collaboration between machine and human, is the most effective, regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach for computing nuclear document similarity, along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in deriving the final score (γ) used to decide whether the presented case involves strategic material. The final score (γ) represents the document similarity between past cases and the new case; it is induced not only by conventional TF-IDF but also by a nuclear-system similarity score that takes the context of the nuclear-system domain into account. Finally, the system retrieves the top-3 documents in the case base that are most similar to the new case and presents them with a degree of credibility. With the final score and the credibility score, the user can more easily see which documents in the case base are worth looking up, and can thus make a proper decision at relatively low cost. The system was evaluated by developing a prototype and testing it with field data, and its workflows and outcomes were verified by field experts.
This research is expected to contribute to the growth of the knowledge-service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can be considered a meaningful example of a knowledge-service application.
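The retrieval step described above can be sketched as follows. This is not the authors' implementation: the toy case texts, the β scores, and the 0.7/0.3 weighting used to combine document similarity (α) with system similarity (β) into the final score (γ) are all assumptions made for illustration.

```python
# Illustrative sketch of case retrieval: TF-IDF vectors for past cases,
# cosine similarity to a new case (alpha), combined with an assumed
# document-to-nuclear-system similarity (beta) into a final score (gamma)
# via a hypothetical weighted sum.

import math
from collections import Counter

def tfidf_vectors(docs):
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        vecs.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

past_cases = ["zirconium alloy tube export", "pump seal assembly export",
              "office furniture shipment"]
new_case = "zirconium alloy cladding tube"

vecs = tfidf_vectors(past_cases + [new_case])
alphas = [cosine(vecs[-1], v) for v in vecs[:-1]]   # doc-to-doc similarity
betas = [0.9, 0.4, 0.1]                             # assumed system similarity
gammas = [0.7 * a + 0.3 * b for a, b in zip(alphas, betas)]

# rank past cases by gamma (the study returns the top-3 with credibility)
top = sorted(range(len(past_cases)), key=lambda i: -gammas[i])
```

With these toy inputs the lexically and systemically closest case ranks first, which is the behavior the paper's γ score is designed to produce.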

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.95-118
    • /
    • 2017
  • Recently, centered on the downtown area, transactions in row housing and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing remains a blind spot for real-estate information, which creates a social problem owing to changes in market size and the information asymmetry that follows changes in demand. Moreover, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were established along administrative boundaries and have been used in existing real-estate studies; because they are urban-planning zones, they are not a district classification suited to real-estate research. Building on existing studies, this study found that the spatial structure of Seoul needs to be reset when estimating future housing prices. It therefore attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, the simple division by existing administrative districts has proved inefficient, so this study aims to cluster Seoul into new areas for more efficient real-estate analysis. A hedonic model was applied to actual transaction price data for row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The data comprise actual transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing consisted of removing underground transactions, standardizing price per area, and removing outlying transaction cases (above 5 and below -5).
Through this preprocessing, the 132,707 cases were reduced to 126,759. The R program was used for data analysis. After preprocessing, the data model was constructed: first, K-Means clustering was performed; then a regression analysis using the hedonic model and a cosine-similarity analysis were conducted. Based on the constructed data model, we clustered on the longitude and latitude of Seoul and compared the result against the existing areas. The goodness of fit of the model was above 75%, and the variables used in the hedonic model were significant. In other words, the 5 or 25 existing administrative districts were re-divided into 16 districts. This study thus derived a clustering method for row and multiplex housing in Seoul using the K-Means clustering algorithm and a hedonic model that reflects property-price characteristics. Academic and practical implications, the limitations of this study, and directions for future research are also presented. The academic implications are, first, that the clustering reflects property-price characteristics, improving on the areas used by the Seoul Metropolitan Government, KAB, and existing real-estate research; and second, that whereas apartments have been the main subject of existing real-estate research, this study proposes a method of classifying areas in Seoul using public information (i.e., actual MOLIT transaction data) under Government 3.0. The practical implications are that the result can serve as basic data for real-estate research on row and multiplex housing, that it is expected to activate such research, and that it is expected to increase the accuracy of models of actual transactions.
Future research will conduct various analyses to overcome the limitations identified and pursue deeper investigation.
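The clustering step can be sketched with a minimal Lloyd's K-Means over transaction coordinates. The points and k below are toy assumptions; the study's actual procedure additionally incorporates hedonic price characteristics and yields 16 districts.

```python
# Minimal Lloyd's K-Means over (longitude, latitude) points, in the spirit
# of clustering transaction locations into districts. Coordinates and k
# are invented for demonstration.

import random

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                          + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # update step: recompute centers as cluster means
        new = [tuple(sum(x) / len(c) for x in zip(*c)) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return centers, clusters

# two obvious spatial groups (toy coordinates around Seoul)
pts = [(126.92, 37.55), (126.93, 37.56), (126.91, 37.54),
       (127.05, 37.51), (127.06, 37.50), (127.04, 37.52)]
centers, clusters = kmeans(pts, k=2)
```

The study used R rather than Python; this sketch only shows the algorithmic core that partitions coordinates into spatially coherent groups.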

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media is becoming the platform through which users communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs; however, measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch; in a similar way, microblog users interact with each other while watching television or movies or visiting a new place. In measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Modeling the time-related characteristics of features should therefore be key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set; after excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis.
The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, implying that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons and newly coined words that are not tagged in the morpheme-extraction process have a strong relationship with TV ratings. We find a time-dependency in the correlation of features between the periods before and after broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectations for the program or their disappointment at not being able to watch it, and the features highly correlated before the broadcast differ from those after it. This shows that the relevance of words to TV programs can change with the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach theirs after broadcasting. Interestingly, some words expressing the impossibility of watching the program show high relevance despite their negative meaning. Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research provides a basis for estimating the response to, or satisfaction with, broadcast programs using the time-dependency of words in Twitter chatter.
More research is needed to refine the methodology for predicting or measuring TV ratings.
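The time-dependency finding above can be illustrated by correlating a single word's tweet counts with weekly ratings separately in the before-broadcast and after-broadcast windows, then keeping the window with the stronger correlation. All numbers below are invented for demonstration.

```python
# Toy sketch: a word's correlation with ratings can be strong in one time
# window (here, before airtime) and weak in the other. All data invented.

import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

ratings     = [10.1, 12.3, 11.0, 14.2, 13.5]   # weekly TV ratings (%)
word_before = [120, 180, 150, 240, 210]        # word mentions before airtime
word_after  = [90, 80, 100, 70, 95]            # word mentions after airtime

r_before = pearson(word_before, ratings)
r_after = pearson(word_after, ratings)
best_window = "before" if abs(r_before) > abs(r_after) else "after"
```

A feature-selection pass over many words in this fashion would reproduce the paper's split (145 words peaking before broadcast, 68 after).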

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of their different capital structure and debt-to-equity ratio, they are harder to forecast than the bankruptcies of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, could place greater burdens on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but these models are intended for companies in general and may not be appropriate for forecasting the bankruptcies of construction companies, which typically carry disproportionately large liquidity risks.
The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with this unique capital structure, the same criteria used to judge the financial risk of companies in general cannot be applied effectively to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories that evaluate corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction-firm cases in this study fell into the "moderate" category, which made it difficult to forecast their risk. With the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted this technology. Pattern recognition, a representative application area of machine learning, is applied to bankruptcy forecasting: patterns are analyzed based on a company's financial information and then judged as belonging either to the bankruptcy-risk group or to the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining these models.
Existing studies using the traditional Z-score technique or machine learning for bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies by analyzing it by company size. We classified construction companies into three groups (large, medium, and small) based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
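As a sketch of the AdaBoost procedure the paper applies, the following toy example boosts one-feature decision stumps on an invented debt-ratio feature, with labels +1 for bankrupt and -1 for sound. The data, thresholds, and single-feature setup are assumptions for illustration only.

```python
# Hedged AdaBoost sketch: decision stumps on one toy financial ratio.
# Each round picks the lowest-weighted-error stump, then re-weights the
# observations so misclassified ones count more in the next round.

import math

def stump_predict(x, thr):
    return 1 if x >= thr else -1

def adaboost(xs, ys, thresholds, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []                       # list of (alpha, threshold)
    for _ in range(rounds):
        best = min(thresholds, key=lambda t: sum(
            wi for wi, x, y in zip(w, xs, ys) if stump_predict(x, t) != y))
        err = sum(wi for wi, x, y in zip(w, xs, ys)
                  if stump_predict(x, best) != y)
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, best))
        # increase weight on misclassified observations, then normalize
        w = [wi * math.exp(-alpha * y * stump_predict(x, best))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, t) for a, t in ensemble)
    return 1 if score >= 0 else -1

debt_ratio = [0.9, 0.85, 0.7, 0.4, 0.3, 0.2]   # toy feature
label      = [1, 1, 1, -1, -1, -1]             # +1 bankrupt, -1 sound
model = adaboost(debt_ratio, label, thresholds=[0.25, 0.5, 0.75])
```

Real bankruptcy models use many financial ratios per firm; the single ratio here only shows the weight-update mechanics.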

Diagnosis of the Field-Grown Rice Plant -[1] Diagnostic Criteria by Flag Leaf Analysis- (포장재배(圃場栽培) 수도(水稻)의 영양진단(營養診斷) -1. 지엽분석(止葉分析)에 의(依)한 진단(診斷)-)

  • Park, Hoon
    • Applied Biological Chemistry
    • /
    • v.16 no.1
    • /
    • pp.18-30
    • /
    • 1973
  • The flag and lower leaves (4th or 5th) of rice plants from NPK simple-trial fields and from three low-productivity areas were analyzed in order to find diagnostic criteria for nutritional status at harvest. 1. Nutrient contents in leaves from the no-fertilizer, minus-nutrient, and fertilizer plots revealed criteria for induced deficiency (severe deficiency induced by other nutrients), deficiency (below the critical concentration), insufficiency (hidden-hunger region), sufficiency (luxury-consumption stage), and excess (harmful or toxic level). 2. Nitrogen contents for the above five statuses were less than 1.0%, 1.0 to 1.2, 1.2 to 1.6, 1.6 to 1.9, and greater than 1.9, respectively. 3. For phosphorus (P2O5) they were less than 0.3%, 0.3 to 0.4, 0.4 to 0.55, and greater than 0.55, but the excess level was not clear. 4. For potassium they were below 0.5%, 0.5 to 0.9, 0.9 to 1.2, 1.2 to 1.4, and above 1.4. 5. For silicate (SiO2) they were below 4%, 4 to 6, 6 to 11, and above 11, and no excess level appeared. 6. Potassium in the flag leaf appeared to crowd nitrogen out to the ear, resulting in better ear growth through inhibition of flag-leaf overgrowth. 7. At flowering, phosphorus accelerated the transport of Mg, Si, Mn, and K (in this order) from the lower leaf to the flag leaf, and retarded that of Ca and N (in this order), while at the milky stage potassium accelerated transport in the order of Mn and Ca and retarded it in the order of Mg, Si, P, and N. 8. The transport acceleration index (TAI), expressed as TAI = (F2·L1 - F1·L2) × 100 / (F1·L1), where F and L stand for the contents of another nutrient in the flag and lower leaf and the subscripts indicate the rate of the applied nutrient, appears suitable for assessing the effect of a nutrient on the translocation of others. 9. The silicate (SiO2) content of the flag leaf was lower than that of the lower leaf in early-season cultivation, indicating hindrance of translocation or absorption; the reverse held in normal-season cultivation.
10. The infection rate of Helminthosporium, which frequently occurred in potassium-deficient fields, appeared to be related more to the silicate and nitrogen content of the flag leaf than to its potassium content. 11. Deficiency of one nutrient occurred simultaneously with deficiencies of a few others. 12. Nutritional disorders under field conditions appear to be attributable mainly to macronutrients, with the role of micronutrients being none or secondary.
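The TAI formula in item 8 can be computed directly; the nutrient contents below are invented for demonstration.

```python
# Transport Acceleration Index per the definition above:
# TAI = (F2*L1 - F1*L2) * 100 / (F1*L1), where F and L are the contents of
# another nutrient in the flag and lower leaf, and subscripts 1 and 2 denote
# the two application rates of the nutrient under study. Values are toy.

def tai(f1, l1, f2, l2):
    return (f2 * l1 - f1 * l2) * 100 / (f1 * l1)

# e.g. Mg content (%) in flag (F) and lower (L) leaf at two P rates
print(round(tai(f1=0.20, l1=0.30, f2=0.28, l2=0.25), 1))  # -> 56.7
```

A positive TAI indicates that raising the applied rate shifted the other nutrient toward the flag leaf, i.e. accelerated its upward translocation.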

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond-rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and searches for a minimal upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern-recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance.
First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does for binary classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can reduce computation time for multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data-imbalance problem that occurs when the number of instances in one class greatly outnumbers that in another; such data sets often produce a default classifier with a skewed boundary, and hence reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost, one of the most widely used ensemble learning techniques, constructs a composite classifier by sequentially training classifiers while increasing the weights of misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones, so boosting produces new classifiers that better predict the examples on which the current ensemble performs poorly. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers geometric mean-based accuracies and errors across the classes. This study applies MGM-Boost to a real-world bond-rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, cross-validated folds were tested independently for each algorithm. Through these steps, results were obtained for each classifier over the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results show that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
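The motivation for a geometric mean-based criterion can be seen in a small example: with imbalanced classes, the arithmetic mean of per-class recalls can look moderate while a minority class is almost ignored, whereas the geometric mean collapses toward zero. The recall values below are invented for demonstration.

```python
# Arithmetic vs. geometric mean of per-class recalls on a toy imbalanced
# problem. The minority class (recall 0.05) barely moves the arithmetic
# mean, but drags the geometric mean down sharply.

import math

def arithmetic_mean(accs):
    return sum(accs) / len(accs)

def geometric_mean(accs):
    return math.prod(accs) ** (1 / len(accs))

per_class_recall = [0.95, 0.90, 0.05]   # minority class nearly ignored

print(round(arithmetic_mean(per_class_recall), 3))  # looks moderate
print(round(geometric_mean(per_class_recall), 3))   # exposes the failure
```

A booster that optimizes the geometric mean, as MGM-Boost does, is therefore pushed to improve the minority classes rather than trade them away.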

Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.177-189
    • /
    • 2013
  • In Smart Education, the roles of digital textbook is very important as face-to-face media to learners. The standardization of digital textbook will promote the industrialization of digital textbook for contents providers and distributers as well as learner and instructors. In this study, the following three objectives-oriented digital textbooks are looking for ways to standardize. (1) digital textbooks should undertake the role of the media for blended learning which supports on-off classes, should be operating on common EPUB viewer without special dedicated viewer, should utilize the existing framework of the e-learning learning contents and learning management. The reason to consider the EPUB as the standard for digital textbooks is that digital textbooks don't need to specify antoher standard for the form of books, and can take advantage od industrial base with EPUB standards-rich content and distribution structure (2) digital textbooks should provide a low-cost open market service that are currently available as the standard open software (3) To provide appropriate learning feedback information to students, digital textbooks should provide a foundation which accumulates and manages all the learning activity information according to standard infrastructure for educational Big Data processing. In this study, the digital textbook in a smart education environment was referred to open digital textbook. 
The components of open digital textbooks service framework are (1) digital textbook terminals such as smart pad, smart TVs, smart phones, PC, etc., (2) digital textbooks platform to show and perform digital contents on digital textbook terminals, (3) learning contents repository, which exist on the cloud, maintains accredited learning, (4) App Store providing and distributing secondary learning contents and learning tools by learning contents developing companies, and (5) LMS as a learning support/management tool which on-site class teacher use for creating classroom instruction materials. In addition, locating all of the hardware and software implement a smart education service within the cloud must have take advantage of the cloud computing for efficient management and reducing expense. The open digital textbooks of smart education is consdered as providing e-book style interface of LMS to learners. In open digital textbooks, the representation of text, image, audio, video, equations, etc. is basic function. But painting, writing, problem solving, etc are beyond the capabilities of a simple e-book. The Communication of teacher-to-student, learner-to-learnert, tems-to-team is required by using the open digital textbook. To represent student demographics, portfolio information, and class information, the standard used in e-learning is desirable. To process learner tracking information about the activities of the learner for LMS(Learning Management System), open digital textbook must have the recording function and the commnincating function with LMS. DRM is a function for protecting various copyright. Currently DRMs of e-boook are controlled by the corresponding book viewer. If open digital textbook admitt DRM that is used in a variety of different DRM standards of various e-book viewer, the implementation of redundant features can be avoided. 
Security/privacy functions are required to protect information about study or instruction from third parties. UDL (Universal Design for Learning) is a learning-support function for those whose disabilities make learning courses difficult. The open digital textbook, which is based on the e-book standard EPUB 3.0, must (1) record learning-activity log information and (2) communicate with a server to support the learning activity. While these recording and communication functions, which are not defined in the current standards, can be implemented in JavaScript and used in current EPUB 3.0 viewers, a strategy of proposing them as part of the next-generation e-book standard, or of a special standard (EPUB 3.0 for education), is needed. Future research following this study will implement an open-source program based on the proposed open digital textbook standard and present new educational services including big-data analysis.
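The abstract's third objective is that the textbook record every learning activity and communicate it to the LMS for later big-data analysis. A minimal sketch of the LMS-side accumulation is given below; the record fields, class names, and verbs are illustrative assumptions, not taken from the paper or from any published standard.

```python
import json
import time
from dataclasses import dataclass, field

# Hypothetical sketch: the viewer-side JavaScript proposed in the abstract
# would POST a JSON payload per learning activity; this models the LMS side
# that accumulates those records. All field names are illustrative.

@dataclass
class ActivityRecord:
    learner_id: str       # who performed the activity
    textbook_id: str      # which open digital textbook
    verb: str             # e.g. "opened-page", "answered-problem"
    detail: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class ActivityStore:
    """Accumulates learning-activity records for later analysis."""
    def __init__(self):
        self._records = []

    def record(self, raw_json: str) -> ActivityRecord:
        # Parse the JSON payload sent by the textbook viewer and store it.
        data = json.loads(raw_json)
        rec = ActivityRecord(**data)
        self._records.append(rec)
        return rec

    def by_learner(self, learner_id: str):
        # All activity for one learner, e.g. to build feedback for students.
        return [r for r in self._records if r.learner_id == learner_id]

store = ActivityStore()
store.record(json.dumps({
    "learner_id": "s001", "textbook_id": "math-7",
    "verb": "answered-problem", "detail": {"problem": 3, "correct": True},
}))
store.record(json.dumps({
    "learner_id": "s001", "textbook_id": "math-7",
    "verb": "opened-page", "detail": {"page": 12},
}))
print(len(store.by_learner("s001")))  # 2
```

In a real deployment the payload format would follow whichever e-learning tracking standard the framework adopts, rather than this ad-hoc shape.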

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.79-104
    • /
    • 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform excellently on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of deep learning for text and images, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability, and many studies have been conducted to improve its performance in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to perceive an image from a holistic, general perspective, that is, by identifying the image's constituent objects and their relationships. 
On the contrary, domain experts tend to perceive an image by focusing on the specific elements needed to interpret it in light of their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the domain expertise is transplanted through transfer learning with a small amount of expert data. However, naive transfer learning with expert data may invoke another type of problem: simultaneous learning with captions of various characteristics may cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning on vast amounts of data, most of this interference is self-purified and has little impact on the results; on the contrary, when fine-tuning on a small amount of data, its impact can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer Learning' that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of images and expert captions were created, and this data was used for the expertise-transplantation experiments. 
As a result of the experiments, it was confirmed that captions generated by the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated by learning on general data contain many contents irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on alleviating the lack of expert data and on improving the performance of image captioning.
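The core idea above, fine-tuning independently per characteristic so that mixed expert captions do not interfere with one another, can be illustrated with a deliberately tiny numerical sketch. This is not the authors' captioning model: the "backbone" is a single frozen bias, each "head" is one scalar weight fit by SGD, and the data are made-up (x, y) pairs standing in for captions of two characteristics.

```python
# Toy sketch of 'Character-Independent Transfer Learning': after
# pre-training a shared backbone, fine-tune a separate head per
# characteristic instead of one head on the mixed expert data.
# All models, names, and numbers here are illustrative assumptions.

def train_head(examples, backbone_bias, epochs=50, lr=0.1):
    """Fit y ≈ w * x + backbone_bias with plain per-sample SGD."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x + backbone_bias
            w -= lr * (pred - y) * x  # gradient of 0.5*(pred - y)**2
    return w

# 'Pre-training': backbone bias learned from general data, then frozen.
backbone_bias = 1.0

# Tiny expert dataset, each example tagged by its characteristic.
expert_data = {
    "composition": [(1.0, 3.0), (2.0, 5.0)],   # consistent with y = 2x + 1
    "color":       [(1.0, 0.0), (2.0, -1.0)],  # consistent with y = -x + 1
}

# Character-independent fine-tuning: one head per characteristic,
# trained only on that characteristic's examples -> each fits cleanly.
heads = {c: train_head(ex, backbone_bias) for c, ex in expert_data.items()}

# Naive fine-tuning on the mixed data: the conflicting targets interfere,
# and the single head settles on a compromise matching neither view.
mixed = expert_data["composition"] + expert_data["color"]
mixed_head = train_head(mixed, backbone_bias)

print(round(heads["composition"], 2))  # 2.0
print(round(heads["color"], 2))        # -1.0
```

The per-characteristic heads recover their own slopes exactly, while the mixed head lands between them; this is the 'inter-observation interference' the paper avoids by training each characteristic independently.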

The Change of The Effect on The Subcutaneous Fat Area and Visceral Fat Area by The Functional Electrical Stimulation and Aerobic Exercise (기능적 전기 자극과 유산소 운동이 복부비만의 피하지방과 내장지방에 미치는 효과)

  • Oh Sung-tae;Lee Mun-hwan;Park Rae-Joon
    • The Journal of Korean Physical Therapy
    • /
    • v.16 no.1
    • /
    • pp.85-123
    • /
    • 2004
  • Background: Abdominal subcutaneous fat is a main factor in metabolic disease and arteriosclerosis, and simple weight control is the appropriate medical treatment. Weight reduction not only lowers fat concentrations in the blood but also reduces blood pressure, improves glucose levels in diabetic patients, and reduces the incidence of heart disease. Several methods exist for reducing fat in the abdominal region, but their effectiveness is not fully understood; one such method is electrical stimulation of the problem areas. Method: From May 1st to October 31st, 2002, 15 subjects who received a medical examination, aged between 25 and 53 and of mixed gender, were enrolled. The subjects were divided into two groups, one receiving functional electrical stimulation and the other serving as a control. Using Broca's criterion for judging obesity grade, the differences between the two groups before and after treatment were analyzed. Subjects received functional electrical stimulation on the abdominal muscles at an intensity of 50 Hz, 4 days a week for 40 minutes a day. For aerobic exercise, a treadmill was used at an intensity of $75\%$ of maximum heart rate (220 - age). Result: 1) After functional electrical stimulation, in male subjects, weight was reduced by 1.93 kg, obesity by $2.60\%$, fat mass by 2.73 kg, percent body fat by $4.40\%$, waist circumference by 6.53 cm, and hip circumference by 5.53 cm. Muscle mass increased by 1.03 kg, but not to a significant level. The subcutaneous fat area was reduced by $26.63cm^2$ and the visceral fat area by $43.00cm^2$. In female subjects, fat mass was reduced by 1.5 kg, percent body fat by $1.77\%$, waist circumference by 4.02 cm, hip circumference by 3.67 cm, and weight by 1.40 kg, while muscle mass increased by 0.72 kg. 
Reductions were also seen in the subcutaneous fat area by $24.03cm^2$ and the visceral fat area by $25.36cm^2$. 2) After aerobic exercise, male subjects showed reductions in weight by 3.36 kg, obesity by $4.00\%$, and fat mass by 2.83 kg, with an increase in soft lean mass of 2.96 kg; percent body fat was reduced by $3.03\%$, fat distribution by $0.023\%$, waist circumference by 3.10 cm, and hip circumference by 2.23 cm. Female subjects showed reductions in weight by 2.48 kg and obesity by $2.20\%$, with an increase in soft lean mass of 1.54 kg; fat mass was reduced by 2.32 kg, percent body fat by $2.80\%$, waist circumference by 2.16 cm, hip circumference by 2.68 cm, fat distribution by $0.016\%$, the subcutaneous fat area by $15.25cm^2$, and the visceral fat area by $11.52cm^2$. After aerobic exercise, no significant changes were seen in total cholesterol, triglyceride, high-density lipoprotein cholesterol, or low-density lipoprotein cholesterol. 3) After applying functional electrical stimulation and aerobic exercise, measurement of body composition showed weight reduction and increased muscle mass in the male aerobic exercise group. Significant changes were seen with electrical stimulation in abdominal fat rate, waist circumference, and hip circumference; on the other hand, no significant differences were found between the two groups in obesity rate, fat mass, or percent body fat. There was no significant difference in the subcutaneous fat area, whereas there was a significant difference in the visceral fat area at the electrical stimulation site. No significant changes in total cholesterol, triglyceride, high-density lipoprotein cholesterol, or low-density lipoprotein cholesterol were seen between electrical stimulation and aerobic exercise. 
4) After functional electrical stimulation and aerobic exercise, measurement of body-composition changes in female subjects showed weight reduction and increased muscle mass in the aerobic exercise group. Significant differences were seen in the obesity rate, the abdominal fat rate, and the circumference at the electrically stimulated site, but no significant differences were found between the two groups in fat mass or hip circumference. The subcutaneous fat area showed no significant difference; on the contrary, the visceral fat area of the electrical stimulation group showed a clear difference. Conclusion: The results show that functional electrical stimulation and aerobic exercise produce insignificant differences in total cholesterol, triglyceride, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol, although both produce positive changes in body composition. Functional electrical stimulation is more effective on the subcutaneous fat area and in changing the visceral fat area. It is therefore concluded that this physical therapy is more effective in the treatment of abdominal obesity.
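The exercise intensity in the methods above is defined by a simple formula: 75% of the age-predicted maximum heart rate, 220 - age. A small helper makes the arithmetic concrete; the function name and the example age are illustrative, not from the paper.

```python
# Target intensity for the study's aerobic-exercise protocol:
# 75% of the age-predicted maximum heart rate (220 - age).
def target_heart_rate(age: int, fraction: float = 0.75) -> float:
    """Return the target exercise heart rate in beats per minute."""
    max_hr = 220 - age          # age-predicted maximum heart rate
    return fraction * max_hr

# e.g. a hypothetical 40-year-old subject:
print(target_heart_rate(40))  # 135.0
```

For the study's age range of 25 to 53, this target spans roughly 125 to 146 beats per minute.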
