• Title/Summary/Keyword: 우수이용 시스템 (rainwater utilization system)


Derivation of Green Infrastructure Planning Factors for Reducing Particulate Matter - Using Text Mining - (미세먼지 저감을 위한 그린인프라 계획요소 도출 - 텍스트 마이닝을 활용하여 -)

  • Seok, Youngsun;Song, Kihwan;Han, Hyojoo;Lee, Junga
    • Journal of the Korean Institute of Landscape Architecture / v.49 no.5 / pp.79-96 / 2021
  • Green infrastructure planning represents a landscape planning measure to reduce particulate matter. This study aimed to derive factors that may be used in planning green infrastructure for particulate matter reduction using text mining techniques. A range of analyses were carried out by focusing on keywords such as 'particulate matter reduction plan' and 'green infrastructure planning elements', including Term Frequency-Inverse Document Frequency (TF-IDF) analysis, centrality analysis, related-word analysis, and topic modeling. These analyses were carried out via text mining on previous related research, policy reports, and laws. First, the TF-IDF results were used to classify the major keywords relating to particulate matter and green infrastructure into three groups: (1) environmental issues (e.g., particulate matter, environment, carbon, and atmosphere), (2) target spaces (e.g., urban, park, and local green space), and (3) application methods (e.g., analysis, planning, evaluation, development, ecological aspect, policy management, technology, and resilience). Second, the centrality analysis results were similar to those of TF-IDF; the central connectors to the major keywords were 'Green New Deal' and 'vacant land'. Third, the related-word analysis verified that planning green infrastructure for particulate matter reduction requires planning forests and ventilation corridors, and that moisture must be considered for microclimate control. It also confirmed that utilizing vacant space, establishing mixed forests, introducing particulate matter reduction technology, and understanding the system are important for effective green infrastructure planning. Finally, topic modeling was used to classify the planning elements of green infrastructure by ecological, technological, and social function. The ecological planning elements were classified into morphological aspects (e.g., urban forest, green space, wall greening) and functional aspects (e.g., climate control, carbon storage and absorption, provision of habitats, and biodiversity for wildlife). The technological planning elements covered themes including the disaster prevention functions of green infrastructure, buffer effects, stormwater management, water purification, and energy reduction. The social planning elements covered themes such as community function, improving the health of users, and scenery improvement. These results suggest that green infrastructure planning for particulate matter reduction requires approaches tied to key concepts such as resilience and sustainability; in particular, planning elements should be applied so as to reduce exposure to particulate matter.
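
As an illustration of the TF-IDF step described above, the following minimal Python sketch ranks candidate keywords with scikit-learn; the three-document corpus is a placeholder for the collected research papers, policy reports, and legal texts, not the study's actual data.

```python
# Minimal TF-IDF keyword ranking with scikit-learn.
# The corpus is a hypothetical stand-in for the collected documents.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "particulate matter reduction plan for urban parks and green space",
    "green infrastructure planning elements and urban forest policy",
    "ventilation corridors and microclimate control in the city",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=1000)
tfidf = vectorizer.fit_transform(corpus)

# Rank terms by their mean TF-IDF weight across the corpus.
scores = tfidf.mean(axis=0).A1
terms = vectorizer.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda p: -p[1])[:10]:
    print(f"{term}\t{score:.3f}")
```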

A Study on the Satisfaction and Improvement Plan of Fraud Prevention Education about Technical and Vocational Education and Training (직업훈련 부정 예방교육 만족도 조사와 개선방안 연구)

  • Jeong, Sun Jeong;Lee, Eun Hye;Lee, Moon Su
    • Journal of vocational education research / v.37 no.5 / pp.25-53 / 2018
  • The purpose of this study is to identify improvements through a satisfaction survey of trainees involved in vocational training fraud prevention education. To this end, we surveyed the 5,939 people who participated in the prevention education conducted as group (collective) education or e-learning in 2017; 4,263 persons answered, and 4,237 effective responses were ultimately collected. Descriptive statistics and regression analysis were conducted. The findings were as follows. First, participants rated education service quality (4.42), satisfaction (4.44), understanding (4.44), and helpfulness (4.45) highly, with all scores above 4. Second, e-learning participants rated education service quality, satisfaction, comprehension, and helpfulness higher on all variables than collective education participants did. Third, all sub-factors of preventive education service quality influenced satisfaction, understanding, and helpfulness in both collective education and e-learning; in collective education the contents of education had the greatest influence, while in e-learning the data composition had the greatest influence. Fourth, the education contents respondents most wanted were cases of training fraud (70.7%), disposition regulations (47.9%), NCS course operation instructions (32.8%), and training management best practices (32.4%). Additional requirements included the establishment of an in-depth course, the provision of anti-fraud education content for trainees, and improved screen transitions and system stability for e-learning. Based on these findings, the study makes six suggestions. First, e-learning for prevention education should be expanded, reflecting its higher satisfaction compared with collective education. Second, the content of preventive education should be diversified and enriched, since content had the greatest common influence on satisfaction, understanding, and helpfulness. Third, after content, the factors with relatively large influence on satisfaction in collective education were delivery method and education place, so venues should be prepared with instructor assignment and convenience in mind. Fourth, after data composition, operator support had a relatively large influence on understanding and helpfulness, so more active operator support is required in e-learning. Fifth, prevention education should also be delivered to trainees participating in vocational training. Sixth, educational needs should be analyzed to construct the contents of preventive education more systematically.
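
The descriptive statistics and regression analysis described above could be reproduced along the following lines; this is a hedged sketch with made-up scores and hypothetical column names, not the survey's actual variables.

```python
# Hedged sketch: regress satisfaction on service-quality sub-factors.
# All values and column names are illustrative, not the survey data.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "content":      [4.5, 4.2, 4.8, 3.9, 4.6, 4.1, 4.7, 4.3],  # education content
    "delivery":     [4.1, 4.0, 4.7, 3.8, 4.4, 4.2, 4.6, 4.0],  # delivery method
    "place":        [4.3, 3.9, 4.6, 4.0, 4.5, 4.1, 4.4, 4.2],  # education place
    "satisfaction": [4.4, 4.0, 4.9, 3.8, 4.6, 4.2, 4.7, 4.1],
})

print(df.describe())  # descriptive statistics

X = sm.add_constant(df[["content", "delivery", "place"]])
model = sm.OLS(df["satisfaction"], X).fit()
print(model.params)   # coefficient sizes indicate each factor's influence
```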

A Study on Particulate Matter Forecasting Improvement by using Asian Dust Emissions in East Asia (황사배출량을 적용한 동아시아 미세먼지 예보 개선 연구)

  • Choi, Daeryun;Yun, Huiyoung;Chang, Limseok;Lee, Jaebum;Lee, Younghee;Myoung, Jisu;Kim, Taehee;Koo, Younseo
    • Journal of the Korean Society of Urban Environment / v.18 no.4 / pp.531-546 / 2018
  • An air quality forecasting system incorporating Asian dust emissions was developed for East Asia, and the $PM_{10}$ forecasting performance of the chemical transport model (CTM) with Asian dust emissions was validated and evaluated. The CTM with Asian dust emissions supplemented the $PM_{10}$ concentrations that had been under-estimated over China and improved the model's performance statistics, although it was overestimated during some periods in China. In Korea, the model adequately simulated the inflow of Asian dust events on February 22~24 and March 16~17, but was overestimated during periods without Asian dust events in April. Nevertheless, it supplemented the $PM_{10}$ concentrations that had been underestimated in most regions of Korea, and the performance statistics improved. Compared with the basic model without Asian dust emissions, the forecasting model with Asian dust emissions tended to improve POD (Probability of Detection), but A (Accuracy) was similar or decreased and FAR (False Alarms) increased during 2017. Therefore, the developed air quality forecasting model with Asian dust emissions is not proposed as a representative $PM_{10}$ forecast model for South Korea.
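
For reference, the verification scores mentioned above (POD, A, FAR) are computed from a 2x2 forecast/observation contingency table, roughly as in the sketch below; the counts are placeholders, and FAR is computed here as the common false-alarm ratio.

```python
# Forecast verification from a 2x2 contingency table (counts are placeholders).
hits, misses, false_alarms, correct_negatives = 42, 8, 15, 300
total = hits + misses + false_alarms + correct_negatives

pod = hits / (hits + misses)                # Probability of Detection
far = false_alarms / (hits + false_alarms)  # False Alarm Ratio (one common definition)
acc = (hits + correct_negatives) / total    # Accuracy

print(f"POD={pod:.2f}  FAR={far:.2f}  A={acc:.2f}")
```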

Analyzing the discriminative characteristic of cover letters using text mining focused on Air Force applicants (텍스트 마이닝을 이용한 공군 부사관 지원자 자기소개서의 차별적 특성 분석)

  • Kwon, Hyeok;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.75-94 / 2021
  • The low birth rate and the shortened military service period are raising concerns about selecting excellent military officers. The Republic of Korea became a low-birth-rate society in 1984 and an aged society in 2018, and is expected to become a super-aged society in 2025. In addition, the troop-oriented military is changing into one oriented toward state-of-the-art weapons, and the military service period was reduced in 2018 to ease the burden of service on young people and let them enter society earlier. Some observe that the application rate for military officer positions is falling due to shrinking manpower resources and a preference for the shortened mandatory service over serving as an officer. This calls for further consideration of policies for securing excellent military officers. Most related studies have used social-science methodologies, but this study applies text mining, which is suitable for analyzing large collections of documents. It extracts words with discriminative characteristics from the cover letters of Republic of Korea Air Force non-commissioned officer applicants and analyzes their polarity with respect to pass and fail. The analysis consists of three steps. First, applications are divided into general and technical fields, and the characteristic words in the cover letters are ordered by the difference in their frequency ratios between the two fields; the greater the difference in proportion between fields, the more discriminative the word is defined to be. On this basis, we extract the top 50 discriminative words for the general field and the top 50 for the technical field. Second, the appropriate number of topics across all cover letters is determined with LDA, using the perplexity score and the coherence score. Given the appropriate number of topics, LDA generates topics and word probabilities, and we estimate which topic each discriminative word belongs to. Keyword indicators from the application questions are then used as labeling candidates, and the indicator most appropriate to each topic's word distribution is set as that topic's label. Third, using L-LDA with pass and fail as labels, we generate topics and probabilities for each field under the pass and fail labels, and extract only the discriminative words that belong to labeled topics. For each such word we compute the difference between its probability under the pass label and under the fail label: a positive value indicates pass polarity, a negative value fail polarity. This is the first study to examine the characteristics of cover letters of Republic of Korea Air Force non-commissioned officer applicants rather than applicants in the private sector. Moreover, the methodology applies text mining to large document collections rather than survey or interview methods, reducing analysis time and increasing reliability across the entire population; it is therefore also applicable to other collections of documents in the field of military personnel.
This study shows that L-LDA is more suitable than LDA for extracting the discriminative characteristics of Republic of Korea Air Force non-commissioned officer cover letters, and it proposes a methodology that combines LDA and L-LDA. Through this analysis of Air Force non-commissioned officer acquisition, we aim to provide information useful for acquisition and promotion policies and to propose a methodology for research in military manpower acquisition.
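
A minimal gensim sketch of the second step, assuming tokenized cover letters are already prepared (the toy documents below are placeholders): the number of topics is chosen by comparing perplexity and coherence across candidate values of k.

```python
# Choosing the LDA topic count with perplexity and coherence (gensim).
# `docs` is a toy stand-in for the tokenized cover letters.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

docs = [
    ["leadership", "teamwork", "aircraft", "maintenance"],
    ["engine", "maintenance", "training", "aircraft"],
    ["teamwork", "communication", "training", "leadership"],
]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

for k in range(2, 5):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
    coh = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                         coherence="c_v").get_coherence()
    print(f"k={k}: log perplexity={lda.log_perplexity(corpus):.2f}, coherence={coh:.3f}")
```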

Preparation of an Inactivated Influenza Vaccine Using the Ethanol Extracts of Medical Herbs (한약재 식물 에탄올추출물을 이용한 인플루엔자 불활화백신 제작)

  • Cho, Sehee;Lee, Seung-Hoon;Kim, Seonjeong;Cheong, Yucheol;Kim, Yewon;Kim, Ju Won;Kim, Su Jeong;Seo, Seungin;Seo, Dong-Won;Lim, Jae-Hwan;Jeon, Sejin;Jang, Yo Han
    • Journal of Life Science / v.32 no.12 / pp.919-928 / 2022
  • As seen in the COVID-19 pandemic, the unexpected emergence of new viruses presents serious concerns for public health. In particular, the absence of effective vaccines or antiviral drugs against emerging viruses significantly increases disease severity and the duration of viral circulation in the population. Natural products have served as a major source of safe and effective antiviral drugs. In this study, we examined the virucidal activity of medicinal herb extracts with a view to discovering novel antiviral agents with the desired levels of safety and antiviral efficacy. Ethanol extracts of ten selected medicinal herbs were tested for antioxidant activity and in-vitro cytotoxicity in various animal cell lines. Notably, the herbal extracts showed broad and potent virucidal activities against rotavirus, hepatitis A virus, and influenza A virus, with the extracts of Sorbus commixta and Glycyrrhiza uralensis showing strong virucidal activity against influenza A virus. We also examined whether these two extracts can be used as inactivating agents to prepare an inactivated viral vaccine. In a mouse model, influenza A virus inactivated by the extracts elicited high levels of neutralizing antibodies, and vaccination provided complete protection against lethal challenge. These results suggest that herb-derived natural products can be developed into antiviral drugs as well as inactivating agents for the preparation of inactivated viral vaccines.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facilities in particular fail irregularly because of their interdependence, which makes the cause difficult to identify. Previous studies on failure prediction in data centers treated each server as a single, independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring within a server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, in part because server failures do not occur singly: a failure can cause failures in other servers or be triggered by them. In other words, while existing studies analyzed failures on the assumption of a single server unaffected by others, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment was used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures per device were sorted in chronological order, and when a failure in one piece of equipment was followed by a failure in another within 5 minutes, the failures were defined as simultaneous. After constructing sequences of devices that failed simultaneously, the 5 devices that most frequently failed together within those sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server setting, the Hierarchical Attention Network structure was used to reflect the fact that the severity of multiple failures differs across servers; this algorithm improves prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data was modeled both as single-server states and as multi-server states, and the results were compared. The second experiment improved prediction accuracy for the complex-server case by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another, confirming that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network, which assumes the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and presents a model that can predict failures occurring in data center servers. The results are expected to help prevent failures in advance.
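
The server-weighting idea can be sketched as below: a simplified, single-level variant of the hierarchical attention structure in PyTorch, where each server's time series is encoded by an LSTM and an attention layer weights servers by their estimated influence on the failure. All dimensions are illustrative assumptions, not the paper's configuration.

```python
# Simplified attention-over-servers sketch (PyTorch); not the full model.
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each server encoding
        self.head = nn.Linear(hidden, 1)   # failure probability

    def forward(self, x):                  # x: (batch, servers, time, features)
        b, s, t, f = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)               # one encoding per server
        w = torch.softmax(self.attn(h), dim=1)    # attention weight per server
        ctx = (w * h).sum(dim=1)                  # influence-weighted summary
        return torch.sigmoid(self.head(ctx))

model = ServerAttentionNet(n_features=8)
pred = model(torch.randn(4, 5, 30, 8))  # 4 samples, 5 servers, 30 timesteps
print(pred.shape)                       # torch.Size([4, 1])
```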

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.111-131 / 2015
  • Only a handful of studies have analyzed the patterns of corporate distress, compared with the many on bankruptcy prediction, and the few that exist mainly focus on audited firms because their financial data are easier to collect. In reality, however, financial distress is a far more common and critical phenomenon for non-audited firms, which are mainly small and medium-sized. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining method, the Self-Organizing Map (SOM). SOM is a type of artificial neural network trained by unsupervised learning to produce a lower-dimensional discretized representation of the input space of the training samples, called a map. It differs from other artificial neural networks in that it applies competitive learning rather than error-correction learning (such as backpropagation with gradient descent), and it uses a neighborhood function to preserve the topological properties of the input space; it is a popular and successful clustering algorithm. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004, covering the previous two years (2002 and 2003). Using these ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, distress was very serious across almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, distress was weak across almost all ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns; these firms may be under distress due to severe competition in their industries, and approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash and profitability ratios did not keep pace; these firms appear to have become distressed while expanding their business, and about 25% of the firms were in this pattern. Finally, pattern 5 comprised very solvent firms, perhaps distressed by a bad short-term strategic decision or by problems with the entrepreneur; approximately 18% of the firms were in this pattern. This study makes both academic and empirical contributions. Academically, non-audited companies, which go bankrupt more easily and have unstructured or easily manipulated financial data, are classified with a data mining technique (the Self-Organizing Map), rather than the large audited firms with well-prepared, reliable data studied previously. Empirically, even though only non-audited firms' financial data were analyzed, the analysis is useful for detecting first-order symptoms of financial distress, supporting bankruptcy prediction and early-warning signals. A limitation of this research is that only 100 corporations were analyzed, owing to the difficulty of collecting financial data for non-audited firms, which precluded analysis by category or size.
Also, non-financial qualitative data are crucial for the analysis of bankruptcy, so non-financial qualitative factors should be taken into account in future work. This study sheds some light on distress prediction for non-audited small and medium-sized firms.
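
A minimal sketch of the SOM step using the MiniSom library; the data matrix is random placeholder values standing in for the 10 financial ratios of the 100 firms, and the 5x1 grid mirrors the five patterns found.

```python
# SOM clustering of firms by financial ratios (MiniSom sketch).
# `ratios` is random placeholder data, not the study's actual ratios.
import numpy as np
from minisom import MiniSom

ratios = np.random.rand(100, 10)  # 100 firms x 10 financial ratios

som = MiniSom(x=5, y=1, input_len=10, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(ratios, num_iteration=1000)

# Each firm is assigned to its best-matching unit (one of 5 patterns).
patterns = [som.winner(firm) for firm in ratios]
print(patterns[:5])
```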

Weaning Following a 60 Minutes Spontaneous Breathing Trial (1시간 자가호흡관찰에 의한 기계적 호흡치료로부터의 이탈)

  • Park, Keon-Uk;Won, Kyoung-Sook;Koh, Young-Min;Baik, Jae-Jung;Chung, Yeon-Tae
    • Tuberculosis and Respiratory Diseases / v.42 no.3 / pp.361-369 / 1995
  • Background: A number of different weaning techniques can be employed, such as a spontaneous breathing trial, intermittent mandatory ventilation (IMV), or pressure support ventilation (PSV), but conclusive data indicating the superiority of one technique over another have not been published. Usually, a conventional spontaneous breathing trial is undertaken by supplying humidified $O_2$ through a T-shaped adaptor connected to the endotracheal or tracheostomy tube. In Korea, the T-tube trial is not popular because a high-flow oxygen system is not always available. The timing of extubation is also not settled and depends on clinical experience. It is known that withdrawing the endotracheal tube promptly after weaning is far better than leaving it in place for any period, since the tube produces varying degrees of resistance depending on its internal diameter and the flow rates encountered. The purpose of the present study was to evaluate the effectiveness of weaning and extubation following a 60-minute spontaneous breathing trial with simple oxygen supply through the endotracheal tube. Methods: We analyzed the results of weaning and extubation following a 60-minute spontaneous breathing trial with simple oxygen supply through the endotracheal tube in 18 subjects (9 males and 9 females) from June 1993 to June 1994. The duration of mechanical ventilation ranged from 38 to 341 hours (mean: $105.9 \pm 83.4$ hours). In all cases, the cause of ventilator dependency was identified and precipitating factors were corrected. The weaning trial was done when the patient became alert and arterial $O_2$ tension was adequate ($PaO_2$ > 55 mmHg) with an inspired oxygen fraction of 40%. We conducted a careful physical examination while the patient was breathing spontaneously through the endotracheal tube. Failure of the weaning trial was signaled by cyanosis, sweating, paradoxical respiration, or intercostal recession; weaning failure was defined as the need for mechanical ventilation within 48 hours. Results: In 19 weaning trials of 18 patients, successful weaning and extubation were possible in 16/19 (84.2%). During the 60-minute trial of spontaneous breathing through the endotracheal tube, the patients who could be weaned developed a slight increase in respiratory rate without significant changes in arterial blood gas values, whereas the patients who failed showed a marked increase in respiratory rate, likewise without significant changes in arterial blood gas values. Conclusion: The results of the present study indicate that weaning from mechanical ventilation following 60 minutes of spontaneous breathing with $O_2$ supply through the endotracheal tube is a simple and effective method. Extubation can be done at the time of successful weaning, except when endobronchial toilet or airway protection is needed.


Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.97-117 / 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users or manufacturers to measure and understand the products. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation, and the criteria should reflect the features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect how consumer opinions differ from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria, but they still produce criteria irrelevant to the products because extracted or improper words are not refined. To overcome this limitation, this research proposes the LDA-k-NN model, which extracts candidate criteria words from online reviews using LDA and refines them with k-nearest neighbors. The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later combined to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, the words for each review topic are obtained with LDA, and only the nouns among the topic words are kept as candidate criteria words. The words are then tagged according to whether they are plausible criteria for each middle-level category. Next, every tagged word is vectorized with a pre-trained word embedding model. Finally, a k-nearest-neighbor, case-based approach is used to classify each word against the tagged examples. After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts by crawling reviews in the corresponding low-level category, and the same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Candidate criteria words are extracted by taking the nouns and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words using the k-nearest-neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea, using the 'Electronics/Digital' section, one of 11st's high-level categories. The suggested model was compared against three others: the actual criteria used by 11st, a model that extracts nouns with the morpheme analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation predicted evaluation criteria for 10 low-level categories with the suggested model and the 3 models above. The criteria words extracted from each model were combined into a single word set used for survey questionnaires, in which respondents chose every item they considered an appropriate criterion for each category; each model scored when a chosen word had been extracted by that model.
The suggested model had higher scores than the other models in 8 of the 10 low-level categories, and paired t-tests on the models' scores confirmed that it performed better in 26 of 30 tests. In addition, the suggested model was the best in terms of accuracy. This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement by the k-nearest-neighbor approach, overcoming the limits of previous dictionary-based models and frequency-based refinement models. The study can contribute to improving review analysis for deriving business insights in the e-commerce market.
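
The refinement step can be sketched as follows with scikit-learn, assuming the tagged words have already been vectorized by a pre-trained embedding model; the random vectors below are placeholders for those embeddings.

```python
# k-NN refinement of candidate criteria words over word embeddings.
# Random vectors stand in for pre-trained embeddings of real words.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
tagged_vectors = rng.normal(size=(200, 100))  # embeddings of tagged words
tags = rng.integers(0, 2, size=200)           # 1 = criteria word, 0 = not

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(tagged_vectors, tags)

candidates = rng.normal(size=(10, 100))       # embeddings of LDA topic nouns
is_criteria = knn.predict(candidates)         # keep only words predicted as 1
print(is_criteria)
```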

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.39-53 / 2013
  • The emergence of internet technology and SNS has increased information flow and changed the way people communicate from one-way to two-way. Users not only consume and share information; they also create it and share it with friends across social network services. Social media has thus become one of the most important communication tools, a shift that includes Social TV, in which people watch a TV program while sharing information about it or its content with friends through social media. Social news is growing popular as a form of participatory social media: it channels user interest through the internet to surface societal issues, and it builds news credibility on users' reputations. However, conventional news service platforms focus only on news recommendation. Recent developments in SNS let users share and disseminate news, but conventional platforms provide no dedicated way to share it. Current social news services only let users access an entire news item; users cannot access just the part related to their interest. For example, if a user is interested in only part of a news item, sharing that part is still hard, and in the worst case other users may understand the news in a different context. To solve this, a social news service must provide a method for supplying additional information. For example, Yovisto, an academic video search service, provides time-dependent metadata for videos: users can search for and watch parts of a video according to the metadata and share content with friends on social media. Yovisto divides and synchronizes a video at each point where the slide presentation changes to another page. However, this method cannot be employed for news video, since news video does not incorporate a slide presentation; a segmentation method is required to divide the news video and create time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and provide time-dependent metadata so that users can use context information to communicate with their friends. The news transcript is divided using the proposed story segmentation method. We provide a tag representing the entire content of the news and sub-tags indicating each segment, including its starting time. The time-dependent metadata help users track news information, allow them to comment on each segment, and let them share the news either as segments or as a whole, which helps recipients understand the shared news. To demonstrate the performance, we evaluated story segmentation accuracy and tag generation, measuring segmentation accuracy through semantic similarity and comparing it with a benchmark algorithm. Experimental results show that the proposed method outperforms benchmark algorithms in segmentation accuracy. Sub-tag accuracy matters most within the proposed framework, since sub-tags are what allow a specific news context to be shared with others.
To extract more accurate sub-tags, we created a stop-word list of terms unrelated to the content of the news, such as the names of anchors or reporters, and applied it to the framework. We then analyzed the accuracy of the tags and sub-tags that represent the context of the news. The analysis suggests that the proposed framework helps users share their opinions with context information in social media and social news.
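
A simplified sketch of similarity-based story segmentation: adjacent transcript sentences are compared (here with TF-IDF cosine similarity rather than the paper's exact measure), and a new segment starts wherever similarity drops below a threshold. The sentences and threshold are illustrative assumptions.

```python
# Segment a transcript where adjacent-sentence similarity drops.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The president announced a new economic policy today.",
    "The policy focuses on job creation and tax reform.",
    "In sports, the national team won its qualifying match.",
    "The team's striker scored twice in the second half.",
]

tfidf = TfidfVectorizer().fit_transform(sentences)
boundaries = [0]  # index where each story segment starts
for i in range(len(sentences) - 1):
    sim = cosine_similarity(tfidf[i], tfidf[i + 1])[0, 0]
    if sim < 0.1:                 # low similarity => new story segment
        boundaries.append(i + 1)

print(boundaries)
```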