• Title/Summary/Keyword: 키워드 선택 (keyword selection)

Search results: 170 (processing time: 0.025 seconds)

Analysis of Medical Student's Need for Pre-Medical Course on the Contents of Science Curriculum in High School (의예과 교육과정에 필요한 고등학교 과학관련 교과목 내용에 대한 요구분석)

  • Park, Hye Jin;Park, Won Kyun;Kim, Yura
    • Journal of Science Education / v.45 no.1 / pp.129-141 / 2021
  • With the change of the undergraduate medical education system, many medical schools have recently developed or are running a premedical curriculum. The premedical curriculum should be designed according to the sequencing and level of the medical curriculum, but there has been no discussion of standards or evidence for the basic science-related subjects. Therefore, this study examines Physics I, Physics II, Life Sciences I, Life Sciences II, Chemistry I, and Chemistry II as the subjects of the needs assessment. The needs assessment used the mean, the mean difference, the Borich needs formula, and the Locus for Focus model of retention and importance, and the results were converted into 76 keywords. The results of this study are expected to serve as basic data for developing basic science-related subjects in the premedical curriculum.
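The Borich needs formula mentioned above weights each importance-performance gap by the mean importance rating. A minimal sketch in Python, with hypothetical survey ratings rather than the paper's data:

```python
def borich_need(importance, performance):
    """Borich needs score for one item: the mean (importance - performance)
    gap across respondents, weighted by the mean importance rating."""
    n = len(importance)
    mean_importance = sum(importance) / n
    gaps = [i - p for i, p in zip(importance, performance)]
    return sum(g * mean_importance for g in gaps) / n

# Hypothetical ratings from four respondents for one subject topic.
score = borich_need([5, 4, 5, 4], [2, 3, 2, 3])
# -> 9.0
```

Items are then ranked by this score; the Locus for Focus model additionally plots each item's importance against its gap to pick out the high-priority quadrant.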

A Statistical Analysis of the Causes of Marine Incidents occurring during Berthing (정박 중 발생한 준해양사고 원인에 대한 통계 분석 연구)

  • Roh, Boem-Seok;Kang, Suk-Young
    • Journal of Navigation and Port Research / v.45 no.3 / pp.95-101 / 2021
  • Under Heinrich's law, marine incidents (near-misses) are very important in preventing accidents. However, marine incident data are mainly qualitative and are used to prevent similar accidents through case sharing rather than statistical analysis, as can be confirmed in the marine incident-related data posted by the Korea Maritime Safety Tribunal. Therefore, this study derived quantitative results by analyzing the causes of marine incidents during berthing using various methods of statistical analysis. To this end, marine incident data were collected from various shipping companies and reclassified for easier analysis. The main keywords were derived in a primary analysis using text mining. Only meaningful words were selected through verification by an expert group, and time-series and cluster analysis were performed to predict marine incidents that may occur during berthing. Although the role of an expert group was still required during the analysis, it was confirmed that quantitative analysis of marine incidents is feasible and can be used to provide information on causes and accident prevention.
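The primary text-mining step described above, deriving main keywords from free-text incident reports, can be sketched as a simple token count. The report sentences and stopword list here are made up, not the study's data:

```python
from collections import Counter

# Illustrative near-miss reports (hypothetical, not the paper's dataset).
reports = [
    "mooring line parted during strong wind",
    "gangway not secured during cargo watch",
    "mooring line tension not monitored in wind",
]

STOPWORDS = {"during", "not", "in", "the"}

def keyword_frequencies(texts):
    """Count non-stopword tokens across all reports (primary keyword extraction)."""
    counter = Counter()
    for text in texts:
        counter.update(w for w in text.lower().split() if w not in STOPWORDS)
    return counter

freq = keyword_frequencies(reports)
# 'mooring', 'line', and 'wind' each appear twice.
```

In the study, an expert group would then keep only the meaningful terms before the time-series and cluster analyses.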

Information-providing Application Based on Web Crawling (웹 크롤링을 통한 개인 맞춤형 정보제공 애플리케이션)

  • Ju-Hyeon Kim;Jeong-Eun Choi;U-Gyeong Shin;Min-Jun Piao;Tae-Kook Kim
    • Journal of Internet of Things and Convergence / v.10 no.1 / pp.21-27 / 2024
  • This paper presents the implementation of a personalized real-time information-providing application that combines filtering and web crawling. The application crawls web pages for user-set keywords using the Jsoup library. The crawled data are stored in a MySQL database and presented to the user through an application implemented with Flutter. Additionally, mobile push notifications are provided using Firebase Cloud Messaging (FCM). Through these methods, users can obtain the information they want quickly and efficiently. Furthermore, this approach is expected to be applicable to the Internet of Things (IoT), where big data is generated, allowing users to receive only the information they need.
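The keyword-filtering step can be illustrated with Python's standard-library HTML parser standing in for Jsoup (the paper itself uses the Java Jsoup library); the page and keywords below are made up:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text nodes, roughly analogous to Jsoup's text()."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def matches_keywords(html, keywords):
    """Return the user-set keywords that occur in the page's visible text."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks).lower()
    return [k for k in keywords if k.lower() in text]

html = "<html><body><h1>Campus Notice</h1><p>Scholarship applications open today.</p></body></html>"
# matches_keywords(html, ["scholarship", "tuition"]) -> ["scholarship"]
```

Pages that match would then be stored and pushed to the user via FCM.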

Analyzing the Phenomena of Hate in Korea by Text Mining Techniques (텍스트마이닝 기법을 이용한 한국 사회의 혐오 양상 분석)

  • Kim, Hea-Jin
    • Journal of the Korean Society for Library and Information Science / v.56 no.4 / pp.431-453 / 2022
  • Hate is a collective expression of exclusion of others, and it is fostered and reproduced through false public perception. This study explores the objects and issues of hate discussed in Korean society using text mining techniques. To this end, we collected 17,867 news articles published from 1990 to 2020, constructed a co-word network, and performed cluster analysis. To derive an explicit co-word network highly related to hate, we split the articles into sentences in the preprocessing phase and extracted a total of 52,520 sentences containing the words 'hate', 'prejudice', and 'discrimination'. Analyzing word frequencies in the collected news data showed that the targets most frequently associated with hate in Korean society were women, race, and sexual minorities, and that the related issues concerned the relevant laws and crimes. Cluster analysis based on the co-word network yielded a total of six hate-related clusters. The largest cluster was 'genderphobic', accounting for 41.4% of the total, followed by 'sexual minority hatred' at 28.7%, 'racial hatred' at 15.1%, 'selective hatred' at 8.5%, 'political hatred' at 5.7%, and 'environmental hatred' at 0.3%. In the discussion, we comprehensively extracted all specific names of hate targets from the collected news data, which were not specifically revealed by the cluster analysis.
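The co-word network construction described above counts how often word pairs co-occur within the extracted sentences. A minimal sketch with hypothetical token lists in place of the study's preprocessed sentences:

```python
from collections import Counter
from itertools import combinations

# Hypothetical tokenized sentences, each already filtered to contain a target term.
sentences = [
    ["hate", "crime", "law"],
    ["hate", "women", "crime"],
    ["discrimination", "women", "law"],
]

def co_word_edges(token_lists):
    """Count co-occurring word pairs per sentence -> weighted co-word network edges."""
    edges = Counter()
    for tokens in token_lists:
        for a, b in combinations(sorted(set(tokens)), 2):
            edges[(a, b)] += 1
    return edges

edges = co_word_edges(sentences)
# ('crime', 'hate') co-occurs in two sentences.
```

Clustering is then run on the resulting weighted graph to group related hate topics.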

The Use of Reinforcement Learning and The Reference Page Selection Method to improve Web Spidering Performance (웹 탐색 성능 향상을 위한 강화학습 이용과 기준 페이지 선택 기법)

  • 이기철;이선애
    • Journal of the Korea Computer Industry Society / v.3 no.3 / pp.331-340 / 2002
  • The web is getting so huge and intractable that, without an intelligent information extractor, we would become more and more helpless. Conventional web spidering techniques for general-purpose search engines may be too slow for specific search engines, which concentrate only on specific areas or keywords. In this paper, a new model for improving web spidering capabilities is suggested and tested experimentally. How to select adequate reference web pages from the initial web page set relevant to a given specific area (or keywords) can be very important for reducing spidering time. Our reference web page selection method, DOPS, selects web pages dynamically and orthogonally, and it can also decide the appropriate number of reference pages using a newly defined measure. Even for a very specific area, this method worked comparably well, almost at the level of experts. Considering that experts cannot work on a huge initial page set and still have difficulty deciding the optimal number of reference web pages, this method seems very promising. We also applied reinforcement learning to the web environment, and DOPS-based reinforcement learning experiments show that our method performs quite favorably in terms of both the number of hyperlinks followed and time spent.
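A focused spider of this kind typically expands its frontier best-first, following the link with the highest estimated reward, which is the greedy policy a reinforcement-learned value function would induce. A sketch with made-up URLs and scores (an illustration of best-first spidering, not the paper's DOPS method itself):

```python
import heapq

def focused_crawl_order(frontier_scores):
    """Best-first spidering: always expand the URL with the highest
    estimated relevance score (negated for Python's min-heap)."""
    heap = [(-score, url) for url, score in frontier_scores.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _neg_score, url = heapq.heappop(heap)
        order.append(url)  # in a real spider, fetch the page and score its out-links
    return order

order = focused_crawl_order({"a.html": 0.2, "b.html": 0.9, "c.html": 0.5})
# -> ["b.html", "c.html", "a.html"]
```

In a learned spider, the scores would come from a value estimate trained on rewards for reaching on-topic pages.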


Exploratory research on the dynamic capabilities of leading firms: Research framework building (시장 선도 기업의 동태적 역량에 대한 탐색적 연구: 연구 프레임워크 구축)

  • Paek, Byung-Joo;Lee, Hee-Sang
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.12 / pp.8262-8273 / 2015
  • Sources of innovation and sustainable competitive advantage for market-leading firms in a dynamic environment are major concerns for both firm managers and academic researchers. For this reason, the dynamic capability view (DCV), which evolved from the resource-based view (RBV), is gaining popularity as a cornerstone of firm success in a dynamic market. The DCV proposes a number of new concepts and elements, but these are still under development. This study suggests a comprehensive constitution of dynamic capabilities by systematically reviewing 59 conceptual research papers drawn from a body of 238 publications. In the integrative conceptual framework, the elements of dynamic capabilities are segmented into organizational learning and strategic choices across sensing, seizing, and reconfiguring stages. In addition, our conceptual framework lays a foundation for further empirical studies investigating the sources of competitive advantage of market-leading firms.

Feature Extraction to Detect Hoax Articles (낚시성 인터넷 신문기사 검출을 위한 특징 추출)

  • Heo, Seong-Wan;Sohn, Kyung-Ah
    • Journal of KIISE / v.43 no.11 / pp.1210-1215 / 2016
  • Readership of online newspapers has grown with the proliferation of smart devices. However, fierce competition between Internet newspaper companies has led to a large increase in the number of hoax articles. Hoax articles are those whose title does not convey the content of the main story, giving readers wrong information about the contents. We note that hoax articles share certain characteristics, such as unnecessary celebrity quotations, a mismatch between title and content, or incomplete sentences. Based on these, we extract and validate features to identify hoax articles. We built a large-scale training dataset by analyzing text keywords in replies to articles and thus extracted five effective features. We evaluated a support vector machine classifier on the extracted features and observed 92% accuracy on our validation set. In addition, we present a selective bigram model to measure the consistency between title and content, which can also be used effectively to analyze short texts in general.
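The title-content consistency idea behind the selective bigram model can be sketched as the share of title bigrams that reappear in the body; a low overlap hints at a hoax title. This is a simplified illustration, not the paper's exact model, and the example article is invented:

```python
def bigrams(tokens):
    """Set of adjacent word pairs in a token list."""
    return {(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)}

def title_content_consistency(title, content):
    """Fraction of title bigrams that also occur in the body text."""
    t = bigrams(title.lower().split())
    c = bigrams(content.lower().split())
    return len(t & c) / len(t) if t else 0.0

score = title_content_consistency(
    "mayor opens new city library",
    "the mayor said the new city library opens next week",
)
# Two of the four title bigrams reappear in the body -> 0.5
```

A classifier would use this score as one feature alongside cues such as celebrity quotations or incomplete sentences.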

Automatic Determination of Usenet News Groups from User Profile (사용자 프로파일에 기초한 유즈넷 뉴스그룹 자동 결정 방법)

  • Kim, Jong-Wan;Cho, Kyu-Cheol;Kim, Hee-Jae;Kim, Byeong-Man
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.2 / pp.142-149 / 2004
  • It is important to retrieve exact information matching a user's needs from the mass of Usenet news and to filter the desired information quickly. Unlike an email system, Usenet requires users to register the news groups they are interested in beforehand in order to receive their articles. However, it is not easy for a novice to decide which news group is relevant to his or her interests. In this work, we present a service that classifies a user's preferred news groups among various news groups using a Kohonen network. We first extract candidate terms from example documents and then choose from them, through fuzzy inference, a number of representative keywords to be used in the Kohonen network. Observing the training patterns, we found a sparsity problem: many keyword dimensions in the training patterns are empty. Thus, this paper proposes a new method for training the neural network that removes unnecessary dimensions using the statistical coefficient of determination. Experimental results show that the proposed method is superior to the method using every dimension in terms of cluster overlap, defined using within-cluster and between-cluster distances.
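The dimension-reduction step, dropping keyword dimensions whose coefficient of determination with the target is low, can be sketched as follows, with toy binary patterns standing in for the paper's keyword vectors:

```python
def r_squared(xs, ys):
    """Coefficient of determination between one feature column and the target."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return cov * cov / (vx * vy)

def select_dimensions(patterns, labels, threshold=0.1):
    """Keep only the keyword dimensions whose R^2 with the label passes the threshold."""
    dims = len(patterns[0])
    return [d for d in range(dims)
            if r_squared([p[d] for p in patterns], labels) >= threshold]

# Toy data: dimension 0 tracks the label perfectly, dimension 1 is noise.
kept = select_dimensions([[1, 0], [1, 1], [0, 0], [0, 1]], [1, 1, 0, 0])
# -> [0]
```

Only the surviving dimensions would then feed the Kohonen network, shrinking the sparse input vectors.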

A Study on Video Search Method using the Image map (이미지 맵을 이용한 동영상 검색 제공방법에 관한 연구 - IPTV 환경을 중심으로)

  • Lee, Ju-Hwan;Lea, Jong-Ho
    • Proceedings of the Korean HCI Society Conference / 2008.02b / pp.298-303 / 2008
  • Watching a program on IPTV, with its numerous choices from the Internet, imposes the burden of searching and browsing for a favorite one. This paper introduces a new concept called Mosaic Map and presents how it provides preview information through image map links to other programs. In Mosaic Map, the pixels of a still image serve both as background shading and as thumbnails that link to other programs. This kind of contextualized preview of choices can help IPTV users associate the image with related programs without making visual saccades between watching IPTV and browsing the many choices. Experiments showed that Mosaic Map reduces the time to complete search and browsing compared to the legacy menu and web search.


Representative Keyword Extraction from Few Documents through Fuzzy Inference (퍼지추론을 이용한 소수 문서의 대표 키워드 추출)

  • 노순억;김병만;허남철
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.9 / pp.837-843 / 2001
  • In this work, we propose a new method of extracting and weighting representative keywords (RKs) from a few documents that might interest a user. To extract RKs, we first extract candidate terms and then choose from them, through fuzzy inference, a number of terms called initial representative keywords (IRKs). Then, by expanding and reweighting the IRKs using term co-occurrence similarity, the final RKs are obtained. The performance of our approach depends heavily on how effectively the IRKs are selected, so we chose fuzzy inference because it is more effective in handling the uncertainty inherent in selecting representative keywords of documents. The problem addressed in this paper can be viewed as that of calculating the center of document vectors. Therefore, to show the usefulness of our approach, we compare it with two well-known methods, Rocchio and Widrow-Hoff, on a number of document collections. The results show that our approach outperforms the other approaches.
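The Rocchio baseline that the paper compares against reduces, in its simplest positive-only form, to the centroid of the document vectors; a toy sketch with made-up vectors:

```python
def rocchio_centroid(doc_vectors):
    """Positive-only Rocchio: the coordinate-wise mean of the document vectors,
    i.e. the 'center of document vectors' the paper's task amounts to."""
    n = len(doc_vectors)
    dims = len(doc_vectors[0])
    return [sum(v[d] for v in doc_vectors) / n for d in range(dims)]

# Two toy term-weight vectors over a three-term vocabulary.
centroid = rocchio_centroid([[1.0, 0.0, 2.0], [3.0, 2.0, 0.0]])
# -> [2.0, 1.0, 1.0]
```

The proposed method differs by seeding the center with fuzzy-inferred IRKs and then reweighting them via term co-occurrence rather than averaging alone.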
