• Title/Summary/Keyword: Portal Sites

Comparison of Micro Mobility Patterns of Public Bicycles Before and After the Pandemic: A Case Study in Seoul (팬데믹 전후 공공자전거의 마이크로 모빌리티 패턴 비교: 서울시 사례 연구)

  • Jae-Hee Cho;Ga-Eun Baek;Il-Jung Seo
    • The Journal of Bigdata, v.7 no.2, pp.235-244, 2022
  • The rental history data of Seoul's public bicycles were analyzed to examine how a pandemic such as COVID-19 changed people's micro mobility. Data for 2019 and 2021, representing the periods before and after COVID-19, were compared. The data were collected from public data portal sites, and data marts were built for in-depth analysis. To compare the two periods, a riding-direction type dimension and a rental-station type dimension were added, and derived variables (rotation rate per unit, riding speed) were created. There was no significant difference in average rental time before and after COVID-19, but average rental distance and average riding speed decreased; even in the mobility of Ttareungi (Seoul's public bicycle), the slower rhythm of daily life is visible. On weekdays, usage peaked during commuting hours even before COVID-19, and this peak grew sharply afterward, which can be interpreted as people concerned about infection preferring Ttareungi over village buses as a means of micro mobility. The data mart-based visualization and analysis proposed in this study can provide insight for public bicycle operation and policy development. Future studies should combine SNS data such as Twitter and Instagram with public bicycle rental history data; examining bike users' behavior in various places is expected to raise the value of related research.
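To make the derived-variable step above concrete, here is a minimal pandas sketch of the before/after comparison. The file name and column names (year, rental_min, distance_m) are illustrative assumptions, not the paper's actual schema.

```python
import pandas as pd

# Hypothetical extract of the Seoul public bicycle rental history.
rentals = pd.read_csv("seoul_bike_rentals.csv")  # assumed file and columns

# Derived variable from the abstract: riding speed (km/h) from distance and time.
rentals["speed_kmh"] = (rentals["distance_m"] / 1000) / (rentals["rental_min"] / 60)

# Compare the two periods (2019 = before COVID-19, 2021 = after).
summary = (
    rentals[rentals["year"].isin([2019, 2021])]
    .groupby("year")[["rental_min", "distance_m", "speed_kmh"]]
    .mean()
)
print(summary)
```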

A Study on the Social Perception of Jiu-Jitsu Using Big data Analysis (빅데이터 분석을 활용한 주짓수의 사회적 인식 연구)

  • Kun-hee Kim
    • The Journal of the Convergence on Culture Technology, v.10 no.3, pp.209-217, 2024
  • The purpose of this study is to explore development plans for jiu-jitsu by analyzing social interest in and perceptions of it using big data analysis. Network analysis, centrality analysis, and CONCOR analysis were conducted on data collected from major domestic portal sites over the last 10 years. First, 'judo' was found to be the most important related word in the network analysis, and it was also the most important word in the degree centrality analysis. In the closeness centrality analysis, 'defender' was the most important word, and in betweenness centrality it was 'sports'. Finally, the CONCOR analysis formed four clusters: related sports and marketing, jiu-jitsu competitions, belt tests, and supplies and expenses. The study concludes, first, that words such as 'judo', 'exercise', 'competition', 'dobok', 'gym', and 'graduation' should be actively used to promote jiu-jitsu. Second, it is necessary to share information on training costs through various channels, to spread awareness of the graduation process and methods, and to develop safety products and create a safe training culture. Third, ways should be found to continuously increase the influx of new trainees by attracting regular competitions.
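For readers unfamiliar with the three centrality measures named above, the following sketch computes them with networkx on a toy keyword co-occurrence network; the edges and weights are invented placeholders, not the study's data.

```python
import networkx as nx

# Toy keyword co-occurrence network (invented weights).
G = nx.Graph()
G.add_weighted_edges_from([
    ("jiu-jitsu", "judo", 120), ("jiu-jitsu", "exercise", 95),
    ("jiu-jitsu", "competition", 80), ("judo", "sports", 60),
    ("exercise", "gym", 55), ("competition", "dobok", 30),
])

# The three measures used in the study.
print(nx.degree_centrality(G))
print(nx.closeness_centrality(G))
print(nx.betweenness_centrality(G))
```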

A Document Collection Method for More Accurate Search Engine (정확도 높은 검색 엔진을 위한 문서 수집 방법)

  • Ha, Eun-Yong;Gwon, Hui-Yong;Hwang, Ho-Yeong
    • The KIPS Transactions:PartA, v.10A no.5, pp.469-478, 2003
  • Internet search engines using web robots visit servers connected to the Internet periodically or non-periodically. They extract and classify the collected data according to their own methods and construct the databases on which web information search is based. These procedures are repeated very frequently on the Web. Many search engine sites operate this processing strategically in order to become popular Internet portal sites that provide users with ways to find information on the web. A web search engine contacts many thousands of web servers, maintaining its existing databases while navigating to gather data about newly connected web servers. But these jobs are decided and conducted solely by the search engines: they run web robots to collect data from web servers without any knowledge of the servers' states, issuing large numbers of requests and receiving responses. This is one cause of increased Internet traffic. If each web server instead notified web robots with a summary of its public documents, and each robot used this summary to collect only the corresponding documents, unnecessary Internet traffic would be eliminated, the accuracy of search engine data would increase, and the processing overhead of web-related jobs on both web servers and search engines would decrease. In this paper, a monitoring system on the web server is designed and implemented, which monitors the states of documents on the server, summarizes the changes to modified documents, and sends the summary information to web robots that want to get documents from the server. An efficient web robot for the search engine is also designed and implemented, which uses the notified summary to fetch the corresponding documents from the web servers, extracts the index, and updates its databases.
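A minimal sketch of the server-side monitoring idea, assuming a file-based document root and a JSON state file (both assumptions for illustration): the server hashes each public document, records the hashes, and reports only the documents whose hashes changed, so a cooperating robot can fetch just those.

```python
import hashlib
import json
from pathlib import Path

def build_change_summary(doc_root: str, state_file: str = "doc_state.json") -> list[str]:
    """Return paths of documents whose content changed since the last run."""
    state_path = Path(state_file)
    old = json.loads(state_path.read_text()) if state_path.exists() else {}
    new, changed = {}, []
    for doc in Path(doc_root).rglob("*.html"):
        digest = hashlib.sha256(doc.read_bytes()).hexdigest()
        new[str(doc)] = digest
        if old.get(str(doc)) != digest:
            changed.append(str(doc))
    state_path.write_text(json.dumps(new))
    return changed

# A cooperating web robot would request this summary and re-crawl only the
# returned paths instead of re-visiting every document on the server.
```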

Damage of Whole Crop Maize in Abnormal Climate Using Machine Learning (이상기상 시 사일리지용 옥수수의 기계학습을 이용한 피해량 산출)

  • Kim, Ji Yung;Choi, Jae Seong;Jo, Hyun Wook;Kim, Moon Ju;Kim, Byong Wan;Sung, Kyung Il
    • Journal of The Korean Society of Grassland and Forage Science, v.42 no.2, pp.127-136, 2022
  • This study was conducted to estimate the damage to Whole Crop Maize (WCM) under abnormal climate using machine learning and to present the damage through mapping. The collected WCM dataset contained 3,232 records. The climate data were collected from the Korea Meteorological Administration's open meteorological data portal. Deep Crossing was used as the machine learning model. The damage was calculated by machine learning using climate data from the Automated Synoptic Observing System (ASOS; 95 sites), defined as the difference between the dry matter yield under normal climate (DMYnormal) and under abnormal climate (DMYabnormal). The normal climate was set as the 40 years of climate data matching the years of the WCM data (1978~2017). The level of abnormal climate was set as a multiple of the standard deviation, applying the World Meteorological Organization (WMO) standard. DMYnormal ranged from 13,845~19,347 kg/ha. The damage to WCM differed by region and by level of abnormal climate, ranging from -305 to 310, -54 to 89, and -610 to 813 kg/ha for abnormal temperature, precipitation, and wind speed, respectively. The maximum damage was 310 kg/ha when the abnormal temperature was at the +2 level (+1.42 ℃), 89 kg/ha when the abnormal precipitation was at the -2 level (-0.12 mm), and 813 kg/ha when the abnormal wind speed was at the -2 level (-1.60 m/s). The damage calculated through the WMO method was presented as a map using QGIS. When calculating the damage to WCM due to abnormal climate, some blank areas remained because no data were available there. To cover those blank areas, the Automatic Weather System (AWS), which provides data from more sites than the ASOS, could be used.
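The damage definition reduces to damage = DMYnormal - DMYabnormal, with the abnormal level expressed as a multiple of the 40-year standard deviation. The sketch below illustrates that arithmetic with an invented yield-response function standing in for the trained Deep Crossing model; all numbers are illustrative, not the paper's data.

```python
import numpy as np

# Invented 40-year "normal" temperature series for one site.
temps_40yr = np.random.default_rng(0).normal(12.0, 0.71, 40)
mean, sd = temps_40yr.mean(), temps_40yr.std(ddof=1)

level = +2                          # "+2 level" abnormal temperature
abnormal_temp = mean + level * sd   # WMO-style multiple of the std. deviation

def predict_dmy(temp: float) -> float:
    """Placeholder for the trained model's dry matter yield prediction."""
    return 16_500.0 - 220.0 * abs(temp - mean)  # toy response curve, kg/ha

damage = predict_dmy(mean) - predict_dmy(abnormal_temp)  # DMYnormal - DMYabnormal
print(f"damage at {level:+d} level: {damage:.0f} kg/ha")
```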

An Analysis for Deriving New Convergent Service of Mobile Learning: The Case of Social Network Analysis and Association Rule (모바일 러닝에서의 신규 융합서비스 도출을 위한 분석: 사회연결망 분석과 연관성 분석 사례)

  • Baek, Heon;Kim, Jin Hwa;Kim, Yong Jin
    • Information Systems Review, v.15 no.3, pp.1-37, 2013
  • This study is conducted to explore the possibility of service convergence to promote mobile learning. It attempts to identify how mobile learning services are provided, which of them are most popular, and which are highly demanded by users. The study also investigates potential opportunities for converging mobile services and e-learning, and is extended to examine the possibility of actively converging the services common to both. Important variables were identified from related web pages of portal sites using social network analysis (SNA) and association rules. Because the number and type of variables differ across web pages, SNA was used to handle the difficulty of identifying the degree of these complex connections, and association analysis was used to identify association rules among the variables. The study revealed that the most frequent services among those common to mobile services and e-learning were Games and SNS, followed by Payment, Advertising, Mail, Event, Animation, Cloud, e-Book, Augmented Reality, and Jobs. It also found that Search, News, and GPS were in very high demand among mobile services, while Simulation, Culture, and Public Education were highly demanded in e-learning. In addition, among the common variables, the pairs showing high service convergence in mobile services were Games and SNS, Games and Sports, SNS and Advertising, Games and Event, SNS and e-Book, and Games and Community, while in e-learning services Games, Animation, Counseling, and e-Book, with Simulation, Speaking, Public Education, and Attendance Management as preceding services, turned out to be highly convergent. Finally, the study attempts to predict the possibility of active service convergence focusing on Games, SNS, and e-Book, the highly demanded services common to mobile and e-learning services. This study is expected to suggest a strategic direction for promoting mobile learning by converging mobile services and e-learning.
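As a toy illustration of the association-rule step, the sketch below computes support and confidence for service pairs over a handful of invented "web page" transactions; a real analysis would use the crawled portal pages and an Apriori-style miner.

```python
from itertools import combinations

# Invented service sets observed on individual web pages.
pages = [
    {"Games", "SNS", "Advertising"},
    {"Games", "SNS", "e-Book"},
    {"Games", "Event"},
    {"SNS", "Advertising"},
]

def support(itemset: set[str]) -> float:
    """Fraction of pages containing every service in the itemset."""
    return sum(itemset <= page for page in pages) / len(pages)

# Print pairwise rules that clear a confidence threshold.
services = sorted(set().union(*pages))
for a, b in combinations(services, 2):
    conf = support({a, b}) / support({a})
    if conf >= 0.5:
        print(f"{a} -> {b}: support={support({a, b}):.2f}, confidence={conf:.2f}")
```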

Analyzing the Issue Life Cycle by Mapping Inter-Period Issues (기간별 이슈 매핑을 통한 이슈 생명주기 분석 방법론)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.20 no.4, pp.25-41, 2014
  • Recently, the number of social media users has increased rapidly because of the prevalence of smart devices. As a result, the amount of real-time data has been growing exponentially, which in turn is generating more interest in using such data to create added value. For instance, several attempts are being made to identify social issues by analyzing the search keywords frequently used on portal sites and the words regularly mentioned on various social media. The technique of "topic analysis" is employed to identify topics and themes from a large number of text documents. As one of the most prevalent applications of topic analysis, issue tracking investigates changes in the social issues identified through topic analysis. Traditionally, issue tracking is conducted by identifying the main topics of the documents covering the entire period at once and then analyzing the occurrence of each topic period by period. This traditional approach has two limitations. First, when a new period is added, topic analysis must be repeated over the documents of the entire period, rather than only over the new documents of the added period, which imposes significant time and cost burdens; the traditional approach is therefore difficult to apply whenever an additional period must be analyzed. Second, issues are not only created and terminated constantly, but one issue can also be split into several issues, and multiple issues can be merged into a single issue. In other words, each issue has a life cycle consisting of creation, transition (merging and segmentation), and termination, and existing issue tracking methods do not address the connections and effect relationships between issues. The purpose of this study is to overcome these two limitations: the limitation of the analysis method and the lack of consideration of the changeability of issues. Suppose we perform a separate topic analysis for each period; mapping the issues of different periods then becomes essential for tracing issue trends, but discovering connections between issues of different periods is not easy because the issues derived for each period are mutually heterogeneous. In this study, to overcome these limitations without analyzing the entire period's documents simultaneously, the analysis is performed independently for each period, and issue mapping is then performed to link the identified issues of adjacent periods. An approach that integrates the detailed periods is presented, and the issue flow over the entire integrated period is depicted. Because the entire issue life cycle, including creation, transition (merging and segmentation), and termination, is identified and examined systematically, the changeability of issues is fully analyzed. The proposed methodology is highly efficient in terms of time and cost while sufficiently considering the changeability of issues, and its results can be adapted to practical situations.
By applying the proposed methodology to actual Internet news, its potential practical applications are analyzed. The methodology was able to extend the period of analysis and follow the course of each issue's life cycle. It can thus facilitate a clearer understanding of complex social phenomena through topic analysis.
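One plausible reading of the issue-mapping step is to link each issue of period t to its most similar issue in period t+1, e.g., by cosine similarity of topic-word distributions over a shared vocabulary. The sketch below follows that reading; the similarity measure and threshold are assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def map_issues(period_t: np.ndarray, period_t1: np.ndarray, threshold: float = 0.6):
    """Link issues across periods; rows are issues, columns a shared vocabulary.

    A link below the threshold marks a terminated issue; unmatched issues in
    period t+1 are newly created; many-to-one links indicate merging and
    one-to-many links indicate segmentation.
    """
    links = []
    for i, topic in enumerate(period_t):
        sims = [cosine(topic, other) for other in period_t1]
        j = int(np.argmax(sims))
        links.append((i, j) if sims[j] >= threshold else (i, None))
    return links
```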

Analysis of the Imaging Dose for IGRT/Gated Treatments (영상유도 및 호흡동조 방사선치료에서의 영상장비에 의한 흡수선량 분석)

  • Shin, Jung-Suk;Han, Young-Yih;Ju, Sang-Gyu;Shin, Eun-Hyuk;Hong, Chae-Seon;Ahn, Yong-Chan
    • Radiation Oncology Journal, v.27 no.1, pp.42-48, 2009
  • Purpose: The introduction of image-guided radiation therapy/four-dimensional radiation therapy (IGRT/4DRT) potentially increases the accumulated dose to patients from imaging and verification processes compared to conventional practice. It is therefore essential to investigate the level of the imaging dose to patients when IGRT/4DRT devices are installed. The imaging dose level was monitored and compared with that of pre-IGRT practice. Materials and Methods: A four-dimensional CT (4DCT) unit (GE, Ultra Light Speed 16), a simulator (Varian Acuity), and a Varian IX unit with an on-board imager (OBI) and cone beam CT (CBCT) were installed. The surface doses to a RANDO phantom (The Phantom Laboratory, Salem, NY, USA) were measured with the newly installed devices and with pre-existing devices, including a single-slice CT scanner (GE, Light Speed), a simulator (Varian Ximatron), and an L-gram linear accelerator (Varian 2100C Linac). The surface doses were measured using thermoluminescent dosimeters (TLDs) at eight sites: the brain, eye, thyroid, chest, abdomen, ovary, prostate, and pelvis. Results: Compared to single-slice non-gated CT imaging, 4DCT imaging increased the dose to the chest and abdomen approximately ten-fold (1.74±0.34 cGy versus 23.23±3.67 cGy). Imaging doses with the Acuity simulator were smaller than those with the Ximatron simulator (0.91±0.89 cGy versus 6.77±3.56 cGy). The dose with the electronic portal imaging device (EPID; Varian IX unit) was approximately 50% of the dose with the L-gram linear accelerator (1.83±0.36 cGy versus 3.80±1.67 cGy). The doses from OBI fluoroscopy and low-dose mode CBCT were 0.97±0.34 cGy and 2.3±0.67 cGy, respectively. Conclusion: The use of 4DCT is the major source of increased imaging dose to patients. OBI and CBCT doses were small, but the accumulated dose from everyday verification needs to be considered.

User-Perspective Issue Clustering Using Multi-Layered Two-Mode Network Analysis (다계층 이원 네트워크를 활용한 사용자 관점의 이슈 클러스터링)

  • Kim, Jieun;Kim, Namgyu;Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.93-107, 2014
  • In this paper, we report what we have observed with regard to user-perspective issue clustering based on multi-layered two-mode network analysis. This work is significant in the context of how companies collect data about customer needs. Most companies have failed to uncover such needs for products or services properly from demographic data such as age, income level, and purchase history. Because of excessive reliance on limited internal data, most recommendation systems do not provide decision makers with business information appropriate to current circumstances. Part of the problem is the increasing regulation of personal data gathering and privacy, which makes demographic and transaction data collection more difficult and poses a significant hurdle for traditional recommendation approaches, because those systems demand a great deal of personal data or transaction logs. Our motivation for presenting this paper is our strong belief, and evidence, that most customers' requirements for products can be effectively and efficiently analyzed from unstructured textual data such as Internet news text. To derive users' requirements from textual data obtained online, the proposed approach constructs double two-mode networks, a user-news network and a news-issue network, and integrates them into one quasi-network as the input for issue clustering. One contribution of this research is a methodology that utilizes enormous amounts of unstructured textual data for user-oriented issue clustering by leveraging existing text mining and social network analysis. Building multi-layered two-mode networks from news logs requires tools such as text mining and topic analysis; we used SAS Enterprise Miner 12.1, which provides text miner and cluster modules for textual data analysis, as well as NetMiner 4 for network visualization and analysis. Our approach consists of six main phases: crawling, topic analysis, access pattern analysis, network merging, network conversion, and clustering. In the first phase, we collect visit logs for news sites using a crawler. After gathering the unstructured news article data, the topic analysis phase extracts issues from each news article to build the news-issue network; for simplicity, 100 topics are extracted from 13,652 articles. In the third phase, a user-article network is constructed from the access patterns derived from web transaction logs. The double two-mode networks are then merged into a user-issue quasi-network. Finally, in the user-oriented issue clustering phase, we classify issues through structural equivalence and compare the results with clustering results from statistical tools and network analysis. An experiment with a large dataset was performed to build the multi-layer two-mode network, after which we compared the issue-clustering results from SAS with those of the network analysis. The experimental dataset came from a website-ranking service and the biggest portal site in Korea; the sample contains 150 million transaction logs of 5,000 panel users and 13,652 news articles over one year. User-article and article-issue networks were constructed and merged into a user-issue quasi-network using NetMiner. Our issue-clustering results, obtained with the Partitioning Around Medoids (PAM) algorithm and Multidimensional Scaling (MDS), are consistent with the results of SAS clustering.
In spite of extensive efforts to provide user information through recommendation systems, most projects succeed only when companies have sufficient data about users and transactions. Our proposed methodology, user-perspective issue clustering, can provide practical support for decision making in companies because it enriches user-related data from unstructured textual data. To overcome the insufficient-data problem of traditional approaches, the methodology infers customers' real interests from web transaction logs. In addition, we suggest topic analysis and issue clustering as practical means of issue identification.
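The network-merging phase has a compact linear-algebra reading: with a user-article incidence matrix UA and an article-issue incidence matrix AI, the product UA x AI weights each user-issue pair by the number of articles linking them. A minimal sketch with toy matrices (the paper's merging details may differ):

```python
import numpy as np

UA = np.array([[1, 1, 0],    # user 0 read articles 0 and 1
               [0, 1, 1]])   # user 1 read articles 1 and 2
AI = np.array([[1, 0],       # article 0 covers issue 0
               [1, 1],       # article 1 covers issues 0 and 1
               [0, 1]])      # article 2 covers issue 1

user_issue = UA @ AI         # user-issue quasi-network weights
print(user_issue)            # rows: users, columns: issues
```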

Extraction of Landmarks Using Building Attribute Data for Pedestrian Navigation Service (보행자 내비게이션 서비스를 위한 건물 속성정보를 이용한 랜드마크 추출)

  • Kim, Jinhyeong;Kim, Jiyoung
    • KSCE Journal of Civil and Environmental Engineering Research, v.37 no.1, pp.203-215, 2017
  • Recently, interest in Pedestrian Navigation Service (PNS) has increased with the diffusion of smartphones and improvements in location-determination technology, and landmarks are effective in route guidance for pedestrians because of the characteristics of pedestrian movement and their effect on the success rate of path finding. Accordingly, research on extracting landmarks has progressed. However, preceding studies are limited in that they considered only the differences between buildings and not the visual attention drawn by the map display of a PNS. This study addresses this problem by defining building attributes as local variables and global variables: local variables represent the differences between buildings and thus reflect the saliency of buildings, while global variables represent the inherent characteristics of buildings and thus reflect visual attention. This study also considers the connectivity of the network and solves the overlap problem among landmark candidate groups using a network Voronoi diagram. To extract landmarks, we defined building attribute data based on preceding research, selected choice points for pedestrians in the pedestrian network data, and determined landmark candidate groups at each choice point. Building attribute data were calculated for the extracted landmark candidate groups, and landmarks were finally extracted by principal component analysis. We applied the proposed method to a part of Gwanak-gu, Seoul, and evaluated the extracted landmarks by comparing them with the labels and landmarks used by portal sites such as NAVER and DAUM. In conclusion, 132 (60.3%) of the 219 landmarks used by NAVER and DAUM were also extracted by the proposed method, and we confirmed that 228 further landmarks, which have no labels or landmarks in NAVER and DAUM, were helpful for determining direction changes in local-level path finding.
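A minimal sketch of the final scoring step, assuming (hypothetically) that candidates at a choice point are ranked by their first principal component over standardized building attributes; the attribute columns are illustrative, not the paper's variable set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Invented attribute rows for three candidate buildings at one choice point:
# height_m, floor_area_m2, color_saliency, has_signage
candidates = np.array([
    [45.0, 1200.0, 0.9, 1.0],
    [12.0,  300.0, 0.2, 0.0],
    [30.0,  800.0, 0.7, 1.0],
])

scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(candidates))
# The sign of a principal component is arbitrary, so orient it so that larger
# attribute values score higher before picking the landmark.
pc1 = scores.ravel() * np.sign(np.corrcoef(scores.ravel(), candidates[:, 0])[0, 1])
print(f"selected candidate index: {int(np.argmax(pc1))}")
```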

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.21 no.1, pp.83-98, 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, finding the desired information takes a long time because the internet presents too much information that is not required. Consequently, this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots to interact with users: when users click an object on the video, they can instantly see additional information related to it. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time it is displayed on the video; and (3) set an interactive action linking it to pages or hyperlinks. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time in step (2). wireWAX reduces the effort of setting an object's location and display time because it uses a vision-based annotation method, but users must wait for object detection and tracking to finish. It is therefore desirable to reduce the processing time of step (2) by effectively combining the benefits of manual and vision-based annotation. This paper proposes a novel annotation method that allows annotators to annotate easily based on face areas. The method consists of two steps: a pre-processing step and an annotation step. Pre-processing is needed so that the system detects shots for users who want to find video content easily, and proceeds as follows: 1) extract shots from the video frames using color histogram-based shot boundary detection; 2) cluster shots by similarity and align them as shot sequences; and 3) detect and track faces in all shots of each shot sequence and save the results into the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then a keyframe of a shot in that sequence; 2) the annotator annotates objects at positions relative to the actor's face on the selected keyframe, and the same objects are then annotated automatically through the end of the shot sequence wherever a face area was detected; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after object annotation: wrongly aligned shots, wrongly detected faces, and inaccurate locations. Users can also apply an interpolation method to restore the positions of objects deleted by the feedback, and finally save the annotated object data into the interactive object metadata. The paper then presents an interactive video authoring system implemented to verify the performance of the proposed annotation method. The experiments analyze object annotation time and report a user evaluation. The average object annotation time shows that the proposed tool is twice as fast as existing authoring tools; occasionally it took longer, because wrong shots were detected during pre-processing.
The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems. Nineteen recruited experts answered 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), which was designed by IBM for evaluating systems. The evaluation showed that the proposed tool is about 10% more useful for authoring interactive video than the other interactive video authoring systems.
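A minimal sketch of the color histogram-based shot boundary detection described in the pre-processing step, using OpenCV; the video path, histogram bins, and threshold are assumptions for illustration.

```python
import cv2

def detect_shot_boundaries(path: str, threshold: float = 0.5) -> list[int]:
    """Return frame indices where a new shot likely begins."""
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Low correlation between consecutive histograms marks a shot change.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```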