• Title/Summary/Keyword: road vector (도로 벡터)

Search Results: 1,020, Processing Time: 0.029 seconds

A Study on the Characteristics of Tropical Cyclone Passage Frequency over the Western North Pacific using Empirical Orthogonal Function (경험적 직교함수를 이용한 북서태평양 열대저기압의 이동빈도 특성에 관한 연구)

  • Choi, Ki-Seon;Kang, Ki-Ryong;Kim, Do-Woo;Hwang, Ho-Seong;Lee, Sang-Ryong
    • Journal of the Korean earth science society
    • /
    • v.30 no.6
    • /
    • pp.721-733
    • /
    • 2009
  • A pattern of tropical cyclone (TC) movement in the western North Pacific was studied using the empirical orthogonal function (EOF) and best track data from 1951 to 2007. The independent variable used in this study was defined as the frequency of tropical cyclone passage in each 5 by 5 degree grid cell. The 1st, 2nd and 3rd modes were the east-west, north-south and diagonal variation patterns, respectively. Based on the time series of each component, the signs of the first and second modes changed in 1997 and 1991, respectively, which seems to be related to the fact that the passage frequency was higher in the South China Sea during the roughly 20 years before the 1990s and higher in the East Asian area during the most recent 20 years. When the eigenvectors of the first and second modes were negative and TCs moved into the western North Pacific, TCs formed farther to the east than in the case of positive eigenvectors. The first mode seems to be related to the pressure pattern south of Lake Baikal, the second mode to the variation pattern around 30°N, and the third mode to the pressure pattern around Japan. In the correlation analysis with SST anomalies, the first mode was also closely related to ENSO and negatively correlated with the Niño-3.4 index.
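The EOF decomposition used in the study above is, in practice, an SVD of the anomaly field. A minimal sketch on synthetic data (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def eof_modes(freq, n_modes=3):
    """Leading EOF modes of a (time, grid) passage-frequency matrix.

    freq: 2-D array, rows are years, columns are 5x5-degree grid cells.
    Returns (eofs, pcs, explained) for the first n_modes.
    """
    anom = freq - freq.mean(axis=0)            # remove the time mean per cell
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    eofs = vt[:n_modes]                        # spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]         # time series of each mode
    explained = (s**2 / np.sum(s**2))[:n_modes]  # variance fraction per mode
    return eofs, pcs, explained
```

Sign changes of a mode over time, as discussed in the abstract, would then be read off the columns of `pcs`.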

Research on hybrid music recommendation system using metadata of music tracks and playlists (음악과 플레이리스트의 메타데이터를 활용한 하이브리드 음악 추천 시스템에 관한 연구)

  • Hyun Tae Lee;Gyoo Gun Lim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.145-165
    • /
    • 2023
  • Recommendation systems play a significant role in relieving the difficulty of selecting information among the rapidly increasing amount of information caused by the development of the Internet, and in efficiently displaying information that fits individual interests. In particular, without the help of recommendation systems, e-commerce and OTT companies cannot overcome the long-tail phenomenon, in which only popular products are consumed, as the number of products and contents rapidly increases. Therefore, research on recommendation systems is being actively conducted to overcome this phenomenon and to provide information or contents aligned with users' individual interests, in order to induce customers to consume various products or contents. Usually, collaborative filtering, which utilizes users' historical behavioral data, shows better performance than content-based filtering, which utilizes users' preferred contents. However, collaborative filtering can suffer from the cold-start problem, which occurs when users' historical behavioral data are lacking. In this paper, a hybrid music recommendation system that can mitigate the cold-start problem is proposed, based on the playlist data of the Melon music streaming service provided by Kakao Arena for the music playlist continuation competition. The goal of this research is to use the music tracks included in the playlists, together with the metadata of music tracks and playlists, to predict the remaining tracks when half or all of the tracks are masked. Therefore, two different recommendation procedures were conducted depending on the situation. When music tracks are included in the playlist, LightFM is used to utilize the track list of the playlists and the metadata of each track.
Then, the result of the Item2Vec model, which uses vector embeddings of music tracks, tags and titles for recommendation, is combined with the result of the LightFM model to create the final recommendation list. When no music tracks are available in the playlist but the playlist's tags and title are available, recommendations were made by finding similar playlists based on playlist vectors built by aggregating pre-trained FastText embedding vectors of each playlist's tags and titles. As a result, not only was the cold-start problem mitigated, but the system also achieved better performance than ALS, BPR and Item2Vec by using the metadata of both music tracks and playlists. In addition, it was found that the LightFM model that uses only artist information as an item feature shows the best performance compared to the other LightFM models that use other item features of music tracks.
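The combination step described above, merging per-track scores from two models into one list, can be sketched as a normalized weighted blend. This is an illustrative reconstruction, not the paper's exact scheme; the weighting parameter `alpha` is hypothetical:

```python
import numpy as np

def blend_recommendations(lightfm_scores, item2vec_scores, alpha=0.5, top_k=10):
    """Blend two models' scores (dicts: track_id -> score) into one top-k list.

    Scores are min-max normalized per model before the weighted sum, so
    neither model dominates purely by scale; alpha weights the LightFM side.
    """
    def normalize(scores):
        vals = np.array(list(scores.values()), dtype=float)
        lo, hi = vals.min(), vals.max()
        span = hi - lo if hi > lo else 1.0
        return {k: (v - lo) / span for k, v in scores.items()}

    a, b = normalize(lightfm_scores), normalize(item2vec_scores)
    candidates = set(a) | set(b)
    blended = {t: alpha * a.get(t, 0.0) + (1 - alpha) * b.get(t, 0.0)
               for t in candidates}
    return sorted(blended, key=blended.get, reverse=True)[:top_k]
```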

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.749-758
    • /
    • 2004
  • In this paper, we propose a new method that utilizes only a raw corpus, without additional human effort, for disambiguation of target word selection in English-Korean machine translation. We use two data-driven techniques: one is Latent Semantic Analysis (LSA) and the other Probabilistic Latent Semantic Analysis (PLSA). These two techniques can represent complex semantic structures in given contexts such as text passages. We construct linguistic semantic knowledge using the two techniques and use that knowledge for target word selection in English-Korean machine translation. For target word selection, we utilize a grammatical relationship stored in a dictionary. We use the k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection, and estimate the distance between instances based on these models. In the experiments, we use TREC AP news data for construction of the latent semantic space and the Wall Street Journal corpus for evaluation of target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, using correlation analysis, we showed the relationship between the accuracy and two important factors: the dimensionality of the latent space and the k value of k-NN learning.
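A minimal sketch of the two ingredients just described: a truncated-SVD latent space (LSA) and k-NN sense selection by cosine distance. A toy term-document matrix stands in for the corpus; the helper names are illustrative:

```python
import numpy as np

def lsa_vectors(term_doc, k):
    """Latent semantic vectors for terms: truncated SVD of a
    term-document count matrix (rows = terms, columns = documents)."""
    u, s, _ = np.linalg.svd(term_doc.astype(float), full_matrices=False)
    return u[:, :k] * s[:k]

def knn_select(context_vec, example_vecs, sense_labels, k=3):
    """Choose a target-word sense by majority vote among the k nearest
    labeled example contexts, using cosine distance in the latent space."""
    def cos_dist(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        return 1.0 - float(np.dot(a, b)) / denom
    order = np.argsort([cos_dist(context_vec, e) for e in example_vecs])
    votes = [sense_labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)
```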

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because similarities are calculated from direct connections and common features among customers. For this reason, hybrid techniques were designed that use content-based filtering together. At the same time, efforts have been made to solve these problems by applying the structural characteristics of social networks, calculating similarities indirectly through the similar customers placed between two customers. This means creating a customer network based on purchasing data and calculating the similarity between two customers from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation, and the centrality metrics of the network can be utilized to calculate it. Different centrality metrics are important in that they may have different effects on recommendation performance; furthermore, the effect of these centrality metrics on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but also for the entire set of customers or products. By considering a customer's purchase of an item as a link generated between the customer and the item on the network, predicting user acceptance of a recommendation is reduced to predicting whether a new link will be created between them.
As classification models fit the purpose of solving this binary problem of whether a link forms or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for performance evaluation were order data collected from an online shopping mall over four years and two months. The first three years and eight months of records were organized into the social network, and the last four months' records were used to train and evaluate the recommender models. Experiments with the centrality metrics applied to each model show that the recommendation acceptance rates of the centrality metrics differ for each algorithm at a meaningful level. In this work, we analyzed only four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show broadly similar performance across the models. Degree centrality ranks in the middle across the models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance according to the model: it ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings with low performance in the support vector machine and k-nearest neighbors models. As the experiment results reveal, in a classification model, network centrality metrics over a subnetwork that connects two nodes can effectively predict the connectivity between those nodes in a social network, and each metric performs differently depending on the classification model type.
This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
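Two of the four centrality metrics above can be computed directly from an adjacency structure; a minimal sketch for an undirected graph stored as `{node: set of neighbors}` (the resulting values would then serve as classifier features in a setup like the one described):

```python
from collections import deque

def degree_centrality(adj):
    """Degree centrality: each node's degree divided by (n - 1)."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Closeness centrality via BFS shortest paths on an unweighted graph:
    (reachable nodes) / (sum of distances to them)."""
    result = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total = sum(dist.values())
        result[src] = (len(dist) - 1) / total if total > 0 else 0.0
    return result
```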

Effect of the Contents Ratio of Panaxadiol Ginsenosides Extracted from Various Compartment of Ginseng on the Transcription of Cu/Zn Superoxide Dismutase Gene (홍삼의 각 부위에서 추출된 Panaxadiol분획의 함량비에 따른 유해산소제거효소(Cu/Zn Superoxide Dismutase) 유도효과)

  • Chang Mun Seog;Choi Kang Ju;Rho Hyune Mo
    • Journal of Ginseng Research
    • /
    • v.23 no.1 s.53
    • /
    • pp.44-49
    • /
    • 1999
  • Cu/Zn superoxide dismutase (SOD1) is a protective enzyme responsible for the dismutation of superoxide radicals within the cell, converting superoxide radicals to oxygen and hydrogen peroxide, which is in turn changed to oxygen and water by catalase. Previously, we reported that panaxadiol (PD) and its ginsenoside Rb2 induced the expression of the SOD1 gene through the AP2 binding site. Here, we examined the effect of subfractions of panaxadiol ginsenosides, extracted from different parts of the ginseng root and possessing various ratios of panaxadiol to panaxatriol, on the induction of SOD1 gene expression. To explore this possibility, the upstream regulatory region of SOD1 was linked to the chloramphenicol acetyltransferase (CAT) structural gene and introduced into human hepatoma HepG2 cells. We observed that the transcriptional activation of SOD1 was proportional to the content ratio of panaxadiol ginsenosides. Consistent with these results, the total extract prepared from the fine hairy root, which contains a higher ratio of panaxadiol to panaxatriol (about 2.6), increased SOD1 transcription about 3-fold. These results suggest that the panaxadiol fraction can induce SOD1 and that the total extract of the ginseng fine hairy root would be a useful material as a functional food for SOD1 induction.


The Internet GIS Infrastructure for Interoperability: MAP (Mapping Assistant Protocol) (상호운용을 위한 인터넷 GIS 인프라구조 : MAP(Mapping Assistant Protocol))

  • 윤석찬;김영섭
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10a
    • /
    • pp.424-426
    • /
    • 1998
  • Research on Internet-based GIS software development and applications for the efficient sharing of spatial information is actively under way. We review the basic requirements of Internet GIS, the development models proposed so far and their problems, and present an Internet GIS infrastructure that supports OpenGIS interoperability on the basis of standard Internet technologies, reflecting recent trends in web technology standards. A standardized Internet GIS must improve speed, resolve security problems over TCP/IP, comply with the OpenGIS specifications for sharing spatial data, and have an architecture in which the client/server load is optimized. In particular, web-centric standard technologies such as HTTP-NG, XML, and SSL should be applied together. The new infrastructure consists of an enhanced HTTP/MAP server attached to a GIS database, and MAP clients. The MAP client switches to a windowing environment capable of displaying GIS data according to the MIME type, and GIS datasets are formatted through MapML (Mapping Markup Language), which is based on XML. A client requests the layers and vector data of a region defined in a MapML document, and the MAP server extracts them from the GIS database in WKB or so-called VML form and sends them to the client. For a given region, various additional information can be browsed through the attributes defined in MapML. Since MAP operates in the same way as HTTP, it enables digital certification, protection of GIS data through encryption, efficient distribution of client/server load, and representation of various GIS attributes through XML. The proposed architecture is being implemented in an Apache + Amiya + Crass D/B + MapML environment.


Classification of Transport Vehicle Noise Events in Magnetotelluric Time Series Data in an Urban area Using Random Forest Techniques (Random Forest 기법을 이용한 도심지 MT 시계열 자료의 차량 잡음 분류)

  • Kwon, Hyoung-Seok;Ryu, Kyeongho;Sim, Ickhyeon;Lee, Choon-Ki;Oh, Seokhoon
    • Geophysics and Geophysical Exploration
    • /
    • v.23 no.4
    • /
    • pp.230-242
    • /
    • 2020
  • We performed a magnetotelluric (MT) survey to delineate the geological structures below a depth of 20 km in the Gyeongju area, where an earthquake with a magnitude of 5.8 occurred in September 2016. The measured MT data were severely distorted by electrical noise caused by subways, power lines, factories, houses, and farmlands, and by vehicle noise from passing trains and large trucks. Using machine-learning methods, we classified the MT time series data obtained near the railway and highway into two groups according to whether they contain traffic noise. We applied three schemes, stochastic gradient descent, support vector machine, and random forest, to the time series data for the high-speed train noise. We formulated three datasets, Hx, Hy, and Hx & Hy, for the time series data of the large-truck noise and applied the random forest method to each dataset. To evaluate the effect of removing the traffic noise, we compared the time series data, amplitude spectra, and apparent resistivity curves before and after removing the traffic noise from the time series data. We also examined the frequency range affected by traffic noise and, from the residual difference, whether artifact noise occurred during the traffic-noise removal process.
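The random-forest classification step above can be sketched as follows on synthetic windows. The summary features used here (variance, peak amplitude, mean absolute first difference) are illustrative assumptions, not the paper's feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """Simple summary features of one time-series window:
    variance, peak absolute amplitude, mean absolute first difference."""
    w = np.asarray(window, dtype=float)
    return [w.var(), np.abs(w).max(), np.abs(np.diff(w)).mean()]

def train_noise_classifier(windows, labels, seed=0):
    """Fit a random forest that flags windows containing traffic noise
    (label 1) versus quiet windows (label 0)."""
    X = np.array([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=50, random_state=seed)
    clf.fit(X, labels)
    return clf
```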

Recognition of Partially Occluded Binary Objects using Elastic Deformation Energy Measure (탄성변형에너지 측도를 이용한 부분적으로 가려진 이진 객체의 인식)

  • Moon, Young-In;Koo, Ja-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.10
    • /
    • pp.63-70
    • /
    • 2014
  • The process of recognizing objects in binary images consists of image segmentation and pattern matching. If the binary objects in the image are assumed to be separated, global features such as area, perimeter length, or the ratio of the two can be used to recognize the objects. However, if this assumption does not hold, global features cannot be used, and local features such as points or line segments should be used instead. In this paper, points with large curvature along the perimeter are chosen as feature points, and pairs of points selected from them are used as local features. The similarity of two local features is defined using the elastic deformation energy required to make their lengths and the angles between the gradient vectors at their end points identical. A neighbor support value is defined and used for robust recognition of partially occluded binary objects. An experiment on the Kimia-25 data showed that the proposed algorithm runs 4.5 times faster than the maximum clique algorithm with the same recognition rate.
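A minimal sketch of such an energy-style dissimilarity for point-pair features, assuming a stretching term for the length difference and bending terms for the angle differences; the weights `k_len`/`k_ang` and exact form are illustrative, not the paper's definition:

```python
import numpy as np

def pair_feature(p, q, grad_p, grad_q):
    """Local feature of a feature-point pair: segment length plus the
    angle of each end point's gradient vector relative to the segment.
    Measuring angles relative to the segment makes the feature
    translation- and rotation-invariant."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    seg = q - p
    length = np.linalg.norm(seg)

    def angle(v):
        v = np.asarray(v, float)
        cross = seg[0] * v[1] - seg[1] * v[0]   # 2-D cross product (scalar)
        return np.arctan2(cross, np.dot(seg, v))

    return length, angle(grad_p), angle(grad_q)

def deformation_energy(f1, f2, k_len=1.0, k_ang=1.0):
    """Elastic-energy-style dissimilarity between two pair features:
    stretching term for the length gap, bending terms for the angle gaps."""
    (l1, a1, b1), (l2, a2, b2) = f1, f2
    wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi  # shortest angular gap
    return (k_len * (l1 - l2) ** 2
            + k_ang * wrap(a1 - a2) ** 2
            + k_ang * wrap(b1 - b2) ** 2)
```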

Implementation of a Web-Based Early Warning System for Meteorological Hazards (기상위험 조기경보를 위한 웹기반 표출시스템 구현)

  • Kong, In Hak;Kim, Hong Joong;Oh, Jai Ho;Lee, Yang Won
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.24 no.4
    • /
    • pp.21-28
    • /
    • 2016
  • Numerical weather prediction is important for preventing meteorological disasters such as heavy rain, heat waves, and cold waves. The Korea Meteorological Administration provides a real-time special weather report, and the Rural Development Administration demonstrates a 2-day warning of agricultural disasters for farms in a few regions. To improve early warning systems for meteorological hazards, a nationwide high-resolution dataset for weather prediction should be combined with web-based GIS. This study aims to develop a web service prototype for early warning of meteorological hazards that integrates web GIS technologies with a weather prediction database at a temporal resolution of 1 hour and a spatial resolution of 1 km. The spatially and temporally high-resolution dataset for meteorological hazards, produced by downscaling GME model output, was served via a web GIS. In addition to information about the current status of meteorological hazards, the proposed system provides hourly dong-level forecasting of meteorological hazards such as heavy rain, heat waves, and cold waves for the upcoming seven days. This system can be utilized as an operational information service for municipal governments in Korea, provided that future work improves the accuracy of the numerical weather predictions and reduces the preprocessing time for the raster and vector datasets.

A Fast Search Algorithm for Raman Spectrum using Singular Value Decomposition (특이값 분해를 이용한 라만 스펙트럼 고속 탐색 알고리즘)

  • Seo, Yu-Gyung;Baek, Sung-June;Ko, Dae-Young;Park, Jun-Kyu;Park, Aaron
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.12
    • /
    • pp.8455-8461
    • /
    • 2015
  • In this paper, we propose new search algorithms using SVD (Singular Value Decomposition) for fast search of Raman spectra. In the proposed algorithms, a small number of the eigenvectors obtained by SVD are chosen according to their significance in order to reduce computation. By introducing a pilot test, we exclude a large number of spectra from the search, and we then apply partial distance search (PDS) for further computation reduction. We prepared 14,032 chemical Raman spectra as the library for comparison. Experiments were carried out with seven methods: full search; PDS; 1DMPS, an MPS variant for one-dimensional data, combined with PDS (1DMPS+PDS); 1DMPS with PDS using the data sorted by descending variance (1DMPS Sort with Variance+PDS); the 250-dimensional SVD components with PDS (250SVD+PDS); and the proposed algorithms, PSP and PSSP. For an exact comparison of computation, we counted the number of multiplications and additions required by each method. According to the experiments, the PSSP algorithm shows a 64.8% computation reduction compared with 250SVD+PDS, while PSP shows a 157% computation reduction.
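The partial distance search idea used above can be sketched as early abandoning of the squared-distance accumulation; this is a minimal illustration of PDS alone, not the paper's full PSP/PSSP pipeline:

```python
import numpy as np

def pds_search(query, library):
    """Partial distance search: scan library spectra, accumulating the
    squared distance term by term and abandoning a candidate as soon as
    its partial sum exceeds the best full distance found so far."""
    best_idx, best_dist = -1, np.inf
    for i, spec in enumerate(library):
        partial = 0.0
        for q, s in zip(query, spec):
            partial += (q - s) ** 2
            if partial >= best_dist:
                break          # early abandon: cannot beat the current best
        else:
            best_idx, best_dist = i, partial
    return best_idx, best_dist
```

The result is identical to a full nearest-neighbor search; only the number of multiply-adds changes, which is why the paper compares methods by operation counts.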