• Title/Summary/Keyword: retrieval time


Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kim, Kwangjin;Lee, Chilwoo
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.65-75
    • /
    • 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of results such as text, images, and music. In this paper, we propose a method for preprocessing audio data, using Niko's MIDI Pack sound source files as the data set, and for generating music with a Bi-LSTM. Based on the generated root note, the hidden layers are stacked in multiple layers to create new notes suitable for the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that affect the data input from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses the note pitches obtained by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and prediction of the MIDI deep learning process. The learning results generate sound that matches the development of a musical scale, distinct from noise, and we aim to contribute to generating harmonically stable music.
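
A minimal sketch of the kind of model this abstract describes: stacked bidirectional LSTMs with a dot-product attention layer over the encoder timesteps, predicting the next token from a vocabulary of note/rest/chord symbols. This is not the authors' code; the vocabulary size, window length, and layer widths are hypothetical.

```python
# Sketch of a multi-layer Bi-LSTM with attention for note-sequence prediction.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 128   # hypothetical number of distinct note/rest/chord tokens
SEQ_LEN = 64       # hypothetical input window length

inputs = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(VOCAB_SIZE, 96)(inputs)
# Stacked (multi-layer) bidirectional LSTMs, as the abstract describes.
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
# Dot-product self-attention weights the encoder timesteps.
attn = layers.Attention()([x, x])
context = layers.GlobalAveragePooling1D()(attn)
outputs = layers.Dense(VOCAB_SIZE, activation="softmax")(context)

model = models.Model(inputs, outputs)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()
```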

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not yet benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative system of the keyword extraction approach developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
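
A minimal sketch of assignment steps (1) through (5) above, assuming each candidate keyword set has already been turned into a weighted term vector offline (step 1); `keyword_sets`, `vocab`, and `doc_tokens` are hypothetical inputs, not the paper's implementation.

```python
# Sketch of IVSM-style keyword assignment via cosine similarity.
import numpy as np
from collections import Counter

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def assign_keywords(doc_tokens, keyword_sets, vocab, top_k=5):
    # (2)-(3): parse the target document and build its term-frequency vector.
    tf = Counter(doc_tokens)
    doc_vec = np.array([tf[t] for t in vocab], dtype=float)
    # (1) is assumed done offline: each keyword set maps to a weight vector.
    # (4): cosine similarity between each keyword set and the document.
    scores = {name: cosine(vec, doc_vec) for name, vec in keyword_sets.items()}
    # (5): return the keyword sets with the highest similarity scores.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```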

Estimation of Soil Moisture Content from Backscattering Coefficients Using a Radar Scatterometer (레이더 산란계 후방산란계수를 이용한 토양수분함량 추정)

  • Kim, Yi-Hyun;Hong, Suk-Young;Lee, Jae-Eun
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.2
    • /
    • pp.127-134
    • /
    • 2012
  • Microwave remote sensing can help monitor the land surface water cycle, crop growth, and soil moisture. A ground-based polarimetric scatterometer has the advantage of continuous crop observation using multiple polarizations, multiple frequencies, and various incident angles, and such systems have been used extensively over a frequency range extending from L-band to Ka-band. In this study, we analyzed the relationships between L-, C-, and X-band signatures and soil moisture content over the whole soybean growth period. Polarimetric backscatter data at L-, C-, and X-bands were acquired every 10 minutes. L-band backscattering coefficients were higher than those observed at C- or X-band over the period. Backscattering coefficients for all frequencies and polarizations increased until Day Of Year (DOY) 271 and then decreased until the harvesting stage (DOY 294). The time series of soil moisture content did not correspond with backscattering over the whole growth stage, although it increased until early August (DOY 224). We examined the relationship between the backscattering coefficients of each band and soil moisture content. Backscattering coefficients for all frequencies were not correlated with soil moisture content when considered over the entire stage ($r{\leq}0.50$). However, we found that L-band HH polarization was correlated with soil moisture content (r=0.90) when the Leaf Area Index (LAI) was below 2. Retrieval equations were developed for estimating soil moisture content using L-band HH polarization. The relation between L-band HH backscattering and soil moisture follows an exponential pattern and is highly correlated with soil moisture content ($R^2=0.92$). Results from this study show that the backscattering coefficients of a radar scatterometer are effective for estimating soil moisture content.
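
The exponential retrieval equation reported above can be reproduced in form, though not with the paper's measurements, by a standard least-squares fit; the sketch below uses hypothetical backscattering and soil moisture values.

```python
# Sketch: fit soil moisture as an exponential function of L-band HH
# backscattering (dB), mimicking the form of the paper's retrieval equation.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(sigma0_db, a, b):
    return a * np.exp(b * sigma0_db)

sigma0 = np.array([-18.0, -16.5, -15.0, -13.2, -12.0])  # hypothetical dB values
theta = np.array([0.12, 0.16, 0.20, 0.27, 0.33])        # hypothetical m3 m-3

params, _ = curve_fit(exp_model, sigma0, theta, p0=(1.0, 0.1))
pred = exp_model(sigma0, *params)
ss_res = np.sum((theta - pred) ** 2)
ss_tot = np.sum((theta - theta.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)  # goodness of fit of the retrieval curve
```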

Retrieval of Nitrogen Dioxide Column Density from Ground-based Pandora Measurement using the Differential Optical Absorption Spectroscopy Method (차등흡수분광기술을 이용한 지상기반 Pandora 관측으로부터의 대기 중 이산화질소 칼럼농도 산출)

  • Yang, Jiwon;Hong, Hyunkee;Choi, Wonei;Park, Junsung;Kim, Daewon;Kang, Hyeongwoo;Lee, Hanlim;Kim, Joon
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.6_1
    • /
    • pp.981-992
    • /
    • 2017
  • We, for the first time, retrieved tropospheric nitrogen dioxide ($Trop.NO_2$) vertical column density (VCD) from the ground-based instrument Pandora, using optical density fitting based on Differential Optical Absorption Spectroscopy (DOAS), in Seoul for the period from May 2014 to December 2014. The $Trop.NO_2$ VCDs retrieved from Pandora were compared with those obtained from the Ozone Monitoring Instrument (OMI). The correlation coefficient (R) between the Pandora and OMI retrievals is 0.55. To compare with surface $NO_2$ VMRs obtained in situ, the $Trop.NO_2$ VCDs retrieved from Pandora and those obtained from OMI were converted into $NO_2$ VMRs in the boundary layer (BLH $NO_2$ VMRs) using data measured by the Atmospheric Infrared Sounder (AIRS). Surface $NO_2$ VMRs obtained in situ range from 5.5 ppbv to 61.5 ppbv. BLH $NO_2$ VMRs retrieved from Pandora and OMI range from 2.1 ppbv to 44.2 ppbv and from 0.9 ppbv to 11.6 ppbv, respectively. The range of BLH $NO_2$ VMRs retrieved from OMI is narrower than that of BLH $NO_2$ VMRs retrieved from Pandora and of surface $NO_2$ VMRs obtained in situ. There is a better correlation between surface $NO_2$ VMRs obtained in situ and BLH $NO_2$ VMRs retrieved from Pandora (R = 0.50) than between surface $NO_2$ VMRs obtained in situ and BLH $NO_2$ VMRs retrieved from OMI (R = 0.36). The poorer correlation for OMI is thought to be due to the lower near-surface sensitivity of the satellite-based instrument compared with Pandora, the ground-based instrument.
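
A minimal sketch of the column-to-mixing-ratio conversion step described above, assuming the $NO_2$ is well mixed within the boundary layer; the boundary layer height, pressure, and temperature here are hypothetical inputs (the paper derives the needed quantities from AIRS data).

```python
# Sketch: convert a tropospheric NO2 VCD (molecules cm^-2) into a
# boundary-layer mixing ratio (ppbv) under a well-mixed-layer assumption.
K_B = 1.380649e-23  # Boltzmann constant, J K^-1

def vcd_to_blh_vmr(vcd_cm2, blh_m, p_pa=101325.0, t_k=288.0):
    n_air_m3 = p_pa / (K_B * t_k)             # air number density, molecules m^-3
    air_column_cm2 = n_air_m3 * blh_m * 1e-4  # air column within BLH, molecules cm^-2
    return vcd_cm2 / air_column_cm2 * 1e9     # mixing ratio in ppbv

print(vcd_to_blh_vmr(1.0e16, blh_m=1000.0))  # ~4 ppbv for these inputs
```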

THE CURRENT STATUS OF BIOMEDICAL ENGINEERING IN THE USA

  • Webster, John G.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1992 no.05
    • /
    • pp.27-47
    • /
    • 1992
  • Engineers have developed new instruments that aid in diagnosis and therapy. Ultrasonic imaging has provided a nondamaging method of imaging internal organs. A complex transducer emits ultrasonic waves at many angles and reconstructs a map of internal anatomy and also velocities of blood in vessels. Fast computed tomography permits reconstruction of the 3-dimensional anatomy and perfusion of the heart at 20-Hz rates. Positron emission tomography uses certain isotopes that produce positrons that react with electrons to simultaneously emit two gamma rays in opposite directions. It locates the region of origin by using a ring of discrete scintillation detectors, each in electronic coincidence with an opposing detector. In magnetic resonance imaging, the patient is placed in a very strong magnetic field. The precession of the hydrogen atoms is perturbed by an interrogating field to yield two-dimensional images of soft tissue having exceptional clarity. As an alternative to radiology image processing, film archiving, and retrieval, picture archiving and communication systems (PACS) are being implemented. Images from computed radiography, magnetic resonance imaging (MRI), nuclear medicine, and ultrasound are digitized, transmitted, and stored in computers for retrieval at distributed workstations. In electrical impedance tomography, electrodes are placed around the thorax. A 50-kHz current is injected between two electrodes and voltages are measured on all other electrodes. A computer processes the data to yield an image of the resistivity of a 2-dimensional slice of the thorax. During fetal monitoring, a corkscrew electrode is screwed into the fetal scalp to measure the fetal electrocardiogram. Correlations with uterine contractions yield information on the status of the fetus during delivery. To measure cardiac output by thermodilution, cold saline is injected into the right atrium. A thermistor in the right pulmonary artery yields temperature measurements, from which we can calculate cardiac output. In impedance cardiography, we measure the changes in electrical impedance as the heart ejects blood into the arteries. Motion artifacts are large, so signal averaging is useful during monitoring. An intraarterial blood gas monitoring system permits monitoring in real time. Light is sent down optical fibers inserted into the radial artery, where it is absorbed by dyes, which reemit the light at a different wavelength. The emitted light travels up optical fibers where an external instrument determines O2, CO2, and pH. Therapeutic devices include the electrosurgical unit. A high-frequency electric arc is drawn between the knife and the tissue. The arc cuts and the heat coagulates, thus preventing blood loss. Hyperthermia has demonstrated antitumor effects in patients in whom all conventional modes of therapy have failed. Methods of raising tumor temperature include focused ultrasound, radio-frequency power through needles, or microwaves. When the heart stops pumping, we use the defibrillator to restore normal pumping. A brief, high-current pulse through the heart synchronizes all cardiac fibers to restore normal rhythm. When the cardiac rhythm is too slow, we implant the cardiac pacemaker. An electrode within the heart stimulates the cardiac muscle to contract at the normal rate. When the cardiac valves are narrowed or leak, we implant an artificial valve. Silicone rubber and Teflon are used for biocompatibility. Artificial hearts powered by pneumatic hoses have been implanted in humans.
However, the quality of life gradually degrades, and death ensues. When kidney stones develop, lithotripsy is used. A spark creates a pressure wave, which is focused on the stone and fragments it. The pieces pass out normally. When kidneys fail, the blood is cleansed during hemodialysis. Urea passes through a porous membrane to a dialysate bath to lower its concentration in the blood. The blind are able to read by scanning the Optacon with their fingertips. A camera scans letters and converts them to an array of vibrating pins. The deaf are able to hear using a cochlear implant. A microphone detects sound and divides it into frequency bands. Twenty-two electrodes within the cochlea stimulate the acoustic nerve to provide sound patterns. For those who have lost muscle function in the limbs, researchers are implanting electrodes to stimulate the muscle. Sensors in the legs and arms feed back signals to a computer that coordinates the stimulators to provide limb motion. For those with high spinal cord injury, a puff-and-sip switch can control a computer and permit the disabled person to operate the computer and communicate with the outside world.
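
The thermodilution measurement mentioned above derives cardiac output from the thermistor's temperature-time curve; the standard relation, which the abstract does not spell out, is the Stewart-Hamilton equation:

```latex
% Stewart-Hamilton equation for thermodilution cardiac output:
% V_i = injectate volume, T_b = blood temperature, T_i = injectate
% temperature, K = correction constant for catheter dead space and the
% density/specific-heat ratio of injectate to blood; the denominator
% integrates the measured blood-temperature change over time.
CO = \frac{V_i \,(T_b - T_i)\, K}{\int_0^{\infty} \Delta T_b(t)\, dt}
```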


A Comparative Errors Assessment Between Surface Albedo Products of COMS/MI and GK-2A/AMI (천리안위성 1·2A호 지표면 알베도 상호 오차 분석 및 비교검증)

  • Woo, Jongho;Choi, Sungwon;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Sim, Suyoung;Byeon, Yugyeong;Jeon, Uujin;Sohn, Eunha;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1767-1772
    • /
    • 2021
  • Surface albedo data from global satellite observations over a long period are actively used to monitor changes in the global climate and environment, so their utilization and importance are considerable. Through the generational shift from the geostationary satellite COMS (Communication, Ocean and Meteorological Satellite)/MI (Meteorological Imager sensor) to GK-2A (GEO-KOMPSAT-2A)/AMI (Advanced Meteorological Imager sensor), it is possible to continuously secure surface albedo outputs. However, the surface albedo outputs of COMS/MI and GK-2A/AMI differ due to differences in their retrieval algorithms. Therefore, in order to extend the retrieval period of the COMS/MI and GK-2A/AMI surface albedo and secure continuity for climate change monitoring, an analysis of the two satellites' outputs and errors must come first. In this study, error characteristics were analyzed by comparing the COMS/MI and GK-2A/AMI surface albedo data against the AERONET (Aerosol Robotic Network) ground observation data and against the GLASS (Global Land Surface Satellite) satellite data for the overlapping period. The error analysis confirmed that the RMSE of COMS/MI against the ground observations was 0.043, higher than the corresponding RMSE of GK-2A/AMI, 0.015. In addition, against the other satellite (GLASS) data, the RMSE of COMS/MI was 0.029, slightly lower than that of GK-2A/AMI, 0.038. Understanding these error characteristics when using the COMS/MI and GK-2A/AMI surface albedo data will allow them to be actively utilized for long-term climate change monitoring.
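
A minimal sketch of the validation metric behind the numbers above: the RMSE between retrieved albedo and a reference (AERONET or GLASS) series over matched samples; the sample values below are hypothetical.

```python
# Sketch: RMSE between satellite-retrieved albedo and reference albedo.
import numpy as np

def rmse(retrieved, reference):
    retrieved, reference = np.asarray(retrieved), np.asarray(reference)
    return float(np.sqrt(np.mean((retrieved - reference) ** 2)))

# Hypothetical matched albedo samples:
print(rmse([0.18, 0.22, 0.25], [0.17, 0.20, 0.28]))
```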

Power Conscious Disk Scheduling for Multimedia Data Retrieval (저전력 환경에서 멀티미디어 자료 재생을 위한 디스크 스케줄링 기법)

  • Choi, Jung-Wan;Won, Yoo-Jip;Jung, Won-Min
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.242-255
    • /
    • 2006
  • In recent years, the popularization of mobile devices such as smart phones, PDAs, and MP3 players has rapidly increased the need for power management technology, which is an essential factor for mobile devices. Meanwhile, the hard disk offers large capacity and high speed despite its low price, and it can now be made small enough to suit mobile devices; however, it consumes too much power to be embedded in them easily. Motivated by this, in this paper we suggest and evaluate methods of minimizing power consumption while playing back multimedia data stored on disk in real time. The strict limit on the power consumption of mobile devices has a big impact on the design of both hardware and software. One difference between real-time multimedia streaming data and legacy text-based data is the requirement for continuity of data supply. This is why the disk drive must remain in the active state for the entire playback duration, which, from a power management point of view, may be a great burden. The legacy power management function of a mobile disk drive affects the quality of multimedia playback negatively because of excessive I/O requests issued while the disk is in the standby state. Therefore, in this paper we analyze the power consumption profile of the disk drive in detail, and we develop an algorithm which can play multimedia data effectively using less power. This algorithm calculates the number of data blocks to be read and the durations of the active and standby states; from this, it performs optimal scheduling that ensures continuous playback of the data blocks stored on the mobile disk drive, as sketched below. We implemented our algorithm in publicly available MPEG player software. This MPEG player software saves up to 60% of power consumption compared with a disk drive that stays active the whole time, and 38% compared with a disk drive controlled by the native power management method.
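
A minimal sketch of the burst-size calculation the abstract describes, not the paper's algorithm: read just enough blocks during one active period to cover playback through the following standby and spin-up interval. All device parameters are hypothetical, and rates are in bytes per second.

```python
# Sketch: how many blocks to prefetch per active burst so that real-time
# playback survives the subsequent standby period.
def plan_burst(playback_bps, block_size, disk_bps, standby_s, spinup_s):
    # Bytes consumed while the disk is in standby plus while it spins back up.
    consumed = playback_bps * (standby_s + spinup_s)
    n_blocks = -(-int(consumed) // block_size)   # ceiling division
    active_s = n_blocks * block_size / disk_bps  # time the disk stays active
    return n_blocks, active_s

blocks, active = plan_burst(playback_bps=1_500_000, block_size=4096,
                            disk_bps=20_000_000, standby_s=8.0, spinup_s=1.5)
print(blocks, "blocks per burst,", round(active, 3), "s active")
```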

Crepe Search System Design using Web Crawling (웹 크롤링 이용한 크레페 검색 시스템 설계)

  • Kim, Hyo-Jong;Han, Kun-Hee;Shin, Seung-Soo
    • Journal of Digital Convergence
    • /
    • v.15 no.11
    • /
    • pp.261-269
    • /
    • 2017
  • The purpose of this paper is to design a search system that accesses the web in real time, without using a database server, in order to guarantee up-to-date information within a single network, rather than using a plurality of bots connected by a wide area network. The method of the research is to design and analyze a system which can search for persons and keywords quickly and accurately in the Crepe system. In the Crepe server, when a user registers information, the body-tag matching conversion process stores all the information as it is, since each user applies various styles such as font, font size, and color. The Crepe server itself does not cause a body-tag matching problem; however, when executing the Crepe retrieval system, the styles and characteristics of users cannot be formalized. This problem can be solved by using the html_img_parser function and the Go language html parser package. By applying queues and multiple threads to a general-purpose web crawler, rather than designing a web crawler that targets a specific site, it is possible to build a crawler that quickly and efficiently searches and collects various web sites for various applications, as sketched below.
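
A minimal sketch of the queue-plus-threads crawler design described above. The paper's implementation uses Go's html parser package; this Python version is purely illustrative, and the seed URL, thread count, and page limit are arbitrary.

```python
# Sketch: a general-purpose crawler built from one shared work queue and
# several worker threads that extract and enqueue links.
import queue, threading
import urllib.request
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def worker(q, seen, lock, limit=100):
    while True:
        url = q.get()
        try:
            page = urllib.request.urlopen(url, timeout=5).read()
            parser = LinkParser()
            parser.feed(page.decode("utf-8", "ignore"))
            with lock:  # the queue and seen-set are shared by all threads
                for link in parser.links:
                    if link.startswith("http") and link not in seen and len(seen) < limit:
                        seen.add(link)
                        q.put(link)
        except Exception:
            pass  # unreachable pages are simply skipped
        finally:
            q.task_done()

q, seen, lock = queue.Queue(), set(), threading.Lock()
q.put("https://example.com")  # hypothetical seed URL
for _ in range(4):            # multiple threads draining one shared queue
    threading.Thread(target=worker, args=(q, seen, lock), daemon=True).start()
q.join()
print(len(seen), "links collected")
```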

A Method of Generating Table-of-Contents for Educational Video (교육용 비디오의 ToC 자동 생성 방법)

  • Lee, Gwang-Gook;Kang, Jung-Won;Kim, Jae-Gon;Kim, Whoi-Yul
    • Journal of Broadcast Engineering
    • /
    • v.11 no.1 s.30
    • /
    • pp.28-41
    • /
    • 2006
  • Due to the rapid development of multimedia appliances, the increasing amount of multimedia data necessitates the development of automatic video analysis techniques. In this paper, a method of ToC (table-of-contents) generation is proposed for educational video contents. The proposed method consists of two parts: scene segmentation followed by scene annotation. First, the video sequence is divided into scenes by the proposed scene segmentation algorithm, which utilizes the characteristics of educational video. Then each shot in the scene is annotated in terms of scene type, the existence of captions, and the main speaker of the shot. The ToC generated by the proposed method represents the structure of a video through a hierarchy of scenes and shots and describes each scene and shot by the extracted features. Hence the generated ToC can help users perceive the content of a video at a glance and access a desired position in the video easily. Also, the ToC generated automatically by the system can be further edited manually, effectively reducing the time required to achieve a more detailed description of the video content. The experimental results showed that the proposed method can generate a ToC for educational video with high accuracy.
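
The paper's segmentation algorithm is tailored to educational video and is not reproduced here; the sketch below shows only the generic first step such methods build on, detecting shot boundaries from histogram differences between consecutive frames (OpenCV assumed, and the threshold is a hypothetical tuning parameter).

```python
# Sketch: shot-boundary detection via color-histogram distance between
# consecutive frames; detected cuts delimit the shots to be grouped/annotated.
import cv2

def shot_boundaries(video_path, threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    prev_hist, cuts, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # A large histogram distance between consecutive frames marks a cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```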

Video Matching Algorithm of Content-Based Video Copy Detection for Copyright Protection (저작권보호를 위한 내용기반 비디오 복사검출의 비디오 정합 알고리즘)

  • Hyun, Ki-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.3
    • /
    • pp.315-322
    • /
    • 2008
  • To search for the location of a copied video in a video database, signatures should be robust to video re-editing, channel noise, and time variation of the frame rate. Several kinds of signatures have been proposed. The ordinal signature, one of them, has difficulty describing the spatial characteristics of a frame due to the size of the fixed $N{\times}N$ window over which the average gray value is computed. In this paper, I study an algorithm for sequence matching in video copy detection for copyright protection, employing the R-tree index method for retrieval and suggesting robust ordinal signatures for the original video clips and the same signatures for the pirated video. The robust ordinal signature has a two-dimensional vector structure that is resilient to noise and to variation of the frame rate, and it is expressed in MBR form in the R-tree search space. Moreover, I focus on building a video copy detection method into which content publishers register their valuable digital content. The video copy detection algorithm compares web content to the registered content and notifies the content owners of illegal copies. Experimental results show that the proposed method improves the video matching rate and that the signature's characteristics are suitable for large video databases.
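
A minimal sketch of the basic ordinal frame signature discussed above: partition the frame into $N{\times}N$ windows, average the gray values, and keep their rank order. The paper's robust two-dimensional variant and the R-tree MBR indexing are not reproduced here.

```python
# Sketch: ordinal signature of a gray frame as the rank order of the mean
# intensities of its N x N windows.
import numpy as np

def ordinal_signature(gray_frame, n=3):
    h, w = gray_frame.shape
    means = [gray_frame[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n].mean()
             for i in range(n) for j in range(n)]
    # The rank of each window's mean intensity forms the signature.
    return np.argsort(np.argsort(means))

frame = np.random.randint(0, 256, (120, 160))  # hypothetical gray frame
print(ordinal_signature(frame))
```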
