• Title/Summary/Keyword: Memory efficiency


Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin, Kim;Chilwoo, Lee
    • Smart Media Journal / v.11 no.10 / pp.65-75 / 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of output such as text, images, and music. In this paper we propose a method for preprocessing audio data, using the Niko's MIDI Pack sound source files as the data set, and for generating music with a Bi-LSTM. Based on the generated root note, the hidden layers are stacked in multiple layers to create new notes that suit the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that affect the data fed in from the encoder. Settings such as the loss function and the optimization method are applied as hyperparameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses the note pitches obtained by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and predictive quality of the MIDI deep learning process. The trained model generates sound that follows a musical scale, clearly distinct from noise, and we aim to contribute to the generation of harmonically stable music.
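
A minimal sketch of the kind of multi-layer Bi-LSTM with attention described above, written with the Keras API. The vocabulary size, sequence length, and layer widths are illustrative assumptions, not values from the paper.

    # Hypothetical multi-layer Bi-LSTM note generator with attention;
    # sizes and vocabulary are illustrative, not the paper's settings.
    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB = 128      # assumed pitch/duration token vocabulary
    SEQ_LEN = 64     # assumed input window of note tokens

    inputs = keras.Input(shape=(SEQ_LEN,))
    x = layers.Embedding(VOCAB, 64)(inputs)                    # token embedding
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    # Self-attention over the encoder states, weighting the inputs
    # that most influence the next note.
    attn = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(attn)
    outputs = layers.Dense(VOCAB, activation="softmax")(x)     # next-note distribution

    model = keras.Model(inputs, outputs)
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
    model.summary()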

Prediction of groundwater level in the middle mountainous area of Pyoseon Watershed in Jeju Island using deep learning algorithm, LSTM (딥러닝 알고리즘 LSTM을 활용한 제주도 표선유역 중산간지역의 지하수위 예측)

  • Shin, Mun-Ju;Moon, Soo-Hyoung;Moon, Duk Chul
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.291-291 / 2020
  • Because of the geological characteristics of a volcanic island, where precipitation readily infiltrates the ground surface, Jeju Island has poor conditions for developing and using surface water and depends on groundwater for most of its water supply. Accordingly, Jeju has long devoted much policy and research effort to groundwater conservation and management. Recently, however, increased precipitation variability due to climate change may also increase the variability of groundwater levels, so groundwater level prediction and withdrawal management are needed to prepare for sharp declines in groundwater levels. Given Jeju's water use conditions, which depend absolutely on groundwater, real-time prediction of groundwater levels is needed to manage groundwater withdrawals. However, the prediction horizon of existing methods for Jeju's groundwater levels is not long enough, and prediction performance degrades as the horizon lengthens. To overcome these shortcomings, this study used the deep learning algorithm Long Short-Term Memory (LSTM) to predict and analyze the groundwater level at one observation well in the middle mountainous area of the Pyoseon watershed in southeastern Jeju Island. The LSTM algorithm in the R-based Keras package was used, and the input data were daily precipitation from the nearby Seongpanak and Gyorae rainfall stations, groundwater withdrawal data from nearby pumping wells, and the groundwater level record of the study well, covering February 11, 2001 to October 31, 2019. Thirteen years of calibration data and three years of validation data, starting in 2001, were used to calibrate the parameters and prevent overfitting, and three years of prediction data were used to evaluate the predictive performance of the LSTM algorithm. Target lead times were set to 1, 10, 20, and 30 days, and the Nash-Sutcliffe Efficiency (NSE) was used as the evaluation index for the calibration, validation, and prediction periods. In the simulations, the NSE of the 1-day prediction for the calibration, validation, and prediction periods was 0.997, 0.997, and 0.993, respectively, and the NSE of the 10-day prediction was 0.993, 0.912, and 0.930. For the 20-day prediction the NSE was 0.809, 0.781, and 0.809, and for the 30-day prediction it was 0.677, 0.622, and 0.633. This means that the LSTM algorithm can simulate the observed groundwater level time series very well up to 10-day predictions and adequately for 20-day predictions. Therefore, using the LSTM algorithm, stable two- or three-week groundwater level forecasts are judged to be possible for the study site, and real-time LSTM groundwater level predictions could be used for managing groundwater withdrawals.
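
The Nash-Sutcliffe Efficiency used to score these predictions compares the model error against the variance of the observations: NSE = 1 means a perfect fit, NSE = 0 means no better than predicting the observed mean. A minimal NumPy sketch; the sample arrays are made up for illustration.

    # Nash-Sutcliffe Efficiency; sample values are illustrative only.
    import numpy as np

    def nse(observed, simulated):
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2
        )

    obs = np.array([12.1, 12.3, 12.0, 11.8, 11.9])  # groundwater level (m), made up
    sim = np.array([12.0, 12.2, 12.1, 11.9, 11.8])  # model output, made up
    print(f"NSE = {nse(obs, sim):.3f}")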


Analysis of groundwater withdrawal impact in the middle mountainous area of Pyoseon Watershed in Jeju Island using LSTM (LSTM을 활용한 제주도 표선유역 중산간지역의 지하수 취수영향 분석)

  • Shin, Mun-Ju;Moon, Soo-Hyoung;Moon, Duk-Chul;Koh, Hyuk-Joon;Kang, Kyung Goo
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.267-267 / 2021
  • Because of the geological characteristics of a volcanic island, where precipitation readily infiltrates the ground surface, Jeju Island has poor conditions for developing and using surface water and depends on groundwater for most of its water supply. Groundwater conservation and management are therefore very important, and in particular, analyzing the impact of groundwater withdrawal on nearby groundwater levels is essential for stable groundwater use. This study used the deep learning algorithm Long Short-Term Memory (LSTM) to analyze the impact of groundwater withdrawal at two groundwater observation wells located in the middle mountainous area of the Pyoseon watershed in southeastern Jeju Island. The input data were daily precipitation from two nearby rainfall stations, groundwater withdrawal data from six nearby pumping wells, and the groundwater level records of the study wells (February 11, 2001 to October 31, 2019). To capture the groundwater level fluctuation characteristics as fully as possible, the LSTM prediction lead time was set to one day. Calibration and validation periods were used to prevent parameter overfitting, and a test period was used to evaluate the predictive performance of the LSTM; the Nash-Sutcliffe Efficiency (NSE) and the root mean square error (RMSE) were used as evaluation indices. To analyze the impact of withdrawal on nearby groundwater levels, simulations were run with the withdrawal set to the maximum withdrawal of 2,300 m3/day, two thirds of the maximum (1,533 m3/day), and 0 m3/day. In the simulations, the NSE for the calibration, validation, and prediction periods of the two monitoring wells ranged from 0.976 to 0.999 and the RMSE from 0.084 m to 0.494 m, showing that the LSTM had excellent predictive performance. This means the LSTM adequately learned the groundwater level fluctuation characteristics, so the estimated parameters were used to simulate and analyze the withdrawal impact. As a result, the maximum groundwater level drawdown was 0.38 m, which means that the withdrawal at the study site has almost no effect on groundwater level decline. In addition, the relationship between withdrawal and drawdown was linear for one observation well and nonlinear for the other. Therefore, the LSTM algorithm can be used to analyze the groundwater level fluctuation characteristics of the middle mountainous area of the Pyoseon watershed in Jeju Island.
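
A hedged sketch of the withdrawal-scenario comparison described above: a trained level model is rerun on the same weather inputs with the pumping-rate input fixed at each scenario, and the simulated levels are differenced. The `simulate_levels` helper, `trained_model`, and feature layout are hypothetical placeholders, not the paper's code.

    # Hypothetical scenario comparison; feature layout is an assumption.
    import numpy as np

    def simulate_levels(trained_model, rainfall, pumping_m3_per_day):
        """rainfall: (days, stations); pumping is broadcast as one input column."""
        pump = np.full((rainfall.shape[0], 1), pumping_m3_per_day)
        features = np.hstack([rainfall, pump])                 # assumed layout
        return trained_model.predict(features[None, ...])[0]   # (days,) levels

    # Scenarios from the study: max, 2/3 of max, and zero withdrawal.
    # levels_max  = simulate_levels(model, rain, 2300.0)
    # levels_two3 = simulate_levels(model, rain, 1533.0)
    # levels_zero = simulate_levels(model, rain, 0.0)
    # drawdown = levels_zero - levels_max   # pumping impact on the level (m)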


Development of the Information Delivery System for the Home Nursing Service (가정간호사업 운용을 위한 정보전달체계 개발 I (가정간호 데이터베이스 구축과 뇌졸중 환자의 가정간호 전산개발))

  • Park, J.H;Kim, M.J;Hong, K.J;Han, K.J;Park, S.A;Yung, S.N;Lee, I.S;Joh, H.;Bang, K.S
    • Journal of Home Health Care Nursing / v.4 / pp.5-22 / 1997
  • The purpose of the study was to develop an information delivery system for the home nursing service, and to demonstrate and evaluate its efficiency. The research was conducted from September 1996 to August 31, 1997. At the first stage, an assessment tool was developed through literature review for patients with cerebrovascular accident (CVA), who have the first priority for home nursing service among patients with various health problems at home. Secondly, after the home care nurse identified the patient's nursing problems with the assessment tool, the patient classification system developed by Park (1988), consisting of 128 nursing activities under 6 categories, was used to identify the home care nurse's activities for the patient with CVA at home. The research team held several workshops with 5 clinical nurse experts to refine it; in the end, 110 nursing activities under 11 categories were derived for patients with CVA. At the second stage, algorithms were developed to connect the 110 nursing activities with the patient nursing problems identified by the assessment tool. The computerization of the algorithms proceeded as follows. The algorithms were realized as a computer program using software engineering techniques. Development followed the prototyping method, starting from requirement analysis of the software specifications. The basic properties of usability, compatibility, adaptability, and maintainability were taken into consideration, with particular emphasis on efficient construction of the database. To enhance database efficiency and establish structural cohesion, the data fields were categorized by their weight of relevance to the particular disease; this approach permits easy adaptation when numerous diseases are added in the future. In parallel, expandability and maintainability were stressed throughout the program development, which led to a modular design. Since the number of diseases to be covered grows as the project progresses, and since they are interrelated and coupled with each other, expandability and maintainability had to be given high priority. Furthermore, since the system is to be integrated with other medical systems in the future, these properties are very important. The prototype developed in this project is to be evaluated through system testing. There are various evaluation metrics, such as cohesion, coupling, and adaptability, but direct measurement of these metrics is very difficult, making analytical and quantitative evaluation almost impossible. Therefore, experimental evaluation through test runs by various users is applied instead. This system testing provides analysis from the users' viewpoint, and the detailed and additional requirement specifications arising from the users' real situations are fed back into the system model. The degree of freedom of the input and output will also be improved, and hardware limitations will be investigated. After refinement, the prototype system will be used as a design template for developing a more extensive system: relevant modules will be developed for the various diseases and integrated through a macroscopic design process focusing on inter-modularity, generality of the database, and compatibility with other systems.
The Home Care Evaluation System comprises three main modules: (1) general information on a patient, (2) general health status of a patient, and (3) the cerebrovascular disease patient. The general health status module has five sub-modules: physical measurement, vitality, nursing, pharmaceutical description, and emotional/cognitive ability. The CVA patient module is divided into ten sub-modules, such as subjective sense, consciousness, memory, and language pattern. The typical sub-modules are described in Appendix 3.
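
Purely as an illustration of the modular record structure the abstract describes, a sketch of the three main modules as typed records; every field name here is an assumption, not the system's actual schema.

    # Illustrative sketch of the described module hierarchy; field
    # names are assumptions, not the paper's actual database schema.
    from dataclasses import dataclass, field

    @dataclass
    class GeneralInformation:          # module (1)
        name: str
        age: int

    @dataclass
    class GeneralHealthStatus:         # module (2), five sub-modules
        physical_measurement: dict = field(default_factory=dict)
        vitality: dict = field(default_factory=dict)
        nursing: dict = field(default_factory=dict)
        pharmaceutical: dict = field(default_factory=dict)
        emotional_cognitive: dict = field(default_factory=dict)

    @dataclass
    class CVAPatientModule:            # module (3), ten sub-modules (four shown)
        subjective_sense: dict = field(default_factory=dict)
        consciousness: dict = field(default_factory=dict)
        memory: dict = field(default_factory=dict)
        language_pattern: dict = field(default_factory=dict)

    @dataclass
    class HomeCareRecord:              # one patient record built from the modules
        info: GeneralInformation
        health: GeneralHealthStatus
        cva: CVAPatientModule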


Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.159-172 / 2010
  • The recommender system is one of the possible solutions to assist customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, much is still unknown about how CF works. Furthermore, the relative performance of CF algorithms is known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predicting the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations. An ANN model is developed through an analysis of network topology measures such as network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes that are included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how much denser the social network is than what is barely needed to keep the social group at least indirectly connected. We use these social network measures as input variables of the ANN model. As the output variable, we use the recommendation accuracy measured by the F1-measure. In order to evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns.
We used Net Miner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has an estimated accuracy of 92.61% and an RMSE of 0.0049. Thus, our prediction model can help decide whether CF is useful for a given application with certain data characteristics.
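
A minimal sketch of the input-variable step: computing network-level measures from a customer network with networkx and assembling them as ANN features. The toy edge list is made up, and degree centralization is computed here in the simple Freeman form; Krackhardt's efficiency is omitted since networkx has no built-in for it.

    # Toy SNA feature extraction: customers are nodes, shared purchases
    # are links; the edge list is made up for illustration.
    import networkx as nx

    G = nx.Graph([("c1", "c2"), ("c1", "c3"), ("c2", "c3"), ("c4", "c1")])
    G.add_node("c5")                       # an isolated customer

    density = nx.density(G)                # links / maximum possible links
    clustering = nx.average_clustering(G)  # localized pockets of connectivity
    isolated = sum(1 for n in G if G.degree(n) == 0)
    inclusiveness = (G.number_of_nodes() - isolated) / G.number_of_nodes()

    # Freeman degree centralization: concentration of links on few nodes.
    degs = [d for _, d in G.degree()]
    n = G.number_of_nodes()
    centralization = sum(max(degs) - d for d in degs) / ((n - 1) * (n - 2))

    features = [density, inclusiveness, clustering, centralization]
    print(features)   # input vector for the ANN predicting the F1-measure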

Data collection strategy for building rainfall-runoff LSTM model predicting daily runoff (강수-일유출량 추정 LSTM 모형의 구축을 위한 자료 수집 방안)

  • Kim, Dongkyun;Kang, Seokkoo
    • Journal of Korea Water Resources Association / v.54 no.10 / pp.795-805 / 2021
  • In this study, after developing an LSTM-based deep learning model for estimating daily runoff in the Soyang River Dam basin, the accuracy of the model for various combinations of model structure and input data was investigated. A model was built based on a database consisting of average daily precipitation, average daily temperature, and average daily wind speed (inputs), and daily average flow rate (output) during the first 12 years (1997.1.1-2008.12.31). The Nash-Sutcliffe model efficiency coefficient (NSE) and RMSE were examined for validation using the flow discharge data of the subsequent 12 years (2009.1.1-2020.12.31). The combination that showed the highest accuracy used all available input data (12 years of daily precipitation, air temperature, and wind speed) on an LSTM structure with 64 hidden units; the NSE and RMSE for the verification period were 0.862 and 76.8 m3/s, respectively. When the number of hidden units exceeds 500, performance degradation due to overfitting begins to appear, and when it exceeds 1000, the overfitting problem becomes prominent. A model with very high performance (NSE=0.8-0.84) could be obtained when only 12 years of daily precipitation were used for training, and a model with reasonably high performance (NSE=0.63-0.85) could be obtained when only one year of input data was used. In particular, an accurate model (NSE=0.85) could be obtained if that one year of training data contained a wide range of flow events, such as extreme flows and droughts as well as normal events. When the training data included both normal and extreme flows, input data longer than five years did not significantly improve the model performance.
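
A hedged Keras sketch of the best-scoring configuration described above: an LSTM with 64 hidden units mapping daily precipitation, temperature, and wind speed to daily discharge. The 365-day input window and training settings are illustrative assumptions, not the paper's.

    # Rainfall-runoff LSTM with 64 hidden units; window length and
    # training settings are assumptions, not the paper's values.
    from tensorflow import keras
    from tensorflow.keras import layers

    WINDOW = 365   # days of (precipitation, temperature, wind speed) history
    model = keras.Sequential([
        keras.Input(shape=(WINDOW, 3)),   # 3 daily forcing variables
        layers.LSTM(64),                  # 64 hidden units, as in the paper
        layers.Dense(1),                  # next-day discharge (m3/s)
    ])
    model.compile(loss="mse", optimizer="adam")
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)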

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategy used to exploit item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. A transaction's weight is higher if it contains many items with high weights, so the weight reflects the importance of that transaction in the database. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in this field. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They do not need an additional database scan after the WIT-tree is constructed, since each node of the WIT-tree holds item information such as item and transaction IDs. In particular, traditional algorithms perform many database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them; WIT-FWIs-MODIFY adds a feature that reduces the operations needed to calculate the frequency of a new itemset; and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because on average it requires many more computations than the others.
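
A small sketch of the core notion of transactional weight: a transaction's weight as the mean of its item weights, with a weighted-support check against a threshold. The item weights, transactions, and threshold are made up, and the brute-force enumeration here stands in for, but is not, the WIT-tree structure the paper's algorithms use.

    # Transactional-weight demo: an itemset is weighted-frequent if the
    # summed weight of transactions containing it passes a threshold.
    from itertools import combinations

    item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
    transactions = [{"a", "b"}, {"a", "c", "d"}, {"b", "d"}, {"a", "b", "d"}]

    def t_weight(t):                       # mean item weight of the transaction
        return sum(item_weight[i] for i in t) / len(t)

    def weighted_support(itemset):
        return sum(t_weight(t) for t in transactions if itemset <= t)

    threshold = 1.0
    items = sorted({i for t in transactions for i in t})
    for size in (1, 2):
        for combo in combinations(items, size):
            ws = weighted_support(set(combo))
            if ws >= threshold:
                print(set(combo), round(ws, 2))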

Effects of stimulus similarity on P300 amplitude in P300-based concealed information test (P300-기반 숨긴정보검사에서 자극유사성이 P300의 진폭에 미치는 영향)

  • Eom, Jin-Sup;Han, Yu-Hwa;Sohn, Jin-Hun;Park, Kwang-Bai
    • Science of Emotion and Sensibility / v.13 no.3 / pp.541-550 / 2010
  • The present study examined whether the physical similarity of test stimuli affects P300 amplitude and detection accuracy in the P300-based concealed information test (P300 CIT). Participants pretended to suffer from memory impairment caused by an accident, and their own names were used as the concealed information probed by the P300 CIT, in which participants discriminated between a target and other (probe, irrelevant) stimuli. One group of participants was tested in an easy task condition with low physical similarity among stimuli, while the other group was tested in a difficult task condition with high physical similarity among stimuli. Using the base-to-peak P300 amplitude, the interaction effect of task difficulty and stimulus type was significant at the $\alpha$=.1 level (p=.052): in the easy task condition the difference in P300 amplitude between the probe and the irrelevant stimuli was significant, while in the difficult task condition it was not. Using the peak-to-peak P300 amplitude, on the other hand, the interaction effect of task difficulty and stimulus type was not significant, and the differences in P300 amplitude between the probe and the irrelevant stimuli were significant in both task difficulty conditions. The difference in detection accuracy between the task conditions was not significant with either measure of P300 amplitude, although the difference was much smaller when the peak-to-peak amplitude was used. The results suggest that the efficiency of the P300 CIT does not decrease even when the perceptual similarity among test stimuli is high.
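
The two amplitude scores contrasted here can be stated concretely: base-to-peak takes the maximum of the waveform in the P300 search window, while peak-to-peak subtracts the most negative trough following that peak. A NumPy sketch with a made-up waveform; the sampling rate and window are assumptions.

    # Base-to-peak vs. peak-to-peak P300 amplitude on a synthetic ERP trace.
    import numpy as np

    fs = 250                                   # sampling rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)              # 1 s epoch after stimulus onset
    erp = 8 * np.exp(-((t - 0.45) ** 2) / 0.005) - 4 * np.exp(
        -((t - 0.75) ** 2) / 0.008
    )                                          # synthetic P300 peak then a trough

    win = (t >= 0.3) & (t <= 0.9)              # assumed P300 search window
    peak_idx = np.argmax(erp[win])
    base_to_peak = erp[win][peak_idx]          # peak relative to zero baseline
    peak_to_peak = base_to_peak - erp[win][peak_idx:].min()  # peak minus trough

    print(f"base-to-peak: {base_to_peak:.2f} uV, peak-to-peak: {peak_to_peak:.2f} uV")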


An efficient interconnection network topology in dual-link CC-NUMA systems (이중 연결 구조 CC-NUMA 시스템의 효율적인 상호 연결망 구성 기법)

  • Suh, Hyo-Joong
    • The KIPS Transactions:PartA / v.11A no.1 / pp.49-56 / 2004
  • The performance of multiprocessor systems is limited by several factors: processor speed, memory delay, and interconnection network bandwidth and latency. With the evolution of semiconductor technology, off-the-shelf microprocessor speeds have passed the GHz mark, and processors can be scaled up to multiprocessor systems by connecting them through interconnection networks. In this situation, system performance is bound by the latency and bandwidth of the interconnection network. SCI, Myrinet, and Gigabit Ethernet are widely adopted as high-speed interconnection links for high-performance cluster systems. Interconnection network performance can be improved by extending bandwidth and minimizing latency. Raising the operating clock speed is a simple way to improve both, but the physical distances involved make a high-frequency clock difficult to attain, so system performance and scalability suffer from the interconnection network's limitations. Duplicating the links of the interconnection network is one way to resolve this bottleneck in scalable systems; the dual-ring SCI link structure is one example of such an improvement. In this paper, I propose a network topology and a transaction path algorithm that optimize latency and efficiency under duplicated links. Simulation results show that the proposed structure achieves 1.05 to 1.11 times better latency and 1.42 to 2.1 times faster execution than dual-ring systems.
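
To make the duplicated-link idea concrete, a toy comparison of average hop count on a unidirectional ring versus a dual (counter-rotating) ring. The paper's proposed topology is different and not reproduced here; this only illustrates how a second set of links cuts average path latency.

    # Toy latency comparison: average hops on a single vs. dual ring.
    import networkx as nx

    N = 8  # number of nodes, illustrative

    single = nx.DiGraph([(i, (i + 1) % N) for i in range(N)])   # one ring
    dual = nx.DiGraph()
    dual.add_edges_from((i, (i + 1) % N) for i in range(N))     # forward ring
    dual.add_edges_from(((i + 1) % N, i) for i in range(N))     # reverse ring

    for name, g in [("single ring", single), ("dual ring", dual)]:
        avg = nx.average_shortest_path_length(g)
        print(f"{name}: average hops = {avg:.2f}")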

Dynamics of Barrel-Shaped Young Supernova Remnants (항아리 형태 젊은 초신성 잔해의 동력학)

  • Choe, Seung-Urn;Jung, Hyun-Chul
    • Journal of the Korean earth science society / v.23 no.4 / pp.357-368 / 2002
  • In this study we have tried to explain the barrel-shaped morphology of young supernova remnants by considering the dynamical effects of the ejecta. We consider the magnetic field amplification resulting from the Rayleigh-Taylor instability near the contact discontinuity. We generate a synthetic radio image under an assumed cosmic-ray pressure and calculate the azimuthal intensity ratio (A) to enable a quantitative comparison with observations. The postshock magnetic field is amplified by shearing, stretching, and compression at the Rayleigh-Taylor finger boundary. The evolution of the instability strongly depends on the deceleration of the ejecta and the evolutionary stage of the remnant: the strength of the magnetic field increases in the initial phase and decreases after the reverse shock passes the constant-density region of the ejecta. However, some memory of the earlier phases of amplification is retained in the interior even when the outer regions turn into a blast wave. The ratio of the averaged magnetic field strength at the equator to that at the pole in the turbulent region can reach 7.5 at its peak, and the magnetic field amplification can produce a large azimuthal intensity ratio (A=15). The magnitude of the amplification is sensitive to numerical resolution. This means that magnetic field amplification can explain the barrel-shaped morphology of young supernova remnants without requiring the cosmic-ray acceleration efficiency to depend on the magnetic field configuration. For this mechanism to be effective, the surrounding magnetic field must be well ordered; the small number of barrel-shaped remnants may indicate that this condition rarely occurs.
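
A small sketch of the azimuthal intensity ratio used for the comparison: bin a radio image by azimuth around the remnant center and take the ratio of the brightest to the faintest bin. The synthetic shell image below is a made-up stand-in, not the paper's simulation output.

    # Azimuthal intensity ratio A: max/min of azimuthally binned radio
    # brightness around the remnant center; the image is a stand-in.
    import numpy as np

    size = 200
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    shell = np.exp(-((r - 0.8) ** 2) / 0.002)          # thin emission shell
    image = shell * (1 + 6 * np.cos(phi) ** 2)         # brighter at the "equator"

    nbins = 36
    bins = ((phi + np.pi) / (2 * np.pi) * nbins).astype(int) % nbins
    profile = np.array([image[bins == b].sum() for b in range(nbins)])
    A = profile.max() / profile.min()
    print(f"azimuthal intensity ratio A = {A:.1f}")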