Massive Music Resources Retrieval Method Based on Ant Colony Algorithm

  • Yun Meng (Performance School, Communication University of Shanxi)
  • Received : 2023.06.27
  • Accepted : 2024.03.11
  • Published : 2024.05.31

Abstract

Music resources are characterized by quantization, diversification and complication. With the rapid increase in demand for music resources, the volume of stored music resources has become very large. In order to improve the retrieval effect of music resources and make effective use of them, a massive music resources retrieval method based on the ant colony algorithm is proposed. This paper constructs an autocorrelation function to extract the pitch feature of music resources and classifies music resource information by calculating feature similarity. The ant colony algorithm is then used to associate the features of music resources, obtain the association gain, locate the detection results and produce the multi-module retrieval results. Simulation results show that the proposed method has high precision and recall, short retrieval time and can effectively retrieve massive music resources.

1. Introduction

With the rapid development of big data information technology and music social media, online music resources are increasing rapidly and music types are increasingly diversified [1, 2]. A large number of music resources are stored in cyberspace in the form of deep Web databases [3], which provide free or paid downloads to web users. Efficient and optimized management of massive music resources can improve users' experience of music appreciation and their recognition of music playing software [4]. A mass music resource information database is a database that collects, stores, downloads and plays music resources, and realizes music sharing and dissemination in combination with music playing software [5]. However, with the increase of data storage capacity and the development of transmission technology, the amount of digital music has grown at an unprecedented rate. This explosive growth has made it increasingly difficult to find pieces of music of interest in such vast musical resources.

Currently, relevant scholars have conducted research on the retrieval of massive data resources. Furner et al. [6] proposed a novel music dataset collection technology that utilizes online music services to obtain real data, completes music information retrieval through machine learning, and automatically labels music information retrieval data. Experimental results show that this method can provide users with the required resources and that the retrieval results can meet user needs, but the recall of the retrieval results is low. Berardinis et al. [7] proposed a hierarchical analysis method for music structures based on graph theory and multi-resolution community detection, which performs the tasks of boundary detection and structure grouping, divides music structures into different levels, and improves the effectiveness of music retrieval. Huang et al. [8] proposed an emotion-based composition algorithm that uses emotional retrieval of lyrics to establish a two-dimensional emotional plane defined by potency and motivation coordinates; through algorithmic composition, a mapping between music and emotion for emotional retrieval of song fragments is achieved. Although the above two methods can also achieve personalized retrieval of music resources, resource retrieval takes a long time due to the complexity of the resource processing steps. Wang et al. [9] proposed a deep-learning-driven cross-modal data retrieval model composed of an image feature extraction subnet, a text feature extraction subnet, and a hash code learning subnet. Using the strong learning and representation capabilities of deep learning, they proposed multiple label similarity measurement methods and model training methods. Experimental results show that this method has high retrieval accuracy but low precision. Wenige et al. [10] combined similarity-based retrieval strategies with knowledge graph queries, relying on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests on openly available knowledge graphs. This method improves the diversity of resource retrieval results, but its precision is low.

In order to improve the retrieval effect of massive music resources, a retrieval method based on the ant colony algorithm is proposed in this paper. The structure of the article is as follows:

(1) Since pitch is one of the most important parameters in speech signal processing and the pitch signal is periodic, an autocorrelation function is constructed to extract the pitch characteristics of music resources.

(2) The similarity of the fragment features of music resources is calculated so as to classify the features of music resource information and avoid the influence of multiple types of music features on the retrieval results.

(3) A massive music resource retrieval model based on the ant colony algorithm is built and solved for the optimal solution, in which the shortest path is found through search and iteration so as to achieve massive music resource retrieval and ensure retrieval speed and accuracy.

2. Feature Extraction and Classification of Mass Music Resources

2.1 Feature Extraction of Music Resource Information

Fundamental frequency is one of the most significant parameters in speech signal processing. It plays a vital role in fields such as speech recognition, speech coding, speech synthesis and language identification. Melody-based retrieval of music resources searches for the songs most similar to a hummed query. In massive resources, the melody information is represented by the contour of the pitch frequency curve; therefore, extracting the melody from a music fragment amounts to extracting the corresponding fundamental (pitch) frequency. Pitch refers to the sound produced by the vibration of the vocal cords, which carries the largest energy and the highest amplitude and has a certain periodicity, and the pitch frequency is the vibration frequency of this periodic component. When people sing, the pitch frequency of vocal cord vibration is consistent with that of the melody. Therefore, it is necessary to extract the pitch information from the music fragment.

The fundamental frequency extracted from a musical segment is converted to the corresponding pitch, that is, to the semitone value of a note in twelve-tone equal temperament. The formula for converting the fundamental frequency to a semitone note value is as follows:

\(\begin{align}Semitone=12 \times \log _{2}\left(\frac{\text { Frequency }}{440}\right)+69\end{align}\)       (1)

In the formula, Semitone is the note value and Frequency is the pitch frequency in Hz.
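As a quick illustration of Eq. (1), the following minimal Python sketch (the helper name frequency_to_semitone is hypothetical) converts a fundamental frequency in Hz to its semitone note value, using A4 = 440 Hz as the reference:

```python
import math

def frequency_to_semitone(frequency_hz: float) -> float:
    """Convert a fundamental frequency (Hz) to a semitone note value per Eq. (1),
    with A4 = 440 Hz mapped to note value 69."""
    return 12.0 * math.log2(frequency_hz / 440.0) + 69.0

# Example: 440 Hz maps to 69 (A4); 261.63 Hz maps to roughly 60 (middle C).
print(frequency_to_semitone(440.0))    # 69.0
print(frequency_to_semitone(261.63))   # ~60.0
```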

Traditional methods are susceptible to interference and find it difficult to accurately extract fundamental frequency features from music resources containing noise and non-stationary signals. Therefore, this article adopts the autocorrelation function method to extract the fundamental frequency features of music resources. Compared with traditional methods, the autocorrelation function method has better robustness and adaptability: it can suppress the effect of noise and handle non-stationary and non-periodic sounds by performing autocorrelation analysis on the time-series signal. The calculation of the autocorrelation function is simple and easy to implement, and it can provide more accurate fundamental frequency estimates, effectively revealing the fundamental frequency information of music resources.

The principle of using the autocorrelation function to extract fundamental frequency features is that a periodic function attains the maximum of its autocorrelation after a shift by an integer multiple of its period. Therefore, the autocorrelation function of the waveform of a music resource fragment exhibits peaks at integer multiples of the pitch period.

Assuming that s(m) is the input speech signal and sn(m) is the n-th frame signal after framing, the autocorrelation function of one frame can be calculated using the following formula:

\(\begin{align}R_{n}(k)=\sum_{m=0}^{N-k-1} s_{n}(m) s_{n}(m+k)\end{align}\)       (2)

In the formula, N is the frame length, k is the delay, and the range of k for which the autocorrelation function is valid is 0 ≤ k ≤ N-1; m is the sample index within the frame and n is the frame index. The fundamental frequency of the signal can be calculated from the non-zero value of k at which the autocorrelation function reaches its maximum.

Autocorrelation functions have the following three properties: 1) if s(m) is a periodic signal, then Rn(k) is also periodic with the same period; 2) the autocorrelation function is largest when k = 0; 3) the autocorrelation function is an even function. The autocorrelation function makes use of the quasi-periodicity of the speech signal: when the delay k is equal to an integer multiple of the pitch period, a peak appears. Because the lowest pitch of the human voice is about 60 Hz, the frame length should be more than 30 ms when calculating the pitch.
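The following Python sketch shows pitch extraction based on the autocorrelation function of Eq. (2): it locates the lag at which the autocorrelation of one frame peaks and converts that lag to a frequency. The 60-500 Hz search range, the 8 kHz sampling rate and the helper name are illustrative assumptions, not values fixed by the paper.

```python
import numpy as np

def estimate_pitch_autocorr(frame: np.ndarray, sample_rate: int,
                            f_min: float = 60.0, f_max: float = 500.0) -> float:
    """Estimate the fundamental frequency of one frame via the autocorrelation
    function of Eq. (2); the search range f_min-f_max Hz is an assumption."""
    n = len(frame)
    # R_n(k) = sum_{m=0}^{N-k-1} s_n(m) s_n(m+k), taken for non-negative lags k.
    r = np.correlate(frame, frame, mode="full")[n - 1:]
    k_min = int(sample_rate / f_max)               # smallest lag of interest
    k_max = min(int(sample_rate / f_min), n - 1)   # largest lag of interest
    k_peak = k_min + int(np.argmax(r[k_min:k_max + 1]))
    return sample_rate / k_peak                    # pitch period -> frequency

# Example: a 200 Hz sine framed at 8 kHz (30 ms frame = 240 samples).
sr = 8000
t = np.arange(int(0.03 * sr)) / sr
frame = np.sin(2 * np.pi * 200.0 * t)
print(round(estimate_pitch_autocorr(frame, sr)))   # ~200
```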

After the autocorrelation function is obtained, the features of massive music resource information are extracted, and the variables of music resource information are set to X1, X2, ⋯, Xn. Each Xk follows a standard normal distribution, and E(·) denotes the expectation. The characteristic expression for extracting music resource information is:

\(\begin{align}Z_{n}=\sum_{k=1}^{n} X_{k}-E\left(\sum_{k=1}^{n} X_{k}\right)\end{align}\)       (3)

Based on the feature extraction of music resource information, the similarity between features is calculated, and the feature information is classified according to its similarity.

2.2 Feature Classification of Music Resource Information

The attribute division of music resource information can be realized by multi-module classification, and the precondition of classification is to compute feature similarity. The similarity Sim(x, y) between music resource information x and y is calculated with formula (4):

\(\begin{align}\operatorname{Sim}(x, y)=1-\frac{\sum_{i=1}^{n}\left(w_{i} \times Z_{n} \times|\mu(x)-\mu(y)|\right) \times \gamma}{\sum_{i=1}^{n}\left(w_{i} \times \delta \times \operatorname{Max}(\sigma(x), \sigma(y))\right)}\end{align}\)       (4)

In the formula, δ is the keyword feature vector of music fragment information, i indexes the features of music resource information, wi is the smoothing coefficient, γ is the friction coefficient, σ is the weight coefficient of music information keywords, and µ is the embedding dimension.
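The following Python sketch illustrates one plausible reading of Eq. (4). The per-feature statistics µ and σ, the weights wi, and the coefficients γ, δ and Zn are all assumed inputs, since the paper does not fix their values:

```python
import numpy as np

def feature_similarity(mu_x, mu_y, sigma_x, sigma_y, w, z_n, gamma, delta):
    """Hedged sketch of the similarity measure Sim(x, y) in Eq. (4); all
    coefficient values here are illustrative assumptions."""
    mu_x, mu_y = np.asarray(mu_x), np.asarray(mu_y)
    sigma_x, sigma_y = np.asarray(sigma_x), np.asarray(sigma_y)
    w = np.asarray(w)
    numerator = np.sum(w * z_n * np.abs(mu_x - mu_y)) * gamma
    denominator = np.sum(w * delta * np.maximum(sigma_x, sigma_y))
    return 1.0 - numerator / denominator

# Example with illustrative per-feature statistics for two music fragments.
sim = feature_similarity(mu_x=[0.2, 0.5], mu_y=[0.25, 0.45],
                         sigma_x=[1.0, 1.2], sigma_y=[0.9, 1.1],
                         w=[0.6, 0.4], z_n=0.8, gamma=0.5, delta=1.0)
print(round(sim, 3))   # value close to 1 indicates similar fragments
```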

It is necessary to classify the music resources to reduce the mutual interference of multiple music resources. The music resource matrix is Z, which contains z samples and p variables. The matrix Z is normalized, and the mutation information of multiple resources is constructed. A comprehensive variable f0 = Za0 is obtained from Z by linear combination such that its variance is maximized, where a0 is the linear transformation matrix. After normalization of the music resource information, \(\begin{align}V=\frac{1}{T}\end{align}\) is used as the covariance matrix, and the variance expression of f0 is obtained as follows:

\(\begin{align}V\left(f_{0}\right)=\frac{1}{T}\left\|f_{0}\right\|^{2}\end{align}\)       (5)

In the formula, T is the retrieval time for music fragment information. According to the variance results, the music resource information features are solved and the preprocessing of music resource information is completed, namely:

L = V(f0)(Za0 - 1)       (6)

Setting the partial derivative of L with respect to the normalized characteristic variable Vi of the music resource information matrix Z to zero gives the expression of the classification result:

\(\begin{align}\frac{L}{V_{i}}=2 Z V_{i}-2 \lambda a_{0}=0\end{align}\)       (7)

In the formula, λ is the eigenvalue. By solving the eigenvalue problem of the music resource information, the normalized characteristic variable a0 corresponding to the maximum eigenvalue λ of the music resource information matrix Z is obtained.
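Read as a principal-component style solution, Eqs. (5)-(7) amount to normalizing the music resource matrix Z, forming a covariance-like matrix and taking the eigenvector of its largest eigenvalue. The Python sketch below follows that standard reading; the normalization details and the example data are assumptions:

```python
import numpy as np

def principal_component_direction(Z: np.ndarray):
    """Hedged sketch of the classification step of Eqs. (5)-(7): find a0 such
    that the variance of f0 = Z a0 is maximized (principal-component reading)."""
    # Normalize each variable (column) to zero mean and unit variance.
    Z_norm = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-12)
    T = Z_norm.shape[0]
    V = (Z_norm.T @ Z_norm) / T            # covariance-like matrix
    eigvals, eigvecs = np.linalg.eigh(V)   # symmetric eigendecomposition (ascending)
    a0 = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    f0 = Z_norm @ a0                       # comprehensive variable with maximal variance
    return a0, f0

# Example: 100 samples with 6 extracted features (hypothetical data).
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 6))
a0, f0 = principal_component_direction(Z)
print(a0.shape, round(float(f0.var()), 3))
```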

3. Application of Ant Colony Algorithm in Massive Music Resources Retrieval

3.1 Characteristic Data Association Method for Mass Music Resource Fragments

Based on the principle of the ant colony algorithm, it is assumed that a music target can generate one set of characteristic data at a given time [11]. At the same time, a set of feature data can only be associated with one music segment target. When the music segment target enters the observation area, the ant colony starts the first cycle and searches and iterates to find the shortest path until the music segment target leaves the observation area.

The steps for determining the association of the multi-segment target feature data are as follows:

Step 1: When the characteristic data of music resources are obtained, ant j is placed on the target search track of the massive music resources, the number of tracks is recorded as D, and the calculation of the correlation degree of the music resource features on the search track is started;

Step 2: Ant j chooses the target search feature of the massive music resources with transition probability pj. At the same time, a tabu list tabuj and a temporary pool temppool(j) are set; tabuj is used to record the trajectory-feature association pairs selected by ant j, and temppool(j) is used to save the probability values selected by ant j. The formula for calculating the transition probability is

\(\begin{align}p_{j}=\left\{\begin{array}{cc}\frac{\left(\alpha \cdot \tau_{j}+\beta \cdot \eta_{j}\right) \times D_{j}}{\sum_{j=1} \alpha \cdot \tau_{j}+\beta \cdot \eta_{j}} & j \neq t a b u_{j} \\ 0 & \text { otherwise }\end{array}\right.\end{align}\)       (8)

In the formula, α represents the weight parameter of the pheromone τj, β represents the weight parameter of the visibility parameter ηj, and Dj represents the moving distance of ant j along its trajectory.

Step 3: In the process of path optimization, ants deposit pheromone τj in real time. In order to prevent the pheromone on any one path from increasing rapidly and leading to a local optimum, the initial pheromone is set to τmax and the value range is restricted to τj ∈ [τmin, τmax].

Step 4: The ant updates the τj based on the local update rule in the process of path selection. When path selection is complete, the τj update rule changes to a global update rule. The global update rule expression is:

τj = pj(1 - ρ) + ∆τj       (9)

In the formula, ρ represents the pheromone update coefficient, (1 − ρ) represents the pheromone residue coefficient, and ∆τj represents the pheromone increment, updated according to the ranking information of the path lengths of the σ − 1 best ants. The calculation formula is as follows:

\(\begin{align}\Delta \tau_{j}=\sum_{j=1}^{D} \Delta \tau_{j}^{D_{j}}\end{align}\)       (10)

In the formula, \(\begin{align}\Delta \tau_{j}^{D_{j}}\end{align}\) represents the pheromone retained by ant j on trajectory Dj.

Step 5: When the correlation matrix is determined, all the trajectory motion feature correlation information is recorded, and the correct multi-music segment target feature data association results are obtained.
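The following Python sketch summarizes Steps 1-5 under stated assumptions: pheromone and visibility are combined additively as in Eq. (8), and pheromone is bounded in [τmin, τmax] (Step 3) and updated locally and globally (Step 4, Eqs. (9)-(10)). The visibility matrix eta, the parameter values and the scoring of an association are illustrative choices, not values given in the paper:

```python
import numpy as np

def aco_associate(eta: np.ndarray, n_ants: int = 20, n_iter: int = 50,
                  alpha: float = 1.0, beta: float = 2.0, rho: float = 0.1,
                  tau_min: float = 0.01, tau_max: float = 1.0):
    """Hedged sketch of the track/feature association in Steps 1-5.
    eta is an (n_tracks x n_features) visibility/affinity matrix (assumed input)."""
    n_tracks, n_feats = eta.shape
    tau = np.full_like(eta, tau_max)           # initial pheromone set to tau_max (Step 3)
    best_assign, best_score = None, -np.inf
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        for _ in range(n_ants):
            tabu = set()                       # features already associated (tabu list)
            assign, score = [], 0.0
            for d in range(n_tracks):
                weights = alpha * tau[d] + beta * eta[d]   # additive form, as in Eq. (8)
                weights[list(tabu)] = 0.0                  # forbidden choices get zero probability
                p = weights / weights.sum()
                j = int(rng.choice(n_feats, p=p))          # stochastic transition
                tabu.add(j)
                assign.append(j)
                score += eta[d, j]
                tau[d, j] = np.clip(tau[d, j] * (1 - rho) + eta[d, j],  # local update
                                    tau_min, tau_max)
            if score > best_score:
                best_assign, best_score = assign, score
        # Global update: reinforce the best association found so far (Step 4).
        for d, j in enumerate(best_assign):
            tau[d, j] = np.clip(tau[d, j] * (1 - rho) + best_score / n_tracks,
                                tau_min, tau_max)
    return best_assign

# Example: associate 4 track segments with 5 candidate feature sets.
eta = np.random.default_rng(1).random((4, 5))
print(aco_associate(eta))
```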

3.2 Solution of Associated Result Gain of Music Clip Target Feature Data

A balanced ant colony algorithm is used for global and local resource search. Through global search, the ant colony can explore the entire search space and discover more potential music resources; through local search, the ant colony can search the surrounding areas more finely, improving the accuracy of the search results. The balanced ant colony algorithm can fully leverage the advantages of global and local search while ensuring search efficiency, providing richer, more personalized and higher-quality music resource recommendations. This reflects the performance improvement and application expansion of the ant colony algorithm in the field of massive music resource retrieval and better meets the needs of users.

Initialize the cluster center [12, 13]; the update formula of the ant colony algorithm is as follows:

Ea = (Wβ - Wα)/∆τj       (11)

In the formula, Wα and Wβ represent the upper and lower inertia weights. Set the target function variable of the music fragment cluster resource index to Q and obtain the global optimum with the minimum of V(f0) as the constraint condition:

Xbest = (1 - γ)V(f0) + f(xj)δ        (12)

In the formula, f(xj) represents the moving probability of ant j. The ant colony updates its speed and position according to the individual optimum and the global optimum. Using the advantage of the ant colony algorithm, the global optimum Xbest is obtained before the stable stage. During the iterative search, the position of ant j at time T+1 is:

xj(T + 1) = f(xj) + τjλa0       (13)

Based on this, the posterior probability px of massive music resource retrieval is obtained by considering the global optimization problem. According to the inertia weight of the ant colony, the exact retrieval probability of the clustered resources is obtained:

\(\begin{align}p_{x}=\frac{E_{a}\left(\alpha \cdot \tau_{j}+\beta \cdot \eta_{j}\right)}{\sum_{j=1}\left(W_{\beta}-W_{\alpha}\right) f\left(x_{j}\right)}\end{align}\)       (14)

Then the position of ant j in the M-dimensional space can be expressed as Xj = (xj1, xj2, ⋯, xjM). According to the best individual position, the probability of ant j accurately retrieving the resource can be obtained:

pjM = xj(k)(1 - px) + Pbestj(k)ηj(k)       (15)

In the formula, ηj(k) is the trajectory information of the ant colony, and Pbestj(k) is the optimal moving probability. Taking the music resource information with the highest gain ratio as the fulcrum, the music resource information is divided into several modules through the decision tree algorithm [14]:

F = (F1, F2, ⋯, Fh)       (16)

In the formula, F can be divided into h modules according to the category of attributes, expressed as h1, h2, ⋯, hn. The gain rate of the information resources is as follows:

\(\begin{align}H(F)=\sum_{j=1}^{q}\left[q \log _{2} p_{j M}\right]\end{align}\)       (17)

In the formula, q represents the number of samples of the distinguished class hn. Set the unpartitioned attribute as An, which takes different values aq; when An = aq, the samples of that category form a complete subset. The average information of F is as follows:

\(\begin{align}\bar{F}=\sum_{j=1}^{q} a_{q} \log _{2} p_{j M}\end{align}\)       (18)

Using An to divide \(\begin{align}\bar {F}\end{align}\) into several modules, the expression of gain of music resource information is as follows:

\(\begin{align}f\left(\bar{F}, A_{n}\right)=\frac{H(F)}{A_{n}(1-\bar{F}) p_{j M}}\end{align}\)      (19)
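Because several quantities in Eqs. (16)-(19) are left unspecified, the Python sketch below uses the conventional C4.5-style gain ratio as a stand-in for the module partition they describe: resources are split into modules by the values of an attribute An and the split is scored by an information-gain ratio. The "genre" attribute and the labels are hypothetical examples:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gain_ratio_split(attribute_values, labels):
    """Stand-in sketch for the module partition of Eqs. (16)-(19): split the
    resource set F into modules by attribute values and score the split with a
    standard gain ratio (assumed interpretation)."""
    base = entropy(labels)
    modules = {}
    for v, y in zip(attribute_values, labels):
        modules.setdefault(v, []).append(y)
    n = len(labels)
    cond = sum(len(y) / n * entropy(y) for y in modules.values())
    split_info = entropy(attribute_values)
    gain = base - cond
    return modules, (gain / split_info if split_info > 0 else 0.0)

# Example: partition resources by a hypothetical "genre" attribute.
genres = ["pop", "pop", "rock", "rock", "jazz", "jazz"]
labels = ["hit", "hit", "hit", "miss", "miss", "miss"]
modules, score = gain_ratio_split(genres, labels)
print(sorted(modules), round(score, 3))
```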

3.3 Multi-Module Retrieval of Music Resources based on Feature Extraction

Calculate the skewness of the same data in the feature data, use the skewness to classify the feature information, use the triangular fuzzy set algorithm to obtain the identification function, locate the music resource information detection results, and finally obtain the multi-module retrieval results of the resource information [15].

If the retrieval times of the resource information cl of the l-th music category within time T differ greatly, the consistency of the classification of the characteristic information resources is poor and the skewness is large; the consistency of the music resource information is expressed by the approximation degree. The expression of the skewness reliability matrix is as follows:

\(\begin{align}c_{l}(T)=\left[\begin{array}{cccc}1 & c_{1,2}(T) & \cdots & c_{1,l}(T) \\ c_{2,1}(T) & 1 & \cdots & c_{2,l}(T) \\ \vdots & \vdots & \ddots & \vdots \\ c_{l,1}(T) & c_{l,2}(T) & \cdots & 1\end{array}\right]\end{align}\)       (20)

The reliability matrix of skewness has the reliability of both temporal and spatial information. The skewness of the same music resource information retrieved in T time domain is:

\(\begin{align}p_{l}(T)=\frac{1}{4} \sum_{l=1}^{q} c_{l}(T)\end{align}\)       (21)

Using the skewness obtained from the above formula to classify the feature information, the music resource information with the highest reliability is obtained, and the skewness calculation is optimized through the skewness classification performance, which is specifically expressed as follows:

If the characteristic sample of music resource information is (Xk, Yk) and the hidden layer of the resource information database is L, the output of the l-th layer is as follows:

\(\begin{align}L_{l}=\frac{p_{l}(T)\left(X_{k}, Y_{k}\right)}{\bar{F} L \times G_{u}}\end{align}\)       (22)

In the formula, Gu is the maximum eigenvalue of the music resource information, and differential conversion is performed by using the skewness weight matrix. The formula is as follows:

\(\begin{align}G_{u}^{(p, l)}=\frac{L_{l} \times G_{u}^{(p, l-1)}}{\psi}\end{align}\)       (23)

In the formula, ψ is the differential rotation vector and \(\begin{align}G_{u}^{(p, l-1)}\end{align}\) is the differential translation vector. The output layer is recursed backwards using a BP network to optimize the weights of the r-th layer and improve the accuracy of the skewness [16]. The details are as follows:

\(\begin{align}G_{u}^{(p, l)}(r)=\frac{L_{l} \times G_{u}^{(p, l-1)}(r)}{\psi}\end{align}\)       (24)

In order to retrieve music resource information efficiently and accurately, a triangular fuzzy set algorithm is used to obtain the identification function. A fuzzy set in the domain G is expressed as G ∈ g, and e is set as the number of cycles. The vector vi of the identification function is calculated using the following formula:

\(\begin{align}v_{i}=\frac{G_{u}^{(p, l)}(r)}{e}\end{align}\)       (25)

The identification function is optimized by using the (e+1)-th loop operation. Any vi satisfies vi ≥ 1 and vi = 1.

In the loop, vi is computed repeatedly; at this point, vi is assigned to hn, and the clustering partition factor is given in combination with pl(T):

\(\begin{align}G_{i}=\frac{p_{l}(T) \times v_{i}}{e+1}\end{align}\)       (26)

Through the above calculation, a second clustering operation is carried out on the characteristic resource information to distinguish the edges of the classes. For the fuzzy information attribute H, uncertain parameters are set; when massive music resource information interferes, the uncertain parameters are used to eliminate the interfering music resource information [17-20], which is expressed as follows:

\(\begin{align}\sup (H)=\sum_{i=1}^{q} G_{i} / e\end{align}\)       (27)

Distributed multi-module retrieval of music resource information is realized by identifying the music resource information detected by the discriminant function:

\(\begin{align}I\left(X_{k}, Y_{k}\right)=\sum_{x \in X} H\left(X_{k}, Y_{k}\right) \times \lg \frac{p\left(X_{k}, Y_{k}\right)}{\left(X_{k}, Y_{k}\right)}\end{align}\)       (28)

4. Simulation Experiment

4.1 Test Environment

In order to verify the massive music resources retrieval method based on the ant colony algorithm, the corresponding program is simulated with Matlab code in the Matlab R2019b environment. The hardware and software parameters used during testing are as follows (Table 1):

Table 1. Details of hardware and software parameters

4.2 Test Readiness

Based on the Hadoop cloud computing platform, the embedded access interface and compatible database storage of music resources are designed with the Kugou and QQ Music software. The scale of semantic feature segmentation is 1.45, the sample length of the data stream is 1024, the bandwidth is 2-30 kHz, and the time window is 3.6 ms.

4.3 Performance Metrics

The retrieval recall rate, precision rate and time consumption of the proposed method (the massive music resource retrieval method based on the ant colony algorithm), the method of literature [6] (a resource retrieval method based on music dataset collection technology) and the method of literature [7] (a hierarchical music structure analysis method based on graph theory and multi-resolution community detection) are compared; the test results demonstrate the advantages of the proposed method in massive music resource retrieval.

(1) Recall rate: the ratio of the relevant music resources retrieved to the total amount of relevant music resources in the system. Assuming that U is the amount of relevant information retrieved and O is the total amount of relevant information in the system, the recall rate is:

\(\begin{align}U_{o}=\frac{U}{O} \times 100 \%\end{align}\)       (29)

(2) Precision rate: the ratio of the relevant music resources retrieved to the total amount of music resources retrieved. The higher the precision rate, the more accurate the retrieval results and the higher the application value of the method. With R as the total amount of retrieved information, the calculation formula is:

\(\begin{align}U_{r}=\frac{U}{R} \times 100 \%\end{align}\)       (30)

(3) Retrieval time: the time spent on retrieval. The shorter the time, the higher the retrieval efficiency. A minimal computational sketch of the recall and precision metrics follows this list.
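The sketch below computes the recall and precision of Eqs. (29)-(30); the counts are illustrative, not experimental values from the paper:

```python
def recall(U: int, O: int) -> float:
    """Eq. (29): relevant items retrieved U over all relevant items O, in percent."""
    return U / O * 100.0

def precision(U: int, R: int) -> float:
    """Eq. (30): relevant items retrieved U over all retrieved items R, in percent."""
    return U / R * 100.0

# Example with illustrative counts: 930 relevant items retrieved out of 1000
# relevant items in the system, from 960 items retrieved in total.
print(recall(930, 1000))    # 93.0
print(precision(930, 960))  # ~96.9
```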

4.4 Test Results

Taking the recall rate as the test index, the methods of literature [6], literature [7] and this paper are used to retrieve music information resources, and the changes in the average recall rate (%) of the different retrieval methods under different numbers of iterations are compared. The test results are shown in Fig. 1.

Fig. 1. Average recall of different retrieval methods.

As can be seen from Fig. 1, when the number of iterations is 40, the average recall rates of the three retrieval methods are close to each other, but as the number of iterations increases, they begin to diverge. When the number of iterations reaches 150, the average recall of the literature [6] method is about 87%, that of the literature [7] method is about 84%, and that of the proposed method is about 93%. This is because the method in this paper uses a balanced ant colony algorithm for global and local resource search. Through global search, the ant colony can explore the entire search space and discover more potential music resources, improving the average recall rate of music resource retrieval and thus the comprehensiveness of the retrieval results.

Taking the precision rate as the test index, the methods of literature [6], literature [7] and this paper are used to retrieve music information resources, and the average precision rate (%) of the different retrieval methods under different numbers of iterations is compared. The test results are shown in Fig. 2.

Fig. 2. Average precision results of different retrieval methods.

Analyzing the fluctuation of the average precision of the three retrieval methods shows that all three methods achieve high precision, with peak values above 90%. However, the maximum average precision of the method in this paper reaches 97%, which is 4% and 6% higher, respectively, than the peak values of the methods in literature [6] and literature [7], indicating that the retrieval results of the method in this paper are better. This is because the ant colony algorithm is used to update the pheromone, which improves the retrieval accuracy. The superiority of this method in precision rate and retrieval stability is thus verified.

The retrieval time of the different retrieval methods is compared under different amounts of music resource information. The test results are shown in Table 2, where N is the amount of music resource information and the retrieval time is measured in seconds (s).

Table 2. Time-consuming results from different retrieval methods

As can be seen from Table 2, the retrieval time increases as the amount of music resource information increases. When the amount of information reaches 3000 MB, the retrieval time of the literature [6] method is 8.1 s, that of the literature [7] method is 9.3 s, and that of the proposed method is 6.3 s. The retrieval time of the literature [6] method is shorter than that of the literature [7] method but still longer than that of the proposed method; compared with the two literature methods, the retrieval time of the proposed method is reduced by 1.8 s and 3.0 s, respectively, which proves that it has higher retrieval efficiency. This is because the method in this paper adopts the autocorrelation function method to extract the fundamental frequency features of music resources, which is computationally simple and easy to implement, suppresses the influence of noise and reduces the complexity of feature extraction, thereby shortening the time required for music resource retrieval and improving retrieval efficiency.

5. Conclusion

Facing the problems of incomplete retrieval information, inaccurate retrieval results and long retrieval time in traditional music resource information retrieval methods, this paper proposes a massive music resource retrieval method based on the ant colony algorithm. The method extracts pitch features with an autocorrelation function, uses the ant colony algorithm to associate and retrieve music resource features, and introduces skewness calculation for distributed multi-module retrieval of resource information. Experimental results show that the proposed retrieval method has higher recall and precision than traditional retrieval methods and requires less retrieval time.

With the emergence of massive music resources, there are still some problems in the research of distributed multi-module retrieval of music resources information. The following aspects should be studied in depth at the next stage:

(1) Extracting useful features of music resource information is the fundamental problem to improve the retrieval accuracy. How to extract features of music resource information more efficiently and accurately is the research focus of music resource information retrieval in the future.

(2) It is also a problem to be solved in the current field of resource management to design an effective recognition measure of music resources according to the multiple characteristics of music resources and to match users' judgment of the similarity of music resources information.

(3) For music resource information, how to organize and manage it effectively, search it efficiently and avoid unnecessary retrieval errors is one of the problems that urgently need to be solved at present. In the future, it is still necessary to study simpler and more direct index structures and accelerated query algorithms.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflict of interest

The authors declare that they have no competing interests.

Funding Statement

There is no funding in this article.

References

  1. C. E. Cella, "Music information retrieval and contemporary classical music: a successful failure," Trans. Int. Soc. Music Inform. Retr., vol. 3, no. 1, pp. 126-136, Sep. 2020.
  2. M. N. Halgamuge and D. Guruge, "Fair rewarding mechanism in music industry using smart contracts on public-permissionless blockchain," Multimed. Tools Appl., vol. 81, no. 2, pp. 1523-1544, Oct. 2021.
  3. N. Zhou, "Database design of regional music characteristic culture resources based on improved neural network in data mining," Pers. Ubiquitous Comput., vol. 24, no. 1, pp. 103-114, Feb. 2020.
  4. R. Macdonald, R. Burke, T. D. Nora, M. S. Donohue, and R. Birrell, "Our virtual tribe: sustaining and enhancing community via online music improvisation," Front. Psychol., vol. 11, no. 2, pp. 623640, Feb. 2021.
  5. K. Prinz, A. Flexer, and G. Widmer, "On end-to-end white-box adversarial attacks in music information retrieval," Trans. Int. Soc. Music Inform. Retr., vol. 4, no. 1, pp. 93-104, Jul. 2021.
  6. M. Furner, M. Z. Islam, and C. T. Li, "Knowledge discovery and visualisation framework using machine learning for music information retrieval from broadcast radio data," Expert. Syst. Appl., vol. 182, no. 15, pp. 115236, May 2021.
  7. J. D. Berardinis, M. Vamvakaris, A. Cangelosi, and E. Coutinho, "Unveiling the hierarchical structure of music by multi-resolution community detection," Trans. Int. Soc. Music Inform. Retr., vol. 3, no. 1, pp. 82-97, Jun. 2020.
  8. C. F. Huang and S. H. Yao, "Algorithmic composition for pop songs based on lyrics emotion retrieval," Multimed. Tools Appl., vol. 81, no. 9, pp. 12421-12440, Feb. 2022.
  9. H. Z. Wang and Y. Yan, "Cross-modal retrieval with deep learning," J. Harbin Univ. Sci. Technol., vol. 26, no. 01, pp. 9-16, Feb. 2021.
  10. L. Wenige and J. Ruhland, "Similarity-based knowledge graph queries for recommendation retrieval," Semant. Web, vol. 10, no. 6, pp. 1007-1037, 2019.
  11. K. S. Amorim and G. S. Pavani, "Ant colony optimization-based distributed multilayer routing and restoration in IP/MPLS over optical networks," Comput. Netw., vol. 185, no. 4, pp. 107747.1-107747.13, Feb. 2021.
  12. Nurdin, Taufiq, and Fajriana, "Searching the shortest route for distribution of LPG in Medan city using ant colony algorithm," IOP Conf. Ser. Mater. Sci. Eng., vol. 725, no. 1, pp. 012121, Jan. 2020.
  13. G. Lv and S. Chen, "Routing optimization in wireless sensor network based on improved ant colony algorithm," Int. Core J. Eng., vol. 6, no. 2, pp. 1-11, 2020.
  14. T. Xue, S. Y. Bei, and B. Li, "Path planning of intelligent vehicle based on ant colony algorithm," Comput. Simul., vol. 38, no. 12, pp. 362-366, Dec. 2021.
  15. A. Mazidi, M. Mahdavi, and F. Roshanfar, "An autonomic decision tree-based and deadline-constraint resource provisioning in cloud applications," Concurr. Comput. Pract. Exper., vol. 33, no. 10, pp. 6196, Jan. 2021.
  16. J. T. Starczewski, P. Goetzen, and C. Napoli, "Triangular fuzzy-rough set based fuzzification of fuzzy rule-based systems," J. Artif. Int. Soft Comput. Res., vol. 10, no. 4, pp. 271-285, Oct. 2020.
  17. W. Li, "Research on project knowledge management risk early warning based on bp neural network," J. Phys. Conf. Ser., vol. 1744, no. 3, pp. 032250, Feb. 2021.
  18. P. Cardoso and S. Pekar, "Arakno - an R package for effective spider nomenclature, distribution and trait data retrieval from online resources," J. Arachnol., vol. 50, no. 1, pp. 30-32, 2022.
  19. M. Furumura, K. Iwasa, Y. Suzuki, T. Demachi, T. Ishibe, and R. S. Matsu'Ura, "Data Retrieval system of jma analog seismograms in the headquarters for earthquake research promotion of the Japanese government," Seismol. Res. Lett., vol. 91, no. 3, pp. 1403-1412, Jan. 2020.
  20. A. Rosewelt, and A. Renjit, "Semantic analysis-based relevant data retrieval model using feature selection, summarization and CNN," Soft Comput., vol. 24, no. 22, pp. 16983-17000, Nov. 2020.