• Title/Summary/Keyword: searching system

Search Result 1,944, Processing Time 0.028 seconds

A Study on Improving the Sijo Database of the Period of Modern Enlightenment and Developing Its Content (계몽기·근대시조 DB의 개선 및 콘텐츠화 방안 연구)

  • Chang, Chung-Soo
    • Sijohaknonchong
    • /
    • v.44
    • /
    • pp.105-138
    • /
    • 2016
  • Recently, the "XML Digital Collection of Sijo Texts in the Period of Modern Enlightenment" database, equipped with a research search function, has been made available through the Korean Research Memory (http://www.krm.or.kr), laying the foundation for building content around Sijo texts of that period. In this paper, I review the characteristics and problems of this digital collection, search for improvements, and explore how it can be developed into content. The database is significant, first, for integrating and giving an overview of the vast body of Sijo from the Period of Modern Enlightenment, which reaches some 12,500 pieces. It is also the first Sijo database to offer a variety of search features: by source publication, poet name, work title, original text, period, and so on. Nevertheless, the database has limits for verifying the overall aspects of Sijo in this period. Titles and original texts written in archaic language or Chinese characters cannot be found by search, because no standardized modern-language version of each text is provided. Works and individual Sijo pieces released after 1945 are also missing. Extracting data by poet is inconvenient because poets are recorded in various ways, such as under their real names or pen names. To solve these problems and improve the usability of the database, I propose providing standardized modern-language texts, assigning content index terms, supplying information on work format, and so on. Furthermore, if the database were rebuilt with the character of a Sijo culture information system, it could be linked with academic and educational content.
As specific plans, I suggest the following: learning-support materials on modern history and on perceptions of the national territory in the modern age; source materials for studying indigenous animal and plant characters and for creating commercial characters; and use as a Sijo learning tool, such as a Sijo game.

  • PDF

A Comparison of Coincidence between the Light Field and the Radiation Field Using Film and BIS (필름과 BIS 영상장치를 이용한 광/방사선조사야 일치성 비교평가)

  • Bang, Dong-Wan;Seok, Jin-Yong;Jeong, Yun-Ju;Choi, Byeong-Don;Park, Jin-Hong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.16 no.2
    • /
    • pp.33-41
    • /
    • 2004
  • Purpose: Film has been the primary tool for testing coincidence between the light field and the radiation field, one item on the quality assurance checklist for a linear accelerator. With film, however, results can vary considerably between observers. This study therefore used the BIS (Beam Image System) in addition to film to compare and evaluate coincidence between the two fields and to search for ways to improve the measurement. Materials and Methods: Photon beams of 6 and 15 MV from a linear accelerator were delivered to film and to the BIS. The light and radiation fields were each set to 50×50, 100×100, and 200×200 mm². The gantry angle was 0° for film and 0° and 270° for the BIS. Coincidence on film was tested with a ruler and with a film scanner; with the BIS, the widths of the scanned light and radiation fields were measured along the X and Y axes to determine the errors. Results: Visual measurement of the film showed that the radiation field was larger than the light field, with a maximum error of 1.9 mm. The film-scanner measurements agreed, except that the average error was less than 1.9 mm. In contrast, the BIS measurements showed the light field to be larger than the radiation field at gantry angles of 0° and 270°, with a maximum error of 0.96 mm and an error range within ±2 mm on both axes. The average errors of ΔX and ΔY ranked in the order of visual film, film-scanner, and BIS measurements. Conclusion: Because errors differ markedly between observers when field coincidence is tested visually with film, careful measurement is required for accurate quality assurance.
An observer should turn to another imaging device, or develop a measuring device of his own, where accurate measurement demands it.
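The ΔX/ΔY comparison above amounts to differencing measured field widths per axis; a minimal sketch of such a tabulation, with hypothetical measurements rather than the paper's data:

```python
def field_errors(light, radiation):
    """Per-axis deviation (radiation - light) in mm, with max and mean
    absolute error, mirroring a DeltaX/DeltaY coincidence check."""
    deltas = {axis: radiation[axis] - light[axis] for axis in light}
    mags = [abs(d) for d in deltas.values()]
    return deltas, max(mags), sum(mags) / len(mags)

# Hypothetical scanned widths (mm) for a nominal 100 x 100 mm^2 field.
light = {"X": 100.0, "Y": 100.0}
radiation = {"X": 101.2, "Y": 99.4}
deltas, max_err, mean_err = field_errors(light, radiation)
print(deltas, max_err, mean_err)
```

A tolerance check (e.g., flagging any |Δ| above ±2 mm) would then be one comparison per axis.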

  • PDF

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.93-111
    • /
    • 2013
  • As the Internet and information technology (IT) continue to develop and evolve, big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed, and analyzed by conventional information systems; the term also refers to the new technologies designed to extract value from such data effectively. With the widespread dissemination of IT systems, continual efforts have been made in fields such as R&D, manufacturing, and finance to collect and analyze immense quantities of data, to extract meaningful information from them, and to use that information to solve various problems. Since IT has converged with many industries, digital data are now generated at a remarkably accelerating rate, while advances in state-of-the-art technology continually enhance system performance. The types of big data currently receiving the most attention include information available within companies, such as consumer characteristics, purchase records, logistics information, and log data on consumers' usage of products and services, as well as information accumulated outside companies, such as the web search traffic of online users, social network information, and patent information. Among these, the web searches performed by online users constitute one of the most effective and important sources of information for marketing, because consumers search the internet for information in order to make efficient and rational choices. Recently, Google made its information on users' web search traffic publicly accessible through a service named Google Trends.
Research that uses this web search traffic information to analyze the information-search behavior of online users is now receiving much attention in academia and industry. Such studies fall broadly into two fields. The first consists of empirical demonstrations that web search information can forecast social phenomena, consumer purchasing power, the outcomes of political elections, and so on. The other uses web search traffic information to observe consumer behavior, for example by identifying the product attributes that consumers regard as important or by tracking changes in consumers' expectations; relatively little research has been completed in this second field. In particular, to the best of our knowledge, hardly any brand-related studies have attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers search the web, they may enter a single keyword, but they also often enter multiple keywords to seek related information (referred to here as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and learn their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand. Web search traffic shows that the volume of simultaneous searches on certain keywords increases when the relation between them is closer in the consumer's mind, so the relations between keywords can be derived by collecting this relational data and subjecting it to network analysis.
Accordingly, this study proposes a method for analyzing how consumers position brands, and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, focusing on tablet PCs, an innovative product group.
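The simultaneous-search network analysis described above can be sketched as follows; the keyword pairs and volumes are hypothetical placeholders, not the study's data, and `weighted_degree` is a simple stand-in for a fuller network-centrality analysis:

```python
from collections import defaultdict

# Hypothetical simultaneous-search volumes (keyword pairs entered together),
# standing in for Google Trends-style co-search data.
co_search = {
    ("iPad", "Galaxy Tab"): 120,
    ("iPad", "battery"): 80,
    ("Galaxy Tab", "battery"): 30,
    ("iPad", "display"): 60,
    ("Galaxy Tab", "display"): 55,
}

def brand_network(pairs):
    """Build an undirected adjacency map whose edge weights are co-search volumes."""
    adj = defaultdict(dict)
    for (a, b), w in pairs.items():
        adj[a][b] = w
        adj[b][a] = w
    return adj

def weighted_degree(adj):
    """Rank keywords by total co-search volume, a simple centrality proxy."""
    return {node: sum(nbrs.values()) for node, nbrs in adj.items()}

adj = brand_network(co_search)
print(weighted_degree(adj))
```

Keywords with high weighted degree would sit centrally in the positioning map, and strong brand-attribute edges indicate which attributes consumers associate with a given brand.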

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various directions with the rapid growth of online business, companies carry out campaigns of many kinds on a scale that cannot be compared with the past. However, customers increasingly perceive campaigns as spam, as fatigue from duplicate exposure grows. From the corporate standpoint, the effectiveness of campaigns is also decreasing: investment costs rise while the actual success rate stays low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. The ultimate purpose of a campaign system is to increase the success rate of campaigns by collecting and analyzing various customer-related data and applying them to the campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Selecting appropriate features is very important because campaign data have so many of them. If all of the input data are used to classify a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the whole. Moreover, when a model is trained on too many features, prediction accuracy may degrade through overfitting or correlation between features. To improve accuracy, therefore, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing any high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer from poor classification performance and long learning times. In this study, we therefore propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The aim is to improve the existing sequential SFFS method in the search for the feature subsets that underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithms: campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), or recursive feature elimination (RFE). In addition, the improved feature selection algorithm helped in analyzing and interpreting the prediction results by providing the importance of the derived features. Some of the important features, such as age, customer rating, and sales, were already known statistically.
Unlike what campaign planners had used before, features such as the combined product name, the average three-month data consumption rate, and wireless data usage over the last three months were unexpectedly selected as important for campaign response, although planners had rarely used them to select campaign targets. It was also confirmed that base attributes can be very important features depending on the campaign type. Through this, the important characteristics of each campaign type can be analyzed and understood.
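As a reference point for the greedy baselines mentioned above, a minimal SFS loop looks like this; the `toy_score` function and feature utilities are invented for illustration and are not the proposed algorithm:

```python
def sequential_forward_selection(features, score):
    """Greedy SFS: start from the empty set and repeatedly add the feature
    that most improves the subset score; stop when no feature helps."""
    selected, remaining = [], list(features)
    best = score(selected)
    while remaining:
        candidates = [(score(selected + [f]), f) for f in remaining]
        top_score, top_feat = max(candidates)
        if top_score <= best:
            break
        selected.append(top_feat)
        remaining.remove(top_feat)
        best = top_score
    return selected, best

# Toy scorer: each feature has a standalone utility, and a quadratic
# penalty stands in for overfitting as the subset grows.
utility = {"age": 0.30, "customer_rating": 0.25, "sales": 0.20,
           "noise_a": 0.02, "noise_b": 0.01}

def toy_score(subset):
    return sum(utility[f] for f in subset) - 0.05 * len(subset) ** 2

print(sequential_forward_selection(utility, toy_score))
```

In practice `score` would be a cross-validated model metric; SFFS extends this loop with a backward "floating" step that can drop previously added features.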

A Study on the Resilience Process of Persons with Disabilities (중도장애인의 레질리언스(Resilience) 과정에 관한 연구)

  • Kim, Mi-Ok
    • Korean Journal of Social Welfare
    • /
    • v.60 no.2
    • /
    • pp.99-129
    • /
    • 2008
  • This study analyzed the resilience process of persons with disabilities using the grounded theory approach. The researcher conducted in-depth interviews with 8 persons with disabilities. In the data analysis, 393 concepts on the resilience process were identified and categorized into 45 sub-categories and 18 primary categories. In the paradigm model of the resilience process, the causal conditions were identified as 'unawareness of disability before becoming disabled', 'extreme pain', and 'repressing psychological pain', and the contextual conditions were 'disempowerment from staying at home', 'self-isolation due to difficulty accepting the disability', and 'experiences of frustration from social barriers and prejudice against persons with disabilities'. The resilience process was also found to depend on the type and degree of disability, gender, and the length of time since becoming disabled. In spite of these causal and contextual conditions, the central phenomenon through which persons with disabilities could acquire resilience was identified as 'enhancement of the power of positive thinking'. The intervening conditions that accelerate or retard this central phenomenon were, externally, 'the awareness of not being alone, through family, friends, neighbors and the social system' and, internally, 'finding purpose in life through religion and through help from other persons with disabilities'. The action/interaction strategies were continued effort, self-searching and active acting; as a result, persons with disabilities could find comfort in life, participate in society and change society's perspective on disability. The core category of the resilience process was a belief in affirmation and the choice of life on one's own initiative.
In the process analysis, the stages developed as follows: 'pain', 'strangeness', 'reflection', and 'daily life'. These stages were continuous and causal rather than discrete and complete. Within this process, the types of resilience were divided into 'existence reflection', 'course development', 'implicit endeavor', and 'active execution'. Using grounded theory, this study detailed the paradigm model, process and types with an in-depth understanding of the resilience process of persons with disabilities, and contributes to theory construction as well as to policy and clinical involvement in the study of persons with disabilities.

  • PDF

Exploring the Temporal Relationship Between Traffic Information Web/Mobile Application Access and Actual Traffic Volume on Expressways (웹/모바일-어플리케이션 접속 지표와 TCS 교통량의 상관관계 연구)

  • RYU, Ingon;LEE, Jaeyoung;CHOI, Keechoo;KIM, Junghwa;AHN, Soonwook
    • Journal of Korean Society of Transportation
    • /
    • v.34 no.1
    • /
    • pp.1-14
    • /
    • 2016
  • In recent years, the internet has become accessible anytime and anywhere to anyone with a smartphone, making travel information more convenient to reach in both the pre-trip and en-route phases. The main objectives of this study are to conduct stationarity tests on traffic information web/mobile application access indexes and to analyze the relationship between those indexes and actual traffic volume from the TCS (Toll Collection System), in order to characterize how expressway travel information is searched. The key findings are as follows. First, the results of ADF and PP tests confirm that the web/mobile application access indexes by time period satisfy stationarity conditions even without log or differential transformation. Second, Pearson correlation tests show a strong positive correlation between the web/mobile application access indexes and expressway entry and exit traffic volumes; in contrast, truck entry volume from the TCS has no significant correlation with the access indexes. Third, the time-gap relationships between the time-series variables (concurrent, leading, and lagging) were analyzed by cross-correlation tests. The results indicate that mobile application access leads web access, and that the number of mobile application executions is concurrent with all web access indexes. Lastly, no web/mobile application access index led expressway entry traffic volume; the highest correlations were observed between webpage view/visitor/new visitor/repeat visitor/application execution counts and expressway entry volume with a lag of one hour. If the data are subdivided by time period and area and combined with traffic information users' locations, specific individual travel behavior, such as route conversion times and ratios, is expected to become predictable.
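The cross-correlation (lead/lag) test described above can be sketched in plain Python; the hourly figures are made up so that the access index leads traffic volume by exactly one hour:

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def cross_correlation(x, y, max_lag):
    """corr(x[t], y[t + lag]) for each lag in -max_lag..max_lag.
    A peak at a positive lag means x leads y by that many periods."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xs, ys = x[:len(x) - lag], y[lag:]
        else:
            xs, ys = x[-lag:], y[:len(y) + lag]
        out[lag] = pearson(xs, ys)
    return out

# Made-up hourly series: a web access index, and entry traffic volume
# that follows it one hour later.
access = [5, 7, 6, 9, 12, 10, 8, 11, 13, 9, 7, 6]
volume = [60] + [10 * a for a in access[:-1]]
ccf = cross_correlation(access, volume, 2)
print(max(ccf, key=ccf.get))  # lag with the highest correlation
```

A stationarity pre-check (ADF/PP, as in the study) would normally precede this step, e.g. via `statsmodels.tsa.stattools.adfuller`.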

A Study on the Effectiveness of Care of Patients with Alzheimer's Disease According to Residence Arrangement and Types of Services (치매노인의 거주형태 및 서비스유형에 따른 간호관리의 효과분석)

  • 홍여신;박현애;조남옥
    • Journal of Korean Academy of Nursing
    • /
    • v.26 no.4
    • /
    • pp.768-781
    • /
    • 1996
  • The care of patients with Alzheimer's disease and of their families has become a consciousness-raising social policy issue in Korea. The government of the Republic of Korea has become cognizant of the situation and has begun searching for remedies. A comprehensive understanding is therefore needed of the situation in which patients and their families are struggling and of the enormous problems of care. Recognizing this urgent need, this study investigated the situation and care needs of families of patients with Alzheimer's disease, and compared the effectiveness of the services the families used in terms of cost and of effects on the patient's condition and on family life. The subjects were 29 families with hospitalized patients, 25 families using hospital outpatient clinics, 14 families using day care facilities, and 16 families with homebound patients. A total of 84 families were interviewed by four trained interviewers using structured and semi-structured questionnaires. The data included the patient's stage of Alzheimer's disease, the patient's bizarre behavior, hours spent on patient care per day, family burden and quality of life, direct and indirect costs of care, and the family's evaluation of the effectiveness of the services received. The data were analyzed to determine the relationships among family characteristics, patient condition, and service utilization. The effectiveness of each service was assessed through the families' evaluations and hoped-for services, and the services were compared in terms of cost-effectiveness ratios. After the initial comparison of cost-effectiveness ratios, the groups were further compared on incremental effectiveness per incremental unit of cost to determine the most cost-effective services. The findings are as follows. 1.
The choice of living arrangement and type of service is a function of the stage of the Alzheimer's condition and the economic status of the family. 2. Comparison of the costs of care showed that the largest expenses were incurred by families with hospitalized patients, followed by families using outpatient services and families using day care services, in that order; the least expense was involved in the care of homebound patients. The economic burden felt by families followed the same order. 3. The average number of hours spent on daily patient care was 9.9 for outpatient clinic users, 9.7 for homebound patients, and 5.4 for day care users. 4. There were significant differences in patient condition (CDRL), bizarre behaviors, and family burden by living arrangement and/or type of service; however, no significant difference was found between groups in the family's quality of life. 5. The families rated the day care center services as most effective for the care of patients and families, except for a few families who had experienced some improvement in the patient's condition. Outpatient clinic users expressed mainly psychological comfort that the patient was being taken care of. Families of hospitalized patients expressed relief at being freed of the burden of care and comfort that the patient was being professionally cared for. From the analysis of costs, hours of patient care, patients' bizarre behaviors, family quality of life and burden, and family evaluations of services, it is concluded that up to the mid stage of the Alzheimer's condition, day care center services are the most cost-effective; toward the end stage of the disease, the establishment of long-term or short-term inpatient facilities is hoped for, to protect patients and preserve the integrity of families at lower cost. Thus,
it was concluded that a family-centered system of care, with systematic support systems developed for patients and their families according to the family's needs as the patient's condition deteriorates, is the most effective for Korea.
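The incremental comparison described above is the standard incremental cost-effectiveness ratio; a minimal sketch with hypothetical figures, not the study's data:

```python
def icer(cost_ref, effect_ref, cost_alt, effect_alt):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect when moving from a reference service to an alternative."""
    return (cost_alt - cost_ref) / (effect_alt - effect_ref)

# Hypothetical monthly cost (arbitrary currency units) and effectiveness
# scores for home care vs. day care.
home_cost, home_eff = 50, 2.0
daycare_cost, daycare_eff = 110, 5.0
print(icer(home_cost, home_eff, daycare_cost, daycare_eff))
```

The service with the lowest extra cost per extra unit of effect (relative to the cheapest option) is then judged the most cost-effective.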

  • PDF

Development Plan of Guard Service According to the LBS Introduction (경호경비 발전전략에 따른 위치기반서비스(LBS) 도입)

  • Kim, Chang-Ho;Chang, Ye-Chin
    • Korean Security Journal
    • /
    • no.13
    • /
    • pp.145-168
    • /
    • 2007
  • As society changes into an information-oriented one, the guard service needs to change as well. Communication and hardware technologies are developing rapidly and, as the internet environment shifts from cable to wireless, people can reach every kind of information service using portable wireless devices such as laptop computers, PDAs, and mobile phones. LBS (location-based services), which deliver the needed information and services at any time and place on a variety of devices, are expanding their territory all the more with the appearance of the ubiquitous-computing concept. LBS use a chip in the mobile phone to confirm the position of a subscriber at any time, to within several tens of centimeters to hundreds of meters. By service method, LBS can be divided into those using mobile communication base stations and those applying satellites. Each service type can in turn be divided into location-tracking services, public safety services, location-based information services, and so on, and these are the parts to be planned into the development of the guard service. The market was projected to reach 8,460 hundred million won in 2005 and 16,561 hundred million won in 2007. Given this situation, it can be assumed that the guard service must change rapidly in step with LBS applications. The study method is basically a documentary review, relying mainly on secondary documentary examination of academic journals and monographs published at home and abroad, internet searches, research reports of all kinds, statute books, theses published by the public order research institutes of the Regional Police Headquarters, police operations data, statute-related data, and documents and statistical data from private guard companies.
The purpose of the study is thus to explore the application of LBS and to present problems and improvements by indirectly analyzing the manager's side, which operates guard services adapted to LBS; the government's side, which must activate LBS; and the systems, operations management, manpower management, and education and training related to the guard training side, which must study and teach the new guard service as it is applied, with the further intent of achieving excellent-quality guard services.

  • PDF

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization, a problem that has drawn the interest of many researchers and that requires professionals capable of classifying relevant information; hence text classification was introduced. Text classification, a challenging task in modern data analysis, assigns a text document to one or more predefined categories or classes. Available techniques include K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. When dealing with huge amounts of text data, however, model performance and accuracy become a challenge: depending on the types of words used in the corpus and the types of features created, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, instead of proposing or modifying an algorithm, we focus on searching for a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built, and real-world datasets usually contain noise that can affect the decisions made by classifiers built from them.
In this study, we consider that data from different domains, that is, heterogeneous data, may carry noise-like characteristics that can be utilized in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. With unstructured data such as text, however, the features are determined by the vocabulary of the documents: if the viewpoints of the training data and target data differ, the features may appear different between the two. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into its construction. Data from various sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms, as they are not designed to recognize different types of data representation at once and bring them into the same generalization. To utilize heterogeneous data in learning the document classifier, we apply semi-supervised learning. Because unlabeled data can degrade classifier performance, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features with different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision.
In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
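RSESLA itself is not reproduced here; the following is a generic self-training sketch of the underlying idea of keeping only confidently classified unlabeled documents, with an invented margin rule and a toy centroid classifier:

```python
from collections import Counter

def centroid(docs):
    """Average word-count vector of a list of tokenized documents."""
    total = Counter()
    for d in docs:
        total.update(d)
    n = len(docs)
    return {w: c / n for w, c in total.items()}

def similarity(doc, cent):
    """Unnormalized dot product between a document and a class centroid."""
    counts = Counter(doc)
    return sum(counts[w] * cent.get(w, 0.0) for w in counts)

def self_train(labeled, unlabeled, margin=0.5):
    """One self-training round: pseudo-label unlabeled docs and keep only
    those whose best-class score beats the runner-up by `margin`."""
    cents = {c: centroid(docs) for c, docs in labeled.items()}
    for doc in unlabeled:
        scores = sorted(((similarity(doc, cent), c) for c, cent in cents.items()),
                        reverse=True)
        (s1, c1), (s2, _) = scores[0], scores[1]
        if s1 - s2 >= margin:          # confident: add to the training set
            labeled[c1].append(doc)
    return labeled
```

In RSESLA's terms, the margin rule stands in for selecting the most confident classification rules, and repeating the round with different feature views would approximate the ensemble of views.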

A Digital Audio Watermarking Algorithm Using a 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.97-107
    • /
    • 2011
  • Nowadays there are many copyright infringement issues on the internet, because digital content on the network can be copied and delivered easily, and the copy has the same quality as the original. Copyright owners and content providers therefore want a powerful solution to protect their content. A popular solution was DRM (digital rights management), based on encryption technology and rights control. However, DRM-free services were launched after Steve Jobs, the CEO of Apple, proposed a new music service paradigm without DRM, and DRM has since disappeared from the online music market. Even though online music services decided not to include DRM, copyright owners and content providers are still searching for a way to protect their content. One solution that can replace DRM is digital audio watermarking, which can embed copyright information into the music itself. In this paper, the author proposes a new audio watermarking algorithm with two features. First, the watermark information is generated from a two-dimensional barcode carrying an error correction code, so the information can recover itself as long as the errors fall within the error tolerance. Second, the algorithm uses the chip sequences of CDMA (code division multiple access). Together these make the algorithm robust to several malicious attacks. Among the many 2D barcodes, the QR code, a matrix barcode, can express information more freely than other matrix barcodes. A QR code has doubled square patterns at three corners that mark the boundary of the symbol, a feature well suited to carrying watermark information. That is, because the QR code is a two-dimensional, nonlinear matrix code, it can be modulated into a spread spectrum and used in the watermarking algorithm.
The proposed algorithm assigns a different spread-spectrum sequence to each individual user. When the assigned code sequences are orthogonal, the watermark information of each individual user can be identified in the audio content. The algorithm uses Walsh codes as the orthogonal codes: the watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by the Walsh code, and the modulated watermark is embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol" and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a subwoofer boost. MP3 compression was performed with Cool Edit Pro 2.0 at CBR (constant bit rate) 128 kbps, 44,100 Hz, stereo. The echo attack applied an echo with initial volume 70%, decay 75%, and delay 100 ms. The subwoofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. Under the MP3 attack, the strength of the watermark information was unaffected, and the watermark could be detected in all of the sample audios. Under the subwoofer boost attack, the watermark was detected when the embedding strength was 0.3; under the echo attack, the watermark could be identified when the strength was greater than or equal to 0.5.
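The Walsh-code spreading and despreading step can be illustrated generically; this sketch omits the paper's QR-code generation and DCT-domain embedding, and the bit pattern and code index are arbitrary:

```python
def walsh_matrix(n):
    """Hadamard/Walsh matrix of order n (n a power of two); its rows are
    mutually orthogonal +/-1 spreading codes."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def spread(bits, code):
    """Spread each watermark bit (0/1 mapped to -1/+1) by a user's Walsh code."""
    return [b * c for bit in bits for b in [2 * bit - 1] for c in code]

def despread(signal, code):
    """Correlate chunks of the received signal with the code to recover bits."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits
```

Because distinct Walsh rows are orthogonal, despreading with one user's code cancels the contribution of the other users' codes; in the full algorithm the spread chips would be scaled and added to the audio's DCT coefficients rather than transmitted directly.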