• Title/Summary/Keyword: Order memory


Comparative analysis of the soundscape evaluation depending on the listening experiment methods (청감실험방식에 따른 음풍경 평가결과 비교분석)

  • Jo, A-Hyeon;Haan, Chan-Hoon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.3
    • /
    • pp.287-301
    • /
    • 2022
  • The present study aims to investigate the difference between soundscape evaluation results obtained from on-site field tests and laboratory tests, the two methods commonly used for soundscape surveys. To this end, both field and laboratory tests were carried out in four different areas of Cheongju city. On-site questionnaire surveys were administered to 65 people at 13 points. Laboratory listening tests were carried out with 48 adults using recorded sounds and video. The laboratory tests were conducted with two different groups, one with field-survey experience and one without. In addition, two different sound reproduction tools, headphones and speakers, were used in the laboratory tests. As a result, it was found that there is a very close correlation between sound loudness and annoyance in both field and laboratory tests. However, it was concluded that there must be a difference in recognizing the figure sounds between field and laboratory tests, since it is hard to apprehend the on-site situation using only the visual and aural information provided in laboratory tests. In the laboratory tests, there was some difference between the headphone and speaker groups in which figure sounds were perceived as loudest. It was also found that field-experienced people tend to recognize the figure sounds from their remembered experience, while non-experienced people cannot perceive them.

Fake News Detection Using CNN-based Sentiment Change Patterns (CNN 기반 감성 변화 패턴을 이용한 가짜뉴스 탐지)

  • Tae Won Lee;Ji Su Park;Jin Gon Shon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.4
    • /
    • pp.179-188
    • /
    • 2023
  • Recently, fake news disguised in the form of news content has appeared whenever important events occur, causing social confusion. Accordingly, artificial intelligence technology is being studied for fake news detection. Approaches such as automatically recognizing and blocking fake news through natural language processing, or detecting social media influencer accounts that spread false information in combination with network causal inference, can be implemented through deep learning. However, fake news detection is considered one of the harder problems among natural language processing tasks. Because fake news varies widely in form and expression, feature extraction is difficult, and there are further limitations, such as the same feature carrying different meanings depending on the category to which the news belongs. In this paper, sentiment change patterns are presented as an additional identification criterion for detecting fake news. We propose a model with improved performance by applying a convolutional neural network to a fake news data set to analyze content characteristics, and by additionally analyzing sentiment change patterns. Sentiment polarity is calculated for each sentence of a news item, and a value that depends on the sentence order is obtained by applying long short-term memory (LSTM). This is defined as the sentiment change pattern and, combined with the content characteristics of the news, is used as an independent variable in the proposed fake news detection model. We train the proposed model and a comparison model with deep learning and conduct an experiment on a fake news data set, confirming that sentiment change patterns can improve fake news detection performance.
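
    Conceptually, the model fuses two feature streams: a CNN reads the news content, while an LSTM reads the sequence of per-sentence sentiment polarities so that its output depends on sentence order. Below is a minimal sketch of that two-branch structure; it is a reconstruction under stated assumptions (embedding sizes, layer widths, and the one-score-per-sentence polarity input are illustrative, not the authors' exact design).

        # Minimal sketch of the two-branch idea described above (not the authors' code).
        # Assumptions: word embeddings for content, one polarity score per sentence.
        import torch
        import torch.nn as nn

        class FakeNewsNet(nn.Module):
            def __init__(self, emb_dim=100, n_filters=64, lstm_hidden=32):
                super().__init__()
                # Content branch: 1-D convolution over the word-embedding sequence.
                self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
                self.pool = nn.AdaptiveMaxPool1d(1)
                # Sentiment branch: LSTM over the per-sentence polarity sequence,
                # so the output depends on sentence order (the "change pattern").
                self.lstm = nn.LSTM(input_size=1, hidden_size=lstm_hidden, batch_first=True)
                self.fc = nn.Linear(n_filters + lstm_hidden, 1)

            def forward(self, words, polarities):
                # words: (batch, seq_len, emb_dim); polarities: (batch, n_sentences, 1)
                c = self.pool(torch.relu(self.conv(words.transpose(1, 2)))).squeeze(-1)
                _, (h, _) = self.lstm(polarities)
                z = torch.cat([c, h[-1]], dim=1)  # fuse content and sentiment features
                return torch.sigmoid(self.fc(z))  # probability that the item is fake

        model = FakeNewsNet()
        p = model(torch.randn(8, 120, 100), torch.randn(8, 20, 1))  # 8 news items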

Research on the Design of TPO(Time, Place, Occasion)-Shift System for Mobile Multimedia Devices (휴대용 멀티미디어 디바이스를 위한 TPO(Time, Place, Occasion)-Shift 시스템 설계에 대한 연구)

  • Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.2
    • /
    • pp.9-16
    • /
    • 2009
  • While broadband networks and multimedia technology have developed, the commercial market for digital contents, including IPTV, has spread widely. Against this background, Time-Shift systems were developed to meet multimedia requirements. Such a system is independent of time, but not of place and occasion. To solve this problem, in this paper we propose a TPO(Time, Place, Occasion)-Shift system for mobile multimedia devices. The profile applicable to mobile multimedia devices differs considerably from that of a set-top box, and typical mobile multimedia devices do not have memory large enough for multimedia data. It is therefore important to continuously store and manage the multimedia data within the limited capacity allowed by the mobile device's profile. We therefore compose baskets of a defined time unit and manage these baskets for effective buffer management. In addition, since the file name of a basket is made to include the basket's time information, we can use this time information as a DTS (Decoding Time Stamp). When multimedia content is converted to be available on portable multimedia devices, we can compose newly formatted contents using such DTS information. Using the basket-based buffer system, we can compose contents in real time on mobile multimedia devices while saving memory. To verify the system's real-time operation and performance, we implemented the proposed TPO-Shift system on a mobile device, the MS340, and the set-top box side was designed using a DirectShow player under the Windows Vista environment. As a result, we confirmed the usefulness and real-time operation of the proposed system.
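
    To illustrate the basket idea, the sketch below shows a toy buffer that stores media in fixed time-unit baskets, encodes each basket's start time in its file name so that the time can later be recovered as a DTS, and evicts the oldest basket when the limited capacity is exceeded. The names, time unit, and file-name scheme are assumptions for illustration, not the paper's exact design.

        # Illustrative sketch of a basket-based buffer (all names and the
        # file-name scheme are assumptions, not the paper's exact design).
        import collections, os

        TIME_UNIT = 2.0      # seconds of media per basket (assumed unit)
        MAX_BASKETS = 100    # finite buffer for a memory-limited device

        class BasketBuffer:
            def __init__(self, directory):
                self.directory = directory
                os.makedirs(directory, exist_ok=True)
                self.baskets = collections.OrderedDict()  # start_time -> filename

            def store(self, start_time, payload):
                # Encode the basket's start time in its file name, e.g.
                # "00012.500.bkt", so it can later be recovered as a DTS.
                name = os.path.join(self.directory, f"{start_time:09.3f}.bkt")
                with open(name, "wb") as f:
                    f.write(payload)
                self.baskets[start_time] = name
                if len(self.baskets) > MAX_BASKETS:   # evict the oldest basket
                    _, old = self.baskets.popitem(last=False)
                    os.remove(old)

            @staticmethod
            def dts_from_name(filename):
                # Recover the decoding time stamp from the basket file name.
                return float(os.path.basename(filename).rsplit(".", 1)[0])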

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.55-75
    • /
    • 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in various application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of data elements in a sequence is considered, so it can easily find simple sequential patterns but is limited in finding the more interesting sequential patterns widely used in real-world applications. One of the essential research topics for overcoming this limit is weighted sequential pattern mining, in which not only the generation order of data elements but also their weights are considered in order to obtain more interesting sequential patterns. Recently, data in various application fields has increasingly taken the form of continuous data streams rather than finite stored data sets, and the database research community has begun focusing its attention on processing data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, and the memory used for analysis should remain finite even though new data elements are continuously generated. Moreover, newly generated data elements should be processed as fast as possible, so that the up-to-date analysis result of a data stream can be utilized instantly upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis results by allowing some error. Considering these changes in the form of data generated in real-world application fields, much research has been actively performed to find various kinds of knowledge embedded in data streams. It mainly focuses on efficient mining of frequent itemsets and sequential patterns over data streams, which have proven useful in conventional data mining on finite data sets. In addition, mining algorithms have been proposed to efficiently reflect the changes of data streams over time in their mining results. However, these have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, taking no interest in mining novel interesting patterns that better express the characteristics of target data streams. Therefore, defining novel interesting patterns and developing a mining method that finds them is a valuable research topic in the field of mining data streams. This paper proposes a gap-based weighting approach for sequential patterns and a method for mining weighted sequential patterns over sequence data streams via this weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps between the data elements in the sequential pattern, without any pre-defined weight information. That is, in this approach, the gaps between data elements in each sequential pattern, as well as their generation orders, are used to obtain the weight of the sequential pattern, which helps to find more interesting and useful sequential patterns. Since most computer application fields now generate data as data streams rather than finite data sets, the proposed method focuses on sequence data streams.
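
    As a toy illustration of gap-based weighting, the sketch below assigns a higher weight to a pattern occurrence whose elements appear close together in the sequence. The exponential decay used here is an assumed formula for illustration only; the abstract does not give the paper's exact definition.

        # Toy illustration of the gap-based idea: a pattern occurrence with
        # smaller gaps between its elements gets a larger weight. The decay
        # formula is an assumption; the paper defines its own weighting.
        def occurrence_gaps(sequence, pattern):
            """Gaps of the first in-order occurrence of `pattern`, or None."""
            positions, start = [], 0
            for item in pattern:
                try:
                    idx = sequence.index(item, start)
                except ValueError:
                    return None                    # pattern does not occur
                positions.append(idx)
                start = idx + 1
            return [b - a - 1 for a, b in zip(positions, positions[1:])]

        def gap_based_weight(sequence, pattern, decay=0.8):
            gaps = occurrence_gaps(sequence, pattern)
            if gaps is None:
                return 0.0
            return decay ** sum(gaps)              # 1.0 for a gapless occurrence

        print(gap_based_weight(list("axbyc"), list("abc")))  # 0.8**2 = 0.64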

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, Google DeepMind's Go-playing artificial intelligence program, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate model performance, showing how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but for business data the distance between fields does not matter because each field is usually independent. In this experiment, we set the filter size of the CNN to the number of fields so that the overall characteristics of a record are learned at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer is reversed with respect to the first layer in order to reduce the influence of each field's position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. We obtained several findings from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models. This is interesting because CNNs performed well on binary classification problems, to which they have rarely been applied, as well as in fields where their effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
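
    A minimal sketch of the CNN configuration described above follows: the convolution filter spans all input fields so that one filter reads the whole record at once, an additional dense layer follows, and dropout with rate 0.5 is applied to the hidden layer. The field count and layer widths are assumptions; this is a reconstruction, not the authors' code.

        # Sketch of the described CNN setup for tabular binary classification
        # (a reconstruction under assumptions, not the authors' code).
        import torch
        import torch.nn as nn

        n_fields = 16  # age, occupation, loan status, ... (assumed count)
        model = nn.Sequential(
            # kernel_size == n_fields: one filter reads the whole record at once
            nn.Conv1d(in_channels=1, out_channels=32, kernel_size=n_fields),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32, 32),   # additional decision layer on the features
            nn.ReLU(),
            nn.Dropout(0.5),     # dropout probability used in the paper
            nn.Linear(32, 1),
            nn.Sigmoid(),        # P(customer opens an account)
        )
        probs = model(torch.randn(8, 1, n_fields))  # batch of 8 records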

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, and handling it requires considering the characteristics of both multidimensional data and time series data. When dealing with multidimensional data, correlation between variables should be considered; existing probability-based, linear, and distance-based methods degrade due to the curse of dimensionality. In addition, time series data is preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis, which increases the dimensionality of the data and therefore needs to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and then detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or the data contains noise or outliers, so they are restricted to training data free of noise and outliers. The autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy probability-distribution or linearity assumptions, and it can be trained without labeled data. However, it remains limited in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly due to the characteristics of time series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve local outlier identification in multidimensional data. Multimodal networks are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and thereby learn correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually category variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance for 41 variables was checked for the proposed and comparison models. Restoration performance differs by variable; restoration generally works well, since the loss values for the Memory, Disk, and Network modalities are small in all three autoencoder models. The Process modality did not show significant differences across the three models, while the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance in the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators the models ranked in the order CMAE, MAE, UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The model's accuracy also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an advantage beyond performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
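
    The sketch below illustrates the CMAE structure described above: one encoder per modality feeding a shared bottleneck, with the time condition concatenated to the code before decoding. Layer sizes, the per-modality dimensions (chosen here to sum to the 41 monitored variables), and the form of the time encoding are assumptions for illustration.

        # Conceptual sketch of a conditional multimodal autoencoder
        # (layer sizes and the time encoding are assumptions).
        import torch
        import torch.nn as nn

        class CMAE(nn.Module):
            def __init__(self, modal_dims, bottleneck=8, time_dim=2):
                super().__init__()
                # One encoder per modality (e.g., CPU, memory, disk, network, process).
                self.encoders = nn.ModuleList(
                    [nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in modal_dims])
                self.to_bottleneck = nn.Linear(16 * len(modal_dims), bottleneck)
                # The decoder receives the shared code plus the time condition
                # (e.g., sin/cos of the hour), so periodicity is learned
                # without inflating the input dimension.
                self.decoder = nn.Sequential(
                    nn.Linear(bottleneck + time_dim, 32), nn.ReLU(),
                    nn.Linear(32, sum(modal_dims)))

            def forward(self, modal_inputs, time_cond):
                hs = [enc(x) for enc, x in zip(self.encoders, modal_inputs)]
                z = self.to_bottleneck(torch.cat(hs, dim=1))  # shared bottleneck
                return self.decoder(torch.cat([z, time_cond], dim=1))

        # Anomaly score = reconstruction error over the 41 monitored variables.
        model = CMAE(modal_dims=[10, 8, 8, 8, 7])
        xs = [torch.randn(4, d) for d in [10, 8, 8, 8, 7]]
        recon = model(xs, torch.randn(4, 2))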

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only that equipment but also other connected equipment, causing enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the causes are hard to identify. Previous studies predicting failure in data centers looked at a single server as a single state, without assuming that the devices interact. Therefore, in this study, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and we focused on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. On the other hand, the causes of failures occurring within servers are difficult to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur in isolation: a failure may trigger failures in other servers or be triggered by them. In other words, while existing studies analyzed failure on the assumption of a single server that does not affect other servers, this study assumes that failures affect one another across servers. In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure on one piece of equipment was followed by a failure on another within 5 minutes, the two were defined as occurring simultaneously. After configuring sequences for the devices that failed at the same time, five devices that frequently failed simultaneously within the configured sequences were selected, and cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used to reflect the fact that each server contributes differently to multiple failures. This algorithm increases prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated both as a single-server state and as a multiple-server state, and the two cases were compared and analyzed. The second experiment improved prediction accuracy in the complex-server case by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another. The study thus confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's effect differs, improved the analysis, and applying a different threshold per server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
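
    The sketch below illustrates the modeling idea: an LSTM encodes each server's resource time series, and an attention layer weights the servers by their estimated influence before the failure prediction is made. Dimensions and layer choices are illustrative assumptions, not the authors' exact architecture.

        # Rough sketch: per-server LSTM encoding plus attention over servers
        # (dimensions are illustrative assumptions).
        import torch
        import torch.nn as nn

        class ServerHAN(nn.Module):
            def __init__(self, n_features=8, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.attn = nn.Linear(hidden, 1)   # scores each server's influence
                self.out = nn.Linear(hidden, 1)

            def forward(self, x):
                # x: (batch, n_servers, timesteps, n_features)
                b, s, t, f = x.shape
                _, (h, _) = self.lstm(x.reshape(b * s, t, f))
                h = h[-1].reshape(b, s, -1)                # one vector per server
                a = torch.softmax(self.attn(h), dim=1)     # attention over servers
                context = (a * h).sum(dim=1)               # influence-weighted sum
                return torch.sigmoid(self.out(context))    # failure probability

        model = ServerHAN()
        p = model(torch.randn(4, 5, 60, 8))  # 5 servers, 60 time steps each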

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Sentiment analysis methods have recently been widely used in many fields: as good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify a target product's reputation. The basic method of sentiment analysis uses a sentiment dictionary (or lexicon), a list of sentiment vocabulary items with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains; for example, the sentiment word 'sad' carries negative meaning in most domains, but not necessarily for a movie. To perform accurate sentiment analysis, we need to build the sentiment dictionary for the given domain. However, building such a lexicon is time-consuming, and without a general-purpose sentiment lexicon many sentiment vocabulary items are missed. To address this problem, several studies have constructed domain-specific sentiment lexicons based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer in service, and SentiWordNet does not work well because of language differences in converting Korean words into English. Such general-purpose lexicons are therefore of limited use as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to allow quick construction of a sentiment dictionary for a target domain. In particular, it constructs sentiment vocabulary by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from those classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model is up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and we add sentiment information about frequently used coined words and emoticons used mainly on the Web. KNU-KSL contains a total of 14,843 sentiment vocabulary items, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning without sentiment dictionaries, and the perceived importance of developing them has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, allowing sentiment analysis to be performed with higher accuracy (Teng, Z., 2016). This indicates that the sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for improving the accuracy of deep learning models. The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features of deep learning models; it is also useful for automatically and quickly building large training sets for deep learning models.
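
    As an illustration of the first step, the sketch below shows a minimal Bi-LSTM binary classifier of the kind described, mapping a tokenized gloss to the probability of positive meaning; the vocabulary size and layer widths are assumed for illustration.

        # Minimal sketch of a Bi-LSTM gloss classifier (vocabulary size and
        # layer widths are illustrative assumptions).
        import torch
        import torch.nn as nn

        class GlossClassifier(nn.Module):
            def __init__(self, vocab_size=20000, emb_dim=100, hidden=64):
                super().__init__()
                self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
                self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                      bidirectional=True)
                self.fc = nn.Linear(2 * hidden, 1)  # forward + backward states

            def forward(self, token_ids):
                _, (h, _) = self.bilstm(self.emb(token_ids))
                h = torch.cat([h[0], h[1]], dim=1)  # both reading directions
                return torch.sigmoid(self.fc(h))    # P(gloss is positive)

        model = GlossClassifier()
        p = model(torch.randint(1, 20000, (16, 30)))  # 16 glosses, 30 tokens each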

A Study of the 6·25 Special Drama "The Last Witness" (6·25 특집극 <최후의 증인> 연구)

  • Song, Chihyuk
    • (The) Research of the performance art and culture
    • /
    • no.42
    • /
    • pp.47-75
    • /
    • 2021
  • This thesis examines the interpretation of the Korean War and of the mystery genre in 1970s Korea by analyzing the special drama "The Last Witness", whose theme was directly related to the Korean War and which aired on MBC in 1979. It begins by tracing the change of direction in the 1970s, when television was dictated by the government's heavy censorship and its framing of the memory of the war. It also looks at the intentions of the producer, who was taking in the new form, and at the viewers who accepted this drama and their reflections on it. To gain insight into these issues, it compares the drama "The Last Witness" with the original novel of the same title by Seong-jong Kim to see how the work was dramatized. The drama "The Last Witness" was produced with a plan to create a high-quality special drama combining artistry and a sense of purpose. Nevertheless, as watching TV became a leisurely pastime during this period, TV dramas became more aggressive and suggestive in order to attract viewers, and this ultimately met with obstacles due to the regime and the heavy censorship of the time. The special drama, a genre well known in South Korea, is designed as an art form to satisfy both its distinctive artistry and its purpose. A conflict is seen between the key elements of the artistic drama created by the producers and the 'encouraged' elements often needed to engage viewers. Thus, more often than not, special dramas defeat the original intention of national harmony encouraged by the regime, owing to the 'novelty' that grows from the effort to entertain viewers while also bringing the artistic drama to life. Alongside this, the crime element in this drama is designed to visually embody the process of deduction, becoming a new possibility for securing the reality of the times. Yet it was also a paradoxical existence, since it was cited as an example of unrefined culture that had lost its original intention. It is therefore worth considering that detective-suspense stories, which were not popular in Korea, influenced viewers as a TV drama series in the 1970s through the various elements composing the genre; they went through a process of transplantation and acceptance while also attempting to satisfy viewers and the encouraged elements meant to engage them. As is well known, crime drama in Korea has its own style, mixing anticommunism and detective reasoning. This combination is found in the way the genre naturally formed through the elements selected and excluded in the dramatization of "The Last Witness". The point is that the special drama "The Last Witness" can be seen as an intermediate form showing the transformation of the detective-reasoning form alongside the crime aspects, as TV dramas began to include anticommunist messaging and investigation in the 1970s. In conclusion, when detective reasoning is used as an element in a TV drama, it shows trust in the public system while constantly seeking the possibility of circumventing political interpretation. The memories of the war are seen as a tool that neutralizes the dismal imaginings inscribed on the dark side of society and the system. As a result, "The Last Witness", broadcast at the end of the Yushin regime in Korea, is a strange result combining the logic of the special drama with the encouraged characteristics of television dramas. The viewers' desire, that is, the discussion of the traces hidden in the text, needs to be restored.

Effects of heat treatment on the load-deflection properties of nickel-titanium wire (니켈-티타늄 와이어의 열처리에 따른 부하-변위 특성 변화)

  • Chang, Soo-Ho;Kim, Kwang-Won;Lim, Sung-Hoon
    • The korean journal of orthodontics
    • /
    • v.36 no.5
    • /
    • pp.349-359
    • /
    • 2006
  • Objective: Nickel-titanium alloy wire possesses excellent spring-back properties, shape memory, and superelasticity. In order to adapt this wire to clinical use, it is necessary to bend it as well as to control its superelastic force. The purpose of this study was to evaluate the effects of heat treatment on the load-deflection properties and transitional temperature range (TTR) of nickel-titanium wires. Methods: Nickel-titanium wires of different dimensions (0.016" × 0.022", 0.018" × 0.025", and 0.0215" × 0.028") were used. The samples were divided into 4 groups as follows: group 1, posterior segment of archwire (24 mm) without heat treatment; group 2, posterior segment of archwire (24 mm) with heat treatment only; group 3, anterior segment with bending and heat treatment; group 4, anterior segment with bending and heat treatment of over 1 second. A three-point bending test was used to evaluate the change in the load-deflection curve, and differential scanning calorimetry (DSC) was used to check changes in Af temperature. Results: In the three-point bending test, nickel-titanium wires with heat treatment only had a higher load-deflection curve and higher loading and unloading plateaus than wires without heat treatment, and they had a lower Af temperature than wires without heat treatment. Wires with heat treatment and bending had a higher load-deflection curve than wires with heat treatment only and wires without heat treatment, and wires with over 1 second of heat treatment and bending had the highest load-deflection curve. Wires with heat treatment and bending had a lower Af temperature, and wires with over 1 second of heat treatment and bending had the lowest Af temperature. Conclusion: From the results of this study, it can be stated that heat treatment for bending nickel-titanium wires does not eliminate the superelastic property, but it can increase the force magnitude through a higher load-deflection curve.