• Title/Summary/Keyword: Data Utility

Search Results: 1,215

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in fields such as classification, summarization, and generation. Among the various text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification, which assigns one label from two classes; multi-class classification, which assigns one label from several classes; and multi-label classification, which assigns multiple labels from several classes. Multi-label classification in particular requires a training method different from binary and multi-class classification because each instance can carry multiple labels. Moreover, as the number of labels and classes grows, prediction becomes harder and performance improvements become difficult to achieve. To overcome these limitations, label embedding has been actively studied: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels and therefore cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This stems from the vanishing gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, they prevent gradients from vanishing during backpropagation and enable efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies applying them to autoencoders or to the label embedding process are still scarce. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these spaces, we conducted an experiment in which the compressed keyword vector in the latent label space was predicted from the paper abstract, and multi-label classification was evaluated by restoring the predicted keyword vector to the original label space. As a result, in terms of accuracy, precision, recall, and F1 score, multi-label classification based on the proposed methodology far outperformed traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, we examined the utility of the proposed methodology by comparing its performance across domain characteristics and across the number of dimensions of the latent label space.
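
To make the architecture concrete, the sketch below shows a label autoencoder with a skip connection inside both the encoder and the decoder, in the spirit of the methodology described above. It is an illustrative PyTorch sketch, not the authors' implementation; the dimensions (1,000 keyword labels, a 64-dimensional latent label space) and layer sizes are hypothetical.

```python
# Illustrative sketch (not the paper's code): a label autoencoder whose
# encoder and decoder each carry a skip connection, so gradients can
# bypass the deep layers during backpropagation.
import torch
import torch.nn as nn

class SkipLabelAutoencoder(nn.Module):
    def __init__(self, label_dim=1000, hidden_dim=256, latent_dim=64):
        super().__init__()
        self.enc1 = nn.Linear(label_dim, hidden_dim)
        self.enc2 = nn.Linear(hidden_dim, hidden_dim)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, hidden_dim)
        self.to_label = nn.Linear(hidden_dim, label_dim)
        self.act = nn.ReLU()

    def encode(self, y):
        h = self.act(self.enc1(y))
        h = self.act(self.enc2(h)) + h          # skip connection in the encoder
        return self.to_latent(h)                # low-dimensional latent labels

    def decode(self, z):
        h = self.act(self.dec1(z))
        h = self.act(self.dec2(h)) + h          # skip connection in the decoder
        return torch.sigmoid(self.to_label(h))  # per-label probabilities

    def forward(self, y):
        return self.decode(self.encode(y))

# Training would minimize reconstruction loss over the multi-hot label
# vectors; binary cross-entropy suits the 0/1 labels.
model = SkipLabelAutoencoder()
loss_fn = nn.BCELoss()
```

Because the skip connections add each block's input to its output, gradients have a direct path around the non-linear layers, which is the property the abstract credits with reducing information loss in deep compression.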

A Study on the Potential Use of ChatGPT in Public Design Policy Decision-Making (공공디자인 정책 결정에 ChatGPT의 활용 가능성에 관한 연구)

  • Son, Dong Joo; Yoon, Myeong Han
    • Journal of Service Research and Studies / v.13 no.3 / pp.172-189 / 2023
  • This study investigated the potential contribution of ChatGPT, a massive language and information model, to the decision-making process of public design policies, focusing on the characteristics inherent to public design. Public design applies the principles and approaches of design to address societal issues and aims to improve public services. Formulating public design policies and plans must be based on extensive data, including the general status of the area, population demographics, infrastructure, resources, safety, existing policies, legal regulations, landscape, spatial conditions, the current state of public design, and regional issues. Public design is therefore a field of design research that encompasses a vast amount of data and language. Considering the rapid advancement of artificial intelligence technology and the significance of public design, this study explores how massive language and information models like ChatGPT can contribute to public design policies. We also reviewed the concepts and principles of public design and its role in policy development and implementation, and examined the overview and features of ChatGPT, including its application cases and prior research, to determine its utility in the decision-making process of public design policies. The study found that ChatGPT could offer substantial language information during the formulation of public design policies and assist in decision-making. In particular, ChatGPT proved useful in providing various perspectives and in swiftly supplying information necessary for policy decisions. The trend of utilizing artificial intelligence in government policy development was also confirmed through various studies. However, the use of ChatGPT also revealed ethical, legal, and privacy issues; notably, ethical dilemmas were raised, along with issues of bias and fairness. To practically apply ChatGPT in the decision-making process of public design policies, it is first necessary to enhance the capacities of policy developers and public design experts to a certain extent. Second, it is advisable to create a provisional regulation named 'Ordinance on the Use of AI in Policy' to continuously refine its utilization until legal adjustments are made. Implementing these two strategies is deemed necessary at present. Consequently, employing massive language and information models like ChatGPT in the public design field, which harbors a vast amount of language, holds substantial value.

Evaluation of the Utilization Potential of High-Resolution Optical Satellite Images in Port Ship Management: A Case Study on Berth Utilization in Busan New Port (고해상도 광학 위성영상의 항만선박관리 활용 가능성 평가: 부산 신항의 선석 활용을 대상으로)

  • Hyunsoo Kim; Soyeong Jang; Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.5_4 / pp.1173-1183 / 2023
  • Over the past 20 years, Korea's overall import and export cargo volume has increased at an average annual rate of approximately 5.3%, and about 99% of that cargo is still transported by sea. Recent increases in maritime cargo volume, together with factors such as the COVID-19 pandemic and geopolitical conflicts, have made congestion in maritime logistics a serious challenge, so continuous monitoring of ports has become crucial. Various ground observation systems and Automatic Identification System (AIS) data have been utilized for monitoring ports, and numerous preliminary studies have addressed the efficient operation of container terminals and cargo volume prediction. However, ports in small and developing countries face difficulties in monitoring due to environmental issues and aging infrastructure compared to large ports. Recently, with the increasing utility of artificial satellites, preliminary studies have used satellite imagery for continuous maritime cargo data collection and for establishing ocean monitoring systems over vast and hard-to-reach areas. This study aims to visually detect ships docked at berths in the Busan New Port using high-resolution satellite imagery and to quantitatively evaluate berth utilization rates. Using high-resolution imagery from Compact Advanced Satellite 500-1 (CAS500-1), Korea Multi-Purpose Satellite-3 (KOMPSAT-3), PlanetScope, and Sentinel-2A, ships docked within the port berths were visually detected, and the berth utilization rate was calculated against the total number of ships that can be docked at the berths. The results showed varying berth utilization rates on June 2, 2022, with values of 0.67, 0.7, and 0.59, fluctuating with the time of satellite image capture. On June 3, 2022, the value remained at 0.7, indicating a consistent berth utilization rate despite changes in ship types. A higher berth utilization rate indicates active operations at the berth. This information can assist in basic planning of new ship operation schedules, since congested berths can lead to longer waiting times for ships at anchorage and potentially to increased freight rates. The duration of operations at a berth can vary from several hours to several days. Even a capture-time difference of only 4 minutes and 49 seconds revealed changes in the ships present at the berths. With short observation intervals, high-resolution satellite imagery enables continuous monitoring within ports, and monitoring berth-level ship changes at minute intervals could prove especially useful for ports in small and developing countries where harbor management is not well established.
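
As a concrete reading of the metric, the berth utilization rate is the number of ships detected at the berths divided by the total number of ships the berths can accommodate. The Python sketch below is illustrative only; the capacity of 27 slots is an assumption chosen because it reproduces the reported rates, not a figure from the study.

```python
# Minimal sketch of the berth utilization rate described above:
# ships visually detected in a satellite scene, divided by the total
# number of ships the berths can accommodate. All counts are hypothetical.
def berth_utilization(docked_ships: int, berth_capacity: int) -> float:
    """Fraction of berth capacity occupied at image capture time."""
    return docked_ships / berth_capacity

# With an assumed capacity of 27 slots, 18, 19, and 16 docked ships yield
# the rates reported for the different capture times.
for n in (18, 19, 16):
    print(round(berth_utilization(n, 27), 2))   # 0.67, 0.7, 0.59
```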

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho; Lee, Donghoon; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous volumes of data for various purposes. In recent years especially, people tend to share their leisure-related experiences while also reading others' reviews of similar activities. By referring to others' experiences, they can gather information that may lead to better leisure activities in the future. This phenomenon appears throughout many aspects of leisure such as movies, traveling, accommodation, and dining. Apart from blogs and social networking sites, many websites provide a wealth of information related to leisure activities, and most present information about each product in various formats depending on purpose and perspective. Generally, these websites provide average ratings and detailed reviews from users who actually used the products or services, and these ratings and reviews can support potential customers' purchase decisions. However, existing websites offering information on leisure activities provide only single-level ratings and reviews over a set of evaluation criteria. Therefore, to identify the main issue for each evaluation criterion, as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, because most users search for the detailed characteristics of one or more specific evaluation criteria according to their priorities, they must spend a great deal of time and effort reading and understanding many reviews to obtain the desired information. Although some websites break the evaluation criteria down and direct users to enter reviews at different levels of criteria, the excessive number of input sections makes the whole process inconvenient, and problems arise when a user ignores the instructions or fills in the wrong sections. Finally, treating such a breakdown of the evaluation criteria as a realistic alternative is difficult, because identifying all the detailed criteria for each evaluation criterion is itself a challenging task. For example, when reviewing a hotel, people tend to write only single-level reviews for components such as accessibility, rooms, service, or food. These reviews may touch on frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed answers to those questions. Moreover, if a breakdown of the evaluation criteria were provided with many input sections, a user might fill in only the criterion for accessibility, or enter the wrong information, such as room-related content under the accessibility criterion, greatly reducing the reliability of the segmented review. In this study, we propose an approach to overcome two limitations of existing leisure activity information websites: (1) the reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up each criterion.
In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion using the terms frequently associated with that criterion. Next, the sentences in the review documents containing the lexicon terms are decomposed into review units, which are then reorganized by evaluation criterion. Finally, the issues in the review units for each criterion are derived and summary results are provided, together with the review units themselves. This approach aims to save users time and effort, because they read only the information relevant to each evaluation criterion rather than going through the entire review text. Our methodology is based on topic modeling, which is actively used in text analysis, but it differs from existing topic modeling-based studies in that each review is decomposed into sentence units rather than treated as a single document, and the resulting review units are reorganized by evaluation criterion before the subsequent analysis. In this paper, we collected 423 reviews from hotel information websites, decomposed them into 4,860 review units, and reorganized the units according to six evaluation criteria. Applying these review units in our methodology, we present the analysis results and demonstrate the utility of the proposed methodology.
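
A minimal sketch of this pipeline appears below: reviews are split into sentence-level review units, each unit is routed to an evaluation criterion via a term lexicon, and an LDA topic model is fit per criterion to surface its issues. The criteria names and seed terms are hypothetical, and the paper's lexicon-construction step is reduced to a hand-written dictionary for illustration.

```python
# Hedged sketch of the review-unit pipeline, not the authors' implementation.
import re
from collections import defaultdict
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

LEXICON = {                      # hypothetical criterion -> seed terms
    "accessibility": {"subway", "station", "airport", "distance"},
    "room":          {"bed", "bathroom", "view", "clean"},
    "service":       {"staff", "desk", "friendly", "checkin"},
}

def to_review_units(review: str):
    """Decompose one review document into sentence-level review units."""
    return [s.strip() for s in re.split(r"[.!?]", review) if s.strip()]

def assign_units(reviews):
    """Reorganize review units under the criterion whose lexicon they match."""
    buckets = defaultdict(list)
    for review in reviews:
        for unit in to_review_units(review):
            words = set(unit.lower().split())
            for criterion, terms in LEXICON.items():
                if words & terms:
                    buckets[criterion].append(unit)
    return buckets

def top_issues(buckets, n_topics=2, n_words=5):
    """Fit an LDA model per criterion and print top topic words as issues."""
    for criterion, units in buckets.items():
        vec = CountVectorizer()
        X = vec.fit_transform(units)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        lda.fit(X)
        vocab = vec.get_feature_names_out()
        for k, comp in enumerate(lda.components_):
            words = [vocab[i] for i in comp.argsort()[::-1][:n_words]]
            print(f"{criterion} / issue {k}: {words}")
```

Treating each sentence, rather than each whole review, as the modeling unit is what lets topics align with individual evaluation criteria, which is the key departure from conventional document-level topic modeling noted above.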

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong; Chung Bu-Heung; Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and man-hour-intensive. Production seismic data processing requires a good velocity analysis tool as well as a high-performance computer, and the tool must deliver fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point; generally, the plot consists of a semblance contour, a super gather, and a stack panel, and the interpreter chooses the velocity function by analyzing the plot. This technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphics workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with a mouse, their main improvement is simply the replacement of the paper plot by the graphics screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data; for accurate velocity analysis, this noise must be removed before the spectrum is computed. The analysis must also be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must then be verified by mute and stack, and this sequence usually must be repeated many times. Therefore, an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack via a few mouse clicks, enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMO-corrected (NMOC) domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted waves, but it has two improvements: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries, can be installed in an environment where Geobit is preinstalled, and runs under X-Window/Motif, with a menu designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for producing high-quality seismic sections.
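
For context, the sketch below implements conventional NMO correction, the resampling step xva performs before mute and stack: a reflection at zero-offset time t0 arrives at offset x at t = sqrt(t0^2 + (x/v)^2), and the correction maps the sample observed at t back to t0. This is a generic illustration under that textbook formula, not the Geobit/xva code.

```python
# Generic NMO correction sketch (not the xva implementation).
import numpy as np

def nmo_correct(trace, offset, velocity, dt):
    """NMO-correct one trace.

    trace:    amplitude samples taken every dt seconds
    offset:   source-receiver offset in meters
    velocity: stacking velocity v(t0) per sample, in m/s
    """
    n = len(trace)
    t0 = np.arange(n) * dt                          # zero-offset times
    t = np.sqrt(t0**2 + (offset / velocity) ** 2)   # hyperbolic moveout times
    # Pull the amplitude observed at time t back to t0. Note this generic
    # resampling uses linear interpolation; the mute-function transform
    # described above is noted for avoiding interpolation error entirely.
    return np.interp(t / dt, np.arange(n), trace, right=0.0)
```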

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil; Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. Depending on the strategy used to exploit item importance, approaches for discovering itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, database analysis can reveal the importance of a given transaction, because a transaction's weight is higher if it contains many items with high weights. We analyze the advantages and disadvantages, and compare the performance, of the best-known algorithms in the frequent itemset mining field based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept of and strategies for transactional weights. In addition, there are various other state-of-the-art algorithms for extracting itemsets with weight information: WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They need no additional database scan after the WIT-tree is constructed, since each node of the WIT-tree holds item information such as item and transaction IDs. In particular, whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. The algorithms also generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them; WIT-FWIs-MODIFY reduces the operations needed to calculate the frequency of each new itemset; and WIT-FWIs-DIFF employs a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage; a scalability test is also conducted to evaluate each algorithm's stability as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because it requires far more computation than the others on average.
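
The transaction-weight idea can be stated concretely: a transaction's weight is the mean of its items' weights, and an itemset's weighted support is the sum of the weights of the transactions containing it. The sketch below is a naive illustration of that definition with made-up item weights and a made-up database; WIS and the WIT-tree algorithms compute the same quantity far more efficiently.

```python
# Naive illustration of transactional weights (not the WIT-tree algorithms).
ITEM_WEIGHTS = {"a": 0.9, "b": 0.6, "c": 0.4, "d": 0.8}   # hypothetical weights

DB = [{"a", "b"}, {"a", "c", "d"}, {"b", "c"}, {"a", "b", "d"}]

def transaction_weight(transaction):
    """Transaction weight: mean weight of the items it contains."""
    return sum(ITEM_WEIGHTS[i] for i in transaction) / len(transaction)

def weighted_support(itemset, db):
    """Sum of transaction weights over transactions containing the itemset."""
    return sum(transaction_weight(t) for t in db if itemset <= t)

# Transactions 1 and 4 contain {a, b}: 0.75 + 0.766... ~= 1.52
print(weighted_support({"a", "b"}, DB))
```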

Anthropometric Measurement, Dietary Behaviors, Health-related Behaviors and Nutrient Intake According to Lifestyles of College Students (대학생의 라이프스타일 유형에 따른 신체계측, 식행동, 건강관련 생활습관 및 영양소 섭취상태에 관한 연구)

  • Cheong, Sun-Hee; Na, Young-Joo; Lee, Eun-Hee; Chang, Kyung-Ja
    • Journal of the Korean Society of Food Science and Nutrition / v.36 no.12 / pp.1560-1570 / 2007
  • The purpose of this study was to investigate differences in anthropometric measurements, dietary attitudes, health-related behaviors, and nutrient intake according to lifestyle among college students. The subjects were 994 college students nationwide (male: 385, female: 609), divided into 7 clusters (PEAO: passive economy/appearance-oriented type, NCPR: non-consumption/pursuit of relationship type, PTA: pursuit of traditional actuality type, PAH: pursuit of active health type, UO: utility-oriented type, POF: pursuit of open fashion type, PFR: pursuit of family relations type). A cross-sectional survey was conducted using a self-administered questionnaire, and the data were collected via the Internet or by mail. Nutrient intake data collected from food records were analyzed with the Computer Aided Nutritional Analysis Program, and the data were analyzed using the SPSS 12.0 program. The average ages of the male and female college students were 23.7 and 21.6 years, respectively. Most of the college students had poor eating habits; in particular, about 60% of the PEAO group had irregular meal times. Students in the PAH and POF groups showed a significantly higher consumption frequency of fruits, meat products, and foods cooked with oil compared to the other groups. As for exercise, drinking, and smoking, there were significant differences between PAH and the other groups. When asked their reason for body weight control, 16.2% of the NCPR group answered "for health", whereas 24.8% of the PEAO group and 26.3% of the POF group answered "for appearance". Calorie, vitamin A, vitamin B2, calcium, and iron intakes of all groups were lower than the Korean DRIs. Female students in the PTA group showed significantly lower vitamin B1 and niacin intakes than the PFR group. These results provide nationwide information on health-related behaviors and nutrient intake according to lifestyle among Korean college students.

Utility of H-reflex in the Diagnosis of Cervical Radiculopathy (경수 신경근병증 진단에서의 H-reflex의 유용성)

  • Lee, Jun; Park, Gun-Ju; Doo, Hyun-Cheol; Park, Sung-Geon; Jeong, Yun-Seog; Hah, Jung-Sang
    • Journal of Yeungnam Medical Science / v.14 no.1 / pp.111-122 / 1997
  • The H-reflex is a kind of late response that can be used for proximal nerve conduction studies. It is also a useful and widely used nerve conduction technique for looking electrically at the monosynaptic reflex. Although theoretically recordable from all muscles, H-reflexes are most commonly recorded from the calf muscles following stimulation of the tibial nerve in the popliteal fossa. In this study, however, we tried to establish normal data and to evaluate the significance of the H-reflex study in cervical radiculopathy. H-reflexes were recorded from the flexor carpi radialis (FCR), extensor carpi radialis (ECR), brachioradialis (BR), and abductor digiti minimi (ADM) muscles in 31 normal adults (62 cases) and 12 patients with cervical radiculopathy. The mean H-reflex latencies in the normal control group were 16.16 ± 1.65 msec in FCR, 15.99 ± 1.25 msec in ECR, 16.47 ± 1.59 msec in BR, and 24.46 ± 1.42 msec in ADM. The mean side-to-side differences of H-reflex latency were 0.47 ± 0.48 msec in FCR, 0.68 ± 0.72 msec in ECR, 0.63 ± 0.43 msec in BR, and 22.31 ± 1.24 msec in ADM. The mean side-to-side differences of interlatency time were 0.49 ± 0.47 msec in FCR, 0.73 ± 0.62 msec in ECR, 0.79 ± 0.71 msec in BR, and 0.69 ± 0.44 msec in ADM. There were no significant differences in H-reflex latency between the right and left sides. H-reflex tests in the patient group with cervical radiculopathy revealed abnormal findings in 11 of the 12 patients. These results suggest that the H-reflex in the upper extremity would be helpful in the diagnosis of cervical radiculopathy.

Studies on the Possible Utilization of Diplachne fusca L. as a Forage Crop II. Growth Characteristics, Forage Yield and Quality of Diplachne fusca L. (바다새 (Diplachne fusca L.) 의 사료작물화 가능성에 관한 연구 II. 바다새의 생육특성, 사초수량 및 사료가치)

  • 김창호; 양주훈; 이효원
    • Journal of The Korean Society of Grassland and Forage Science / v.18 no.3 / pp.179-186 / 1998
  • This experiment was conducted to study the forage utility of Diplachne fusca L., which grows in reclaimed saline land in the midwest region of Korea. This second experiment examined the growth characteristics, forage yield, and forage value of Diplachne fusca L. in order to obtain the data needed to assess the possible utilization of this native plant as a forage crop and the practical use of reclaimed saline land. The results are summarized as follows. 1. The growth of Diplachne fusca L. was nearly finished at the heading stage, when plant height, leaf length, leaf width, stem diameter, tiller number, and fresh and dry weight per plant were 137.5 cm, 42.6 cm, 4.65 mm, 2.48 mm, 15.3 tillers, 44.6 g, and 15.3 g, respectively. 2. Fresh weight was highest at 4,460 kg/10a at the heading stage; dry weight was 1,530 kg/10a at the heading stage and 1,630 kg/10a at 20 days after heading. Fresh weight differed significantly between cutting-height levels depending on cutting time, but total fresh weight did not differ significantly between cutting-height levels. Total dry weight differed significantly between cutting heights, with the largest yield at a cutting height of 10 cm. 3. The contents of crude protein, available protein, digestible protein, and TDN ranged over 12.3~3.7%, 12.3~3.7%, 10.8~3.6%, and 65.2~60.7%, respectively, according to growth stage. The highest yields of crude protein, available protein, and digestible protein occurred at the heading stage; that of TDN at 20 days after heading. The contents of ADF and NDF ranged over 36.4~50.0% and 62.7~80.5% according to growth stage. 4. The contents of P, Ca, K, and Mg ranged over 0.31~0.20%, 0.70~0.52%, 1.74~1.28%, and 0.19~0.18%, respectively, according to growth stage. The highest yields of P, Ca, and K occurred at the heading stage; that of Mg at 20 days after heading. 5. The contents of ENE, NEL, NEM, and NEG ranged over 1.42~1.29, 0.68~0.62, 0.68~0.61, and 0.40~0.35 Mcal/lb, respectively, according to growth stage. The highest yields of ENE, NEL, NEM, and NEG occurred at 20 days after heading, owing to increases after heading. 6. The intake ratio of Diplachne fusca L. by dairy cattle before and after heading was 96.5% and 95.3%, respectively.

The Characteristics of Pain Coping Strategies in Patients with Chronic Pain by Using the Korean Version-Coping Strategies Questionnaire (K-CSQ) (한국판 대처 전략 질문지 (K-CSQ)를 이용한 만성 통증 환자의 통증대처 특성)

  • Song, Ji-Young; Kim, Tae; Yoon, Hyun-Sang; Kim, Chung-Song; Yeom, Tae-Ho
    • Korean Journal of Psychosomatic Medicine / v.10 no.2 / pp.110-119 / 2002
  • Objectives: The number of patients with chronic pain seems to be increasing in psychiatric practice. Many investigators have used models of stress and coping to help explain the differences in adjustment found among persons who experience chronic pain, and coping strategies appear to be associated with adjustment in chronic pain patients. The objectives of this study were to develop a Korean version of the Coping Strategies Questionnaire (CSQ), the most widely used self-report measure of pain coping strategies, and to study the coping strategies that chronic pain patients frequently use when their pain reaches a moderate or greater level of intensity. Methods: One hundred twenty-eight individuals with chronic pain conditions and two hundred fifty-two normal controls were administered the Korean version of the Coping Strategies Questionnaire (K-CSQ) to assess the frequency of use and perceived effectiveness of a variety of cognitive and behavioral pain coping strategies. We also obtained the clinical features of the chronic pain patients. The reliability of the questionnaire was analyzed, and differences in coping strategies between the two groups were evaluated. Results: Data analysis revealed that the questionnaire was internally reliable. Chronic pain patients reported frequent use of a variety of pain coping strategies, such as coping self-statements, praying and hoping, catastrophizing, and increasing behavioral activity, scoring higher on these scales than the normal controls. Conclusion: The K-CSQ proved to be a reliable self-report questionnaire useful for assessing coping strategies in clinical settings for chronic pain, and analysis of pain coping strategies may be helpful in understanding the pain of chronic pain patients. The individual K-CSQ may have greater utility for examining coping, appraisals, and pain adjustment, and a consideration of pain coping strategies may allow one to design pain coping skills training interventions to fit the individual chronic pain patient. Further research is needed to determine whether cognitive-behavioral interventions designed to decrease maladaptive coping strategies can reduce pain and improve the physical and psychosocial functioning of chronic pain patients.
