• Title/Summary/Keyword: Information Collection and Extraction


The Development and Validation of the Leisure Obsession Scale (여가강박 척도의 개발 및 타당화 연구)

  • Jiyeon Yoon;Seung-Hyuk Choi;Taekyun Hur
    • Korean Journal of Culture and Social Issues
    • /
    • v.19 no.2
    • /
    • pp.235-257
    • /
    • 2013
  • The purpose of this study was to develop the Leisure Obsession Scale and examine its validity. The scale was developed and validated through exploratory factor analysis, confirmatory factor analysis, and correlation analysis. It consists of two factors, 'Leisure Preoccupation' and 'Leisure Stereotype', which showed a reasonable fit index in confirmatory factor analysis. The scale also displayed discriminant validity against measures of obsession, workaholism, leisure anxiety, and leisure constraint. The results of the criterion validity analysis show that the Leisure Obsession Scale and its subscales are correlated with age, leisure information searching, intention to participate in new leisure activities, and intention to increase leisure time. Conceptualizing leisure obsession and exploring its components is valuable for understanding its nature and its effects on leisure satisfaction, and for suggesting more effective psychological interventions in diverse populations. (A brief code sketch of such a validation workflow follows this entry.)

  • PDF
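The validation workflow sketched in the abstract above (exploratory factor analysis followed by a reliability check) can be illustrated in a few lines of Python. This is a minimal sketch, not the authors' procedure: the data file, the item columns, and the use of the `factor_analyzer` package are assumptions for demonstration only.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed installed: pip install factor_analyzer

# Hypothetical item responses: rows = respondents, columns = Likert-scale items
items = pd.read_csv("leisure_obsession_items.csv")

# Exploratory factor analysis with a two-factor solution, mirroring the abstract
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(items)
print("Factor loadings:\n", efa.loadings_)

# Cronbach's alpha as a simple internal-consistency check
def cronbach_alpha(df):
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
```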

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and takes a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes the feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, which carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against a single ConvNet layer representation, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% for the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% for the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397 respectively compared to existing work.
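As a rough illustration of the three-step pipeline described above (a fixed pre-trained AlexNet, concatenation of the three fully connected layer activations into a 9,192-dimensional vector, then PCA before a classifier), the sketch below uses torchvision's AlexNet. The layer selection mechanics, the PCA dimensionality, and the linear-SVM classifier are illustrative assumptions, not the authors' exact configuration.

```python
import torch
from torchvision import models

# Pre-trained AlexNet (ImageNet weights) used purely as a fixed feature extractor
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def fc_features(batch):
    """Concatenate activations of the three fully connected layers: 4096 + 4096 + 1000 = 9192 dims."""
    x = alexnet.features(batch)
    x = torch.flatten(alexnet.avgpool(x), 1)
    feats = []
    for layer in alexnet.classifier:
        x = layer(x)
        if isinstance(layer, torch.nn.Linear):
            feats.append(x)
    return torch.cat(feats, dim=1)          # shape: (batch, 9192)

# Downstream usage (hypothetical target dataset X, labels y):
#   from sklearn.decomposition import PCA
#   from sklearn.svm import LinearSVC
#   feats = fc_features(X).numpy()
#   clf = LinearSVC().fit(PCA(n_components=512).fit_transform(feats), y)
```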

Tracking of cryptocurrency moved through blockchain Bridge (블록체인 브릿지를 통해 이동한 가상자산의 추적 및 검증)

  • Donghyun Ha;Taeshik Shon
    • Journal of Platform Technology
    • /
    • v.11 no.3
    • /
    • pp.32-44
    • /
    • 2023
  • A blockchain bridge (hereinafter "bridge") is a service that enables the transfer of assets between blockchains. A bridge accepts virtual assets from users and delivers the same virtual assets to users on another blockchain. Users rely on bridges because each blockchain environment is independent, so assets cannot be transferred to another blockchain in the usual way. For the same reason, the movement of assets through bridges is not traceable in the usual way, and if a malicious actor moves funds through a bridge, existing asset-tracking tools are limited in their ability to trace them. This paper therefore proposes a method to obtain information on bridge usage by identifying the structure of a bridge and analyzing the event logs of bridge requests. First, to understand the structure of bridges, we analyzed bridges operating on Ethereum Virtual Machine (EVM) based blockchains. Based on the analysis, we applied the method to arbitrary bridge events. Furthermore, we built an automated tool that continuously collects and stores bridge usage information so that it can be used for actual tracking. We also validated the automated tool and the tracking method with an asset transfer scenario. By extracting usage information through the tool after using a bridge, we were able to obtain the information needed for tracking, such as the sending blockchain, the receiving blockchain, the receiving wallet address, and the type and quantity of tokens transferred. This shows that the limitations of tracking asset movements across blockchain bridges can be overcome. (A brief code sketch of the event-log collection step follows this entry.)

  • PDF
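The central step described above, collecting a bridge contract's event logs from an EVM-based chain, can be sketched with web3.py (a v6-style API is assumed). The RPC endpoint, contract address, block range, and event signature below are placeholders; every real bridge defines its own deposit/transfer events, so these values would have to come from the bridge's actual contract and ABI.

```python
from web3 import Web3

# Placeholder RPC endpoint and bridge contract address (hypothetical)
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))
BRIDGE = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")

# Hypothetical event signature; a real bridge's ABI defines the actual event
topic = Web3.keccak(text="TokensBridged(address,address,uint256,uint256)").hex()

logs = w3.eth.get_logs({
    "address": BRIDGE,
    "topics": [topic],
    "fromBlock": 18_000_000,   # placeholder block range
    "toBlock": "latest",
})

for log in logs:
    # Indexed parameters are in `topics`, the remaining fields in `data`;
    # decoding them yields sender, destination chain, recipient, token, and amount.
    print(log["transactionHash"].hex(), log["topics"], log["data"])
```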

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.49-55
    • /
    • 2015
  • According to traffic accident statistics for the last five years, more traffic accidents occurred at night than during the day. Among the various causes of traffic accidents, one major cause is inappropriate or missing street lights, which confuse drivers' sight. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves execution speed over code written in Java or other languages. To measure road luminance, the input image in RGB color space is converted to YCbCr color space, and the Y value gives the luminance of the road. The application detects the road lane and records the lane luminance in the database server. It also receives the road video image from the smartphone's camera and reduces the computational cost by allocating an ROI (region of interest) in the input images. The ROI is converted to a grayscale image and the Canny edge detector is applied to extract the outline of the lanes. A Hough line transform is then applied to obtain the candidate lane group, and both lane boundaries are selected by a lane detection algorithm that uses the gradient of the candidate lanes. When both lanes are detected, a triangular area with a height of 20 pixels is set up below the intersection of the lanes, and the luminance of the road is estimated from this area. The Y value is calculated from the R, G, B values of the pixels in the triangle. The average Y value of the pixels is scaled to a range of 0 to 100 to indicate the road luminance, and pixel values are displayed in colors between black and green. Every 10 minutes, after analyzing the road lane video and the luminance of the road about 60 meters ahead, the car's location from the smartphone's GPS sensor is stored in the database server by wireless communication. We expect the collected road luminance information to warn drivers for safe driving and to effectively support road lighting renovation plans.
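The processing chain in the abstract above (ROI selection, grayscale conversion, Canny edges, Hough lines, and a Y-channel luminance estimate) can be sketched with OpenCV in Python. The original app was written in C/C++ with the Android NDK, and the ROI proportion and threshold values below are illustrative assumptions rather than the paper's tuned parameters.

```python
import cv2
import numpy as np

frame = cv2.imread("road_frame.png")               # hypothetical input frame
h, w = frame.shape[:2]
roi = frame[h // 2:, :]                            # lower half as region of interest (assumed)

# Lane candidates: grayscale -> Canny edges -> probabilistic Hough transform
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                   # thresholds are illustrative
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

# Luminance estimate: convert BGR to YCrCb and average the Y channel over the ROI
y_channel = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)[:, :, 0]
mean_y = float(y_channel.mean())

n_lines = 0 if lines is None else len(lines)
print(f"{n_lines} candidate lane segments, mean Y = {mean_y:.1f}")
```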

Feasibility Study on FSIM Index to Evaluate SAR Image Co-registration Accuracy (SAR 영상 정합 정확도 평가를 위한 FSIM 인자 활용 가능성)

  • Kim, Sang-Wan;Lee, Dongjun
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.847-859
    • /
    • 2021
  • Recently, as the number of high-resolution satellite SAR images increases, the demand for precise co-registration of SAR images for change detection and image fusion is consistently increasing. RMSE (Root Mean Square Error) values computed from GCPs (Ground Control Points) selected by analysts have been widely used for quantitative evaluation of image registration results, but it is difficult to find an approach that measures registration accuracy automatically. In this study, a feasibility analysis was conducted on using the FSIM (Feature Similarity) index as a measure of registration accuracy. TerraSAR-X (TSX) staring spotlight data collected from various incidence angles and orbit directions were used for the analysis. FSIM was almost independent of the spatial resolution of the SAR image. Using a single SAR image, FSIM was analyzed as a function of registration error and then compared with the values estimated from TSX data with different imaging geometries. The FSIM index decreased slightly due to differences in imaging geometry, such as different look angles and different orbit tracks. Analyzing the FSIM value by land cover type showed that the change in the FSIM index according to the co-registration error was most evident in urban areas; therefore, the FSIM index calculated over urban areas was most suitable for determining the accuracy of image registration. The FSIM index thus has sufficient potential to be used as an index of the co-registration accuracy of SAR images.
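FSIM combines a phase-congruency term with a gradient-magnitude term. The sketch below shows only the gradient-magnitude similarity component for a reference/registered image pair, since the full phase-congruency computation (log-Gabor filtering) is too long for a short example; the Sobel gradients and the constant T2 follow the common FSIM formulation and are not specific to this paper.

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude(img):
    """Gradient magnitude from horizontal and vertical Sobel derivatives."""
    img = img.astype(float)
    return np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))

def gradient_similarity(ref, reg, T2=160.0):
    """Gradient-magnitude similarity map S_G, one factor of the FSIM index."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(reg)
    return (2.0 * g1 * g2 + T2) / (g1 ** 2 + g2 ** 2 + T2)

# ref, reg: co-registered SAR amplitude images as 2-D arrays (hypothetical inputs)
# score = gradient_similarity(ref, reg).mean()   # decreases as misregistration grows
```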

Development of a Measuring Tool for Spiritual Care Performance of Hospice Team Members (호스피스 팀원들의 영적 돌봄 수행도 측정 도구 개발)

  • Yoo, Yang-Sook;Han, Sung-Suk;Lee, Sun-Mi;Seo, Min-Jeong;Hong, Jin-Ui
    • Journal of Hospice and Palliative Care
    • /
    • v.9 no.2
    • /
    • pp.86-92
    • /
    • 2006
  • Purpose: This study was conducted to develop a tool for measuring the spiritual care performance of hospice team members. The tool may be used to provide hospice patients with more systematic and standardized spiritual care. Methods: The concepts and items of the tool were developed, and then its validity and reliability were tested. For the validity and reliability tests, a self-reported questionnaire comprising 33 items on a 4-point scale (1-4) was developed, and data were collected from 192 hospice team members from December 2005 to February 2006. Results: Thirty-three items drafted through literature review and professional consultation were reviewed by 20 professionals for validity, then revised and supplemented to produce the final 33 items. Items with a correlation coefficient greater than .30 were retained, and all 33 items met this criterion. The reliability coefficient, Cronbach's α, was 0.95. Factor analysis of the 33 items extracted six factors: relationship formation and communication, encouragement and promotion of spiritual growth, linking with spiritual resources, preparation for death, evaluation and quality control of spiritual intervention, and spiritual assessment for intervention. Conclusion: The tool developed in this study comprises six factors and has a high level of reliability. This tool will greatly contribute to assessing and improving hospice care services, providing systematic and standardized spiritual care for terminally ill patients and their families.

  • PDF

A Study on Design Education Method for Development of Self-Directed Learning Ability (자기주도학습 능력 개발을 위한 설계교육 방법에 관한 연구)

  • Han, Ji-Young;Lee, Min-Young;Jung, Bo-Ra
    • Journal of Engineering Education Research
    • /
    • v.12 no.4
    • /
    • pp.115-125
    • /
    • 2009
  • The purpose of this study was to suggest an efficient method for developing learners' self-directed learning ability. The study was conducted through a literature review on self-directed learning, its components, the development of self-directed learning ability, and design education adopting problem-based learning and project-based learning. From the design education process, project-based learning, and problem-based learning, the common items were extracted and organized into five steps, and differences in learners' levels of self-direction were connected to the lessons of the Grow (1991) model to present the design education steps: readying learners for learning, defining the problem and recognizing its necessity, team building, collecting related data, team learning about the real problem with the teacher, selecting the optimal solution, student-centered discussion, model and product creation, and testing, evaluation, and complementation.

Investigations on Techniques and Applications of Text Analytics (텍스트 분석 기술 및 활용 동향)

  • Kim, Namgyu;Lee, Donghoon;Choi, Hochang;Wong, William Xiu Shun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.42 no.2
    • /
    • pp.471-492
    • /
    • 2017
  • The demand for and interest in big data analytics are increasing rapidly. Big data includes not only existing structured data but also various kinds of unstructured data such as text, images, videos, and logs. Among the various types of unstructured data, text data have gained particular attention because text is the most representative means of describing and delivering information. Text analysis is generally performed in the following order: document collection, parsing and filtering, structuring, frequency analysis, and similarity analysis. The results of the analysis can be presented through word clouds, word networks, topic modeling, document classification, and semantic analysis. Notably, there is an increasing demand to identify trending topics from the rapidly growing volume of text generated through social media. Research on and applications of topic modeling have therefore been actively carried out in various fields, since topic modeling can extract the core topics from a huge number of unstructured text documents and provide the document groups for each topic. In this paper, we review the major techniques and research trends of text analysis, and we introduce cases in which topic modeling has been applied to solve problems in various fields.
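As a minimal illustration of the topic-modeling step highlighted above, the following sketch fits an LDA model with scikit-learn. The toy documents, the number of topics, and the vectorizer settings are assumptions for demonstration only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stock market prices fell amid rate hike fears",
    "the team won the championship after extra time",
    "central bank signals another interest rate increase",
    "injury forces star player out of the final match",
]  # toy corpus for illustration

# Document-term matrix -> LDA with two topics (both choices are illustrative)
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top_terms}")
```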

An Evaluation Method of X-ray Imaging System Resolution for Non-Engineers (비공학도를 위한 X-ray 영상촬영 시스템 해상력 평가 방법)

  • Woo, Jung-Eun;Lee, Yong-Geum;Bae, Seok-Hwan;Kim, Yong-Gwon
    • Journal of radiological science and technology
    • /
    • v.35 no.4
    • /
    • pp.309-314
    • /
    • 2012
  • Nowadays, digital radiography (DR) systems are widely used in clinical sites and are replacing analog film x-ray imaging systems. The resolution of DR images depends on several factors, such as the characteristic contrast and motion of the object, the focal spot size and quality of the x-ray beam, x-ray scattering, and the performance of the DR detector (x-ray conversion efficiency and intrinsic resolution). The DR detector is composed of an x-ray capturing element, a coupling element, and a collecting element, which together affect the system resolution. Generally speaking, the resolution of a medical imaging system is its ability to discriminate anatomical structures. The modulation transfer function (MTF) is widely used to quantify the resolution performance of an imaging system. MTF is defined as the frequency response of the imaging system to a point spread function input, and it can be obtained by taking the Fourier transform of a line spread function extracted from a test image. In the clinic, radiologic technologists, who are in charge of system maintenance and quality control, have to evaluate and routinely check their imaging systems. However, it is not easy for radiologic technologists to measure MTF accurately because of their limited engineering and mathematical background. The objective of this study is to develop and provide a medical imaging system evaluation tool for radiologic technologists, so that they can measure and quantify system performance easily.
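The definition used above, MTF as the Fourier transform of a line spread function (LSF) extracted from a test image, is easy to show numerically. The Gaussian LSF and pixel pitch below are synthetic stand-ins for a measured profile, so the numbers are illustrative only.

```python
import numpy as np

# Synthetic line spread function; a measured LSF would come from a slit or edge test image
pixel_pitch_mm = 0.1
x = np.arange(-32, 32) * pixel_pitch_mm
lsf = np.exp(-0.5 * (x / 0.15) ** 2)
lsf /= lsf.sum()                                      # normalize so that MTF(0) = 1

# MTF is the magnitude of the Fourier transform of the LSF
mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # spatial frequency in cycles/mm

for f, m in zip(freqs[:6], mtf[:6]):
    print(f"{f:5.2f} cycles/mm -> MTF {m:.3f}")
```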

Three-Dimensional Analysis of the Mesophyll Plastids Using Ultra High Voltage Electron Microscopy (초고압전자현미경에 의한 엽육세포 색소체 미세구조의 3차원적 분석)

  • Kim, In-Sun;Park, Sang-Chan;Han, Sung-Sik;Kim, Eun-Soo
    • Applied Microscopy
    • /
    • v.36 no.3
    • /
    • pp.217-226
    • /
    • 2006
  • Image processing by ultra high voltage electron microscopy (UHVEM) and tomography has made major contributions to research on cellular ultrastructure, and these advances have also enabled improved analysis of three-dimensional cellular structures in botany. In the present study, using UHVEM and tomography, we attempted to reconstruct three-dimensional images of plastid inclusions that probably differentiate during photosynthesis. The foliar tissues were studied primarily with TEM and further examined with UHVEM. The spatial relationship between tubular elements and the thylakoid membranes and/or starch grains within plastids was investigated mainly in CAM-performing Sedum as well as in the C4 species Salsola. The inclusion bodies were found to occur only in early development in the former, while they were found only in mesophyll cells in the latter. The specimens were tilted every two degrees to obtain two-dimensional images with UHVEM, and the two types were subsequently compared. Digital image processing was performed on the elements of the inclusion body using tilting, tomography, and the IMOD program to generate and reconstruct three-dimensional images at the cellular level. In Sedum plastids, the inclusion bodies consisted of tubular elements with a spacing of about 20 nm between elements. In Salsola, however, the plastid inclusion bodies showed quite different element structure, display pattern, and origin relative to those of Sedum. In both species, the inclusion bodies had an integrative relationship with the starch grains.
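The reconstruction step above (combining a tilt series taken every two degrees into a three-dimensional volume with IMOD) is based on tomographic back-projection. The sketch below illustrates the same principle on a single synthetic slice with scikit-image's Radon and inverse-Radon transforms; the phantom and the limited tilt range of +/-60 degrees are assumptions for illustration, not the authors' data or software.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Synthetic 2-D slice standing in for one section of the imaged volume
slice_2d = rescale(shepp_logan_phantom(), 0.5)

# Simulate a tilt series: one projection every 2 degrees over a limited +/-60 degree range
angles = np.arange(-60, 62, 2)
sinogram = radon(slice_2d, theta=angles)

# Filtered back-projection recovers the slice from its projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
print("mean absolute reconstruction error:", np.abs(reconstruction - slice_2d).mean())
```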