• Title/Summary/Keyword: pre-computation

Search results: 175

Lightweight Deep Learning Model for Real-Time 3D Object Detection in Point Clouds (실시간 3차원 객체 검출을 위한 포인트 클라우드 기반 딥러닝 모델 경량화)

  • Kim, Gyu-Min;Baek, Joong-Hwan;Kim, Hee Yeong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.9
    • /
    • pp.1330-1339
    • /
    • 2022
  • 3D object detection generally targets relatively large objects such as automobiles, buses, persons, and furniture, so it is vulnerable when detecting small objects. In addition, in environments with limited resources such as embedded devices, it is difficult to deploy such models because of their huge computational cost. In this paper, the accuracy of small object detection was improved by focusing on local features using only one layer, and the inference speed was improved through the proposed knowledge distillation method from a large pre-trained network to a small network and an adaptive quantization method according to the parameter size. The proposed model was evaluated on the SUN RGB-D Val and a self-made apple tree data set. It achieved 62.04% at mAP@0.25 and 47.1% at mAP@0.5, with an inference speed of 120.5 scenes per second, demonstrating fast real-time processing.
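A minimal NumPy sketch of the two speed-up techniques this abstract names, knowledge distillation and uniform weight quantization; the temperature, bit widths, and per-layer policy are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; a higher T softens the teacher's distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """KL divergence between softened teacher and student outputs."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

def quantize(w, bits):
    """Uniform quantization of a weight tensor to 2**bits levels; an adaptive
    scheme would pick `bits` per layer according to its parameter size."""
    w = np.asarray(w, dtype=float)
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    q = np.round((w - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)
```

Minimizing the distillation loss pulls the small network's outputs toward the large network's softened outputs, while fewer quantization bits trade accuracy for memory.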

Analysis to a Remote User Authentication Scheme Using Smart Cards (스마트 카드를 이용한 사용자 인증 스킴의 안전성 분석)

  • An, Young-Hwa;Lee, Kang-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.3
    • /
    • pp.133-138
    • /
    • 2009
  • Recently, Lin et al. proposed a remote user authentication scheme using smart cards. However, the scheme does not satisfy the security requirements expected of password-based smart-card authentication. In this paper, we show that an adversary who steals the user's smart card and extracts the information stored in it can recover the user's password through an off-line password guessing attack. We also propose seven security requirements for evaluating remote user authentication schemes using smart cards. Our analysis finds that Lin et al.'s scheme fails to meet several of these requirements, so we suggest an improved scheme: a mutual authentication scheme that does not store the user's password verifier on the server and allows the user and the server to authenticate each other at the same time.
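The off-line password guessing attack described above can be illustrated with a deliberately simplified toy scheme (the stored verifier and hash construction below are hypothetical, not Lin et al.'s actual protocol): once the card's contents are extracted, each dictionary candidate can be tested without contacting the server.

```python
import hashlib

def h(s: str) -> str:
    """SHA-256 standing in for the scheme's one-way hash function."""
    return hashlib.sha256(s.encode()).hexdigest()

# Toy card contents: suppose the card stores a verifier derived from the password.
user_id = "alice"
password = "sunshine"                     # weak dictionary word
stored_verifier = h(user_id + password)   # value an adversary extracts from the card

# Off-line guessing: with the verifier in hand, the attacker tests candidates
# locally, with no server interaction and no rate limiting.
dictionary = ["123456", "password", "letmein", "sunshine", "dragon"]
recovered = next((g for g in dictionary if h(user_id + g) == stored_verifier), None)
```

This is why the improved scheme avoids storing any password-derived verifier that can be checked off-line.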

Efficient Thread Allocation Method of Convolutional Neural Network based on GPGPU (GPGPU 기반 Convolutional Neural Network의 효율적인 스레드 할당 기법)

  • Kim, Mincheol;Lee, Kwangyeob
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.10
    • /
    • pp.935-943
    • /
    • 2017
  • CNN (convolutional neural network), which is used for image classification and speech recognition among neural networks trained on positive data, has been continuously developed into high-performance architectures, but it remains difficult to deploy in embedded systems with limited resources. We therefore use GPGPU (general-purpose computing on graphics processing units) with pre-trained weights to mitigate the problem, but limitations remain. Since CNN performs simple, iterative operations, computation speed varies greatly depending on how threads are allocated and utilized on a SIMT (Single Instruction Multiple Thread) based GPGPU. Threads left idle while performing convolution and pooling operations are the bottleneck; the proposed method increases the operation speed by reassigning the remaining threads to the computations of subsequent feature maps and kernels.

Comparative evaluation of the methods of producing planar image results by using Q-Metrix method of SPECT/CT in Lung Perfusion Scan (Lung Perfusion scan에서 SPECT-CT의 Q-Metrix방법과 평면영상 결과 산출방법에 대한 비교평가)

  • Ha, Tae Hwan;Lim, Jung Jin;Do, Yong Ho;Cho, Sung Wook;Noh, Gyeong Woon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.1
    • /
    • pp.90-97
    • /
    • 2018
  • Purpose: The lung segment ratio obtained through quantitative analysis of lung perfusion scan images is calculated to evaluate lung function before and after surgery. In this study, planar image production methods are comparatively evaluated against the Q-Metrix (GE Healthcare, USA) program, which enables not only quantitative analysis but also computation of the segment ratio after SPECT/CT. Materials and Methods: Lung perfusion scans and SPECT/CT were performed on 50 lung cancer patients prior to surgery who visited our hospital from May 1, 2015 to September 13, 2016, using Discovery 670 (GE Healthcare, USA) equipment. The AP (Anterior Posterior) method, which uses planar images, divided the frontal and rear images into three rectangular portions with an ROI tool, while the PO (Posterior Oblique) method computed the segment ratio by dividing the right lobe into three parts and the left lobe into two parts on the oblique image. The segment ratio was computed by setting the ROI and VOI in the CT image using the Q-Metrix program, and statistical analysis was performed with SPSS Ver. 23. Results: Regarding the correlation concordance rate between the Q-Metrix and AP methods, the RUL (right upper lobe), RML (right middle lobe), and RLL (right lower lobe) were 0.224, 0.035, and 0.447, while the LUL (left upper lobe) and LLL (left lower lobe) were 0.643 and 0.456, respectively. In the PO method, the right lobes were 0.663, 0.623, and 0.702, and the left lobes were 0.754 and 0.823. In the paired-sample T-test, the right lobes were 11.6±4.5, 26.9±6.2, and 17.8±4.2 in the AP method, and the left lobes were 28.4±4.8 and 15.4±5.6. The right lobes of the PO method were 17.4±5.0, 10.5±3.6, and 27.3±6.0, and the left lobes were 21.6±4.8 and 23.1±6.6, showing statistically significant differences from the Q-Metrix method for each lobe (P<0.05). However, there was no statistically significant difference for the right middle lobe (P>0.05). Conclusion: The AP method showed a low concordance rate in correlation with the Q-Metrix method, whereas the PO method displayed a high concordance rate overall. Although the AP method had significant differences in all lobes, the PO method showed no significant difference in the right middle lobe. Therefore, when producing lung perfusion scan results, the Q-Metrix method of SPECT/CT is useful for computing accurate result values, and more practical segmental results can be expected by using the PO method when planar images are acquired.
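The paired-sample T-test used in this comparison reduces to a simple statistic over per-patient differences; a sketch with made-up ratios (not the study's data):

```python
from math import sqrt

def paired_t(a, b):
    """Paired-sample t statistic: mean of the per-subject differences
    divided by its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / sqrt(var / n)

# Hypothetical per-patient segment ratios (%) for one lobe from two methods.
method_ap = [12.0, 11.5, 10.8, 12.4, 11.9]
method_q  = [11.0, 11.2, 10.1, 11.6, 11.3]
t_stat = paired_t(method_ap, method_q)
```

The resulting t statistic is compared against the t distribution with n−1 degrees of freedom to obtain the P-values reported above.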

Development of Dose Planning System for Brachytherapy with High Dose Rate Using Ir-192 Source (고선량률 강내조사선원을 이용한 근접조사선량계획전산화 개발)

  • Choi Tae Jin;Yei Ji Won;Kim Jin Hee;Kim OK;Lee Ho Joon;Han Hyun Soo
    • Radiation Oncology Journal
    • /
    • v.20 no.3
    • /
    • pp.283-293
    • /
    • 2002
  • Purpose: A PC-based brachytherapy planning system was developed to display dose distributions on simulation images, including 2D isodose curves with dose profiles, dose-volume histograms, and 3D dose distributions. Materials and Methods: The brachytherapy dose planning software was developed specifically for the Ir-192 source, which had been developed by KAERI as a substitute for the Co-60 source. Dose computation was achieved by searching a pre-computed dose matrix tabulated as a function of radial and axial distance from the source. The computation accounts for the tissue scattering correction factor and the anisotropy of the dose distribution. The computed dose distributions were displayed on 2D film images including dose profiles, as 3D isodose curves in wire-frame form, and as dose-volume histograms. Results: Brachytherapy dose planning began by obtaining the source positions on the principal plane of the source axis. The dose distributions in tissue were computed on a 200×200 mm² plane with the source axis located at its center. The point doses along the longitudinal axis of the source were 4.5~9.0% smaller than those on the radial axis of the plane, due to the anisotropy created by the cylindrical shape of the source. Compared to manual calculation, the point doses showed 1~5% discrepancies from the benchmarking plan. The 2D dose distributions of different planes were matched to the same administered isodose level in order to analyze the shape of the optimized dose level. The accumulated dose-volume histogram, displayed as a function of the percentage volume of the administered minimum dose level, was used to guide the volume analysis. Conclusion: This study evaluated the developed computerized brachytherapy dose planning system. The dose distribution was displayed on the coronal, sagittal, and axial planes together with the dose histogram. The accumulated DVH and 3D dose distributions provided by the developed system may be useful tools for dose analysis in comparison with orthogonal dose planning.
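Dose computation by searching a pre-computed dose matrix, as described above, amounts to a table lookup with interpolation; a minimal sketch in which a placeholder falloff function stands in for the real Ir-192 table (the grid spacing and table values are assumptions):

```python
import numpy as np

# Hypothetical pre-computed dose table, indexed by axial (rows) and radial
# (columns) distance from the source on a regular 1 mm grid.
axial = np.arange(0.0, 100.0, 1.0)
radial = np.arange(0.0, 100.0, 1.0)
A, R = np.meshgrid(axial, radial, indexing="ij")
dose_table = 1.0 / (1.0 + A**2 + R**2)   # placeholder falloff, not real dosimetry

def dose_at(ax, ra):
    """Bilinear interpolation into the pre-computed dose matrix."""
    i, j = int(ax), int(ra)               # grid spacing is 1 mm
    fa, fr = ax - i, ra - j               # fractional offsets within the cell
    return (dose_table[i, j] * (1 - fa) * (1 - fr)
            + dose_table[i, j + 1] * (1 - fa) * fr
            + dose_table[i + 1, j] * fa * (1 - fr)
            + dose_table[i + 1, j + 1] * fa * fr)
```

Scatter and anisotropy corrections would multiply the looked-up value; the lookup itself is what makes per-plane dose computation fast on a PC.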

A Possible Path per Link CBR Algorithm for Interference Avoidance in MPLS Networks

  • Sa-Ngiamsak, Wisitsak;Varakulsiripunth, Ruttikorn
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.772-776
    • /
    • 2004
  • This paper proposes an interference avoidance approach for the Constraint-Based Routing (CBR) algorithm in the Multi-Protocol Label Switching (MPLS) network. The MPLS network can integrate any layer-3 protocol with any layer-2 protocol of the OSI model. It is based on label switching, a fast and flexible switching technique using pre-defined Label Switched Paths (LSPs), and is a solution for Traffic Engineering (TE), Quality of Service (QoS), Virtual Private Network (VPN), and Constraint-Based Routing (CBR) issues. For MPLS CBR, the routing performance requirements are capability for on-line routing, high network throughput, high network utilization, high network scalability, fast rerouting, a low percentage of call-setup request blocking, and low calculation complexity. Previously proposed algorithms include the minimum hop (MH) algorithm, the widest shortest path (WSP) algorithm, and the minimum interference routing algorithm (MIRA). MIRA currently seems to be the best solution to the MPLS routing problem when selecting a path with a minimum interference level: it achieves lower call-setup request blocking, a lower interference level, higher network utilization, and higher network throughput. However, it suffers from routing calculation complexity, which makes it difficult to implement in real tasks. This paper sets three objectives for routing algorithm design: minimizing interference with other source-destination node pairs, minimizing resource usage by selecting a minimum hop path first, and reducing calculation complexity. The proposed CBR algorithm is based on a power factor calculated from the total number of possible paths per link and the residual bandwidth in the network. A path with a high power factor is considered a minimum interference path and is selected for path setup. With the proposed algorithm, all three objectives are attained: selecting a high power factor path minimizes the interference level among all source-destination node pairs, and selecting the shortest path among paths of equal power factor minimizes network resource usage, leaving more resources to reserve for future call-setup requests. Moreover, the number of possible paths per link (the interference level indicator) is recalculated only when the network topology changes, which reduces routing calculation complexity. Simulation results show that the proposed algorithm performs well, with high network utilization, a low call-setup blocking percentage, and low routing computation complexity.
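The path selection described above can be sketched as follows; the abstract does not give the exact power-factor formula, so the scoring below (residual bandwidth divided by the possible-path count, taken over a path's worst link) is an assumption for illustration:

```python
# Hypothetical per-link data: residual bandwidth, and the number of possible
# paths of other source-destination pairs crossing the link (the interference
# indicator, recomputed only when the topology changes).
residual_bw    = {"a": 80, "b": 40, "c": 100, "d": 60}
paths_per_link = {"a": 2,  "b": 8,  "c": 1,   "d": 3}

def power_factor(path):
    """Assumed scoring: a path is only as good as its most contended link."""
    return min(residual_bw[l] / paths_per_link[l] for l in path)

def select_path(candidates):
    """Highest power factor first; among equal scores, prefer the shortest
    path to minimise resource usage."""
    return max(candidates, key=lambda p: (power_factor(p), -len(p)))

candidates = [["a", "b"], ["c", "d"], ["a", "d", "c"]]
best = select_path(candidates)
```

Because the per-link indicator is cached between topology changes, each call-setup request only needs the cheap scoring step above.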


Design of Video Pre-processing Algorithm for High-speed Processing of Maritime Object Detection System and Deep Learning based Integrated System (해상 객체 검출 고속 처리를 위한 영상 전처리 알고리즘 설계와 딥러닝 기반의 통합 시스템)

  • Song, Hyun-hak;Lee, Hyo-chan;Lee, Sung-ju;Jeon, Ho-seok;Im, Tae-ho
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.117-126
    • /
    • 2020
  • A maritime object detection system is an intelligent assistance system for maritime autonomous surface ships (MASS). It automatically detects floating debris around a ship, which poses a collision risk and used to be checked by a captain with the naked eye, at a level of accuracy similar to the human check. In the past, surrounding objects were detected with information gathered from radar or sonar devices; with the development of artificial intelligence technology, intelligent CCTV installed on a ship is now used to detect various types of floating debris along the sailing course. If video data processing slows down under the various requirements and complexity of MASS, however, neither safety nor smooth service support can be guaranteed. To solve this issue, this study investigated minimizing the computation volume for video data and increasing the data processing speed for maritime object detection. Unlike previous studies that used the Hough transform to find the horizon and secure the areas of interest, the present study proposes a new method that optimizes a binarization algorithm and finds areas whose locations resemble actual objects in order to improve speed. A maritime object detection system was implemented based on a deep learning CNN to demonstrate the usefulness of the proposed method and assess its performance. The proposed algorithm ran 4 times faster than the previous method while keeping its detection accuracy.
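The binarization-based region search can be illustrated on a toy frame; a fixed threshold stands in for the paper's optimized binarization, and the bounding-box step is a cheap stand-in for its object-location heuristic:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Fixed-threshold binarization; the paper optimises the threshold choice."""
    return (gray > threshold).astype(np.uint8)

def candidate_box(mask):
    """Bounding box of foreground pixels -- a cheap stand-in for locating
    regions whose positions resemble actual floating objects."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (ys.min(), xs.min(), ys.max(), xs.max())

# Toy frame: dark sea with one bright object.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[3:5, 2:6] = 200
box = candidate_box(binarize(frame))
```

Restricting the CNN to such candidate regions, rather than the full frame, is what cuts the per-frame computation volume.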

Detection of Gradual Transitions in MPEG Compressed Video using Hidden Markov Model (은닉 마르코프 모델을 이용한 MPEG 압축 비디오에서의 점진적 변환의 검출)

  • Choi, Sung-Min;Kim, Dai-Jin;Bang, Sung-Yang
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.3
    • /
    • pp.379-386
    • /
    • 2004
  • Video segmentation is a fundamental task in video indexing, and it includes two kinds of shot change detection: abrupt transitions and gradual transitions. Abrupt shot boundaries are detected by computing an image-based distance between adjacent frames and comparing this distance with a pre-determined threshold. Gradual shot boundaries, however, are difficult to detect with this approach. To overcome this difficulty, we propose a method that detects gradual transitions in MPEG compressed video using an HMM (Hidden Markov Model). We consider two different HMMs: a discrete HMM and a continuous HMM with a Gaussian mixture model. As image features for the HMM's observations, we use two distinct features: the difference of the histograms of DC images between two adjacent frames, and the difference of the individual macroblock deviations at corresponding macroblocks between two adjacent frames, where deviation means the arithmetic difference of each macroblock's DC value from the mean of the DC values in the given frame. Furthermore, we obtain the DC sequences of P and B frames by a first-order approximation for fast and effective computation. Experimental results show the best detection and classification performance for gradual transitions when a continuous HMM with one Gaussian model is used together with both image features.
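The first observation feature, the histogram difference of DC images between adjacent frames, can be sketched as follows (the bin count and distance metric are assumptions):

```python
import numpy as np

def dc_histogram(dc_image, bins=16):
    """Normalised histogram of DC coefficients (one value per 8x8 MPEG block)."""
    hist, _ = np.histogram(dc_image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def histogram_difference(frame_a, frame_b, bins=16):
    """L1 distance between adjacent frames' DC histograms: near zero within
    a shot, drifting upward through a gradual transition."""
    return float(np.abs(dc_histogram(frame_a, bins) - dc_histogram(frame_b, bins)).sum())
```

A sequence of such per-frame-pair values is exactly the kind of observation sequence the HMM decodes into shot/transition states.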

Novel Collision Warning System using Neural Networks (신경회로망을 이용한 새로운 충돌 경고 시스템)

  • Kim, Beomseong;Choi, Baehoon;An, Jhonghyun;Hwang, Jaeho;Kim, Euntai
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.4
    • /
    • pp.392-397
    • /
    • 2014
  • Recently, there has been much research on active safety systems for intelligent vehicles. To reduce the probability of collisions caused by driver inattention and mistakes, an active safety system gives a warning or controls the vehicle to avoid the collision. For this purpose, it is necessary to recognize and analyze the surrounding circumstances. In this paper, we treat the problem of collision risk assessment. In general, it is difficult to calculate the collision risk before a collision happens. To account for the uncertainty of the situation, Monte Carlo simulation can be employed; however, it takes a long computation time and is not suitable in practice. In this paper, we apply neural networks to solve this problem: trained on the results of Monte Carlo simulation, the network efficiently computes risk for unseen data. Furthermore, we propose features that affect the performance of the assessment. The proposed algorithm is verified through application to various crash scenarios.
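The Monte Carlo risk estimate that such a network is trained to replace can be sketched with a one-dimensional toy model (all dynamics and noise parameters below are assumptions):

```python
import random

def collision_probability(rel_pos, rel_vel, pos_sigma, vel_sigma,
                          horizon=2.0, dt=0.1, radius=2.0, n=5000, seed=0):
    """Monte Carlo estimate of collision risk under Gaussian state uncertainty.
    1-D longitudinal toy model; at runtime a trained network would map the
    same inputs to this probability without the sampling loop."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        p = rel_pos + rng.gauss(0.0, pos_sigma)   # sampled relative position (m)
        v = rel_vel + rng.gauss(0.0, vel_sigma)   # sampled relative velocity (m/s)
        t = 0.0
        while t <= horizon:
            if abs(p + v * t) < radius:           # within collision radius
                hits += 1
                break
            t += dt
    return hits / n
```

The slow part is the sampling loop; generating many (state, probability) pairs off-line and fitting a network to them gives the fast on-line assessment the paper targets.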

An Effective Face Authentication Method for Resource - Constrained Devices (제한된 자원을 갖는 장치에서 효과적인 얼굴 인증 방법)

  • Lee Kyunghee;Byun Hyeran
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.9
    • /
    • pp.1233-1245
    • /
    • 2004
  • Although biometrics are a good tool for authenticating a person in terms of security and convenience, typical biometric authentication algorithms may not be executable on resource-constrained devices such as smart cards. Thus, to execute biometric processing on resource-constrained devices, it is desirable to develop a lightweight authentication algorithm that requires only a small amount of memory and computation. Among biological features, the face is one of the most acceptable biometrics, because humans use it in visual interaction and acquiring face images is non-intrusive. We present a new face authentication algorithm in this paper. Our contribution is two-fold. The first is a face authentication algorithm with a low memory requirement, which uses support vector machines (SVM) with a feature set extracted by genetic algorithms (GA). The second is a method to further reduce, if needed, the amount of memory required for authentication, at the expense of verification rate, by changing a controllable system parameter for the feature set size. Given a pre-defined amount of memory, this capability is quite effective for mounting our algorithm on memory-constrained devices. Experimental results on various databases show that our face authentication algorithm, with SVM whose input vectors consist of discriminating features extracted by GA, performs much better than the algorithm without GA feature selection, in terms of both accuracy and memory requirement. Experiments also show that the number of features to be selected is controllable via a system parameter.
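The GA feature-selection loop can be sketched with a toy fitness standing in for the SVM verification rate (the informative-feature set, size penalty, and GA parameters below are all assumptions for illustration):

```python
import random

N_FEATURES = 12
INFORMATIVE = {0, 3, 7}   # toy ground truth standing in for discriminating features

def fitness(mask, size_penalty=0.02):
    """Stand-in for SVM verification rate: reward informative features and
    penalise subset size (the memory footprint on the device). The penalty
    weight plays the role of the controllable system parameter."""
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits / len(INFORMATIVE) - size_penalty * sum(mask)

def evolve(pop_size=30, generations=60, seed=1):
    """Elitist GA over bit masks: keep the top half, refill by one-point
    crossover plus occasional bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_FEATURES)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                    # mutation
                j = rng.randrange(N_FEATURES)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Raising the size penalty shrinks the selected subset (less memory, lower verification rate), mirroring the paper's memory/accuracy trade-off parameter.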