• Title/Summary/Keyword: Adaptive applications

Search results: 863

Predicting Unseen Object Pose with an Adaptive Depth Estimator (적응형 깊이 추정기를 이용한 미지 물체의 자세 예측)

  • Song, Sungho;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.12
    • /
    • pp.509-516
    • /
    • 2022
  • Accurate pose prediction of objects in 3D space is an important visual recognition technique widely used in many applications such as scene understanding in indoor and outdoor environments, robotic object manipulation, autonomous driving, and augmented reality. Most previous works on object pose estimation have the limitation that they require an exact 3D CAD model of each object. Unlike such previous works, this paper proposes a novel neural network model that can predict the poses of unknown objects based only on their RGB color images, without the corresponding 3D CAD models. The proposed model obtains the depth maps required for unknown object pose prediction by using an adaptive depth estimator, AdaBins. In this paper, we evaluate the usefulness and the performance of the proposed model through experiments on benchmark datasets.
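For readers unfamiliar with AdaBins, the core adaptive-bins idea can be illustrated briefly: the network predicts per-image bin widths and per-pixel probabilities over those bins, and depth is the probability-weighted sum of the bin centers. The sketch below is a minimal NumPy illustration with random arrays standing in for the network outputs; the shapes, depth range, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adabins_depth(bin_logits, pixel_logits, d_min=0.1, d_max=10.0):
    """Combine per-image bin-width logits and per-pixel bin probabilities
    into a dense depth map, following the adaptive-bins formulation
    (toy shapes; the real model predicts these with a CNN/transformer)."""
    # Normalized adaptive bin widths for this image.
    widths = np.exp(bin_logits) / np.exp(bin_logits).sum()            # (K,)
    edges = d_min + (d_max - d_min) * np.cumsum(np.concatenate([[0.0], widths]))
    centers = 0.5 * (edges[:-1] + edges[1:])                          # (K,)
    # Per-pixel softmax over bins.
    e = np.exp(pixel_logits - pixel_logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)                         # (H, W, K)
    # Depth is the probability-weighted sum of bin centers.
    return probs @ centers                                            # (H, W)

# Toy usage with random "network outputs".
rng = np.random.default_rng(0)
depth = adabins_depth(rng.normal(size=16), rng.normal(size=(48, 64, 16)))
print(depth.shape, float(depth.min()), float(depth.max()))
```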

A Two-Step Call Admission Control Scheme using Priority Queue in Cellular Networks (셀룰러 이동망에서의 우선순위 큐 기반의 2단계 호 수락 제어 기법)

  • 김명일;김성조
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.4
    • /
    • pp.461-473
    • /
    • 2003
  • Multimedia applications are much more sensitive to QoS (Quality of Service) than text-based ones due to their data continuity. In order to provide a fast-moving MH (Mobile Host) running a multimedia application with a consistent QoS, an efficient call admission mechanism is needed. This paper proposes the 2SCA (2-Step Call Admission) scheme, a priority-based call admission scheme that guarantees a consistent QoS for mobile multimedia applications. Calls of an MH are classified into new calls, hand-off calls, and QoS-upgrading calls. The 2SCA is composed of basic call admission and advanced call admission: the former determines call admission based on the bandwidth available in each cell, and the latter determines call admission by applying the DTT (Delay Tolerance Time), PQueue (Priority Queue), and UpQueue (Upgrade Queue) algorithms according to the type of each call blocked at the basic call admission stage. In order to evaluate the performance of our mechanism, we measure metrics such as the dropping probability of new calls, the dropping probability of hand-off calls, and bandwidth utilization. The results show that the performance of our mechanism is superior to that of existing mechanisms such as CSP (Complete Sharing Policy), GCP (Guard Channel Policy), and AGCP (Adaptive Guard Channel Policy).
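As a rough illustration of the two-step idea (not the paper's exact algorithms), the following sketch admits calls on available bandwidth in a first step and, in a second step, queues blocked calls in priority order with a delay tolerance time; the call types, priorities, and parameters are assumptions made for the example.

```python
import heapq, itertools

# Call types in descending priority (an illustrative assumption; the paper
# distinguishes new, hand-off, and QoS-upgrading calls).
PRIORITY = {"handoff": 0, "upgrade": 1, "new": 2}

class TwoStepCAC:
    """Toy two-step call admission controller: step 1 admits on available
    bandwidth, step 2 queues blocked calls by priority until their delay
    tolerance time (DTT) expires."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.queue = []                      # entries: (priority, seq, call)
        self._seq = itertools.count()

    def request(self, call, now):
        if self.used + call["bw"] <= self.capacity:       # step 1: basic admission
            self.used += call["bw"]
            return "admitted"
        call["deadline"] = now + call["dtt"]               # step 2: queue with DTT
        heapq.heappush(self.queue, (PRIORITY[call["type"]], next(self._seq), call))
        return "queued"

    def release(self, bw, now):
        """Free bandwidth and retry queued calls in priority order."""
        self.used -= bw
        kept, admitted = [], []
        while self.queue:
            prio, seq, call = heapq.heappop(self.queue)
            if call["deadline"] < now:                     # DTT expired: drop
                continue
            if self.used + call["bw"] <= self.capacity:
                self.used += call["bw"]
                admitted.append(call)
            else:
                kept.append((prio, seq, call))
        for item in kept:
            heapq.heappush(self.queue, item)
        return admitted

cac = TwoStepCAC(capacity=10)
print(cac.request({"type": "new", "bw": 6, "dtt": 5}, now=0))       # admitted
print(cac.request({"type": "handoff", "bw": 6, "dtt": 5}, now=1))   # queued
print(cac.release(6, now=2))                                        # hand-off admitted
```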

Improving Performance of ART with Iterative Partitioning using Test Case Distribution Management (테스트 케이스 분포 조절을 통한 IP-ART 기법의 성능 향상 정책)

  • Shin, Seung-Hun;Park, Seung-Kyu;Choi, Kyung-Hee
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.6
    • /
    • pp.451-461
    • /
    • 2009
  • Adaptive Random Testing (ART) aims to improve the performance of traditional Random Testing (RT) by reducing the number of test cases needed to find the failure region located in the input domain. Such enhancement is obtained through efficient test-case selection algorithms. ART through Iterative Partitioning (IP-ART) is one of the ART techniques; it uses an iterative input-domain partitioning method to overcome the significant computation-time drawbacks of early versions of ART. IP-ART with Enlarged Input Domain (EIP-ART), an improved version of IP-ART, is known to achieve additional performance improvement and scalability by expanding to a virtual test space beyond the real input domain of IP-ART. The EIP-ART algorithm, however, has the drawback of heavy computation time for generating test cases, mainly due to the virtual input-domain enlargement. For this reason, two algorithms are proposed in this paper to mitigate the computation overhead of EIP-ART. In simulation experiments, the input-domain tiling technique, one of the two proposed algorithms, showed significant improvements in terms of computation time and testing performance.
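The iterative-partitioning idea behind IP-ART can be sketched as follows: partition the input domain into a grid, exclude cells that contain or neighbor an existing test case, sample from a remaining cell, and refine the grid when no cell is left. The code below is a toy 2-D sketch under those assumptions; it is not the paper's EIP-ART or tiling algorithm.

```python
import random

def ipart_next(tests, bounds, grid=1, max_grid=64):
    """Pick the next test case by ART-through-iterative-partitioning (toy 2-D
    version): split the domain into grid x grid cells, exclude cells that
    contain a previous test case or touch one, and sample uniformly from a
    remaining cell; refine the grid if none remain."""
    (x0, x1), (y0, y1) = bounds
    while grid <= max_grid:
        w, h = (x1 - x0) / grid, (y1 - y0) / grid
        occupied = {(int((x - x0) / w), int((y - y0) / h)) for x, y in tests}
        blocked = {(i + di, j + dj) for i, j in occupied
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)}
        free = [(i, j) for i in range(grid) for j in range(grid)
                if (i, j) not in blocked]
        if free:
            i, j = random.choice(free)
            return (x0 + (i + random.random()) * w,
                    y0 + (j + random.random()) * h)
        grid *= 2                      # no free cell left: refine the partitioning
    raise RuntimeError("grid refinement limit reached")

# Generate a few spread-out test cases over the unit square.
tests = []
for _ in range(10):
    tests.append(ipart_next(tests, bounds=((0.0, 1.0), (0.0, 1.0))))
print(tests)
```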

Endo- and Epi-cardial Boundary Detection of the Left Ventricle Using Intensity Distribution and Adaptive Gradient Profile in Cardiac CT Images (심장 CT 영상에서 밝기값 분포와 적응적 기울기 프로파일을 이용한 좌심실 내외벽 경계 검출)

  • Lee, Min-Jin;Hong, Helen
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.4
    • /
    • pp.273-281
    • /
    • 2010
  • In this paper, we propose an automatic segmentation method for the endo- and epi-cardial boundaries that uses a ray-casting profile based on intensity distribution and gradient information in CT images. First, endo-cardial boundary points are detected by adaptive thresholding and seeded region growing. To include the papillary muscles inside the boundary, the endo-cardial boundary points are refined using a ray-casting-based profile. Second, epi-cardial boundary points, which have both a myocardial intensity value and a maximum gradient, are detected using a ray-casting-based adaptive gradient profile. Finally, to preserve an elliptical or circular shape, the endo- and epi-cardial boundary points are refined by elliptical interpolation and B-spline curve fitting. Curvature-based contour fitting is then performed to overcome problems associated with the heterogeneity of myocardial intensity and the lack of clear delineation between the myocardium and adjacent anatomic structures. To evaluate our method, we performed visual inspection and measured accuracy and processing time. For the accuracy evaluation, the average distance difference and the overlapping region ratio between automatic and manual segmentation were calculated. Experimental results show that the average distance difference was 0.56 ± 0.24 mm and the overlapping region ratio was 82 ± 4.2% on average. In all experimental datasets, the whole process of our method finished within 1 second.
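A simplified version of the ray-casting gradient profile can be sketched as follows: rays are cast outward from an LV seed point, intensities are sampled along each ray, and a boundary point is taken where the gradient is largest among samples whose intensity looks myocardial. The intensity range, synthetic image, and function below are illustrative assumptions, not the paper's adaptive profile.

```python
import numpy as np

def raycast_boundary(img, center, angles, radii, myo_range=(80, 200)):
    """Toy ray-casting profile: for each angle, sample intensities outward
    from the LV center and return the sample with the largest gradient whose
    intensity lies in an assumed myocardium-like range."""
    h, w = img.shape
    cy, cx = center
    boundary = []
    for a in angles:
        ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, h - 1)
        xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, w - 1)
        profile = img[ys, xs].astype(float)
        grad = np.abs(np.gradient(profile))
        # Mask out samples whose intensity is not myocardium-like.
        grad[(profile < myo_range[0]) | (profile > myo_range[1])] = -np.inf
        k = int(np.argmax(grad))
        boundary.append((ys[k], xs[k]))
    return np.array(boundary)

# Synthetic image: bright contrast-filled cavity inside a darker myocardial ring.
img = np.full((128, 128), 60.0)
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
img[r < 40] = 150.0     # myocardium
img[r < 25] = 300.0     # contrast-filled cavity

pts = raycast_boundary(img, center=(64, 64),
                       angles=np.linspace(0, 2 * np.pi, 36, endpoint=False),
                       radii=np.arange(1, 60))
print(pts[:5])
```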

Generation of Multi-view Images Using Depth Map Decomposition and Edge Smoothing (깊이맵의 정보 분해와 경계 평탄 필터링을 이용한 다시점 영상 생성 방법)

  • Kim, Sung-Yeol;Lee, Sang-Beom;Kim, Yoo-Kyung;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.471-482
    • /
    • 2006
  • In this paper, we propose a new scheme to generate multi-view images using depth map decomposition and adaptive edge smoothing. After applying smoothing filters with an adaptive window size to edge regions of the depth map, we decompose the smoothed depth map into four types of images: regular mesh, object boundary, feature point, and number-of-layer images. Then, we generate 3-D scenes from the decomposed images using a 3-D mesh triangulation technique. Finally, we extract multi-view images from the reconstructed 3-D scenes by changing the position of a virtual camera in 3-D space. Experimental results show that our scheme generates multi-view images successfully, minimizing the rubber-sheet problem through edge smoothing, and renders consecutive 3-D scenes in real time through information decomposition of depth maps. In addition, the proposed scheme can be used for 3-D applications that need depth information, such as depth keying, since it preserves the depth data, unlike the previous asymmetric filtering method.
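To give a feel for why edge smoothing mitigates the rubber-sheet problem, the sketch below averages depth pixels over a window that grows with the local depth gradient and then performs a naive horizontal pixel shift to synthesize one virtual view. The window sizes, camera parameters, and pixel-shift rendering are toy assumptions rather than the mesh-based pipeline described in the paper.

```python
import numpy as np

def smooth_depth_edges(depth, max_win=7, grad_thresh=5.0):
    """Toy adaptive edge smoothing: average each depth pixel over a window
    whose size grows with the local depth gradient, so only discontinuities
    get strongly filtered."""
    h, w = depth.shape
    gy, gx = np.gradient(depth.astype(float))
    grad = np.hypot(gx, gy)
    out = depth.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if grad[y, x] < grad_thresh:
                continue
            k = min(max_win, 3 + int(grad[y, x] // grad_thresh))
            out[y, x] = depth[max(0, y - k):y + k + 1,
                              max(0, x - k):x + k + 1].mean()
    return out

def render_view(color, depth, baseline=0.05, focal=500.0):
    """Naive depth-image-based rendering: shift each pixel horizontally by a
    disparity inversely proportional to its depth (toy warping: later writes
    overwrite earlier ones, and holes stay zero)."""
    h, w = depth.shape
    disparity = (baseline * focal / np.maximum(depth, 1e-3)).astype(int)
    view = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            xs = x - disparity[y, x]
            if 0 <= xs < w:
                view[y, xs] = color[y, x]
    return view

depth = np.full((64, 64), 20.0); depth[:, 32:] = 2.0    # step depth edge
color = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
novel = render_view(color, smooth_depth_edges(depth))
print(novel.shape)
```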

Performance Analysis of Adaptive Channel Estimation Scheme in V2V Environments (V2V 환경에서 적응적 채널 추정 기법에 대한 성능 분석)

  • Lee, Jihye;Moon, Sangmi;Kwon, Soonho;Chu, Myeonghun;Bae, Sara;Kim, Hanjong;Kim, Cheolsung;Kim, Daejin;Hwang, Intae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.8
    • /
    • pp.26-33
    • /
    • 2017
  • Vehicle communication can facilitate efficient coordination among vehicles on the road and enable future vehicular applications such as vehicle safety enhancement, infotainment, or even autonomous driving. In the 3rd Generation Partnership Project (3GPP), many studies focus on long term evolution (LTE)-based vehicle communication. Because vehicle speeds are high enough to cause severe channel distortion in vehicle-to-vehicle (V2V) environments, channel estimation methods are needed to achieve reliable vehicle communication systems. Conventional channel estimation schemes can be categorized as least-squares (LS), decision-directed channel estimation (DDCE), spectral temporal averaging (STA), and smoothing methods. In this study, we propose an adaptive channel estimation scheme for LTE-based V2V environments. The scheme, based on an LTE uplink system, uses a demodulation reference signal (DMRS) as the pilot symbol. Unlike conventional channel estimation schemes, the proposed adaptive smoothing channel estimation (ASCE) scheme applies quadratic smoothing (QS) to the pilot symbols, estimating the channel with greater accuracy and adaptively updating the estimates over the data symbols. Simulation results show that the proposed ASCE scheme improves overall performance in terms of the normalized mean square error (NMSE) and bit error rate (BER) relative to conventional schemes.
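The pilot-based part of such a scheme can be sketched in a few lines: a least-squares estimate on the DMRS subcarriers followed by quadratic (degree-2 polynomial) smoothing over the subcarrier index. The channel model, pilot values, and noise level below are toy assumptions; the full ASCE scheme also adapts the estimate over the data symbols, which is omitted here.

```python
import numpy as np

def ls_estimate(rx_pilot, tx_pilot):
    """Per-subcarrier least-squares channel estimate on the DMRS symbol."""
    return rx_pilot / tx_pilot

def quadratic_smoothing(h_ls):
    """Fit a degree-2 polynomial over the subcarrier index (separately for the
    real and imaginary parts) and return the smoothed estimate, a stand-in for
    the quadratic-smoothing step."""
    k = np.arange(len(h_ls))
    re = np.polyval(np.polyfit(k, h_ls.real, 2), k)
    im = np.polyval(np.polyfit(k, h_ls.imag, 2), k)
    return re + 1j * im

# Toy example: smoothly varying channel plus noise on 48 subcarriers.
rng = np.random.default_rng(1)
k = np.arange(48)
h_true = np.exp(1j * 2 * np.pi * 0.01 * k) * (1.0 - 0.003 * k)
tx = np.exp(1j * np.pi * rng.integers(0, 4, 48) / 2)           # QPSK-like pilots
rx = h_true * tx + 0.05 * (rng.normal(size=48) + 1j * rng.normal(size=48))
h_hat = quadratic_smoothing(ls_estimate(rx, tx))
print("NMSE:", np.mean(np.abs(h_hat - h_true) ** 2) / np.mean(np.abs(h_true) ** 2))
```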

Design of Adaptive Security Framework based on Carousel for Cognitive Radio Network (인지무선네트워크를 위한 회전자 기반 적응형 보안프레임워크 설계)

  • Kim, Hyunsung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.5
    • /
    • pp.165-172
    • /
    • 2013
  • Convergence is increasingly prevalent in the IT world; it generally refers to the combination of two or more different technologies in a single device. In particular, spectrum scarcity is becoming a big issue because of the exponential growth in spectrum demand from broadcasting and communication systems. Cognitive radio (CR) is a convergence technology envisaged to solve the problems in wireless networks resulting from the limited available spectrum and the inefficiency of spectrum usage by exploiting the existing wireless spectrum opportunistically. However, the very process of convergence is likely to expose significant security issues, both because previously separate services and technologies are merged and because new technologies are introduced. The main purpose of this research is to devise an adaptive security framework based on a carousel for CR networks, a distinct telecommunication convergence application that is still being developed and standardized with little attention to security. The framework uses a secure credential, called a carousel, initialized with location-related information derived from an object's position, on which security mechanisms supporting privacy and various other security properties are designed. The proposed adaptive security framework could be used as a security building block for CR network standards and various convergence applications.
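As a loose illustration of a location-seeded credential (the paper defines its own carousel construction, which the sketch below does not reproduce), one could derive a short-lived token from a quantized position and a time epoch using an HMAC; the secret, grid size, and epoch length here are assumptions.

```python
import hashlib, hmac, time

def carousel_credential(secret, lat, lon, epoch_s=300, grid=0.01):
    """Illustrative location-bound credential: HMAC over the quantized position
    and the current time epoch (purely an assumption about how a carousel-style
    credential could be seeded from location information)."""
    cell = (round(lat / grid), round(lon / grid))   # coarse location cell
    epoch = int(time.time() // epoch_s)             # credential rotates per epoch
    msg = f"{cell[0]}:{cell[1]}:{epoch}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

print(carousel_credential(b"shared-secret", 37.5665, 126.9780))
```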

Cluster-based Delay-adaptive Sensor Scheduling for Energy-saving in Wireless Sensor Networks (센서네트워크에서 클러스터기반의 에너지 효율형 센서 스케쥴링 연구)

  • Choi, Wook;Lee, Yong;Chung, Yoo-Jin
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.3
    • /
    • pp.47-59
    • /
    • 2009
  • Due to the application-specific nature of wireless sensor networks, the sensitivity to requirements such as data-reporting latency may vary depending on the type of application, thus requiring application-specific algorithm and protocol design paradigms that help maximize energy conservation and thus the network lifetime. In this paper, we propose a novel delay-adaptive sensor scheduling scheme for energy-saving data gathering based on two-phase clustering (TPC). The ultimate goal is to extend the network lifetime by providing sensors with high adaptability to application-dependent and time-varying delay requirements. TPC requests sensors to construct two types of links: direct and relay links. The direct links are used for control and for forwarding time-critical sensed data. The relay links, on the other hand, are used only for data forwarding based on the user delay constraints, thus allowing the sensors to opportunistically use the most energy-saving links and form a multi-hop path. Simulation results demonstrate that the cluster-based delay-adaptive data gathering strategy (CD-DGS) saves a significant amount of energy in dense sensor networks by adapting to the user delay constraints.
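The delay-adaptive choice between direct and relay links can be illustrated with a tiny decision sketch: use the energy-saving relay path when its latency fits the application's delay budget, and otherwise fall back to the direct link. The link latencies and energy costs below are made-up illustrative numbers, not values from the paper.

```python
def choose_link(delay_budget_ms, direct, relay):
    """Toy delay-adaptive link selection in the spirit of CD-DGS: prefer the
    energy-saving relay path when its latency fits the application's delay
    constraint, otherwise fall back to the direct link."""
    if relay["latency_ms"] <= delay_budget_ms:
        return "relay", relay["energy_mj"]
    return "direct", direct["energy_mj"]

direct = {"latency_ms": 20, "energy_mj": 5.0}     # one long, expensive hop
relay = {"latency_ms": 120, "energy_mj": 1.8}     # several short, cheap hops

for budget in (50, 200):
    link, energy = choose_link(budget, direct, relay)
    print(f"delay budget {budget} ms -> {link} link, {energy} mJ per report")
```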

Robust Extraction of Facial Features under Illumination Variations (조명 변화에 견고한 얼굴 특징 추출)

  • Jung Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.6 s.38
    • /
    • pp.1-8
    • /
    • 2005
  • Facial analysis is used in many applications such as face recognition systems, human-computer interfaces driven by head movements or facial expressions, model-based coding, and virtual reality. All of these applications require very precise extraction of facial feature points. In this paper, we present a method for automatic extraction of facial feature points such as mouth corners, eye corners, and eyebrow corners. First, the face region is detected by an AdaBoost-based object detection algorithm. Then a combination of three kinds of feature energy is computed for the facial features: valley energy, intensity energy, and edge energy. Feature areas are then detected by searching for horizontal rectangles with high feature energy. Finally, a corner detection algorithm is applied to the end regions of each feature area. Because we integrate three feature energies, and because the suggested estimation methods for valley energy and intensity energy adapt to illumination changes, the proposed feature extraction method is robust under various conditions.
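A toy version of the combined feature-energy search might look like the sketch below: three energy maps (valley, inverse intensity, edge) are blended, and a horizontal rectangle with the highest mean energy is located. The weights, energy definitions, and synthetic image are illustrative assumptions, not the paper's formulations.

```python
import numpy as np

def feature_energy(gray):
    """Combine three toy energy maps (valley, inverse intensity, edge) into one
    feature-energy image; the weights and exact definitions are illustrative."""
    g = gray.astype(float) / 255.0
    gy, gx = np.gradient(g)
    edge = np.hypot(gx, gy)
    valley = np.maximum(0.0, np.percentile(g, 50) - g)   # darker than the median
    intensity = 1.0 - g                                  # dark regions score high
    return 0.4 * valley + 0.3 * intensity + 0.3 * edge

def best_window(energy, win_h=8, win_w=24):
    """Slide a horizontal rectangle over the energy map and return the
    top-left corner of the window with the highest mean energy."""
    h, w = energy.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(h - win_h + 1):
        for x in range(w - win_w + 1):
            m = energy[y:y + win_h, x:x + win_w].mean()
            if m > best:
                best, best_pos = m, (y, x)
    return best_pos

# Synthetic face-like patch: bright "skin" with a dark horizontal "eye" band.
img = np.full((64, 64), 200, dtype=np.uint8)
img[30:36, 16:44] = 60
print(best_window(feature_energy(img)))
```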


Development of a High Performance Web Server Using A Real-Time Compression Architecture (실시간 압축 전송 아키텍쳐를 이용한 고성능 웹 서버 구현)

  • 민병조;강명석;우천희;남의석;김학배
    • Journal of the Korea Computer Industry Society
    • /
    • v.5 no.3
    • /
    • pp.345-354
    • /
    • 2004
  • These days, services such as e-commerce, e-government, multimedia services, and home networking applications are widely used. Most web traffic generated today uses the Hyper Text Transfer Protocol (HTTP). Unfortunately, HTTP is poorly suited to these applications, which make up a significant portion of web traffic. In this paper, we introduce a real-time content compression architecture that maximizes web service performance and reduces response time. The architecture is built into a Linux kernel-based web accelerating module. It guarantees not only the freshness of compressed contents but also minimal time delay by using a server-state adaptive algorithm, which decides whether the server should send a compressed message by considering the consumption of server resources when heavy request loads reach the web server. We also minimize the CPU overhead of the web server by implementing the compression exclusively in a kernel thread. The test results validate that this architecture saves the bandwidth of the web server and that the improvement in elapsed time is significant.
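A user-space sketch of a server-state adaptive compression policy is given below: a response is gzip-compressed only if it is text-like, large enough to benefit, and the server is not already overloaded. The thresholds and the load figure are assumptions; the paper's implementation runs in a Linux kernel thread rather than in user space.

```python
import gzip

def adaptive_compress(body, content_type, cpu_load,
                      load_threshold=0.8, min_size=1024):
    """Toy server-state adaptive compression policy: gzip text-like responses
    only when they are large enough to benefit and the server is not already
    overloaded (thresholds are illustrative assumptions)."""
    compressible = content_type.startswith(("text/", "application/json"))
    if not compressible or len(body) < min_size or cpu_load > load_threshold:
        return body, {}
    return gzip.compress(body), {"Content-Encoding": "gzip"}

html = b"<html>" + b"<p>hello adaptive web</p>" * 200 + b"</html>"
load = 0.35                 # e.g. a measured CPU utilisation sample
payload, headers = adaptive_compress(html, "text/html", load)
print(len(html), "->", len(payload), headers)
```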
