• Title/Summary/Keyword: performance-based optimization


lp-norm regularization for impact force identification from highly incomplete measurements

  • Yanan Wang;Baijie Qiao;Jinxin Liu;Junjiang Liu;Xuefeng Chen
    • Smart Structures and Systems
    • /
    • v.34 no.2
    • /
    • pp.97-116
    • /
    • 2024
  • The standard l1-norm regularization has recently been introduced for impact force identification, but it generally underestimates the peak force. Compared to l1-norm regularization, lp-norm (0 ≤ p < 1) regularization, with a nonconvex penalty function, has promising properties such as enforcing sparsity. In the framework of sparse regularization, if the desired solution is sparse in the time domain or another domain, the under-determined problem with fewer measurements than candidate excitations may admit a unique solution, i.e., the sparsest one. Considering the joint sparse structure of impact force in the temporal and spatial domains, we propose a general lp-norm (0 ≤ p < 1) regularization methodology for simultaneous identification of the impact location and force time-history from highly incomplete measurements. Firstly, a nonconvex optimization model based on the lp-norm penalty is developed for regularizing the highly under-determined problem of impact force identification. Secondly, an iteratively reweighted l1-norm algorithm is introduced to solve this under-determined and ill-conditioned regularization model by transforming it into a series of l1-norm regularization problems. Finally, numerical simulation and experimental validation, including single-source and two-source cases of impact force identification, are conducted on plate structures to evaluate the performance of lp-norm (0 ≤ p < 1) regularization.
Both numerical and experimental results demonstrate that the proposed lp-norm regularization method, using merely a single accelerometer, can locate the actual impacts among nine fixed candidate sources and simultaneously reconstruct the impact force time-history; compared to the state-of-the-art l1-norm regularization, lp-norm (0 ≤ p < 1) regularization produces sufficiently sparse and more accurate estimates; and although the peak relative error of the identified impact force decreases as p approaches 0, the results of lp-norm regularization with 0 ≤ p ≤ 1/2 show no significant differences.
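The iteratively reweighted l1-norm idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a generic transfer matrix `A`, solves each weighted l1 subproblem with plain ISTA, and the parameter values are hypothetical:

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding (proximal operator of the weighted l1 norm)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1(A, b, p=0.5, lam=0.1, outer=10, inner=200, eps=1e-3):
    """Approximate min ||Ax - b||^2 + lam * ||x||_p^p by solving a sequence
    of weighted l1-norm problems; each subproblem is solved with plain ISTA."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(outer):
        # Weights from the current iterate: w_i = p / (|x_i| + eps)^(1 - p)
        w = p / (np.abs(x) + eps) ** (1.0 - p)
        for _ in range(inner):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam * w / L)
    return x
```

With far fewer measurements than candidate excitations (an under-determined `A`), the reweighting progressively concentrates the penalty on near-zero entries, which is what promotes the sparsest solution.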

Distributed Throughput-Maximization Using the Up- and Downlink Duality in Wireless Networks (무선망에서의 상하향 링크 쌍대성 성질을 활용한 분산적 수율 최대화 기법)

  • Park, Jung-Min;Kim, Seong-Lyun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.11A
    • /
    • pp.878-891
    • /
    • 2011
  • We consider the throughput-maximization problem for both the up- and downlink in a wireless network with interference channels. For this purpose, we design an iterative and distributed uplink algorithm based on Lagrangian relaxation. Using the uplink power prices and network duality, we achieve throughput-maximization in the dual downlink, which has a symmetric channel and an equal power budget compared to the uplink. The network duality we prove here is a generalized version of previous research [10], [11]. Computational tests show that the up- and downlink throughput of our algorithms is close to the optimal value for channel orthogonality factors θ ∈ (0.5, 1]. On the other hand, when the channels are only slightly orthogonal (θ ∈ (0, 0.5]), we observe some throughput degradation in the downlink. We have extended our analysis to the real downlink, which has a nonsymmetric channel and an unequal power budget compared to the uplink. It is shown that the modified duality-based approach applies fully to the real downlink. Considering the complexity of the algorithms in [6] and [18], we conclude that these results are quite encouraging in terms of both performance and practical applicability of the generalized duality theorem.
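The up-/downlink network duality that such algorithms exploit, namely that the per-link SINRs achieved in the uplink are achievable in the transposed (downlink) channel with the same total power, can be checked numerically. This is a generic interference-channel sketch with illustrative gains, not the paper's distributed algorithm:

```python
import numpy as np

def dual_downlink_powers(G, p, sigma=1.0):
    """Given uplink powers p on an interference channel with gain matrix G
    (G[i, j] = gain from transmitter j to receiver i) and noise power sigma,
    return the achieved uplink SINRs and the downlink powers achieving the
    same SINRs in the transposed network."""
    n = len(p)
    F = G - np.diag(np.diag(G))                 # off-diagonal interference gains
    gamma = np.diag(G) * p / (F @ p + sigma)    # achieved uplink SINRs
    D = np.diag(gamma / np.diag(G))
    # Downlink fixed point: q = D (F^T q + sigma * 1)
    q = np.linalg.solve(np.eye(n) - D @ F.T, sigma * D @ np.ones(n))
    return gamma, q
```

Total power is preserved because 1ᵀq = σ·1ᵀ(D⁻¹ − Fᵀ)⁻¹1 and 1ᵀp = σ·1ᵀ(D⁻¹ − F)⁻¹1 are transposes of the same scalar.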

Optimization of ZnO-based transparent conducting oxides for thin-film solar cells based on the correlations of structural, electrical, and optical properties (ZnO 박막의 구조적, 전기적, 광학적 특성간의 상관관계를 고려한 박막태양전지용 투명전극 최적화 연구)

  • Oh, Joon-Ho;Kim, Kyoung-Kook;Song, Jun-Hyuk;Seong, Tae-Yeon
    • The Korean Society for New and Renewable Energy: Conference Proceedings
    • /
    • 2010.11a
    • /
    • pp.42.2-42.2
    • /
    • 2010
  • Transparent conducting oxides (TCOs) are of significant importance for applications in various devices, such as light-emitting diodes, thin-film solar cells, organic light-emitting diodes, and liquid crystal displays. For TCOs to contribute to the performance improvement of these devices, they should simultaneously have high transmittance and good electrical properties. Sn-doped In₂O₃ (ITO) is the most commonly used TCO. However, indium is toxic and scarce in nature. Thus, ZnO has attracted much attention as a possible replacement for ITO. In particular, group III impurity-doped ZnO shows optoelectronic properties comparable to those of ITO electrodes. Al-doped ZnO exhibits the best performance among the various doped ZnO films because of its high substitutional doping efficiency. However, for Al-doped ZnO to replace ITO in electronic devices, its electrical and optical properties must be improved significantly further. To this end, approaches such as varying the deposition conditions, using different deposition techniques, and post-deposition annealing have been investigated. Among the deposition methods, RF magnetron sputtering has been used extensively because of the ease of controlling deposition parameters and its fast deposition rate. In addition, when combined with post-deposition annealing in a reducing ambient, the optoelectronic properties of Al-doped ZnO films are further improved. In this presentation, we deposited Al-doped ZnO (ZnO:Al₂O₃ = 98:2 wt%) thin films on glass and sapphire substrates using RF magnetron sputtering as a function of substrate temperature. In addition, the ZnO samples were annealed under different conditions: rapid thermal annealing (RTA) at 900 °C in N₂ ambient for 1 min, tube-furnace annealing at 500 °C in N₂:H₂ = 9:1 gas flow for 1 hour, or RTA combined with tube-furnace annealing.
It is found that the mobilities and carrier concentrations of the samples depend on the growth temperature followed by one of the three post-deposition annealing conditions.


Efficient High-Speed Intra Mode Prediction based on Statistical Probability (통계적 확률 기반의 효율적인 고속 화면 내 모드 예측 방법)

  • Lim, Woong;Nam, Jung-Hak;Jung, Kwang-Soo;Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.3
    • /
    • pp.44-53
    • /
    • 2010
  • H.264/AVC uses nine directional intra prediction modes to remove spatial redundancy. It also exploits the high correlation between neighboring block modes when signaling mode information: fewer bits are assigned to more probable modes, and the mode is compressed by predicting it as the minimum of the prediction modes of the two neighboring blocks. In this paper, we calculated the statistical probability of the current block's prediction mode, given the modes of the two neighboring blocks, over several test video sequences. We then built a probability table that lists the five most probable candidate modes for every combination of the upper- and left-block modes. Using this table, one of the five most probable candidate modes is selected by RD-optimization, which reduces computational complexity while still determining the most probable mode for each case to preserve compression performance. The compression performance difference of the proposed algorithm is around 1.1%~1.50% compared with JM14.2, while decoding speed improves by 18.46%~36.03%.
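The probability-table approach can be sketched as follows; the mode numbers, training samples, and `rd_cost` helper here are hypothetical stand-ins, not the paper's data:

```python
from collections import Counter

def build_candidate_table(samples, k=5):
    """samples: iterable of (upper_mode, left_mode, chosen_mode) triples
    gathered offline from training sequences. Returns a table mapping each
    (upper, left) context to its k most frequently chosen modes."""
    counts = {}
    for upper, left, mode in samples:
        counts.setdefault((upper, left), Counter())[mode] += 1
    return {ctx: [m for m, _ in c.most_common(k)] for ctx, c in counts.items()}

def predict_mode(table, upper, left, rd_cost, all_modes=range(9)):
    """Evaluate the RD cost only over the candidates listed for this context,
    falling back to a full search when the context was never observed."""
    candidates = table.get((upper, left), list(all_modes))
    return min(candidates, key=rd_cost)
```

Restricting the RD evaluation to k candidates instead of all nine modes is where the complexity reduction comes from.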

A Multipurpose Design Framework for Hardware-Software Cosimulation of System-on-Chip (시스템-온-칩의 하드웨어-소프트웨어 통합 시뮬레이션을 위한 다목적 설계 프레임워크)

  • Joo, Young-Pyo;Yun, Duk-Young;Kim, Sung-Chan;Ha, Soon-Hoi
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.35 no.9_10
    • /
    • pp.485-496
    • /
    • 2008
  • As the complexity of SoC (System-on-Chip) design increases dramatically, traditional system performance analysis and verification methods based on RTL (Register Transfer Level) are no longer adequate under increasing time-to-market pressure. A new design methodology is therefore urgently required for system verification in early design stages, and hardware-software (HW-SW) cosimulation at the TLM (Transaction Level Modeling) level has been researched widely to solve this problem. However, most HW-SW cosimulators support only a few restricted abstraction levels, which makes it difficult to integrate HW-SW cosimulators with different abstraction levels. To overcome this difficulty, this paper proposes a multipurpose framework for HW-SW cosimulation that provides a systematic SoC design flow starting from software application design. It flexibly supports various design techniques at each design step, as well as various HW-SW cosimulators. Since a platform can be designed independently of abstraction levels and description languages, the framework allows us to generate simulation models at various abstraction levels. We verified the proposed framework by modeling a commercial SoC platform based on an ARM9 processor. It was also shown that the framework can be used for performance optimization, improving an MJPEG example by up to 44%.

Calibration and Validation of the Hargreaves Equation for the Reference Evapotranspiration Estimation in Gyeonggi Bay Watershed (경기만 유역의 기준 증발산량 산정을 위한 Hargreaves 공식의 보정 및 검정)

  • Lee, Khil-Ha;Cho, Hong-Yeon;Oh, Nam-Sun
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.4
    • /
    • pp.413-422
    • /
    • 2008
  • It is essential to locally adjust the Hargreaves parameter when estimating reference evapotranspiration from short data records as a substitute for the Penman-Monteith equation. In this study, daily reference evapotranspiration is computed with the Hargreaves equation in the Gyeonggi bay area, including the Ganghwa, Incheon, Suwon, Seosan, and Cheonan stations, for the period 1997-2004. The Hargreaves coefficient is adjusted to give the best fit with Penman-Monteith evapotranspiration, regarded as the reference. The calibrated parameters are then validated at the same stations for the period 2005-2006. The optimization-based correction in the 1997-2004 calibration improves the performance of the Hargreaves equation, raising the Nash-Sutcliffe coefficient of efficiency (NSC) from 0.68-0.77 to 0.92-0.98 and lowering the RMSE from 14.63-23.30 to 5.23-11.75. The 2005-2006 validation likewise shows improved performance, with the NSC rising from 0.43-0.85 to 0.93-0.97 and the RMSE falling from 14.43-26.81 to 6.48-9.09.
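Since the Hargreaves model is linear in its coefficient, calibrating it against a Penman-Monteith reference series reduces to a one-parameter least-squares fit. A minimal sketch (not the paper's procedure), assuming daily series of extraterrestrial radiation `Ra` expressed in equivalent evaporation (mm/day) and daily temperature extremes:

```python
import numpy as np

def hargreaves(Ra, tmax, tmin, c=0.0023):
    """Hargreaves reference evapotranspiration (mm/day); 0.0023 is the
    commonly cited default coefficient before local calibration."""
    tmean = (tmax + tmin) / 2.0
    return c * Ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

def calibrate_c(Ra, tmax, tmin, et_pm):
    """Least-squares fit of the Hargreaves coefficient against a
    Penman-Monteith reference series (the model is linear in c)."""
    basis = hargreaves(Ra, tmax, tmin, c=1.0)
    return float(basis @ et_pm / (basis @ basis))

def nash_sutcliffe(obs, sim):
    # Nash-Sutcliffe coefficient of efficiency: 1 is a perfect match
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)
```

Calibration on one period and validation on another, as in the abstract, amounts to fitting `c` on the first period and recomputing NSC/RMSE on the second.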

Transcoding from Distributed Video Coding to H.264/AVC Based on Motion Vectors of Side Information (보조정보의 움직임 벡터를 이용한 분산 비디오 코딩에서 H.264/AVC로의 트랜스코딩)

  • Min, Kyung-Yeon;Yoo, Sung-Eun;Sim, Dong-Gyu;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.1
    • /
    • pp.108-122
    • /
    • 2011
  • In this paper, a transcoding method with low computational complexity and high coding efficiency is proposed for transcoding distributed video coding (DVC) bitstreams into H.264/AVC bitstreams. In the proposed transcoder, not only Wyner-Ziv frames but also key frames are transcoded using motion vectors estimated during side-information generation. Because a motion vector is estimated from a key frame to the previous key frame when generating side information, that motion vector can be reused to encode the intra key frame as a predicted frame. Motion estimation is performed with two predicted motion vectors: one from side-information generation, and the other the median of the motion vectors of neighboring blocks. The proposed method selects the better of the two based on rate-distortion optimization. Coding efficiency is improved with a small search range, because the motion vector estimated during side-information generation serves as an initial motion vector for transcoding. In the experimental results, the complexity of the transcoder is reduced by about 12%, and bitrate performance improves by about 28.7%.
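The predictor-selection step, choosing between the side-information motion vector and the median of neighboring motion vectors by rate-distortion cost, can be sketched as follows; the `rd_cost` function is a hypothetical placeholder for the encoder's J = D + λR evaluation:

```python
def median_mv(left, top, topright):
    """Component-wise median of the motion vectors of three neighboring
    blocks, as in H.264/AVC motion vector prediction."""
    return tuple(sorted(comp)[1] for comp in zip(left, top, topright))

def select_predictor(si_mv, neighbor_mvs, rd_cost):
    """Choose the better initial predictor between the side-information MV
    and the median of the neighboring MVs by rate-distortion cost."""
    candidates = (si_mv, median_mv(*neighbor_mvs))
    return min(candidates, key=rd_cost)
```

A good initial predictor is what allows the small search range mentioned above: motion estimation only refines around the selected vector.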

Evaluation of High Absorption Photoconductor for Application to Auto Exposure Control Sensor by Screen Printing Method (자동노출제어장치 센서적용을 위한 스크린 프린팅 제작방식의 고흡수율 광도전체 특성평가)

  • Kim, Dae-Kuk;Kim, Kyo-Tae;Park, Jeong-Eun;Hong, Ju-Yeon;Kim, Jin-Seon;Oh, Kyung-Min;Nam, Sang-Hee
    • Journal of the Korean Society of Radiology
    • /
    • v.9 no.2
    • /
    • pp.67-72
    • /
    • 2015
  • In diagnostic radiology, the use of an automatic exposure control (AEC) device is internationally recommended for diagnosis and dose optimization. However, the conventional commercially available brightness-based sensors involve a complicated manufacturing process and suffer an overall performance decrease when exposed to radiation for a long time. Therefore, this study evaluates the applicability, as an AEC sensor, of photoconductor-based sensors, which have high X-ray absorption and the advantage of easy fabrication. The experimental results confirm the feasibility of the fabricated sensor through an increased SNR, superior detection efficiency, and accurate turn-off. In addition, transmittance and latent-image experiments confirmed that no ghost effect appears with the photoconductors, and good transmittance of 80%-90% was confirmed for all photoconductors except PbO. Therefore, with excellent mechanical stability, performance tunable through the doping concentration, and easier fabrication than existing commercialized products, the photoconductor-based sensor is expected to be applicable as an AEC sensor.

A Study on the Identification and Classification of Relation Between Biotechnology Terms Using Semantic Parse Tree Kernel (시맨틱 구문 트리 커널을 이용한 생명공학 분야 전문용어간 관계 식별 및 분류 연구)

  • Choi, Sung-Pil;Jeong, Chang-Hoo;Chun, Hong-Woo;Cho, Hyun-Yang
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.45 no.2
    • /
    • pp.251-275
    • /
    • 2011
  • In this paper, we propose a novel kernel, a semantic parse tree kernel, that extends the parse tree kernel previously studied for extracting protein-protein interactions (PPIs), where it has shown prominent results. A drawback of the existing parse tree kernel is that it can degrade overall PPI-extraction performance, because its simple comparison mechanism, which handles only the superficial aspects of the constituent words, may produce kernel values for two sentences that are lower than their actual analogy. The new kernel computes the lexical semantic similarity as well as the syntactic analogy between the parse trees of two target sentences. To calculate lexical semantic similarity, it incorporates context-based word sense disambiguation, which outputs synsets in WordNet that can in turn be transformed into more general ones. In the experiments, we introduced two new parameters in addition to the conventional SVM regularization factor: the tree kernel decay factor and the degree of abstraction of lexical concepts, both of which can accelerate the optimization of PPI extraction performance. Through these multi-strategic experiments, we confirmed the pivotal role of the newly applied parameters. Additionally, the experimental results showed that the semantic parse tree kernel is superior to conventional kernels, especially in PPI classification tasks.
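The structural half of such a kernel, convolution over matching subtrees with a decay factor, can be sketched with a plain Collins-Duffy-style kernel on nested tuples; the semantic (WordNet-based) similarity component of the proposed kernel is omitted in this illustration:

```python
def tree_kernel(t1, t2, decay=0.5):
    """Subtree-convolution kernel with a decay factor.
    Trees are nested tuples (label, child1, child2, ...); leaves are strings."""
    def nodes(t):
        # Collect every node (subtree or leaf) of the tree
        out = [t]
        if isinstance(t, tuple):
            for child in t[1:]:
                out += nodes(child)
        return out

    def C(a, b):
        # Number of common subtrees rooted at a and b, weighted by decay
        if not (isinstance(a, tuple) and isinstance(b, tuple)):
            return decay if a == b else 0.0     # matching leaves
        if a[0] != b[0] or len(a) != len(b):
            return 0.0                          # productions differ
        prod = decay
        for ca, cb in zip(a[1:], b[1:]):
            prod *= 1.0 + C(ca, cb)
        return prod

    return sum(C(a, b) for a in nodes(t1) for b in nodes(t2))
```

The decay factor down-weights large matching substructures, which is the parameter the abstract tunes alongside the lexical-abstraction degree.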

Bio-Sensing Convergence Big Data Computing Architecture (바이오센싱 융합 빅데이터 컴퓨팅 아키텍처)

  • Ko, Myung-Sook;Lee, Tae-Gyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.2
    • /
    • pp.43-50
    • /
    • 2018
  • Biometric information computing strongly influences both computing systems and big-data systems built on bio-information systems that combine bio-signal sensors with bio-information processing. Unlike conventional data formats such as text, images, and videos, biometric information is represented by text-based values that give meaning to a bio-signal; important event moments are stored in an image format; and complex formats such as video are constructed for data prediction and analysis through time-series analysis. Such a complex data structure may be requested separately as text, image, or video, depending on the characteristics of the data required by an individual biometric information application service, or several formats may be requested simultaneously depending on the situation. Since previous bio-information processing systems depend on conventional computing components, structures, and data-processing methods, they have many inefficiencies in terms of data-processing performance, transmission capability, storage efficiency, and system safety. In this study, we propose an improved bio-sensing converged big-data computing architecture as a platform that supports biometric information processing effectively. The proposed architecture supports data storage and transmission efficiency, computing performance, and system stability, and it can lay the foundation for system implementation and biometric information services optimized for future biometric information computing.