• Title/Summary/Keyword: regularized

Search Result 231

A Study on Areal & Dimensional Characteristics of 21C Apartment Typical Unit Plans in Seoul and its Metropolitan Vicinity (아파트 전형적 평면의 실 크기와 치수 특성에 관한 연구 - 21세기 강남권, 강북권, 수도권 아파트를 중심으로 -)

  • Yoon, Chae-Shin;Jun, Nam-Il;Kim, Do-Yeon;Kim, Min-Kyoung;Kim, Jun-Lae
    • Journal of the Korean housing association, v.19 no.6, pp.21-32, 2008
  • The purpose of this research is, first, to derive the regular sizes of average dwellings in Korea and to examine minimum living standards in light of those regular dwellings, in order to meet the future housing requirements of low-income households. Two plan types, with unit floor areas of 60 $m^2$ and 85 $m^2$, have become so prevalent and ubiquitous that they reflect the basic requirements of ordinary living standards. The dimensional characteristics of each space in these two plan types are therefore thoroughly investigated in this research. The background of the regular plans and the process of their popularization are first reviewed, and 120 apartment units constructed between 2000 and 2007 are selected from the three regional groups and surveyed in detail. The area, depth, width, and proportion of each space in the unit plans are compared and analyzed from various aspects. As a result, proper space sizes and standards for low-income households are reviewed and compared. The regional difference in space dimensions is not as significant as expected, but the area and size characteristics of each space are highly regularized and obvious, and it is argued that these dimensional characteristics convey the social and cultural values of Korean housing. The average dimensions of each space in the surveyed apartment units turn out to be much closer to the guidable living standards than to the minimum living standards. Thus, it is very probable that the present guidable living standards will soon be upgraded to become the future minimum living standards.

Face Recognition via Sparse Representation using the ROMP Method (ROMP를 이용한 희소 표현 방식 얼굴 인식 방법론)

  • Ahn, Jung-Ho;Choi, KwonTaeg
    • Journal of Digital Contents Society, v.18 no.2, pp.347-356, 2017
  • It is well known that face recognition via sparse representation is very robust and shows good performance. Its weakness, however, is its high time complexity, because it must solve an $L_1$-minimization problem to find the sparse solution. In this paper, we propose to use the ROMP (Regularized Orthogonal Matching Pursuit) method for the sparse solution, which solves an $L_2$-minimization problem with a regularization condition using a greedy strategy. In experiments, we show that the proposed method is comparable to the best existing $L_1$-minimization solver, Homotopy, but is 60 times faster. We also propose the C-SCI method for classification. The C-SCI method is very efficient since it considers only the sparse solution, without reconstructing the test data. It is shown that the C-SCI method is comparable to, but 5 times faster than, the best existing classification method.
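
As a rough illustration (our sketch, not the authors' implementation), ROMP's greedy, least-squares-based selection can be written in a few lines of NumPy; the dictionary `A`, measurement `y`, and sparsity level are hypothetical inputs:

```python
import numpy as np

def romp(A, y, sparsity, max_iter=None):
    """Sketch of Regularized Orthogonal Matching Pursuit.

    Instead of L1-minimization, ROMP greedily grows a support set:
    it takes the largest correlations with the residual, keeps the
    maximal-energy subset whose magnitudes lie within a factor of 2
    of each other (the regularization condition), and refits the
    coefficients by ordinary least squares (an L2 problem).
    """
    if max_iter is None:
        max_iter = 2 * sparsity
    support = []
    residual = y.astype(float).copy()
    for _ in range(max_iter):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                       # ignore chosen atoms
        cand = np.argsort(corr)[::-1][:sparsity]  # largest correlations
        # Regularization step: among the candidates, pick the subset of
        # comparable magnitudes (within a factor of 2) of maximal energy.
        best, best_energy = [], -1.0
        for i in cand:
            group = [j for j in cand if corr[i] <= corr[j] <= 2.0 * corr[i]]
            energy = float(sum(corr[j] ** 2 for j in group))
            if energy > best_energy:
                best, best_energy = group, energy
        support = sorted(set(support) | set(best))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) < 1e-10:
            break
    return x
```

The least-squares refit over a small support is what replaces the costly global $L_1$ solve and accounts for the reported speed-up.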

DCT-based Regularized High-Resolution Image Reconstruction Algorithm (DCT 기반의 정규화 된 고해상도 영상 복원 알고리즘)

  • 박진열;이승현;강문기
    • The Journal of Korean Institute of Communications and Information Sciences, v.24 no.8B, pp.1558-1566, 1999
  • While high-resolution images are required for various applications, often only aliased low-resolution images are available due to the physical limitations of sensors. In this paper, we propose an algorithm to reconstruct a high-resolution image from multiple aliased low-resolution images, based on the generalized multichannel deconvolution technique. Conventional approaches are based on the discrete Fourier transform (DFT), since the aliasing effect is easily analyzed in the frequency domain. However, a useful solution may not be available in many cases, i.e., underdetermined cases or cases with insufficient subpixel information. To compensate for this ill-posedness, generalized multichannel regularization is adopted in the spatial domain. Furthermore, using the discrete cosine transform (DCT) instead of the DFT leads to a computationally efficient reconstruction algorithm. The validity of the proposed algorithm is demonstrated both theoretically and experimentally, and it is also shown that regularization reduces the effect of inaccurate motion information.
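
The role of regularization in such ill-posed restoration problems can be shown with a deliberately simplified, single-channel Tikhonov sketch (not the paper's DCT-domain multichannel algorithm); `H` is a hypothetical blur matrix and `lam` the regularization weight:

```python
import numpy as np

def tikhonov_restore(H, y, lam):
    """Solve min_x ||H x - y||^2 + lam ||D x||^2, where D is a
    first-order difference operator enforcing smoothness.

    The closed form follows from the normal equations:
        (H^T H + lam D^T D) x = H^T y
    With lam > 0 the system stays well conditioned even when H alone
    is (nearly) singular -- which is the point of regularization."""
    n = H.shape[1]
    D = np.eye(n) - np.eye(n, k=1)   # upper-bidiagonal difference matrix
    return np.linalg.solve(H.T @ H + lam * (D.T @ D), H.T @ y)
```

In the underdetermined multichannel setting of the paper, the same idea is applied across channels, with the transform choice (DCT vs. DFT) affecting only the computational cost.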


Design and Implementation of Dynamic Web Server Page Builder on Web (웹 기반의 동적 웹 서버 페이지 생성기 설계 및 구현)

  • Shin, Yong-Min;Kim, Byung-Ki
    • The KIPS Transactions:PartD, v.15D no.1, pp.147-154, 2008
  • Following the growth of internet use, various web applications have been developed to publish information managed in internal databases on the web through web server pages. In most cases, however, programs have been written directly, either without any systematic development methodology or by applying a heavyweight methodology that is inappropriate for the task and reduces development efficiency. A web application that does not follow a systematic development methodology and relies on a script language can lower the productivity of program development, maintenance, and reuse. In this thesis, an automatic authoring tool for dynamic web server pages is designed and implemented for fast and effective script-based web application development against a database. It suggests a regularized script model and generates standardized scripts through a data-bound control tag creator, by analyzing the patterns of dynamic web server pages backed by the database, thereby contributing to productivity in web application development and maintenance.

Study on the termination rule in the iterative image restoration algorithm (반복 복원 알고리듬에서의 종료 규칙에 관한 연구)

  • 문태진;김인겸;박규태
    • The Journal of Korean Institute of Communications and Information Sciences, v.22 no.8, pp.1803-1813, 1997
  • The goal of image restoration is to remove degradations so that the restored image best approximates the original image. This can be done by iterative regularized image restoration methods. In any iterative image restoration algorithm, a better termination rule yields both better quality in the restored image and less computation, and hence a faster and simpler practical system. Therefore, finding a better termination rule for iterative image restoration algorithms has been an interesting and important question for many researchers in this field. For these reasons, a new termination rule using the estimated distance between the original image and the restored image is proposed in this paper. A noise suppression parameter (NSP) and a rule for estimating the NSP from the noise variance are also proposed. The experimental results show that the proposed termination rule is superior to conventional methods.
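
A minimal sketch of where a termination rule sits in such an algorithm (here a generic relative-change criterion in a Landweber-type iteration; the paper's NSP-based distance estimate is more elaborate):

```python
import numpy as np

def iterative_restore(H, y, beta=1.0, tol=1e-6, max_iter=500):
    """Landweber iteration x_{k+1} = x_k + beta * H^T (y - H x_k),
    terminated when the relative change between successive estimates
    drops below tol.  (Convergence needs beta < 2 / sigma_max(H)^2.)
    Returns the estimate and the number of iterations actually used."""
    x = np.zeros(H.shape[1])
    for k in range(max_iter):
        x_new = x + beta * (H.T @ (y - H @ x))
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1e-12):
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

A sharper rule, as the paper argues, stops as soon as the estimated distance to the original image stops decreasing, saving the iterations a fixed count or naive residual check would waste.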


Quantitative Analysis of Bayesian SPECT Reconstruction : Effects of Using Higher-Order Gibbs Priors

  • S. J. Lee
    • Journal of Biomedical Engineering Research, v.19 no.2, pp.133-142, 1998
  • In Bayesian SPECT reconstruction, the incorporation of elaborate forms of priors can lead to improved quantitative performance in various statistical terms, such as bias and variance. In particular, the use of higher-order smoothing priors, such as the thin-plate prior, is known to exhibit improved bias behavior compared to conventional smoothing priors such as the membrane prior. However, the bias advantage of the higher-order priors is effective only when the hyperparameters involved in the reconstruction algorithm are properly chosen. In this work, we further investigate the quantitative performance of the two representative smoothing priors, the thin plate and the membrane, by observing the behavior of the associated hyperparameters of the prior distributions. In our experiments we use Monte Carlo noise trials to calculate the bias and variance of reconstruction estimates, and compare the performance of ML-EM estimates to that of regularized EM using both membrane and thin-plate priors, and also to that of filtered backprojection, where the membrane and thin-plate models become simple apodizing filters of specified form. We finally show that the use of higher-order models yields excellent "robustness" in quantitative performance by demonstrating that the thin plate leads to very low bias error over a large range of hyperparameters, while keeping a reasonable variance.
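
For reference, regularized EM with a membrane-type prior is commonly implemented via Green's one-step-late (OSL) update; this 1-D toy with a circular neighbourhood and a hypothetical system matrix `A` only illustrates the update form, not the paper's SPECT geometry or its thin-plate variant:

```python
import numpy as np

def osl_em(A, y, beta, n_iter=50):
    """One-step-late regularized EM for emission data.

    ML-EM update:  x <- x * A^T(y / (A x)) / (A^T 1)
    OSL:           the sensitivity A^T 1 is augmented by beta * dU/dx,
                   where U is the quadratic membrane roughness penalty,
                   evaluated at the current estimate.
    beta = 0 recovers plain ML-EM."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        # derivative of the membrane prior sum_i (x_i - x_{i-1})^2
        grad_u = 2.0 * (2.0 * x - np.roll(x, 1) - np.roll(x, -1))
        ratio = A.T @ (y / np.maximum(A @ x, 1e-12))
        x = x * ratio / np.maximum(sens + beta * grad_u, 1e-12)
    return x
```

A thin-plate prior would replace `grad_u` with the derivative of a second-difference penalty, which is what gives the higher-order smoothing discussed above.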


A Preliminary Study on the Ice-induced Fatigue in Ice-going Ships (빙 해역 운항선박의 빙 유기 피로문제에 대한 기초연구)

  • Hwang, Mi-Ran;Kwon, Yong-Hyun;Lee, Tak-Kee
    • Journal of Ocean Engineering and Technology, v.30 no.4, pp.303-309, 2016
  • As commercialization of the Arctic sea route and resource development become regularized, demand for ice-breaking tankers, LNG carriers, and offshore plants is expected to increase. In addition, the existing ice-breaking cargo ships navigating ice-covered waters are wearing out, so the construction of new ships is likely to be undertaken for both current and long-term needs. The design of ships navigating in ice-covered waters demands conservative methods and strict standards owing to the extreme cold and the likelihood of collisions with ice floes and/or icebergs. ISO 19906 recently stated that a fatigue limit should be considered when designing Arctic offshore structures, so ice-induced fatigue has become one of the important design drivers. Thus, establishing systematic measures to mitigate ice-induced fatigue problems in ice-breaking ships is important from the viewpoint of maintaining a competitive advantage. In this paper, issues relating to ice-induced fatigue are examined, based on data and published literature, to describe its criticality, and the potential for fatigue damage is investigated using data measured in the Arctic Ocean (2013) aboard the Korean icebreaker ARAON.

A Plan to Operate a Beach through Safety Management Prevention Using ICT Technology (ICT기술을 활용한 안전관리 방역을 통한 해수욕장 운영 방안)

  • An, Tai-Gi
    • Journal of Convergence for Information Technology, v.11 no.12, pp.22-29, 2021
  • COVID-19, which has spread around the world, is also affecting local economic sectors such as the domestic tourism and service industries. In particular, the quality of life is threatened as safety prevention rules related to infectious diseases, such as social distancing, have become regularized. The purpose of this study is to analyze the impact of safety quarantine on users of the summer festival at Songho Beach in Haenam, a summer resort. In addition, the changed behavior of users in the COVID-19 era is examined through big-data surveys, demographic analysis, and technology analysis of user management. The study is expected to serve as a reference by providing practical data on future users. It is also significant in that it reviews safety and satisfaction for tourists attending the summer beach festival under quarantine management using ICT technology in the COVID-19 situation, and it should serve as a useful guideline and example for future studies.

The Effects of the Educational Resources on Recruitment Rates of the Universities in South-Eastern Korea (한국의 동남권 대학의 학내 교육자원이 대학의 취업성과에 미치는 영향)

  • Kim, Young-Bu
    • Journal of the Korea Academia-Industrial cooperation Society, v.19 no.12, pp.471-479, 2018
  • This research examines the sustainable mutual growth of academia and industry with regard to human resource cultivation and recruitment in local communities. At the onset of regularized survival competition and university innovation driven by the University Basic Competence Evaluation and similar measures, this research considers the substantive effect of universities' educational resources on recruitment rates, in pursuit of enhanced university-industry cooperation. To identify the factors behind recruitment rates, we employ a university-wise index based on a quantitative index of universities' educational resources. As for methods, hypothesis formulation and verification, empirical analysis, descriptive statistics, and correlation analysis are used to identify the correlations between the dependent and independent variables, based on three sub-indexes of the open records at Higher Education: educational environment, educational finance, and research achievement. Implications are derived from multiple regression analyses of educational conditions and recruitment rates, educational finances and recruitment rates, and research achievements and recruitment rates. This research can be extended to predict regional university recruitment rates through empirical analysis that considers regional characteristics.

A Pre-processing Study to Solve the Problem of Rare Class Classification of Network Traffic Data (네트워크 트래픽 데이터의 희소 클래스 분류 문제 해결을 위한 전처리 연구)

  • Ryu, Kyung Joon;Shin, DongIl;Shin, DongKyoo;Park, JeongChan;Kim, JinGoog
    • KIPS Transactions on Software and Data Engineering, v.9 no.12, pp.411-418, 2020
  • In the field of information security, IDSs (Intrusion Detection Systems) are normally classified into two categories: signature-based IDS and anomaly-based IDS. Many studies on anomaly-based IDS have analyzed the network traffic data generated in cyberspace with machine learning algorithms. In this paper, we study pre-processing methods to overcome the performance degradation caused by rare classes. We evaluated the classification performance of a machine learning algorithm by reconstructing the data set around rare and semi-rare classes. After reconstructing the data into three different sets, wrapper and filter feature selection methods were applied in sequence. Each data set was regularized with a quantile scaler, and a deep neural network model was used for learning and validation. The evaluation results were compared using true positive and false negative values. We achieved improved classification performance on all three data sets.
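
A minimal, library-free sketch of the quantile scaling step (our illustration; the abstract does not specify the implementation): each feature is passed through the empirical CDF of the training split, mapping it into [0, 1]:

```python
import numpy as np

def quantile_scale(train, test):
    """Rank-based scaling: each feature value is replaced by the
    fraction of training samples less than or equal to it.  This is
    robust to the heavy-tailed byte/packet counts typical of network
    traffic features, unlike plain min-max or z-score scaling."""
    out = np.empty(test.shape, dtype=float)
    for j in range(train.shape[1]):
        ref = np.sort(train[:, j])
        # fraction of training values <= each test value
        out[:, j] = np.searchsorted(ref, test[:, j], side="right") / len(ref)
    return out
```

Fitting the quantiles on the training split only, as here, avoids leaking test-set statistics into the scaler.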