• Title/Summary/Keyword: Quality Cost Model


Overlay Multicast Network for IPTV Service using Bandwidth Adaptive Distributed Streaming Scheme (대역폭 적응형 분산 스트리밍 기법을 이용한 IPTV 서비스용 오버레이 멀티캐스트 네트워크)

  • Park, Eun-Yong;Liu, Jing;Han, Sun-Young;Kim, Chin-Chol;Kang, Sang-Ug
    • Journal of KIISE: Computing Practices and Letters / v.16 no.12 / pp.1141-1153 / 2010
  • This paper introduces ONLIS (Overlay Multicast Network for Live IPTV Service), a novel overlay multicast network optimized to deliver live broadcast IPTV streams. We analyzed the IPTV reference model of the ITU-T IPTV standardization group in terms of the network and of stream delivery from the source networks to the customer networks. Based on this analysis, we divide the IPTV reference model into three networks: source, core, and access networks. ION (Infrastructure-based Overlay Multicast Network) is employed for the source and core networks, and PON (P2P-based Overlay Multicast Network) is applied to the access networks. ION provides efficient, reliable, and stable stream distribution with negligible delay, while PON provides bandwidth-efficient and cost-effective streaming with a small, tolerable delay. The most important challenge in live P2P streaming is to reduce end-to-end delay without sacrificing stream quality; in conventional live P2P streaming systems there is always a trade-off between the two. To solve this problem, we propose two approaches. First, we propose DSPT (Distributed Streaming P2P Tree), which takes advantage of combinational overlay multicasting. In DSPT, a peer does not fully rely on an SP (Supplying Peer) to get the live stream but cooperates with its local ANR (Access Network Relay) to reduce delay and improve stream quality. When an RP (Receiving Peer) detects a bandwidth drop at its SP, it immediately switches the connection from the SP to the ANR and continues to receive the stream without any packet loss. Second, DSPT uses a distributed P2P streaming technique that lets each peer share the stream to the extent of its available bandwidth: if an RP cannot receive the whole stream from its SP due to a lack of SP uploading bandwidth, it receives only a partial stream from the SP and the rest from the ANR. The proposed distributed P2P streaming improves P2P networking efficiency.
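The bandwidth-adaptive split described above, where an RP takes as much of the stream as the SP's uplink allows and fetches the remainder from its ANR, can be sketched as follows (the function and variable names are illustrative, not taken from the paper):

```python
def split_stream(stream_rate_kbps, sp_uplink_kbps):
    """Split a live stream between a Supplying Peer (SP) and an Access
    Network Relay (ANR), in the spirit of DSPT: the Receiving Peer takes
    whatever the SP's available uplink can carry and receives the
    remainder from the ANR, so a bandwidth drop at the SP causes a
    rebalancing rather than packet loss."""
    from_sp = min(stream_rate_kbps, max(sp_uplink_kbps, 0))
    from_anr = stream_rate_kbps - from_sp
    return from_sp, from_anr

# A 2,000 kbps stream with an SP that can only upload 1,200 kbps:
print(split_stream(2000, 1200))  # the ANR supplies the shortfall
```

If the SP's uplink later drops to zero, the same call simply shifts the full rate to the ANR, which mirrors the connection-switch behavior the abstract describes.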

On the Improvement of Precision in Gravity Surveying and Correction, and a Dense Bouguer Anomaly in and Around the Korean Peninsula (한반도 일원의 중력측정 및 보정의 정밀화와 고밀도 부우게이상)

  • Shin, Young-Hong;Yang, Chul-Soo;Ok, Soo-Suk;Choi, Kwang-Sun
    • Journal of the Korean Earth Science Society / v.24 no.3 / pp.205-215 / 2003
  • A precise and dense Bouguer anomaly is one of the most important data sets for improving our knowledge of the environment in the aspects of geophysics and physical geodesy. Besides a precise absolute gravity station net, two parts should be considered: improving the precision of gravity measurement and its correction, and the density of measurements both in number and distribution. For precise positioning, we tested how GPS could properly be used in gravity measurement, and concluded that a GPS measurement of 5 minutes is effective when DGPS is used with two geodetic GPS receivers and the baseline is shorter than 40 km; in this case a precise geoid model such as PNU95 should be used. By applying this method we are able to reduce the cost, time, and number of surveyors, and we also gain an improvement in quality. Two kinds of computer programs were developed, one to correct crossover errors and one to calculate terrain effects more precisely. Repeated measurements at the same stations during gravity surveying help not only to correct the drift of the spring but also to treat the results statistically by applying network adjustment, so blunders of various causes can be found easily and the quality of the measurements can be estimated. Recent developments in computer technology, digital elevation data, and precise positioning also allow the Bouguer anomaly to be improved through more precise terrain correction. Gravity data from various sources, such as land gravity data (by Choi, NGI, etc.), marine gravity data (by NORI), the Bouguer anomaly map of North Korea, Japanese gravity data, altimetry satellite data, and the EGM96 geopotential model, were collected and processed to obtain a precise and dense Bouguer anomaly in and around the Korean Peninsula.
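As a reminder of the corrections involved, a simple Bouguer anomaly combines the free-air and Bouguer slab corrections with normal gravity, using standard textbook constants; the paper's own processing additionally applies terrain and crossover corrections, which this sketch omits:

```python
def simple_bouguer_anomaly(g_obs_mgal, g_normal_mgal, h_m, density=2.67):
    """Simple Bouguer anomaly in mGal.
    Free-air correction: +0.3086 mGal per metre of elevation.
    Bouguer slab correction: 2*pi*G*rho*h, approximately
    0.04193 * rho * h mGal with rho in g/cm^3 and h in metres.
    Terrain correction is deliberately omitted in this sketch."""
    free_air = 0.3086 * h_m
    slab = 0.04193 * density * h_m
    return g_obs_mgal - g_normal_mgal + free_air - slab

# A station 100 m above the reference surface, standard crustal density:
print(round(simple_bouguer_anomaly(979000.0, 979010.0, 100.0), 2))
```

The station values here are invented for illustration; real reductions would start from the absolute gravity net and DGPS heights described in the abstract.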

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study was carried out to generate varied images of railroad surfaces with random defects as training data, in order to improve defect detection. Defects on railroad surfaces are caused by various factors, such as friction between track binding devices and adjacent tracks, and can cause accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and reduce maintenance costs. In general, the performance of image processing analysis methods and machine learning technology is affected by the quantity and quality of data. For this reason, some studies require specific devices or vehicles that acquire images of the track surface at regular intervals in order to build a database of varied railway surface images. In contrast, to reduce the operating cost of image acquisition, this study constructs a 'Defective Railroad Surface Regeneration Model' by applying methods from related studies on Generative Adversarial Networks (GANs), aiming to detect defects on railroad surfaces even without a dedicated database. The model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, taking the ground truth of the railroad defects into account. The generated railroad surface images were used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate performance, the railroad data were clustered and divided into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, only original texture images were used as the training set for the defect detection model; in the second, the model was trained on generated images produced by combining the original images with a few railroad textures from the other images. Each defect detection model was evaluated in terms of intersection over union (IoU) and F1-score against the ground truths. As a result, the scores increased by about 10~15% when the generated images were used, compared to the case in which only the original images were used. This shows that defects can be detected using existing data plus a few different texture images, even for railroad surfaces for which no dedicated training database has been constructed.
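The two evaluation measures used above are standard; for binary defect masks they reduce to the following (a minimal sketch over flat 0/1 lists, not the paper's evaluation code):

```python
def iou_and_f1(pred, truth):
    """Intersection over union (IoU) and F1-score for binary defect
    masks given as flat 0/1 lists of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    union = tp + fp + fn
    iou = tp / union if union else 1.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return iou, f1

# Toy 5-pixel masks: one false positive, one false negative.
print(iou_and_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```

Note that IoU is always less than or equal to F1 for the same masks, which is why papers often report both.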

PM2.5 Simulations for the Seoul Metropolitan Area: (II) Estimation of Self-Contributions and Emission-to-PM2.5 Conversion Rates for Each Source Category (수도권 초미세먼지 농도모사 : (II) 오염원별, 배출물질별 자체 기여도 및 전환율 산정)

  • Kim, Soontae;Bae, Changhan;Yoo, Chul;Kim, Byeong-Uk;Kim, Hyun Cheol;Moon, Nankyoung
    • Journal of Korean Society for Atmospheric Environment / v.33 no.4 / pp.377-392 / 2017
  • A set of BFM (Brute Force Method) simulations with the CMAQ (Community Multiscale Air Quality) model were conducted in order to estimate self-contributions and conversion rates of PPM (Primary $PM_{2.5}$), $NO_x$, $SO_2$, $NH_3$, and VOC emissions to $PM_{2.5}$ concentrations over the SMA (Seoul Metropolitan Area). CAPSS (Clean Air Policy Support System) 2013 EI (emissions inventory) from the NIER (National Institute of Environmental Research) was used for the base and sensitivity simulations. SCCs (Source Classification Codes) in the EI were utilized to group the emissions into area, mobile, and point source categories. PPM and $PM_{2.5}$ precursor emissions from each source category were reduced by 50%. In turn, air quality was simulated with CMAQ during January, April, July, and October in 2014 for the BFM runs. In this study, seasonal variations of SMA $PM_{2.5}$ self-sensitivities to PPM, $SO_2$, and $NH_3$ emissions can be observed even when the seasonal emission rates are almost identical. For example, when the mobile PPM emissions from the SMA were 634 TPM (Tons Per Month) and 603 TPM in January and July, self-contributions of the emissions to monthly mean $PM_{2.5}$ were $2.7{\mu}g/m^3$ and $1.3{\mu}g/m^3$ for the months, respectively. Similarly, while $NH_3$ emissions from area sources were 4,169 TPM and 3,951 TPM in January and July, the self-contributions to monthly mean $PM_{2.5}$ for the months were $2.0{\mu}g/m^3$ and $4.4{\mu}g/m^3$, respectively. Meanwhile, emission-to-$PM_{2.5}$ conversion rates of precursors vary among source categories. For instance, the annual mean conversion rates of the SMA mobile, area, and point sources were 19.3, 10.8, and $6.6{\mu}g/m^3/10^6TPY$ for $SO_2$ emissions while those rates for PPM emissions were 268.6, 207.7, and 181.5 (${\mu}g/m^3/10^6TPY$), respectively, over the region. 
The results demonstrate that SMA $PM_{2.5}$ responses to the same amount of reduction in precursor emissions differ across source categories and over time (e.g. seasons), which is important when cost-benefit analysis is conducted during air quality improvement planning. On the other hand, the annual mean $PM_{2.5}$ sensitivity to SMA $NO_x$ emissions remains negative even after a 50% reduction in that emission category, which implies that more aggressive $NO_x$ reductions are required for the SMA to overcome the '$NO_x$ disbenefit' present under the base condition.
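The arithmetic behind these figures can be sketched as follows. The functions mirror the usual Brute Force Method bookkeeping (linear scaling of a 50% cut to a full zero-out, then normalization by emissions); the sample concentrations and the tonnage are illustrative placeholders, not values from the paper's tables:

```python
def bfm_self_contribution(c_base, c_reduced, reduction_frac=0.5):
    """Zero-out self-contribution implied by a Brute Force Method run:
    the concentration change from a partial emission cut, scaled up
    linearly to a 100% cut (linearity is the usual BFM assumption)."""
    return (c_base - c_reduced) / reduction_frac

def conversion_rate(contribution_ugm3, emissions_tpy):
    """Emission-to-PM2.5 conversion rate in ug/m^3 per 10^6 tons/year."""
    return contribution_ugm3 / (emissions_tpy / 1e6)

# Hypothetical example: a 50% cut lowers monthly mean PM2.5
# from 25.0 to 23.65 ug/m^3, so the self-contribution is 2.7 ug/m^3.
contrib = bfm_self_contribution(25.0, 23.65)
print(round(conversion_rate(contrib, 7608), 1))  # 7,608 TPY is invented
```

Because the conversion rate divides the same contribution by category-specific emissions, two categories with equal contributions but different tonnages get very different rates, which is exactly the point the abstract makes.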

A Study on Reputation as Corporate Asset (기업자산으로서의 기업명성가치 연구: 국내 4개 기업 슈퍼브랜드와 기업명성, 미디어 이용간 관련성을 중심으로)

  • Lee, Cheol-Han;Cha, Hee-Won
    • Korean Journal of Communication and Information / v.30 / pp.203-237 / 2005
  • The purpose of this study is to find a model that can measure public relations programs, based on the assumption that public relations should aim to lift corporate reputation. It is a trend that corporate activities are measured from the standpoint of cost-benefit efficiency; however, the public relations field in Korea lags behind this trend because it lacks a sophisticated model. To fill this gap, the researchers introduce a reputation measurement model that can evaluate individual corporate public relations programs. In addition, this reputation model is applied to Korean companies with the expectation of producing a PR index which can be used to measure reputation as a corporate asset, or superbrand. This study examines the effects of superbrand on consumers according to their media use. Based on expert group interviews and consumer surveys, the factors of reputation are derived. These factors contribute to a reputation model and measurement index, which are in turn applied to measure Korean companies' public relations programs. Using superbrand as the dependent variable and managing ability, corporate responsibility, corporate communication, and product/employee quality as independent variables, this study seeks to find which factors specifically help lift corporate reputation. Results show that each factor influences corporate reputation positively. In addition, the researchers find that media use is moderately related to the superbrand building process in the cognitive dimension.


Development of Industrial Embedded System Platform (산업용 임베디드 시스템 플랫폼 개발)

  • Kim, Dae-Nam;Kim, Kyo-Sun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.5 / pp.50-60 / 2010
  • For the last half century, the personal computer and software industries have prospered thanks to the incessant evolution of computer systems. In the 21st century, the embedded system market has grown greatly as the market shifted to the mobile gadget field. While many multimedia gadgets such as mobile phones, navigation systems, PMPs, etc. are pouring into the market, most industrial control systems still rely on 8-bit micro-controllers and simple application software techniques. Unfortunately, the technological barrier, which requires additional investment and higher-quality manpower to overcome, and the business risks, which come from the uncertainty of market growth and the competitiveness of the resulting products, have prevented companies in the industry from taking advantage of such advanced technologies. However, high-performance, low-power, and low-cost hardware and software platforms will enable their high-technology products to be developed and recognized by potential clients in the future. This paper presents such a platform for industrial embedded systems. The platform was designed around the Telechips TCC8300 multimedia processor, which embeds a variety of parallel hardware for the implementation of multimedia functions, and open-source Embedded Linux, TinyX, and GTK+ are used for the GUI implementation to minimize technology costs. In order to estimate the expected performance and power consumption, the performance improvement and the power consumption due to each of the enabled hardware sub-systems, including the YUV2RGB frame converter, were measured. An analytic model was devised to check the feasibility of a new application and trade off its performance and power consumption. The validity of the model has been confirmed by implementing a real target system. The cost can be further mitigated by using hardware parts that are already in mass production, mostly for the cell-phone market.
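An analytic model of the kind described, trading off performance and power as hardware sub-systems are enabled, might take the following minimal shape. All subsystem names and figures below are invented placeholders, not measurements from the paper:

```python
def estimate(subsystems, enabled):
    """Aggregate throughput speed-up and power draw for a set of enabled
    hardware sub-systems. Each entry maps a name to a tuple of
    (speedup_factor, power_mw). This sketch assumes speed-ups are
    independent and multiplicative, and that power adds linearly."""
    speedup, power_mw = 1.0, 0.0
    for name in enabled:
        s, p = subsystems[name]
        speedup *= s
        power_mw += p
    return speedup, power_mw

# Placeholder figures for illustration only:
subsystems = {
    "yuv2rgb_converter": (1.8, 45.0),
    "video_decoder":     (2.5, 120.0),
}
print(estimate(subsystems, ["yuv2rgb_converter", "video_decoder"]))
```

A feasibility check for a new application would then compare the estimated speed-up against the application's frame-rate requirement and the summed power against the system budget.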

Development of a Business Model for Korean Insurance Companies with the Analysis of Fiduciary Relationship Persistency Rate (신뢰관계 유지율 분석을 통한 보험회사의 비즈니스 모델 개발)

  • Choi, In-Soo;Hong, Bok-An
    • Journal of the Korea Society of Computer and Information / v.6 no.4 / pp.188-205 / 2001
  • The insurer's duty of declaration is based on reciprocity of the principle of utmost good faith, and it is now widely recognized in British and American insurance circles. The conception of the fiduciary relationship is no longer an equity or legal theory confined to nations with Anglo-American law. Therefore, recognizing the fiduciary relationship as the essence of the insurance contract, which is more closely related to the public interest than any other field, will serve as an efficient measure to seek a fair and reasonable relationship with the contractor, and will provide a legal foundation permitting the contractor to bring an action for damages against violation of the insurer's duty of declaration. Only when the fiduciary relationship is accepted as the essence of the insurance contract can the business performance and quality of the insurance industry be expected to improve. Therefore, maintaining this fiduciary relationship, i.e. increasing the fiduciary relationship persistency rate, is the bottom line for the insurance industry. In this paper, we develop a fiduciary relationship persistency rate based on case-by-case comparison, defined as the ratio of the months a contract is actually maintained to the months paid, for each contract at the basis point. On this basis we develop a new business model seeking maximum profit with low cost and high efficiency, under a management policy that puts its priority on substantiality, as an improvement measure to break away from the vicious circle of high cost and low efficiency and from the management policy of prioritizing external growth (expansion of market share).
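The persistency rate defined above is a simple per-contract ratio; a sketch (the function name and the sample months are illustrative, not from the paper):

```python
def persistency_rate(maintained_months, paid_months):
    """Fiduciary-relationship persistency rate for one contract:
    months the contract was actually maintained divided by the
    months for which premiums were paid."""
    if paid_months <= 0:
        raise ValueError("paid_months must be positive")
    return maintained_months / paid_months

# A contract paid for 24 months and maintained for 18 of them:
print(round(persistency_rate(18, 24), 2))  # 0.75
```

Averaging this ratio across a book of contracts gives the portfolio-level persistency figure a business model would track.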


Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings published by professional rating agencies such as Standard & Poor's (S&P) and Moody's Investors Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more rating categories; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. In practice, however, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared to multiclass classifications such as credit ratings, and researchers have therefore tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature, but only a few types of MSVM have been tested in prior studies that apply MSVMs to credit ratings. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of them to a real-world case of credit rating in Korea: corporate bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another, and with those of traditional methods for credit ratings such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that a modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
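Of the six techniques listed, One-Against-One is the simplest to sketch: one binary classifier per pair of classes, with the class collecting the most pairwise votes winning. The toy classifiers below are stand-ins keyed on a single numeric feature; a real system would plug in trained binary SVMs:

```python
from itertools import combinations

def one_against_one_predict(pairwise, classes, x):
    """One-Against-One multiclass prediction by majority voting.
    `pairwise[(a, b)]` is a binary classifier returning the winning
    class (a or b) for sample x; the class with the most pairwise
    votes is the multiclass prediction."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[pairwise[(a, b)](x)] += 1
    return max(classes, key=lambda c: votes[c])

# Toy 3-class example: rate a bond by one invented numeric feature.
classes = ["AAA", "BBB", "CCC"]
pairwise = {
    ("AAA", "BBB"): lambda x: "AAA" if x > 0.7 else "BBB",
    ("AAA", "CCC"): lambda x: "AAA" if x > 0.4 else "CCC",
    ("BBB", "CCC"): lambda x: "BBB" if x > 0.3 else "CCC",
}
print(one_against_one_predict(pairwise, classes, 0.5))
```

For k rating categories this needs k(k-1)/2 binary classifiers, which is why DAGSVM and the single-machine formulations (Weston-Watkins, Crammer-Singer) are studied as alternatives.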

A Relative Study of 3D Digital Record Results on Buried Cultural Properties (매장문화재 자료에 대한 3D 디지털 기록 결과 비교연구)

  • KIM, Soohyun;LEE, Seungyeon;LEE, Jeongwon;AHN, Hyoungki
    • Korean Journal of Heritage: History & Science / v.55 no.1 / pp.175-198 / 2022
  • With the development of technology, methods of digitally converting various forms of analog information have become common. As a result, concepts of recording, building, and reproducing data in a virtual space, such as digital heritage and digital reconstruction, have been actively used in the preservation and research of various cultural heritages. However, few existing research results suggest optimal scanners for small and medium-sized relics, and since scanners are not cheap for researchers, there are not many related studies. The specifications of a 3D scanner have a great influence on the quality of the 3D model. In particular, since the state of light reflected from the surface of the object varies with the type of light source used in the scanner, using a scanner suited to the characteristics of the object is the way to increase the efficiency of the work. Therefore, this paper examined the quality differences among four types of 3D scanners on nine small and medium-sized buried cultural properties of various materials, including earthenware and porcelain, by period. As a result of the study, optical scanners and small and medium-sized object scanners were the most suitable for digital records of the small and medium-sized relics. Optical scanners are excellent in both mesh and texture but have the disadvantage of being very expensive and not portable. The handheld method had the advantages of excellent portability and speed. Considering the results relative to price, the small and medium-sized object scanner was the best, while photogrammetry was able to obtain a 3D model at the lowest cost. 3D scanning technology can broadly be used to produce digital drawings of relics, to restore and duplicate cultural properties, and to build databases. This study is meaningful in that it contributes to the use of the scanners most suitable for buried cultural properties, by material and period, toward the active use of 3D scanning technology in cultural heritage.

Update of Digital Map by using The Terrestrial LiDAR Data and Modified RANSAC (수정된 RANSAC 알고리즘과 지상라이다 데이터를 이용한 수치지도 건물레이어 갱신)

  • Kim, Sang Min;Jung, Jae Hoon;Lee, Jae Bin;Heo, Joon;Hong, Sung Chul;Cho, Hyoung Sig
    • Journal of Korean Society for Geospatial Information Science / v.22 no.4 / pp.3-11 / 2014
  • Recently, rapid urbanization has necessitated continuous updates of digital maps to provide the latest and most accurate information to users. However, conventional aerial photogrammetry places restrictions on periodic updates of small areas due to its high cost, and as-built drawings also bring problems in maintaining quality. As an alternative, this paper proposes a scheme for efficient and accurate updating of digital maps using point cloud data acquired by a Terrestrial Laser Scanner (TLS). Initially, the building sides are extracted from the whole point cloud and projected onto a 2D image to trace out the 2D building footprints. To register the extracted footprints on the digital map, a 2D affine model is used. For affine parameter estimation, the centroids of the footprint groups are randomly chosen and matched by means of a modified RANSAC algorithm. The experimental results based on the proposed algorithm showed that it is possible to renew a digital map using building footprints extracted from TLS data.
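The RANSAC-based affine estimation step can be sketched as follows. This is a plain RANSAC over matched centroid pairs, a simplified stand-in for the paper's modified variant, with all names and tolerances invented for illustration:

```python
import math
import random

def solve3(m, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    a = [row[:] + [val] for row, val in zip(m, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            for c in range(i, 4):
                a[r][c] -= f * a[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (a[i][3] - sum(a[i][c] * x[c] for c in range(i + 1, 3))) / a[i][i]
    return x

def affine_from_3(src, dst):
    """Exact 2D affine parameters [a, b, c, d, e, f] from 3 point pairs:
    x' = a*x + b*y + c and y' = d*x + e*y + f."""
    m = [[x, y, 1.0] for x, y in src]
    return solve3(m, [x for x, _ in dst]) + solve3(m, [y for _, y in dst])

def apply_affine(p, x, y):
    a, b, c, d, e, f = p
    return a * x + b * y + c, d * x + e * y + f

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """RANSAC estimation of a 2D affine transform from matched centroids:
    repeatedly fit to 3 random correspondences and keep the model that
    explains the most inliers within `tol`."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.sample(range(len(src)), 3)
        try:
            p = affine_from_3([src[i] for i in idx], [dst[i] for i in idx])
        except ZeroDivisionError:
            continue  # degenerate (collinear) sample, skip it
        inliers = sum(
            math.dist(apply_affine(p, *s), d) < tol for s, d in zip(src, dst))
        if inliers > best_inliers:
            best, best_inliers = p, inliers
    return best, best_inliers

# Five centroid pairs related by a pure translation of (+10, +5):
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0), (5.0, 4.0)]
dst = [(x + 10.0, y + 5.0) for x, y in src]
params, inliers = ransac_affine(src, dst)
print(inliers)
```

A production version would re-fit the affine parameters to all inliers of the winning model by least squares before applying it to the footprint layer.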