• Title/Summary/Keyword: Common Cost

Search Results: 1,065

Design and Implementation of a common API for building a system of mobile Web Services (모바일 웹서비스 시스템 구축을 위한 공통 API 설계 및 구현)

  • Kwon, Doowy;Park, Suhyun
    • Journal of the Korea Society of Computer and Information / v.19 no.3 / pp.101-108 / 2014
  • Many businesses, government offices, and educational institutions use information systems tailored to the characteristics of their work. With the spread of smartphones and a wide range of mobile devices, however, services for users who require mobility are now being developed in many organizations. Because interworking between existing systems and new mobile systems must be developed separately for each case, the waste of development cost, time, and human resources is growing, and existing systems often must be modified to provide mobile services. In this paper, to solve this interworking problem, we design and implement a common library that handles data transfer and processing among the existing server, a web services server, and mobile systems.

An empirical Analysis of Scientific and Technological Performance for the Railroad R&D through the Cross-sectional Analysis (횡단면 분석을 통한 철도 R&D의 과학기술적 성과 실증 분석)

  • Park, Man-Soo;Bang, Yoon-Sock;Lee, Hi-Sung
    • Journal of the Korean Society for Railway / v.14 no.3 / pp.285-294 / 2011
  • Whereas there have been many analyses of technology accumulation, economic performance, and the ripple effects of R&D investment from a macroscopic view and for other industries, analyses of the railroad industry have been insufficient. Based on a survey and analysis of preceding studies, this study selected intellectual property rights, patents, and papers as common indicators of scientific and technological performance for setting performance targets, and verified their appropriateness for 11 successfully completed railroad R&D projects. Whereas preceding studies set performance targets using research investment as the only input, this study built a performance-target model through a cross-sectional and residual analysis of the performance of the 11 railroad R&D projects, using research investment, capital investment, labor cost per person, and research time as inputs, and verified its validity empirically through the analysis of another project.
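The cross-sectional fit with residual analysis can be sketched as an ordinary least-squares regression; the input columns follow the abstract, but the coefficient values and project data below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical inputs for 11 completed projects (columns: research investment,
# capital investment, labor cost per person, research time) -- invented data
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(11, 4))
true_coef = np.array([0.8, 0.3, 0.5, 0.2])
y = X @ true_coef + rng.normal(0.0, 0.1, size=11)   # e.g. patents per project

# Cross-sectional OLS fit of the performance-target model y ~ X @ beta
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residual analysis: projects far above or below the fitted target
residuals = y - X @ beta
```

The fitted `X @ beta` plays the role of the performance target for a project with given inputs, and the residuals flag over- or under-performing projects.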

A Study on the Relationship Between Berth Occupancy Rate and Ship Size at Exclusive Bulk Terminal (Bulk 부두의 선박 대형화에 따른 선석별 점유율 비교 분석 - P제철 원료부두를 중심으로 -)

  • Kim, Chang-Gon;Jang, Seong-Yong
    • Journal of the Korea Society for Simulation / v.17 no.3 / pp.63-73 / 2008
  • The aim of this study is to analyze the berth occupancy rate according to ship size. P iron and steel company operates exclusive bulk terminals at P port and G port, where the water depths at the berths differ from one another. To reduce the sea transport cost between the loading ports and the unloading ports P and G, the company increases the number of large ships during ship scheduling, but this increases berth congestion at the deeper berths because of the draught of the large ships. In general, ship waiting time starts to rise even at low levels of berth occupancy and rises ever more sharply as utilization approaches capacity. This is not the case at exclusive terminals such as P port and G port, however: bulk ships arrive according to a pre-planned arrival schedule, the coefficient of variation of arrival times is small, and as a result queueing time does not rise sharply even at 80-90 % berth occupancy.
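The contrast between random and scheduled arrivals can be illustrated with Kingman's G/G/1 heavy-traffic approximation, which is not the paper's simulation model but captures the same effect; the occupancy and variability figures are illustrative assumptions:

```python
def kingman_wait(rho, ca2, cs2, service_time=1.0):
    """Kingman's G/G/1 approximation of mean queueing delay.

    rho: berth occupancy (utilization); ca2, cs2: squared coefficients of
    variation of interarrival and service times; service_time in days.
    """
    return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * service_time

# Poisson-like random arrivals (ca2 = 1): waiting climbs steeply with occupancy
random_arrivals = kingman_wait(0.85, ca2=1.0, cs2=1.0)

# Scheduled bulk carriers with a small arrival-time coefficient of variation:
# waiting stays modest even at 85 % occupancy, matching the exclusive-terminal
# behaviour described for P port and G port
scheduled_arrivals = kingman_wait(0.85, ca2=0.05, cs2=0.5)
```

The `(ca2 + cs2) / 2` factor is why low arrival-time variability keeps waiting time flat at occupancy levels that would choke a terminal with random arrivals.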


A Heuristic Algorithm for a Ship Speed and Bunkering Decision Problem (선박속력 및 급유결정 문제에 대한 휴리스틱 알고리즘)

  • Kim, Hwa-Joong;Kim, Jae-Gon
    • Journal of Korean Society of Industrial and Systems Engineering / v.39 no.2 / pp.19-27 / 2016
  • Maritime transport is now regarded as one of the main contributors to global climate change by virtue of its $CO_2$ emissions. Meanwhile, slow steaming, i.e., sailing at slower speed, has become common practice in the maritime industry as a way to lower $CO_2$ emissions and reduce bunker fuel consumption. The practice raises various operational decision issues for shipping companies: how fast to sail, how much fuel to bunker, and at which ports to bunker. In this context, this study addresses the problem of determining the ship speed, bunkering ports, and bunkering amounts at those ports over a given ship route, so as to minimize bunker fuel and ship time costs as well as the carbon tax, a regulatory measure aimed at reducing $CO_2$ emissions. The ship time cost is included because slow steaming increases transit times, which implies increased in-transit inventory costs for shippers. We formulate the problem as a nonlinear lot-sizing model and suggest a Lagrangian heuristic to solve it. The performance of the heuristic algorithm is evaluated using data obtained from reliable sources. Although the problem is operational, the heuristic is also used to address various strategic issues facing shipping companies, including the effects of bunker prices, carbon taxes, and ship time costs on ship speed, bunkering amounts, and the number of bunkering ports. For this, we conduct sensitivity analyses of these factors and discuss the findings.
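The core speed trade-off can be sketched under the common cubic fuel-consumption law; the route length, prices, and cost coefficients below are invented for illustration, and a plain grid search stands in for the paper's Lagrangian heuristic:

```python
def voyage_cost(speed_knots, distance_nm, fuel_price, carbon_tax,
                time_cost_per_day, k=1e-4):
    """Total cost of one voyage leg under a cubic fuel-consumption law.

    Daily fuel burn (tons/day) is modelled as k * speed**3; k and all
    prices here are illustrative assumptions, not values from the paper.
    """
    days = distance_nm / (speed_knots * 24.0)
    fuel_tons = k * speed_knots ** 3 * days
    # fuel cost + carbon tax (both per ton of fuel) + shipper time cost
    return fuel_tons * (fuel_price + carbon_tax) + days * time_cost_per_day

# Grid search over feasible speeds stands in for the Lagrangian heuristic
speeds = [s / 10.0 for s in range(100, 251)]  # 10.0 .. 25.0 knots
best = min(speeds, key=lambda s: voyage_cost(s, 5000, 500, 50, 1000))

# Sensitivity check: a higher carbon tax pushes the cost-minimizing
# speed down, i.e. toward deeper slow steaming
best_high_tax = min(speeds, key=lambda s: voyage_cost(s, 5000, 500, 300, 1000))
```

Because fuel burn grows with the cube of speed while time cost falls only linearly, the minimum sits at an interior speed, and raising the carbon tax shifts it lower.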

High-density genetic mapping using GBS in Chrysanthemum

  • Chung, Yong Suk;Cho, Jin Woong;Kim, Changsoo
    • Proceedings of the Korean Society of Crop Science Conference / 2017.06a / pp.57-57 / 2017
  • Chrysanthemum is one of the most important floral crops in Korea, with production of about 7 billion dollars in 2013 (1 billion for pot flowers and 6 billion for cut flowers). However, it is difficult to breed and to study genetically because 1) it is highly self-incompatible, 2) it is an outcrossing crop with heterozygous genotypes, and 3) commercial cultivars are hexaploid (2n = 6x = 54). Although a low-density genetic map and a QTL study have been reported, they are not sufficient for marker-assisted selection and other genetic studies. Therefore, we are constructing a high-density genetic map using GBS with about 100 $F_1s$ of C. boreale, which is diploid (2n = 2x = 18, about 2.8 Gb), instead of commercial cultivars. Since Chrysanthemum is outcrossing, a two-way pseudo-testcross model would be used to construct the genetic map. Genotyping-by-sequencing (GBS) would be utilized to generate a sufficient number of markers and to maximize genomic representation in a cost-effective manner. The completed sequences would be analyzed with the TASSEL-GBS pipeline. To reduce sequencing error, only the first 64 bases of each read, which have almost zero percent error, would be incorporated in the pipeline for the analysis. In addition, to reduce the errors common in heterozygous crops caused by low coverage, two rare cutters (NsiI and MseI) were used to increase sequencing depth. A Markov-based algorithm would also be used to deal with missing data. Further, sparsely placed markers on the physical map would be used as anchors to overcome problems caused by low coverage. For this purpose, simple sequence repeat (SSR) markers were generated from the transcriptome of Chrysanthemum using the MISA program; among those, 10 SSR markers that are evenly distributed along each chromosome and polymorphic between the two parents would be selected.
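The two-way pseudo-testcross marker-filtering step can be sketched as follows; the genotype calls are hypothetical, and in practice they would come from the TASSEL-GBS pipeline:

```python
# Hypothetical genotype calls (parent1, parent2) per marker; 'Aa' means
# heterozygous. Real calls would come from the TASSEL-GBS pipeline.
markers = {
    "m1": ("Aa", "aa"),  # segregates 1:1 in the F1: maps for parent 1
    "m2": ("AA", "Aa"),  # segregates 1:1 in the F1: maps for parent 2
    "m3": ("Aa", "Aa"),  # heterozygous in both parents: excluded here
    "m4": ("AA", "aa"),  # uniform F1, uninformative for mapping
}

def is_het(call):
    return call[0] != call[1]

# Two-way pseudo-testcross: keep markers heterozygous in exactly one parent
# and assign each to the linkage map of the parent it segregates from
parent1_map = [m for m, (p1, p2) in markers.items()
               if is_het(p1) and not is_het(p2)]
parent2_map = [m for m, (p1, p2) in markers.items()
               if is_het(p2) and not is_het(p1)]
```

Markers heterozygous in one parent and homozygous in the other behave like a testcross in the F1, which is what lets an outcrossing species be mapped as two parental maps.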


A Qualitative Analysis on Familial Caregivers' Burden in Utilizing a Nursing Home for the Elderly (유료 노인전문요양원 이용 경험에 관한 질적 연구)

  • 김완희;박종연;이지전;강임옥
    • Health Policy and Management / v.13 no.1 / pp.1-22 / 2003
  • The principal objective of this study was to analyze and conceptualize the socio-psychological burden of utilizing a nursing home for the elderly. The subjects were five elderly people from a private nursing home located in Seoul and their familial caregivers; one man and three women were staying at the facility at the time, and one woman had already been discharged. Data were collected through in-depth interviews, observations, and a review of records at the facility. For the analysis, the data were classified by similar contents among significant expressions and common factors. The subjects' motives for considering admission to the nursing home could be attributed to the familial caregivers' burden, a shortage of support, environmental improvement, and a feeling of helplessness about the elderly. The concept of burden includes the family's financial hardship, weariness, complications among family members, psychological uneasiness, and hospital expenses. The identified image of nursing homes for the elderly in Korea was generally negative, on account of high cost, unreasonable requisites and limitations for admission, inferior conditions, and especially the fact that there were few long-term care facilities within the community. From their experience of nursing homes, the interviewees felt sorry for their old parents, together with feelings of being undutiful, bitterness, and empathy. Additionally, they expressed a sense of relative deprivation about the fact that no long-term care facilities were available for the middle class. On this basis, multi-dimensional needs could be identified for the elderly with chronic illnesses.

Plasma Electrolytic Oxidation in Surface Modification of Metals for Electronics

  • Sharma, Mukesh Kumar;Jang, Youngjoo;Kim, Jongmin;Kim, Hyungtae;Jung, Jae Pil
    • Journal of Welding and Joining / v.32 no.3 / pp.27-33 / 2014
  • This paper presents a brief summary of a relatively new plasma-aided electrolytic surface treatment process for light metals, discussing its advantages, principle, process parameters, and applications. The process owes its origin to Sluginov, who discovered an arc discharge phenomenon in electrolysis in 1880. A similar process was studied and developed by Markov and coworkers in the 1970s, who successfully deposited an oxide film on aluminium. Several investigations thereafter led to the establishment of suitable process parameters for depositing a crystalline oxide film more than $100{\mu}m$ thick on the surface of light metals such as aluminium, titanium, and magnesium. The process nowadays goes by several names, such as plasma electrolytic oxidation (PEO), micro-arc oxidation (MAO), and anodic spark deposition (ASD). Several startups and surface treatment companies have taken up the process and deployed it successfully in a range of products, from military-grade rifles to common off-road sprockets. However, the technology has certain limitations, such as the formation of an outer porous oxide layer, especially in the case of magnesium, which has a Pilling-Bedworth ratio of less than one and thus an inherently non-protective oxide; this layer can be treated further, but at added cost. Overall, the PEO process offers a better solution than conventional coating processes: the electrolyte used is environmentally friendly, and the temperature control is not as strict as in other surface treatment processes.

Analysis of massive data in astronomy (천문학에서의 대용량 자료 분석)

  • Shin, Min-Su
    • The Korean Journal of Applied Statistics / v.29 no.6 / pp.1107-1116 / 2016
  • Recent astronomical survey observations have produced substantial amounts of data and have completely changed conventional methods of analyzing astronomical data. Both classical statistical inference and modern machine learning methods have been used in every step of data analysis, ranging from data calibration to inference of physical models. Machine learning methods are growing in popularity for classical problems of astronomical data analysis, thanks to low-cost data acquisition with cheap large-scale detectors and to fast computer networks that enable large volumes of data to be shared. It is common to have to consider the effects of inhomogeneous spatial and temporal coverage in the analysis of big astronomical data, and the growing size of the data requires parallel distributed computing environments as well as machine learning algorithms. Distributed data analysis systems, however, have not yet been widely adopted for the general analysis of massive astronomical data. Gathering adequate training data is observationally expensive, and learning data are generally collected from multiple sources in astronomy; therefore, semi-supervised and ensemble machine learning methods will become important for the analysis of big astronomical data.

Self-Calibration for Direction Finding in Multi-Baseline Interferometer System (멀티베이스라인 인터페로미터 시스템에서의 자체 교정 방향 탐지 방법)

  • Kim, Ji-Tae;Kim, Young-Soo;Kang, Jong-Jin;Lee, Duk-Yung;Roh, Ji-Hyun
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.21 no.4 / pp.433-442 / 2010
  • In this paper, a self-calibration algorithm based on the covariance matrix is proposed for compensating amplitude/phase mismatch in a multi-baseline interferometer direction-finding system. The proposed method solves a nonlinear constrained minimization problem that dramatically calibrates the mismatch error, using a space-sector concept with the cost function defined in this paper. The method, however, has the drawback of requiring an initial angle estimate to determine the proper space sector; this type of drawback is well known in nonlinear optimization problems. The superior calibration capability of the approach is illustrated by simulation experiments, in comparison with the interferometer algorithm, for a variety of amplitude/phase mismatch errors. Furthermore, the approach has been found to provide exceptional calibration capability even when the amplitude and phase mismatches exceed 30 dB and $5^{\circ}$, respectively, with a sector spacing of less than $50^{\circ}$.
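A much-simplified sketch of sector-constrained interferometer direction finding follows: a grid search over a sector around an initial angle estimate stands in for the paper's covariance-matrix minimization, and the wavelength, baselines, and mismatch values are invented for illustration:

```python
import math

WAVELENGTH = 0.03               # metres (e.g. a 10 GHz signal) -- illustrative
BASELINES = [0.05, 0.11, 0.23]  # antenna spacings in metres -- illustrative

def model_phase(theta_deg, d):
    """Ideal interferometer phase for arrival angle theta and baseline d."""
    return 2.0 * math.pi * d * math.sin(math.radians(theta_deg)) / WAVELENGTH

def wrap(p):
    """Wrap a phase into (-pi, pi]."""
    return math.atan2(math.sin(p), math.cos(p))

def cost(theta_deg, measured):
    # Sum of squared wrapped phase residuals across the baselines
    return sum(wrap(m - model_phase(theta_deg, d)) ** 2
               for m, d in zip(measured, BASELINES))

def direction_find(measured, initial_deg, sector_half_width=25.0):
    # Search only inside the sector around the initial angle estimate;
    # needing that initial estimate is the drawback the paper notes.
    steps = int(sector_half_width * 100)
    grid = [initial_deg + 0.01 * k for k in range(-steps, steps + 1)]
    return min(grid, key=lambda t: cost(t, measured))

# Simulated emitter at 17.3 deg with small per-channel phase mismatches
true_deg = 17.3
measured = [model_phase(true_deg, d) + e
            for d, e in zip(BASELINES, [0.02, -0.03, 0.01])]
estimate = direction_find(measured, initial_deg=10.0)
```

Combining the residuals from several baselines resolves the phase ambiguity of the longest baseline within the sector, which is what multi-baseline interferometry buys over a single pair.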

Development of Real time Air Quality Prediction System

  • Oh, Jai-Ho;Kim, Tae-Kook;Park, Hung-Mok;Kim, Young-Tae
    • Proceedings of the Korean Environmental Sciences Society Conference / 2003.11a / pp.73-78 / 2003
  • In this research, we implement a Realtime Air Diffusion Prediction System, a parallel Fortran model running on distributed-memory parallel computers. The system is designed for air diffusion simulations with four-dimensional data assimilation. For regional air quality forecasting, a series of dynamic downscaling steps is adopted using the NCAR/Penn State MM5 atmospheric model. The realtime initial data are provided daily from the KMA (Korea Meteorological Administration) global spectral model output. Producing a 24-hour air quality forecast with this four-step dynamic downscaling (27 km, 9 km, 3 km, and 1 km) takes huge computational resources. Parallel implementation of the realtime system is imperative to achieve increased throughput, since the system has to run with correct timing behavior and the sequential code requires a large amount of CPU time for typical simulations. The parallel system uses MPI (Message Passing Interface), a standard library that supports high-level routines for message passing, and we validate the parallel model by comparing it with the sequential model. For realtime running, we built a cluster computer, a distributed-memory parallel computer that links high-performance PCs with high-speed interconnection networks; we use 32 2-CPU nodes and a Myrinet network. Since cluster computers are more cost-effective than conventional distributed parallel computers, we can build a dedicated realtime computer. The system also includes a web-based GUI (Graphical User Interface) for convenient system management and performance monitoring, so that end-users can easily restart the system after faults. The performance of the parallel model is analyzed by comparing its execution time with the sequential model, and by calculating communication overhead and load imbalance, which are common problems in parallel processing. The performance analysis is carried out on our cluster of 32 2-CPU nodes.
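The speedup and load-imbalance metrics used in the performance analysis can be computed as in this sketch; all timings below are hypothetical, not measurements from the 32-node cluster:

```python
def speedup(t_seq, t_par):
    """Classic speedup of the parallel run over the sequential run."""
    return t_seq / t_par

def load_imbalance(node_times):
    """Max/mean node busy time; 1.0 means perfectly balanced nodes."""
    return max(node_times) / (sum(node_times) / len(node_times))

# Hypothetical timings, in seconds, for a 32-node forecast run
t_sequential = 86400.0                               # single-CPU run
node_times = [2600.0 + 10.0 * i for i in range(32)]  # per-node compute time
comm_overhead = 120.0                                # MPI message-passing time
t_parallel = max(node_times) + comm_overhead         # slowest node gates the run

s = speedup(t_sequential, t_parallel)
efficiency = s / 32                                  # fraction of ideal speedup
imbalance = load_imbalance(node_times)
```

Because the slowest node gates the whole run, reducing imbalance toward 1.0 and shrinking communication overhead are the two levers for pushing efficiency toward the ideal.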
