• Title/Summary/Keyword: Input and Output


Hue Shift Model and Hue Correction in High Luminance Display (고휘도 디스플레이의 색상이동모델과 색 보정)

  • Lee, Tae-Hyoung;Kwon, Oh-Seol;Park, Tae-Yong;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.4 s.316
    • /
    • pp.60-69
    • /
    • 2007
  • The human eye usually experiences a loss of color sensitivity when it is subjected to high levels of luminance, and perceives a discrepancy in color between high and normal-luminance displays, generally known as a hue shift. Accordingly, this paper models the hue-shift phenomenon and proposes a hue-correction method to provide perceptual matching between high and normal-luminance displays. The value of the hue shift is determined by perceived hue matching experiments. First, the phenomenon is observed at three lightness levels, where the ratio of luminance between the high and normal-luminance displays is kept constant while the perceived hue matching experiments are performed. To quantify the hue-shift phenomenon over the whole hue angle, color patches with the same lightness are first created and equally spaced along the hue angle. These patches are then displayed one-by-one on both displays at the given luminance ratio between the two displays. Next, the hue value for each patch appearing on the high-luminance display is adjusted by observers until the perceived hue of the patches on both displays appears visually the same. After obtaining the hue-shift values, these values are fit piecewise so that shifted-hue amounts can be approximately determined for arbitrary hue values of pixels in a high-luminance display and then used for correction. Essentially, the input RGB values of an image are converted to CIELAB values, and then LCh (lightness, chroma, and hue) values are calculated to obtain the hue values for all the pixels. These hue values are shifted according to the amount calculated by the functions of the hue-shift model. Finally, the corrected CIELAB values are calculated from the corrected hue values, after which output RGB values for all pixels are estimated. For evaluation, an observers' preference test was performed with the hue-shift results, and almost all observers concluded that the images corrected by the hue-shift model visually matched the images on the normal-luminance display.
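
The correction pipeline summarized in this abstract (RGB to CIELAB, LCh hue shift by a piecewise model, then back to RGB) can be illustrated with a short sketch. The fragment below is only an assumed illustration, not the authors' implementation: the hue-shift table is a placeholder for values that would come from the perceived hue matching experiments, and scikit-image is assumed for the color-space conversions.

```python
# Minimal sketch of the hue-correction pipeline: RGB -> CIELAB -> LCh, piecewise
# hue shift, then back to RGB. The shift table is a made-up placeholder; the
# paper's values come from perceived hue matching experiments.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

HUE_ANGLES_DEG = np.array([0, 60, 120, 180, 240, 300, 360])        # hypothetical grid
HUE_SHIFT_DEG = np.array([2.0, -1.5, 0.5, 3.0, -2.0, 1.0, 2.0])    # placeholder values

def correct_hue(rgb_image):
    """Apply a piecewise hue-shift correction to an RGB image with values in [0, 1]."""
    lab = rgb2lab(rgb_image)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.hypot(a, b)                        # C = sqrt(a^2 + b^2)
    hue = np.degrees(np.arctan2(b, a)) % 360.0     # h in [0, 360)

    # Piecewise-linear interpolation of the shift amount for every pixel's hue.
    shift = np.interp(hue, HUE_ANGLES_DEG, HUE_SHIFT_DEG)
    hue_new = (hue + shift) % 360.0

    # Rebuild a, b from the corrected hue with unchanged lightness and chroma.
    a_new = chroma * np.cos(np.radians(hue_new))
    b_new = chroma * np.sin(np.radians(hue_new))
    return np.clip(lab2rgb(np.stack([L, a_new, b_new], axis=-1)), 0.0, 1.0)
```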

Application of LCA Methodology on Lettuce Cropping Systems in Protected Cultivation (시설재배 상추에 대한 전과정평가 (LCA) 방법론 적용)

  • Ryu, Jong-Hee;Kim, Kye-Hoon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.43 no.5
    • /
    • pp.705-715
    • /
    • 2010
  • The adoption of a carbon footprint system is being actively promoted, mostly in developed countries, as one of the long-term responses to tightened regulations and standards on carbon emission in the agricultural sector. The Korean Ministry of Environment excluded primary agricultural products from the carbon footprint system due to the lack of an LCI (life cycle inventory) database in agriculture. Therefore, research on and establishment of an LCI database in agriculture for adoption of a carbon footprint system is urgent. Development of LCA (life cycle assessment) methodology for applying LCA to the agricultural environment in Korea is also very important, and such application is still at an early stage. Therefore, this study was carried out to find the effect of lettuce cultivation on the agricultural environment by establishing an LCA methodology. Data on agricultural inputs and outputs for establishing the LCI were collected from statistical data and documents on income from agricultural and livestock products prepared by RDA. The LCA methodology for agriculture was reviewed by investigating LCA methodologies and LCA applications of foreign countries. Results based on 1 kg of lettuce production showed that inputs including N, P, organic fertilizers, compound fertilizers, and crop protectants were the main sources of the major emission factors during the lettuce cropping process. The amount of each input had to be expressed in terms of active ingredients to estimate the actual quantity of the inputs used. Major emissions due to agricultural activities were $N_2O$ (emission to air) and ${NO_3}^-$/${PO_4}^-$ (emission to water) from fertilizers, organic compounds from pesticides, and air pollutants from fossil fuel combustion by agricultural machines. The software packages for LCIA (life cycle impact assessment) and LCA used in Korea are 'PASS' and 'TOTAL', which were developed by the Ministry of Knowledge Economy and the Ministry of Environment. However, the models used in these packages were developed in foreign countries. In the future, development of models and optimization of factors for characterization, normalization, and weighting suitable to the Korean agricultural environment need to be done for more precise LCA analysis in the agricultural area.

Effects of Typhoon and Mesoscale Eddy on Generation and Distribution of Near-Inertial Wave Energy in the East Sea (동해에서 태풍과 중규모 소용돌이가 준관성주기파 에너지 생성과 분포에 미치는 영향)

  • SONG, HAJIN;JEON, CHANHYUNG;CHAE, JEONG-YEOB;LEE, EUN-JOO;LEE, KANG-NYEONG;TAKAYAMA, KATSUMI;CHOI, YOUNGSEOK;PARK, JAE-HUN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.25 no.3
    • /
    • pp.55-66
    • /
    • 2020
  • Near-inertial waves (NIW), which are primarily generated by wind, can contribute to vertical mixing in the ocean. Energetic NIW can be generated by typhoons due to their strong winds and favorable wind-direction changes, especially on the right-hand side of the typhoon. Here we investigate the generation and distribution of NIW using the output of a real-time ocean forecasting system. Five years of model output during 2013-2017 are analyzed with a focus on cases of energetic NIW generation by the passage of three typhoons (Halong, Goni, and Chaba) over the East Sea. Calculations of the wind energy input (${\bar{W}}_I$) and the horizontal kinetic energy in the mixed layer (${\bar{HKE}}_{MLD}$) reveal that the spatial distribution of ${\bar{HKE}}_{MLD}$, which is strengthened on the right-hand side of typhoon tracks, is closely related to ${\bar{W}}_I$. The horizontal kinetic energy in the deep layer (${\bar{HKE}}_{DEEP}$) shows a patch-shaped distribution mainly located in the southern part of the East Sea. The spatial distribution of ${\bar{HKE}}_{DEEP}$ shows a close relationship with regions of negative relative vorticity caused by warm eddies in the upper layer. Monthly-mean ${\bar{HKE}}_{MLD}$ and ${\bar{HKE}}_{DEEP}$ during a typhoon passage over the East Sea are about 2.5-5.7 times and 1.2-1.6 times larger, respectively, than those during summer with no typhoons. In addition, their magnitudes are about 0.4-1.0 and 0.8-1.0 times those during winter, respectively, suggesting that typhoon-induced NIW can provide significant energy to enhance vertical mixing in both the mixed and deep layers during summer.
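
The two diagnostics named in this abstract have conventional definitions; as a sketch only (an assumed slab-model form, not necessarily the exact expressions used by the authors), the wind energy input and the mixed-layer horizontal kinetic energy can be written as

$${\bar{W}}_I = \frac{1}{T}\int_0^{T}\boldsymbol{\tau}\cdot\mathbf{u}_I\,dt, \qquad {\bar{HKE}}_{MLD} = \frac{1}{T}\int_0^{T}\frac{\rho_0}{2}\int_{-H_{MLD}}^{0}\left(u_I^{2}+v_I^{2}\right)dz\,dt,$$

where $\boldsymbol{\tau}$ is the surface wind stress, $\mathbf{u}_I=(u_I, v_I)$ is the near-inertial (band-passed) horizontal velocity, $\rho_0$ is a reference density, $H_{MLD}$ is the mixed-layer depth, and $T$ is the averaging period; ${\bar{HKE}}_{DEEP}$ follows analogously by integrating over the deep layer.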

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label out of two classes, multi-class classification with one label out of several classes, and multi-label classification with multiple labels out of several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of its characteristic of having multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, performance improvement becomes difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) training is performed to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only the linear relationship between labels or compress the labels by random transformation, they cannot capture the non-linear relationship between labels, and thus cannot create a latent label space that sufficiently contains the information of the original label space. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This can be traced to the gradient loss (vanishing gradient) problem that occurs during backpropagation. To solve this problem, skip connections were devised; by adding the input of a layer to its output to prevent gradient loss during backpropagation, efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to each of the encoder and decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and the low-dimensional latent label space. Using this, we conducted an experiment to predict the compressed keyword vector existing in the latent label space from the paper abstract and to evaluate the multi-label classification by restoring the predicted keyword vector back to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This shows that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately led to the improvement of the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance according to domain characteristics and the number of dimensions of the latent label space.
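
As a rough illustration of the architecture described above, the following PyTorch sketch adds a skip (residual) block to both the encoder and the decoder of a label autoencoder. It is not the authors' code; the layer sizes, the number of labels, and the latent dimension are placeholder assumptions.

```python
# Sketch of a label-embedding autoencoder with skip connections in the encoder and
# decoder. Dimensions are placeholders; training uses a multi-label reconstruction loss.
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Fully connected block whose input is added back to its output (skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.fc(x))   # the skip connection: output = f(x) + x

class LabelAutoEncoder(nn.Module):
    """Compress a high-dimensional multi-hot label vector into a low-dimensional
    latent label space and reconstruct it."""
    def __init__(self, n_labels=1000, latent_dim=64, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_labels, hidden_dim), nn.ReLU(),
            SkipBlock(hidden_dim),
            nn.Linear(hidden_dim, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            SkipBlock(hidden_dim),
            nn.Linear(hidden_dim, n_labels),
        )

    def forward(self, labels):
        z = self.encoder(labels)      # latent label vector (compressed label space)
        logits = self.decoder(z)      # reconstructed label logits
        return z, logits

model = LabelAutoEncoder()
y = torch.randint(0, 2, (8, 1000)).float()    # toy batch of multi-hot label vectors
z, logits = model(y)
loss = nn.BCEWithLogitsLoss()(logits, y)      # reconstruction loss for training
```

A separate regressor would then be trained to predict z from document features (for example, an encoding of the abstract), and the decoder restores the predicted z to the original label space for evaluation.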

An Area-Efficient Time-Shared 10b DAC for AMOLED Column Driver IC Applications (AMOLED 컬럼 구동회로 응용을 위한 시분할 기법 기반의 면적 효율적인 10b DAC)

  • Kim, Won-Kang;An, Tai-Ji;Lee, Seung-Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.5
    • /
    • pp.87-97
    • /
    • 2016
  • This work proposes a time-shared 10b DAC based on a two-step resistor string to minimize the effective area of a DAC channel for driving each AMOLED display column. The proposed DAC shows a lower effective DAC area per unit column driver and a faster conversion speed than the conventional DACs by employing a time-shared DEMUX and a ROM-based two-step decoder of 6b and 4b in the first and second resistor string. In the second-stage 4b floating resistor string, a simple current source rather than a unity-gain buffer decreases the loading effect and chip area of a DAC channel and eliminates offset mismatch between channels caused by buffer amplifiers. The proposed 1-to-24 DEMUX enables a single DAC channel to drive 24 columns sequentially with a single-phase clock and a 5b binary counter. A 0.9pF sampling capacitor and a small-sized source follower in the input stage of each column-driving buffer amplifier decrease the effect due to channel charge injection and improve the output settling accuracy of the buffer amplifier while using the top-plate sampling scheme in the proposed DAC. The proposed DAC in a $0.18{\mu}m$ CMOS shows a signal settling time of 62.5ns during code transitions from '$000_{16}$' to '$3FF_{16}$'. The prototype DAC occupies a unit channel area of $0.058mm^2$ and an effective unit channel area of $0.002mm^2$ while consuming 6.08mW with analog and digital power supplies of 3.3V and 1.8V, respectively.
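
The two-step coarse/fine decoding that underlies this DAC can be illustrated numerically. The toy sketch below is only a behavioral illustration of how a 10-bit code splits into a 6-bit coarse segment selection and a 4-bit fine interpolation; the reference voltage and the assumption of ideal settling are placeholders, not values from the paper.

```python
# Toy behavioral model of a two-step (6b coarse + 4b fine) resistor-string decode.
VREF = 3.3  # assumed full-scale reference voltage

def two_step_dac(code_10b: int) -> float:
    """Ideal output voltage for a 10-bit code in a 6b + 4b two-step resistor string."""
    assert 0 <= code_10b < 1024
    coarse = code_10b >> 4              # upper 6 bits pick one of 64 coarse segments
    fine = code_10b & 0xF               # lower 4 bits interpolate within the segment
    segment = VREF / 64                 # voltage span of one coarse segment
    return coarse * segment + fine * (segment / 16)

# The code transition measured in the paper, '000' to '3FF' (hexadecimal):
print(two_step_dac(0x000), two_step_dac(0x3FF))    # 0.0 V and one LSB below VREF
```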

Feasibility of Application of Roy's Adaptation Model to Family Health Assessment (로이적응모델의 가족건강사정에의 적용가능성)

  • Jang Sun-Ok
    • Journal of Korean Public Health Nursing
    • /
    • v.8 no.2
    • /
    • pp.35-56
    • /
    • 1994
  • This article was intended to survey whether Roy's Adaptation Model (the 'Roy Model') can be applied to family health assessment and to study whether application of the Roy Model to a Korean family is feasible. Under the Roy Model, a family is viewed as an adaptation system having a series of processes of input, process, feedback, and output. Further, the Roy Model indicates that a family has physiological, self-concept, role function, and interdependent modes in respect of internal or external stimuli. When family health is assessed, the adaptation mode of that family must be assessed at the first stage. Then, the focal, contextual, and residual stimuli affecting the family must be assessed. In 1984 Hanson suggested four types of family adaptation mode based upon the Roy Model and thereby enhanced the possibility of family health assessment. In order to survey whether the Roy Model can be applied to the Korean family, the author contacted 169 adults living in 'A' city with open-ended questions regarding the family and then analyzed their responses using the Roy Model. This study categorized the family adaptation modes based upon the four types of family adaptation mode developed by Hanson. As a result, the family adaptation modes were categorized into 117 concepts: 47 in the physiological mode, 56 in the self-concept mode, 9 in the role function mode, and 5 in the interdependent mode. Further, stimuli affecting the family were classified according to Roy's three types of stimuli: 19 focal stimuli concepts, 19 contextual stimuli concepts, and one residual stimuli concept. This result implies that the Roy Model can be applied to the Korean family. The physiological mode shows the meaning of survival, while the self-concept mode reflects the meaning of growth and emphasizes harmony within the family based on familism. The role function mode shows continuity rather than control of family members. By contrast, the interdependent mode shows interaction with the community to which the family belongs, but the degree of interaction does not appear very high. The analysis of family stimuli led this study to conclude that troubles within a family, changes in family structure, and disease of a family member generate stimuli. However, application of the Roy Model raises the following problems. First, Roy argued that the family adaptation mode should be assessed at the first level of family health assessment and then the stimuli affecting family adaptation should be assessed at the second stage. In the author's view, however, to determine the family adaptation level, the focal, contextual, and residual stimuli should first be confirmed by assessing stimuli at the first stage, and then the family adaptation mode in respect of such stimuli should be assessed; the rationale is that the family adaptation level is determined by the strength of the focal, contextual, and residual stimuli. Second, Whall (1991) raised the question, 'Does one assess the family adaptation mode and intervene in the stimuli?' Likewise, assessment of family adaptation should be made in such a manner that family health can be enhanced. Third, Roy regards additional stimuli (such as contextual and residual stimuli) as the same as internal processes (including nurturance, support, and socialization); however, the basis for this belief is not very clear.
In spite of the problems indicated above, it can be concluded that the Roy Model can serve as a good device for the assessment of family health and that it can be applied to a Korean family. Finally, further research on family adaptation theory and family nursing theory is required for the development of these theories.


Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.3
    • /
    • pp.127-143
    • /
    • 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These new features facilitate ocean modeling experimentation on commercial cloud computing systems. Many scientists and engineers expect cloud computing to become mainstream in the near future. Analysis of the performance and features of commercial cloud services for numerical modeling is essential in order to select appropriate systems, as this can help to minimize execution time and the amount of resources used. The effect of cache memory is large in the processing structure of an ocean numerical model, which reads and writes data in multidimensional array structures, and network speed is important because of the communication patterns through which large amounts of data move. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking software package, and STREAM, the memory benchmark, was evaluated and compared on commercial cloud systems to provide information for the transition of other ocean models into cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes in the virtualization-based cloud systems. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory, and that memory latency is also important to performance. Increasing the number of cores to reduce the running time for numerical modeling is more effective with large grid sizes than with small grid sizes. Our analysis results will be helpful as a reference for constructing the best computing system in the cloud to minimize time and cost for numerical ocean modeling.
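
The point about cache behavior for multidimensional arrays can be illustrated with a toy experiment (not from the paper): sweeping a large array along its contiguous axis is far friendlier to the cache hierarchy than a strided sweep, which is one reason grid size and memory layout matter for ROMS-like codes.

```python
# Toy illustration of cache/memory-layout effects: contiguous (row-wise) versus
# strided (column-wise) traversal of a large C-ordered array.
import time
import numpy as np

field = np.random.rand(2000, 2000)    # placeholder stand-in for a model grid

def sum_rows(a):                       # contiguous access along the last axis
    return sum(a[i, :].sum() for i in range(a.shape[0]))

def sum_cols(a):                       # strided access, poorer cache locality
    return sum(a[:, j].sum() for j in range(a.shape[1]))

for fn in (sum_rows, sum_cols):
    t0 = time.perf_counter()
    fn(field)
    print(fn.__name__, f"{time.perf_counter() - t0:.4f} s")
```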

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding(ARSU) and the Software Re/reverse-engineering Environment(SRE) (실시간 소프트웨어의 조절적${\cdot}$단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, it is commonly very hard to understand the software during the reengineering process. However, this research facilitates scalable re/reverse-engineering of such real-time software based on the architecture of the software in three-dimensional perspectives: structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, the specification (outline), and the algorithm (detail) views of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is a software unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up way. It captures the specification and algorithm views at different levels of abstraction. It also shows the functional and behavioral information at these levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view captures a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, etc. This view shows the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software. This approach allows engineers to extract reusable components from the software during the reengineering process.
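
The hierarchical SWU organization described in this abstract is essentially a tree with per-unit views and two navigation directions. The following sketch is hypothetical (not the SRE tool itself) and only illustrates the kind of data structure involved.

```python
# Hypothetical sketch of a Software Unit (SWU) hierarchy with parent-child links,
# per-unit specification/algorithm views, and top-down / bottom-up navigation.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SWU:
    name: str
    specification: str = ""                     # outline-level view
    algorithm: str = ""                         # detail-level view
    parent: Optional[SWU] = None
    children: list[SWU] = field(default_factory=list)

    def add_child(self, child: SWU) -> SWU:
        child.parent = self
        self.children.append(child)
        return child

    def top_down(self, depth: int = 0):
        """Navigate the architecture from this unit downward."""
        yield depth, self
        for c in self.children:
            yield from c.top_down(depth + 1)

    def bottom_up(self):
        """Navigate from this unit back up to the root."""
        node = self
        while node is not None:
            yield node
            node = node.parent

root = SWU("controller", specification="real-time task dispatcher")
root.add_child(SWU("sensor_io", specification="device polling"))
for depth, unit in root.top_down():
    print("  " * depth + unit.name)
```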


Evaluation of the Technical Efficiency of Annual Chestnut Production in South Korea (임업생산비통계를 이용한 연도별 밤 생산량의 기술효율성 평가)

  • Won, Hyun-Kyu;Jeon, Ju-Hyeon;Kim, Chul-Woo;Jeon, Hyun-Sun;Son, Yeung-Mo;Lee, Uk
    • Journal of Korean Society of Forest Science
    • /
    • v.105 no.2
    • /
    • pp.247-252
    • /
    • 2016
  • This study was conducted to evaluate the technical efficiency of annual chestnut production in South Korea. In this study, technical efficiency is evaluated against the maximum possible production obtainable for a given amount of input costs. For the analysis of technical efficiency we used the output-oriented BCC model, and then analyzed the correlations among input costs, production, gross income, net income, and market price per unit in order to determine the cause of variation in the technical efficiency. As study materials, we used forestry production cost statistics for the 7 years from 2008 to 2014. The results showed that the maximum possible production and the actual production in 2008, 2009, and 2010 were 1,568 kg, 1,745 kg, and 1,534 kg per hectare, respectively, with the two values being identical; consequently, the technical efficiency for those years was evaluated as 1.00. On the other hand, actual production from 2011 to 2014 was 1,270 kg, 1,047 kg, 1,258 kg, and 1,488 kg per hectare, respectively, while the maximum possible production was 1,524 kg, 1,467 kg, 1,635 kg, and 1,637 kg per hectare. From those values, the technical efficiency was evaluated as 0.83, 0.71, 0.75, and 0.91, respectively. The lowest technical efficiency was 0.71 in 2012, and the values increased gradually from 2013. The cause of the variation in technical efficiency was related to the relationship between production and market price, with a negative correlation of r = -0.821 (p<0.05). The maximum possible production per unit area ranged from 1,488 kg (lower limit) to 1,745 kg (upper limit), with an average of 1,548 kg.
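
The output-oriented BCC model referred to above is a small linear program; as an assumed sketch only (not the authors' code, and with placeholder figures rather than the forestry cost statistics), it can be set up with SciPy as follows. The technical efficiency of a year is 1/phi, where phi is the maximal radial expansion of its output that the observed data can support.

```python
# Sketch of the output-oriented BCC (variable-returns-to-scale) DEA model:
# maximize phi subject to sum(l*x) <= x_o, sum(l*y) >= phi*y_o, sum(l) = 1, l >= 0.
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(inputs, outputs, o):
    """Technical efficiency (1/phi) of decision-making unit `o`."""
    X, Y = np.atleast_2d(inputs), np.atleast_2d(outputs)    # shapes (m, n) and (s, n)
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n); c[0] = -1.0                         # minimize -phi
    A_ub = np.zeros((m + s, 1 + n)); b_ub = np.zeros(m + s)
    A_ub[:m, 1:] = X;  b_ub[:m] = X[:, o]                    # input constraints
    A_ub[m:, 0] = Y[:, o]; A_ub[m:, 1:] = -Y                 # phi*y_o - sum(l*y) <= 0
    A_eq = np.zeros((1, 1 + n)); A_eq[0, 1:] = 1.0           # convexity: sum(l) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (1 + n), method="highs")
    return 1.0 / res.x[0]

# Toy example: one cost input and one production output for 7 hypothetical years.
cost = [[10, 11, 10, 12, 13, 12, 11]]
prod = [[1568, 1745, 1534, 1270, 1047, 1258, 1488]]
print([round(bcc_output_efficiency(cost, prod, j), 2) for j in range(7)])
```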

A Study on the Component-based GIS Development Methodology using UML (UML을 활용한 컴포넌트 기반의 GIS 개발방법론에 관한 연구)

  • Park, Tae-Og;Kim, Kye-Hyun
    • Journal of Korea Spatial Information System Society
    • /
    • v.3 no.2 s.6
    • /
    • pp.21-43
    • /
    • 2001
  • The environment for developing information systems, including GIS, has changed drastically in recent years in terms of software complexity and diversity, distributed processing, network computing, etc. This has led the software development paradigm toward CBD (Component-Based Development) based on object-oriented technology. In an effort to support these movements, OGC has released abstract and implementation standards to enable access to services for heterogeneous geographic information processing. It is also a common trend in the domestic field to develop GIS applications for municipal governments based on component technology. Therefore, it is imperative to adopt component technology considering current movements, yet little related research has been done. This research proposes a component-based GIS development methodology, ATOM (Advanced Technology Of Methodology), and verifies its applicability through a case study. ATOM can be used as a methodology to develop both components themselves and enterprise GIS, supporting the whole software development life cycle based on conventional reusable components. ATOM defines a stepwise development process comprising the activities and work units of each step. It also provides inputs and outputs, standardized items and specifications for documentation, and detailed instructions for easy understanding of the development methodology. The major characteristic of ATOM is a component-based development methodology that considers numerous features of the GIS domain to generate components with a simple function, the smallest size, and the maximum reusability. The case study to validate the applicability of ATOM showed that it is an efficient tool for generating components, providing relatively systematic and detailed guidelines for component development. Therefore, ATOM should promote the quality and productivity of GIS application software development and eventually contribute to the automatic production of GIS software, our final goal.
