• Title/Summary/Keyword: Input and Output


Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services
    • /
    • v.19 no.1
    • /
    • pp.77-86
    • /
    • 2018
  • For the past few years, KISTI has been operating an online simulation execution platform, called EDISON, that allows users to conduct simulations for various scientific applications across diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue arising when conducting such simulations on an online platform stems from the fact that many users simultaneously submit simulation requests (or jobs) with the same (or almost unchanged) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs consume duplicate computing and storage resources at an undesirably fast pace. To overcome excessive resource usage by such identical simulation requests, in this paper we introduce a novel framework, called IceSheet, to efficiently manage simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are used not only to detect duplicate simulation requests but also to search existing simulation results via an open-source search engine, ElasticSearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search over and reuse of the stored simulation results. We implemented a prototype of the proposed framework using this search engine in conjunction with the online simulation execution platform. Our evaluation of the framework was performed on real simulation execution-provenance records collected on the platform. Once the prototyped IceSheet framework fully functions with the platform, users can quickly search for past parameter values entered into the desired simulation software and retrieve existing results for the same input parameter values, if any. Therefore, we expect the proposed framework to eliminate duplicate resource consumption and significantly reduce execution time for requests identical to previously executed simulations.
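
The abstract includes no code; as a rough illustration of the provenance-based duplicate check it describes, the sketch below fingerprints a job's solver name and input parameters and looks the fingerprint up in a store of past runs before submitting. All names (ProvenanceStore, submit_or_reuse, etc.) are hypothetical and are not part of IceSheet; the real framework indexes records in ElasticSearch.

```python
import hashlib
import json

class ProvenanceStore:
    """Hypothetical in-memory stand-in for the provenance index."""
    def __init__(self):
        self._records = {}  # fingerprint -> location of stored result

    def fingerprint(self, solver, params, input_files):
        # Hash the solver name plus a canonical form of the inputs.
        payload = json.dumps({"solver": solver,
                              "params": params,
                              "inputs": sorted(input_files)},
                             sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def lookup(self, fp):
        return self._records.get(fp)

    def record(self, fp, result_path):
        self._records[fp] = result_path


def submit_or_reuse(store, solver, params, input_files, run_simulation):
    """Reuse an existing result when an identical request was executed before."""
    fp = store.fingerprint(solver, params, input_files)
    cached = store.lookup(fp)
    if cached is not None:
        return cached                      # duplicate request: reuse stored result
    result_path = run_simulation(solver, params, input_files)
    store.record(fp, result_path)          # store provenance for future reuse
    return result_path
```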

Analysis of Sawmill Productivity and Optimum Combination of Production Factors (제재생산성(製材生産性)과 적정생산요소투입량(適正生産要素投入量) 계측(計測))

  • Cho, Woong Hyuk
    • Journal of Korean Society of Forest Science
    • /
    • v.32 no.1
    • /
    • pp.29-35
    • /
    • 1976
  • In order to estimate sawmill productivities, rates of technical change, and the optimum combination of production factors, Cobb-Douglas production functions were derived using data obtained from 96 sample mills in the Busan-Incheon, southwestern, and northeastern areas. The results may be summarized as follows: 1. Average sawmill size has tended to expand in all areas. Horse-power holdings per mill increased at rates of 91 percent in Busan-Incheon, 7.7 percent in the southwestern, and 16.9 percent in the northeastern areas. This implies that the mills around log-importing ports have developed rapidly compared with those in forest regions. 2. The regression coefficients (production elasticities) of the 1967 functions for the three areas are very similar to each other, but significant differences are found in the 1975 production functions. In other words, sawmill productivity was restricted mainly by capital deficiencies in all areas in 1967, but this situation persisted only in the N-E area in 1975. The sum of the regression coefficients ranges from 1.0437 to 1.4214, indicating increasing returns to scale. 3. The annual rates of technical change in the B-I, S-W and N-E areas over the observed period are 17.6, 7.6 and 2.2 percent, respectively. Busan-Incheon is the only area where labor productivity is higher than that of capital. 4. The best combination of production factors for maximizing the firm's profit depends on changes in input and output prices. Under certain assumptions about prices and costs, the optimum levels of power and labor input in the B-I, S-W and N-E areas are 57:17, 427:94 and 192:27, respectively.
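
A minimal sketch of how such a Cobb-Douglas function Q = A·K^α·L^β is typically estimated (not the paper's own data or code): take logs to get ln Q = ln A + α ln K + β ln L and fit by ordinary least squares. The observations below are made up.

```python
import numpy as np

# Hypothetical observations: output Q, capital K (horse-power), labor L per mill.
Q = np.array([120.0, 340.0, 95.0, 410.0, 220.0])
K = np.array([40.0, 110.0, 35.0, 150.0, 80.0])
L = np.array([12.0, 25.0, 10.0, 30.0, 18.0])

# Log-linearize: ln Q = ln A + alpha * ln K + beta * ln L
X = np.column_stack([np.ones_like(Q), np.log(K), np.log(L)])
y = np.log(Q)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ln_A, alpha, beta = coef

print(f"A = {np.exp(ln_A):.3f}, alpha = {alpha:.3f}, beta = {beta:.3f}")
print(f"Returns to scale (alpha + beta) = {alpha + beta:.3f}")
```

A sum of elasticities greater than one, as reported above (1.0437 to 1.4214), corresponds to increasing returns to scale.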


A 10b 50MS/s Low-Power Skinny-Type 0.13um CMOS ADC for CIS Applications (CIS 응용을 위해 제한된 폭을 가지는 10비트 50MS/s 저 전력 0.13um CMOS ADC)

  • Song, Jung-Eun;Hwang, Dong-Hyun;Hwang, Won-Seok;Kim, Kwang-Soo;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.5
    • /
    • pp.25-33
    • /
    • 2011
  • This work proposes a skinny-type 10b 50MS/s 0.13um CMOS three-step pipeline ADC for CIS applications. Analog circuits for CIS applications commonly employ a high supply voltage to obtain a sufficiently wide dynamic range, while digital circuits use a low supply voltage to minimize power consumption. The proposed ADC converts analog signals with a wide swing range into low-voltage digital data using both supply voltages. An op-amp sharing technique employed in the residue amplifiers properly controls currents depending on the amplification mode of each pipeline stage, optimizes the performance of the op-amps, and improves power efficiency. In the three FLASH ADCs, the number of input stages is reduced by half using an interpolation technique, while each comparator consists only of a latch with low kick-back noise, based on pull-down switches that separate the input and output nodes. The reference circuits achieve the required settling time with only on-chip low-power drivers, and the digital correction logic has two kinds of level shifters depending on the signal-voltage levels to be processed. The prototype ADC, fabricated in a 0.13um CMOS process supporting 0.35um thick-gate-oxide transistors, demonstrates a measured DNL and INL within 0.42LSB and 1.19LSB, respectively. The ADC shows a maximum SNDR of 55.4dB and a maximum SFDR of 68.7dB at 50MS/s. With an active die area of 0.53$mm^2$, the ADC consumes 15.6mW at 50MS/s with an analog voltage of 2.0V and two digital voltages of 2.8V ($=D_H$) and 1.2V ($=D_L$).
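
The reported figures allow a quick back-of-the-envelope check not given in the abstract itself, using the standard conversions ENOB = (SNDR − 1.76)/6.02 and the Walden figure of merit FoM = P / (2^ENOB · f_s):

```python
# Back-of-the-envelope figures derived from the reported measurements (not stated in the paper).
sndr_db = 55.4           # maximum SNDR [dB]
power_w = 15.6e-3        # power consumption [W]
fs_hz = 50e6             # sampling rate [S/s]

enob = (sndr_db - 1.76) / 6.02                   # effective number of bits
fom_j_per_step = power_w / (2 ** enob * fs_hz)   # Walden FoM [J/conversion-step]

print(f"ENOB ~ {enob:.2f} bits")
print(f"FoM  ~ {fom_j_per_step * 1e12:.2f} pJ/conversion-step")
```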

Stand Water Balance and Stream Water Quality in Small Forested Watershed Yangpyong Gyeonggido (경기도(京畿道) 양평지역(陽平地域) 산림(山林) 소류역(小流域)의 수수지(水收支)와 계류수(溪流水)의 수질특성(水質特性))

  • Kim, Jung-You;Han, Sang-Sup
    • Journal of Forest and Environmental Science
    • /
    • v.17 no.1
    • /
    • pp.18-28
    • /
    • 2001
  • This study was carried out to investigate the characteristics of water quality variations in relation to stand water balance in the Gejung-Lee small forest watershed, YangPyong-Gun. Water quantity, pH, $Cl^-$, $NO_3{^-}$, $SO_4{^{2-}}$, $Na^+$, $NH_4{^+}$, $K^+$, $Mg^{2+}$ and $Ca^{2+}$ were monitored in open rainfall for single storm events and in long-term stream water in the small forest watershed from January 1998 to December 1999. The results were summarized as follows: the runoff rate was 46.4% in 1998 and 52.2% in 1999. The average pH values of rainfall ranged from 4.8 to 6.2 and those of stream water from 6.4 to 7.1. The total input amounts (kg/ha) of anions and cations in rainfall were in the order $SO_4{^{2-}}>NO_3{^-}>Ca^{2+}>NH_4{^+}>Cl^->Na^+>K^+>Mg^{2+}$, and in stream water in the order $NO_3{^-}>Ca^{2+}>SO_4{^{2-}}>Na^+>Cl^->K^+>Mg^{2+}>NH_4{^+}$, respectively. Dissolved $NH_4{^+}$ was retained at 5.29 kg/ha, while the outputs of the other constituents exceeded their inputs in the small forest watershed.


Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.493-500
    • /
    • 2021
  • In this paper, we propose a deep learning structure to improve the quality of polygonal containers. The deep learning structure consists of a convolution layer, a bottleneck layer, fully connected layers, and a softmax layer. The convolution layer obtains a feature image by applying several 3x3 convolution filters to the input image or to the feature image of the previous layer. The bottleneck layer selects only the optimal features from the feature image extracted by the convolution layer, reducing the number of channels with a 1x1 convolution followed by ReLU and then performing a 3x3 convolution with ReLU. A global average pooling operation performed after the bottleneck layer reduces the size of the feature image while keeping only the optimal features. The fully connected stage produces the output data through six fully connected layers. The softmax layer converts the values of the output nodes into values between 0 and 1 through an activation function. After training is completed, the recognition process classifies non-circular glass bottles by acquiring images with a camera, detecting their positions, and classifying them with the trained network, as in the training process. To evaluate the performance of the proposed deep learning structure, an experiment at an authorized testing institute showed a good/defective discrimination accuracy of 99%, comparable to the world's highest level. Inspection time averaged 1.7 seconds, within the operating time standards of production processes using non-circular machine vision systems. Therefore, the effectiveness of the deep learning structure proposed in this paper for improving the quality of polygonal containers was demonstrated.
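
A minimal sketch of the layer sequence described above (3x3 convolution, 1x1 + 3x3 bottleneck, global average pooling, six fully connected layers, softmax), assuming an input size, channel widths, and class count that the abstract does not specify; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class PolygonalContainerNet(nn.Module):
    """Sketch of the described structure; channel sizes and class count are assumptions."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Sequential(            # 3x3 convolution layer
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU())
        self.bottleneck = nn.Sequential(      # 1x1 channel reduction + 3x3 convolution
            nn.Conv2d(32, 16, kernel_size=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU())
        self.gap = nn.AdaptiveAvgPool2d(1)    # global average pooling
        fc, width = [], 32
        for _ in range(6):                    # six fully connected layers
            fc += [nn.Linear(width, width), nn.ReLU()]
        self.fc = nn.Sequential(*fc)
        self.head = nn.Linear(width, num_classes)

    def forward(self, x):
        x = self.conv(x)
        x = self.bottleneck(x)
        x = self.gap(x).flatten(1)
        x = self.fc(x)
        return torch.softmax(self.head(x), dim=1)   # values between 0 and 1

# Example: one 128x128 RGB image through the network (good/defective probabilities).
probs = PolygonalContainerNet()(torch.randn(1, 3, 128, 128))
```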

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.139-146
    • /
    • 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, these algorithms can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large state space. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as a learning method for the dynamic positioning of the robot soccer agent, and implement a robot soccer agent system called Cogitoniks.
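
A rough sketch of the adaptive mediation idea as described in the abstract (not the paper's code): each module keeps its own Q-table, the mediator scores actions with a weighted sum of module Q-values, and the weights are nudged toward modules whose valuations correlate with the received reward. The update rules and constants here are assumptions.

```python
import random
from collections import defaultdict

class QModule:
    """One learning module with its own Q-table over its own (sub)state."""
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)
        self.actions, self.alpha, self.gamma = actions, alpha, gamma

    def value(self, state, action):
        return self.q[(state, action)]

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

class AdaptiveMediator:
    """Combine module Q-values with weights adapted to each module's reward contribution."""
    def __init__(self, modules, actions, lr=0.05, epsilon=0.1):
        self.modules, self.actions = modules, actions
        self.weights = [1.0 / len(modules)] * len(modules)
        self.lr, self.epsilon = lr, epsilon

    def choose(self, states):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        def score(a):
            return sum(w * m.value(s, a)
                       for w, m, s in zip(self.weights, self.modules, states))
        return max(self.actions, key=score)

    def learn(self, states, action, reward, next_states):
        for i, (m, s, ns) in enumerate(zip(self.modules, states, next_states)):
            m.update(s, action, reward, ns)
            # Credit assignment: favor modules that valued the rewarded action highly.
            self.weights[i] += self.lr * reward * m.value(s, action)
        total = sum(abs(w) for w in self.weights) or 1.0
        self.weights = [w / total for w in self.weights]
```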

An Efficiency Analysis of the Local Cultural Resources Utilization of Local Governments (지방자치단체의 지역문화자원 활용 효율성 분석)

  • Gang, Bobae
    • 지역과문화
    • /
    • v.6 no.2
    • /
    • pp.77-104
    • /
    • 2019
  • This study examines the efficiency with which local governments use local cultural resources. It does so by DEA (Data Envelopment Analysis) using 2017 data for 17 local governments in Korea. In addition, this study tries to estimate the environmental efficiency of local cultural resources. For this, the 'Total Efficiency', including the output variables related to the local cultural resource environment, was analyzed and then compared with the 'Utilization Efficiency' to estimate the 'Environmental Efficiency' of local cultural resources. The following results are statistically significant. Firstly, five of the 17 local governments were evaluated as utilizing their local cultural resources efficiently. Secondly, the inefficiency of the other local governments was influenced more by scale than by PTE (Pure Technical Efficiency). Thirdly, it was confirmed that environmental aspects such as cultural properties and cultural infrastructure have a considerable impact on the increase or decrease of efficiency in local governments. Differences in the efficiency of local governments are also influenced by population density. To improve efficiency in the future, input levels should be adjusted to the projected local population, which is the major consumer of local cultural resources. In addition, the local festivals and village festivals held by local governments should be reviewed so that their quality improves as inefficiencies are eliminated. Environmental factors should also be considered when analyzing the efficiency of local cultural resources in local governments.
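
As a minimal sketch of the DEA approach (the paper's actual input and output variables are not reproduced here), the input-oriented CCR efficiency of one decision-making unit can be computed with a small linear program; the data matrix below is made up.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR (constant returns to scale) efficiency of one DMU.

    inputs:  (n_dmus, n_inputs) array; outputs: (n_dmus, n_outputs) array.
    Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    """
    n, m = inputs.shape
    _, s = outputs.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                         # minimize theta
    # Input constraints:  sum_j lambda_j x_ij - theta * x_i0 <= 0
    A_in = np.hstack([-inputs[unit].reshape(m, 1), inputs.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j y_rj <= -y_r0
    A_out = np.hstack([np.zeros((s, 1)), -outputs.T])
    b_out = -outputs[unit]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]

# Made-up example: 5 local governments, 2 inputs (budget, staff), 1 output (program count).
X = np.array([[10.0, 5.0], [8.0, 7.0], [12.0, 4.0], [9.0, 9.0], [11.0, 6.0]])
Y = np.array([[50.0], [45.0], [55.0], [40.0], [60.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(len(X))]
```

A score of 1.0 marks an efficient unit; comparing scores from models with and without environment-related outputs gives the kind of 'Environmental Efficiency' contrast the study describes.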

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to enhance the speed of the models that estimate weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) for the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Nonetheless, the server is not dedicated even to a single county, so very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. To reduce this overhead, several caching and parallelization techniques were applied to measure performance and check applicability. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for Input and Output (I/O) is significantly greater than that for computation, suggesting the need for a technique that reduces disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers; however, each server must have its own cache of the input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computational loads such as PRISM.
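
As a minimal illustration of the first two findings (not code from the paper), the sketch below caches grid files read from disk so repeated reads avoid the I/O bottleneck, and distributes per-day Growing Degree Days accumulation over worker processes; file names and the GDD base temperature are placeholders.

```python
from functools import lru_cache
from multiprocessing import Pool
import numpy as np

@lru_cache(maxsize=32)
def load_grid(path):
    """Cache daily temperature grids so repeated requests avoid disk I/O."""
    return np.load(path)          # placeholder: one 2-D temperature array per day

def gdd_for_day(path, base_temp=10.0):
    tmean = load_grid(path)
    return np.clip(tmean - base_temp, 0.0, None)    # per-cell degree days for one day

def accumulate_gdd(paths, workers=4):
    """Distribute per-day GDD computation over worker processes, then sum over days."""
    with Pool(workers) as pool:     # each worker keeps its own cache, as in finding (2)
        daily = pool.map(gdd_for_day, paths)
    return np.sum(daily, axis=0)

# Usage (hypothetical file list):
# gdd = accumulate_gdd([f"tmean_2017_{d:03d}.npy" for d in range(1, 121)])
```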

The Development of On-Line Statistics Program for Radiation Oncology (방사선종양학과 On-line 통계처리프로그램의 개발)

  • Kim Yoon-Jong;Lee Dong-Hoon;Ji Young-Hoon;Lee Dong-Han;Jo Chul-Ku;Kim Mi-Sook;Ru Sung-Rul;Hong Seung-Hong
    • Radiation Oncology Journal
    • /
    • v.19 no.4
    • /
    • pp.369-380
    • /
    • 2001
  • Purpose : To develop an on-line statistics program that records radiation oncology information and shares it over the internet, thereby supplying basic reference data for administrative plans to improve radiation oncology. Materials and methods : Radiation oncology statistics had previously been collected on paper forms from about 52 hospitals. Now the data can be entered through internet web browsers. The statistics program used the Windows NT 4.0 operating system, Internet Information Server 4.0 (IIS 4.0) as the web server, and a Microsoft Access MDB database. We used Structured Query Language (SQL), Visual Basic, VBScript and JavaScript to display the statistics by year and by hospital. Results : This program shows the present conditions of manpower, research, therapy machines, techniques, brachytherapy, clinical statistics, radiation safety management, institutions, quality assurance and radioisotopes in radiation oncology departments. The database consists of 38 input and 6 output windows. Statistical output windows can be added continuously according to users' needs. Conclusion : We have developed a statistics program to process all of the data in departments of radiation oncology for reference. Users can easily enter the data through internet web browsers and share the information.
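
The abstract mentions SQL-based displays by year and hospital; a minimal sketch of that kind of aggregation is shown below, with SQLite standing in for the Access MDB and with invented table and column names.

```python
import sqlite3

# SQLite stands in for the Access MDB; the table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE treatments (
                    hospital TEXT, year INTEGER, machine TEXT, patients INTEGER)""")
conn.executemany("INSERT INTO treatments VALUES (?, ?, ?, ?)", [
    ("Hospital A", 2000, "linac", 120),
    ("Hospital A", 2001, "linac", 135),
    ("Hospital B", 2001, "Co-60", 80),
])

# Statistics by year and hospital, the kind of figures an output window would display.
rows = conn.execute("""SELECT year, hospital, SUM(patients)
                       FROM treatments
                       GROUP BY year, hospital
                       ORDER BY year, hospital""").fetchall()
for year, hospital, total in rows:
    print(year, hospital, total)
```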


A Study on the Serialized Event Sharing System for Multiple Telecomputing User Environments (원격.다원 사용자 환경에서의 순차적 이벤트 공유기에 관한 연구)

  • 유영진;오용선
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2003.05a
    • /
    • pp.344-350
    • /
    • 2003
  • In this paper, we propose a novel sharing method that orders the events occurring among users collaborating in a common telecomputing environment. We realize the sharing method with multimedia data to improve the effectiveness of coworking over a teleprocessing network. This sharing method improves the efficiency of communication-oriented projects such as remote education, teleconferencing, and co-authoring of multimedia contents by offering convenient presentation, group authoring, common management, and transient event production for the users. In conventional shared whiteboard systems, all multimedia content segments must be authored with a dedicated program, and existing contents or programs cannot be reused. Moreover, ordering errors occur in teleprocessing because there is no mechanism for sequencing the input ordering of commands. Therefore, we develop a method of retrieving input and output events from the Windows system, together with a message hooking technique that transmits them between programs in the operating system. In addition, we realize a distribution technique that delivers the processing results to all sharing users in the distributed computing environment without error. Our sharing technology should contribute to improving face-to-face coworking efficiency for multimedia contents authoring, common blackboard systems in remote education, and presentation display in visual conferences.
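
As a rough illustration of the serialized event sharing idea (not the authors' implementation), suppose each shared event carries a global sequence number assigned by a coordinator; every receiver then buffers out-of-order events and applies them strictly in sequence:

```python
import heapq

class SerializedEventReceiver:
    """Apply shared events in global sequence order, buffering any that arrive early."""
    def __init__(self, apply_event):
        self.apply_event = apply_event   # callback that replays the event locally
        self.next_seq = 0                # next sequence number expected
        self.pending = []                # min-heap of (seq, event) that arrived out of order

    def receive(self, seq, event):
        heapq.heappush(self.pending, (seq, event))
        # Deliver as long as the smallest buffered sequence number is the expected one.
        while self.pending and self.pending[0][0] == self.next_seq:
            _, ready = heapq.heappop(self.pending)
            self.apply_event(ready)
            self.next_seq += 1

# Usage: events 2 and 0 arrive before 1, but they are applied as 0, 1, 2.
receiver = SerializedEventReceiver(lambda e: print("apply", e))
receiver.receive(2, "draw line")
receiver.receive(0, "open page")
receiver.receive(1, "move cursor")
```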
