• Title/Summary/Keyword: 병렬모델 (parallel model)

Search results: 760

Standardization Technology for a Hybrid Energy Storage System with Emergency Power Function (비상전원 기능을 갖는 하이브리드 에너지저장시스템 표준화 기술)

  • Hong, Kyungjin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.187-192 / 2019
  • A hybrid energy storage system with an emergency power function, used for demand management and backup during outages, reduces the investment cost of buildings and factories that require emergency generation facilities. We propose a new business model by developing technology that secures economic efficiency by lowering electricity costs during normal operation. Normally, grid power is supplied to the load through an STS (Static Transfer Switch), and the PCS is connected to the grid in parallel to perform demand management. To operate the system efficiently based on demand forecasting, the EMS issues charge/discharge commands to the ESS through the PMS (Power Management System), and the PMS relays those commands to the PCS controller. During a power outage, the STS rapidly disconnects from the grid, and the PCS becomes an independent power source that supplies constant-voltage/constant-frequency power to the load. Reliability can therefore be secured by verifying both the grid-connected operation and the stand-alone operation of the hybrid ESS. By allowing this low-carbon green-growth technology to operate in conjunction with the grid, irregular power quality can be improved and peak load can be reduced by generating renewable energy linked to the ESS. In addition, since the ESS can replace the frequency-regulation reserve currently provided by coal-fired generation, the operating cost of high-fuel-cost LNG generators is expected to decrease.
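
The abstract above describes a mode-switching scheme: while grid-connected, the PMS relays EMS charge/discharge commands to the PCS; when the STS opens during an outage, the PCS switches to constant-voltage/constant-frequency output. A minimal Python sketch of that control flow is given below; the class and method names (`PcsController`, `on_pms_command`, etc.) are illustrative assumptions, not interfaces from the paper.

```python
from enum import Enum, auto

class Mode(Enum):
    GRID_CONNECTED = auto()   # STS closed: grid feeds load, PCS follows EMS/PMS commands
    ISLANDED = auto()         # STS open: PCS supplies constant-voltage/constant-frequency power

class PcsController:
    """Hypothetical controller mirroring the normal/outage behaviour described above."""

    def __init__(self):
        self.mode = Mode.GRID_CONNECTED
        self.power_setpoint_kw = 0.0   # +discharge / -charge while grid-connected

    def on_pms_command(self, setpoint_kw: float):
        # EMS demand forecast -> PMS charge/discharge command -> PCS setpoint
        if self.mode is Mode.GRID_CONNECTED:
            self.power_setpoint_kw = setpoint_kw

    def on_grid_lost(self):
        # STS has disconnected the load from the grid: switch to stand-alone CVCF supply
        self.mode = Mode.ISLANDED
        self.voltage_v, self.frequency_hz = 220.0, 60.0  # assumed nominal values

    def on_grid_restored(self):
        # Resynchronize and return to parallel, demand-management operation
        self.mode = Mode.GRID_CONNECTED
        self.power_setpoint_kw = 0.0
```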

Comparison of Korean Real-time Text-to-Speech Technology Based on Deep Learning (딥러닝 기반 한국어 실시간 TTS 기술 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.640-645 / 2021
  • A deep learning based end-to-end TTS system consists of a Text2Mel module that generates a spectrogram from text and a vocoder module that synthesizes the speech signal from the spectrogram. By applying deep learning to TTS, the intelligibility and naturalness of synthesized speech have recently improved to approach human speech. However, the inference speed for synthesizing speech is very slow compared with conventional methods. Inference can be accelerated by applying non-autoregressive methods, which generate speech samples in parallel, independently of previously generated samples. In this paper we introduce FastSpeech, FastSpeech 2, and FastPitch as Text2Mel technologies, and Parallel WaveGAN, Multi-band MelGAN, and WaveGlow as non-autoregressive vocoder technologies, and implement them to verify whether they can run in real time. The measured real-time factors (RTF) show that all of the presented methods are capable of real-time processing. The trained models are only about tens to hundreds of megabytes in size, except for WaveGlow, so they can also be deployed in embedded environments where memory is limited.
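
Real-time capability in the abstract above is judged by the real-time factor (RTF): synthesis time divided by the duration of the generated audio, where a value below 1.0 means faster than real time. A minimal sketch of that measurement, assuming a hypothetical `synthesize(text)` function that returns audio samples and a sample rate:

```python
import time

def measure_rtf(synthesize, text: str) -> float:
    """RTF = wall-clock synthesis time / duration of the generated audio."""
    start = time.perf_counter()
    samples, sample_rate = synthesize(text)   # hypothetical TTS call (Text2Mel + vocoder)
    elapsed = time.perf_counter() - start
    audio_seconds = len(samples) / sample_rate
    return elapsed / audio_seconds

# rtf = measure_rtf(my_tts, "안녕하세요")
# print("real-time capable" if rtf < 1.0 else "slower than real time", rtf)
```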

A Thoracic Spine Segmentation Technique for Automatic Extraction of VHS and Cobb Angle from X-ray Images (X-ray 영상에서 VHS와 콥 각도 자동 추출을 위한 흉추 분할 기법)

  • Ye-Eun, Lee;Seung-Hwa, Han;Dong-Gyu, Lee;Ho-Joon, Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.1 / pp.51-58 / 2023
  • In this paper, we propose an organ segmentation technique for the automatic extraction of medical diagnostic indicators from X-ray images. To calculate diagnostic indicators of heart disease and spinal disease such as VHS (vertebral heart scale) and the Cobb angle, the thoracic spine, carina, and heart must be accurately segmented in a chest X-ray image. We adopt a deep neural network in which a high-resolution representation of the image is maintained at each layer and connected in parallel with branches converted into lower-resolution feature maps. This structure allows relative position information in the image to be reflected effectively in the segmentation process. We show that learning performance can be improved by combining an OCR module, in which pixel information and object information interact over multiple steps, with a channel attention module that assigns a different weight to each channel of the network. In addition, a data augmentation method is presented to provide robust performance against changes in the position, shape, and size of the subject in the X-ray image. The effectiveness of the proposed method was evaluated through experiments using 145 human chest X-ray images and 118 animal X-ray images.
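
The channel attention module mentioned above re-weights each channel of a feature map before it is passed on. The abstract does not give its exact form; the following is a minimal squeeze-and-excitation-style sketch in PyTorch, intended only to illustrate the idea of per-channel weighting:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative channel attention: global pooling -> bottleneck MLP -> per-channel weights."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3)))   # squeeze: (B, C) channel descriptors
        return x * weights.view(b, c, 1, 1)     # excite: re-scale each channel
```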

A Study on the Proposal of an Integration Model for Library Collaboration Instruction (도서관협력수업의 통합모형 제안에 관한 연구)

  • Byeong-Kee Lee
    • Journal of the Korean Society for Library and Information Science / v.57 no.4 / pp.25-47 / 2023
  • Library collaboration instruction (LCI) is a process in which a classroom teacher and a librarian collaborate to co-plan, co-implement, and co-assess instruction. LCI has been studied and modeled along various dimensions such as the level of collaboration, information activities, and time scheduling. However, there is no integrated model that comprehensively covers teacher-librarian collaboration. The purpose of this study is to propose a schematic integration model for LCI by comparing and analyzing existing models along five dimensions (level of collaboration, information activities, collaborative approach, time scheduling, and technology integration). The main features of the proposed LCI integration model are as follows. First, in terms of the level of collaboration, the model reflects forms such as library-based teacher-led instruction and cross-curricular integrated curriculum. Second, in terms of information activities, it reflects inquiry activities in social studies and science subjects in addition to the information use process. Third, in terms of collaborative approach, it distinguishes forms such as lead-observation instruction and parallel station instruction. Fourth, in terms of time scheduling, it takes into account the Korean national curriculum and scheduling methods. Fifth, in terms of technology integration, it reflects the PICRAT model, modified from the perspective of library collaboration instruction.

Thermal Effects on the Development, Fecundity and Life Table Parameters of Aphis craccivora Koch (Hemiptera: Aphididae) on Yardlong Bean (Vigna unguiculata subsp. sesquipedalis (L.)) (갓끈동부콩에서 아카시아진딧물[Aphis craccivora Koch (Hemiptera: Aphididae)]의 온도발육, 성충 수명과 산란 및 생명표분석)

  • Cho, Jum Rae;Kim, Jeong-Hwan;Choi, Byeong-Ryeol;Seo, Bo-Yoon;Kim, Kwang-Ho;Ji, Chang Woo;Park, Chang-Gyu;Ahn, Jeong Joon
    • Korean Journal of Applied Entomology / v.57 no.4 / pp.261-269 / 2018
  • The cowpea aphid Aphis craccivora Koch (Hemiptera: Aphididae) is a polyphagous species with a worldwide distribution. We investigated the effects of temperature on the development periods of nymphs and on the longevity and fecundity of apterous females of A. craccivora. The study was conducted at six constant temperatures: 10.0, 15.0, 20.0, 25.0, 30.0, and 32.5°C. A. craccivora developed successfully from nymph to adult at all temperatures tested, and its developmental rate increased with temperature. The lower developmental threshold (LT) and thermal constant (K) of the nymphal stage were estimated by linear regression as 5.3°C and 128.4 degree-days (DD), respectively. The lower and higher threshold temperatures (TL, TH, and TH-TL, respectively) were calculated with the Sharpe-Schoolfield-Ikemoto (SSI) model as 17.0°C, 34.6°C, and 17.5°C. Developmental completion of the nymphal stages was described using a three-parameter Weibull function, and life table parameters were estimated. The intrinsic rate of increase was highest at 25°C, while the net reproductive rate was highest at 20°C. Biological characteristics of A. craccivora populations from different geographic areas are discussed.
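
The lower developmental threshold and thermal constant reported above come from the standard linear degree-day model: developmental rate r(T) = a + bT, with LT = -a/b and K = 1/b. A minimal NumPy sketch of that estimation, using made-up example rates rather than the paper's measurements:

```python
import numpy as np

# Hypothetical mean developmental rates (1/days) at constant rearing temperatures (°C)
temps = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
rates = np.array([0.035, 0.075, 0.115, 0.155, 0.190])   # example values only

b, a = np.polyfit(temps, rates, 1)   # fit r(T) = a + b*T (slope first, then intercept)
lower_threshold = -a / b             # LT: temperature at which r(T) = 0
thermal_constant = 1.0 / b           # K: degree-days required to complete development

print(f"LT = {lower_threshold:.1f} °C, K = {thermal_constant:.1f} DD")
```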

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to expand nodes when the stored data must be distributed across nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it supports flexible node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module for each analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions, while the aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB insert performance evaluations for various chunk sizes.
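
MongoDB's schema-free, document-oriented model is what lets a system like the one above store heterogeneous bank log records side by side and shard them as volume grows. A minimal PyMongo sketch, assuming a locally reachable MongoDB instance; the database, collection, and field names are illustrative, not taken from the paper:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
logs = client["bank_logs"]["raw"]                   # hypothetical database/collection names

# Documents with different shapes can share one collection: no fixed schema is required.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "transfer", "branch": "A01", "amount": 150000},
    {"ts": datetime.now(timezone.utc), "type": "login", "channel": "mobile",
     "meta": {"os": "android", "retries": 1}},
])

# Aggregate counts per log type for a simple analysis view (cf. the log graph generator module).
for row in logs.aggregate([{"$group": {"_id": "$type", "count": {"$sum": 1}}}]):
    print(row)
```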

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE: Computing Practices and Letters / v.14 no.1 / pp.1-12 / 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used in the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output is categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). Feature vectors are built from histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a successful color recognition rate of 94.67% by using a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors, and by utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time for generating a feature vector is 0.509 ms for a 150×113 resolution image. After the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, which is 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to input images of general objects.
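
The feature vectors above are hue-saturation (and hue-intensity) histograms with extra weight on saturation. A minimal OpenCV/NumPy sketch of building a hue-saturation histogram feature from an image; the bin counts and weight factor are illustrative choices, not the paper's parameters:

```python
import cv2
import numpy as np

def hs_feature(bgr_image: np.ndarray, h_bins: int = 30, s_bins: int = 16,
               saturation_weight: float = 2.0) -> np.ndarray:
    """Hue-saturation histogram, normalized, with extra weight on saturation (assumed form)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [h_bins, s_bins], [0, 180, 0, 256])
    # Emphasize high-saturation bins so chromatic colors dominate achromatic ones.
    weights = np.linspace(1.0, saturation_weight, s_bins)[np.newaxis, :]
    hist = hist * weights
    return (hist / hist.sum()).ravel()

# feature = hs_feature(cv2.imread("vehicle.jpg"))  # compared against per-color reference vectors
```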

Ischemic Time Associated with Activation of Rejection-Related Immune Responses (허혈 시간과 거부반응 관련 면역반응)

  • Nam, Hyun-Suk;Choi, Jin-Yeung;Kim, Yoon-Tai;Kang, Kyung-Sun;Kwon, Hyuk-Moo;Hong, Chong-Hae;Kim, Doo;Han, Tae-Wook;Moon, Tae-Young;Kim, Jee-Hee;Cho, Byung-Ryul;Woo, Heung-Myong
    • Journal of Veterinary Clinics / v.26 no.2 / pp.138-143 / 2009
  • Ischemia/reperfusion injury (I/RI) is a major cause of acute renal failure and of the delayed graft function (DGF) that is unavoidable in renal transplantation. Numerous studies have reported that ischemic damage plays a role in activating graft rejection factors such as T cells and macrophages. The present study was performed to determine whether ischemia time plays an important role in activating rejection-related factors in rat models of I/RI. Male Sprague-Dawley rats were subjected to 30, 45, or 60 minutes of warm renal ischemia with nephrectomy, while control animals underwent a sham operation (unilateral nephrectomy). Renal function and survival rates were evaluated on days 0, 1, 2, 3, 5, and 7. Immunofluorescence staining of dendritic cells (DCs), natural killer (NK) cells, macrophages, B cells, and CD4+ and CD8+ T cells was performed on days 1 and 7 after renal I/RI. Survival rates dropped below 50% after day 3 in the 45-minute ischemia group. Histologic analysis of ischemic kidneys revealed a significant loss of tubular architecture and infiltration of inflammatory cells. DCs, NK cells, macrophages, and CD4+ and CD8+ T cells infiltrated from one day after I/RI, depending on ischemia time. Antigen-presenting cells (DCs, NK cells, and macrophages) and even T cells had infiltrated by 24 hours post-I/RI, the time of acute tubular necrosis. During the regeneration phase, not only did these cells increase, but B cells also appeared when ischemia exceeded 45 minutes. The numbers of innate and adaptive immune cells increased depending on ischemia as well as reperfusion time. These changes in infiltrating cells in each I/RI model show that ischemic time plays a role in activating rejection-related immune factors and has consequences for the progression of renal disease in transplanted and native kidneys.

Development of Industrial Embedded System Platform (산업용 임베디드 시스템 플랫폼 개발)

  • Kim, Dae-Nam;Kim, Kyo-Sun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.5 / pp.50-60 / 2010
  • For the last half century, the personal computer and software industries have prospered thanks to the incessant evolution of computer systems. In the 21st century, the embedded system market has grown greatly as the market shifted toward mobile devices. While many multimedia gadgets such as mobile phones, navigation systems, and PMPs are pouring into the market, most industrial control systems still rely on 8-bit micro-controllers and simple application software techniques. The technological barrier, which requires additional investment and more highly qualified manpower to overcome, and the business risks arising from the uncertainty of market growth and the competitiveness of the resulting products have prevented companies in the industry from taking advantage of such advanced technologies. However, high-performance, low-power, and low-cost hardware and software platforms will enable their high-technology products to be developed and recognized by potential clients in the future. This paper presents such a platform for industrial embedded systems. The platform is based on the Telechips TCC8300 multimedia processor, which embeds a variety of parallel hardware for implementing multimedia functions, and uses the open-source Embedded Linux, TinyX, and GTK+ to implement the GUI while minimizing technology costs. To estimate the expected performance and power consumption, the performance improvement and the power consumption attributable to each enabled hardware sub-system, including the YUV2RGB frame converter, are measured. An analytic model was devised to check the feasibility of a new application and to trade off its performance and power consumption; the validity of the model was confirmed by implementing a real target system. The cost can be further mitigated by using hardware parts that are already used in mass-produced products, mostly in the cell-phone market.
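
The paper's analytic model is described only at a high level. Purely as an illustration of the kind of feasibility check it implies, the sketch below adds up the measured power of the base system plus each enabled hardware sub-system and compares the achievable frame time against an application deadline; all numbers and names are hypothetical, not measurements from the paper.

```python
# Hypothetical per-subsystem measurements (power in mW, speed-up factors over software-only)
BASE_POWER_MW = 350.0
SUBSYSTEMS = {
    "yuv2rgb_converter": {"power_mw": 40.0, "speedup": 1.8},
    "video_codec":       {"power_mw": 90.0, "speedup": 3.5},
}

def estimate(enabled, sw_frame_time_ms: float, deadline_ms: float, budget_mw: float):
    """Additive power model + multiplicative speed-up; returns (feasible, frame_time, power)."""
    power = BASE_POWER_MW + sum(SUBSYSTEMS[name]["power_mw"] for name in enabled)
    speedup = 1.0
    for name in enabled:
        speedup *= SUBSYSTEMS[name]["speedup"]
    frame_time = sw_frame_time_ms / speedup
    return frame_time <= deadline_ms and power <= budget_mw, frame_time, power

print(estimate(["yuv2rgb_converter", "video_codec"],
               sw_frame_time_ms=120.0, deadline_ms=33.3, budget_mw=600.0))
```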

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and the low computational power available. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a lot of effort. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning scenarios: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on the new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain the multiple ConvNet layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved.
To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare the multiple ConvNet layer representation against single ConvNet layer representations, using PCA for feature selection and dimension reduction. The experiments demonstrate the importance of feature selection for the multiple ConvNet layer representation. Our proposed approach achieves 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on Caltech-256, 73.1% compared to the 69.2% achieved by the FC8 layer on VOC07, and 52.2% compared to the 48.7% achieved by the FC7 layer on SUN397. We also show that the proposed approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
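
A minimal sketch of the pipeline described above: extract activations from the three fully connected layers of a pre-trained AlexNet via forward hooks, concatenate them into a 9,192-dimensional vector, and reduce it with PCA before training a linear classifier. It uses torchvision and scikit-learn; the hook indices follow torchvision's AlexNet layout, and the PCA dimensionality is an illustrative choice, not the paper's setting.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
captured = {}

def hook(name):
    return lambda module, inp, out: captured.__setitem__(name, out.detach())

# FC6, FC7, FC8 are the Linear layers at indices 1, 4, 6 of torchvision AlexNet's classifier
for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    model.classifier[idx].register_forward_hook(hook(name))

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def multi_layer_feature(pil_image):
    """Concatenate FC6 + FC7 + FC8 activations -> 4096 + 4096 + 1000 = 9192 dimensions."""
    with torch.no_grad():
        model(preprocess(pil_image).unsqueeze(0))
    return torch.cat([captured["fc6"], captured["fc7"], captured["fc8"]],
                     dim=1).squeeze(0).numpy()

# X = [multi_layer_feature(img) for img in train_images]; y = train_labels
# pca = PCA(n_components=512).fit(X)            # keep salient components (illustrative size)
# clf = LinearSVC().fit(pca.transform(X), y)    # linear classifier on the reduced features
```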