• Title/Summary/Keyword: software algorithms

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed from various angles, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes the technological factors of perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability; the organizational factors of management support and knowledge and expertise; and the environmental factors of availability of technology skills and services and platform long-term viability. We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors concerning the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example, by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework is proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, enterprise use of the deep learning framework, and enterprise-wide proliferation of the framework. The first three steps are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are cleared, the next two steps (enterprise use of the framework and enterprise-wide proliferation of the framework) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Power Conscious Disk Scheduling for Multimedia Data Retrieval (저전력 환경에서 멀티미디어 자료 재생을 위한 디스크 스케줄링 기법)

  • Choi, Jung-Wan;Won, Yoo-Jip;Jung, Won-Min
    • Journal of KIISE: Computer Systems and Theory / v.33 no.4 / pp.242-255 / 2006
  • In recent years, the popularization of mobile devices such as smartphones, PDAs, and MP3 players has rapidly increased the need for power management technology, which is an essential factor for mobile devices. Meanwhile, the hard disk offers large capacity and high speed at a low price, and it can now be made small enough for mobile devices; however, it consumes too much power to be embedded in them without care. Motivated by this, in this paper we suggest and evaluate methods for minimizing power consumption while playing back multimedia data stored on disk in real time. The strict limits on the power consumption of mobile devices have a big impact on the design of both hardware and software. One difference between real-time multimedia streaming data and legacy text-based data is the requirement for continuity of data supply. This is why the disk drive must remain in the active state for the entire playback duration, which, from a power management point of view, may be a great burden. The legacy power management function of a mobile disk drive affects the quality of multimedia playback negatively because excessive I/O requests arrive while the disk is in the standby state. Therefore, in this paper, we analyze the power consumption profile of the disk drive in detail and develop an algorithm that can play multimedia data effectively using less power. The algorithm calculates the number of data blocks to be read and the durations of the active and standby states; from these, it performs optimal scheduling that ensures continuous playback of the data blocks stored on the mobile disk drive. We implemented our algorithm in publicly available MPEG player software. The resulting player saves up to 60% of power consumption compared with a disk drive that stays in the active state for the whole playback, and 38% compared with a disk drive controlled by the native power management method.
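
The core of such a scheme can be sketched as a burst-and-standby cycle: read a large burst into a playback buffer, then spin the disk down while the buffer drains. The sketch below is ours, not the paper's actual model; the function name and parameter values are illustrative and assume a single stream with constant playback and transfer rates.

```python
# Hypothetical burst-and-standby scheduler: fill a playback buffer in one
# burst, then keep the disk in standby while the buffer drains. All names
# and values are illustrative, not taken from the paper.

def burst_schedule(buffer_bytes, playback_bps, disk_bps, spinup_s):
    """Return (active_s, standby_s) for one read/idle cycle."""
    # Time to fill the buffer while simultaneously feeding playback.
    active_s = buffer_bytes / (disk_bps - playback_bps)
    # Time a full buffer sustains playback; reserve spin-up time so the
    # disk is ready again before the buffer underruns.
    standby_s = buffer_bytes / playback_bps - spinup_s
    return active_s, max(standby_s, 0.0)

if __name__ == "__main__":
    # 8 MiB buffer, 1.5 Mbit/s MPEG stream, 20 MB/s disk, 1.5 s spin-up.
    active, standby = burst_schedule(8 * 2**20, 1.5e6 / 8, 20e6, 1.5)
    print(f"active {active:.2f}s, standby {standby:.2f}s per cycle")
```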

An Implementation of OTB Extension to Produce TOA and TOC Reflectance of LANDSAT-8 OLI Images and Its Product Verification Using RadCalNet RVUS Data (Landsat-8 OLI 영상정보의 대기 및 지표반사도 산출을 위한 OTB Extension 구현과 RadCalNet RVUS 자료를 이용한 성과검증)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.449-461 / 2021
  • Analysis Ready Data (ARD) for optical satellite images is a pre-processed product generated by applying the spectral characteristics and viewing parameters of each sensor. Atmospheric correction is one of the fundamental and complicated topics; it helps to produce Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance from multi-spectral image sets. Most remote sensing software provides algorithms or processing schemes dedicated to these corrections for the Landsat-8 OLI sensor. Furthermore, Google Earth Engine (GEE) provides direct access to Landsat reflectance products, USGS-based ARD (USGS-ARD), in the cloud environment. We implemented an atmospheric correction extension for the Orfeo ToolBox (OTB), an open-source remote sensing software package for manipulating and analyzing high-resolution satellite images. This is the first such tool, as OTB has not provided calibration modules for any Landsat sensor. Using this extension, we conducted absolute atmospheric correction on Landsat-8 OLI images of Railroad Valley, United States (RVUS) and validated the resulting reflectance products against the RVUS reflectance data sets in the RadCalNet portal. The results showed that the reflectance products produced with the OTB extension differ from the RadCalNet RVUS data by less than 5%. In addition, we performed a comparative analysis with reflectance products obtained from other open-source tools, namely the QGIS semi-automatic classification plugin and SAGA, as well as with the USGS-ARD products. Compared with the other two open-source tools, the reflectance products of the OTB extension showed high consistency with those of USGS-ARD within the acceptable level for the measurement data range of the RadCalNet RVUS. In this study, the atmospheric calibration processor of the OTB extension was verified, demonstrating its potential applicability to other satellite sensors such as the Compact Advanced Satellite (CAS)-500 or new optical satellites.
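
For reference, TOA reflectance for Landsat-8 OLI is computed from the rescaling coefficients published in each scene's MTL metadata (the standard USGS formula). The snippet below shows that textbook calculation, not the internals of the OTB extension; the DN values and sun elevation are made up.

```python
import numpy as np

def oli_toa_reflectance(dn, mult, add, sun_elev_deg):
    """TOA reflectance from OLI digital numbers: rho' = M * Qcal + A
    (REFLECTANCE_MULT_BAND_x / REFLECTANCE_ADD_BAND_x from the MTL file),
    then divided by sin(sun elevation) to correct for the solar angle."""
    rho = mult * dn.astype(np.float64) + add
    return rho / np.sin(np.radians(sun_elev_deg))

# Nominal OLI reflectance rescaling constants; illustrative DN values.
dn = np.array([[9000, 12000], [15000, 20000]], dtype=np.uint16)
print(oli_toa_reflectance(dn, 2.0e-5, -0.1, 45.0))
```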

A Dynamic Load Balancing Scheme based on Host Load Information in a Wireless Internet Proxy Server Cluster (무선 인터넷 프록시 서버 클러스터에서 호스트 부하 정보에 기반한 동적 부하 분산 방안)

  • Kwak Hu-Keun;Chung Kyu-Sik
    • Journal of KIISE: Information Networking / v.33 no.3 / pp.231-246 / 2006
  • A server load balancer accepts client requests and distributes them to one of the servers in a wireless internet proxy server cluster. LVS (Linux Virtual Server), a software-based server load balancer, supports several load balancing algorithms in which client requests are distributed to servers in a round-robin way, in a hashing-based way, or by assigning each request to the server with the fewest concurrent connections to LVS. An improved load balancing algorithm that considers server performance was previously proposed, in which upper and lower limits on the number of concurrent connections allowed within each server's maximum performance are determined in advance and applied as static limits during load balancing. However, such schemes do not apply run-time server load information dynamically. In this paper, we propose a dynamic load balancing scheme in which the load balancer keeps each server's CPU load information at run time and assigns each new client request to the server with the lowest load. Using a cluster consisting of 16 PCs, we performed experiments with static content (images and HTML). Compared with the existing schemes, the experimental results show performance improvements for client requests requiring CPU-intensive processing and for a cluster consisting of servers with different performance levels.
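
The proposed policy reduces to a small amount of balancer state: each backend reports its CPU load at run time, and the balancer routes each new request to the least-loaded server. A minimal sketch with illustrative class and method names follows (the paper's LVS-based implementation lives inside the kernel, unlike this user-space toy):

```python
import threading

class DynamicBalancer:
    """Track each server's run-time CPU load and pick the least-loaded
    server for each new request. Names are illustrative, not the paper's."""

    def __init__(self, servers):
        self._lock = threading.Lock()
        self._load = {s: 0.0 for s in servers}  # CPU load in [0, 1]

    def report_load(self, server, cpu_load):
        # Called periodically by a monitoring agent on each backend.
        with self._lock:
            self._load[server] = cpu_load

    def pick_server(self):
        # Route the new client request to the lowest-CPU-load server.
        with self._lock:
            return min(self._load, key=self._load.get)

lb = DynamicBalancer(["web1", "web2", "web3"])
lb.report_load("web1", 0.82)
lb.report_load("web2", 0.35)
lb.report_load("web3", 0.55)
print(lb.pick_server())  # -> web2
```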

The 64-Bit Scrambler Design of the OFDM Modulation for Vehicles Communications Technology (차량 통신 기술을 위한 OFDM 모듈레이션의 64-비트 스크램블러 설계)

  • Lee, Dae-Sik
    • Journal of Internet Computing and Services / v.14 no.1 / pp.15-22 / 2013
  • WAVE (Wireless Access in Vehicular Environments) is a new vehicular communications technology for ITS (Intelligent Transportation Systems) services, standardized as IEEE 802.11p; it increases the efficiency and safety of traffic on the road. However, the efficiency of the bit-level scrambler algorithm of OFDM modulation in WAVE systems suffers because it cannot be processed in parallel in either hardware or software. This paper proposes an algorithm to construct a 64-bit matrix table for the scrambler computation, as well as an algorithm to combine the 64-bit matrix table with the input data in parallel. The proposed algorithm is executed using the 64-bit matrix table. As a result, the processing speed for 1 and 1,000 runs improves by about 40.08%~40.27% and the processing rate per second by more than 468.35 compared with the bit-operation scrambler, and the processing speed for 1 and 1,000 runs improves by about 7.53%~7.84% and the processing rate per second by more than 91.44 compared with the 32-bit operation scrambler. Therefore, if a 64-bit CPU is used to run the 64-bit scrambler algorithm, performance improves by more than 40% compared with the 32-bit scrambler.
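
The abstract does not spell out the 64-bit matrix table, so the sketch below only illustrates the general table-driven idea under our own assumptions: the IEEE 802.11 scrambler is a 7-bit LFSR (generator x^7 + x^4 + 1), and precomputing 64 output bits plus the successor state for each of the 128 LFSR states lets a 64-bit word be scrambled with one lookup and one XOR.

```python
# Table-driven parallel scrambler sketch (ours, not the paper's exact
# 64-bit matrix method): precompute the next 64 keystream bits and the
# follow-on state for every 7-bit LFSR state.

def _advance(state, nbits):
    """Run the 802.11 LFSR, returning (output_bits_as_int, new_state)."""
    out = 0
    for i in range(nbits):
        bit = ((state >> 3) ^ (state >> 6)) & 1   # taps x^4 and x^7
        state = ((state << 1) | bit) & 0x7F
        out |= bit << i                            # pack LSB-first
    return out, state

# 128-entry lookup table: state -> (64 keystream bits, next state).
TABLE = [_advance(s, 64) for s in range(128)]

def scramble64(words, seed=0x7F):
    """XOR each 64-bit word with the precomputed keystream; the all-ones
    seed is a common test value, not mandated by the standard."""
    state, out = seed, []
    for w in words:
        keystream, state = TABLE[state]
        out.append(w ^ keystream)
    return out

print([hex(w) for w in scramble64([0x0123456789ABCDEF, 0x0])])
```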

A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image (일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법)

  • Kim, Seong-Hoon;Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering / v.5 no.5 / pp.251-260 / 2016
  • Face recognition is a technology that extracts features from a facial image, learns those features through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image. In particular, improving the face recognition rate requires various processing methods. In the training stage of face recognition, features must be extracted from a facial image. As the existing method of extracting facial features, linear discriminant analysis (LDA) is mainly used. The LDA method expresses a facial image as points in a high-dimensional space and extracts facial features that distinguish a person by analyzing the class information and the distribution of the points. Since the position of a point is determined by the pixel values of the facial image, if unnecessary or frequently changing areas are included in the facial image, LDA may extract incorrect facial features. In particular, when a camera image is used for face recognition, the size of the face varies with the distance between the face and the camera, deteriorating the face recognition rate. To solve this problem, this paper detects the facial area in a camera image, removes unnecessary areas using the facial feature area calculated via a Gabor filter, and normalizes the size of the facial area. Facial features are extracted from the normalized facial image through LDA and learned through an artificial neural network for face recognition. As a result, the face recognition rate improved by approximately 13% compared with the existing face recognition method that includes unnecessary areas.
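
A minimal sketch of the described pipeline, assuming OpenCV for face detection and Gabor filtering and scikit-learn's LDA in place of a custom implementation; the Gabor parameters and the 64x64 normalization size are illustrative, not the paper's values.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def normalized_face(gray, size=(64, 64)):
    """Detect the face area, crop it, and normalize its size.
    Assumes at least one face is found in the grayscale image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    x, y, w, h = cascade.detectMultiScale(gray, 1.1, 5)[0]
    return cv2.resize(gray[y:y + h, x:x + w], size)

def gabor_response(face):
    """Emphasize texture-rich feature areas with a small Gabor bank."""
    acc = np.zeros(face.shape, dtype=np.float64)
    for theta in np.arange(0, np.pi, np.pi / 4):
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        acc = np.maximum(acc, cv2.filter2D(face.astype(np.float64), -1, kern))
    return acc

# Usage outline (faces: grayscale images, labels: person ids):
#   X = [gabor_response(normalized_face(f)).ravel() for f in faces]
#   lda = LinearDiscriminantAnalysis().fit(X, labels)
#   features = lda.transform(X)  # inputs to the neural-network classifier
```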

Data Level Parallelism for H.264/AVC Decoder on a Multi-Core Processor and Performance Analysis (멀티코어 프로세서에서의 H.264/AVC 디코더를 위한 데이터 레벨 병렬화 성능 예측 및 분석)

  • Cho, Han-Wook;Jo, Song-Hyun;Song, Yong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.8 / pp.102-116 / 2009
  • There have been many studies on enhancing H.264/AVC decoder performance on multi-core processors through parallelization. Parallelization methods can be classified into task-level and data-level methods. A task-level parallelization of an H.264/AVC decoder is implemented by dividing the decoder algorithms into pipeline stages; however, it is not suitable for complex and large bitstreams because of poor load balancing. Considering load balancing and performance scalability, we propose a horizontal data-level parallelization of the H.264/AVC decoder in which threads are assigned to macroblock lines. We also develop a mathematical performance expectation model for the proposed parallelization method. To evaluate the model, we measured performance with the JM 13.2 reference software on an ARM11 MPCore evaluation board. Cycle-accurate measurement in the SoCDesigner co-verification environment showed that the expected performance and the performance scalability of the proposed method were accurate to a relatively high degree.
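
A toy sketch of the horizontal partitioning (our names, not the JM code): the macroblock lines of a frame are handed to a thread pool. A real decoder must additionally synchronize the intra-prediction and deblocking dependencies between neighboring lines, which this sketch omits, and the paper's performance expectation model is not reproduced here.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_mb_line(frame, line):
    # Placeholder for real macroblock-line decoding work.
    return f"frame {frame}: decoded macroblock line {line}"

def decode_frame(frame, mb_lines, n_threads):
    """Distribute macroblock lines of one frame over worker threads."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(lambda l: decode_mb_line(frame, l),
                             range(mb_lines)))

print(decode_frame(0, mb_lines=4, n_threads=2))
```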

Development of Algorithms for the Construction of Hydrogeologic Thematic Maps using AvenueTM Language in ArcView GIS (ArcView GIS의 AvenueTM Language를 활용한 수문지질도 작성 알고리즘 개발 및 적용 사례 연구)

  • Kim, Gyoo-Bum;Son, Young-Chul;Kim, Jong-Wook;Lee, Jang-Yong
    • Journal of the Korean Association of Geographic Information Studies / v.8 no.3 / pp.107-120 / 2005
  • In Korea, MOCT and KOWACO published a standard for lineament map drawings, "The Handbook for the Drawing and Management of Hydrogeologic Maps," in 2003. According to this guideline, hydrogeologic and related thematic maps should include characteristics of groundwater quality and quantity. These maps are generally drawn with ArcView GIS 3.x software. Annotating wells on a groundwater level map and drawing Stiff diagrams on a groundwater quality map require a great deal of effort, because hundreds or thousands of well, water level, and hydrogeochemical data records are produced through many kinds of investigations. In addition, a lineament density map is very important for surveying and exploring groundwater in deep aquifers. In this study, we developed modules for well annotation, Stiff diagram drawing, and lineament density calculation with Avenue™ scripts, and found them very useful and easy to use for drawing groundwater thematic maps.
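
The original modules were Avenue scripts inside ArcView; as a language-neutral illustration of the lineament density calculation (total lineament length per unit cell area), here is a hedged Python re-implementation using Shapely.

```python
# Illustrative lineament-density computation over a regular grid; the
# grid layout and API are ours, not the paper's Avenue script.
from shapely.geometry import LineString, box

def lineament_density(lines, xmin, ymin, xmax, ymax, cell):
    """Return {(col, row): length_per_area} over a regular grid."""
    density = {}
    ncols = int((xmax - xmin) / cell)
    nrows = int((ymax - ymin) / cell)
    for col in range(ncols):
        for row in range(nrows):
            cell_geom = box(xmin + col * cell, ymin + row * cell,
                            xmin + (col + 1) * cell, ymin + (row + 1) * cell)
            # Total lineament length falling inside this cell.
            total = sum(l.intersection(cell_geom).length for l in lines)
            density[(col, row)] = total / cell_geom.area
    return density

lines = [LineString([(0, 0), (10, 10)]), LineString([(0, 10), (10, 0)])]
print(lineament_density(lines, 0, 0, 10, 10, cell=5.0))
```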

Cache Memory and Replacement Algorithm Implementation and Performance Comparison

  • Park, Na Eun;Kim, Jongwan;Jeong, Tae Seog
    • Journal of the Korea Society of Computer and Information / v.25 no.3 / pp.11-17 / 2020
  • In this paper, we present practical results on cache replacement policies by measuring the cache hit rate and search time of each replacement algorithm through cache simulation. The structure of the cache memory and four replacement policies, FIFO, LFU, LRU, and Random, were implemented in software to analyze the characteristics of each technique. The experiments showed that the LRU algorithm achieved a hit rate and search time of 36.044% and 577.936 ns under a uniform distribution and 45.636% and 504.692 ns under a skewed distribution, while the FIFO algorithm performed similarly, at 36.078% and 554.772 ns under the uniform distribution and 45.662% and 489.574 ns under the skewed distribution. LFU followed, and the Random algorithm showed the lowest performance, measured at 30.042% and 622.866 ns under the uniform distribution and 36.36% and 553.878 ns under the skewed distribution. The LRU replacement method commonly used in cache memory is complex to implement, but it is the most efficient of the conventional replacement algorithms, indicating that it is a reasonable method that takes the reference history of data into account.
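
The two best-performing policies are easy to reproduce in a small simulator. The sketch below is illustrative (our code, not the paper's simulator), and the resulting hit rates depend on the generated reference stream, so they will not match the figures above.

```python
from collections import OrderedDict, deque
import random

def simulate_lru(refs, capacity):
    cache, hits = OrderedDict(), 0
    for key in refs:
        if key in cache:
            hits += 1
            cache.move_to_end(key)         # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[key] = True
    return hits / len(refs)

def simulate_fifo(refs, capacity):
    cache, order, hits = set(), deque(), 0
    for key in refs:
        if key in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(order.popleft())  # evict oldest insertion
            cache.add(key)
            order.append(key)
    return hits / len(refs)

refs = [random.randint(0, 99) for _ in range(10_000)]  # uniform stream
print(f"LRU  hit rate: {simulate_lru(refs, 32):.3f}")
print(f"FIFO hit rate: {simulate_fifo(refs, 32):.3f}")
```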

Pre/Post processor for structural analysis simulation integration with open source solver (Calculix, Code_Aster) (오픈소스 솔버(Calculix, Code_Aster)를 통합한 구조해석 시뮬레이션 전·후처리기 개발)

  • Seo, Dong-Woo;Kim, Jae-Sung;Kim, Myung-Il
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.9 / pp.425-435 / 2017
  • Structural analysis is a necessary procedure, not only for large enterprises but also for small and medium-sized ones, for strengthening the certification process for product delivery and for shortening the time from concept design to detailed design. Open-source solvers, which can be used at low cost, differ from commercial solvers: if there is a problem with the input data, such as the grid, errors or failures can occur in the calculation step. In this paper, we propose a pre- and post-processor that can be easily applied to the analysis of mechanical structural problems using existing open-source structural analysis solvers (Calculix, Code_Aster). In particular, we propose algorithms for analyzing different types of data with open-source solvers in order to extract and generate accurate information such as 3D models, grids, and simulation conditions, and we develop and apply this information analysis. In addition, to improve the accuracy of the open-source solvers and to prevent errors, we generate grids that match each solver's characteristics and develop an automatic healing function for the grid model. Finally, to verify the accuracy of the system, the verification and utilization results are compared with those of the commercial software in use.
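
The abstract does not detail the healing algorithm, so the sketch below only illustrates one plausible ingredient under our own assumptions: merging nodes that coincide within a tolerance and dropping elements that collapse as a result. Real pre-processors for Calculix/Code_Aster do far more (re-meshing, quality checks).

```python
# Hypothetical grid-healing step: merge duplicate nodes, drop degenerate
# elements. Function and data layout are illustrative, not the paper's.
def heal_mesh(nodes, elements, tol=1e-6):
    """nodes: list of (x, y, z); elements: list of node-index tuples."""
    seen, remap, merged = {}, {}, []
    for i, p in enumerate(nodes):
        key = tuple(round(c / tol) for c in p)  # snap to a tol-sized lattice
        if key in seen:
            remap[i] = seen[key]                # duplicate: reuse first node
        else:
            seen[key] = remap[i] = len(merged)
            merged.append(p)
    healed = []
    for elem in elements:
        mapped = tuple(remap[i] for i in elem)
        if len(set(mapped)) == len(mapped):     # drop collapsed elements
            healed.append(mapped)
    return merged, healed

nodes = [(0, 0, 0), (1, 0, 0), (1.0000000001, 0, 0), (0, 1, 0)]
tris = [(0, 1, 3), (1, 2, 3)]   # node 2 duplicates node 1
print(heal_mesh(nodes, tris))   # -> 3 nodes, 1 surviving triangle
```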