• Title/Summary/Keyword: one-to-one computing

Search Results: 2,213

An Efficient Dual Queue Strategy for Improving Storage System Response Times (저장시스템의 응답 시간 개선을 위한 효율적인 이중 큐 전략)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence / v.10 no.3 / pp.19-24 / 2024
  • Recent advances in large-scale data processing technologies such as big data, cloud computing, and artificial intelligence have increased the demand for high-performance storage devices in data centers and enterprise environments. In particular, the fast data response speed of storage devices is a key factor that determines overall system performance. Solid state drives (SSDs) based on the Non-Volatile Memory Express (NVMe) interface are gaining traction, but new bottlenecks are emerging when large data input and output requests from multiple hosts are handled simultaneously. SSDs typically process host requests by stacking them sequentially in an internal queue. When requests with long transfer lengths are processed first, shorter requests wait longer, increasing the average response time. To solve this problem, data transfer timeout and data partitioning methods have been proposed, but they do not provide a fundamental solution. In this paper, we propose a dual queue based scheduling scheme (DQBS), which manages the data transfer order based on the request order in one queue and the transfer length in the other queue. The request arrival time and transfer length are then considered together to determine an efficient data transfer order. This enables balanced processing of long and short requests, thus reducing the overall average response time. The simulation results show that the proposed method outperforms the existing sequential processing method. This study presents a scheduling technique that maximizes data transfer efficiency in a high-performance SSD environment, which is expected to contribute to the development of next-generation high-performance storage systems.
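The following is a minimal Python sketch of a dual-queue dispatcher in the spirit of DQBS, not the authors' implementation: one view keeps arrival order, the other keeps length order, and the dispatcher serves the shortest pending request unless the oldest one has waited past an aging threshold. The Request fields, the max_wait parameter, and the aging policy are illustrative assumptions.

```python
import heapq
from collections import deque
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    length: int                      # transfer length (sort key in the length view)
    arrival: float = field(compare=False)
    req_id: int = field(compare=False)

class DualQueueScheduler:
    """Keeps two views of pending requests: a FIFO (arrival order) and a
    min-heap keyed by transfer length (shortest first)."""
    def __init__(self, max_wait=5.0):
        self.fifo = deque()          # arrival-order queue
        self.by_length = []          # length-order queue (heap)
        self.done = set()            # requests already dispatched from one view
        self.max_wait = max_wait     # aging threshold (illustrative)

    def submit(self, req):
        self.fifo.append(req)
        heapq.heappush(self.by_length, req)

    def next_request(self, now):
        # drop entries that were already dispatched from the other view
        while self.fifo and self.fifo[0].req_id in self.done:
            self.fifo.popleft()
        while self.by_length and self.by_length[0].req_id in self.done:
            heapq.heappop(self.by_length)
        if not self.fifo:
            return None
        oldest = self.fifo[0]
        # serve the oldest request if it has waited too long (fairness),
        # otherwise serve the shortest request (low average response time)
        if now - oldest.arrival >= self.max_wait:
            chosen = self.fifo.popleft()
        else:
            chosen = heapq.heappop(self.by_length)
        self.done.add(chosen.req_id)
        return chosen

sched = DualQueueScheduler(max_wait=5.0)
sched.submit(Request(length=1024, arrival=0.0, req_id=1))
sched.submit(Request(length=8, arrival=0.1, req_id=2))
print(sched.next_request(now=0.2).req_id)   # 2: the short request is served first
```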

The Role of Home Economics Education in the Fourth Industrial Revolution (4차 산업혁명시대 가정과교육의 역할)

  • Lee, Eun-hee
    • Journal of Korean Home Economics Education Association / v.31 no.4 / pp.149-161 / 2019
  • At present, we stand at the turning point of the Fourth Industrial Revolution, driven by the development of artificial intelligence (AI) and technological innovation so rapid that no one could have predicted it. This study started from the question, 'What role should home economics education play in the era of the Fourth Industrial Revolution?' The Fourth Industrial Revolution is characterized by AI, cloud computing, the Internet of Things (IoT), big data, and Online to Offline (O2O). It will drastically change the social system, science and technology, and the structure of professions. Since robots and artificial intelligence may bring about dehumanization, education for the Fourth Industrial Revolution should aim to foster future human resources with humanity and citizenship for the future community. In addition, the implication for education of the Fourth Industrial Revolution, which will bring about a shift to a super-intelligent and hyper-connected society, is that the role of education in helping humans internalize their values as human beings should be emphasized. Character education should be established as a generalized and internalized consciousness, with its concept built into the integration of the curriculum, and concrete practical strategies should be prepared. In conclusion, home economics education in the era of the Fourth Industrial Revolution should play a leading role in character education and in the intrinsic improvement of various aspects of human life. The Fourth Industrial Revolution will change not only what we do, that is, human mental and physical activities, but also who we are, that is, human identity. In the information and digital society, what matters is how quickly and accurately scattered knowledge can be acquired, and it is necessary to learn how to use that knowledge for the benefit of human beings amid rapid change. As such, the Fourth Industrial Revolution will influence the systems that shape our lives, and education should lead families, organizations, and communities in a positive direction. Home economics education should take the lead in this role.

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo; Yun, Unil
    • Journal of Internet Computing and Services / v.14 no.6 / pp.1-8 / 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the most significant data mining techniques. Pattern mining is a method of discovering useful patterns in huge databases. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single support model implicitly assumes that all items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, so a pattern mining technique that reflects those characteristics is required. In the frequent pattern mining framework, where the natures of items are not considered, the single minimum support threshold must be set to a very low value to mine patterns containing rare items, which in turn produces too many patterns containing meaningless items. In contrast, if too high a threshold is used, no patterns containing rare items can be mined. This dilemma is called the rare item problem. To solve it, early studies proposed approximate approaches that split data into several groups according to item frequencies or group related rare items. However, because they are approximate, these methods cannot find all frequent patterns, including rare frequent patterns. Hence, a pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has a corresponding minimum support threshold, called MIS (Minimum Item Support), calculated from item frequencies in the database. By applying the MIS, the multiple minimum supports model finds all rare frequent patterns without generating meaningless patterns or losing significant patterns. Meanwhile, candidate patterns are extracted during the mining process, and in the single minimum support model only the single threshold is compared with the frequencies of the candidate patterns, so the characteristics of the items that constitute a candidate pattern are not reflected and the rare item problem occurs. To address this issue, the multiple minimum supports model uses the minimum MIS value among the items of a candidate pattern as the support threshold for that pattern, thereby taking its characteristics into account. To efficiently mine frequent patterns, including rare frequent patterns, with this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in descending order of MIS, whereas algorithms of the single minimum support model order items in descending order of frequency. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one but demands more memory for the MIS information. Moreover, both algorithms show good scalability in the results.
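As an illustration of the multiple-minimum-supports idea described above, the short Python sketch below assigns each item an MIS and accepts a candidate pattern when its support meets the smallest MIS among its items. The toy transactions and the MIS formula (max(beta * sup, LS), one common formulation) are assumptions for illustration, not the paper's data or settings.

```python
from itertools import combinations
from collections import Counter

# toy transaction database (invented)
transactions = [
    {"a", "b", "c"}, {"a", "b"}, {"a", "c", "d"},
    {"b", "c"}, {"a", "b", "c", "d"}, {"a"},
]
n = len(transactions)

def item_mis(freq, n_trans, beta=0.5, ls=0.1):
    """MIS(i) = max(beta * sup(i), LS), one common way to derive per-item MIS."""
    return max(beta * freq / n_trans, ls)

item_freq = Counter(i for t in transactions for i in t)
mis = {i: item_mis(f, n) for i, f in item_freq.items()}

def support(pattern):
    return sum(1 for t in transactions if pattern <= t) / n

# a candidate pattern's threshold is the minimum MIS among its items
for size in (2, 3):
    for pattern in combinations(sorted(item_freq), size):
        pset = frozenset(pattern)
        threshold = min(mis[i] for i in pattern)
        if support(pset) >= threshold:
            print(set(pattern), round(support(pset), 2), ">=", round(threshold, 2))
```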

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems / v.15 no.4 / pp.1-22 / 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component based software development to promote application interaction and integration both within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web service repositories not only be well-structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is concomitantly growing. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed. Unfortunately, most existing solutions are either too rudimentary to be useful or too domain dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specification in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features: (1) It minimizes the requirement of prior knowledge from both service consumers and publishers; (2) It avoids exploiting domain dependent ontologies; and (3) It is able to visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report on some preliminary results demonstrating the efficacy of the proposed approach.
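A minimal sketch of the general organizing idea, under stated assumptions: each service is represented by the terms in its identifiers (names split on camel case), vectorized with TF-IDF, and clustered. The paper uses an unsupervised artificial neural network; k-means is used here only as a simple stand-in, and the service descriptions are invented.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# invented service names and operation names, standing in for WSDL content
services = {
    "WeatherForecastService": ["GetForecastByZip", "GetCurrentTemperature"],
    "CityWeatherLookup":      ["LookupWeatherByCity", "GetHumidity"],
    "StockQuoteService":      ["GetStockQuote", "GetHistoricalPrices"],
    "EquityPriceFeed":        ["GetEquityPrice", "StreamPriceUpdates"],
}

def terms(name):
    # split identifiers such as GetForecastByZip into "get forecast by zip"
    return " ".join(re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", name)).lower()

docs = [" ".join(terms(t) for t in [name, *ops]) for name, ops in services.items()]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for (name, _), label in zip(services.items(), labels):
    print(label, name)        # weather services and finance services group together
```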

A Study on Market Expansion Strategy via Two-Stage Customer Pre-segmentation Based on Customer Innovativeness and Value Orientation (고객혁신성과 가치지향성 기반의 2단계 사전 고객세분화를 통한 시장 확산 전략)

  • Heo, Tae-Young; Yoo, Young-Sang; Kim, Young-Myoung
    • Journal of Korea Technology Innovation Society / v.10 no.1 / pp.73-97 / 2007
  • R&D into future technologies should be conducted in conjunction with technological innovation strategies that are linked to corporate survival within a framework of information and knowledge-based competitiveness. As such, future technology strategies should be pursued through open R&D organizations. The development of future technologies should not be based simply on forecasts, but should take customer needs into account in advance and reflect them in the future technologies or services being developed. This research selects as segmentation variables the customers' attitude towards accepting future telecommunication technologies and their value orientation in everyday life, as these factors will have the greatest effect on the demand for future telecommunication services, and uses them to segment the future telecom service market. In this way, the market can be segmented from the technology R&D stage and the results employed to formulate technology development strategies. Based on customer attitudes towards accepting new technologies, two groups were derived, and a hierarchical customer segmentation model was provided to conduct a secondary segmentation of the two groups on the basis of their respective customer value orientations. A survey was conducted in June 2006 on 800 consumers aged 15 to 69, residing in Seoul and five other major South Korean cities, through one-on-one interviews. The samples were divided into two sub-groups according to their level of acceptance of new technology: a sub-group demonstrating a high level of technology acceptance (39.4%) and another with a comparatively lower level (60.6%). These two sub-groups were each further divided into 5 smaller sub-groups (10 in total) through a second round of segmentation. The ten sub-groups were then analyzed in detail, including their general demographic characteristics, their usage patterns in existing telecom services such as mobile service, broadband internet, and wireless internet, and their ownership of computing or information devices and intention to purchase one. Through these steps, we were able to show statistically that each of the 10 sub-groups responds to telecom services as an independent market. Through correspondence analysis, the target segmentation groups were positioned in such a way as to facilitate the entry of future telecommunication services into the market, as well as their diffusion and transferability.
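The two-stage pre-segmentation structure can be sketched as follows, with invented data: respondents are first split by a technology-acceptance score, and each split is then clustered on value-orientation factors into five segments, mirroring the paper's 2 x 5 design. The cutoff, the features, and the clustering method are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 800                                    # survey size reported in the abstract
tech_acceptance = rng.uniform(1, 7, n)     # hypothetical Likert-scale acceptance score
values = rng.normal(size=(n, 4))           # hypothetical value-orientation factors

segments = np.empty(n, dtype=int)
high = tech_acceptance >= 5                # stage 1: acceptance split (assumed cutoff)
for offset, mask in ((0, high), (5, ~high)):
    # stage 2: five value-orientation clusters within each acceptance group
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(values[mask])
    segments[mask] = labels + offset

print(np.bincount(segments))               # sizes of the 10 final segments
```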

An Installation and Model Assessment of the UM, U.K. Earth System Model, in a Linux Cluster (U.K. 지구시스템모델 UM의 리눅스 클러스터 설치와 성능 평가)

  • Daeok Youn; Hyunggyu Song; Sungsu Park
    • Journal of the Korean earth science society / v.43 no.6 / pp.691-711 / 2022
  • A state-of-the-art Earth system model, serving as a virtual Earth, is required for studies of current and future climate change or climate crises. This complex numerical model can account for almost all human activities and natural phenomena affecting the atmosphere of Earth. The Unified Model (UM) from the United Kingdom Meteorological Office (UK Met Office) is among the best Earth system models available as a scientific tool for studying the atmosphere. However, owing to the high numerical integration cost and substantial output size required to maintain the UM, individual research groups have had to rely solely on supercomputers. The limitations of such computer resources, especially computing environments blocked from outside network connections, reduce the efficiency and effectiveness of research conducted with the model, as well as of improvements to its component codes. Therefore, this study presents detailed guidance for installing a new version of the UM on high-performance parallel computers (Linux clusters) owned by individual researchers, which should help researchers work with the UM more easily. The numerical integration performance of the UM on Linux clusters was also evaluated for two model resolutions, namely N96L85 (1.875° × 1.25° with 85 vertical levels up to 85 km) and N48L70 (3.75° × 2.5° with 70 vertical levels up to 80 km). The one-month integration times using 256 cores for the AMIP and CMIP simulations at N96L85 resolution were 169 and 205 min, respectively. The one-month integration time for an N48L70 AMIP run using 252 cores was 33 min. Simulated 2-m surface temperature and precipitation intensity were compared with ERA5 reanalysis data. The spatial distributions of the simulated results were qualitatively consistent with those of ERA5, despite quantitative differences caused by the different resolutions and atmosphere-ocean coupling. In conclusion, this study confirms that the UM can be successfully installed and used on high-performance Linux clusters.
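For orientation, the small script below derives the approximate horizontal grid sizes implied by the stated resolutions (assuming a regular latitude-longitude grid) and converts the reported one-month timings into core-hours. It is a back-of-the-envelope helper based only on the numbers in the abstract, not part of the installation procedure.

```python
# name, dlon (deg), dlat (deg), levels, cores, minutes per simulated month
runs = [
    ("N96L85 AMIP", 1.875, 1.25, 85, 256, 169),
    ("N96L85 CMIP", 1.875, 1.25, 85, 256, 205),
    ("N48L70 AMIP", 3.75,  2.5,  70, 252, 33),
]

for name, dlon, dlat, levels, cores, minutes in runs:
    nlon = round(360 / dlon)                 # longitudes on a regular grid
    nlat = round(180 / dlat)                 # latitudes on a regular grid
    core_hours = cores * minutes / 60        # cost per simulated month
    print(f"{name}: ~{nlon} x {nlat} x {levels} grid, "
          f"{core_hours:.0f} core-hours per simulated month")
```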

Drought Estimation Model Using a Evaporation Pan with 50 mm Depth (50mm 깊이 증발(蒸發) 팬을 이용한 한발 평가 모델 설정)

  • Oh, Yong Taeg; Oh, Dong Shig; Song, Kwan Cheol; Um, Ki Cheol; Shin, Jae Sung; Im, Jung Nam
    • Korean Journal of Soil Science and Fertilizer / v.29 no.2 / pp.92-106 / 1996
  • An imaginary grass field was assumed to be a suitable representative surface for simplified estimation of local drought, and a moisture-balance bookkeeping model for computing drought was developed with a limited number of determining factors: the crop coefficient of the field, the reservoir capacity of the soil, and the beginning point of drought as defined by soil moisture status. The maximum effective rainfall was assumed to equal the available free space of the soil reservoir capacity. The model is thus similar to an evaporation pan of fixed depth, which stores rainfall up to the free space above the water in it and consumes the water by evaporation; when the pan holds less water than a defined level, drought is indicated. The model simulates the soil moisture deficit on the assumed grass field for drought estimation, and can assess the water requirement, drought intensity, and an index of yield decrement due to drought. The influence intensity indices of the selected factors, determined from the annual water requirements they produced in the model, were 100, 21, and 16 for the crop coefficient, reservoir capacity, and drought beginning point, respectively. The optimum values of the selected factors for the model were 58% for the crop coefficient defined on the energy indicator scale of small copper pan evaporation, 50 mm for the reservoir capacity, based on the average of experimentally determined values for sandy loam, loam, clay loam, and clay soils, and 65% of the reservoir capacity for the beginning point of drought.
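A minimal bookkeeping sketch of the pan-style moisture balance described above, assuming a daily time step: rainfall refills the soil reservoir only up to its free space, consumption is the crop coefficient times pan evaporation, and drought is flagged when storage falls below the drought-beginning point. The weather series is invented; the parameter values follow the abstract.

```python
CAPACITY = 50.0        # soil reservoir capacity (mm), per the abstract
CROP_COEF = 0.58       # crop coefficient on the small copper pan evaporation scale
DROUGHT_POINT = 0.65   # drought begins below 65% of reservoir capacity

def simulate(rain_mm, pan_evap_mm, storage=CAPACITY):
    """Daily moisture bookkeeping; returns (storage, droughty) per day."""
    records = []
    for rain, evap in zip(rain_mm, pan_evap_mm):
        effective_rain = min(rain, CAPACITY - storage)   # only the free space is refilled
        storage += effective_rain
        storage = max(storage - CROP_COEF * evap, 0.0)   # consumption = Kc x pan evaporation
        records.append((round(storage, 1), storage < DROUGHT_POINT * CAPACITY))
    return records

# ten invented days: a dry spell followed by a 20 mm rain
rain = [0, 0, 0, 0, 0, 0, 0, 20, 0, 0]
evap = [6, 7, 6, 8, 7, 6, 7, 5, 6, 7]
for day, (storage, droughty) in enumerate(simulate(rain, evap), start=1):
    print(f"day {day}: storage {storage} mm, drought: {droughty}")
```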

The Design of a Structure of Network Co-processor for SDR(Software Defined Radio) (SDR(Software Defined Radio)에 적합한 네트워크 코프로세서 구조의 설계)

  • Kim, Hyun-Pil; Jeong, Ha-Young; Ham, Dong-Hyeon; Lee, Yong-Surk
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.2A / pp.188-194 / 2007
  • For a ubiquitous world, compatibility across different wireless systems has become a significant characteristic of a communication terminal, making SDR a necessary technology and standard. However, in environments with different communication protocols, it is difficult to build a terminal in hardware alone using an ASIC or SoC. This paper proposes a processor that can accelerate several communication protocols; it can be connected to a main processor and is specialized for the PHY layer of the network. C programs modeling the wireless protocols IEEE 802.11a and IEEE 802.11b, which are based on the widely used OFDM and CDM modulation schemes, were compiled with an ARM cross compiler and then simulated and profiled with the ARM version of SimpleScalar. Profiling showed that most operations were Viterbi operations and complex floating-point operations. Based on this result, we propose a co-processor that accelerates Viterbi and complex floating-point operations and add the corresponding instructions, which were simulated with the ARM version of SimpleScalar. Compared with computing on a single ARM core alone, Viterbi operations ran 4.5 times faster and complex floating-point operations twice as fast; IEEE 802.11a processing was 3 times faster and IEEE 802.11b processing 1.5 times faster.
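The two hot spots identified by the profiling, a complex multiply-accumulate and the Viterbi add-compare-select step, can be written out as plain Python for illustration. These are generic textbook kernels showing what the co-processor is meant to accelerate, not the instruction set proposed in the paper.

```python
def complex_mac(acc, a, b):
    """acc += a * b for complex samples (real, imag), the dominant floating-point kernel."""
    return (acc[0] + a[0] * b[0] - a[1] * b[1],
            acc[1] + a[0] * b[1] + a[1] * b[0])

def viterbi_acs(metric_a, metric_b, branch_a, branch_b):
    """Add-compare-select: extend two candidate paths and keep the better one."""
    cand_a = metric_a + branch_a
    cand_b = metric_b + branch_b
    return (cand_a, 0) if cand_a <= cand_b else (cand_b, 1)   # (metric, survivor bit)

print(complex_mac((0.0, 0.0), (1.0, 2.0), (3.0, -1.0)))   # (5.0, 5.0)
print(viterbi_acs(2.0, 3.5, 1.0, 0.2))                     # (3.0, 0)
```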

Implementation of Motion Detection based on Extracting Reflected Light using 3-Successive Video Frames (3개의 연속된 프레임을 이용한 반사된 빛 영역추출 기반의 동작검출 알고리즘 구현)

  • Kim, Chang Min; Lee, Kyu Woong
    • KIISE Transactions on Computing Practices / v.22 no.3 / pp.133-138 / 2016
  • Motion detection algorithms based on difference images are classified into background subtraction and previous-frame subtraction. (1) Background subtraction is a convenient and effective method for detecting foreground objects against a stationary background. In real-world scenarios, however, especially outdoors, this restriction (a stationary background) often turns out to be impractical, since the background may not be stable. (2) Previous-frame subtraction is a simple technique for detecting motion in an image; the difference between two frames depends on the amount of motion that occurs from one frame to the next. Both of these straightforward methods fail when the object moves very slightly and slowly. To deal with this problem efficiently, in this paper we present a motion detection algorithm that combines the "reflected light area", which is generated during the frame production process, with difference images, processing multiple difference images with bitwise AND arithmetic. This process combines the accuracy of background subtraction with the environmental adaptability of previous-frame subtraction and reduces noise. The performance of the proposed method is demonstrated by assessing each method on gait database samples from CASIA.
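A compact sketch of the basic mechanism the paper builds on, three-frame differencing combined with a bitwise AND: two difference images from three successive frames are thresholded and AND-ed so that only pixels changing in both pairs survive. The threshold and the video file name are placeholders, and the reflected-light extraction itself is not reproduced here.

```python
import cv2

def motion_mask(prev, curr, nxt, thresh=25):
    d1 = cv2.absdiff(curr, prev)
    d2 = cv2.absdiff(nxt, curr)
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(b1, b2)          # motion present in both difference images

cap = cv2.VideoCapture("sample.avi")        # placeholder video file
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if len(frames) == 3:
        mask = motion_mask(*frames)
        print("moving pixels:", cv2.countNonZero(mask))
        frames.pop(0)                       # slide the 3-frame window
cap.release()
```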

A Study on Stochastic Wave Propagation Model to Generate Various Uninterrupted Traffic Flows (다양한 연속 교통류 구현을 위한 확률파장전파모형의 개발)

  • Chang, Hyun-Ho; Baek, Seung-Kirl; Park, Jae-Beom
    • Journal of Korean Society of Transportation / v.22 no.4 s.75 / pp.147-158 / 2004
  • A class of SWP (Stochastic Wave Propagation) models microscopically mimics individual vehicles' stochastic behavior and traffic jam propagation with simplified car-following models based on CA (Cellular Automata) theory, and macroscopically captures dynamic traffic flow relationships based on statistical physics. The SWP model, a program-oriented model using both a discrete time-space and an integer data structure, can simulate a huge road network with short computing times. However, because spontaneous jams are generated through unrealistic collision avoidance, the model falls short in capturing the low speeds within a jam microscopically, and the density and backward propagation speed of traffic congestion macroscopically. In this paper, two additional rules are integrated into the NaSch model. One is SMR (Stopping Maneuver Rule), which mimics vehicles' stopping process more realistically at the tail of traffic jams; the other is LAR (Low Acceleration Rule), which accounts for the low-speed characteristics within traffic jams. The CA car-following model with these two rules prevents the lockup condition under heavy traffic density, capturing both the stopping maneuver behavior at the tail of a traffic jam and the low-acceleration behavior within the jam microscopically, and generates a wider variety of macroscopic traffic flow behavior than the NaSch model, better explaining the propagation speed and density of traffic jams.
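For reference, the sketch below shows one standard NaSch update cycle (acceleration, gap-limited slowing, random slowdown, movement), the baseline that the proposed SMR and LAR rules extend; the two added rules themselves are only described in the paper and are not reproduced here.

```python
import random

def nasch_step(positions, speeds, road_len, v_max=5, p_slow=0.3):
    """One parallel update of all vehicles on a circular single-lane road."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos, new_spd = positions[:], speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max)                    # rule 1: acceleration
        v = min(v, gap)                                  # rule 2: slow down to the gap ahead
        if v > 0 and random.random() < p_slow:
            v -= 1                                       # rule 3: random slowdown
        new_spd[i] = v
        new_pos[i] = (positions[i] + v) % road_len       # rule 4: movement
    return new_pos, new_spd

# tiny demo: 10 vehicles on a 50-cell ring, starting at rest
random.seed(1)
pos = sorted(random.sample(range(50), 10))
spd = [0] * 10
for _ in range(5):
    pos, spd = nasch_step(pos, spd, 50)
print(pos, spd)
```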