• Title/Summary/Keyword: High scalability

430 search results

An efficient interconnection network topology in dual-link CC-NUMA systems (이중 연결 구조 CC-NUMA 시스템의 효율적인 상호 연결망 구성 기법)

  • Suh, Hyo-Joong
    • The KIPS Transactions:PartA / v.11A no.1 / pp.49-56 / 2004
  • The performance of multiprocessor systems is limited by several factors: processor speed, memory delay, and the bandwidth and latency of the interconnection network. With the evolution of semiconductor technology, off-the-shelf microprocessor speeds have passed the GHz mark, and processors can be scaled up into multiprocessor systems by connecting them through interconnection networks. In this situation, system performance is bound by the latency and bandwidth of the interconnection network. SCI, Myrinet, and Gigabit Ethernet are widely adopted as high-speed interconnection links for high-performance cluster systems. Performance improvement of the interconnection network can be achieved by extending bandwidth and minimizing latency. Raising the operation clock speed is a simple way to improve both, but physical link distance makes a high-frequency clock difficult to attain, so system performance and scalability suffer from interconnection network limitations. Duplicating the links of the interconnection network is one way to resolve this bottleneck in scalable systems; the dual-ring SCI link structure is one such improvement. In this paper, I propose a network topology and a transaction path algorithm that optimize latency and efficiency under duplicated links. Simulation results show that the proposed structure achieves 1.05 to 1.11 times better latency and 1.42 to 2.1 times faster execution compared to dual-ring systems.
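
The abstract's premise, that duplicating links relieves the interconnect bottleneck, can be illustrated with a quick hop-count comparison. The Python sketch below contrasts a unidirectional single ring with a counter-rotating dual ring; the paper's actual topology and path algorithm are not detailed in the abstract, so this models only the dual-ring baseline it is compared against.

```python
# Average hop count: unidirectional ring vs. counter-rotating dual ring.
# Fewer average hops is a rough proxy for lower network latency.

def avg_hops_single_ring(n):
    # A packet to the node k positions away always travels k hops.
    return sum(range(1, n)) / (n - 1)

def avg_hops_dual_ring(n):
    # With two counter-rotating rings, the shorter direction is chosen.
    return sum(min(k, n - k) for k in range(1, n)) / (n - 1)

for n in (8, 16, 32):
    print(f"{n} nodes: single {avg_hops_single_ring(n):.2f} hops, "
          f"dual {avg_hops_dual_ring(n):.2f} hops")
```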

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung; Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud computing research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and changes in the networks of citation relationships among papers and of co-occurrence relationships among keywords, using social network analysis measures. Through the analysis, we identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes of cloud computing research topics using a proposed cloud computing "research trend map," which positions research topics in a two-dimensional space: keyword frequency (X-axis) and the rate of increase in the degree centrality of keywords (Y-axis). Based on these two dimensions, the space of a research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the rate of increase in degree centrality are high is a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is a promising technology area; and the area where both are low is a declining technology area. Built this way, cloud computing research trend maps make it possible to grasp the main research trends at a glance and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. From the keyword analysis, cloud computing and grid computing showed high centrality in 2009, and keywords for main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. From the annual research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area.
The study results indicate that distributed systems and grid computing received much attention as similar computing paradigms in the early stage of cloud computing research, a period focused on understanding and investigating cloud computing as an emergent technology by linking it to established computing concepts. After this early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in their movement from the promising area to the growth area in the research trend maps. Moreover, this study reveals that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
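
The four-quadrant rule behind the research trend map is easy to state in code. The sketch below classifies keywords by frequency (X) and rate of increase in degree centrality (Y); the keyword values and the fixed cutoffs are illustrative assumptions, since the paper does not publish its thresholds.

```python
# Hypothetical sketch of the research trend map's four-quadrant rule.
# Keyword values and thresholds are illustrative, not the paper's data.

FREQ_T, RATE_T = 100, 0.5  # assumed cutoffs between "low" and "high"

def quadrant(freq, rate_of_increase):
    if freq >= FREQ_T and rate_of_increase < RATE_T:
        return "maturation"   # much discussed, centrality growth slowing
    if freq >= FREQ_T and rate_of_increase >= RATE_T:
        return "growth"
    if freq < FREQ_T and rate_of_increase >= RATE_T:
        return "promising"
    return "decline"

keywords = {  # (frequency, rate of increase in degree centrality)
    "security": (60, 0.9),
    "virtualization": (130, 0.8),
    "grid computing": (80, -0.1),
    "cloud computing": (300, 0.2),
}
for kw, (f, r) in keywords.items():
    print(f"{kw}: {quadrant(f, r)}")
```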

Current Status and Prospects of High-Power Fiber Laser Technology (Invited Paper) (고출력 광섬유 레이저 기술의 현황 및 전망)

  • Kwon, Youngchul; Park, Kyoungyoon; Lee, Dongyeul; Chang, Hanbyul; Lee, Seungjong; Vazquez-Zuniga, Luis Alonso; Lee, Yong Soo; Kim, Dong Hwan; Kim, Hyun Tae; Jeong, Yoonchan
    • Korean Journal of Optics and Photonics / v.27 no.1 / pp.1-17 / 2016
  • Over the past two decades, fiber-based lasers have made remarkable progress, now reaching power levels exceeding kilowatts and drawing great attention from academia and industry as a replacement technology for bulk lasers. In this paper we review the significant factors behind the progress of fiber lasers, such as gain-fiber regimes based on ytterbium-doped silica, optical pumping schemes combining laser diodes with double-clad fiber geometries, and tandem schemes for minimizing quantum defects. Furthermore, we discuss various power-limitation issues expected to arise in the ultimate power scaling of fiber lasers, such as efficiency degradation, thermal hazards, and system-instability growth, along with relevant methods to alleviate them. This discussion includes fiber nonlinear effects, fiber damage, and modal-instability issues, which become more significant as the power level is scaled up. In addition, we review beam-combining techniques, which are currently receiving much attention as an alternative route past the power-scaling limitation of high-power fiber lasers; in particular, we focus on the schematics of a spectral beam-combining system and its individual requirements. Finally, we discuss prospects for the future development of fiber laser technologies, so that they can leap forward from where they are now and continue to advance in terms of power scalability.
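
The "tandem schemes for minimizing quantum defects" mentioned above rest on a simple relation: the fraction of pump power unavoidably converted to heat in the gain fiber is 1 - λ_pump/λ_signal. The sketch below evaluates it for typical Yb-doped silica wavelengths; the numbers are textbook values, not figures from this review.

```python
# Quantum-defect heat fraction for an Yb-doped fiber laser.

def quantum_defect(pump_nm, signal_nm):
    # Fraction of each pump photon's energy lost as heat in the gain fiber.
    return 1.0 - pump_nm / signal_nm

print(f"976 nm diode pump  -> 1064 nm signal: {quantum_defect(976, 1064):.1%} heat")
print(f"1018 nm tandem pump -> 1064 nm signal: {quantum_defect(1018, 1064):.1%} heat")
```

Tandem pumping roughly halves the thermal load per watt of output, which is why it matters for kilowatt-class power scaling.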

An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology (텍스트마이닝 기반의 효율적인 장소 브랜드 이미지 강도 측정 방법)

  • Choi, Sukjae; Jeon, Jongshik; Subrata, Biswas; Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.113-129 / 2015
  • Place branding is an important income-generating activity: it gives special meaning to a specific location and produces identity and communal value grounded in an understanding of place branding concepts and methodology. Many fields, such as marketing, architecture, and city construction, exert influence in creating an impressive brand image. A place brand that enjoys strong recognition among both Koreans and foreigners creates significant economic effects. There has been research on building a strategic and detailed place brand image; the representative work was carried out by Anholt, who surveyed two million people from 50 different countries. However, such investigations, based on survey research, demand a great deal of labor and significant expense. As a result, more affordable, objective, and effective research methods are needed. The purpose of this paper is to find a way to measure the intensity of a place brand image objectively and at low cost through text mining. The proposed method extracts the keywords and the factors constructing the place brand image from related web documents, and from these measures the brand image intensity of a specific location. The performance of the proposed methodology was verified through comparison with Anholt's image consistency index ranking of 50 cities around the world. Four methods were tested. First, the RANDOM method ranks the cities included in the experiment at random. The HUMAN method uses a questionnaire: 9 volunteers well acquainted with brand management and with the cities are asked to rank the cities, and their rankings are compared with Anholt's results. The TM method applies the proposed approach with all evaluation criteria. TM-LEARN, an extension of TM, selects significant evaluation items within every criterion and then evaluates the cities with only the selected items. RMSE is used as the metric to compare the evaluation results. The experimental results are as follows. First, compared to evaluation by ordinary people, the proposed method appeared more accurate. Second, compared to the traditional survey method, it requires much less time and cost because the process is automated. Third, the methodology is timely because evaluations can be repeated whenever needed. Fourth, unlike Anholt's method, which evaluates only a pre-specified set of cities, the proposed methodology is applicable to any location. Finally, it has relatively high objectivity because the research is based on open source data. As a result, our text mining approach to city image evaluation shows validity in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability. The proposed method provides managers with clear guidelines for brand management in the public and private sectors. In the public sector, local officials could use it to formulate strategies and enhance the image of their places efficiently.
Rather than conducting heavy questionnaires, local officials could first monitor the current place image quickly, and proceed to a formal place image survey only if the results from the proposed method are out of the ordinary, whether they indicate an opportunity or a threat to the place. Moreover, by combining the proposed method with morphological analysis, extraction of meaningful facets of the place brand from text, sentiment analysis, and more, marketing strategy planners or civil engineering professionals may obtain deeper and more abundant insights for better place brand images. In the future, a prototype system will be implemented to show the feasibility of the idea proposed in this paper.
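
The comparison above hinges on RMSE between each method's city ranking and Anholt's reference ranking. The sketch below shows that evaluation step; the city names and ranks are placeholders, not the paper's data.

```python
# RMSE between a method's city ranking and the Anholt reference ranking.
from math import sqrt

anholt = {"London": 1, "Paris": 2, "Sydney": 3, "Tokyo": 4}  # placeholder ranks

def rmse(predicted, reference):
    # Root-mean-square error over per-city rank differences.
    return sqrt(sum((predicted[c] - reference[c]) ** 2 for c in reference)
                / len(reference))

tm_ranks = {"London": 1, "Paris": 3, "Sydney": 2, "Tokyo": 4}  # e.g. TM output
print(f"TM vs. Anholt RMSE: {rmse(tm_ranks, anholt):.3f}")  # lower is better
```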

H.264/SVC Spatial Scalability Coding based Terrestrial Multi-channel Hybrid HD Broadcasting Service Framework and Performance Analysis on H.264/SVC (H.264/SVC 공간 계위 부호화 기반 지상파 다채널 하이브리드 고화질 방송 서비스 프레임워크 및 H.264/SVC 부호화 성능 평가)

  • Kim, Dae-Eun; Lee, Bum-Shik; Kim, Mun-Churl; Kim, Byung-Sun; Hahm, Sang-Jin; Lee, Keun-Sik
    • Journal of Broadcast Engineering / v.17 no.4 / pp.640-658 / 2012
  • One of the existing terrestrial multi-channel DTV service frameworks, called KoreaView, provides four programs, composed of one MPEG-2 based HD video and three H.264/AVC based SD videos, within a single 6 MHz frequency bandwidth. However, the three additional SD videos cannot provide sufficient quality due to their reduced spatial resolution and low target bitrates. In this paper, we propose a framework, called a terrestrial multi-channel high-quality hybrid DTV service, to overcome this weakness of the KoreaView service. In the proposed framework, the three additional SD videos are encoded as an H.264/SVC Spatial Base layer, which is compliant with H.264/AVC, and are delivered via broadcasting networks. The corresponding three additional HD videos are encoded as an H.264/SVC Spatial Enhancement layer and transmitted over broadband networks such as the Internet, thus providing users with the three additional videos at a better quality of experience. To verify the effectiveness of the proposed framework, various experimental results are provided for real video content used in DTV services. First, the experimental results show that when the SD sequences are encoded in the H.264/SVC Spatial Base layer at a target bitrate of 1.5 Mbps, the resulting PSNR values range from 34.5 dB to 42.9 dB, a sufficient level of service quality. It is also noted that 690 kbps-8,200 kbps are needed for the HD test sequences encoded in the H.264/SVC Spatial Enhancement layer to reach PSNR values similar to those of the same HD sequences encoded in MPEG-2 at a target bitrate of 12 Mbps.
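
The quality figures above are PSNR values, computed from the mean squared error between reference and decoded frames. A generic sketch for 8-bit video follows; the paper's exact measurement pipeline is not given in the abstract.

```python
# PSNR in dB between two 8-bit frames of the same size.
import numpy as np

def psnr(reference, decoded, max_val=255.0):
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)  # stand-in frame
noise = np.random.randint(-2, 3, ref.shape)
dec = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"{psnr(ref, dec):.1f} dB")
```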

A Scalable Multipoint-to-Multipoint Routing Protocol in Ad-Hoc Networks (애드-혹 네트워크에서의 확장성 있는 다중점 대 다중점 라우팅 프로토콜)

  • 강현정; 이미정
    • Journal of KIISE:Information Networking / v.30 no.3 / pp.329-342 / 2003
  • Most of the existing multicast routing protocols for ad-hoc networks do not take into account protocol efficiency when there are a large number of sources in the multicast group, resulting in either large overhead or a poor data delivery ratio as the number of sources grows. In this paper, we propose a multicast routing protocol for ad-hoc networks that particularly considers scalability in terms of the number of sources in the multicast group. The proposed protocol designates a set of sources as core sources. Each core source is the root of a tree that reaches all the destinations of the multicast group, and the union of these trees constitutes the data delivery mesh; each non-core source finds the nearest core source to delegate its data delivery (see the sketch below). For efficient operation, it is important to have an appropriate number of core sources: too many incur excessive control and data packet overhead, whereas too few result in a vulnerable and overloaded data delivery mesh. The data delivery mesh is optimally reconfigured through periodic control message flooding from the core sources, whereas the connectivity of the mesh is maintained by a persistent local mesh recovery mechanism. Simulation results show that the proposed protocol achieves efficient multicast communication with a high data delivery ratio and low communication overhead compared to other existing multicast routing protocols when there are multiple sources in the multicast group.
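
A minimal sketch of the delegation step mentioned above: a non-core source picks the nearest core source in hop count to hand off its data delivery. The graph, node names, and the BFS distance metric are illustrative; the paper's protocol has its own discovery signaling.

```python
from collections import deque

def hop_distance(adj, src, dst):
    # Breadth-first search over the ad-hoc topology graph.
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return float("inf")  # unreachable

def nearest_core(adj, source, core_sources):
    return min(core_sources, key=lambda c: hop_distance(adj, source, c))

adj = {"s1": ["a"], "a": ["s1", "c1", "b"], "b": ["a", "c2"],
       "c1": ["a"], "c2": ["b"]}
print(nearest_core(adj, "s1", ["c1", "c2"]))  # -> c1 (2 hops vs. 3)
```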

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Technologies in artificial intelligence have been developing rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, such research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved more technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires much expert effort. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article. This knowledge is created via the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema, learned from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into the RDF triple structure. The structure of a Wikipedia infobox is defined by an infobox template, which provides standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify an input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples (a minimal sketch of this step follows the abstract).
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, it is possible to utilize structured knowledge extracted from text documents according to the ontology schema. In addition, this methodology can significantly reduce the expert effort needed to construct instances according to the ontology schema.
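
A minimal sketch of the final pipeline step, under stated assumptions: once the tagger (CRF or Bi-LSTM-CRF) has BIO-tagged a sentence and the document's ontology class is known, contiguous tagged spans become RDF triples. The sentence, tags, and property names here are made up for illustration.

```python
def bio_to_spans(tokens, tags):
    # Collect contiguous B-/I- runs into (relation, surface text) spans.
    spans, current, rel = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((rel, " ".join(current)))
            rel, current = tag[2:], [tok]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                spans.append((rel, " ".join(current)))
            current, rel = [], None
    if current:
        spans.append((rel, " ".join(current)))
    return spans

tokens = ["Its", "capital", "is", "Seoul"]
tags = ["O", "O", "O", "B-capital"]   # as if predicted by the tagger
subject = "dbr:South_Korea"           # from the document-classification step
for rel, value in bio_to_spans(tokens, tags):
    print((subject, f"dbo:{rel}", value))
# -> ('dbr:South_Korea', 'dbo:capital', 'Seoul')
```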

A Study on the Evaluation of Nepal's Inclusive Business Solution: Focusing on the Application of OECD DAC Evaluation Criteria (네팔의 포용적 비즈니스 프로그램 평가에 관한 연구: 경제협력개발기구 개발원조위원회 평가기준 적용을 중심으로)

  • Kim, Yeon-Hong; Lee, Sung-Soon
    • The Journal of the Korea Contents Association / v.21 no.4 / pp.177-192 / 2021
  • In 1991, the Development Assistance Committee (DAC) of the Organization for Economic Cooperation and Development (OECD) established the five internationally used criteria for evaluating official development assistance: adequacy, efficiency, effectiveness, impact, and sustainability, and it has since discussed their reorganization. This study derives alternatives by applying the DAC evaluation criteria to the evaluation of the inclusive business program implemented in Nepal since 2019. As a result of the study, the adequacy of Nepal's inclusive business program was consistent with continuous employment and job creation for vulnerable groups such as disabled and orphaned women. The program can be called efficient in that processes such as work orders and work confirmation are handled with an electronic management tool and deliverables are transmitted online, saving time and cost compared to other industries. The project can be called effective in that it provides high-quality jobs, offering specialized computer graphics education for vulnerable groups such as disabled and orphaned women in Nepal and hiring graduates as employees. As for sustainability, KOICA's inclusive business program has enabled vulnerable groups formerly confined to agriculture and manufacturing to engage in the computer graphics industry, with scalability into movie, character, and education businesses and as a role model for other countries. However, considering that the scale of official development assistance will continue to increase, it is necessary to establish a systematic monitoring system and a recirculation system so that projects between donor and recipient countries can continue.

Case Analysis on Platform Business Models for IT Service Planning (IT서비스 기획을 위한 플랫폼 비즈니스 모델 사례 분석연구)

  • Kim, Hyun Ji; Cha, Yun So; Kim, Kyung Hoon
    • Korea Science and Art Forum / v.25 / pp.103-118 / 2016
  • Due to the rapid development of ICT, corporate business models have changed quickly, and because of the radical growth of IT technology, sequential or gradual survival has become difficult. Internet-based new businesses such as IT service companies seek new convergence business models that have not existed before in order to create more competitive business models, while the economic efficiency of business models that were successful in the past is wearing off. As Internet platforms reach the critical point at which platform value becomes extremely high much faster than before, platformization has become a very important condition for rapid business expansion in all kinds of businesses. This study analyzes the necessity of establishing platform business models in IT service planning and identifies their characteristics through case analyses of platform business models. The analysis derived the following characteristics. First, there is a need to secure sufficient buyers and sellers. Second, a platform business model should provide customers with the distinctive value that only platforms can generate. Third, common interests must exist between the platform-driving company and its partners and participants. Fourth, through expansion of the participant base, upgrades, and expansion into adjacent areas, the platform must have continuous scalability and sustainable evolution. While the identified characteristics are expected to have a substantial impact on the establishment of platform business models and the shaping of service planning, we also look forward to this study serving as a starting point for theories of profit models for platform businesses, which were not addressed here, so that planners responsible for platform-based IT service planning can spend less time and draw bigger schemes when building planning drafts.

Case Analysis of the Promotion Methodologies in the Smart Exhibition Environment (스마트 전시 환경에서 프로모션 적용 사례 및 분석)

  • Moon, Hyun Sil; Kim, Nam Hee; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.171-183 / 2012
  • With the development of technologies, the exhibition industry has received much attention from governments and companies as an important marketing channel, and exhibitors have come to consider exhibitions a new channel for their marketing activities. However, the growing size of exhibitions, in net square feet and number of visitors, naturally creates a competitive environment. Therefore, to use effective marketing tools in this environment, exhibitors have planned and implemented many promotion techniques. In particular, a smart environment that provides real-time information about visitors lets them implement various kinds of promotions. However, promotions that ignore visitors' various needs and preferences lose their original purpose and function: indiscriminate promotions feel like spam to visitors and fail to achieve their goals. What is needed is an approach based on the STP strategy, which segments visitors on the right evidence (Segmentation), selects the target visitors (Targeting), and gives them proper services (Positioning). Using the STP strategy in the smart exhibition environment requires considering its characteristics. First, an exhibition is defined as a market event of a specific duration, held at intervals; accordingly, exhibitors plan different events and promotions for each exhibition, so a system adopting traditional STP strategies must provide services from scant information about existing visitors and still guarantee its performance. Second, for automatic segmentation, cluster analysis, a common data mining technique, can be adopted. In the smart exhibition environment, visitor information is acquired in real time, and services using this information must also be provided in real time; however, many clustering algorithms have scalability problems, hardly working on large databases and requiring domain knowledge to determine input parameters, so a suitable, well-fitted methodology must be selected to provide real-time services. Finally, the data available in the smart exhibition environment should be exploited: since there are useful data such as booth visit records and event participation records, the STP strategy for the smart exhibition is based not only on demographic segmentation but also on behavioral segmentation. Therefore, in this study, we analyze a case of a promotion methodology with which exhibitors can provide differentiated services to segmented visitors in the smart exhibition environment. First, considering the characteristics of the smart exhibition environment, we select the evidence for segmentation and fit a clustering methodology for providing real-time services. There are many studies on classifying visitors, but we adopt a segmentation methodology based on visitors' behavioral traits: through direct observation, Veron and Levasseur classified visitors into four groups, likening visitors' traits to animals (butterfly, fish, grasshopper, and ant). Because the variables of their classification, such as the number of visits and the average time per visit, can be estimated in the smart exhibition environment, their scheme provides a theoretical and practical basis for our system. Next, we construct a pilot system that automatically selects suitable visitors according to the objectives of promotions and instantly sends promotion messages to them.
That is, based on our segmentation methodology, the system automatically selects the visitors suited to the characteristics of each promotion. We applied this system to a real exhibition environment and analyzed the resulting data. As we classify visitors into four types by their behavioral patterns in the exhibition, we provide insights for researchers building smart exhibition environments, along with promotion strategies fitting each cluster. First, visitors of the ANT type show a high response rate to promotion messages except experience promotions: they are attracted by tangible benefits in the exhibition area and dislike promotions that require a long time. In contrast, visitors of the GRASSHOPPER type show a high response rate only to experience promotions. Second, visitors of the FISH type favor coupon and content promotions: although they do not examine things in detail, they prefer to obtain further information such as brochures. Exhibitors who want to convey much information in a limited time should pay attention to visitors of this type. Consequently, these promotion strategies are expected to give exhibitors insights when they plan and organize their activities, and to improve their performance.
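
As an illustration of the segmentation step, the sketch below clusters visitors into four behavioral groups from the two observable variables named above, number of booth visits and average time per visit. The synthetic data and the choice of k-means are assumptions; the paper fits its own clustering methodology for real-time use.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: [number of booth visits, average minutes per visit] (synthetic).
visitors = np.vstack([
    rng.normal([30, 2], [4, 0.5], (50, 2)),   # many short visits
    rng.normal([5, 10], [2, 2.0], (50, 2)),   # few long visits
    rng.normal([5, 2], [2, 0.5], (50, 2)),    # few short visits
    rng.normal([30, 10], [4, 2.0], (50, 2)),  # many long visits
])

model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(visitors)
print(model.cluster_centers_.round(1))  # one center per behavioral segment
```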