• Title/Summary/Keyword: multiple-access system

Search Results: 1,216

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems, v.19 no.3, pp.93-111, 2013
  • As the Internet and information technology (IT) continue to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by existing conventional information systems, and it also refers to the new technologies designed to effectively extract value from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate, while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data currently receiving the most attention include information available within companies, such as information on consumer characteristics, purchase records, logistics, and logs indicating consumers' usage of products and services, as well as information accumulated outside companies, such as information on the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the Internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that show how web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other focuses on using web search traffic information to observe consumer behavior, for example by identifying the attributes of a product that consumers regard as important or by tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the best of our knowledge, hardly any brand-related studies have yet attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword for the search, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand.
Web search traffic information shows that the volume of simultaneous searches for a pair of keywords increases as the relation between them grows closer in consumers' minds, so the relations among keywords can be derived by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers, and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, focusing on tablet PCs, which belong to an innovative product group.
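The network-analysis step described above can be sketched briefly. The snippet below is a minimal illustration in Python, assuming a hypothetical table of pairwise simultaneous-search volumes (for example, collected via Google Trends); the brand names, attribute names, and volumes are invented for illustration only.

```python
# Minimal sketch: build a keyword network from hypothetical simultaneous-search
# volumes and derive a rough positioning map. All data here are made up.
import networkx as nx

co_search = {
    ("iPad", "Galaxy Tab"): 120,        # brand-brand simultaneous searches
    ("iPad", "battery life"): 80,       # brand-attribute simultaneous searches
    ("Galaxy Tab", "battery life"): 95,
    ("iPad", "display"): 60,
}

G = nx.Graph()
for (a, b), volume in co_search.items():
    G.add_edge(a, b, weight=volume)     # larger volume = closer in consumers' minds

centrality = nx.degree_centrality(G)                     # relative prominence of each keyword
layout = nx.spring_layout(G, weight="weight", seed=42)   # 2-D coordinates for a positioning map
print(centrality)
```

Treating co-search volume as edge weight lets standard network measures (centrality, community structure, layout) stand in for the relations among brands and between brands and product attributes.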

INFLUENCES OF DRY METHODS OF RETROCAVITY ON THE APICAL SEAL (치근단 역충전와동의 건조방법이 폐쇄성에 미치는 영향)

  • Lee, Jung-Tae;Kim, Sung-Kyo
    • Restorative Dentistry and Endodontics, v.24 no.1, pp.166-179, 1999
  • Apical sealing is essential for the success of surgical endodontic treatment. A root-end cavity is apt to be contaminated with moisture or blood and is not always easy to dry completely. The purpose of this study was to evaluate the influence of dry methods of the retrocavity on the apical seal in endodontic surgery. The apical seal was investigated by evaluating apical leakage and the adaptation of the filling material to the cavity wall. To investigate the influence of various dry methods on apical leakage, 125 palatal roots of extracted human maxillary molar teeth were used. The clinical crown of each tooth was removed at 10 mm from the root apex using a slow-speed diamond saw and water spray. Root canals of all the specimens were prepared with the step-back technique and filled with gutta-percha by the lateral condensation method. After removal of the coronal 2 mm of filling material, the access cavities were closed with Cavit®. Two coats of nail polish were applied to the external surface of each root. The apical three millimeters of each root were resected perpendicular to the long axis of the root with a diamond saw. Class I retrograde cavities were prepared with ultrasonic instruments. Retrocavities were washed with physiologic saline solution and then either dried with various methods or contaminated with human blood. Retrocavities were filled with IRM, Super EBA, or composite resin. All the specimens were immersed in 2% methylene blue solution for 7 days in an incubator at 37°C. The teeth were dissolved in 14 ml of 35% nitric acid solution and the dye present within the root canal system was returned to solution. The leakage of dye was quantitatively measured by a spectrophotometric method. The obtained data were analyzed statistically using one-way ANOVA and Duncan's Multiple Range Test. To evaluate the influence of various dry methods on the adaptation of the filling material to the cavity wall, 12 palatal roots of extracted human maxillary molar teeth were used. After the roots were prepared and filled, and retrograde cavities were made and filled as above, the roots were sectioned longitudinally, and the filling-dentin interfaces of the cut surfaces were examined with a scanning electron microscope. The results were as follows: 1. Among Super EBA-filled cavities, those dried with a paper point or compressed air showed less leakage than those dried with a cotton pellet (p<0.05); there was no difference between paper point- and compressed air-dried cavities. 2. When cavities were dried with compressed air, dentin-bonded composite resin-filled cavities showed less apical leakage than IRM- or Super EBA-filled ones (p<0.05). 3. Regardless of the filling material, cavities contaminated with human blood showed significantly more apical leakage than those dried with compressed air after saline irrigation (p<0.05). 4. When cavities were filled with IRM or Super EBA, the outer half of the cavity showed a larger dentin-filling interface gap than the inner half. 5. In all filling material groups, cavities contaminated with blood or dried with cotton pellets alone showed larger defects at the base of the cavity than those dried with paper points or compressed air.
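As a rough illustration of the statistical analysis mentioned above, the sketch below runs a one-way ANOVA over hypothetical dye-leakage (absorbance) values for three dry-method groups. Duncan's Multiple Range Test is not provided by SciPy, so only the ANOVA stage is shown, and all numbers are invented.

```python
# Minimal sketch: one-way ANOVA across dry-method groups (hypothetical values).
import numpy as np
from scipy.stats import f_oneway

paper_point    = np.array([0.12, 0.15, 0.11, 0.14])   # hypothetical absorbance readings
compressed_air = np.array([0.13, 0.12, 0.14, 0.11])
cotton_pellet  = np.array([0.25, 0.28, 0.22, 0.27])

f_stat, p_value = f_oneway(paper_point, compressed_air, cotton_pellet)
if p_value < 0.05:
    print(f"Dry method affects leakage (F={f_stat:.2f}, p={p_value:.3f})")
```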

Domestic and International Experts' Perception of Policy and Direction on STEAM Education (융합인재교육(STEAM)의 정책과 실행 방향에 대한 국내외 전문가들의 인식)

  • Jung, Jaehwa;Jeon, Jaedon;Lee, Hyonyong
    • Journal of Science Education, v.39 no.3, pp.358-375, 2015
  • The purposes of this study were to investigate the value, necessity, and legitimacy of STEAM education and to propose practical approaches for making STEAM education applicable in Korea, based on a literature review, case studies, and suggestions collected from domestic and international education experts. The research questions were as follows: (1) to investigate the perceptions and understanding of domestic and foreign professionals regarding STEAM education, and (2) to analyze policy implications for improving STEAM. After analyzing questionnaires completed by the professionals and case studies covering their experiences, understanding, the support they received, and the direction of government policy, the following challenges in current STEAM policy were identified. (1) There was a lack of precise conceptual understanding of STEAM grounded in experience. Training sessions that help teachers transform their perceptions are necessary, and easily accessible practical programs need to be developed. It is also important that professionals recognize the aims of related educational activities and that standards for evaluation be established. The experts regarded theme-based learning as the most preferred and effective approach, and programs that develop creative thinking and learning applicable to practice need to be promoted. (2) There was a lack of programs and incentives for supporting outstanding STEAM educators. Creating an appropriate environment for STEAM education should take priority over unilaterally training large numbers of teachers, so securing a sufficient budget is critical. The professionals also emphasized developing specialized teaching materials that interrelate diverse subjects such as science, technology, engineering, the arts, and the humanities and social sciences, incorporating diverse viewpoints and advanced technology. This work requires a STEAM network through which teachers can connect and share their materials, documents, and experiences, and corporations, universities, and research centers need to participate in the network. (3) With respect to direction, policies are needed that make STEAM education routine and more practical within the present education system. The professionals recommended training sessions that help develop creative thinking and convergent problem-solving techniques, called for reducing teachers' workloads and changing teachers' perspectives on STEAM, and further urged close cooperation among the government departments related to STEAM.

A digital Audio Watermarking Algorithm using 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems, v.17 no.2, pp.97-107, 2011
  • Nowadays there are many copyright infringement issues on the Internet because digital content on the network can be copied and delivered easily, and the copied version has the same quality as the original. Copyright owners and content providers therefore want a powerful solution to protect their content. The most popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, DRM-free services were launched after Steve Jobs, then CEO of Apple, proposed a new music service paradigm without DRM, and DRM has since disappeared from the online music market. Even though online music services decided not to adopt DRM, copyright owners and content providers are still searching for a way to protect their content. One solution that can replace DRM is digital audio watermarking, which can embed copyright information into the music itself. In this paper, the author proposes a new audio watermarking algorithm based on two ideas. First, the watermark information is generated as a two-dimensional barcode, which carries an error correction code, so the information can recover itself as long as the errors fall within the error tolerance. Second, the algorithm uses the chip (spreading) sequences of CDMA (code division multiple access). Together, these make the algorithm robust to several malicious attacks. Among the many 2D barcodes, the QR code, one of the matrix barcodes, can express information more flexibly than other matrix barcodes. A QR code has nested square patterns at three corners that indicate the boundary of the symbol. This feature makes the QR code well suited to expressing the watermark information: because the QR code is a two-dimensional, nonlinear matrix code, it can be modulated into a spread spectrum signal and used in the watermarking algorithm. The proposed algorithm assigns a different spread spectrum sequence to each user. When the assigned code sequences are orthogonal, the watermark information of an individual user can be identified from the audio content. The algorithm uses Walsh codes as the orthogonal codes. The watermark information is rearranged from the 2D barcode into a 1D sequence, modulated by the Walsh code, and embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a subwoofer boost. The MP3 compression was performed with Cool Edit Pro 2.0 at CBR (constant bit rate) 128 kbps, 44,100 Hz, stereo. The echo attack applied an echo with initial volume 70%, decay 75%, and delay 100 ms. The subwoofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. Under the MP3 attack, the strength of the watermark information was not affected and the watermark could be detected in all of the sample audios. Under the subwoofer boost attack, the watermark was detected at an embedding strength of 0.3, and under the echo attack the watermark could be identified at strengths of 0.5 or greater.
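The spread-and-embed idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the code length, embedding strength, coefficient offset, and the assumption that the 2D barcode has already been flattened into a bit sequence are all illustrative choices.

```python
# Minimal sketch: CDMA-style spread-spectrum watermarking in the DCT domain.
# Walsh (Hadamard) rows serve as orthogonal per-user spreading codes.
import numpy as np
from scipy.linalg import hadamard
from scipy.fft import dct, idct

def embed_watermark(audio, bits, user_idx, code_len=64, strength=0.5):
    """Spread each watermark bit with the user's Walsh code and add it to DCT coefficients."""
    code = hadamard(code_len)[user_idx]                 # +/-1 orthogonal spreading sequence
    coeffs = dct(audio, norm="ortho")
    chips = np.concatenate([(2 * b - 1) * code for b in bits])   # BPSK-style spreading
    start = 1000                                        # skip perceptually critical low coefficients
    coeffs[start:start + chips.size] += strength * chips
    return idct(coeffs, norm="ortho")

def detect_watermark(audio, n_bits, user_idx, code_len=64):
    """Blind detection: correlate DCT coefficients with the user's Walsh code."""
    code = hadamard(code_len)[user_idx]
    coeffs = dct(audio, norm="ortho")
    start = 1000
    segs = coeffs[start:start + n_bits * code_len].reshape(n_bits, code_len)
    return (segs @ code > 0).astype(int)                # host-signal interference is ignored here

rng = np.random.default_rng(0)
audio = rng.standard_normal(200_000)        # stand-in for a real audio signal
bits = rng.integers(0, 2, size=32)          # e.g. a fragment of a flattened QR code
marked = embed_watermark(audio, bits, user_idx=5)
print("bit errors:", int(np.sum(detect_watermark(marked, 32, user_idx=5) != bits)))
```

Because the Walsh rows are mutually orthogonal, several users' watermarks can coexist in the same coefficients and each user's bits can be recovered by correlating with that user's own code, which is the CDMA property the abstract relies on.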

A Study on the Operation Plan of the Gangwon-do Disaster Management Resources Integrated Management Center (강원도 재난관리자원 통합관리센터 운영방안에 관한 연구)

  • Hang-Il Jo;Sang-Beom Park;Kye-Won Jun
    • Journal of Korean Society of Disaster and Security, v.17 no.1, pp.9-16, 2024
  • In Korea, as disasters become larger and more complex, the focus is shifting from response and recovery to prevention and preparedness. To prevent and prepare for disasters, each local government manages disaster management resources by stockpiling them. However, although disaster management resources are stored in individual warehouses, they are managed by department rather than by warehouse, and the heavy workload of the staff in charge results in insufficient management of these resources. To manage disaster management resources intensively, an integrated disaster management resource management center is established and operated at the metropolitan/provincial level. In the case of Gangwon-do, the subject of this study, a warehouse is rented and operated as the integrated disaster management resource management center. Leasing an integrated management center brings the inconvenience of having to relocate every one to two years, so it is deemed necessary to build a dedicated facility on an available site. To select candidate locations, network analysis was used to measure access to and use of facilities along interconnected routes of networks such as roads and railways. In the network analysis, the Location-Allocation method, which has been widely used to determine the locations of multiple facilities, was applied. As a result, Hoengseong-gun in Gangwon-do was identified as a suitable candidate site. In addition, if the integrated management center uses Korea's logistics system to stockpile disaster management resources, local governments can mobilize disaster management resources within 3 days, and it is said that it takes 3 days to return to normal life after a disaster occurs. Each city's disaster management resource stockpile is therefore 3 days' worth per week, and the integrated management center stores up to three times the maximum of a city's 4-day stockpile.
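The Location-Allocation idea mentioned above can be illustrated with a tiny weighted 1-median calculation: choose the candidate site that minimizes the total demand-weighted network distance to the areas it serves. The candidate names, distances, and weights below are hypothetical placeholders, not the study's Gangwon-do data.

```python
# Minimal sketch: pick the facility site with the lowest total weighted travel distance.
import numpy as np

candidates = ["Chuncheon", "Wonju", "Hoengseong", "Gangneung"]
demand_weight = np.array([5, 2, 4, 1])          # e.g. population or stockpile volume per area

# travel_dist[i, j]: network distance (km) from candidate i to demand area j
travel_dist = np.array([
    [ 30,  80, 120, 150],
    [ 60,  40,  90, 110],
    [ 50,  35,  70,  95],
    [140,  90,  60,  30],
])

weighted_cost = travel_dist @ demand_weight      # total weighted distance for each candidate
print("best candidate:", candidates[int(np.argmin(weighted_cost))])
```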

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems, v.24 no.2, pp.171-193, 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in LODs to be reflected in search results without omissions. An LOD provides detailed descriptions of entities to the public in RDF triple form. An RDF triple is composed of a subject, a predicate, and an object, and presents a detailed description of an entity. Links in the LOD cloud, called identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge obtained from one LOD can be expanded with knowledge from other LODs, and providing this opportunity for knowledge expansion to users is the goal of the LOD cloud. Appending link triples to an LOD, however, is seriously difficult because identity links between entities must be discovered one by one despite the enormous scale of LODs. Newly added entities cannot be reflected in search results until identity links pointing to them are serialized and published to the LOD cloud. Instead of creating an enormous number of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those target LODs. During a search, newly added entities can then be accessed and reflected in the search results without omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we suggest a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the source and target entities' objects that are associated with the predicate pairs in the link policy. We implemented a system called the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of the LODs, CAIDS proceeds to in-depth searching of the LODs at the next depths. To supplement the identity links derived from the link policies, CAIDS uses explicit link triples as well, and its in-depth searching progresses by following the identity links. The content of an entity obtained from the depth_0 LOD expands with the contents of entities in other LODs that have been discovered to be identical to the depth_0 entity. Expanding the content of the depth_0 entity without the user being aware of those other LODs realizes knowledge expansion, the goal of the LOD cloud. The more identity links exist in the LOD cloud, the wider the content expansion becomes, and we suggest a new way to create identity links abundantly and supply them to the LOD cloud. Experiments on CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8~0.9. For each depth, the expansion ratio is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD, and the inclusion ratio is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies.
With similarity degrees under 0.8, expansion becomes excessive and contents become distorted, whereas a similarity degree of 0.8~0.9 also yields an appropriate number of RDF triples in the search. The experiments also evaluated the confidence degree of contents expanded through in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, which expresses its degree of identity to the depth_0 entity. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to the entities in that LOD. In evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links point to a common entity, was considered. With identity agreement, the experimental results show that the identity ratio decreases as depth increases but rebounds as the depth increases further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than eight identity links per entity would lead users to trust the expanded contents. The link-policy-based in-depth searching method we propose is expected to contribute abundant identity links to the LOD cloud.
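The identity-ratio rule stated above (the source LOD's confidence multiplied by the source entity's identity ratio) can be sketched as a simple propagation along a chain of identity links. The LOD names and confidence values below are hypothetical, and the identity-agreement adjustment is omitted for brevity.

```python
# Minimal sketch: propagate an entity's identity ratio along a chain of identity links.
lod_confidence = {"dbpedia-ko": 1.0, "dbpedia-fr": 0.9, "dbpedia-it": 0.85}  # hypothetical values

def identity_ratio(link_chain):
    """link_chain: LODs whose identity links were followed, in order from depth_0.
    The depth_0 entity starts at 1.0; each hop multiplies in the source LOD's confidence."""
    ratio = 1.0
    for source_lod in link_chain:
        ratio *= lod_confidence[source_lod]
    return ratio

# An entity reached at depth 2 via dbpedia-ko -> dbpedia-fr:
print(identity_ratio(["dbpedia-ko", "dbpedia-fr"]))   # 1.0 * 1.0 * 0.9 = 0.9
```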