• Title/Summary/Keyword: 웹정보시스템 (Web Information System)


Development of Intelligent Learning Tool based on Human eyeball Movement Analysis for Improving Foreign Language Competence (외국어 능력 향상을 위한 사용자 안구운동 분석 기반의 지능형 학습도구 개발)

  • Shin, Jihye;Jang, Young-Min;Kim, Sangwook;Mallipeddi, Rammohan;Bae, Jungok;Choi, Sungmook;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.11
    • /
    • pp.153-161
    • /
    • 2013
  • Recently, there has been a tremendous increase in the availability of educational materials for foreign language learning. As part of this trend, there has been an increase in the amount of electronically mediated materials available. However, conventional educational content developed with computer technology has typically provided one-way information, which is not the most helpful for users. Improving user convenience requires additional off-line analysis to diagnose an individual user's learning. To improve the user's comprehension of texts written in a foreign language, we propose an intelligent learning tool based on the analysis of the user's eyeball movements, which is able to diagnose and improve foreign language reading ability by providing necessary supplementary aid just when it is needed. To determine the user's learning state, we correlate their eye movements with findings from research in cognitive psychology and neurophysiology. Based on this, the learning tool can distinguish whether or not users know the words they encounter while reading foreign language sentences. If the learning tool judges a word to be unknown, it immediately provides the student with the meaning of the word by extracting it from an on-line dictionary. The proposed model provides a tool that empowers independent learning and makes access to the meanings of unknown words automatic. In this way, it can enhance a user's reading achievement as well as satisfaction with text comprehension in a foreign language.
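The abstract describes classifying words as known or unknown from the reader's eye movements and then fetching meanings from an on-line dictionary. Below is a minimal sketch of that decision step, assuming a simple fixation-duration threshold as the proxy signal; the threshold value and the Fixation/GazeWordClassifier names are illustrative assumptions, not the authors' actual model.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of fixation-based unknown-word detection.
 * The 600 ms threshold and the Fixation type are illustrative assumptions;
 * the paper derives its decision rule from cognitive-psychology and
 * neurophysiology findings rather than a single fixed cutoff.
 */
public class GazeWordClassifier {

    /** One eye-tracker fixation mapped onto a word in the text. */
    public record Fixation(String word, long durationMillis) {}

    private static final long UNKNOWN_WORD_THRESHOLD_MS = 600;

    /** Flags words whose total fixation time suggests the reader does not know them. */
    public List<String> findLikelyUnknownWords(List<Fixation> fixations) {
        List<String> unknown = new ArrayList<>();
        for (Fixation f : fixations) {
            if (f.durationMillis() >= UNKNOWN_WORD_THRESHOLD_MS) {
                unknown.add(f.word());   // would then be looked up in an on-line dictionary
            }
        }
        return unknown;
    }
}
```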

Design and Implementation of File Cloud Server by Using JAVA SDK (Java SDK를 이용한 파일 클라우드 시스템의 설계 및 구현)

  • Lee, Samuel Sangkon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.8 no.2
    • /
    • pp.86-100
    • /
    • 2015
  • Cloud computing is a computing term that evolved in the late 2000s, based on utility and consumption of computer resources. Google says that "Cloud computing involves deploying groups of remote servers and software networks that allow different kinds of data sources to be uploaded for real time processing to generate computing results without the need to store processed data on the cloud. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services. Cloud computing, or in simpler shorthand just "the cloud", also focuses on maximizing the effectiveness of the shared resources." The cloud service is a smart, intelligent service for saving private files on any device, anytime, anywhere. Services such as Dropbox, OAuth, and PAClous require that the accumulated user data be archived with a cloud service. In this paper, we suggest an implementation technique for processing many tasks on the cloud server with thread pooling. Thread pooling is an efficient implementation technique for client and server environments. To present the implementation technique, we provide three diagrams from a software engineering perspective.
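The abstract proposes handling many client tasks on the cloud server with a thread pool. Below is a minimal sketch of that idea using java.util.concurrent, assuming a fixed-size pool and an illustrative upload task; the pool size and class names are assumptions, not the paper's design, which is given in its three software-engineering diagrams.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Minimal thread-pooling sketch for handling file-upload tasks on a cloud server.
 * The pool size and the body of the upload task are illustrative assumptions.
 */
public class FileCloudServer {

    private final ExecutorService pool = Executors.newFixedThreadPool(16);

    /** Queues one client upload so a pooled worker thread processes it. */
    public void submitUpload(String clientId, byte[] payload) {
        pool.execute(() -> {
            // In a real system this would archive the payload to cloud storage.
            System.out.printf("Storing %d bytes for client %s%n", payload.length, clientId);
        });
    }

    /** Stops accepting new tasks and lets queued uploads finish. */
    public void shutdown() {
        pool.shutdown();
    }
}
```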

Evolution of Relationship Marketing in the New Reality: Focused on the Pervasiveness of Digital New Media and the Enlargement of Customer Participation (21세기 새로운 현실에서 Relationship Marketing의 진화: 디지털 뉴미디어 환경의 보편화와 고객 참여의 고도화를 중심으로)

  • Lim, Jong Won;Cho, Ho Hyeon;Lee, Jeong Hoon
    • Asia Marketing Journal
    • /
    • v.13 no.4
    • /
    • pp.105-137
    • /
    • 2012
  • After relationship marketing emerged as a new approach in the marketing field in the 1980s, it has been widely studied in the United States, Europe and Asia. Rapid environmental changes and global competition have made it inevitable for companies to consider their relationships with the environment more closely. Under these circumstances, relationship marketing has held a position as a pivotal paradigm in the field of strategy as well as in marketing. In addition, relationship marketing has overcome the limitations of traditional marketing research while providing richer implications for companies' marketing activities. The paradigm shift to relationship marketing has brought fundamental changes to the marketing point of view. First, in philosophical aspects, unlike past research which focused solely on customer satisfaction, organizational relationship parameters focusing on trust and commitment have become key elements of successful relationship marketing, while thinking naturally shifts from adaptive marketing to strategic marketing. Second, in structural aspects, relational mechanisms of governance such as network structures with a variety of relational partners have emerged as a new marketing organization, replacing the previous simple structure focused on micro-economic, market-based trading between seller and customer. Third, in behavioral aspects, it proposed a strategic course of action for gaining an advantage over the competition at the individual firm level by focusing on building long-term relationships and considering partnership with the components of the entire marketing system, rather than one-time, transaction-centric actions between a seller and a customer. Fourth, in the aspects of marketing performance, performance was sought through long-term and cooperative relationships with various stakeholders, including customers in the marketing system, focusing on the overall competitive advantage based on relationships rather than on the individual performance of individual companies' marketing activities, such as market share and customer satisfaction. However, studies of relationship marketing have mostly centered on interorganizational relationships, focusing on the relational structure and properties of the commercial sector in the marketing system. Paradoxically, the circumstances on the consumer side, which must also be considered, are evolving again in relationship marketing: in structural aspects, the community has emerged as a new relationship governance structure in the digital environment, and in behavioral aspects, the changing role of consumer participation demands big changes in how consumers engage in the marketing system. The possibility of building a relationship marketing community for common value creation is presented in terms of the organization of consumers, with a focus on the changing marketing environment and marketing system according to the new realities of the 21st century: the pervasiveness of digital environments and the diffusion of customer participation. Therefore, future research on relationship marketing must seek a truly integrated model that includes the relational structures and properties of both the commercial and consumer sectors.


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after it recovers from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of NoSQL databases are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require a real-time log data analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
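The system stores aggregated unstructured logs in a schema-free MongoDB collection. Below is a minimal sketch of that insertion step with the official MongoDB Java driver, assuming illustrative database, collection, and field names; the real system additionally routes real-time logs to MySQL and batches data to the Hadoop-based analysis module.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Date;

/**
 * Minimal sketch of storing one unstructured log record in MongoDB.
 * Database/collection names and log fields are illustrative assumptions.
 */
public class LogCollector {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> logs =
                    client.getDatabase("bankLogs").getCollection("clientBusiness");

            // Free-schema document: fields can vary from record to record.
            Document logEntry = new Document("timestamp", new Date())
                    .append("branch", "Seoul-001")
                    .append("operation", "transfer")
                    .append("rawMessage", "TRX OK acct=**** amount=150000");

            logs.insertOne(logEntry);
        }
    }
}
```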

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.171-193
    • /
    • 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in LODs to be reflected in searching results without any omissions. LOD provides detailed descriptions of entities to the public in RDF triple form. An RDF triple is composed of a subject, a predicate, and an object and presents a detailed description of an entity. Links in the LOD cloud, named identity links, are realized by asserting entities of different RDF triples to be identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities. Link triples are appended to the LOD. With identity links, knowledge acquired from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to provide users with opportunities for knowledge expansion. Appending link triples to an LOD, however, poses serious difficulties, since identity links between entities must be discovered one by one notwithstanding the enormous scale of LOD. Newly added entities cannot be reflected in searching results until identity links heading for them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in the target LODs. On searching, it becomes possible to access newly added entities and reflect them in searching results without any omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we have suggested a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the source and target entities' objects that are associated with the predicate pairs in the link policy. We implemented a system called the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's searching request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of LODs, CAIDS proceeds with in-depth searching into the LODs of the next depths. To supplement the identity links derived from the link policies, CAIDS uses explicit link triples as well. Following the identity links, CAIDS's in-depth searching progresses. The content of an entity obtained from the depth_0 LOD expands with the contents of entities of other LODs that have been discovered to be identical to the depth_0 LOD entity. Expanding the content of the depth_0 LOD entity without the user's awareness of those other LODs is the realization of knowledge expansion, which is the goal of the LOD cloud. The more identity links in the LOD cloud, the wider the content expansion in the LOD cloud. We have suggested a new way to create identity links abundantly and supply them to the LOD cloud. Experiments on CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8~0.9. For each depth, the expansion ratio depicts the ratio of the entities discovered at that depth to the entities of the depth_0 LOD, and the inclusion ratio illustrates the ratio of the entities discovered only with explicit links to the entities discovered only with link policies.
For similarity degrees under 0.8, expansion becomes excessive and the contents become distorted. A similarity degree of 0.8~0.9 also yields an appropriate number of searched RDF triples. The experiments also evaluated the confidence degree of contents that have been expanded through in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, which means the degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to the entities in the LOD. While evaluating the identity ratio, the concept of identity agreement, which means that multiple identity links head to a common entity, has been considered. With the identity agreement concept, the experimental results show that the identity ratio decreases as depth deepens, but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The link-policy-based in-depth searching method we propose is expected to contribute to the abundant provision of identity links to the LOD cloud.
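The abstract defines the identity ratio of an entity as the product of the source LOD's confidence and the source entity's identity ratio, with the depth_0 entity fully trusted. Below is a minimal sketch of that propagation along one search path, assuming illustrative confidence values; it omits the identity-agreement adjustment CAIDS applies when several links converge on one entity.

```java
/**
 * Minimal sketch of identity-ratio propagation as described in the abstract:
 *   identityRatio(target) = confidence(sourceLOD) * identityRatio(sourceEntity),
 * with the depth_0 entity starting at 1.0. The confidence values below are
 * illustrative assumptions, not measured values from the experiments.
 */
public class IdentityRatio {

    /** Ratio of a depth-(d+1) entity reached from a depth-d entity. */
    public static double propagate(double sourceLodConfidence, double sourceEntityRatio) {
        return sourceLodConfidence * sourceEntityRatio;
    }

    public static void main(String[] args) {
        double ratio = 1.0;                        // depth_0 entity is fully trusted
        double[] lodConfidence = {0.9, 0.85, 0.8}; // assumed confidences along the search path
        for (double confidence : lodConfidence) {
            ratio = propagate(confidence, ratio);
            System.out.printf("identity ratio at next depth = %.3f%n", ratio);
        }
    }
}
```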

Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.81-96
    • /
    • 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Meanwhile, content-based video analysis using visual features has been the main approach to video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape. Moreover, similarity between videos can be measured through content-based analysis. However, a movie, which is one of the typical types of video data, is organized by story as well as audio-visual data. This causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis when content-based video analysis using only low-level audio-visual data is applied to information retrieval for movies. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by content-based analysis techniques alone. A formal model is needed that can determine relationships among movie contents, or track meaning changes, in order to accurately retrieve the story information. Recently, story-based video analysis techniques have emerged that use the social network concept for story information retrieval. These approaches represent a story by using the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in relationships between characters according to story development. Second, they miss profound information, such as emotions indicating the identities and psychological states of the characters. Emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take account of the events and background that contribute to the story. As a result, this paper reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks. Also, we suggest the necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, which is an extensive English thesaurus. WordNet offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms). We present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of the character's dialogue in the script and track the character's inner nature using social network concepts, such as in-degree (incoming links) and out-degree (outgoing links). Additionally, we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network in a movie through its progression. Finally, the spatial background where characters meet and where events take place is an important element in the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie.
Important places where main characters first meet or where they stay during long periods of time can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of information related to the story and evaluate the performance of the proposed method. We can track story information extracted over time and detect a change in the character's emotion or inner nature, spatial movement, and conflicts and resolutions in the story.
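The abstract tracks a character's inner nature through the degree, in-degree, and out-degree of a dialogue-based character network. Below is a minimal sketch of accumulating those indices from script dialogue, assuming illustrative class and method names and example exchanges; it is not the authors' implementation.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of the character-network indices mentioned in the abstract:
 * each dialogue line adds a directed edge speaker -> listener, and a
 * character's in-degree / out-degree are read off the accumulated counts.
 */
public class CharacterNetwork {

    private final Map<String, Integer> outDegree = new HashMap<>();
    private final Map<String, Integer> inDegree = new HashMap<>();

    /** Records one dialogue exchange extracted from the script. */
    public void addDialogue(String speaker, String listener) {
        outDegree.merge(speaker, 1, Integer::sum);
        inDegree.merge(listener, 1, Integer::sum);
    }

    public int outDegreeOf(String character) { return outDegree.getOrDefault(character, 0); }
    public int inDegreeOf(String character)  { return inDegree.getOrDefault(character, 0); }

    public static void main(String[] args) {
        CharacterNetwork net = new CharacterNetwork();
        net.addDialogue("Vivian", "Edward");   // illustrative exchanges, not script data
        net.addDialogue("Edward", "Vivian");
        net.addDialogue("Edward", "Vivian");
        System.out.println("Edward out-degree: " + net.outDegreeOf("Edward"));
        System.out.println("Edward in-degree : " + net.inDegreeOf("Edward"));
    }
}
```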

Improvement in the Future of the Dental Internet Homepage (치과 인터넷 홈페이지의 개선 방안)

  • Kim, Bit-Na
    • Journal of dental hygiene science
    • /
    • v.3 no.2
    • /
    • pp.77-82
    • /
    • 2003
  • The purpose of this study was to examine the characteristics of dental homepages in Korea to discuss how they could be improved. The findings of this study could be described as below: First, dental homepages should include differentiated, specialized content and features. Second, the use of three-dimensional images or multimedia would contribute to increasing people's understanding of dental treatment or general dental information and elevating the effectiveness of dental publicity activities. Third, the want ad and order systems used by the business sector or hospitals would serve to multiply the management efficiency of dental institutions. Fourth, dental hospitals and clinics that belong to the same network or franchise need to make publicity banners for feasible mutual links, and the use of the same homepage design or a common logo would be effective for better image and publicity activities. Fifth, it would be convenient to add a map search function or inquiry system. Seventh, if multiple types of services, such as entertainment or games, are prepared, it would be possible for dental institutions to project a better image and to induce visitors to hit the sites again.


Possible Ways to Make a Strategical Use of CRM for Facilitating Performing Arts (공연예술 활성화를 위한 CRM의 전략적 활용방안)

  • Kim, Chung-Eon
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.4
    • /
    • pp.225-234
    • /
    • 2012
  • The purpose of this study is to explore possible ways to make strategic use of CRM (Customer Relationship Management) for facilitating the performing arts. To this end, this study investigated actual cases of CRM, primarily focusing on LG Art Center, one of the representative performance venues in South Korea, and CREDIA, a performing arts planning agency in South Korea. It was found that LG Art Center operated its own independent TMS (Theater Management System) and thereby could successfully plan performing arts programs on the basis of a convenient, customer-oriented ticketing system as well as a wealth of customer information. On the other hand, CREDIA introduced an advanced performance management system and has successfully attracted a larger membership than before. Moreover, it organized specialized personnel for membership management and thereby could manage membership in a systematic manner. It was also found that, based on web log analysis, CREDIA developed a variety of products to comply with customer needs and thereby could realize higher returns and better customer satisfaction through cross-selling activities as well as performance ticketing. However, it was found that CREDIA still operated its membership system and mileage point system in a stereotyped manner. Thus, it is required to operate a differentiated membership system based on membership grades and to diversify practical ways to save and use mileage points, so that CRM can be strategically applied to develop new audiences and maintain loyal customers.

Modeling Nutrient Uptake of Cucumber Plant Based on EC and Nutrient Solution Uptake in Closed Perlite Culture (순환식 펄라이트재배에서 EC와 양액흡수량을 이용한 오이 양분흡수 모델링)

  • 김형준;우영회;김완순;조삼증;남윤일
    • Proceedings of the Korean Society for Bio-Environment Control Conference
    • /
    • 2001.04b
    • /
    • pp.75-76
    • /
    • 2001
  • This study applied EC treatments (1.5, 1.8, 2.1, 2.4, and 2.7 dS·m-1) to develop a nutrient uptake model for reusing drainage in closed perlite culture. Nutrient solution uptake did not differ among EC levels until the middle of the growth period, but thereafter uptake tended to decrease as EC increased (Fig. 1). The uptake of NO3-N, P, and K maintained differences among treatments throughout the growth period; N and K remained at a constant level after the middle of the growth period, whereas P tended to increase somewhat over the growth period. S uptake decreased sharply in all treatments after the middle of the growth period, with no differences among treatments late in the growth period (Fig. 2). As with the mineral ion uptake rates of cucumber, uptake amounts also differed among EC levels, suggesting that EC can be used as a factor for estimating mineral ion uptake. Mineral ion uptake showed no differences among EC treatments early in the growth period, clear differences after the middle of the growth period, and somewhat reduced differences at high concentrations late in the growth period. Regression equations for predicting cucumber ion uptake were developed using nutrient solution uptake per unit solar radiation and EC as the main variables. The correlation coefficients of the estimation equations were high for all mineral ions except S, and especially high for N, P, K, and Ca. The correlation coefficient for S was low at 0.47, but the regression equations for all ions were significant at the 1% level, so the model equations could be used to estimate mineral ion uptake in closed hydroponic culture (Table 1). A comparison with measured values showed a high positive correlation at the 1% confidence level, suggesting that practical application is feasible (Fig. 3).
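The abstract develops regression equations that estimate each ion's uptake from nutrient-solution uptake per unit solar radiation and EC. Below is a minimal sketch of evaluating such a linear model, with placeholder coefficients, since the fitted values in Table 1 are not reproduced here.

```java
/**
 * Minimal sketch of the kind of regression model described in the abstract:
 * estimated ion uptake as a linear function of nutrient-solution uptake per
 * unit solar radiation and EC. The coefficients below are placeholders, not
 * the fitted values from the paper's Table 1.
 */
public class NutrientUptakeModel {

    private final double intercept;
    private final double solutionUptakeCoef;
    private final double ecCoef;

    public NutrientUptakeModel(double intercept, double solutionUptakeCoef, double ecCoef) {
        this.intercept = intercept;
        this.solutionUptakeCoef = solutionUptakeCoef;
        this.ecCoef = ecCoef;
    }

    /**
     * @param solutionUptakePerRadiation nutrient-solution uptake per unit solar radiation
     * @param ec                         electrical conductivity of the solution (dS/m)
     * @return estimated ion uptake
     */
    public double estimateUptake(double solutionUptakePerRadiation, double ec) {
        return intercept + solutionUptakeCoef * solutionUptakePerRadiation + ecCoef * ec;
    }

    public static void main(String[] args) {
        // Placeholder coefficients for a single ion, for illustration only.
        NutrientUptakeModel nitrogen = new NutrientUptakeModel(0.1, 0.8, -0.05);
        System.out.println("Estimated N uptake: " + nitrogen.estimateUptake(2.5, 2.1));
    }
}
```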


Development of Mobile Application for Ship Officers' Job Stress Measurement and Management (해기사 직무스트레스 측정 및 관리 모바일 애플리케이션 개발)

  • Yang, Dong-Bok;Kim, Joo-Sung;Kim, Deug-Bong
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.2
    • /
    • pp.266-274
    • /
    • 2021
  • Ship officers are subject to excessive job stress, which has negative physical and psychological impacts and may adversely affect the smooth supply and demand of human resources. In this study, a mobile web application was developed as a tool for the systematic measurement and management of ship officers' job stress and was verified through quality evaluation. Requirements analysis was performed with ship officers and the staff in charge of human resources at shipping companies, and the results were reflected in the application configuration step. The application was designed according to the waterfall model, which is a traditional software development method, and its functions were implemented using JSP and the Spring Framework. Performance evaluation of the user interface confirmed that proper input and output results were implemented, and that respondent results and the database were properly configured in the administrator interface. The results of the evaluation questionnaires for interface quality based on the ISO/IEC 9126-2 metrics were a significant 4.60 for the user interface and 4.65 for the administrator interface on a 5-point scale. In the future, it is necessary to conduct follow-up research on the development of a data analysis system that utilizes the collected big-data sets.
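The application's functions were implemented with JSP and the Spring Framework. Below is a minimal Spring MVC sketch of receiving one questionnaire submission and forwarding to a JSP result view; the URL, request parameters, scoring cut-off, and view name are illustrative assumptions, as the abstract does not describe the actual endpoints or survey instrument.

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

/**
 * Minimal Spring MVC sketch of accepting one job-stress questionnaire answer
 * and forwarding to a JSP result view. All names and the scoring rule here
 * are placeholders, not the application's actual interface.
 */
@Controller
public class StressSurveyController {

    @PostMapping("/survey/submit")
    public String submit(@RequestParam("officerId") String officerId,
                         @RequestParam("totalScore") int totalScore,
                         Model model) {
        // Placeholder scoring rule: real cut-off values would come from the survey instrument.
        String level = totalScore >= 50 ? "HIGH" : "NORMAL";
        model.addAttribute("officerId", officerId);
        model.addAttribute("stressLevel", level);
        return "surveyResult";   // resolves to a JSP view such as /WEB-INF/views/surveyResult.jsp
    }
}
```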