• Title/Summary/Keyword: Research Information Systems


Wearable Computers

  • Cho, Gil-Soo;Barfield, Woodrow;Baird, Kevin
    • Fiber Technology and Industry / v.2 no.4 / pp.490-508 / 1998
  • One of the latest fields of research in the area of output devices is tactual display devices [13,31]. These tactual or haptic devices allow the user to receive haptic feedback output from a variety of sources. This allows the user to actually feel virtual objects and manipulate them by touch. This is an emerging technology that will be instrumental in enhancing the realism of wearable augmented environments for certain applications. Tactual displays have previously been used for scientific visualization in virtual environments by chemists and engineers to improve perception and understanding of force fields and of world models populated with impenetrable objects. In addition to tactual displays, wearable audio displays that allow sound to be spatialized are being developed. With wearable computers, designers will soon be able to pair spatialized sound with virtual representations of objects when appropriate to make the wearable computer experience even more realistic to the user. Furthermore, as the number and complexity of wearable computing applications continues to grow, there will be an increasing need for systems that are faster, lighter, and have higher-resolution displays. Better networking technology will also need to be developed to give all users of wearable computers high-bandwidth connections for real-time information gathering and collaboration. In addition to the technological advances that make it possible for users to wear computers in everyday life, there is also the desire to have users want to wear their computers. To achieve this, wearable computing needs to be unobtrusive and socially acceptable. By making wearables smaller and lighter, or actually embedding them in clothing, users can conceal them easily and wear them comfortably. The military is currently working on the development of the Personal Information Carrier (PIC), or digital dog tag. The PIC is a small electronic storage device containing medical information about the wearer. While old military dog tags contained only 5 lines of information, the digital tags may contain volumes of multimedia information, including medical history, X-rays, and cardiograms. Using hand-held devices in the field, medics would be able to call this information up in real time for better treatment. A fully functional transmittable device is still years off, but this technology, once developed in the military, could be adapted to civilian users and provide any information, medical or otherwise, in a portable, unobtrusive, and fashionable way. Another future device that could increase the safety and well-being of its users is the nose-on-a-chip developed by the Oak Ridge National Lab in Tennessee. This tiny digital silicon chip, about the size of a dime, is capable of 'smelling' natural gas leaks in stoves, heaters, and other appliances. It can also detect dangerous levels of carbon monoxide. The device can also be configured to notify the fire department when a leak is detected. This nose chip should be commercially available within 2 years; it is inexpensive, requires low power, and is very sensitive. Along with gas detection capabilities, this device may someday also be configured to detect smoke and other harmful gases. By embedding this chip into workers' uniforms, name tags, etc., it could be a lifesaving computational accessory. In addition to the future safety technology soon to be available as accessories, there are also devices for entertainment and security. 
The LCI computer group is developing a Smartpen that electronically verifies a user's signature. With the increase in credit card use and the rise in forgeries, there is a growing need for commercial industries to constantly verify signatures. The Smartpen writes like a normal pen but uses sensors to detect the motion of the pen as the user signs their name in order to authenticate the signature. This computational accessory should be available in 1999 and would bring increased peace of mind to consumers and vendors alike. In the entertainment domain, Panasonic is creating the first portable hand-held DVD player. This device weighs less than 3 pounds and has a screen about 6 inches across. The color LCD has the same 16:9 aspect ratio as a cinema screen and supports a high resolution of 280,000 pixels and stereo sound. The player can play standard DVD movies and has an hour of battery life for mobile use. To summarize, in this paper we presented concepts related to the design and use of wearable computers with extensions to smart spaces. For some time, researchers in telerobotics have used computer graphics to enhance remote scenes. Recent advances in augmented reality displays make it possible to enhance the user's local environment with 'information'. As shown in this paper, there are many application areas for this technology, such as medicine, manufacturing, training, and recreation. Wearable computers allow a much closer association of information with the user. By embedding sensors in the wearable to allow it to see what the user sees, hear what the user hears, sense the user's physical state, and analyze what the user is typing, an intelligent agent may be able to analyze what the user is doing and try to predict the resources he or she will need next or in the near future. Using this information, the agent may download files, reserve communications bandwidth, post reminders, or automatically send updates to colleagues to help facilitate the user's daily interactions. This intelligent wearable computer would be able to act as a personal assistant that is always around, knows the user's personal preferences and tastes, and tries to streamline interactions with the rest of the world.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data requires a lot of computation, which can lead to high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, from merely lessening the noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. On top of that, the representation and selection of text features have an impact on the performance of the classifier for sentence classification, which is one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm selects words that are not important, we assume that words similar to the selected words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words that have comparatively low information gain values from the raw text and form word embeddings. Second, we additionally eliminate words that are similar to the words with low information gain values and build word embeddings. In the end, the filtered text and word embeddings are applied to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes was over 70% were classified as helpful reviews. Since Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes, using random sampling from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance with that of Word2Vec and GloVe word embeddings that used all the words. We showed that one of the proposed methods is better than the embeddings that use all the words. By removing unimportant words, we can get better performance; however, removing too many words lowers the performance. 
For future research, diverse preprocessing approaches and an in-depth analysis of word co-occurrence should be considered for measuring similarity values among words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations of word embedding methods and elimination methods can be explored.
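As a rough illustration of the elimination procedure described in this abstract, the sketch below drops words with low information gain, plus words similar to them, before building an embedding. It is not the authors' implementation: the use of scikit-learn's mutual_info_classif as the information gain measure, the gensim Word2Vec parameters, and the thresholds are assumptions made only for the example.

```python
# Minimal sketch of selective word elimination: drop words with low
# information gain and words similar to them, then return filtered text.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

def eliminate_words(texts, labels, ig_quantile=0.1, sim_threshold=0.8):
    # 1) Information gain (approximated here by mutual information)
    #    of each word with respect to the class label.
    vec = CountVectorizer(binary=True)
    X = vec.fit_transform(texts)
    ig = mutual_info_classif(X, labels, discrete_features=True)
    vocab = np.array(vec.get_feature_names_out())
    low_ig = set(vocab[ig <= np.quantile(ig, ig_quantile)])

    # 2) Train Word2Vec on the raw tokens to find words similar
    #    to the low-information-gain words (cosine similarity).
    tokenized = [t.lower().split() for t in texts]
    w2v = Word2Vec(tokenized, vector_size=100, window=5, min_count=2)
    similar = set()
    for w in low_ig:
        if w in w2v.wv:
            for s, score in w2v.wv.most_similar(w, topn=5):
                if score >= sim_threshold:
                    similar.add(s)

    # 3) Remove both groups of words from the text.
    drop = low_ig | similar
    return [" ".join(w for w in t if w not in drop) for t in tokenized]
```

The filtered texts returned here would then be embedded again and fed to classifiers such as the CNN or attention-based BiLSTM mentioned in the abstract.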

Development of intranet-based Program Management Information System of multi-complex project with application of BIM (BIM을 활용한 다중복합 프로젝트의 인트라넷 기반 통합사업관리체계 구축 방안)

  • Song, Il-Bab;Hur, Young-Ran;Seo, Jong-Won
    • Journal of KIBIM / v.2 no.1 / pp.27-39 / 2012
  • Public construction projects need complex and multi-functional management skills, since most public construction projects are composed of multiple projects and mega-projects. In order to manage construction projects effectively, PMIS is widely used. However, the majority of current PMISs have been developed as single-project-oriented business management systems, so compatibility problems are encountered when integrating the individual systems to manage multi-complex projects. In addition, orders requiring BIM have increased recently, but research and development of BIM-based PMIS is still lacking. In this study, therefore, based on an As-Is and To-Be analysis of PMIS functions and their main objectives, a dual management system utilizing the Internet and an intranet is proposed to integrate the individual PMISs into an integrated program management system. Rather than combining a commercial BIM tool and PMIS directly, which is a common cause of failure, a sequential process model for adopting BIM-based PMIS is also explained. A step-by-step development method for BIM-based PMIS is suggested to prepare for the wider adoption of BIM technology in the near future.

Design of a Web-Based System for Collaborative Power-Boat Manufacturing (파워보트 협업 생산을 위한 웹기반 컨텐츠 관리 시스템 설계)

  • Lee, Philippe;Lee, Dong-Kun;Back, Myung-Gi;Oh, Dae-Kyun;Choi, Yang-Ryul
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.3 / pp.265-273 / 2012
  • The business environment is changing rapidly because of the global crisis. In order to survive and enhance competitiveness in the global market, global manufacturing companies are trying to overcome the crisis through the convergence of production infrastructure and IT technology. The importance of systems that support the integration of manufacturing processes, collaboration in product development, and information integration between providers and producers is therefore increasing. In this paper, research is conducted on the design and implementation of a collaboration system to support a power-boat manufacturing company in this situation of increased demand for collaboration and information integration. The system was designed through product-structure and production-process analysis to support product data management and enterprise contents management. The company involved in the power-boat development project is expected to show an improvement in productivity through the integrated management of information and the collaboration provided by this system.

Design and Implementation of Low-power Neuromodulation S/W based on MSP430 (MSP430 기반 저전력 뇌 신경자극기 S/W 설계 및 구현)

  • Hong, Sangpyo;Quan, Cheng-Hao;Shim, Hyun-Min;Lee, Sangmin
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.7 / pp.110-120 / 2016
  • A power-efficient neuromodulator is needed for implantable systems. Although the stimulation signal has a simple wave shape and the waiting time of the MCU (micro controller unit) is much longer than its execution time, existing designs give no consideration to low power. In this paper, we propose a novel low-power algorithm based on the characteristics of stimulation signals. We then design and implement neuromodulation software that we call NMS (neuro modulation simulation). To implement the low-power algorithm, we first analyze the running time of every function in the existing NMS. Then, we calculate the execution time and waiting time of these functions. Subsequently, we estimate the transition time between active mode (AM) and low-power mode (LPM). Using these results, we redesign the architecture of NMS around the proposed low-power algorithm: the stimulation signal is divided into a number of segments based on its characteristics, and each segment is classified as AM or LPM to determine whether the MCU can be powered down. Our experimental results indicate that NMS with the low-power algorithm reduces the current consumption of the MCU by 76.31 percent compared to NMS without it.
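A small numeric sketch of the segment-based duty-cycling idea is given below. It is not the authors' MSP430 firmware: the mode currents, the sampling rate, the pulse timing, and the amplitude threshold are hypothetical values chosen only to show how classifying idle segments as LPM lowers average MCU current.

```python
# Sketch: split a stimulation waveform into active (AM) and idle (LPM)
# segments and estimate average MCU current for duty-cycled operation.
import numpy as np

I_AM_MA = 4.0    # assumed MCU current in active mode (mA)
I_LPM_MA = 0.1   # assumed MCU current in low-power mode (mA)

def average_current(signal, threshold=0.0):
    # A sample counts as "active" while the stimulation amplitude is
    # above the threshold; everything else can be spent in LPM.
    active = np.abs(signal) > threshold
    duty = active.mean()                     # fraction of time in AM
    return duty * I_AM_MA + (1 - duty) * I_LPM_MA

# Hypothetical pulse train: one 1 ms pulse every 20 ms at 1 kHz sampling.
samples = np.arange(1000)                    # one second of samples
signal = np.where(samples % 20 == 0, 1.0, 0.0)

always_on = I_AM_MA
duty_cycled = average_current(signal)
print(f"always-on: {always_on:.2f} mA, duty-cycled: {duty_cycled:.3f} mA")
print(f"reduction: {100 * (1 - duty_cycled / always_on):.1f} %")
```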

Development of a Mid-/Long-term Prediction Algorithm for Traffic Speed Under Foggy Weather Conditions (안개시 도시고속도로 통행속도 중장기 예측 알고리즘 개발)

  • JEONG, Eunbi;OH, Cheol;KIM, Youngho
    • Journal of Korean Society of Transportation / v.33 no.3 / pp.256-267 / 2015
  • Intelligent transportation systems provide valuable opportunities for collecting wide-area traffic data. Significant efforts have been made in many countries to provide reliable traffic condition information such as travel time. This study analyzes the impact of foggy weather conditions on the traffic stream and develops a strategy for predicting mid- and long-term traffic speeds under foggy weather conditions. The results show that the average speed reductions are 2.92 kph and 5.36 kph under slight and heavy fog, respectively. The best prediction performance is achieved when the previous 45 pattern cases are used, yielding a mean absolute percentage error (MAPE) of 14.11%. The outcomes of this study support the development of more reliable traffic information for advanced traffic information services.
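A minimal sketch of pattern-based speed prediction and MAPE scoring is shown below. The distance metric, window lengths, and synthetic data are assumptions for illustration; only the use of the 45 most similar historical pattern cases and the MAPE criterion come from the abstract.

```python
# Sketch: predict a speed profile by averaging the k most similar
# historical pattern cases, then score the forecast with MAPE.
import numpy as np

def predict_from_patterns(observed, history, future, k=45):
    # observed: speeds seen so far today            (shape: [m])
    # history:  past observed windows               (shape: [n_cases, m])
    # future:   the speeds that followed each case  (shape: [n_cases, h])
    dist = np.linalg.norm(history - observed, axis=1)   # similarity search
    nearest = np.argsort(dist)[:k]                       # k closest cases
    return future[nearest].mean(axis=0)                  # averaged forecast

def mape(actual, predicted):
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Tiny synthetic example (values are illustrative, not from the paper).
rng = np.random.default_rng(0)
history = 60 + 5 * rng.standard_normal((200, 12))  # 200 past 12-step windows
future = 55 + 5 * rng.standard_normal((200, 6))    # the 6 steps that followed
observed = 60 + 5 * rng.standard_normal(12)

forecast = predict_from_patterns(observed, history, future, k=45)
actual = 55 + 5 * rng.standard_normal(6)           # hypothetical ground truth
print("MAPE:", round(mape(actual, forecast), 2), "%")
```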

Preference-based Supply Chain Partner Selection Using Fuzzy Ontology (퍼지 온톨로지를 이용한 선호도 기반 공급사슬 파트너 선정)

  • Lee, Hae-Kyung;Ko, Chang-Seong;Kim, Tai-Oun
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.37-52 / 2011
  • Supply chain management is a strategic approach that enhances the value of the supply chain and adapts more promptly to the changing environment. For seamless partnership and value creation in supply chains, information and knowledge sharing and proper partner selection criteria must be applied. Thus, the partner selection criteria are critical to maintaining product quality and reliability. Each part of a product is supplied by an appropriate supply partner. The criteria for selecting partners include technological capability, quality, price, consistency, etc. In reality, the criteria for partner selection may change according to the characteristics of the components. When the part is a core component, quality is the top priority rather than price. For a standardized component, a lower price has a higher priority. Sometimes an unexpected case occurs, such as an emergency order, in which the preference ordering may shift. Thus, SCM partner selection criteria must be determined dynamically according to the characteristics of the part and its context. The purpose of this research is to develop an OWL model for supply chain partnership depending on its context and the characteristics of the parts. The uncertainty of the variables is tackled through fuzzy logic. Parts, with their numerical preference values and context, are represented using OWL. Part preference is converted into a fuzzy membership function using fuzzy logic. For ontology reasoning, SWRL (Semantic Web Rule Language) is applied. For the implementation of the proposed model, the starter motor of an automobile is adopted. After the fuzzy ontology is constructed, the process of selecting a preference-based supply partner for each part is presented.
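The sketch below illustrates, in plain Python rather than OWL/SWRL, how numeric part preferences can be turned into fuzzy membership degrees and combined with context-dependent weights to rank candidate partners. The triangular membership shapes, the weights, and the supplier data are hypothetical; the paper's ontology reasoning itself is not reproduced here.

```python
# Sketch: convert numeric partner attributes into fuzzy membership degrees
# and rank candidates with context-dependent criterion weights.
def tri(x, a, b, c):
    # Triangular membership function peaking at b on the interval [a, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical candidate suppliers for one part (0-10 scales; price is a
# relative figure where lower is better).
candidates = {
    "Supplier A": {"quality": 9.0, "price": 7.5, "technology": 8.0},
    "Supplier B": {"quality": 7.0, "price": 4.0, "technology": 6.5},
}

# Context-dependent weights: a core component prioritizes quality,
# a standardized component prioritizes price.
weights = {"core":     {"quality": 0.6, "price": 0.1, "technology": 0.3},
           "standard": {"quality": 0.2, "price": 0.6, "technology": 0.2}}

def score(attrs, context):
    # Membership in "good quality" / "low price" / "high technology".
    mu = {"quality": tri(attrs["quality"], 5, 10, 12),
          "price": tri(10 - attrs["price"], 5, 10, 12),  # low price is good
          "technology": tri(attrs["technology"], 5, 10, 12)}
    w = weights[context]
    return sum(w[c] * mu[c] for c in w)

for name, attrs in candidates.items():
    print(name, "core:", round(score(attrs, "core"), 3),
          "standard:", round(score(attrs, "standard"), 3))
```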

Social Network : A Novel Approach to New Customer Recommendations (사회연결망 : 신규고객 추천문제의 새로운 접근법)

  • Park, Jong-Hak;Cho, Yoon-Ho;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.15 no.1 / pp.123-140 / 2009
  • Collaborative filtering recommends products using customers' preferences, so it cannot recommend products to a new customer who has no preference information. This paper proposes a novel approach to new customer recommendation using social network analysis, which is used to study relationships among social entities such as genetic networks, traffic networks, and organizational networks. The proposed recommendation method identifies the customers most likely to be neighbors of the new customer using the centrality theory in social network analysis and recommends products those customers have liked in the past. The procedure of our method is divided into four phases: purchase similarity analysis, social network construction, centrality-based neighborhood formation, and recommendation generation. To evaluate the effectiveness of our approach, we conducted several experiments using a data set from a department store in Korea. Our method was compared with the best-seller-based method, which uses the best-seller list to generate recommendations for new customers. The experimental results show that our approach significantly outperforms the best-seller-based method as measured by the F1-measure.
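The four phases listed in the abstract can be sketched roughly as follows. The purchase histories, the Jaccard similarity threshold, and the use of networkx degree centrality are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: centrality-based neighborhood formation for a new customer.
# Customers are nodes; an edge links customers whose purchase histories
# overlap above a threshold. Degree centrality picks the neighbors.
from collections import Counter
import networkx as nx

# Hypothetical purchase histories (customer -> set of purchased items).
purchases = {
    "c1": {"shoes", "bag", "hat"},
    "c2": {"shoes", "bag", "scarf"},
    "c3": {"hat", "belt"},
    "c4": {"shoes", "scarf", "belt"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# 1) Purchase similarity analysis + 2) social network construction.
G = nx.Graph()
G.add_nodes_from(purchases)
for u in purchases:
    for v in purchases:
        if u < v and jaccard(purchases[u], purchases[v]) >= 0.25:
            G.add_edge(u, v)

# 3) Centrality-based neighborhood formation: the most central customers
#    stand in as neighbors for a new customer with no preference data.
centrality = nx.degree_centrality(G)
neighbors = sorted(centrality, key=centrality.get, reverse=True)[:2]

# 4) Recommendation generation: most frequent items among the neighbors.
counts = Counter(item for n in neighbors for item in purchases[n])
print("recommend:", [item for item, _ in counts.most_common(3)])
```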

The Effect of the Quality of Pre-Assigned Subject Categories on the Text Categorization Performance (학습문헌집합에 기 부여된 범주의 정확성과 문헌 범주화 성능)

  • Shim, Kyung;Chung, Young-Mee
    • Journal of the Korean Society for Information Management / v.23 no.2 / pp.265-285 / 2006
  • In text categorization, a certain level of correctness is assumed for the labels assigned to training documents, without solid knowledge of label quality in real-world collections. Our research attempts to explore the quality of pre-assigned subject categories in a real-world collection, and to identify the relationship between the quality of category assignment in the training set and text categorization performance. In particular, we are interested in the extent to which performance can be improved by enhancing the quality (i.e., correctness) of category assignment in the training documents. A collection of 1,150 abstracts in computer science was re-classified by an expert group and divided into 907 training documents and 227 test documents (15 duplicates were removed). The performances of the before and after re-classification groups, called the Initial set and the Recat-1/Recat-2 sets respectively, are compared using a kNN classifier. The average correctness of subject categories in the Initial set is 16%, and the categorization performance with the Initial set shows an $F_1$ value of 17%. On the other hand, the Recat-1 set scores an $F_1$ value of 61%, which is 3.6 times higher than that of the Initial set.
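The effect of label correctness on kNN performance can be mimicked with a small synthetic experiment, sketched below. The data generator, feature count, noise rate, and k are arbitrary choices for illustration and are unrelated to the paper's abstract collection or its reported $F_1$ values.

```python
# Sketch: how noisy training labels can depress kNN categorization F1.
# Synthetic data stands in for the abstract collection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=1200, n_features=50, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def f1_with_label_noise(noise_rate):
    rng = np.random.default_rng(0)
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise_rate       # corrupt some labels
    y_noisy[flip] = rng.integers(0, 4, flip.sum())
    clf = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_noisy)
    return f1_score(y_te, clf.predict(X_te), average="macro")

print("F1 with clean labels:", round(f1_with_label_noise(0.0), 3))
print("F1 with 60% label noise:", round(f1_with_label_noise(0.6), 3))
```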

Building an Analytical Platform of Big Data for Quality Inspection in the Dairy Industry: A Machine Learning Approach (유제품 산업의 품질검사를 위한 빅데이터 플랫폼 개발: 머신러닝 접근법)

  • Hwang, Hyunseok;Lee, Sangil;Kim, Sunghyun;Lee, Sangwon
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.125-140 / 2018
  • As one of the processes in the manufacturing industry, quality inspection examines intermediate or final products to separate the good-quality goods that meet the quality management standard from the defective goods that do not. Manual quality inspection in a mass production system may result in low consistency and efficiency, so the quality inspection of mass-produced products involves automatic checking and classification by machines in many processes. Although there are many preceding studies on improving or optimizing processes using the data generated during production, actual implementation has faced many constraints due to the technical limitations of processing a large volume of data in real time. Recent research on big data has improved data processing technology and enabled collecting, processing, and analyzing process data in real time. This paper aims to propose the process and details of applying big data to quality inspection and to examine the applicability of the proposed method to the dairy industry. We review previous studies and propose a big data analysis procedure that is applicable to the manufacturing sector. To assess the feasibility of the proposed method, we applied two methods to one of the quality inspection processes in the dairy industry: a convolutional neural network and a random forest. We collected, processed, and analyzed images of caps and straws in real time, and then determined whether the products were defective or not. The results confirmed a drastic increase in classification accuracy compared to the quality inspection performed in the past.
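A compact sketch of the two classifiers compared in the abstract, a convolutional neural network and a random forest, is given below. The image size, network architecture, and randomly generated placeholder data are assumptions for illustration; they do not reflect the paper's dairy-line imagery or its reported accuracy.

```python
# Sketch: two defect classifiers for cap/straw-style images, mirroring the
# abstract's comparison of a CNN and a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from tensorflow import keras

# Placeholder data: 64x64 grayscale images labeled defective (1) or not (0).
rng = np.random.default_rng(0)
X = rng.random((500, 64, 64, 1), dtype=np.float32)
y = rng.integers(0, 2, 500)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# Random forest on flattened pixel values.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_tr.reshape(len(X_tr), -1), y_tr)
rf_acc = accuracy_score(y_te, rf.predict(X_te.reshape(len(X_te), -1)))
print("RF accuracy:", round(rf_acc, 3))

# Small convolutional network for the same binary decision.
cnn = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X_tr, y_tr, epochs=3, batch_size=32, verbose=0)
print("CNN accuracy:", round(cnn.evaluate(X_te, y_te, verbose=0)[1], 3))
```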