• Title/Summary/Keyword: 동적정보 (dynamic information)


Recognition of Resident Registration Card using ART2-based RBF Network and Face Verification (ART2 기반 RBF 네트워크와 얼굴 인증을 이용한 주민등록증 인식)

  • Kim Kwang-Baek;Kim Young-Ju
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.1-15 / 2006
  • In Korea, a resident registration card contains various personal information such as the present address, the resident registration number, a face picture, and a fingerprint. The plastic-type resident card currently in use is easy to forge or alter, and forgery techniques grow more sophisticated as time goes on, so whether a resident card is forged or not is difficult to judge by examination with the naked eye alone. This paper proposes an automatic recognition method for a resident card which recognizes the resident registration number using a newly proposed refined ART2-based RBF network and authenticates the face picture by a template image matching method. The proposed method first extracts the areas containing the resident registration number and the date of issue from a resident card image by applying Sobel masking, median filtering, and horizontal smearing operations to the image in turn. To improve the extraction of individual codes from the extracted areas, the original image is binarized using a high-frequency passing filter, and CDM masking is applied to the binarized image to enhance the image information of the individual codes. Lastly, the individual codes, which are the targets of recognition, are extracted by applying a 4-directional contour tracking algorithm to the extracted areas of the binarized image. This paper also proposes a refined ART2-based RBF network to recognize the individual codes, which applies ART2 as the learning structure of the middle layer and dynamically adjusts the learning rate in the training of the middle and output layers by using a fuzzy control method to improve learning performance. In addition, for precise judgement of forgery of a resident card, the proposed method supports face authentication using a face template database and a template image matching method. For performance evaluation, this paper made variations of an original resident card image, such as forgery of the face picture, addition of noise, variations of contrast, variations of intensity, and image blurring, and applied these images together with the original images in experiments. The experimental results showed that the proposed method is excellent in the recognition of individual codes and in face authentication for the automatic recognition of a resident card.

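The fuzzy-controlled, dynamically adjusted learning rate described in the abstract above can be illustrated with a minimal sketch. The membership functions, the rate bounds, and the helper names below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fuzzy_learning_rate(error, lo=0.01, hi=0.5):
    """Map the current output error to a learning rate with a simple
    fuzzy-style rule: large error -> rate near `hi`, small error -> near `lo`.
    The triangular memberships and the bounds are illustrative assumptions."""
    e = np.clip(abs(error), 0.0, 1.0)          # normalized error magnitude
    mu_large = e                               # membership in "error is large"
    mu_small = 1.0 - e                         # membership in "error is small"
    # weighted-average defuzzification of the two rules
    return (mu_large * hi + mu_small * lo) / (mu_large + mu_small)

def train_output_layer(phi, targets, epochs=50):
    """Output-layer training of an RBF network with a per-epoch adaptive rate.
    `phi` is the matrix of middle-layer (RBF) activations."""
    w = np.zeros((phi.shape[1], targets.shape[1]))
    for _ in range(epochs):
        pred = phi @ w
        err = targets - pred
        eta = fuzzy_learning_rate(np.mean(np.abs(err)))  # adjust rate each epoch
        w += eta * phi.T @ err / len(phi)                # gradient-style update
    return w

# toy usage: 5 samples, 3 RBF units, 2 output codes
phi = np.random.rand(5, 3)
targets = np.eye(5, 2)
w = train_output_layer(phi, targets)
```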

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions:PartA / v.11A no.4 / pp.243-250 / 2004
  • Monitoring is used to see whether a real-time system provides its services on time. Generally, monitoring for real-time systems focuses on investigating the current status of the system. To support stable performance, however, a real-time system should have not only a function to observe the current status of real-time processes but also a function to predict their executions. The legacy prediction model has several limitations when applied to real-time monitoring. First, it performs a static prediction only after a real-time process has finished. Second, it needs a statistical pre-analysis before a prediction. Third, the transition probabilities and the clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. This model removes unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction by a trend analysis of past execution data. Above all, we designed the model to support dynamic prediction performed during a real-time process' execution. The results of our experiments show that the judgment accuracy is greater than 80% when the size of the training set is over 10 and, in the case of multi-level prediction, that the prediction difference is minimized when the number of executions is larger than the size of the training set. The proposed execution prediction model has some limitations: it uses the simplest learning algorithm, and it does not consider a multi-regional space model managing CPU, memory and I/O data. The execution prediction model based on a learning algorithm proposed in this paper can be used in areas related to real-time monitoring and control.
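As a rough illustration of predicting the next execution time of a real-time process from a small training set of past executions, the sketch below fits a linear trend over a sliding window. The window size, the fitting method, and all names are assumptions for illustration, not the paper's algorithm.

```python
from collections import deque
import numpy as np

class ExecutionPredictor:
    """Predict the next execution time of a process from its recent history."""
    def __init__(self, training_set_size=10):
        self.history = deque(maxlen=training_set_size)  # sliding training set

    def record(self, execution_time_ms):
        self.history.append(execution_time_ms)

    def predict_next(self):
        """Fit a linear trend to the training set and extrapolate one step ahead."""
        if len(self.history) < 2:
            return self.history[-1] if self.history else None
        x = np.arange(len(self.history))
        slope, intercept = np.polyfit(x, list(self.history), deg=1)
        return slope * len(self.history) + intercept

# usage: feed measured execution times in, query the prediction online
predictor = ExecutionPredictor(training_set_size=10)
for t in [4.1, 4.3, 4.2, 4.6, 4.5]:        # measured execution times (ms)
    predictor.record(t)
print(predictor.predict_next())             # extrapolated next execution time
```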

A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE:Computing Practices and Letters / v.9 no.3 / pp.322-331 / 2003
  • As the complexity of a 3D game increases with the various factors of the game scenario, controlling the interrelation of the game objects becomes a problem. Therefore, a game system needs to coordinate the responses of the game objects. It is also necessary to control the animation behaviors of the game objects in terms of the game scenario. To produce realistic game simulations, a system has to include a structure for designing the interactions among the game objects. This paper presents a method that designs a dynamic control mechanism for the interaction of the game objects in the game scenario. For the method, we suggest a game agent system as a framework based on intelligent agents that can make decisions using specific rules. The game agent system is used to manage environment data, to simulate the game objects, to control interactions among game objects, and to support a visual authoring interface that can define various interrelations of the game objects. These techniques can handle the autonomy level of the game objects and the associated collision avoidance method, among others. It is also possible to give the game objects coherent decision-making ability about changes of the scene. In this paper, rule-based behavior control was designed to guide the simulation of the game objects. The rules are pre-defined by the user using the visual interface for designing their interactions. The Agent State Decision Network, which is composed of the visual elements, passes the information and infers the current state of the game objects. All of these methods can monitor and check variations of the motion state between game objects in real time. Finally, we present a validation of the control method together with a simple case-study example. In this paper, we design and implement supervised classification systems for high-resolution satellite images. The systems support various interfaces and statistical data of training samples so that we can select the most effective training data. In addition, new classification algorithms and satellite image formats can be added easily through the modularized systems. The classifiers consider the characteristics of the spectral bands from the selected training data and provide various supervised classification algorithms, including parallelepiped, minimum distance, Mahalanobis distance, maximum likelihood and fuzzy-theory classifiers. We used IKONOS images for the input and verified the systems for the classification of high-resolution satellite images.
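A minimal sketch of a rule-based state-decision loop of the kind the abstract describes; the rule format, the state names, and the class layout are assumptions for illustration, not the paper's Agent State Decision Network.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GameObject:
    name: str
    state: str = "idle"
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class Rule:
    """A user-defined rule: if `condition` holds for (subject, other), move subject to `next_state`."""
    condition: Callable[[GameObject, GameObject], bool]
    next_state: str

def distance(a: GameObject, b: GameObject) -> float:
    return sum((p - q) ** 2 for p, q in zip(a.position, b.position)) ** 0.5

# illustrative rules: avoid another object when it is close, otherwise stay idle
rules: List[Rule] = [
    Rule(condition=lambda s, o: distance(s, o) < 2.0, next_state="avoid"),
    Rule(condition=lambda s, o: distance(s, o) >= 2.0, next_state="idle"),
]

def decide_state(subject: GameObject, other: GameObject, rules: List[Rule]) -> str:
    """Evaluate rules in order; the first matching rule decides the next state."""
    for rule in rules:
        if rule.condition(subject, other):
            return rule.next_state
    return subject.state

npc = GameObject("npc", position=(0.0, 0.0, 0.0))
player = GameObject("player", position=(1.0, 0.5, 0.0))
npc.state = decide_state(npc, player, rules)   # -> "avoid"
print(npc.state)
```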

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA (XMLA를 사용한 OLAP과 데이타 마이닝 분석이 가능한 리포팅 툴의 구현)

  • Choe, Jee-Woong;Kim, Myung-Ho
    • Journal of KIISE:Computing Practices and Letters / v.15 no.3 / pp.154-166 / 2009
  • Database query and reporting tools, OLAP tools, and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating, and analyzing data produced from business operation activities and provides enterprise users with access to the results. Traditional reporting tools have the advantage of creating sophisticated dynamic reports, including SQL query result sets, that look like documents produced by word processors, and of publishing the reports to the Web, but their data sources are limited to RDBMSs. On the other hand, OLAP tools and data mining tools each provide powerful information analysis functions in their own way, but their built-in visualization components for analysis results are limited to tables and some charts. Thus, this paper presents a system that integrates these three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools only have a query editor for generating SQL statements to bring data from an RDBMS. The reporting tool presented in this paper, however, can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests are added to the tool. Traditional systems produce all documents on the server side; this structure lets reporting tools avoid repeatedly regenerating a document when many clients access the same dynamic document. Because this system instead targets a small number of users who generate documents for data analysis, the tool generates documents on the client side, and it therefore has a processing mechanism to handle large amounts of data despite the limited memory capacity of the report viewer on the client side. The reporting tool also has a data structure for integrating data from the three kinds of data sources into one document. Finally, most traditional front-end tools for BI depend on the data source architecture of a specific vendor. To overcome this problem, this system uses XMLA, a web-service-based protocol, to access OLAP and data mining services from various vendors.
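Since XMLA is a SOAP-based protocol, a query request is an XML envelope posted over HTTP. The sketch below shows a minimal XMLA Execute request carrying an MDX statement; the endpoint URL, catalog name, and cube name are hypothetical placeholders, and the exact property list a given OLAP or data mining server expects may differ.

```python
import requests

# Hypothetical endpoint and catalog; replace with a real XMLA provider's values.
XMLA_ENDPOINT = "http://olap.example.com/xmla"
MDX = "SELECT [Measures].[Sales] ON COLUMNS FROM [SalesCube]"

envelope = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command>
        <Statement>{MDX}</Statement>
      </Command>
      <Properties>
        <PropertyList>
          <Catalog>SalesDB</Catalog>
          <Format>Multidimensional</Format>
        </PropertyList>
      </Properties>
    </Execute>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    XMLA_ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml",
             "SOAPAction": "urn:schemas-microsoft-com:xml-analysis:Execute"},
)
print(response.status_code)
print(response.text[:500])   # the returned dataset, to be parsed and rendered in the report
```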

Trends in LCD Research and Development (LCD 연구 개발 동향)

  • 이종천
    • The Magazine of the IEIE / v.29 no.6 / pp.76-80 / 2002
  • After the phase transition and optical anisotropy of liquid crystals were reported by F. Reinitzer and O. Lehmann in 1888 and 1889 in Monatsch. Chem. and Z. Physikal. Chem., respectively, liquid crystals were treated only as a subject of basic laboratory research until the 1950s, after the end of World War II. In 1963, Williams filed the first patent application for a liquid crystal device, and in 1968 Heilmeier and colleagues at RCA invented the first DSM (Dynamic Scattering Mode) LCD (Liquid Crystal Display), exploiting the "dynamic scattering" phenomenon in which a transparent nematic liquid crystal turns turbid when a low-frequency voltage is applied. Although it failed to reach practical use because of its high driving voltage of over 150 V and excessive power consumption, the Guest-Host effect and the Memory effect were discovered along the way. By the 1970s, liquid crystal materials stable at room temperature had been synthesized (MBBA by H. Kelker, cyano-biphenyl liquid crystals by G. Gray), and with the invention of the CMOS transistor and advances in peripheral technologies such as transparent conductive films (ITO) and mercury batteries, commercialization of LCDs began in earnest. In 1971, M. Shadt, W. Helfrich, and J.L. Fergason invented the TN (Twisted Nematic) LCD, which was applied to electronic calculators and wristwatches, and in the late 1970s Sharp released a portable computer with a dot-matrix display. Because such simple-matrix TN LCDs were limited in quality for displaying graphic information, research on a-Si TFT (amorphous silicon thin-film transistor) LCDs was begun by Le Comber in the UK in 1979; the STN (Super Twisted Nematic) LCD was conceived by T.J. Scheffer, J. Nehring, and G. Waters in 1983; and ferroelectric LCDs, introduced by N. Clark and S. Lagerwall in 1980 and K. Yossino in 1983, contributed greatly to increasing the amount of information LCDs could display. Progress toward color displays was commercialized through A.G. Ficher's 1972 approach of attaching RGB (red, green, blue) filters outside the cell and the 1981 method of T. Uchida et al. of placing RGB filters inside the cell. In 1985 the polymer-dispersed LCD was invented by J.L. Fergason, prototypes of a-Si TFT LCDs capable of displaying moving images were developed in the mid-1980s, and full-scale mass production began in 1990. In the early 1990s, driven by the colorization, enlargement, and quality improvement of STN LCDs, LCDs came into full use in notebook PCs, and in the late 1990s TFT LCDs, having secured price competitiveness relative to their display quality, came to dominate the notebook PC market. Since then, enlargement of TFT LCDs has emerged as a key issue; in 1995 Samsung Electronics developed a 22-inch TFT LCD, the world's largest at the time. Poly-Si TFT LCDs were also developed for higher resolution, digitizer-integrated LCDs were commercialized and broadened the range of applications, and for larger sizes Canon developed 14.8-inch and 21-inch FLCDs in 1994. Tiled LCD technology is being developed as a route to larger sizes: in 1995 Sharp exhibited a 28-inch TFT LCD made by joining two 21-inch panels, in 1996 development of a 40-inch-class display made by joining four 21-inch panels was attempted, and recently, on the basis of improved LCD characteristics, better production equipment, and stable process control, Samsung Electronics developed a single-panel 40-inch TFT LCD. For projection displays, rear-projection and front-projection systems ranging from 25 to 100 inches have been developed using poly-Si TFT LCDs and are leading the large-screen TV market. With the arrival of the digital broadcasting era in the 21st century, consumer electronics makers around the world are concentrating on mass production to capture the next-generation ultra-thin TV market, the so-called "wall-hanging TV" market of plasma display panel (PDP) TVs, liquid crystal display (LCD) TVs, and ferroelectric liquid crystal (FLCD) TVs, which is expected to reach roughly 15 million units by 2005. Even once the wall-hanging TV market takes shape, PDP TVs and LCD TVs are not expected to compete directly, because the prevailing view is that when the digital TV market opens in earnest, LCD TVs will lead the mid-to-large segment below 40 inches while PDP TVs will lead the large-screen segment above 40 inches. However, these direct-view mid-to-large displays are still so expensive that replacing today's CRT TVs is expected to take considerable time; as an alternative, high-resolution projection TVs, which can render high-quality digital images at relatively low cost, are considered promising. For such high-resolution projection TVs, commercialization of DMD (Digital Micro-mirror Display), poly-Si TFT LCD, and LCOS (Liquid Crystal on Silicon) technologies is under way. With advances in the Internet and information and communication technologies, the market for portable displays is growing unexpectedly fast, and the required display quality is steadily expanding beyond simple character display to high-resolution graphics and video, color, and even three-dimensional images. As shown in Table 1, the LCD market is expected to keep growing in each application area, and the growth potential of new application areas can also be estimated to some extent. Accordingly, LCD R&D can be divided broadly into two directions: first, reducing cost and improving display quality to strengthen the competitiveness of LCD products already in mass production; and second, developing new types of LCDs that replace existing products or create new markets. From this perspective, current LCD technology development can be classified as follows: 1) cost reduction, 2) performance improvement, and 3) development of new types of LCDs.


A Lower Bound Estimation on the Number of Micro-Registers in Time-Multiplexed FPGA Synthesis (시분할 FPGA 합성에서 마이크로 레지스터 개수에 대한 하한 추정 기법)

  • 엄성용
    • Journal of KIISE:Computer Systems and Theory / v.30 no.9 / pp.512-522 / 2003
  • For a time-multiplexed FPGA, a circuit is partitioned into several subcircuits so that they temporally share the same physical FPGA device through hardware reconfiguration. In these architectures, all the hardware reconfiguration information, called contexts, is generated and downloaded into the chip, and then the pre-scheduled context switches occur properly and on time. Typically, the size of the chip required to implement a circuit depends both on the maximum number of LUT blocks required to implement the function of each subcircuit and on the maximum number of micro-registers needed to store results across context switches at the same time. Therefore, many partitioning and synthesis methods try to minimize these two factors. In this paper, we present a new estimation technique to find the lower bound on the number of micro-registers that can be obtained by any synthesis method, without performing any actual synthesis or design space exploration. The lower bound estimation is very important in the sense that it greatly helps to evaluate the results of previous work and even future work. If the estimated lower bound exactly matches the number in an actual design result, the result is guaranteed to be optimal. If they do not match, two cases are possible: a better (tighter) lower bound may be estimated, or a new synthesis result better than those of the previous work may be found. Our experimental results show that there are some differences between the numbers of micro-registers and our estimated lower bounds. One reason for these differences seems to be that our estimation tries to estimate the result with the minimum number of micro-registers among all possible candidates regardless of the usage of other resources such as LUTs, while the previous work takes both LUTs and micro-registers into account. It also implies that our method may have some limitations on exact estimation, because the problem itself is much more complicated than LUT estimation and thus needs further improvement, and there may exist other synthesis results better than those of the previous work.
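To make the notion of a micro-register lower bound concrete, the sketch below counts, for a fixed partition into contexts, how many values must survive each context boundary; the maximum over boundaries is a crude lower bound. The DAG-free representation, the liveness rule, and all names are assumptions for illustration, not the paper's estimation technique.

```python
from typing import Dict, List

def micro_register_lower_bound(
    producers: Dict[str, int],          # value name -> context index that produces it
    consumers: Dict[str, List[int]],    # value name -> context indices that consume it
    num_contexts: int,
) -> int:
    """A crude lower bound: any value produced in one context and consumed in a
    later one must occupy a micro-register at every context boundary it crosses,
    so the bound is the maximum number of values live across any single boundary."""
    live_across = [0] * max(num_contexts - 1, 0)   # boundary b lies between context b and b+1
    for value, prod in producers.items():
        last_use = max(consumers.get(value, [prod]))
        for b in range(prod, last_use):            # boundaries the value must survive
            live_across[b] += 1
    return max(live_across, default=0)

# usage with a toy partition into 3 contexts
producers = {"a": 0, "b": 0, "c": 1}
consumers = {"a": [1], "b": [2], "c": [2]}
print(micro_register_lower_bound(producers, consumers, num_contexts=3))  # -> 2
```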

Personalized Exhibition Booth Recommendation Methodology Using Sequential Association Rule (순차 연관 규칙을 이용한 개인화된 전시 부스 추천 방법)

  • Moon, Hyun-Sil;Jung, Min-Kyu;Kim, Jae-Kyeong;Kim, Hyea-Kyeong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.195-211 / 2010
  • An exhibition is defined as a market event of specific duration at which exhibitors present their main product range to business or private visitors, and it also plays a key role as an effective marketing channel. Especially since the opinions of visitors after an exhibition directly impact the sales or image of companies, exhibition organizers must consider the various needs of visitors. To meet those needs, ubiquitous technologies have been applied in some exhibitions. However, despite the development of ubiquitous technologies, their services cannot always reflect visitors' preferences because they only generate information when visitors request it. As a result, they have reached their limit in meeting the needs of visitors, which may consequently lead to a loss of marketing opportunities. Recommender systems are a suitable way to overcome these limitations: they can recommend booths that coincide with visitors' preferences and thereby help visitors who have difficulty making choices in an exhibition environment. One of the most successful and widely used technologies for building recommender systems is Collaborative Filtering. Traditional recommender systems, however, use only neighbors' evaluations or behaviors for a personalized prediction; they therefore cannot reflect visitors' dynamic preferences and lack accuracy in an exhibition environment. Although there is much useful information for inferring visitors' preferences in a ubiquitous environment (e.g., visitors' current location, booth visit path, and so on), they use only limited information for recommendation. In this study, we propose a booth recommendation methodology using sequential association rules, which considers the sequence of visits. Recent studies of sequential association rules use constraints to improve performance. However, since traditional sequential association rule mining considers all rules for recommendation, it has a scalability problem when applied to a large-scale exhibition. To solve this problem, our methodology builds a confidence database before the recommendation process. To build the confidence database, we first search for preceding rules whose frequency is above a threshold. Next, we compute the confidence of each preceding rule with respect to each booth not contained in that rule. The confidence database therefore holds two kinds of information: the preceding rules and their confidence for each booth. In the recommendation process, we simply generate the preceding rules of the target visitors based on their visit records and recommend booths according to the confidence database. Through these steps, we expect to reduce the time spent on the recommendation process. To evaluate the proposed methodology, we use real booth visit records collected by RFID technology at an IT exhibition; the records also contain the visit sequence of each visitor. We compare the performance of the proposed methodology with a traditional Collaborative Filtering system. As a result, our methodology generally shows higher performance than traditional Collaborative Filtering. Some features are also visible in the experimental results. First, it shows the highest performance when recommending a single booth. It detects preceding rules from some portion of the visitors; therefore, if a visitor moves with a very different pattern from the rest of the visitors, it cannot give a correct recommendation for him or her even if we increase the number of recommendations. Trained on the whole set of visitors, it cannot correctly give recommendations to visitors who have a unique path. Second, the performance of general recommendation systems increases as time passes; our methodology, however, shows higher performance with limited information such as one or two time periods. Therefore, it can produce recommendations even when there is not much information about the target visitor's booth visit records, and it uses only a small amount of information in the recommendation process, so we expect that it can give real-time recommendations in an exhibition environment. Overall, our methodology shows higher performance than traditional Collaborative Filtering systems, and we expect that it could be applied in booth recommendation systems to satisfy visitors in an exhibition environment.
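The confidence-database construction and lookup described above can be sketched roughly as follows; the rule length, the support threshold, and the data layout are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter, defaultdict

def build_confidence_db(visit_sequences, min_support=2, rule_len=2):
    """Precompute confidence(preceding rule -> booth) for frequent preceding rules.
    A 'preceding rule' here is an ordered tuple of `rule_len` booths visited in sequence."""
    rule_count = Counter()
    rule_then_booth = defaultdict(Counter)
    for seq in visit_sequences:
        for i in range(len(seq) - rule_len):
            rule = tuple(seq[i:i + rule_len])
            rule_count[rule] += 1
            for booth in seq[i + rule_len:]:          # booths visited after the rule
                rule_then_booth[rule][booth] += 1
    confidence_db = {}
    for rule, cnt in rule_count.items():
        if cnt >= min_support:                        # keep only frequent preceding rules
            confidence_db[rule] = {b: c / cnt for b, c in rule_then_booth[rule].items()}
    return confidence_db

def recommend(confidence_db, recent_visits, top_k=1, rule_len=2):
    """Recommend booths by looking up the visitor's most recent preceding rule."""
    rule = tuple(recent_visits[-rule_len:])
    scores = confidence_db.get(rule, {})
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

visits = [["A", "B", "C", "D"], ["A", "B", "D"], ["B", "C", "D"]]
db = build_confidence_db(visits)
print(recommend(db, ["A", "B"]))   # -> ['D'] (highest confidence after the rule (A, B))
```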

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.119-142 / 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulty in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address the issues associated with information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, a clustering model or a dependency network model. This filtering technique not only alleviates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such a tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model because of the high costs involved; cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into CBCF to propose predictive clustering-based CF (PCCF), which addresses the issues of reduced coverage and unstable performance. The method improves performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users. Furthermore, the issue of reduced coverage is also improved by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using the patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC and PCCF under an environment in which data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of the subsequent changes in system performance. The test results revealed that the suggested method produced an insignificant improvement in performance in comparison with the existing techniques, and it failed to achieve a significant improvement in the standard deviation, which indicates the degree of data fluctuation. Notwithstanding, it resulted in a marked improvement over the existing techniques in terms of the range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation performance, which failed to show a significant improvement over the existing techniques; future work will consider introducing a high-dimensional parameter-free clustering algorithm or a deep-learning-based model to improve recommendation performance.
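A rough sketch of the general idea of combining soft cluster membership with Markov transition probabilities to predict a user's next preference state; the two-cluster setup, the membership values, and all names are illustrative assumptions rather than the PCCF algorithm itself.

```python
import numpy as np

# Assumed fuzzy membership of a user in K preference clusters (sums to 1).
membership = np.array([0.7, 0.3])            # current soft assignment to clusters 0 and 1

# Assumed Markov transition probabilities between preference clusters,
# estimated from observed changes in users' review scores over time.
transition = np.array([[0.8, 0.2],
                       [0.4, 0.6]])          # transition[i, j] = P(next cluster j | current i)

# Assumed average item ratings per cluster (cluster "prototypes") for two items.
cluster_item_rating = np.array([[4.5, 2.0],  # cluster 0 tends to like item A, not item B
                                [2.5, 4.0]]) # cluster 1 the opposite

# Predict the user's next-period cluster distribution, then the expected ratings.
next_membership = membership @ transition            # propagate preferences one step
predicted_ratings = next_membership @ cluster_item_rating
print(next_membership)      # -> [0.68 0.32]
print(predicted_ratings)    # expected ratings for items A and B under the predicted clusters
```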

Stand-alone Real-time Healthcare Monitoring Driven by Integration of Both Triboelectric and Electro-magnetic Effects (실시간 헬스케어 모니터링의 독립 구동을 위한 접촉대전 발전과 전자기 발전 원리의 융합)

  • Cho, Sumin;Joung, Yoonsu;Kim, Hyeonsu;Park, Minseok;Lee, Donghan;Kam, Dongik;Jang, Sunmin;Ra, Yoonsang;Cha, Kyoung Je;Kim, Hyung Woo;Seo, Kyoung Duck;Choi, Dongwhi
    • Korean Chemical Engineering Research / v.60 no.1 / pp.86-92 / 2022
  • Recently, the bio-healthcare market has been expanding worldwide for various reasons, including the COVID-19 pandemic. Within it, biometric measurement and analysis technologies are expected to bring about future technological innovation and socio-economic ripple effects. Existing systems require a large-capacity battery to drive the signal processing, the wireless transmission, and the operating system involved in the process. However, the limited battery capacity places spatio-temporal constraints on the use of the device. This limitation can interrupt the data stream required for the user's healthcare monitoring, so it is one of the major obstacles for healthcare devices. In this study, we report the concept of a stand-alone healthcare monitoring module based on both triboelectric and electromagnetic effects, which converts biomechanical energy into suitable electric energy. The proposed system can be operated independently without an external power source. In particular, the wireless foot-pressure monitoring system, built around a rationally designed triboelectric sensor (TES), can recognize the user's walking habits through foot pressure measurement. By applying the triboelectric effect to the contact-separation behavior that occurs during walking, an effective foot pressure sensor was made; the performance of the sensor was verified through its electrical output signal as a function of pressure, and its dynamic behavior was measured through a signal processing circuit using a capacitor. In addition, the biomechanical energy dissipated during walking is harvested as electrical energy by means of the electromagnetic induction effect and used as a power source for wireless transmission and signal processing. Therefore, the proposed system has great potential to reduce the inconvenience of charging caused by limited battery capacity and to overcome the problem of data disconnection.

Effects of Motion Correction for Dynamic $[^{11}C]Raclopride$ Brain PET Data on the Evaluation of Endogenous Dopamine Release in Striatum (동적 $[^{11}C]Raclopride$ 뇌 PET의 움직임 보정이 선조체 내인성 도파민 유리 정량화에 미치는 영향)

  • Lee, Jae-Sung;Kim, Yu-Kyeong;Cho, Sang-Soo;Choe, Yearn-Seong;Kang, Eun-Joo;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Kim, Sang-Eun
    • The Korean Journal of Nuclear Medicine / v.39 no.6 / pp.413-420 / 2005
  • Purpose: Neuroreceptor PET studies require 60-120 minutes to complete, and head motion of the subject during the PET scan increases the uncertainty in the measured activity. In this study, we investigated the effects of data-driven head motion correction on the evaluation of endogenous dopamine release (DAR) in the striatum during a motor task that might have caused significant head motion artifacts. Materials and Methods: $[^{11}C]raclopride$ PET scans of 4 normal volunteers acquired with a bolus plus constant infusion protocol were retrospectively analyzed. Following a 50 min resting period, the participants played a video game with a monetary reward for 40 min. Dynamic frames acquired during the equilibrium condition (pre-task: 30-50 min, task: 70-90 min, post-task: 110-120 min) were realigned to the first frame of the pre-task condition. Intra-condition registrations between the frames were performed, and an average image for each condition was created and registered to the pre-task image (inter-condition registration). The pre-task PET image was then co-registered to each participant's own MRI, and the transformation parameters were reapplied to the other images. Volumes of interest (VOI) for the dorsal putamen (PU) and caudate (CA), ventral striatum (VS), and cerebellum were defined on the MRI. Binding potential (BP) was measured, and DAR was calculated as the percent change of BP during and after the task. SPM analyses on the BP parametric images were also performed to explore regional differences in the effects of head motion on BP and DAR estimation. Results: Changes in the position and orientation of the striatum during the PET scans were observed before head motion correction. BP values in the pre-task condition were not changed significantly after the intra-condition registration. However, the BP values during and after the task, as well as DAR, were significantly changed after the correction. SPM analysis also showed that the extent and significance of the BP differences were significantly changed by the head motion correction, and such changes were prominent in the periphery of the striatum. Conclusion: The results suggest that the misalignment between the MRI-based VOIs and the striatum in the PET images and the incorrect DAR estimation caused by head motion during the PET activation study were significant but could be remedied by data-driven head motion correction.
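As a small numerical illustration of the quantities named in the abstract, the sketch below computes binding potential from striatal and cerebellar activities under the equilibrium (bolus-plus-infusion) condition and DAR as the percent change of BP; the ratio formula and the sign convention are common conventions assumed here, not necessarily the exact ones used in the paper.

```python
def binding_potential(striatum_activity, cerebellum_activity):
    """Equilibrium ratio estimate: BP = C_striatum / C_cerebellum - 1,
    with the cerebellum serving as the reference (non-specific) region."""
    return striatum_activity / cerebellum_activity - 1.0

def dopamine_release(bp_rest, bp_task):
    """DAR expressed as the percent change (reduction) of BP from rest to task."""
    return (bp_rest - bp_task) / bp_rest * 100.0

# toy VOI activities (arbitrary units) before and during the video-game task
bp_rest = binding_potential(striatum_activity=30.0, cerebellum_activity=10.0)  # -> 2.0
bp_task = binding_potential(striatum_activity=28.0, cerebellum_activity=10.0)  # -> 1.8
print(dopamine_release(bp_rest, bp_task))   # -> 10.0 (% change in BP)
```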