• Title/Summary/Keyword: Current matching


Research on successful model application of Indonesian Cultural Content "Batik" to E-biz/local Informatization "Green Smart Village" (인도네시아 문화콘텐츠 "바틱"을 통한 e-비즈/지역정보화 "그린스마트빌리지" 성공모델 적용에 관한 연구)

  • Lee, Eun-Ryoung;Kim, Kio-Chung
    • Journal of Digital Contents Society
    • /
    • v.12 no.4
    • /
    • pp.601-609
    • /
    • 2011
  • In developing countries, the economically under-privileged are mostly women, so supporting these women means supporting both the local community and the family. The advancement of women's economic status contributes not only to the family but also to the local community, the nation, and the world as a whole. This paper presents research on local informatization and a successful e-business model for Indonesia: over two years it established an interactive research network linking Korean and Indonesian research institutions and policy makers, and suggested a practical research model through visits to Pekalongan. Through active interaction between women-led enterprises and policy makers from Korea and Indonesia, the study seeks to create a research-based network and to provide opportunities for information access and business matching to local informatization and e-business enterprises. In the adopted region, a city development project covering human, business, and environmental fields has been carried out since 2005, and Pekalongan was selected because its infrastructure is settled to a certain extent. Drawing on information about Indonesia's city development projects and an investigation of Pekalongan's current geographical and human conditions, the paper aims to develop Indonesian female e-business professionals, users, and producers, and to provide a successful model for Pekalongan's local informatization and e-business.

Highly Linear Wideband LNA Design Using Inductive Shunt Feedback (Inductive Shunt 피드백을 이용한 고선형성 광대역 저잡음 증폭기)

  • Jeonng, Nam Hwi;Cho, Choon Sik
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.24 no.11
    • /
    • pp.1055-1063
    • /
    • 2013
  • Low noise amplifiers (LNAs) are an integral component of RF receivers and are frequently required to operate over wide frequency bands for various wireless systems. For wideband operation, important performance metrics such as voltage gain, return loss, noise figure, and linearity have been carefully investigated and characterized for the proposed LNA. An inductive shunt feedback configuration is employed in the input stage of the proposed LNA, which incorporates cascaded networks with a peaking inductor in the buffer stage. Design equations for obtaining the low and high input-matching frequencies are easily derived, leading to a relatively simple method for circuit implementation. Theoretical analysis characterizes the poles and zeros and shows how they are utilized to realize the wideband response. Linearity is significantly improved because the inductor between gate and drain decreases the third-order harmonics at the output. Fabricated in a 0.18 μm CMOS process, the LNA occupies a chip area of 0.202 mm², including pads. Measurement results show an input return loss below -7 dB, a voltage gain greater than 8 dB, and a somewhat high noise figure of about 7~8 dB over 1.5~13 GHz. In addition, good linearity (IIP3) of 2.5 dBm is achieved at 8 GHz, and 14 mA of current is drawn from a 1.8 V supply.
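The shunt-feedback input match the abstract refers to can be illustrated with the standard Miller decomposition. This is a generic textbook relation under assumed notation (feedback impedance $Z_F$, inverting gain $-A_v$), not the paper's exact design equations:

```latex
% Generic shunt--shunt feedback relation (an assumption, not the paper's
% derivation): a feedback impedance Z_F = R_F + sL_F between gate and drain
% of a stage with inverting gain -A_v is Miller-reduced at the input to
\[
  Z_{\mathrm{in}}(s) \;\approx\; \frac{Z_F(s)}{1 + A_v}
  \qquad\text{with}\qquad Z_F(s) = R_F + sL_F .
\]
```

Because $Z_F$ grows with frequency, the reflected input impedance is frequency dependent, which is what allows separate low and high input-matching frequencies to be placed by the choice of $L_F$ and $R_F$.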

Education Needs for Home Care Nurse (가정간호 교육요구도 조사 연구)

  • Kim Cho-Ja;Kang Kyu-Sook;Baek Hee-Chon
    • Journal of Korean Academy of Fundamentals of Nursing
    • /
    • v.6 no.2
    • /
    • pp.228-239
    • /
    • 1999
  • In 1990, home care education programs started when legislation established certification for home care nurses. The Ministry of Health and Welfare proposed a home care education curriculum of 352 class hours, including 248 hours of 'family nursing and practice'. Although home care education programs have been offered in 11 home care educational institutes, there has been no formal revision of the curriculum. First and second home care demonstration projects have also been carried out, but there has been no research on how home care education translates into home care practice. The purposes of this study were to identify the content areas home care nurses perceive as important, to identify their clinical competence in each of these areas, and from these to identify education needs. The sample was 107 home care nurses working in home care demonstration hospitals and community-based institutions offering home care services. Responses were received from 88 nurses, an 82.2% return rate, and 86 were included in the final analysis. The instrument was a modification of the instrument developed by Caie-Lawrence et al. (1995) and Moon's (1991) instrument on home care knowledge; its Cronbach's coefficient was 0.982. Among the respondents, 64% worked at home care demonstration hospitals and 36% at community-based institutions. Their home care experience ranged from one month to six years, with a mean of 20.6 months. The mean importance rating for home care education content was 3.42 ± 0.325, which is relatively high. Technical aspects of home care were identified as the most important; five items, 'education skill', 'counseling skill', 'interview skill', 'wound care skill', and 'bed sore care skill', received 100% importance ratings.
The mean competency rating was 2.87 ± 0.367; 'technical aspects of home care' was rated highest and 'application to home care skill' lowest. Education needs were identified by comparing the importance ratings with the competency ratings. Eleven items ranked highest in importance and eleven items lowest in competency. High importance matched with low competency would indicate a training need, but no items matched on both criteria in this study. Of the lowest-competency items, four were excluded as not applicable in current home care practice. Therefore eighteen items in total were identified as home care education needs: 'bed sore care skill', 'malpractice', 'wound care skill', 'general infection control', 'change and management of tracheostomy tubes', 'CVA patient care', 'hospice care', 'pain management', 'urinary catheterization and management', 'L-tube insertion and management', 'respirator use and management skill', 'infant care', 'prevention of burnout', 'child assessment', 'CAPD', 'infant assessment', 'computer literacy', and 'psychiatry patient care'.
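The gap analysis described above, crossing high importance with low competency, can be sketched as follows. Item names, rating values, and cutoffs here are illustrative, not the study's data:

```python
# Sketch of an importance-vs-competency gap analysis: items rated high in
# importance but low in competency are flagged as education needs.
# All names, values, and cutoffs below are hypothetical.

def education_needs(items, importance_cutoff=3.5, competency_cutoff=2.5):
    """Return items whose importance is high and competency is low."""
    return [name for name, (imp, comp) in items.items()
            if imp >= importance_cutoff and comp <= competency_cutoff]

ratings = {  # item: (mean importance, mean competency)
    "bed sore care skill": (4.0, 2.1),
    "counseling skill":    (3.9, 3.4),   # important, but competency is adequate
    "computer literacy":   (3.1, 1.9),   # low competency, but not important enough
}

print(education_needs(ratings))  # only items clearing both cutoffs
```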


Detecting Cadastral Discrepancy Method based on MMAS (MMAS 기법에 의한 지적불부합지 탐색기법)

  • Cho, Sung-Hwan;Huh, Yong
    • Journal of Cadastre & Land InformatiX
    • /
    • v.45 no.2
    • /
    • pp.149-160
    • /
    • 2015
  • This paper proposes the MMAS (Map Matching using Additional Surveying) method to improve the cadastral discrepancy search algorithm, which currently does not correct mis-represented parcel data. MMAS searches for cadastral discrepancies after correcting mis-represented parcel data using nearby anchor points confirmed by surveys. It first transforms the coordinate system of the digital cadastral map by fitting anchor points obtained in field surveying to the corresponding building edges and facility points on the digital topographic map. It then searches for cadastral discrepancies by checking whether the area differences exceed a tolerance limit. This improves the current search method by first correcting the distortion of the digital cadastral map, which helps identify discrepancies that are not detectable within the distorted map. In our experiment, this method identified more discrepancies than the method without distortion correction. We believe it can support the national cadastral re-survey by identifying potential cadastral discrepancies more accurately.
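The two MMAS steps above, fitting a transform from anchor points and then applying an area-difference test, can be sketched as below. The 2-D similarity (Helmert) transform, the toy coordinates, and the 5% tolerance are assumptions for illustration, not the paper's parameters:

```python
# Sketch of the MMAS idea: (1) fit a 2-D similarity (Helmert) transform from
# surveyed anchor-point pairs, (2) apply it to a cadastral parcel, (3) flag a
# discrepancy when the area difference against the reference exceeds a
# tolerance. Coordinates and tolerance are illustrative only.

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform mapping src points to dst."""
    n = len(src)
    mx = sum(p[0] for p in src) / n; my = sum(p[1] for p in src) / n
    mu = sum(p[0] for p in dst) / n; mv = sum(p[1] for p in dst) / n
    s = num_a = num_b = 0.0
    for (x, y), (u, v) in zip(src, dst):
        dx, dy, du, dv = x - mx, y - my, u - mu, v - mv
        num_a += dx * du + dy * dv          # rotation/scale cosine term
        num_b += dx * dv - dy * du          # rotation/scale sine term
        s += dx * dx + dy * dy
    a, b = num_a / s, num_b / s
    tx, ty = mu - a * mx + b * my, mv - b * mx - a * my
    return lambda p: (a * p[0] - b * p[1] + tx, b * p[0] + a * p[1] + ty)

def shoelace_area(poly):
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))) / 2

def is_discrepant(parcel, ref_area, transform, tol=0.05):
    """True if the transformed parcel's area differs from the reference by
    more than the relative tolerance."""
    moved = [transform(p) for p in parcel]
    return abs(shoelace_area(moved) - ref_area) / ref_area > tol

# Anchors: the cadastral map is shifted by (10, 5) relative to the topo map.
anchors_cad = [(0, 0), (100, 0), (100, 100), (0, 100)]
anchors_topo = [(10, 5), (110, 5), (110, 105), (10, 105)]
t = fit_similarity(anchors_cad, anchors_topo)

parcel = [(20, 20), (60, 20), (60, 60), (20, 60)]   # 40 x 40 parcel
print(is_discrepant(parcel, ref_area=1600.0, transform=t))  # shift preserves area
```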

Non Duplicated Extract Method of Heterogeneous Data Sources for Efficient Spatial Data Load in Spatial Data Warehouse (공간 데이터웨어하우스에서 효율적인 공간 데이터 적재를 위한 이기종 데이터 소스의 비중복 추출기법)

  • Lee, Dong-Wook;Baek, Sung-Ha;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.2
    • /
    • pp.143-150
    • /
    • 2009
  • A spatial data warehouse is a system that manages data produced through an ETL step from spatial DBMSs and various other data sources. During loading, spatial data duplicated within the same subject, unlike aspatial data, are not useful and, given the size of spatial data, waste storage space. In addition, when source data are extracted from heterogeneous systems with different spatial types and schemas, a dedicated spatial extraction method is required. Existing methods load a formalized data set by address-matching the extracted spatial data against a standard geocoding DB. However, these methods incur the cost of comparing all extracted data with the geocoding DB, and when integrating spatial data by subject they do not consider data duplicated across heterogeneous spatial DBMSs. This paper proposes an efficient extraction method that integrates the update queries extracted from heterogeneous source systems at the data warehouse constructor. The method avoids unnecessary extraction cost by selecting only the relevant update queries, such as insertions and deletions, from those generated up to the current load point, and eliminates and integrates the extracted spatial data using the update queries in the source spatial DBMSs. The proposed method reduces the storage space wasted by duplicate storage and supports rapid spatial data analysis by loading integrated data at each load point.
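The query-consolidation idea above, keeping only the net effect of the update queries accumulated since the last load, can be sketched as follows. The log format `(op, object id, payload)` is an assumption for illustration:

```python
# Sketch of non-duplicated extraction: scan the update-query log collected
# since the last load and keep only the net effect per object, so an object
# inserted and later deleted in the same period is never extracted.
# (Deletes of objects loaded in earlier periods would additionally need to
# be forwarded; omitted for brevity.) The log format is hypothetical.

def net_updates(query_log):
    """Collapse a chronological log of ('insert'|'delete', oid, data)
    entries to at most one surviving version per object."""
    state = {}
    for op, oid, data in query_log:
        if op == "insert":
            state[oid] = data           # latest version wins
        elif op == "delete":
            state.pop(oid, None)        # cancels any earlier insert
    return state

log = [
    ("insert", "road-17", {"geom": "LINESTRING(...)"}),
    ("insert", "park-3",  {"geom": "POLYGON(...)"}),
    ("delete", "road-17", None),        # road-17 never reaches the load
]
print(sorted(net_updates(log)))  # ['park-3']
```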


An Experimental Study on the 3-D Image Restoration Technique of Submerged Area by Chung-ju Dam (충주댐 수몰지구의 3차원 영상복원 기법에 관한 실험적 연구)

  • 연상호
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.22 no.1
    • /
    • pp.21-27
    • /
    • 2004
  • It would be truly good news for the people who lost their hometowns to the construction of a large dam if the area could be restored to its former state. Focusing on the area around Cheong-pung, most of which was submerged by the Chungju Dam built in the early 1980s, this study used remote sensing image restoration techniques to restore the topography as it was before the flooding, with stereo effects. We gathered comparatively good satellite photos and remotely sensed digital images, then created a new fusion image from these various satellite images and the topographic map made before the dam was filled, combining two kinds of images from different times. We then generated a DEM covering the area and its outskirts by matching current contour lines with the map. Perspective images rendered from all directions, north, south, east, and west, gave a complete 3-D view of the test area as it was before flooding. A flight simulation built from close-range views also lets viewers experience that space as it was at the time. As a result of this experimental work, new fusion images, 3-D perspective images, and simulated live images were produced from remotely sensed photos and images and old paper maps of the vanished submerged area, demonstrating the possibility of 3-D terrain image restoration for areas submerged by large dam construction.
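Generating a DEM from digitized contour lines, as described above, requires interpolating elevations between contour points. A minimal sketch using inverse-distance weighting, one common gridding choice, not necessarily the paper's, with invented sample points:

```python
# Minimal sketch (not the paper's workflow) of estimating a DEM cell from
# digitized contour-line points by inverse-distance weighting, the kind of
# step needed before rendering perspective views of the restored terrain.

def idw(points, x, y, power=2):
    """Inverse-distance-weighted elevation at (x, y) from (px, py, z) samples."""
    num = den = 0.0
    for px, py, z in points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return z                      # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * z
        den += w
    return num / den

# Hypothetical points from a 100 m contour and a 120 m contour.
contour_pts = [(0, 0, 100.0), (10, 0, 100.0), (0, 10, 120.0), (10, 10, 120.0)]
print(idw(contour_pts, 5, 5))   # midway between the two contours
```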

A Study on Effective Moving Object Segmentation and Fast Tracking Algorithm (효율적인 이동물체 분할과 고속 추적 알고리즘에 관한 연구)

  • Jo, Yeong-Seok;Lee, Ju-Sin
    • The KIPS Transactions:PartB
    • /
    • v.9B no.3
    • /
    • pp.359-368
    • /
    • 2002
  • In this paper, we propose an effective boundary extraction algorithm for moving objects that matches the error image and motion vectors, and a fast tracking algorithm that uses partial boundary lines. We extracted the boundary of a moving object by generating seeds with a probability distribution function based on the Watershed algorithm, extending those seeds into a boundary, and then applying the motion vectors. Tracking uses a part of the boundary lines as features: a portion of the object's boundary in every direction is set up as the initial feature vector, and the object is tracked in the current frame using the feature vector from the previous frame. Simulations of tracking on real images showed that the proposed algorithm is simple, because it tracks only the boundary line as a feature, in contrast to traditional active contour tracking, whose processing cost varies with the length of the boundary. The number of operations was reduced by about 39% compared with full-search BMA. The tracking error was less than 4 pixels when the feature vector was 15×5, using the every-direction boundary information, and the proposed algorithm needed only 200 search operations.
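The full-search BMA baseline that the abstract compares against can be sketched as below: exhaustively testing every displacement in a search window and keeping the one that minimizes the sum of absolute differences (SAD). Frame contents and sizes are toy values:

```python
# Illustrative full-search block matching (the BMA baseline): find the
# motion vector minimizing the sum of absolute differences (SAD) within a
# small search window. Frames are plain 2-D lists with toy sizes.

def sad(frame, ref, top, left, block):
    """SAD between ref and the block of frame anchored at (top, left)."""
    return sum(abs(frame[top + i][left + j] - ref[i][j])
               for i in range(block) for j in range(block))

def full_search(prev, cur, top, left, block=2, radius=2):
    """Best (dy, dx) moving the block at (top, left) in prev into cur."""
    ref = [row[left:left + block] for row in prev[top:top + block]]
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(cur) - block and 0 <= x <= len(cur[0]) - block:
                cost = sad(cur, ref, y, x, block)
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best[1], best[2]

# A 2x2 bright patch shifted by (1, 1) between frames.
prev = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
prev[2][2] = prev[2][3] = prev[3][2] = prev[3][3] = 9
cur[3][3] = cur[3][4] = cur[4][3] = cur[4][4] = 9
print(full_search(prev, cur, top=2, left=2))  # (1, 1)
```

The proposed partial-boundary tracker replaces this exhaustive pixel-block search with matching on a short boundary feature, which is where the reported ~39% operation reduction comes from.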

A Smart Image Classification Algorithm for Digital Camera by Exploiting Focal Length Information (초점거리 정보를 이용한 디지털 사진 분류 알고리즘)

  • Ju, Young-Ho;Cho, Hwan-Gue
    • Journal of the Korea Computer Graphics Society
    • /
    • v.12 no.4
    • /
    • pp.23-32
    • /
    • 2006
  • In recent years the digital camera has become so popular that users can easily collect hundreds of photos in a single session, and managing hundreds of digital photos is not as simple as keeping paper photos. Managing and classifying large numbers of digital photo files is burdensome and sometimes annoying, so people want an automated system for managing digital photos for their own purposes. Previous studies, e.g. content-based image retrieval, focused on clustering general images and do not carry over directly to digital photo clustering and classification. Recently, specialized algorithms for clustering digital camera images were proposed; they mainly exploit the statistics of the time gaps between consecutive photos. Though they show quite good clustering results, much room for improvement remains; for example, current tools completely ignore the image transformations caused by different focal lengths. In this paper, we present a photo classification method that considers the focal length information recorded in EXIF. We propose an algorithm based on MVA (Matching Vector Analysis) for classifying digital photos taken in everyday activity. Our experiments show a success rate of more than 95%, which is competitive with all available methods in terms of sensitivity, specificity, and flexibility.
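The time-gap statistic that the earlier clustering tools exploit can be sketched as follows; the MVA and focal-length handling of the paper itself are not reproduced here, and the timestamps and 30-minute threshold are illustrative:

```python
# Sketch of time-gap photo clustering (the prior technique the abstract
# builds on): photos are split into events wherever the gap between
# consecutive timestamps exceeds a threshold. Epoch seconds are toy values.

def cluster_by_time(timestamps, max_gap=1800):
    """Group sorted timestamps into clusters separated by > max_gap seconds."""
    clusters, current = [], [timestamps[0]]
    for prev, ts in zip(timestamps, timestamps[1:]):
        if ts - prev > max_gap:
            clusters.append(current)
            current = []
        current.append(ts)
    clusters.append(current)
    return clusters

shots = [0, 60, 120, 7200, 7260, 20000]   # two long gaps -> three events
print([len(c) for c in cluster_by_time(shots)])  # [3, 2, 1]
```

In practice the timestamps (and focal lengths) would come from EXIF metadata rather than a hand-written list.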


A Dynamic Service Supporting Model for Semantic Web-based Situation Awareness Service (시맨틱 웹 기반 상황인지 서비스를 위한 동적 서비스 제공 모델)

  • Choi, Jung-Hwa;Park, Young-Tack
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.9
    • /
    • pp.732-748
    • /
    • 2009
  • Semantic Web technology provides the base technology for context awareness, creating new services by dynamically and flexibly combining various resources (people, concepts, etc.). With the realization of ubiquitous computing, many researchers are working on web service composition; however, most studies produce only predefined results limited to the initial description written by the service designer. In this paper, we propose a new service supporting model that automatically plans the tasks needed to reach a goal state from an initial state. The planner's inputs are initial and goal descriptions, mapped respectively to the current situation and the user request. The idea is to infer context from the world model by DL-based ontology reasoning over an OWL domain ontology; the inferred context guides which services are loaded into the planner. The planner then searches for and plans at least one service chain that achieves the goal state from the initial state. It is a STRIPS-style backward planner that composes OWL-S services based on AI planning theory, reducing the search scope of the huge web-service space. In addition, when no feasible service is found by pattern matching, alternative services are offered to the user through DL-based semantic search. The experimental results demonstrate a new possibility for a dynamic service modeler, compared to OWLS-XPlan, which has been known as an effective application for service composition.
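The backward-planning step described above can be sketched in miniature: services carry preconditions and effects, and the planner regresses from the goal until the initial state covers all open conditions. Service names and predicates are invented for illustration, and loop protection is omitted:

```python
# Minimal STRIPS-style backward-chaining sketch in the spirit of the
# service composition described above. Services are (name, preconditions,
# effects) triples; all names are hypothetical.

SERVICES = [
    ("turn_on_light", {"user_in_room"},  {"light_on"}),
    ("detect_user",   {"sensor_active"}, {"user_in_room"}),
]

def backward_plan(goal, initial, services):
    """Return a service sequence achieving goal from initial, or None."""
    open_conds, plan = set(goal) - set(initial), []
    while open_conds:
        cond = next(iter(open_conds))
        match = next((s for s in services if cond in s[2]), None)
        if match is None:
            return None                 # no service produces this condition
        name, pre, eff = match
        plan.insert(0, name)            # regress: this service runs earlier
        open_conds = (open_conds - eff) | (pre - set(initial))
    return plan

print(backward_plan({"light_on"}, {"sensor_active"}, SERVICES))
```

A real composer would match conditions semantically via the ontology rather than by string equality, which is exactly where the DL-based reasoning of the paper comes in.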

Design and Implementation of Priority Retrieval Technique based on SIF (SIF기반 우선순위 검색기법의 설계 및 구현)

  • Lee, Eun-Sik;Cho, Dae-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.11
    • /
    • pp.2535-2540
    • /
    • 2010
  • In a traditional publish/subscribe system, an event travels from publisher to subscriber in three steps: the publisher publishes its event to the broker; the broker checks a simple binary notion of matching, where an event either matches a subscription or it does not; and the broker delivers the matched event to the corresponding subscribers. In this system, information flows in one direction only; however, some current applications require two-way delivery between subscriber and publisher. We therefore introduce an extended publish/subscribe system that supports two-way delivery. Such a system needs additional functions for delivering subscriptions to publishers and, in particular, for selecting the top-n subscriptions by priority, because a broker may hold a large number of subscriptions. In this paper, we define SIF (Specific Interval First), decide priority among subscriptions, and propose two SIF-based priority retrieval techniques using an IS-List. The performance measurements show that the RSO (result set sorting) technique performs better in index creation time, while the ITS&IS (insertion time sorting and inverse search using a stack) technique performs better in search time.
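The "Specific Interval First" priority idea above can be sketched simply: interval subscriptions are ranked so that narrower (more specific) intervals come first, and only the top-n are delivered to the publisher. The subscription data and ranking key are illustrative; the paper's IS-List index structure is not reproduced here:

```python
# Sketch of SIF-style top-n selection: rank interval subscriptions by
# width (narrower = more specific = higher priority) and keep the top n.
# Subscription tuples (id, low, high) are hypothetical.

def top_n_specific(subscriptions, n):
    """Return the n most specific (narrowest) interval subscriptions."""
    return sorted(subscriptions, key=lambda s: s[2] - s[1])[:n]

subs = [
    ("sub-a", 0, 100),    # width 100
    ("sub-b", 40, 60),    # width 20  -> most specific
    ("sub-c", 30, 80),    # width 50
]
print([s[0] for s in top_n_specific(subs, 2)])  # ['sub-b', 'sub-c']
```

An index such as the IS-List exists to avoid re-sorting on every lookup; this linear sort is only the semantic baseline.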