• Title/Summary/Keyword: data partitioning criteria (데이터 분할 기준)

Search Results: 211

A Reliable Data Capture in Multi-Reader RFID Environments (다중 태그 인식 기반의 신뢰성 있는 데이터 수집 환경)

  • Lee, Young-Ran
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.9 / pp.4133-4137 / 2011
  • Reliable multi-reader RFID identification is one of the key issues in realizing multi-reader RFID systems today, and a multi-reader RFID system has difficulty obtaining reliable data at the data capture layer. The reason is that unreliable readings, such as false positive readings, false negative readings, and missed readings, can be caused by reader collision problems, noise, or the mobility of tagged objects. We introduce performance metrics to address these reader problems and propose three solutions: the Minimum Overlapped Read Zone (MORZ) with the Received Signal Strength Indicator (RSSI), the Spatial-Temporal Division Access (STDA) method, and attaching two larger tags to each object. To show the improvement achieved by the proposed methods, we calculate tags' successful read rates in a smart office equipped with multi-reader RFID systems.

A Distributed SPARQL Query Processing Scheme Considering Data Locality and Query Execution Path (데이터 지역성 및 질의 수행 경로를 고려한 분산 SPARQL 질의 처리 기법)

  • Kim, Byounghoon;Kim, Daeyun;Ko, Geonsik;Noh, Yeonwoo;Lim, Jongtae;Bok, Kyoungsoo;Lee, Byoungyup;Yoo, Jaesoo
    • KIISE Transactions on Computing Practices / v.23 no.5 / pp.275-283 / 2017
  • A large amount of RDF data has been generated along with the growth of semantic web services, and various distributed storage and query processing schemes have been studied to use this massive amount of RDF data efficiently. In this paper, we propose a distributed SPARQL query processing scheme that considers the data locality and query execution path of large RDF data in order to reduce join and communication costs. In a distributed environment, a SPARQL query is divided into several sub-queries according to the conditions of the WHERE clause, considering data locality. The proposed scheme reduces data communication costs by grouping and processing the sub-queries through an index based on associated nodes. In addition, to reduce unnecessary joins and latency during query processing, it creates an efficient query execution path that considers the data parsing cost, the amount of each node's data communication, and latency. Various performance evaluations show that the proposed scheme outperforms the existing scheme.
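The splitting step can be pictured as a toy sketch (not the authors' implementation): triple patterns from a WHERE clause are grouped so that patterns sharing a variable land in the same sub-query. The pattern tuples and the connectivity-based grouping rule below are illustrative assumptions.

```python
def split_where_clause(triples):
    """Group WHERE-clause triple patterns into sub-queries.

    Patterns that share a variable (tokens starting with '?') are
    placed in the same group, mimicking locality-aware splitting.
    """
    def variables(t):
        return {tok for tok in t if tok.startswith("?")}

    groups = []  # list of (shared_vars, [patterns]) pairs
    for t in triples:
        vs = variables(t)
        merged = [g for g in groups if g[0] & vs]
        for g in merged:
            groups.remove(g)
            vs |= g[0]
        groups.append((vs, [p for g in merged for p in g[1]] + [t]))
    return [g[1] for g in groups]

patterns = [
    ("?paper", "dc:creator", "?author"),
    ("?author", "foaf:name", "?name"),
    ("?city", "geo:population", "?pop"),
]
# The first two patterns share ?author, so they form one sub-query;
# the third pattern is unrelated and becomes its own sub-query.
print(split_where_clause(patterns))
```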

Encoder Type Semantic Segmentation Algorithm Using Multi-scale Learning Type for Road Surface Damage Recognition (도로 노면 파손 인식을 위한 Multi-scale 학습 방식의 암호화 형식 의미론적 분할 알고리즘)

  • Shim, Seungbo;Song, Young Eun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.19 no.2 / pp.89-103 / 2020
  • As we face an aging society, the demand for personal mobility among disabled and elderly people is increasing. In fact, as of 2017, the number of electric wheelchairs in the country had grown to 90,000 and continues to increase. However, people with disabilities and seniors are more likely to have accidents while driving, because their judgment and coordination are weaker than those of able-bodied people. One cause of such accidents is interference with the steering control of personal vehicles due to uneven road surface conditions. In this paper, we introduce an encoder-type semantic segmentation algorithm that can recognize road conditions at high speed to prevent such accidents. To this end, more than 1,500 training images and 150 test images including road surface damage were newly secured. With this data, we propose a deep neural network composed only of encoder stages, unlike the auto-encoder type consisting of encoder and decoder stages. Compared to the conventional method, this deep neural network achieves a 4.45% increase in mean accuracy, a 59.2% decrease in parameters, and an 11.9% increase in computation speed. It is expected that safe personal transportation will come soon by utilizing such a high-speed algorithm.

On-board Realtime Orbit Parameter Generator for Geostationary Satellite (정지궤도위성 탑재용 실시간 궤도요소 생성기)

  • Park, Bong-Kyu;Yang, Koon-Ho
    • Aerospace Engineering and Technology / v.8 no.2 / pp.61-67 / 2009
  • This paper proposes an on-board orbit data generation algorithm for geostationary satellites. The concept of the proposed algorithm is as follows. On the ground, the position and velocity deviations with respect to an assumed reference orbit are computed for a 48-hour period at 30-minute intervals, and the generated data are uploaded to the satellite and stored. From this table, the three data sets nearest to the requested epoch time are selected, and the position and velocity deviations at that epoch are computed by 2nd-order polynomial interpolation. The computed deviations are then added to the reference orbit to recover the absolute orbit information. Here, the reference orbit is chosen to be an ideal geostationary orbit with zero inclination and zero eccentricity. Thanks to its very low computational burden, this algorithm allows orbit data to be generated at 1 Hz or even higher. To support 48 hours of autonomy, a maximum of 3 KB of memory is required for orbit data storage, an additional memory requirement estimated to be acceptable for geostationary satellite applications.

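The interpolation step described above (three nearest table entries, 2nd-order polynomial) can be sketched as follows. This is an illustrative sketch, not flight code: the table values are made up, and the Lagrange form is one common way to realize a 2nd-order polynomial fit.

```python
def interpolate_deviation(table, t):
    """Interpolate an orbit deviation at epoch t (seconds) from a
    table of (time, deviation) samples, using 2nd-order Lagrange
    interpolation over the three samples nearest to t."""
    (t0, d0), (t1, d1), (t2, d2) = sorted(table, key=lambda p: abs(p[0] - t))[:3]
    l0 = (t - t1) * (t - t2) / ((t0 - t1) * (t0 - t2))
    l1 = (t - t0) * (t - t2) / ((t1 - t0) * (t1 - t2))
    l2 = (t - t0) * (t - t1) / ((t2 - t0) * (t2 - t1))
    return d0 * l0 + d1 * l1 + d2 * l2

# 30-minute grid (1800 s) with deviations following d(t) = t**2, which
# a 2nd-order interpolant reproduces exactly at any epoch.
table = [(k * 1800.0, (k * 1800.0) ** 2) for k in range(5)]
print(interpolate_deviation(table, 2700.0))  # ≈ 7290000.0 (= 2700**2)
```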

Real-time Remote Monitoring System of Chemical Accident Response based on Multi-hop Communication (멀티 홉 통신을 기반한 화학 사고 대응 실시간 원격 모니터링 시스템)

  • Lee, Seung-Chul;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.11 / pp.1706-1712 / 2022
  • Recently, the safety of chemical substances has gained attention due to incidents such as gas leaks and fires occurring in petrochemical industrial complexes. In particular, the industrial complexes in Ulsan and Yeosu (South Korea) contribute significantly to the petrochemical industry, but accidents may occur there due to chemical leakage. Therefore, in this study, sensor nodes are deployed at 20 m intervals, based on outdoor facility standards, to respond to chemical accidents, and the TLV exposure limits of 8 h (TWA) and 15 min (STEL) are applied. The proposed system pre-processes the data collected over multi-hop communication at a cycle of 0.6-0.75 s using Python and stores it in a MySQL database through SQL; a real-time remote monitoring system that updates the stored data every 5 s is implemented by linking MySQL and Grafana.
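The collect-and-store step can be sketched minimally as below, with the standard-library sqlite3 standing in for MySQL; the table name, columns, and cleaning rule are illustrative assumptions, not the authors' schema.

```python
import sqlite3
import time

# sqlite3 stands in for MySQL here; the real system would link the
# MySQL database to Grafana for the 5-second dashboard refresh.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (node_id INTEGER, ppm REAL, ts REAL)")

def store_reading(node_id, ppm):
    """Pre-process one multi-hop sensor reading and insert it."""
    ppm = round(max(ppm, 0.0), 2)  # illustrative cleaning step
    conn.execute(
        "INSERT INTO readings VALUES (?, ?, ?)",
        (node_id, ppm, time.time()),
    )
    conn.commit()

store_reading(3, 12.349)
row = conn.execute("SELECT node_id, ppm FROM readings").fetchone()
print(row)
```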

Construction Progress Measurement System by tracking the Work-done Performance (내역물량 측정에 의한 건설공사진도율 산정시스템)

  • Choi Yoon-Ki
    • Korean Journal of Construction Engineering and Management / v.4 no.3 s.15 / pp.137-145 / 2003
  • A project control system based on the actual values of its control objects must be operated continuously and in a timely manner. To collect and track accurate actual performance data, a reasonable basis for measuring work performance and the related measurement methods are needed. Therefore, this research proposes a method of developing and operating a construction progress measurement system. The problem with the conventional method is the difficulty of constructing control accounts and of defining the basis for measuring the performance of each control account. This research therefore proposes a formal methodology that produces the progress value of the smallest work unit by surveying the installed quantities, and estimates the percent complete of groups of work, or of the entire project, using the earned value concept. Combined with subsequent research on the weight values of control accounts, this research will contribute to practical application and to the development of scientific construction management techniques in the construction industry. Further research on how to trend and forecast a project using the measured progress values is recommended for putting the proposed system to practical use.
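The earned value roll-up from work units to a group-level percent complete can be sketched as follows; the cost-weighted definition and the sample quantities are illustrative assumptions, not the paper's exact weighting scheme.

```python
def percent_complete(work_units):
    """Earned-value percent complete for a group of work units.

    Each unit is (installed_qty, total_qty, budgeted_cost); the
    progress of each unit (installed/total) is weighted by its
    budgeted cost, per the earned value concept.
    """
    earned = sum(c * (i / t) for i, t, c in work_units)
    budget = sum(c for _, _, c in work_units)
    return 100.0 * earned / budget

units = [
    (50.0, 100.0, 2000.0),  # 50% installed, budget weight 2000
    (30.0, 30.0, 1000.0),   # 100% installed, budget weight 1000
]
# (0.5 * 2000 + 1.0 * 1000) / 3000 * 100 ≈ 66.7
print(percent_complete(units))
```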

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.431-438 / 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour: the first step is to obtain a Q image by transforming RGB into YIQ; the second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. Candidate lip contours are then obtained by applying thresholds to the Q image. From the candidate contours, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first is related to the lip corner points: the variance calculation depends on many skin pixels, which decreases accuracy and affects the splitting of the Q image. Second, there is no analysis of color systems other than YIQ; YIQ works well, but other color systems such as HSV, CIELUV, and YCrCb should be considered. The final problem is related to the selection of the optimal contour: the selection uses the maximum of the average feature variance over the pixels near the contour points, and this maximum-of-average criterion shrinks the extracted contour compared to the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, yielding a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted rather than the maximum of the average, yielding a 46% performance increase. Combining all the solutions, the proposed method is twice as accurate and stable as the conventional method.
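The Q-image and multi-threshold steps can be sketched with the standard-library colorsys module. Note that colorsys's YIQ coefficients may differ slightly from the transform matrix used in the paper, and the threshold values below are illustrative, not the method's tuned values.

```python
import colorsys

def q_channel(pixels):
    """Return the Q component of the YIQ transform for each RGB pixel
    (channels normalized to [0, 1]); Q is the channel thresholded in
    the lip-contour method."""
    return [colorsys.rgb_to_yiq(r, g, b)[2] for r, g, b in pixels]

def multi_threshold_masks(q_values, thresholds):
    """One candidate binary mask per threshold (the multi-threshold
    step): a pixel is a lip candidate where Q exceeds the threshold."""
    return [[q > th for q in q_values] for th in thresholds]

# Two reddish (lip-like) pixels and one gray (skin-neutral) pixel.
pixels = [(0.8, 0.3, 0.4), (0.5, 0.5, 0.5), (0.9, 0.2, 0.3)]
masks = multi_threshold_masks(q_channel(pixels), [0.05, 0.15])
print(masks)
```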

In-Vivo Heat Transfer Measurement using Proton Resonance Frequency Method of Magnetic Resonance Imaging (자기 공명영상 시스템의 수소원자 공명 주파수법을 이용한 생체 내 열 전달 관찰)

  • 조지연;조종운;이현용;신운재;은충기;문치웅
    • Journal of the Institute of Electronics Engineers of Korea SC / v.40 no.3 / pp.172-180 / 2003
  • The purpose of this study is to observe the heat transfer process in in-vivo human muscle based on the Proton Resonance Frequency (PRF) method of Magnetic Resonance Imaging (MRI). MR images were obtained to measure the temperature variation due to heat transfer in a phantom (2% agarose gel) and in in-vivo human calf muscle. The MR temperature measurement was compared with direct temperature measurement using a T-type thermocouple. After heating the agarose gel to more than 50°C in boiling water, raw data were acquired every 3 minutes during a one-hour cooling period for the phantom case. For the human study, heat was delivered into a volunteer's calf muscle using a hot pack: reference data were acquired once before the hot pack emitted heat, and raw data were then acquired every 2 minutes for 30 minutes. The acquired raw data were reconstructed into phase-difference images against the reference image to observe the temperature change. The phase difference of the phantom was linearly proportional to the temperature change in the range of 34.2°C to 50.2°C, with a temperature resolution of 0.0457 rad/°C (0.0038 ppm/°C). In the in-vivo case, the mean phase difference in the region near the hot pack was smaller than that in the far region, and different temperature distributions were observed in proportion to the distance from the heat source.
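Given the phantom calibration reported above (0.0457 rad/°C), converting a measured phase difference into a temperature change is a single division; the sketch below assumes that linear relation holds over the range of interest.

```python
PHASE_PER_DEG_C = 0.0457  # rad/°C, the phantom calibration above

def phase_to_delta_temp(delta_phase_rad):
    """Convert a PRF phase difference (radians) into a temperature
    change (°C) using the phantom calibration constant."""
    return delta_phase_rad / PHASE_PER_DEG_C

# A 0.457 rad phase shift corresponds to roughly a 10 °C change.
print(phase_to_delta_temp(0.457))
```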

An evaluation methodology for cement concrete lining crack segmentation deep learning model (콘크리트 라이닝 균열 분할 딥러닝 모델 평가 방법)

  • Ham, Sangwoo;Bae, Soohyeon;Lee, Impyeong;Lee, Gyu-Phil;Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.6 / pp.513-524 / 2022
  • Recently, detecting damage to civil infrastructure from digital images using deep learning has become a very popular research topic. To adapt these methodologies to the field, it is essential to explain the robustness of deep learning models. Our research points out that the existing pixel-based evaluation metrics are not sufficient for crack detection, since cracks have a linear appearance, and proposes a new evaluation methodology that explains crack segmentation deep learning models more rationally. Specifically, we design, implement, and validate a methodology that generates a tolerance buffer alongside the skeletonized ground truth and the prediction results, so as to consider the overall topological similarity of the ground truth and the prediction rather than pixel-wise accuracy. Using this methodology, we could overcome the over-estimation and under-estimation problems of crack segmentation model evaluation, and we expect that it can explain crack segmentation deep learning models better.
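The tolerance-buffer idea can be sketched on small binary grids: dilate the skeletonized ground truth by a tolerance radius and score predicted pixels against the buffer instead of exact pixel positions. The Chebyshev dilation and the buffered-precision definition below are illustrative simplifications of the paper's methodology.

```python
def dilate(mask, radius):
    """Chebyshev dilation of a binary grid: the tolerance buffer."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = True
    return out

def buffered_precision(pred, gt_skeleton, radius):
    """Fraction of predicted crack pixels that fall inside the
    tolerance buffer around the skeletonized ground truth."""
    buf = dilate(gt_skeleton, radius)
    pred_px = [(y, x) for y, row in enumerate(pred)
               for x, v in enumerate(row) if v]
    return sum(buf[y][x] for y, x in pred_px) / len(pred_px)

# A diagonal ground-truth skeleton and a prediction shifted by one
# pixel: pixel-wise precision would be 0, but the buffered score
# forgives the small shift, as the evaluation methodology intends.
gt = [[x == y for x in range(5)] for y in range(5)]
pred = [[x == y + 1 for x in range(5)] for y in range(5)]
print(buffered_precision(pred, gt, radius=1))  # 1.0
```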

An Efficient Iterative Receiver for OFDMA Systems in Uplink Environments (직교 주파수 분할 다중 접속 시스템 상향 전송에 알맞은 효율적인 반복 수신 기법)

  • Hwang, Hae-Gwang;Sang, Young-Jin;Byun, Il-Mu;Kim, Kwang-Soon
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.11 s.353 / pp.8-15 / 2006
  • In this paper, we propose an iterative receiver for LDPC-coded OFDMA systems in uplink environments. By applying Wiener filtering to the pilot symbols, an initial channel estimate can be obtained effectively. To reduce the complexity of the Wiener filtering, we approximate the Wiener filter coefficients with pre-determined coefficient sets selected according to the estimated channel correlation. After each LDPC decoding pass, soft symbols derived from the extrinsic information of the decoder outputs are used to re-estimate the channel. We also derive the error variance of the channel estimate and of the maximum-ratio-combined results. Using the combined results, the channel correlation is re-estimated, and the appropriate Wiener filter coefficients are then chosen according to the re-estimated channel correlation. Computer simulations show that the proposed receiver structure performs better than a receiver using only pilot symbols.
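The complexity-reducing approximation (choosing a pre-determined Wiener coefficient set by estimated channel correlation) can be sketched as a nearest-neighbour table lookup; the correlation keys and coefficient values below are made-up placeholders, not derived filter coefficients.

```python
def pick_wiener_coeffs(est_correlation, coeff_table):
    """Pick the pre-computed Wiener filter coefficient set whose
    design correlation is nearest the estimated channel correlation,
    avoiding an on-line matrix inversion."""
    return min(coeff_table.items(),
               key=lambda kv: abs(kv[0] - est_correlation))[1]

# Placeholder table: design correlation -> pre-computed coefficients.
table = {0.2: [0.5, 0.5], 0.5: [0.7, 0.3], 0.9: [0.9, 0.1]}
print(pick_wiener_coeffs(0.55, table))  # [0.7, 0.3]
```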