• Title/Summary/Keyword: Order tracking


A Study for Design and Performance Improvement of the High-Sensitivity Receiver Architecture based on Global Navigation Satellite System (GNSS 기반의 고감도 수신기 아키텍처 설계 및 성능 향상에 관한 연구)

  • Park, Chi-Ho;Oh, Young-Hwan
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.45 no.4
    • /
    • pp.9-21
    • /
    • 2008
  • In this paper, we propose a GNSS-based RF receiver, a high-precision localization architecture, and a high-sensitivity localization architecture to solve the satellite navigation problems mentioned above. The GNSS-based RF receiver model is structured to receive both conventional GPS signals and the navigation data of the forthcoming Galileo system simultaneously. Accordingly, it is built as a multi-band receiver that can simultaneously receive the L1 band (1575.42 MHz) of GPS and the E1 band (1575.42 MHz), E5A band (1207.1 MHz), and E5B band (1176.45 MHz) of Galileo. The high-precision localization architecture proposes a delay lock loop (DLL) with Early_early, Early_late, Prompt, Late_early, and Late_late codes, rather than only the Early, Prompt, and Late codes of a conventional DLL. By adopting this 1/4-chip-spacing DLL structure, we resolve the C/A-code synchronization problem caused by inaccuracy in the signal received from the satellite navigation system. That synchronization problem delays signal acquisition in a vehicle navigation system and degrades receiver performance. In addition, the high-sensitivity localization architecture is designed as an asymmetric structure using 20 correlators, maximizing the reception amplification factor and minimizing noise, thereby improving the reception rate. The satellite navigation system repeatedly transmits the same C/A code 20 times, so we propose a structure that can exploit all of these repetitions. Since the structure is adaptive and can limit the number of active correlators according to the surrounding environment, it reduces unnecessary system delay. With this structure, we can lower the acquisition delay time and guarantee continuity of tracking.
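The early-minus-late discriminator idea behind the 1/4-chip-spacing DLL described above can be sketched as follows. This is a minimal illustration, not the paper's five-tap design: the code length, sample rate, and signal model are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
SPC = 8                                    # samples per chip (assumed)
code = rng.choice([-1.0, 1.0], size=1023)  # C/A-like pseudo-random sequence
sig = np.repeat(code, SPC)                 # sampled transmitted code

def corr(rx, offset_chips):
    """Correlate the received signal with a replica delayed by offset_chips."""
    shift = int(round(offset_chips * SPC))
    return np.dot(rx, np.roll(sig, shift)) / len(rx)

def discriminator(rx, tau_hat, spacing=0.25):
    """Normalized early-minus-late discriminator with correlator taps at
    tau_hat +/- spacing chips (here 1/4-chip spacing)."""
    e = corr(rx, tau_hat - spacing)
    l = corr(rx, tau_hat + spacing)
    return (e - l) / (e + l)

# Signal arriving half a chip late: the discriminator is ~0 when the delay
# estimate is correct and its sign steers the estimate toward the true delay.
rx = np.roll(sig, int(0.5 * SPC))
```

Narrowing the tap spacing from the conventional 1/2 chip to 1/4 chip sharpens the discriminator around the correlation peak, which is the mechanism the abstract relies on for finer code synchronization.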

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulties in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, clustering model or dependency network model. This filtering technique not only improves the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such a tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved. Cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into CBCF to propose predictive clustering-based CF (PCCF), which addresses the issues of reduced coverage and unstable performance. The method improves performance stability by tracking changes in user preferences, bridging the gap between the static model and dynamic users.
Furthermore, the issue of reduced coverage is also alleviated by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing the robustness of performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC and PCCF in an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced an insignificant improvement in performance compared with the existing techniques. It also failed to achieve a significant improvement in the standard deviation, which indicates the degree of data fluctuation. Nonetheless, it showed marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques.
Further research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques, and will consider introducing a high-dimensional parameter-free clustering algorithm or a deep-learning-based model.
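The Markov transition-probability component of PCCF can be sketched with a maximum-likelihood estimate from a user's sequence of preference-cluster labels. The cluster labels and sequence below are invented for illustration; the paper's actual pipeline (fuzzy clustering, propensity clustering, preference prediction) is not reproduced here.

```python
import numpy as np

def transition_matrix(states, n_clusters):
    """Maximum-likelihood estimate of cluster-to-cluster transition
    probabilities from a chronological sequence of preference-cluster labels."""
    counts = np.zeros((n_clusters, n_clusters))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0          # leave unvisited clusters as zero rows
    return counts / rows

seq = [0, 0, 1, 1, 1, 2, 0, 1]     # hypothetical cluster history of one user
P = transition_matrix(seq, 3)
next_dist = P[seq[-1]]             # predicted distribution over next clusters
```

Predicting from `next_dist` rather than from a single fixed cluster is what lets this kind of model expand coverage: items rated by any cluster with nonzero transition probability become candidates.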

A Study on the Determinants of Blockchain-oriented Supply Chain Management (SCM) Services (블록체인 기반 공급사슬관리 서비스 활용의 결정요인 연구)

  • Kwon, Youngsig;Ahn, Hyunchul
    • Knowledge Management Research
    • /
    • v.22 no.2
    • /
    • pp.119-144
    • /
    • 2021
  • Recently, as competition in the market evolves from competition among companies to competition among their supply chains, companies are struggling to enhance their supply chain management (hereinafter SCM). In particular, as blockchain technology with various technical advantages is combined with SCM, many domestic manufacturing and distribution companies are considering the adoption of blockchain-oriented SCM (BOSCM) services today. Thus, examining the factors affecting the use of blockchain-oriented SCM is an important academic topic. However, most prior studies on blockchain and SCM have designed their research models based on the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT), which are suitable for explaining individuals' acceptance of information technology rather than companies'. Against this background, this study presents a novel blockchain-oriented SCM acceptance model based on the Technology-Organization-Environment (TOE) framework, taking the company as the unit of analysis. In addition, the Value-based Adoption Model (VAM) is applied to the research model to comprehensively consider the benefits and sacrifices caused by a new information system. To validate the proposed research model, survey responses were collected from 126 companies. Among them, data from 122 companies were analyzed by applying PLS-SEM (Partial Least Squares Structural Equation Modeling) to verify the research model. As a result, 'business innovation', 'tracking and tracing', 'security enhancement' and 'cost' from the technology viewpoint are found to significantly affect 'perceived value', which in turn affects 'intention to use blockchain-oriented SCM'. Also, 'organization readiness' is found to affect 'intention to use' with statistical significance. However, 'complexity' and 'regulation environment' are found to have little impact on 'perceived value' and 'intention to use', respectively.
It is expected that the findings of this study contribute to preparing practical and policy alternatives for facilitating blockchain-oriented SCM adoption in Korean firms.
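PLS-SEM estimation itself requires dedicated software, but the structural ("inner") part of the model described above can be roughly illustrated with ordinary least squares on composite construct scores. Everything below is synthetic and invented for illustration: the variable names, coefficients, and data are stand-ins, not the study's survey data or its PLS algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 122                                   # firms retained for analysis

# Synthetic composite scores standing in for measured constructs:
# three benefit constructs (e.g. innovation, tracking & tracing, security)
# plus a sacrifice construct (cost), all assumed values.
benefits = rng.normal(size=(n, 3))
cost = rng.normal(size=n)
value = benefits @ np.array([0.4, 0.3, 0.2]) - 0.3 * cost \
        + rng.normal(scale=0.3, size=n)
readiness = rng.normal(size=n)
intent = 0.6 * value + 0.25 * readiness + rng.normal(scale=0.3, size=n)

def ols(y, X):
    """Least-squares path coefficients, intercept column added."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

inner1 = ols(value, np.column_stack([benefits, cost]))    # -> perceived value
inner2 = ols(intent, np.column_stack([value, readiness])) # -> intention to use
```

The two regressions mirror the mediation structure reported in the abstract: benefits and cost load on perceived value, which together with organizational readiness predicts intention to use.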

A Study on the Connectivity Modeling Considering the Habitat and Movement Characteristics of Wild Boars (Sus scrofa) (멧돼지(Sus scrofa) 서식지 및 이동 특성을 고려한 연결성 모델링 연구)

  • Lee, Hyun-Jung;Kim, Whee-Moon;Kim, Kyeong-Tae;Jeong, Seung-Gyu;Kim, Yu-Jin;Lee, Kyung Jin;Kim, Ho Gul;Park, Chan;Song, Won-Kyong
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.25 no.4
    • /
    • pp.33-47
    • /
    • 2022
  • Wild boars (Sus scrofa) are expanding their range of behavior as their habitats change. By appearing in urban centers and near private houses, they have caused various social problems, including damage to crops. To prevent damage and manage wild boars effectively, ecological research that considers their habitat and movement characteristics is needed. The purpose of this study is to analyze home ranges and identify land cover types in core areas by tracking wild boars, and to predict their movement connectivity in light of previous studies and their preferred land-use characteristics. From January to June 2021, four wild boars were captured and tracked in Jinju city, Gyeongsangnam-do, and their preferred land cover types were identified based on MCP 100%, KDE 95%, and KDE 50% results. The home-range analysis found MCP 100% areas of about 0.68 km², 2.77 km², 2.42 km², and 0.16 km²; three individuals had overlapping home ranges, refraining from habitat movement and staying within their preferred areas. The core areas were about 0.55 km², 2.05 km², 0.82 km², and 0.14 km² at KDE 95%, and about 0.011 km², 0.033 km², 0.004 km², and 0.003 km² at KDE 50%. When preferred land cover was assessed over the combined home ranges and core areas of all individuals, forest had the highest share at 55.49% (MCP 100%), 54.00% (KDE 95%), and 77.69% (KDE 50%), followed by relatively high shares of urbanized, grassland, and agricultural areas. A connectivity scenario was constructed in which the proportions of the land cover types preferred by the tracked boars were applied as weights to the resistance values of the connectivity analysis, and this was compared with a connectivity evaluation based on previous studies and wild boar characteristics.
When the current density values for the wild boar movement data were compared, the existing scenario averaged 2.76 (minimum 1.12, maximum 4.36), while the weighted scenario averaged 2.84 (minimum 0.96, maximum 4.65). On average, movement predictability was about 2.90% better under the weighted scenario, even though its larger resistance values impose movement restrictions. Identifying movement routes through connectivity analysis of wild boars is expected to help prevent damage by predicting points of appearance. In future connectivity analyses of species including wild boar, using movement data from the actual species is judged to be effective.
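The MCP 100% home range used above is simply the area of the convex hull of all relocation fixes. A self-contained sketch (the GPS fixes are invented; real analyses typically use GIS or R packages such as adehabitatHR):

```python
def mcp_area(points):
    """100% minimum convex polygon (MCP) home range: area of the convex
    hull of all relocation fixes (coordinates in metres -> area in m^2).
    Hull via Andrew's monotone chain, area via the shoelace formula."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]

    area = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical GPS fixes (m): corners of a 1 km square plus an interior point.
fixes = [(0, 0), (1000, 0), (1000, 1000), (0, 1000), (500, 500)]
area_km2 = mcp_area(fixes) / 1e6
```

Note that interior fixes (like the fifth point) do not change the MCP, which is why MCP 100% tends to overestimate habitat use relative to the KDE-based core areas reported in the abstract.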

Feasibility of Environmental DNA Metabarcoding for Invasive Species Detection According to Taxa (분류군별 외래생물 탐지를 위한 환경 DNA 메타바코딩 활용 가능성)

  • Yujin Kang;Jeongeun Jeon;Seungwoo Han;Suyeon Won;Youngkeun Song
    • Journal of Environmental Impact Assessment
    • /
    • v.32 no.2
    • /
    • pp.94-111
    • /
    • 2023
  • In order to establish an effective management strategy for invasive species, early detection and regular monitoring are required to assess their introduction and dispersal. Environmental DNA (eDNA) is actively applied to evaluate fauna, including the presence of invasive species, as it has high detection sensitivity and can detect multiple species simultaneously. In Korea, the applicability of metabarcoding has been evaluated mainly for fish, and research on other taxa is insufficient. Therefore, this study assessed the feasibility of detecting invasive species in Korea using eDNA metabarcoding. In addition, to confirm the possibility of detection by taxon, detection of the target species was evaluated using four universal primers (MiFish, MiMammal, Mibird, Amp16S) designed for fish, mammals, birds, and amphibians. As a result, the target species (Trachemys scripta, 3 sites; Cervus nippon, 3 sites; Micropterus salmoides, 7 sites; Rana catesbeiana, 4 sites) were detected at 17 of the 55 sites. Even with densely selected sampling sites within the study area, detection results differed, reflecting the ecological characteristics of the target species. A comparison of community structures (species richness, abundance and diversity) based on the presence of invasive species, focused on M. salmoides and T. scripta, showed higher diversity at sites where invasive species were detected. In addition, 1 to 4 more species were detected, and abundance was up to 1.7 times higher. The results of invasive species detection through metabarcoding and the comparison of community structures indicate that large amounts of monitoring data accumulated through eDNA can be efficiently utilized for multidimensional ecosystem evaluation. They also suggest that eDNA can serve as primary data for evaluation and prediction, such as tracking biological changes caused by artificial and natural factors and environmental impact assessment.
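The community-structure comparison above rests on standard diversity indices computed from per-site read counts. A minimal sketch with invented read counts (the species names other than M. salmoides are placeholders, not the study's data):

```python
import math

def shannon(counts):
    """Shannon diversity H' from per-species read (or individual) counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical read counts at a site with and without an invasive species.
site_with = {"M. salmoides": 120, "Z. platypus": 300,
             "C. auratus": 80, "O. latipes": 50}
site_without = {"Z. platypus": 450, "C. auratus": 60}

richness_with, richness_without = len(site_with), len(site_without)
H_with = shannon(site_with.values())
H_without = shannon(site_without.values())
```

Comparing `H_with` and `H_without` (and the richness counts) across sites is the kind of contrast the abstract reports when it notes higher diversity at detection sites.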

A study on Convergence Weapon Systems of Self propelled Mobile Mines and Supercavitating Rocket Torpedoes (자항 기뢰와 초공동 어뢰의 융복합 무기체계 연구)

  • Lee, Eunsu;Shin, Jin
    • Maritime Security
    • /
    • v.7 no.1
    • /
    • pp.31-60
    • /
    • 2023
  • This study proposes a new convergence weapon system that combines the covert placement and detection abilities of a self-propelled mobile mine with the rapid tracking and attack abilities of supercavitating rocket torpedoes. This innovative system has been designed to counter North Korea's new underwater weapon, 'Haeil'. The concept behind this convergence weapon system is to maximize the strengths and minimize the weaknesses of each weapon type. Self-propelled mobile mines, typically placed discreetly on the seabed or in the water, are designed to explode when a vessel or submarine passes near them. They are generally used to defend or control specific areas, like traditional sea mines, and can effectively limit enemy movement and guide the enemy in a desired direction. The advantage that self-propelled mines have over traditional sea mines is their ability to move independently, ensuring the survivability of the platform responsible for placing them. This allows the mines to be discreetly placed even deeper into enemy lines, significantly reducing the time and cost of mine placement while ensuring the safety of the deploying platforms. However, to cause substantial damage to a target, the mine needs to detonate when the target is very close, typically within a few yards, which makes the timing of the explosion crucial. On the other hand, supercavitating rocket torpedoes are capable of traveling at groundbreaking speeds, many times faster than conventional torpedoes. This rapid movement leaves little room for the target to evade, a significant advantage. However, it comes with notable drawbacks: short range, high noise levels, and guidance issues. The high noise levels and short range are serious disadvantages that can expose the platform that launched the torpedo. This research proposes a convergence weapon system that leverages the strengths of both weapons while compensating for their weaknesses.
This strategy can overcome the limitations of traditional underwater kill-chains, offering swift and precise responses. By adapting the weapon acquisition criteria from the Defense force development Service Order, the effectiveness of the proposed system was independently analyzed and proven in terms of underwater defense sustainability, survivability, and cost-efficiency. Furthermore, the utility of this system was demonstrated through simulated scenarios, revealing its potential to play a critical role in future underwater kill-chain scenarios. However, realizing this system presents significant technical challenges and requires further research.


A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae;Kim, Ho-Sung;Jung, Woon-Kwan
    • Journal of radiological science and technology
    • /
    • v.31 no.1
    • /
    • pp.17-23
    • /
    • 2008
  • Purpose: Because bone mineral density (BMD) results vary with the equipment environment and with the precision and accuracy of the tester, quality management must be carried out systematically. Equipment failures caused by overload, due to aging equipment and growing patient numbers, occurred frequently. The resulting equipment replacements and additional purchases of new bone densitometers caused compatibility problems in tracking patients. This study examines whether patients' clinical BMD changes can be reflected accurately and precisely when replacement and additional units are used interchangeably with the existing equipment. Materials and methods: Two GE Lunar Prodigy Advance units (P1 and P2) and a HOLOGIC spine phantom (HSP) were used to measure equipment precision. Each device scanned the phantom 20 times to acquire precision data (Group 1). Tester precision was measured by scanning the same patient twice, 15 subjects on each unit, among 120 women (average age 48.78; 20-60 years old) (Group 2). In addition, tester precision and cross-calibration data were obtained by scanning the HSP 20 times on each unit, based on the quality-control data obtained every morning with the phantom (ASP) (Group 3). For cross-calibration of tester precision, each of 120 women (average age 48.78; 20-60 years old) was scanned once on each unit alternately (Group 4). Results: According to the daily QC data, the equipment was stable at 0.996 g/cm² with a coefficient of variation (%CV) of 0.08. In Group 1, the mean±SD and %CV were P1: 1.064±0.002 g/cm², %CV = 0.190; P2: 1.061±0.003 g/cm², %CV = 0.192.
In Group 2, the mean±SD and %CV were P1: 1.187±0.002 g/cm², %CV = 0.164; P2: 1.198±0.002 g/cm², %CV = 0.163. In Group 3, the mean error ±2SD and %CV were P1 (spine: 0.001±0.03 g/cm², %CV = 0.94; femur: 0.001±0.019 g/cm², %CV = 0.96) and P2 (spine: 0.002±0.018 g/cm², %CV = 0.55; femur: 0.001±0.013 g/cm², %CV = 0.48). In Group 4, the mean error ±2SD, %CV, and r values were spine: 0.006±0.024 g/cm², %CV = 0.86, r = 0.995; femur: 0±0.014 g/cm², %CV = 0.54, r = 0.998. Conclusion: Both the LUNAR ASP %CV and the HOLOGIC spine phantom results fall within the ±2% error range defined by the ISCD. The BMD measurements kept relatively constant values, showing excellent repeatability. The phantom is homogeneous, however, and has limitations in reflecting clinical factors such as variations in patients' body weight or body fat. Nevertheless, quality control using the phantom is useful for detecting mis-calibration of the equipment. Comparing Groups 3 and 4 in Bland-Altman plots, values measured twice on one unit and values cross-measured on the two units were all within the 2SD limits. The r values of 0.99 or higher in linear regression analysis indicated high precision and correlation. Therefore, the two interchangeable units did not affect patient follow-up. Regular testing of the equipment and of tester capability, followed by appropriate calibration, is required to produce reliable BMD values.
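The %CV and duplicate-scan precision figures quoted above come from elementary statistics. A minimal sketch (the BMD values below are invented, not the paper's raw scans):

```python
import math
import statistics

def percent_cv(scans):
    """%CV of repeated phantom BMD scans: 100 * SD / mean."""
    return 100 * statistics.stdev(scans) / statistics.mean(scans)

def rms_precision(pairs):
    """Short-term precision from duplicate patient scans (ISCD-style):
    root-mean-square of per-patient SDs, which for a pair of values (a, b)
    is |a - b| / sqrt(2)."""
    return math.sqrt(sum((a - b) ** 2 / 2 for a, b in pairs) / len(pairs))

phantom = [1.064, 1.061, 1.066, 1.062, 1.063]        # hypothetical g/cm^2
pairs = [(1.187, 1.189), (0.994, 0.991), (1.201, 1.198)]

cv = percent_cv(phantom)          # e.g. ~0.18 %
prec = rms_precision(pairs)       # g/cm^2
```

Checking that `cv` stays within the facility's accepted limit (the abstract cites the ISCD ±2% range) is exactly the daily phantom QC step described in the study.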


Evaluation of the Neural Fiber Tractography Associated with Aging in the Normal Corpus Callosum Using the Diffusion Tensor Imaging (DTI) (확산텐서영상(Diffusion Tensor Imaging)을 이용한 정상 뇌량에서의 연령대별 신경섬유로의 변화)

  • Im, In-Chul;Goo, Eun-Hoe;Lee, Jae-Seung
    • Journal of the Korean Society of Radiology
    • /
    • v.5 no.4
    • /
    • pp.189-194
    • /
    • 2011
  • This study used magnetic resonance diffusion tensor imaging (DTI) to quantitatively analyze neural fiber tractography of the normal corpus callosum by age group and to evaluate its usefulness. The subjects were 60 healthy volunteers with no brain or other disease. The scan parameters were TR: 6650 ms, TE: 66 ms, FA: 90°, NEX: 2, thickness: 2 mm, no gap, FOV: 220 mm, b-value: 800 s/mm², SENSE factor: 2, acquisition matrix size: 2×2×2 mm³, and the scan time was 3 minutes 46 seconds. For evaluation, a color-coded FA map was constructed covering the scan range from the skull base to the vertex. We set five ROIs in the corpus callosum (genu, anterior mid-body, posterior mid-body, isthmus, and splenium), performed tracking for each, and quantitatively measured the lengths of the neural fibers. The fiber lengths were, for the genu: 20s 61.8±6.8, 30s 63.9±3.8, 40s 65.5±6.4, 50s 57.8±6.0, 60s 58.9±4.5, 70s and older 54.1±8.1 mm; for the anterior mid-body: 20s 54.8±8.8, 30s 58.5±7.9, 40s 54.8±7.8, 50s 56.1±10.2, 60s 48.5±6.2, 70s and older 48.6±8.3 mm; for the posterior mid-body: 20s 72.7±9.1, 30s 61.6±9.1, 40s 60.9±10.5, 50s 61.4±11.7, 60s 54.9±10.0, 70s and older 53.1±10.5 mm; for the isthmus: 20s 71.5±17.4, 30s 74.1±14.9, 40s 73.6±14.2, 50s 66.3±12.9, 60s 56.5±11.2, 70s and older 56.8±11.3 mm; and for the splenium: 20s 82.6±6.8, 30s 86.9±6.4, 40s 83.1±7.1, 50s 81.5±7.4, 60s 78.6±6.0, 70s and older 80.55±8.6 mm. Fiber lengths in the normal corpus callosum differed significantly by age in the genu (P=0.001), posterior mid-body (P=0.009), and isthmus (P=0.012).
Across the age groups, fiber length increased from the 30s to the 40s and then tended to decrease with advancing age. These age-related changes in the brain's nerve fibers, active into middle age, could be confirmed through neural fiber tractography.
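The per-region P-values above are consistent with a one-way analysis of variance across age groups. A self-contained sketch of the F statistic (the sample lists are invented; the study reports only group means and SDs, not raw measurements):

```python
def one_way_anova(groups):
    """One-way ANOVA F statistic for k independent samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical genu fiber lengths (mm) for three age groups.
samples = [[61.8, 55.0, 68.6], [63.9, 60.1, 67.7], [54.1, 46.0, 62.2]]
F = one_way_anova(samples)
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) yields the small P-values reported for the genu, posterior mid-body, and isthmus.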

A Study on the Extraction Rate of Brain Tissues from a 99mTc-HMPAO Cerebral Blood Flow SPECT Examination of a Patient (99mTc-HMPAO 뇌혈류 SPECT 검사 시 환자에 따른 뇌조직 추출률에 대한 고찰)

  • Kim, Hwa-San;Lee, Dong-Ho;Ahn, Byeong-Pil;Kim, Hyun-Ki;Jung, Jin-Yung;Lee, Hyung-Nam;Kim, Jung-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.17-26
    • /
    • 2012
  • Purpose: This study focuses on patients in whom the chemically stable radiopharmaceutical 99mTc-HMPAO (d,l-hexamethylpropylene amine oxime) yielded reduced image quality due to a decreased brain extraction rate, and examines whether patient factors account for this. Material and Methods: From January 2010 to December 2010, among 272 patients who underwent 99mTc-HMPAO brain blood-flow SPECT scans for cerebral infarction, 23 patients (ages 55.3±9; 21 males, 3 females) with decreased tissue extraction rates were examined in detail. 99mTc-HMPAO was used in patients with normal as well as reduced brain-tissue extraction rates to verify its chemical stability. The patients' age, sex, blood pressure, diabetes status, drug use, current health status, known findings from CT/MRI, and before/after images from past SPECT examinations were reviewed to determine the factors and correlations affecting the brain-tissue extraction rate. Result: Multiple linear regression analysis found no notable correlations for the six factors other than sex, or for the before/after examination images. Reduced brain-tissue extraction rates were more frequent in males than in females (91.3% male, 8.7% female; p > 0.05). The Wilcoxon matched-pairs signed-ranks test applied to the before/after images yielded a value of 0.06, indicating no significant difference between the two tests (p > 0.05). Thus, the before/after images showed similar brain-tissue extraction rates, with variation between individual patients.
Conclusion: The effect of the chemically stable radiopharmaceutical 99mTc-HMPAO depended on each patient's characteristics and status, which were therefore considered factors in the reduced brain-tissue extraction rate. The related literature on 99mTc-HMPAO cerebral blood-flow SPECT points to cerebrovascular disease and portal-vein factors, and the exact cause of the decreased brain-tissue extraction rate could not be pinpointed. Nevertheless, 99mTc-HMPAO cerebral blood-flow SPECT proved extremely useful in tracking and examining brain diseases, and offered accurate results even in patients with reduced brain-tissue extraction rates.
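The Wilcoxon matched-pairs signed-ranks test used above can be sketched by computing the signed-rank statistic directly. This is a simplified illustration (paired values are invented; a real analysis would also derive the p-value, e.g. with scipy.stats.wilcoxon):

```python
def wilcoxon_w(before, after):
    """Wilcoxon matched-pairs signed-rank statistic: rank |differences|
    (zero differences dropped, ties share average ranks) and return the
    smaller of the positive- and negative-rank sums."""
    diffs = [a - b for a, b in zip(after, before) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j):
            ranks[order[k]] = (i + j + 1) / 2   # average of 1-based ranks
        i = j
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_pos, w_neg)

# Hypothetical paired before/after scores (arbitrary units).
before = [52, 48, 50, 47, 55]
after = [51, 48, 52, 46, 56]
W = wilcoxon_w(before, after)
```

A W close to the maximum possible rank sum (balanced positive and negative ranks) indicates no systematic before/after shift, matching the non-significant result the abstract reports.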


A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, utilized in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. Data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-based smart systems, as they enable customized intelligent systems through analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain, for smart cities and smart transportation, for example in solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a movement-amount control information system to enhance the convenience of citizens and commuters on congested public transportation such as subways and urban railways, the ease of securing real-time service data and the stability of security must be considered comprehensively. However, previous studies that use image data face reduced object-detection performance under privacy constraints and abnormal conditions. The IoT-device-based sensor data used in this study are free from privacy issues because they do not identify individuals, and can be effectively utilized to build intelligent public services for unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, with temperature data measured by the sensors transmitted in real time.
The experimental environment for collecting real-time sensor data was established at equally spaced midpoints of a 4×4 grid on the upper part of the ceiling of subway entrances, where actual passenger movement is high, and the temperature change of objects entering and leaving the detection spots was measured. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures of the 16 areas and their reference values per unit time were calculated. This methodology maximizes the signal of movement within the detection area. In addition, the data were scaled by a factor of 10 to reflect temperature differences by area more sensitively: for example, if the temperature collected from the sensor at a given time was 28.5°C, the analysis used the value 285. The data collected from the sensors thus have the characteristics of both time-series data and image data with 4×4 resolution. Reflecting the characteristics of the measured, preprocessed data, we propose a hybrid algorithm combining CNN, with its superior performance for image classification, and LSTM, which is especially suitable for analyzing time-series data: CNN-LSTM (Convolutional Neural Network-Long Short-Term Memory). The CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas. We verified the proposed model by comparing its performance with other artificial-intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory). In the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance compared to MLP, LSTM, and RNN-LSTM.
By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency-response services, are expected to be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification in other environments remains a limitation. In the future, the proposed model is expected to gain reliability if experimental data are collected in various environments or if training data are further built up by measuring data with other sensors.
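The sensor preprocessing described above (per-cell reference subtraction, a ×10 scaling, and 4×4 frames stacked into sequences for the LSTM stage) can be sketched as follows. The reference values, frame counts, and window length are assumptions for illustration; the abstract's 28.5°C → 285 example multiplies the raw reading, so the ×10 differencing here is one plausible reading of the described pipeline, not the paper's exact code.

```python
import numpy as np

def preprocess(frames, reference):
    """Per-cell difference from the reference temperature, scaled by 10,
    keeping the 4x4 spatial layout at each time step."""
    return (frames - reference) * 10.0

def windows(x, t=8):
    """Sliding windows of t consecutive frames for sequence (LSTM) input."""
    return np.stack([x[i:i + t] for i in range(len(x) - t + 1)])

reference = np.full((4, 4), 28.0)          # assumed per-cell baselines (C)
rng = np.random.default_rng(0)
frames = reference + rng.uniform(0.0, 1.5, size=(60, 4, 4))  # 60 time steps
x = preprocess(frames, reference)          # shape (60, 4, 4)
seq = windows(x)                           # shape (53, 8, 4, 4)
```

Each window in `seq` is a short movie of 4×4 thermal frames: the CNN part of the hybrid model extracts spatial features per frame, and the LSTM part consumes the resulting feature sequence to predict the passing count.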