• Title/Summary/Keyword: Information Mapping (정보 매핑)

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun; Ahn, So-young; Shin, Dong-il; Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • As interest in Human-Computer Interaction (HCI) has grown, research on HCI has been actively conducted, along with research on the Natural User Interface/Natural User eXperience (NUI/NUX), which uses the user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these algorithms are complex to implement and demand long training times, since they must go through preprocessing, normalization, and feature-extraction steps. Recently, Kinect, launched by Microsoft as an NUI/NUX development tool, has attracted attention, and studies using Kinect have been conducted. In a previous study, the authors implemented a highly intuitive hand-mouse interface using the user's physical features; however, it had weaknesses such as unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracted from the user's physical features through Kinect in real time. The virtual monitor is a virtual space that can be controlled by the hand mouse, such that coordinates on the virtual monitor are accurately mapped onto coordinates on the real monitor (see the coordinate-mapping sketch below). The hand-mouse interface based on the virtual-monitor concept keeps the outstanding intuitiveness of the previous study while improving the accuracy of the mouse functions. Furthermore, we increased the accuracy of the interface by recognizing the user's unintended actions through a concentration indicator derived from the user's electroencephalogram (EEG) data. To evaluate the intuitiveness and accuracy of the interface, we tested it on 50 people ranging in age from their 10s to their 50s. In the intuitiveness experiment, 84% of the subjects learned how to use it within one minute. In the accuracy experiment, the mouse functions achieved accuracies of 80.4% (drag), 80% (click), and 76.7% (double-click). Since the intuitiveness and accuracy of the proposed hand-mouse interface were verified experimentally, we expect it to serve as a good example of an interface for controlling systems by hand in the future.
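
The abstract describes a linear mapping from a user-specific virtual monitor onto the real screen but does not publish code. Below is a minimal sketch of that coordinate mapping, assuming the virtual monitor is a calibrated rectangle in Kinect camera space; the function and variable names are hypothetical, not the authors' implementation.

```python
# Minimal sketch of virtual-monitor-to-screen coordinate mapping.
# Assumption (not from the paper): the virtual monitor is an axis-aligned
# rectangle in camera space, calibrated per user from physical features.

SCREEN_W, SCREEN_H = 1920, 1080

def map_to_screen(hand_x, hand_y, vm_left, vm_top, vm_right, vm_bottom):
    """Linearly map a hand position inside the virtual monitor to screen pixels."""
    # Normalize the hand position to [0, 1] within the virtual-monitor rectangle.
    u = (hand_x - vm_left) / (vm_right - vm_left)
    v = (hand_y - vm_top) / (vm_bottom - vm_top)
    # Clamp so the cursor stays on screen when the hand drifts outside.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return int(u * (SCREEN_W - 1)), int(v * (SCREEN_H - 1))

# Example: a hand at the center of a 0.6 m x 0.34 m virtual monitor
print(map_to_screen(0.30, 0.17, 0.0, 0.0, 0.6, 0.34))  # -> (959, 539)
```

Clamping is one simple way to keep the cursor on screen when the tracked hand leaves the calibrated rectangle.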

The Variation of Scan Time According to Patient's Breast Size and Body Mass Index in Breast Sentinel lymphangiography (유방암의 감시림프절 검사에서 유방크기와 체질량지수에 따른 검사시간 변화)

  • Lee, Da-Young; Nam-Koong, Hyuk; Cho, Seok-Won; Oh, Shin-Hyun; Im, Han-Sang; Kim, Jae-Sam; Lee, Chang-Ho; Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.2 / pp.62-67 / 2012
  • Purpose: Currently, sentinel lymph node mapping using a radioisotope and blue dye precedes sentinel lymph node biopsy in breast cancer patients, but every patient is scanned with the same protocol, without considering physical characteristics such as breast size and body mass index (BMI). The purpose of this study is to find the optimal scan time for breast sentinel lymphangiography by observing how strongly BMI and breast size influence the speed of lymphatic flow. Materials and Methods: The subjects were 100 female breast cancer patients (average age 50.34±10.26 years) at Severance Hospital between October 2011 and December 2011, who underwent breast sentinel lymphangiography before surgery. The study was performed on a Forte dual-head gamma camera (Philips Medical Systems, Nederland B.V.). All patients received an intradermal injection of 18.5 MBq (0.5 ml) of 99mTc-Phytate. For 80 patients we scanned without a time limit, measuring with a stopwatch how long the lymphatic flow took from the injection site to the lymph node. We then calculated each patient's BMI and classified the patients into four groups, and measured breast size and classified them into three groups. A modified protocol whose scan time was adjusted according to these results was applied to the remaining 20 patients and evaluated. Results: The mean scan time by breast size was 2.48 minutes for group A, 7.69 minutes for group B, and 10.43 minutes for group C. The mean scan time by BMI was 1.35 minutes for underweight, 2.56 minutes for normal-weight, 5.62 minutes for slightly overweight, and 5.62 minutes for overweight patients. The success rate of the modified protocol was 85%. Conclusion: The higher the BMI and the larger the breast, the longer the total scan time. Based on this, we designed a modified breast lymphangiography protocol (a scan-time lookup sketch follows below); when the protocol was applied, most sentinel lymph nodes were visualized as a lymphatic pool. In conclusion, the modified protocol that considers physical individuality achieved a higher success rate than the uniform protocol.
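
To illustrate the modified protocol's logic, here is a small sketch that selects a scan time from the group means reported in the abstract. The BMI cutoffs are the common Asian-Pacific categories and are an assumption; the paper's exact group boundaries are not given in the abstract.

```python
# Illustrative scan-time lookup based on the reported BMI-group means.
# Cutoffs are assumed (Asian-Pacific convention), not taken from the paper.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def suggested_scan_minutes(bmi_value: float) -> float:
    """Return the mean scan time (minutes) reported for each BMI group."""
    if bmi_value < 18.5:    # underweight
        return 1.35
    elif bmi_value < 23.0:  # normal weight
        return 2.56
    elif bmi_value < 25.0:  # slightly overweight
        return 5.62
    else:                   # overweight
        return 5.62

print(suggested_scan_minutes(bmi(70, 1.60)))  # BMI ~27.3 -> 5.62 minutes
```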

Physical Offset of UAVs Calibration Method for Multi-sensor Fusion (다중 센서 융합을 위한 무인항공기 물리 오프셋 검보정 방법)

  • Kim, Cheolwook; Lim, Pyeong-chae; Chi, Junhwa; Kim, Taejung; Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1125-1139 / 2022
  • In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the global positioning system/inertial measurement unit (GPS/IMU) sensor and an observation sensor such as a hyperspectral or lidar sensor. This physical offset causes misalignment between images along the flight direction. In a multi-sensor system in particular, observation sensors must be swapped regularly, and each swap requires costly re-acquisition of calibration parameters. In this study, we establish a precise sensor model equation that applies to multiple sensors in common and propose an independent physical-offset estimation method. The proposed method consists of three steps. First, we define a rotation matrix appropriate for our system and an initial sensor model equation for direct georeferencing (a standard form of this equation is sketched below). Next, we set up an observation equation for physical-offset estimation by extracting corresponding points between ground control points and the sensor data. Finally, the physical offset is estimated from the observations, and the precise sensor model equation is obtained by applying the estimated parameters to the initial equation. Datasets from four regions at different latitudes and longitudes (Jeonju, Incheon, Alaska, Norway) were compared to analyze the effect of the calibration parameters. We confirmed that the misalignment between images was corrected after applying the physical offset in the sensor model equation. Absolute position accuracy was analyzed on the Incheon dataset against ground control points: the root mean square error (RMSE) in the X and Y directions was 0.12 m for the hyperspectral image and 0.03 m for the point cloud. The relative position accuracy at specific points between the adjusted point cloud and the hyperspectral images was 0.07 m, confirming that precise data mapping is possible without ground control points through the proposed estimation method and demonstrating the potential for multi-sensor fusion. We expect that a flexible multi-sensor platform can be operated at reduced cost through this independent parameter estimation method.
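
The abstract refers to an "initial sensor model equation for direct georeferencing" without reproducing it. As a reference sketch (the standard direct-georeferencing form from the photogrammetric literature, not necessarily the paper's exact equation), the ground coordinates of an observed point can be written as

$$\mathbf{X}_{G} = \mathbf{X}_{GPS}(t) + R_{b}^{n}(t)\,\bigl(\mathbf{a}^{b} + s\,R_{s}^{b}\,\mathbf{x}^{s}\bigr)$$

where $\mathbf{X}_{GPS}(t)$ is the GPS antenna position at time $t$, $R_{b}^{n}(t)$ the IMU-derived body-to-navigation rotation, $\mathbf{a}^{b}$ the lever-arm vector in the body frame (the physical offset estimated here), $R_{s}^{b}$ the boresight rotation between the observation sensor and the body frame, $s$ a scale factor, and $\mathbf{x}^{s}$ the observation ray in the sensor frame.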

Estimation and Mapping of Soil Organic Matter using Visible-Near Infrared Spectroscopy (분광학을 이용한 토양 유기물 추정 및 분포도 작성)

  • Choe, Eun-Young; Hong, Suk-Young; Kim, Yi-Hyun; Zhang, Yong-Seon
    • Korean Journal of Soil Science and Fertilizer / v.43 no.6 / pp.968-974 / 2010
  • We assessed the feasibility of the discrete wavelet transform (DWT) as a spectral pre-processing step to improve the estimation of soil organic matter from visible-near infrared spectra, and we mapped its distribution via a block kriging model. Continuum removal and first-derivative transforms, as well as Haar and Daubechies DWTs, were used to enhance spectral variation with respect to soil organic matter content, and the transformed spectra were fed into a partial least squares regression (PLSR) model (a DWT + PLSR pipeline is sketched below). Estimation with raw reflectance and with transformed spectra showed similar quality, with R² > 0.6 and RPD > 1.5, values that correspond to approximate prediction of soil organic matter content. The weaker performance of the DWT spectra may be caused by the coarseness of the DWT approximation, which is not sufficient to express the spectral variation driven by soil organic matter content. Distribution maps of soil organic matter were drawn with kriging, a spatial interpolation model. The organic matter contents of the soil samples followed a Gaussian distribution centered at around 20 g kg⁻¹, and the mapped values showed similar patterns. The estimated organic matter contents were distributed similarly to the measured values, although some parts of the estimated map were slightly higher. If the estimation quality is improved further, spectroscopy-based estimation and mapping may be applied to global soil mapping, soil classification, and remote sensing data analysis as a rapid and cost-effective method.
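
A minimal sketch of the DWT + PLSR pipeline the abstract describes, assuming spectra arrive as a (samples × wavelengths) array; synthetic data stands in for the vis-NIR measurements and the specific settings (decomposition level, component count) are illustrative, not the paper's.

```python
# DWT pre-processing + PLSR estimation, sketched with synthetic spectra.
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((100, 256))                  # 100 spectra, 256 bands (synthetic)
y = X[:, 50] * 30 + rng.normal(0, 1, 100)   # fake organic-matter target

# Haar DWT: keep level-3 approximation coefficients as a compressed spectrum.
X_dwt = np.array([pywt.wavedec(s, "haar", level=3)[0] for s in X])

X_tr, X_te, y_tr, y_te = train_test_split(X_dwt, y, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
print("R^2 on held-out spectra:", pls.score(X_te, y_te))
```

The predicted values at sampled locations would then be interpolated over the study area with block kriging (e.g., via a geostatistics package) to produce the distribution map.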

A Study on the Model of Appraisal and Acquisition for Digital Documentary Heritage : Focused on 'Whole-of-Society Approach' in Canada (디지털기록유산 평가·수집 모형에 대한 연구 캐나다 'Whole-of-Society 접근법'을 중심으로)

  • Pak, Ji-Ae; Yim, Jin Hee
    • The Korean Journal of Archival Studies / no.44 / pp.51-99 / 2015
  • The purpose of archival appraisal has gradually shifted from the selection of records to the documentation of society. In particular, the qualitative and quantitative development of digital technology and the web has become the driving force that enables semantic, rather than physical, acquisition. Under these circumstances, the concept of 'documentary heritage' has been re-established internationally, led by UNESCO, and Library and Archives Canada (LAC) reflects this trend. To revive the spirit of total archives, LAC has been developing a new appraisal model and a new acquisition model simultaneously: the 'whole-of-society approach'. The features of this approach can be summarized in three points. First, its object is documentary heritage, and acquisition means semantic rather than physical acquisition. Second, because the object of management is documentary heritage, cooperation between documentary heritage institutions is a prerequisite. Lastly, it documents not only what has already happened but also what is happening in current society. As an appraisal method, the whole-of-society approach identifies social components based on social theories. As an acquisition method, it targets digital records, both 'digitized' and 'born-digital' heritage, and enables the semantic acquisition of documentary heritage through data linking: identified social components are mapped as metadata components and published as linked open data (a minimal linked-data sketch follows below). This study points out that documentation of society is hard to realize under the domestic appraisal system, whose purpose is limited to selection; to overcome this limitation, we suggest a guideline that applies the whole-of-society approach.
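
To make the "social components as linked open data" idea concrete, here is a minimal sketch using rdflib. The vocabulary, URIs, and class names are hypothetical illustrations, not LAC's actual model.

```python
# Mapping a social component to linked open data (illustrative only).
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import DCTERMS, FOAF

EX = Namespace("http://example.org/heritage/")  # hypothetical namespace

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

event = EX["event/2015-community-festival"]   # an identified social component
record = EX["record/photo-0001"]              # a born-digital record

g.add((event, RDF.type, EX.SocialEvent))
g.add((record, RDF.type, FOAF.Image))
g.add((record, DCTERMS.subject, event))       # semantic link, not physical custody
g.add((record, DCTERMS.title, Literal("Community festival photograph")))

print(g.serialize(format="turtle"))
```

The point of the sketch is that the record is acquired semantically, by linking it to a social component, rather than by transferring physical custody.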

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, growing demand for big data analysis has driven the vigorous development of related technologies and tools, while the development of IT and the spread of smart devices are producing enormous amounts of data. As a result, data analysis technology is rapidly becoming popular, attempts to gain insights through data analysis keep increasing, and big data analysis is expected to become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to those who request it. However, rising interest in big data analysis has stimulated programming education and the development of many analysis tools; the entry barriers are gradually lowering, the technology is spreading, and analyses are increasingly expected to be performed by the demanders themselves. Along with this, interest in unstructured data, especially text, continues to grow: new web platforms and techniques mass-produce text data and encourage attempts to analyze it, and the results of text analysis are utilized in various fields. Text mining embraces the various theories and techniques for text analysis; among them, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters; it is considered very useful because it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire corpus, so the entire corpus must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to many documents and causes a scalability problem: processing time increases sharply with the number of documents, which is especially noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied: a large corpus is divided into sub-units, and topic modeling is repeated on each unit. This enables topic modeling on large corpora with limited system resources, improves processing speed, and can significantly reduce analysis time and cost, since documents can be analyzed where they reside without first being combined. Despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire corpus is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics must be measured. Because of these difficulties, this approach has been studied less than other topic modeling approaches. In this paper, we propose a topic modeling approach that solves the two problems above. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics onto local topics (a mapping sketch follows below). We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology provides results similar to topic modeling over the entire corpus, and we propose a reasonable method for comparing the results of the two approaches.
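
A minimal sketch of the local-to-global topic mapping idea, in the spirit of the divide-and-conquer scheme the abstract describes (not the authors' exact RGS procedure): local topics are matched to global topics by comparing their topic-word distributions, assuming a shared vocabulary across sub-clusters.

```python
# Map local topics onto global topics via cosine similarity of
# topic-word distributions (illustrative, with a toy corpus).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["stock market rises", "market crash fears", "new vaccine trial",
        "vaccine approved today", "stocks rally again", "trial data positive"]

vec = CountVectorizer()
X = vec.fit_transform(docs)

# Global topics from the whole corpus; local topics from one sub-cluster.
global_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
local_lda = LatentDirichletAllocation(n_components=2, random_state=1).fit(X[:3])

def normalize(m):
    return m / m.sum(axis=1, keepdims=True)

G = normalize(global_lda.components_)   # global topic-word distributions
L = normalize(local_lda.components_)    # local topic-word distributions

# Assign each local topic to its most similar global topic.
sim = (L @ G.T) / (np.linalg.norm(L, axis=1)[:, None] * np.linalg.norm(G, axis=1))
print("local -> global topic:", sim.argmax(axis=1))
```

Once each local topic has a global counterpart, one can check whether a document receives the "same" topic locally and globally, which is the accuracy measure the abstract proposes.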