• Title/Summary/Keyword: 기록시스템 (record system)


A Study on the Development of Ultrasonography Guide using Motion Tracking System (이미지 가이드 시스템 기반 초음파 검사 교육 기법 개발: 예비 연구)

  • Jung Young-Jin;Kim Eun-Hye;Choi Hye-Rin;Lee Chae-Jeong;Kim Seo-Hyeon;Choi Yu-Jin;Hong Dong-Hee
    • Journal of the Korean Society of Radiology / v.17 no.7 / pp.1067-1073 / 2023
  • Breast cancer is one of the three most common cancers in modern women, and its incidence is rising rapidly. It shows strong familial aggregation and a mortality rate of about 15%, placing affected women in a high-risk group; breast cancer therefore requires continuous management after early screening. Among the various modalities that can diagnose cancer, ultrasound has the advantages of low risk and real-time diagnosis. Breast ultrasound is especially useful for Asian women, whose denser breast tissue lowers the sensitivity of mammography. However, the results of ultrasound examinations vary greatly with the skill of the examiner. To compensate for this, we incorporate motion tracking technology. Motion tracking identifies and analyzes a location according to the movement of an object in three-dimensional space, so real-time control is possible and complex, fast movements can be recorded in real time. Exploiting these advantages, we present the development of an ultrasound examination guide.
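A minimal sketch of the recording idea behind such a guide: timestamped 3-D probe poses are logged in real time so an examination path can later be replayed for training. The tracker interface here is hypothetical; real systems expose vendor-specific SDKs.

```python
# Sketch only: log timestamped probe poses from a (hypothetical) tracker.
import time
from dataclasses import dataclass

@dataclass
class Pose:
    t: float                              # timestamp (s)
    xyz: tuple[float, float, float]       # probe position (mm)
    rpy: tuple[float, float, float]       # probe orientation (deg)

def read_tracker() -> Pose:
    # Hypothetical stand-in for a motion-tracker SDK call.
    return Pose(time.time(), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))

trajectory: list[Pose] = []
for _ in range(100):                      # record at ~50 Hz for 2 s
    trajectory.append(read_tracker())
    time.sleep(0.02)
print(f"recorded {len(trajectory)} poses")
```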

Evaluation of the linked operation of Pyeongrim Dam and Suyangje (dam) during period of drought (가뭄 시 평림댐과 수양제 연계 운영 평가)

  • Park, Jinyong;Lee, Seokjun;Kim, Sungi;Choi, Se Kwang;Chun, Gunil;Kim, Minhwan
    • Journal of Korea Water Resources Association / v.57 no.4 / pp.301-310 / 2024
  • The spatially and temporally non-uniform distribution of precipitation makes water management difficult. Climate change is worsening this non-uniformity, and droughts and floods are occurring more frequently. Their intensity is also increasing, straining existing water management systems. From June 2022 to June 2023, the storage rates of most major dams in the Yeongsan and Seomjin river basins were below 30%. The storage rate of Juam dam, on which water use in the basin depends most heavily, fell to 20.3%, the lowest ever recorded. Pyeongrim dam recorded its lowest storage rate, 27.3%, on May 4, 2023. Owing to a lack of precipitation beginning in the spring of 2022, Pyeongrim dam was placed at the drought-concern level on June 19, 2022, and entered the severe-drought level on August 21. Pyeongrim dam and Suyangje (dam) are run by different operating institutions. Nevertheless, through coordinated linked operation during the drought, Pyeongrim dam never fell to its low water level and was able to supply water stably to 63,000 people in three counties. To maximize the use of limited water resources, we must review ways to move water smoothly between basins and water sources, and prepare for water shortages caused by climate change by establishing a consumer-centered water supply system.

Oxygen and Hydrogen Isotope Studies of Fluid-Rock Interaction of the Hadong-Sancheong Anorthositic Rocks (하동-산청 회장암질암의 유체-암석 상호반응에 대한 산소와 수소 동위원소 연구)

  • Park Young-Rok;Ko Bokyun;Lee Kwang-Sik
    • The Journal of the Petrological Society of Korea / v.13 no.4 / pp.224-237 / 2004
  • The anorthositic rocks of the study area are divided into the northern Sancheong and southern Hadong anorthositic rocks on the basis of their different distribution patterns and lithologies. To evaluate the characteristics of the hydrothermal systems developed in the study area, oxygen and hydrogen isotopic compositions of the anorthositic rocks were measured. Oxygen isotopic values of the plagioclase exhibit an interesting spatial distribution. Plagioclase collected from the Sancheong anorthositic rocks in the northern part tends to have a relatively restricted range of $\delta^{18}O$ values between 7.3 and 8.8‰, heavier than the 'normal' $\delta^{18}O$ value (6-6.5‰) typical of plagioclase in fresh mantle-derived anorthosite, whereas plagioclase from the southern part is characterized by a wide range of $\delta^{18}O$ values between -4.4 and 8.2‰, much lighter than the 'normal' value. Plagioclase from the middle part has $\delta^{18}O$ values heavier than those from the southern part but lighter than those from the northern part. This spatial distribution of $\delta^{18}O$ values suggests that decoupled hydrothermal flow systems may have developed in the study area: meteoric water dominated the hydrothermal systems of the southern area, whereas magmatic fluid dominated in the north. The relationship between water content and hydrogen isotopic composition of the anorthosites shows a positive correlation, indicating that fluids exsolved from magma during magmatic differentiation caused deuteric alteration of the anorthositic rocks, involving replacement of pyroxenes by amphiboles. After the deuteric alteration, a hydrothermal system driven by meteoric water dominated the southern area and erased the record of the earlier magmatic-fluid hydrothermal system. The development of this meteoric hydrothermal system was limited to the southern area, however, and could not affect the Sancheong anorthositic rocks in the north. The more abundant occurrence of secondary alteration minerals such as sericite, calcite, and chlorite in the southern Hadong anorthosite relative to the northern Sancheong anorthosite appears to be related to the overlap of the two distinct hydrothermal systems in the southern area.
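For reference, the per-mil delta notation used in this abstract follows the standard definition relative to a reference standard (for oxygen, typically VSMOW; the abstract itself does not name its standard):

```latex
\delta^{18}\mathrm{O} =
\left(
  \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}
       {(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1
\right) \times 1000~\text{\textperthousand}
```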

A Performance Evaluation of the e-Gov Standard Framework on PaaS Cloud Computing Environment: A Geo-based Image Processing Case (PaaS 클라우드 컴퓨팅 환경에서 전자정부 표준프레임워크 성능평가: 공간영상 정보처리 사례)

  • KIM, Kwang-Seob;LEE, Ki-Won
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.4 / pp.1-13 / 2018
  • Both Platform as a Service (PaaS), one of the cloud computing service models, and the e-government (e-Gov) standard framework from the Ministry of the Interior and Safety (MOIS) provide developers with practical computing environments for building web-based services. Web application developers in the geo-spatial information field can utilize and deploy the middleware and common functions provided by either the cloud-based service or the e-Gov standard framework. However, there have been few studies of their applicability and performance in actual geo-spatial information applications. The motivation of this study was therefore to investigate the relevance of these technologies and platforms. Applicability and performance were evaluated after a test deployment of a spatial image processing service using Web Processing Service (WPS) 2.0 on the e-Gov standard framework. The test service was supported by Cloud Foundry, an open-source PaaS cloud platform. Using these components, the performance of the test system was assessed at 300 and 500 threads by comparing two configurations: the service on the PaaS alone, and the service on the e-Gov framework on the PaaS. Performance measurements were based on recording the response times to users' requests over 3,600 seconds. According to the experimental results, all the tested cases of the e-Gov framework on the PaaS showed better performance. The e-Gov standard framework on a PaaS cloud is thus expected to be an important foundation for building web-based spatial information services, especially in the public sector.
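A minimal sketch of the kind of load test described (N concurrent threads issuing requests and recording response times over a fixed duration); the endpoint URL is hypothetical, and the paper's actual harness is not specified in the abstract:

```python
# Sketch: N threads hammer a WPS endpoint and log per-request latency.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

WPS_URL = "http://example.org/wps"   # hypothetical endpoint, not from the paper
N_THREADS = 300                      # the paper tested 300 and 500 threads
DURATION_S = 3600                    # the paper measured over 3,600 seconds

def worker(results: list) -> None:
    deadline = time.time() + DURATION_S
    while time.time() < deadline:
        t0 = time.time()
        try:
            requests.get(WPS_URL,
                         params={"service": "WPS", "request": "GetCapabilities"},
                         timeout=30)
            results.append(time.time() - t0)
        except requests.RequestException:
            results.append(float("nan"))     # mark failed requests

results: list[float] = []
with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    for _ in range(N_THREADS):
        pool.submit(worker, results)

ok = [r for r in results if r == r]          # drop NaNs (failures)
print(f"{len(results)} requests, {len(ok)} succeeded, "
      f"mean response {sum(ok) / max(1, len(ok)):.3f} s")
```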

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.137-148 / 2014
  • Recommender systems have become one of the most important technologies in e-commerce today. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology serving these needs. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful approach. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. To generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who have no such information, CF cannot produce recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data grows exponentially and most data cells are empty; this sparse dataset makes the computation for recommendations extremely hard (sparsity problem). Since CF rests on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate when there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We use 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality refers to the number of direct links to and from a node. In a network of users connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them; gray sheep can therefore be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on users' degree centrality, and apply different similarity measures and recommendation methods to the two parts. In more detail, the algorithm is as follows. Step 1: Convert the initial data, a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality is lower than a pre-set threshold; the threshold is determined by simulation so that the accuracy of CF on the remaining dataset is maximized. Step 3: Apply an ordinary CF algorithm to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used instead. The F measures of the two datasets, weighted by their numbers of nodes, are summed to give the final performance metric. To test the performance improvement of this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data from the GroupLens research team, comprising 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using 'Best-N-neighbors' and 'Cosine' similarity. The empirical results show that the F measure improved by about 11% on average when the proposed algorithm was used.

    Past studies to improve CF performance typically used additional information beyond users' evaluations, such as demographic data, and some applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, and it shows that CF performance can be improved without any additional information when SNA techniques are used as proposed. The study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens the door for future studies applying SNA to CF to analyze dataset characteristics. In practice, this study provides guidelines for improving the performance of CF recommender systems with a simple modification.
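As a concrete illustration of Steps 1, 2, and 4 above, here is a minimal Python sketch of the gray-sheep separation on a synthetic rating matrix. The link threshold, centrality cutoff, and random data are illustrative assumptions; the paper tunes its threshold by simulation on the MovieLens data.

```python
import numpy as np

# Hypothetical 0/1 rating-indicator matrix (users x items); in the paper
# this would come from MovieLens (943 users x 1,682 movies).
rng = np.random.default_rng(0)
R = (rng.random((20, 50)) < 0.15).astype(int)

# Step 1: project the two-mode (user-item) network onto a one-mode
# (user-user) network: link users who co-rated enough items.
co = R @ R.T                          # pairwise co-rated item counts
np.fill_diagonal(co, 0)
MIN_COMMON = 2                        # illustrative link threshold
A = (co >= MIN_COMMON).astype(int)    # user-user adjacency matrix

# Step 2: degree centrality = number of direct links per user;
# users below the cutoff are the isolated "gray sheep".
degree = A.sum(axis=1)
CUTOFF = np.percentile(degree, 20)    # the paper tunes this by simulation
gray_sheep = np.where(degree < CUTOFF)[0]
mainstream = np.where(degree >= CUTOFF)[0]

# Step 3 would run ordinary CF on `mainstream`. Step 4: recommend the
# most-rated ("popular") items to the gray sheep.
popularity = R.sum(axis=0)
popular_items = np.argsort(popularity)[::-1][:10]
print("gray sheep users:", gray_sheep, "-> recommend items", popular_items)
```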

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

    • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
      • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
    • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most log data generated during banking operations come from handling clients' business; a separate log processing system is therefore needed to gather, store, categorize, and analyze them. However, in existing computing environments it is difficult to realize the flexible storage expansion required for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it continues to operate after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for unstructured log data, and they cannot easily expand across nodes when rapidly growing data must be distributed. NoSQL databases do not provide the complex computations that relational databases offer, but they can easily expand through node dispersion as data grow rapidly; they are non-relational databases with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database MongoDB, which has a schema-free structure, is used in the proposed system: its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when data grow rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log-insertion and query performance against a system using only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance evaluations over various chunk sizes.
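A minimal pymongo sketch of the schema-free storage idea, assuming a local MongoDB instance; the database, collection, and field names are illustrative, not taken from the paper:

```python
# Sketch: heterogeneous log documents in one schema-free collection.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["logs"]            # hypothetical db/collection names

# Documents need not share a schema, which is what makes MongoDB
# convenient for mixed, unstructured log records.
logs.insert_one({"ts": datetime.now(timezone.utc),
                 "type": "transaction", "branch": "A01", "amount": 1200})
logs.insert_one({"ts": datetime.now(timezone.utc),
                 "type": "login_failure", "client_ip": "10.0.0.7"})

# Aggregate log counts per type and hour, roughly the kind of summary
# the log graph generator module would plot.
pipeline = [{"$group": {"_id": {"type": "$type", "hour": {"$hour": "$ts"}},
                        "count": {"$sum": 1}}}]
for row in logs.aggregate(pipeline):
    print(row)
```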

    A study on optical coherence tomography system using optical fiber (광섬유를 이용한 광영상 단층촬영기에 관한연구)

    • 양승국;박양하;장원석;오상기;김현덕;김기문
      • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2004.04a / pp.5-9 / 2004
    • In this paper, we studied an OCT (Optical Coherence Tomography) system, a technology that has been extensively studied because of advantages such as high-resolution cross-sectional imaging, low cost, and small size. The basic principle of an OCT system is the Michelson interferometer. The characteristics of the light source determine the resolution and the penetration depth; here the light source is a commercial SLD with a central wavelength of 1,285 nm and an FWHM (Full Width at Half Maximum) of 35.3 nm. An optical delay line is needed to match the optical path length with that of the light scattered or reflected from the sample; to equalize the path lengths, the stage carrying the reference mirror is moved linearly by a step motor. The interferometer is configured as a Michelson interferometer using single-mode fiber, and the scanner can be focused on the sample using the reference arm. Two-dimensional cross-sectional images were measured by scanning the transverse direction of the sample with the step motor: after the depth signal is detected at one point of the sample, the scanner is moved by the step motor to build up the two-dimensional cross-sectional image. A photodiode was used that has high detection sensitivity, excellent noise characteristics, and a dynamic range from 800 nm to 1,700 nm. The detected signal is a small, high-frequency interference signal mixed with noise; after filtering and amplification, only the envelope of the interference signal is extracted, and the cross-sectional image is then displayed after digitization by an A/D converter. The resolution of the OCT system is about 30 μm, which corresponds to the theoretical resolution. A cross-sectional image of a ping-pong ball was also measured. An OCT system configured with a Michelson interferometer has low contrast because the power of the fed-back interference light is reduced; this problem can be overcome with an improved interferometer. Also, to obtain cross-sectional images within a short time, the measurement time of the optical delay line must be reduced.
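A short sketch of the envelope-extraction step described above, using the analytic signal (Hilbert transform) on synthetic fringe data; the carrier frequency, sample rate, and noise level are illustrative stand-ins for the photodiode output:

```python
# Sketch: extract the coherence envelope of an interference signal.
import numpy as np
from scipy.signal import hilbert

fs = 1e6                                 # sample rate (Hz), illustrative
t = np.arange(0, 2e-3, 1 / fs)
carrier = np.cos(2 * np.pi * 50e3 * t)   # high-frequency fringe carrier
gauss = np.exp(-((t - 1e-3) ** 2) / (2 * (5e-5) ** 2))  # coherence envelope
signal = gauss * carrier + 0.05 * np.random.default_rng(0).normal(size=t.size)

envelope = np.abs(hilbert(signal))       # demodulated A-scan envelope
peak_idx = int(np.argmax(envelope))
print(f"envelope peak at t = {t[peak_idx] * 1e3:.3f} ms")  # depth location
```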


    The status, classification and data characteristics of Seonsaengan(先生案, The predecessor's lists) in Jangseogak(藏書閣, Joseon dynasty royal library) (장서각 소장 선생안(先生案)의 현황과 사료적 가치)

    • Yi, Nam-ok
      • The Study of the Eastern Classic / no.69 / pp.9-44 / 2017
    • Seonsaengan (先生案) are the predecessors' lists: each list records the names of the predecessors in a post, the dates of appointment and departure, the previous post, and the next post. Previous studies have accordingly used them to examine local recruitment and the Jungin (中人), who cannot be traced in the general personnel records of the Joseon dynasty. However, the overall status and classification of the lists had not yet been established, so this study aims to clarify their status, classification, and value as historical materials. 176 such lists from the Joseon dynasty survive today: in Jangseogak (47), Kyujanggak (80), the National Library of Korea (24), and other collections (25). Jangseogak holds lists of royal government officials, Kyujanggak holds lists of central government officials, and the National Library of Korea and other collections hold lists of local government officials. This paper focuses on the accessible Jangseogak lists, 47 in all. As mentioned above, the Jangseogak lists generally concern royal government officials; by institution they comprise 18 for central government officials, 5 for local government officials, and 24 for royal government officials. Classified by contents, they comprise 6 for ritual and diplomatic officials, 12 for royal government officials, 5 for local government officials, 14 for royal tomb officials, and 10 for royal education officials. From the information in the lists, the following six characteristics can be summarized. First, basic personal information about each recorded person can be found. Second, the period in office and the reasons for leaving office can be known. Third, changes in the office system can be confirmed. Fourth, one aspect of the personnel administration system of the Joseon dynasty can be seen through the previous and next posts. Fifth, the days that were particularly important for each office can be identified. Sixth, the contents of work evaluations can be confirmed. This reflects the reality of the Joseon dynasty, which differed from what was recorded in the legal codes, and through it the dynasty's personnel administration can be examined. For a precise review, however, a database of all 176 lists needs to be built; if the data are then analyzed in connection with existing genealogical data, a basis for understanding the personnel administration system of the Joseon dynasty can be established.

    A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

    • Kim, Sun-Woong
      • Journal of Intelligence and Information Systems / v.16 no.2 / pp.19-32 / 2010
    • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivatives and trading volatility strategies. This study presents a mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embed the volatility asymmetry widely documented in the literature into our model. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as the volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world, with trading volume of more than 10 million contracts a day, the highest of all stock index option markets. Analyzing the VKOSPI is therefore of great importance for understanding the volatility inherent in option prices, and it can offer trading ideas to futures and option dealers. Using the VKOSPI as the volatility proxy avoids the statistical estimation problems associated with other volatility measures, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by maximum likelihood. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1,1). Tomorrow's forecast value and the change direction of stock market volatility are obtained from recursive GARCH specifications from January 2007 to December 2009 and compared with the VKOSPI. The empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecast by the GARCH models, and a volatility trading system is developed from the forecast change direction: if the VKOSPI is expected to rise tomorrow, a long straddle or strangle position is established; if it is expected to fall, a short straddle or strangle position is taken. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecast direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit, and it is subtracted otherwise. For the in-sample period, the power ARCH model fits best on the statistical metric of Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting the direction of tomorrow's VKOSPI change; overall, the power ARCH model shows the best fit for the VKOSPI.
All the GARCH models yield trading profits for the volatility trading system, and the exponential GARCH model shows the best in-sample performance, an annual profit of 197.56%. All models except the exponential GARCH also show trading profits out of sample, with the power ARCH model achieving the largest out-of-sample annual trading profit, 38%. The volatility clustering and asymmetry found in this research are a reflection of volatility non-linearity. This further suggests that combining asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
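A minimal sketch of fitting the symmetric and asymmetric models named above with the `arch` Python package; the random returns are placeholders for KOSPI 200 returns, and the settings only loosely mirror the paper's specifications:

```python
# Sketch: symmetric GARCH(1,1), GJR-GARCH, and EGARCH fits plus a
# one-step-ahead variance forecast (the analogue of "tomorrow's" volatility).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = 100 * rng.normal(0, 0.01, size=1000)   # placeholder % returns

garch  = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
gjr    = arch_model(returns, vol="GARCH", p=1, o=1, q=1).fit(disp="off")  # GJR
egarch = arch_model(returns, vol="EGARCH", p=1, q=1).fit(disp="off")

forecast = egarch.forecast(horizon=1)            # next-day variance forecast
print(forecast.variance.iloc[-1])
```

A rising one-step forecast would map to a long straddle/strangle in the trading rule the abstract describes, and a falling one to a short position.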

    Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

    • 박상현;원정임;윤지희;김상욱
      • Journal of KIISE:Databases / v.31 no.5 / pp.468-478 / 2004
    • It is essential in various application areas of data mining and bioinformatics to retrieve the occurrences of interesting patterns from sequence databases efficiently. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g., a router). A typical query to find temporal causal relationships among the network events is: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUp and subsequently by TCPConnectionClose, under the constraint that the interval between the first two events is no larger than 20 seconds and the interval between the first and third events is no larger than 40 seconds.' This paper proposes an indexing method that can answer such queries efficiently. Unlike previous methods, which rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, proven efficient in both storage and search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the multi-dimensional spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of event type Ei in W, where n is the number of event types that can occur in the system of interest. The 'curse of dimensionality' may arise when n is large, so we use dimension selection or event-type grouping to avoid this problem. The experimental results reveal that the proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
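A small sketch of the window-to-vector mapping the index is built on, using the event names from the query example above; the event list and timestamps are illustrative:

```python
# Sketch: map a sliding window of timestamped events to the n-dimensional
# vector of first-occurrence intervals that feeds the spatial index.
EVENT_TYPES = ["CiscoDCDLinkUp", "MLMStatusUp", "TCPConnectionClose"]

def window_vector(window):
    """window: list of (timestamp, event_type) pairs sorted by timestamp.
    Returns, per event type, the interval from the window's first event
    to that type's first occurrence (None if the type never occurs)."""
    t0 = window[0][0]
    vec = [None] * len(EVENT_TYPES)
    for ts, etype in window:
        i = EVENT_TYPES.index(etype)
        if vec[i] is None:
            vec[i] = ts - t0
    return vec

events = [(0, "CiscoDCDLinkUp"), (15, "MLMStatusUp"),
          (35, "TCPConnectionClose")]
print(window_vector(events))   # [0, 15, 35] -> one point in the spatial index
```

The example query then becomes a range search over these vectors (second coordinate ≤ 20, third ≤ 40), which is why a multi-dimensional spatial index answers it without false dismissals.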

