• Title/Summary/Keyword: Codes Application

Search Result 503

Analysis of sedation and general anesthesia in patients with special needs in dentistry using the Korean healthcare big data

  • Kim, Jieun;Kim, Hyuk;Seo, Kwang-Suk;Kim, Hyun Jeong
    • Journal of Dental Anesthesia and Pain Medicine / v.22 no.3 / pp.205-216 / 2022
  • Background: People with special needs often require diverse behavioral management in dentistry. They may feel anxious or uncomfortable, or may not respond to communication with the dentist. Patients with medical, physical, or psychological disorders may be unable to cooperate and therefore require sedation (SED) or general anesthesia (GA) to receive dental treatment. Using Korean healthcare big data, this study aimed to analyze trends in SED and GA among special needs patients undergoing dental treatment. These data can serve as reference material for hospitals and for the preparation of guidelines and related policy decisions by associations and governments concerning special needs patients in dentistry. Methods: The study used selected health information data provided by the Korean National Health Insurance Service. Patients with a record of use of one of the eight selected drugs used in dental SED between January 2007 and September 2019, and with International Classification of Diseases-10 codes for attention deficit hyperactivity disorder (ADHD), phobia, brain disease, cerebral palsy, epilepsy, genetic disease, autism, mental disorder, mental retardation, or dementia, were selected. The insurance claims data were analyzed by age, sex, sedative use, GA, year, and institution. Results: From January 2007 to September 2019, 116,623 special needs patients received dental treatment under SED or GA. There were 136,018 SED cases performed on 69,265 patients and 56,308 GA cases performed on 47,257 patients. In 2007, 3,100 special needs patients received dental treatment under SED; by 2018, the number of SED cases had increased about six-fold to 18,528. ADHD was the most common disability among SED cases, while phobia was the most common among GA cases. SED cases were predominantly male (M : F = 64.36% : 35.64%).
Conclusion: The use of SED and GA for patients with special needs in dentistry is increasing rapidly; thus, preparing guidelines and reinforcing education and systems are necessary.
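The cohort-selection step described in the Methods section can be sketched as a filter over claims records. The field names, drug names, and the ICD-10 prefix subset below are illustrative assumptions, not the actual NHIS schema or the study's full code list:

```python
# Hypothetical claim-record filter: keep claims that carry both one of the
# study sedatives and a target ICD-10 diagnosis code.
TARGET_ICD10_PREFIXES = ("F90", "F40", "G80", "G40", "F84", "F70", "F00")  # illustrative subset
SEDATIVE_CODES = {"midazolam", "chloral_hydrate", "ketamine"}  # placeholder for the eight study drugs

def select_cohort(claims):
    """Return claims with both a study sedative and a target ICD-10 code."""
    cohort = []
    for claim in claims:
        has_drug = bool(SEDATIVE_CODES & set(claim["drugs"]))
        has_dx = any(code.startswith(TARGET_ICD10_PREFIXES) for code in claim["icd10"])
        if has_drug and has_dx:
            cohort.append(claim)
    return cohort

claims = [
    {"id": 1, "drugs": ["midazolam"], "icd10": ["F90.0"]},  # ADHD + sedative -> kept
    {"id": 2, "drugs": ["ibuprofen"], "icd10": ["F40.1"]},  # no study sedative -> dropped
]
print([c["id"] for c in select_cohort(claims)])  # -> [1]
```

The selected claims would then be tabulated by age, sex, year, and institution as in the study.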

Investigation of USGS Short-Wave Infrared Databases and Comparison with Domestic Cases - Focusing on the Availability for the Mineralogical Analyses and an Application on the Domestic Illite - (USGS 단파장 적외선 데이터베이스 분석 및 국내 사례와 비교: 광물학적 활용도 고찰 및 국내 산출 일라이트로의 적용 사례)

  • Chang Seong Kim;Raeyoon Jeong;Soon-Oh Kim;Ji-man Cha
    • Korean Journal of Mineralogy and Petrology / v.36 no.4 / pp.259-271 / 2023
  • Since short-wave infrared spectra vary significantly with the formation environment, countries with advanced resource exploration programs have been collecting spectra and building databases. Representative organizations include the USGS and CSIRO, and a project is also underway in China to consolidate and exploit a large body of existing data. The USGS library provides a total of 2,457 spectra, covering not only minerals but also various other materials that respond to infrared radiation. Of these, 1,276 are mineral spectra, about half of the total. Each spectrum title encodes information such as the analysis device (NIC4, BECK, ASDNG, etc.), purity code (a, b, c, d, u), and measurement method (AREF, RREF, RTGC, TRAN), and the raw data are provided in ASCII and GIF formats. The CSIRO library contains a total of 502 spectra, of which the majority, 493, are mineral spectra. The USGS library is a free, publicly available resource, while the CSIRO library is bundled with TSG8 or must be purchased separately. When the eight spectra whose spectral shapes could be analyzed were compared with spectra of domestic illite, the positions of the absorption peaks differed significantly from those of the domestic illite, except for one Japanese illite. Additional research is needed to determine the causes of these differences, and corresponding domestic databases should be established.
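The absorption-peak comparison described above amounts to parsing the ASCII wavelength/reflectance pairs and locating minima. A minimal sketch, with an invented two-column layout and made-up numbers (the actual USGS file format includes headers and metadata):

```python
# Parse a toy ASCII spectrum of "wavelength reflectance" pairs and find the
# absorption feature, i.e. the wavelength of minimum reflectance.
def parse_spectrum(text):
    pairs = []
    for line in text.strip().splitlines():
        w, r = line.split()
        pairs.append((float(w), float(r)))
    return pairs

def absorption_peak(pairs):
    """Return the wavelength at which reflectance is lowest."""
    return min(pairs, key=lambda p: p[1])[0]

ascii_data = """\
2190 0.71
2200 0.52
2210 0.64
"""
print(absorption_peak(parse_spectrum(ascii_data)))  # -> 2200.0
```

Comparing peak positions across libraries, as done for the illite spectra, then reduces to comparing these wavelengths.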

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In;Jeon Young Jin;Lee Sang Bum;Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.519-524 / 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deep hashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are built from fully connected layers, convolutional neural networks, and transformer modules, respectively. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module can learn latent vectors that highlight objects or local regions through an energy function computed from the activation values of each neuron and its surrounding neurons. The proposed method is a representation learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model outperforms the comparison models and achieves performance comparable to supervised learning-based deep hashing models. The proposed model can be used in application systems that require low-dimensional image representations, such as image search or copyright image determination.
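The final hashing step described above, binarizing a latent vector and comparing codes, can be sketched as follows. The latent values are made up for illustration; the paper's model produces them with its SimAM-based encoder:

```python
# Sign-threshold a latent vector into a binary hash code, then compare codes
# by Hamming distance: perceptually similar images should yield nearby codes.
def to_hash_code(latent):
    return [1 if v > 0 else 0 for v in latent]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

original = to_hash_code([0.8, -0.3, 0.1, -0.9])     # -> [1, 0, 1, 0]
manipulated = to_hash_code([0.7, -0.2, 0.2, -0.8])  # small perturbation of the latent
print(hamming(original, manipulated))  # -> 0: the manipulated image still matches
```

A robust perceptual hash is exactly one for which such small latent perturbations leave the code unchanged, which is the retrieval property the paper evaluates.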

A Product Model Centered Integration Methodology for Design and Construction Information (프로덕트 모델 중심의 설계, 시공 정보 통합 방법론)

  • Lee Keun-Hyoung;Kim Jae-Jun
    • Proceedings of the Korean Institute Of Construction Engineering and Management / autumn / pp.99-106 / 2002
  • Early research on the integration of design and construction information focused on conceptual data models. The development and widespread use of commercial database management systems led many researchers to design database schemas that clarify the relationships between non-graphic data items. Although this research became the foundation for subsequent work, it did not utilize the graphic data available from CAD systems, which were already widely used. The 4D CAD concept suggests a way of integrating graphic data with schedule data. Although this integration opened a new possibility, it remains limited by data dependency on a specific application. This research suggests a new approach to integrating design and construction information: the 'Product Model Centered Integration Methodology'. The methodology was developed through preliminary research on existing 4D CAD-based approaches, followed by the development and application of the new methodology itself. A 'Design Component' can be converted into digital format by an object-based CAD system, and 'Unified Object-based Graphic Modeling' shows how to model a graphic product model using such a system. Because the possibility of reusing design information in later stages depends on how the CAD model is created, modeling guidelines and specifications are suggested. Prototype systems for integration management and exchange are then presented, using a 'Product Frameworker' and a 'Product Database' that also supports multiple viewpoints. A 'Product Data Model' is designed, and the main data workflows are represented using the 'Activity Diagram', one of the UML diagrams. These can be used for writing program code and developing a prototype that automatically creates activity items in an actual schedule management system.
Through validation processes, the 'Product Model Centered Integration Methodology' is suggested as a new approach to the integration of design and construction information.
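The core idea, design components in a product model carrying links from which schedule activity items can be generated, can be sketched as a small data model. The class and field names below are hypothetical, not the paper's actual 'Product Data Model' schema:

```python
# Minimal sketch of a product-model-centered link between CAD design
# components and schedule activities, as in 4D CAD integration.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    duration_days: int

@dataclass
class DesignComponent:
    component_id: str
    geometry_ref: str                  # pointer into the object-based CAD model
    activities: list = field(default_factory=list)

def generate_schedule(components):
    """Derive schedule items automatically from the product model."""
    return [(c.component_id, a.name, a.duration_days)
            for c in components for a in c.activities]

wall = DesignComponent("W-01", "cad://model/wall-01",
                       [Activity("form work", 2), Activity("concrete pour", 1)])
print(generate_schedule([wall]))
```

Keeping the schedule derived from, rather than duplicated alongside, the product model is what avoids the application-specific data dependency the abstract criticizes.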


An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • A KTX rolling stock is a system consisting of numerous machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a failure, the maintainer's knowledge and experience determine the time taken and the quality of the work to solve the problem, and hence the resulting availability of the vehicle. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose and act quickly by applying personal know-how. Because this knowledge exists in tacit form, it is difficult to pass on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on main lines, and on systems that extract textual meaning to search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for newly occurring failures, using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed from rolling stock failure data collected between 2015 and 2017, and an integrated dictionary covering essential terminology and failure codes was built separately from the case base to reflect the specialized vocabulary of the railway rolling stock sector. Given a new failure, the deployed case base is searched, the three most similar past failure cases are retrieved, and the actions actually taken in those cases are proposed as a diagnostic guide.
To overcome the limitations of keyword-matching case retrieval in earlier case-based expert system studies on rolling stock failures, this study applied various dimensionality reduction techniques that account for the semantic relationships among failure descriptions when computing similarity, and verified their usefulness through experiments. Three algorithms were applied to extract failure characteristics and retrieve similar cases by cosine distance between vectors: Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec. Precision, recall, and F-measure were used to assess the quality of the proposed actions. To compare the techniques, these were evaluated alongside an algorithm that randomly extracts failure cases with identical failure codes and an algorithm that applies cosine similarity directly to word vectors; analysis of variance confirmed that the performance differences among the five algorithms were statistically significant. In addition, performance was measured across different numbers of reduced dimensions to derive settings suitable for practical application. The analysis showed that direct word-based cosine similarity outperformed NMF and LSA, and that the Doc2Vec-based algorithm performed best; among the dimensionality reduction settings, performance improved as the number of dimensions increased to an appropriate level.
Through this study, we confirmed effective methods of extracting data characteristics and converting unstructured data when applying case-based reasoning in the specialized KTX rolling stock domain, where most attributes are textual. Text mining is being studied for use in many areas, but studies applying it in environments with numerous specialized terms and limited data access, such as the one considered here, are still scarce. In this regard, it is significant that this study first presents an intelligent diagnostic system that complements keyword-based case search by applying text mining techniques to extract failure characteristics and retrieve similar cases. It is expected to provide implications as a basic study for developing diagnostic systems that can be used immediately on site.
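The retrieval step, vectorize failure texts, rank by cosine similarity, return the top cases with their actions, can be sketched as below. Plain term counts stand in here for the NMF/LSA/Doc2Vec features, and the failure texts and actions are invented examples:

```python
# Top-k similar-case retrieval by cosine similarity over term-count vectors.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, case_base, k=3):
    """Return the k past cases most similar to the new failure description."""
    qv = vectorize(query)
    ranked = sorted(case_base, key=lambda c: cosine(qv, vectorize(c["text"])), reverse=True)
    return ranked[:k]

cases = [
    {"action": "replace brake pad", "text": "brake pad wear noise during braking"},
    {"action": "reset HVAC unit", "text": "cabin air conditioning failure"},
    {"action": "inspect brake line", "text": "brake pressure drop on braking"},
]
for case in top_k("noise while braking brake", cases, k=2):
    print(case["action"])
```

Replacing `vectorize` with a learned low-dimensional embedding is exactly the dimensionality reduction step the study compares.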

Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services / v.16 no.1 / pp.47-55 / 2015
  • Recently, hospital information system environments have tended to incorporate various smart technologies, so smart devices such as smartphones and tablet PCs are increasingly used in medical information systems. These environments consist of diverse applications running on heterogeneous sensors, devices, systems, and networks. In such environments, applying security services through traditional access control methods causes problems. Most existing security systems use an access control list structure, permitting only the access defined in an access control matrix by attributes such as client name and service object method name. The major problem with this static approach is that it cannot quickly adapt to changed situations. Hence, new security mechanisms are needed that are more flexible and can be easily adapted to environments with very different security requirements; research is also needed to address changes in a patient's medical treatment context. In this paper, we suggest a dynamic access approach for medical information systems in smart mobile environments, focusing on how medical information is accessed under dynamic access control methods within existing hospital information system environments. The physical environment consists of mobile X-ray imaging devices, dedicated mobile and general smart devices, PACS, an EMR server, and an authorization server. The software environment for synchronization and monitoring services of the mobile X-ray imaging equipment was developed on the .NET Framework under Windows 7, and the dedicated smart device application, which implements the dynamic access services through JSP and the Java SDK, is based on the Android OS. Medical image information exchanged among PACS, the mobile X-ray imaging devices, and the dedicated smart devices follows the DICOM medical image standard, and EMR information follows HL7.
To provide dynamic access control services, we classify patient context according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature, and present event trace diagrams for two situations: general and emergency. We then designed dynamic access to medical care information through an authentication method whose authentication information comprises ID/password, role, position, working hours, and emergency certification codes for emergency patients. In general situations, access to medical information is granted according to the values of this authentication information; in an emergency, access is granted by an emergency code alone, without the usual authentication information. We also constructed an integrated medical information database schema consisting of medical information, patient, medical staff, and medical image information according to medical information standards. Finally, we demonstrate the usefulness of the dynamic access application service on smart devices through execution results of the proposed system under general and emergency patient contexts. In particular, the proposed system provides effective medical information services with smart devices in emergency situations through dynamic access control, and we expect it to be useful for u-hospital information systems and services.
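The dynamic access rule just described, an emergency detected from vital signs switches authentication from credential checks to an emergency certification code, can be sketched as follows. The thresholds, field names, and the `"EMG-OK"` code are illustrative assumptions:

```python
# Context-aware access decision: emergency path bypasses the general
# credential/role/working-hours policy. All values are illustrative.
def is_emergency(vitals):
    return (vitals.get("spo2", 100) < 90 or vitals.get("heart_rate", 70) > 140
            or vitals.get("body_temp", 36.5) > 39.5)

def grant_access(request, vitals):
    if is_emergency(vitals):
        # Emergency path: an emergency certification code alone suffices.
        return request.get("emergency_code") == "EMG-OK"
    # General path: credentials plus role and working-hours policy.
    return bool(request.get("password_ok") and request.get("role") in {"doctor", "nurse"}
                and request.get("on_duty", False))

print(grant_access({"emergency_code": "EMG-OK"}, {"spo2": 85}))            # emergency bypass
print(grant_access({"password_ok": True, "role": "doctor", "on_duty": True},
                   {"spo2": 98}))                                          # general path
```

The point of the dynamic approach is that the same request can be granted or denied depending on the patient context, which a static access control list cannot express.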

Evaluation of Factors Used in AAPM TG-43 Formalism Using Segmented Sources Integration Method and Monte Carlo Simulation: Implementation of microSelectron HDR Ir-192 Source (미소선원 적분법과 몬테칼로 방법을 이용한 AAPM TG-43 선량계산 인자 평가: microSelectron HDR Ir-192 선원에 대한 적용)

  • Ahn, Woo-Sang;Jang, Won-Woo;Park, Sung-Ho;Jung, Sang-Hoon;Cho, Woon-Kap;Kim, Young-Seok;Ahn, Seung-Do
    • Progress in Medical Physics / v.22 no.4 / pp.190-197 / 2011
  • Currently, the dose distribution calculation used by commercial treatment planning systems (TPSs) for high-dose-rate (HDR) brachytherapy is derived from the point and line source approximation methods recommended by AAPM Task Group 43 (TG-43). However, Monte Carlo (MC) simulation is required to assess the accuracy of dose calculations around a three-dimensional Ir-192 source. In this study, the geometry factor was calculated using a segmented sources integration method, dividing the microSelectron HDR Ir-192 source into smaller parts. The Monte Carlo code MCNPX 2.5.0 was used to calculate the dose rate $\dot{D}(r,\theta)$ at a point ($r,\theta$) away from an HDR Ir-192 source in a spherical water phantom of 30 cm diameter, and the anisotropy function and radial dose function were calculated from the results. The geometry factor obtained was compared with that calculated from the line source approximation, and the anisotropy and radial dose functions were compared with those derived from the MCPT results of Williamson. The geometry factors from the segmented sources integration method and the line source approximation agreed within 0.2% for $r{\geq}0.5$ cm and within 1.33% at r=0.1 cm. The relative root mean square error (R-RMSE) between the anisotropy functions obtained in this study and by Williamson was 2.33% at r=0.25 cm and within 1% for r>0.5 cm. The R-RMSE of the radial dose function was 0.46% over radial distances from 0.1 to 14.0 cm. The geometry factors from the segmented sources integration method and the line source approximation were thus in good agreement for $r{\geq}0.1$ cm; nevertheless, the segmented sources integration method appears preferable, since it models the three-dimensional Ir-192 source and therefore provides a more realistic geometry factor.
The anisotropy function and radial dose function estimated with MCNPX in this study and with MCPT by Williamson agree within the uncertainty of the Monte Carlo codes, except at the radial distance r=0.25 cm. The Monte Carlo code used in this study is therefore expected to be applicable to other brachytherapy sources.
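The segmented sources integration idea can be sketched numerically: split the active length L into N point segments, average the inverse-square distances, and compare against the TG-43 line-source expression $G_L(r,\theta)=\beta/(Lr\sin\theta)$, where $\beta$ is the angle subtended by the line at the field point. The active length and evaluation points below are illustrative, not the paper's exact source geometry:

```python
# Segmented-sources geometry factor vs. the TG-43 line-source approximation.
# Source lies along z in [-L/2, L/2]; field point at (r, theta) from the axis.
import math

def g_segmented(r, theta, L=0.36, n=1000):
    """Mean inverse-square distance over n equal segments of the line source."""
    x, z = r * math.sin(theta), r * math.cos(theta)
    total = 0.0
    for i in range(n):
        zi = -L / 2 + (i + 0.5) * L / n          # segment midpoint
        total += 1.0 / (x ** 2 + (z - zi) ** 2)
    return total / n

def g_line(r, theta, L=0.36):
    """TG-43 line-source geometry factor beta / (L * r * sin(theta))."""
    x, z = r * math.sin(theta), r * math.cos(theta)
    beta = math.atan2(z + L / 2, x) - math.atan2(z - L / 2, x)
    return beta / (L * r * math.sin(theta))

r, theta = 0.5, math.pi / 2
print(g_segmented(r, theta), g_line(r, theta))  # agree closely at r >= 0.5 cm
```

For large n the segment sum converges to the line-source integral, which is why the two factors agree so closely except very near the source, where the true three-dimensional source geometry matters.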

The Effects of Formative Assessment Using Mobile Applications on Interest and Self-Directedness in Science Instruction (모바일을 활용한 형성평가가 과학수업의 흥미성과 자기주도성에 미치는 영향)

  • Kwak, Hyoungsuk;Shin, Youngjoon
    • Journal of The Korean Association For Science Education / v.34 no.3 / pp.285-294 / 2014
  • This study investigates the effects of formative assessment using mobile applications on interest and self-directedness in science instruction. The subjects were two 6th-grade classes from H elementary school in Incheon, with the experimental and comparative groups each composed of 21 students. The experimental group was taught with mobile devices, while the comparative group was taught by methods consistent with current teaching standards. For the study, the formative assessment items for the mobile device group were created with Google Drive Forms, encoded as QR codes, and stored for later use in the teaching and learning process; the teacher provided students with feedback based on their answers, while students in the comparative group solved the same formative assessments on paper. As a result, the teacher of the mobile device group was able to go through twenty-nine formative assessment questions during the teaching and learning process, confirm correct answers five times, and provide feedback with additional explanation twenty-five times. On the interest inquiry, the mobile device group scored 4.64 points while the comparative group scored just 1.99 points (p<0.01); fifteen students said in interviews that the major reason they scored high was that it was fun to study with mobile devices. As for self-directedness during the teaching and learning process, the mobile device group answered positively while the comparative group scored relatively low (p<0.01).

Application of Response Surface Methodology for Optimization of Nature Dye Extraction Process (천연색소 추출공정 최적화를 위한 반응표면분석법의 적용)

  • Lee, Seung Bum;Lee, Won Jae;Hong, In Kwon
    • Applied Chemistry for Engineering / v.29 no.3 / pp.283-288 / 2018
  • As the use of environmentally friendly, harmless natural pigments grows, various methods for extracting natural pigments have been studied. In this work, natural color was extracted from parsley, a vegetable ingredient containing natural dyes. The target color was set to the green color code #50932C (L = 55.0, a = -40.0, b = 46.0), the pH and temperature of the extraction were taken as variables, and the color coordinates and quantitative intensities of the extracted dyes were analyzed. For the colorimetric response predicted by response surface methodology, a color coordinate analysis was conducted under the optimal conditions of pH 8.0 and an extraction temperature of $60.9^{\circ}C$. Under these conditions, the predicted values of L, a, and b were 55.0, -36.3, and 36.8, respectively, while the actual experimental values were 69.0, -35.9, and 31.4, giving a theoretical accuracy of 73.0% and an actual error rate of 13.8%. The theoretical optimum for the color difference (${\Delta}E$) was at pH 9.2 and an extraction temperature of $55.2^{\circ}C$; under these conditions the predicted ${\Delta}E$ was 12.4 while the experimental value was 13.0, corresponding to a theoretical accuracy of 97.5% and an actual error rate of 4.5%. However, the combination of color coordinates obtained did not represent the desired target color, but only a color close to the target in the sense of an arithmetic mean. Therefore, when response surface methodology is applied to the natural dye extraction process, using the color coordinates as response values can be a better method for optimizing the dye extraction process.
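The color-difference measure used above is the Euclidean distance in (L, a, b) space (CIE76 ${\Delta}E$). A worked example with the target code and the measured coordinates quoted in the abstract (note the abstract's reported ${\Delta}E$ values of 12.4 and 13.0 refer to a different condition, pH 9.2 and $55.2^{\circ}C$, so this only illustrates the formula):

```python
# CIE76 color difference: Delta E = sqrt(dL^2 + da^2 + db^2).
import math

def delta_e(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

target = (55.0, -40.0, 46.0)    # #50932C from the abstract
measured = (69.0, -35.9, 31.4)  # experimental coordinates at the pH 8.0 optimum
print(round(delta_e(target, measured), 1))  # -> 20.6
```

The optimization then amounts to finding the pH and temperature that minimize this distance to the target.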

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.241-254 / 2011
  • Financial time-series forecasting is one of the most important issues in finance because it is essential for the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have been popular in this research area because they do not require huge amounts of training data and have a low risk of overfitting. However, a user must determine several design factors heuristically in order to use an SVM; the major ones are the selection of an appropriate kernel function with its parameters and proper feature subset selection. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection chooses a proper instance subset from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously; we call the model ISVM (SVM with Instance selection). Experiments on stock market data were implemented using ISVM, in which the GA searches for optimal or near-optimal kernel parameter values and relevant instances for the SVM. The GA setting therefore needs two sets of parameters in the chromosomes: the codes for the kernel parameters and those for instance selection.
For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; as the stopping condition, 50 generations are permitted. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), with 2,218 trading days in total. The whole data set is separated into training, test, and hold-out subsets of 1,056, 581, and 581 samples, respectively. This study compares ISVM with several comparative models, including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM). In particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data, while using only 556 of the 1,056 original training instances. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% level.
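The GA-based instance selection can be sketched as below. To keep the sketch self-contained, a 1-NN classifier stands in for the SVM, the kernel-parameter genes are omitted, and the data, population size, and generation count are toy values rather than the paper's settings; the chromosome is a bitmask over training instances and the fitness is validation accuracy on the selected subset:

```python
# Toy GA for instance selection: evolve a bitmask over training instances,
# scoring each mask by the validation accuracy of a classifier trained on
# the selected subset (1-NN here stands in for the SVM).
import random

random.seed(0)
train = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((5.0, 5.0), 1),
         ((4.8, 5.1), 1), ((0.2, 4.9), 0)]        # last point: a noisy instance
valid = [((0.1, 0.1), 0), ((5.1, 4.9), 1), ((0.3, 0.2), 0), ((4.7, 5.2), 1)]

def predict(subset, x):
    nearest = min(subset, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

def fitness(mask):
    subset = [t for t, keep in zip(train, mask) if keep]
    if not subset:
        return 0.0
    return sum(predict(subset, x) == y for x, y in valid) / len(valid)

def evolve(pop_size=20, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in train] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(train))                   # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in a[:cut] + b[cut:]]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In the paper's ISVM, the chromosome additionally encodes the kernel parameters, so subset and parameters are optimized jointly against the same fitness.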