Title/Summary/Keyword: Combining Technique


Comparison of digitalized fabrication method for interim removable partial denture: case reports (두 가지 프린팅 방식으로 제작한 임시 가철성 의치의 비교: 증례 보고)

  • Yoon-Jeong Shin;Cheong-Hee Lee;Du-Hyeong Lee
    • The Journal of Korean Academy of Prosthodontics, v.61 no.4, pp.379-385, 2023
  • With the recent development of digital dentistry, fully digital methods for fabricating dentures using intraoral scans and computer-aided design/computer-aided manufacturing (CAD-CAM) are becoming popular. Digital methods have the advantage of simplifying the fabrication process in the clinic and laboratory while keeping the supporting data in digital form. This case report presents a fully digital fabrication method for interim removable dentures in a patient with anterior tooth loss for whom implant placement was impossible or delayed. Interim removable dentures were fabricated using two methods: in one, the tooth and base parts were printed separately and then combined; in the other, the whole denture was printed at once and the base part was colored afterward. The dentures were then delivered, and adaptation was evaluated using the triple scan technique: the extraction site was scanned intraorally (first scan), and the interim removable denture was scanned both intraorally (second scan) and, after removal, extraorally (third scan). With both methods, denture adaptation was favorable. We report these cases because both the patient and the operator were satisfied with the simplified, fully digital chairside process.
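
To show how a triple-scan comparison can be quantified, the sketch below measures surface deviation between two already-registered point clouds using nearest-neighbor distances. This is only a minimal illustration with synthetic points, not the authors' clinical software; the scans, gap magnitude, and registration step are invented placeholders.

```python
# Minimal sketch: quantify denture adaptation as nearest-neighbor surface
# deviation between two aligned 3D scans. All data are synthetic; a real
# workflow would first register the intraoral and extraoral scans.
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(reference_pts, test_pts):
    """Distance from each test point to the closest reference point (mm)."""
    tree = cKDTree(reference_pts)
    distances, _ = tree.query(test_pts)
    return distances

# Hypothetical point clouds standing in for the intraoral (first) scan of
# the tissue surface and the extraoral (third) scan of the denture base.
rng = np.random.default_rng(0)
tissue_scan = rng.uniform(0, 30, size=(5000, 3))             # mm coordinates
denture_scan = tissue_scan + rng.normal(0, 0.1, (5000, 3))   # ~0.1 mm gap

dev = surface_deviation(tissue_scan, denture_scan)
print(f"mean gap {dev.mean():.3f} mm, 95th percentile {np.percentile(dev, 95):.3f} mm")
```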

Feasibility of Three-Dimensional Balanced Steady-State Free Precession Cine Magnetic Resonance Imaging Combined with an Image Denoising Technique to Evaluate Cardiac Function in Children with Repaired Tetralogy of Fallot

  • YaFeng Peng;XinYu Su;LiWei Hu;Qian Wang;RongZhen Ouyang;AiMin Sun;Chen Guo;XiaoFen Yao;Yong Zhang;LiJia Wang;YuMin Zhong
    • Korean Journal of Radiology, v.22 no.9, pp.1525-1536, 2021
  • Objective: To investigate the feasibility of cine three-dimensional (3D) balanced steady-state free precession (b-SSFP) imaging combined with a non-local means (NLM) algorithm for image denoising in evaluating cardiac function in children with repaired tetralogy of Fallot (rTOF). Materials and Methods: Thirty-five patients with rTOF (mean age, 12 years; range, 7-18 years) were enrolled to undergo cardiac cine image acquisition, including two-dimensional (2D) b-SSFP, 3D b-SSFP, and 3D b-SSFP combined with NLM. End-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), and ejection fraction (EF) of the two ventricles were measured and indexed to body surface area. Acquisition time and image quality were recorded and compared among the three imaging sequences. Results: 3D b-SSFP with denoising vs. 2D b-SSFP had high correlation coefficients for EDV, ESV, SV, and EF of the left (0.959-0.991; p < 0.001) as well as right (0.755-0.965; p < 0.001) ventricle. The image acquisition time ± standard deviation (SD) was 25.1 ± 2.4 seconds for 3D b-SSFP compared with 277.6 ± 0.7 seconds for 2D b-SSFP, a significantly shorter time for the 3D sequence (p < 0.001). The image quality score was better for 3D b-SSFP combined with denoising than for 3D b-SSFP alone (mean ± SD, 3.8 ± 0.6 vs. 3.5 ± 0.6; p = 0.005). Signal-to-noise ratios for blood and myocardium, as well as contrast between blood and myocardium, were higher for 3D b-SSFP combined with denoising than for 3D b-SSFP (p < 0.05 for all but septal myocardium). Conclusion: The 3D b-SSFP sequence can significantly reduce acquisition time compared with the 2D b-SSFP sequence for cine imaging in the evaluation of ventricular function in children with rTOF, and its quality can be further improved by combining it with an NLM denoising method.
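
As background, non-local means denoising averages pixels whose surrounding patches look similar, rather than merely nearby pixels. The sketch below applies an off-the-shelf NLM filter (scikit-image) to a synthetic noisy frame; it illustrates the algorithm class only, not the study's implementation or parameter settings.

```python
# Sketch of non-local means (NLM) denoising on a synthetic noisy 2D frame,
# using scikit-image; parameters are illustrative, not the study's settings.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(1)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                      # stand-in for a cardiac structure
noisy = clean + rng.normal(0, 0.15, clean.shape)

sigma = float(np.mean(estimate_sigma(noisy)))  # estimate the noise level
denoised = denoise_nl_means(
    noisy,
    h=1.15 * sigma,        # filtering strength tied to the noise estimate
    sigma=sigma,
    patch_size=5,          # size of patches compared for similarity
    patch_distance=6,      # search window around each pixel
    fast_mode=True,
)
# Averaging restricted to similar patches lowers noise while keeping edges.
print("residual std before/after:",
      float((noisy - clean).std()), float((denoised - clean).std()))
```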

Development and Validation of 18F-FDG PET/CT-Based Multivariable Clinical Prediction Models for the Identification of Malignancy-Associated Hemophagocytic Lymphohistiocytosis

  • Xu Yang;Xia Lu;Jun Liu;Ying Kan;Wei Wang;Shuxin Zhang;Lei Liu;Jixia Li;Jigang Yang
    • Korean Journal of Radiology, v.23 no.4, pp.466-478, 2022
  • Objective: 18F-fluorodeoxyglucose (FDG) PET/CT is often used for detecting malignancy in patients with newly diagnosed hemophagocytic lymphohistiocytosis (HLH), with acceptable sensitivity but relatively low specificity. The aim of this study was to improve the diagnostic ability of 18F-FDG PET/CT in identifying malignancy in patients with HLH by combining 18F-FDG PET/CT and clinical parameters. Materials and Methods: Ninety-seven patients (age ≥ 14 years) with secondary HLH were retrospectively reviewed and divided into the derivation (n = 71) and validation (n = 26) cohorts according to admission time. In the derivation cohort, 22 patients had malignancy-associated HLH (M-HLH) and 49 patients had non-malignancy-associated HLH (NM-HLH). Data on pretreatment 18F-FDG PET/CT and laboratory results were collected. The variables were analyzed using the Mann-Whitney U test or Pearson's chi-square test, and a nomogram for predicting M-HLH was constructed using multivariable binary logistic regression. The predictors were also ranked using decision-tree analysis. The nomogram and decision tree were validated in the validation cohort (10 patients with M-HLH and 16 patients with NM-HLH). Results: The ratio of the maximal standardized uptake value (SUVmax) of the lymph nodes to that of the mediastinum, the ratio of the SUVmax of bone lesions or bone marrow to that of the mediastinum, and age were selected for constructing the model. The nomogram showed good performance in predicting M-HLH in the validation cohort, with an area under the receiver operating characteristic curve of 0.875 (95% confidence interval, 0.686-0.971). At an appropriate cutoff value, the sensitivity and specificity for identifying M-HLH were 90% (9/10) and 68.8% (11/16), respectively. The decision tree integrating the same variables showed 70% (7/10) sensitivity and 93.8% (15/16) specificity for identifying M-HLH. In comparison, visual analysis of 18F-FDG PET/CT images demonstrated 100% (10/10) sensitivity and 12.5% (2/16) specificity. Conclusion: 18F-FDG PET/CT may be a practical technique for identifying M-HLH. The model constructed using 18F-FDG PET/CT features and age was able to detect malignancy with better accuracy than visual analysis of 18F-FDG PET/CT images.
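
To make the modeling workflow concrete, here is a minimal sketch of the model family the abstract describes: multivariable logistic regression plus a decision tree, evaluated by ROC AUC. The three predictors match those named in the abstract, but the data below are synthetic placeholders, not the study's patients, and the coefficients are invented.

```python
# Sketch of the described workflow: logistic regression and a decision tree
# on three predictors (lymph-node/mediastinum SUVmax ratio, bone/mediastinum
# SUVmax ratio, age). All data and effect sizes are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 97
X = np.column_stack([
    rng.lognormal(0.5, 0.4, n),   # LN-to-mediastinum SUVmax ratio
    rng.lognormal(0.3, 0.4, n),   # bone-to-mediastinum SUVmax ratio
    rng.uniform(14, 70, n),       # age in years
])
# Hypothetical label: malignancy more likely with higher ratios and age.
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.02 * X[:, 2]
     + rng.normal(0, 0.5, n)) > 3.2

train, test = slice(0, 71), slice(71, 97)   # mimic derivation/validation split
logit = LogisticRegression().fit(X[train], y[train])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[train], y[train])

print("logit AUC:", roc_auc_score(y[test], logit.predict_proba(X[test])[:, 1]))
print("tree  AUC:", roc_auc_score(y[test], tree.predict_proba(X[test])[:, 1]))
```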

An Exploratory Study of e-Learning Satisfaction: A Mixed Methods of Text Mining and Interview Approaches (이러닝 만족도 증진을 위한 탐색적 연구: 텍스트 마이닝과 인터뷰 혼합방법론)

  • Sun-Gyu Lee;Soobin Choi;Hee-Woong Kim
    • Information Systems Review, v.21 no.1, pp.39-59, 2019
  • E-learning has improved educational effectiveness by making it possible to learn anytime and anywhere, moving beyond traditional one-way classroom instruction. As the use of e-learning systems grows with the increasing popularity of e-learning, measuring e-learning satisfaction has become important. In this study, we used a mixed research method, performing qualitative and quantitative research together, to identify the satisfaction factors of e-learning. For the quantitative part, we collected reviews from Udemy.com by text mining, classified lectures into high- and low-rated groups, and applied topic modeling to derive factors from the reviews. For the qualitative part, we conducted in-depth one-on-one interviews with e-learning learners. By combining these results, we derived factors of e-learning satisfaction and dissatisfaction and, based on them, suggested ways to improve e-learning satisfaction. Whereas past research relied mainly on surveys, this study collects actual data by text mining. The academic significance of this study is that the topic modeling results are combined with the factors of the information systems success model.
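
For readers unfamiliar with the quantitative step, the sketch below derives candidate satisfaction factors from course reviews via topic modeling (LDA). The reviews here are toy strings, not the Udemy data collected in the study, and the topic count is arbitrary.

```python
# Sketch of the quantitative step: derive satisfaction factors from course
# reviews via LDA topic modeling. Reviews are toy examples, not study data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

high_rated_reviews = [
    "clear explanations and great practical examples",
    "instructor responds quickly and the pace is great",
    "well structured curriculum with practical projects",
    "clear audio and concise lectures",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(high_rated_reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words per topic; these word groups are then interpreted as
# satisfaction (or, on low-rated lectures, dissatisfaction) factors.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```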

Comparison of Seawater Exchange Rate of Small Scale Inner Bays within Jinhae Bay (수치모델을 이용한 진해만 내 소규모 내만의 해수교환율 비교)

  • Kim, Nam Su;Kang, Hoon;Kwon, Min-Sun;Jang, Hyo-Sang;Kim, Jong Gu
    • Journal of the Korean Society for Marine Environment & Energy, v.19 no.1, pp.74-85, 2016
  • To assess the seawater exchange rates of Danghangpo Bay, Dangdong Bay, Wonmun Bay, Gohyunsung Bay, and Masan Bay, small-scale inner bays within Jinhae Bay, an EFDC model was used to reproduce the seawater flow of the entire Jinhae Bay, and Lagrangian (particle tracking) and Eulerian (dye diffusion) model techniques were used to calculate the seawater exchange rate for each bay. The exchange rate obtained with the particle tracking method was highest in Danghangpo Bay, at 60.84%, and lowest in Masan Bay, at 30.50%. The rate calculated with the dye diffusion method was highest in Danghangpo Bay, at 45.40%, and lowest in Masan Bay, at 34.65%. The exchange rate was likely highest in Danghangpo Bay because of the high flow velocity caused by the bay's narrow entrance; in the particle tracking method, the morphology of the bay affected the results, since once particles escape it is difficult for them to return. Moreover, in the Lagrangian method, particles that do flow back in on the flood current after escaping on the ebb current return intact, whereas dye that flows back in after escaping the bay has been diluted by open-sea water. Thus, the exchange rate calculated with the dye diffusion method was generally higher, and even when the exchange rates from the two methods were compared under the same conditions, the results differed completely. When assessing the seawater exchange rate, therefore, more reasonable results can be obtained either by combining the two methods or by selecting a modeling technique after sufficient consideration of the purpose of the study and the characteristics of the coastal area. Meanwhile, a comparison of the degree of closure with the exchange rates calculated by the Lagrangian and Eulerian methods showed that the exchange rate was higher for a higher degree of closure, regardless of the numerical technique. The degree of closure therefore appears inappropriate as an index of how enclosed a bay is, and some modification and supplementation would be necessary in this regard.
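
The contrast between the two exchange-rate definitions can be illustrated with a toy one-dimensional model (this is not the EFDC setup; domain, currents, and coefficients are invented): the Lagrangian rate counts the fraction of released particles that end up outside the bay, while the Eulerian rate measures the fractional loss of initial dye mass from the bay.

```python
# Toy illustration (not the EFDC model) of the two exchange-rate definitions:
# Lagrangian -> fraction of released particles that end up outside the bay;
# Eulerian  -> fractional loss of the initial dye mass inside the bay.
import numpy as np

rng = np.random.default_rng(7)
bay_mouth = 0.0                     # x > 0 is the open sea, x < 0 the bay

# Lagrangian: particles advected by a tidal current plus random dispersion.
x = rng.uniform(-1.0, -0.1, 10_000)                # particles released in bay
for step in range(200):
    tide = 0.02 * np.sin(2 * np.pi * step / 50)    # ebb/flood oscillation
    x += tide + rng.normal(0, 0.01, x.size)        # advection + mixing
particle_rate = np.mean(x > bay_mouth)             # fraction flushed out

# Eulerian: dye on a 1D grid; what re-enters has been diluted by open-sea
# water, unlike particles, which return intact.
c = np.where(np.linspace(-1, 1, 200) < bay_mouth, 1.0, 0.0)  # dye in the bay
for step in range(2000):
    c[1:-1] += 0.25 * (c[:-2] - 2 * c[1:-1] + c[2:])         # diffusion
    c[-1] = 0.0                                 # open boundary stays diluted
dye_rate = 1.0 - c[:100].sum() / 100.0          # fractional dye loss from bay

print(f"particle-tracking exchange rate: {particle_rate:.2%}")
print(f"dye-diffusion exchange rate:     {dye_rate:.2%}")
```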

Prediction of field failure rate using data mining in the Automotive semiconductor (데이터 마이닝 기법을 이용한 차량용 반도체의 불량률 예측 연구)

  • Yun, Gyungsik;Jung, Hee-Won;Park, Seungbum
    • Journal of Technology Innovation, v.26 no.3, pp.37-68, 2018
  • Since the 20th century, automobiles, the most common means of transportation, have been evolving as the use of electronic control devices and automotive semiconductors increases dramatically. Automotive semiconductors are key components of automotive electronic control devices, providing consumers with stability, fuel efficiency, and operational reliability. For example, automotive semiconductors are used in engine control, electric motor management, transmission control units, hybrid vehicle control, start/stop systems, electronic motor control, automotive radar and LIDAR, smart headlamps, head-up displays, and lane keeping systems. Semiconductors are thus applied to almost all the electronic control devices that make up an automobile, creating more value than a simple combination of mechanical devices. Because automotive semiconductors must handle high data rates, microprocessor units are increasingly used instead of microcontroller units; for example, ARM-based processors are used in telematics, audio/video multimedia, and navigation. Automotive semiconductors require high reliability, durability, and long-term supply, considering that automobiles are used for more than 10 years, and their reliability is directly linked to the safety of automobiles. The semiconductor industry uses the JEDEC and AEC standards to evaluate the reliability of automotive semiconductors, and product life expectancy is estimated at the early stages of development and mass production using the reliability test methods and results standardized in the automobile industry. However, these methods are limited in predicting the failure rate caused by parameters such as customers' varied conditions and durations of use. To overcome these limitations, much research has been conducted in academia and industry; data mining techniques in particular have been applied in many semiconductor fields, but their application to automotive semiconductors has not yet been studied. Accordingly, this study investigates the relationships in the data generated during the semiconductor assembly and package test processes using data mining, and applies a data mining technique suited to predicting the potential failure rate from customer failure data.
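
The abstract does not name the specific data mining technique used, so the following is only a hypothetical sketch of the general approach it describes: a classifier trained on assembly and package-test features to predict field failures. All feature names and data are invented placeholders.

```python
# Hypothetical sketch only (the study's technique is not named): a random-
# forest classifier relating assembly/package-test features to observed
# field failures. All features and data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.normal(1.20, 0.05, n),   # e.g. bond-wire pull strength (hypothetical)
    rng.normal(25.0, 3.0, n),    # e.g. leakage current at package test
    rng.normal(0.0, 1.0, n),     # e.g. parametric drift between test insertions
])
# Hypothetical ground truth: drift and leakage raise field-failure odds.
p_fail = 1 / (1 + np.exp(-(0.8 * X[:, 2] + 0.1 * (X[:, 1] - 25) - 3.0)))
y = rng.random(n) < p_fail

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV ROC AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```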

A Study on derivation of drought severity-duration-frequency curve through a non-stationary frequency analysis (비정상성 가뭄빈도 해석 기법에 따른 가뭄 심도-지속기간-재현기간 곡선 유도에 관한 연구)

  • Jeong, Minsu;Park, Seo-Yeon;Jang, Ho-Won;Lee, Joo-Heon
    • Journal of Korea Water Resources Association, v.53 no.2, pp.107-119, 2020
  • This study analyzed past drought characteristics based on observed rainfall data and performed a long-term outlook of future extreme droughts using the Representative Concentration Pathway 8.5 (RCP 8.5) climate change scenario. The Standardized Precipitation Index (SPI), a meteorological drought index, was applied with durations of 1, 3, 6, 9, and 12 months for quantitative drought analysis. A single long-term time series was constructed by combining daily rainfall observations with the RCP scenario, and the constructed series was used as the SPI input for each duration. For the analysis of meteorological drought in Korea, 12 rainfall stations with relatively long observation records (since 1954) were selected, and 10 general circulation models (GCMs) were applied at the same locations. To analyze drought characteristics under climate change, trend analysis and clustering were performed. For the non-stationary frequency analysis, we adopted DEMC, a sampling technique that combines differential evolution (DE) with Markov chain Monte Carlo (MCMC) in a Bayesian framework. The non-stationary drought frequency analysis was used to derive Severity-Duration-Frequency (SDF) curves for the 12 locations. A quantitative outlook of future droughts was carried out by deriving SDF curves from long-term hydrologic data under the assumption of non-stationarity and by quantitatively identifying potential drought risks. Cluster analysis of the spatial characteristics indicated a high future drought risk in Jeonju, Gwangju, Yeosu, Mokpo, and Chupyeongryeong, which correspond to Zones 1-2, 2, and 3-2, but not in Jeju. These results could be efficiently utilized in future drought management policies.
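
For context, the standard SPI computation aggregates precipitation over the chosen duration, fits a gamma distribution to the aggregated totals, and maps each cumulative probability to a standard normal deviate. The sketch below follows that standard recipe on synthetic data; it does not reproduce the study's non-stationary DEMC analysis.

```python
# Sketch of the standard SPI computation: rolling precipitation totals are
# fit with a gamma distribution and mapped to standard normal deviates.
# Data are synthetic; this is not the study's non-stationary analysis.
import numpy as np
from scipy import stats

def spi(monthly_precip, duration):
    """SPI series for a given duration (months)."""
    agg = np.convolve(monthly_precip, np.ones(duration), mode="valid")
    # Fit a two-parameter gamma distribution (location fixed at zero).
    shape, _, scale = stats.gamma.fit(agg[agg > 0], floc=0)
    # Probability of zero totals handled separately (mixed distribution).
    q = np.mean(agg == 0)
    cdf = q + (1 - q) * stats.gamma.cdf(agg, shape, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))  # equiprobability map

rng = np.random.default_rng(11)
monthly = rng.gamma(2.0, 50.0, 12 * 60)   # 60 years of synthetic monthly rain
for d in (1, 3, 6, 9, 12):                # durations used in the study
    s = spi(monthly, d)
    print(f"SPI-{d:2d}: share of months in drought (SPI < -1) = {np.mean(s < -1):.2f}")
```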

A Comparative Study on the Transition of Purlin Coupling Method of Korean and Chinese Ancient Wooden Constructions (한중 목조건축 도리 결합방식 변천(變遷)에 관한 비교연구)

  • Cha, Ju-hwan
    • Korean Journal of Heritage: History & Science, v.47 no.4, pp.22-47, 2014
  • This study analyzed the basic principles of East Asian wooden structural systems, focusing on the positions at which girders, crossbeams, and purlins are combined in the internal structures of ancient Korean and Chinese architecture. The ancient wooden structures of Korea and China differ little from each other in their basic structural principles of combining pillars, purlins, and crossbeams; the detailed techniques, however, differ by period, region, and country. In ancient Chinese wooden structures, the combination of purlin and crossbeam and the use of slanted support timbers (Korean: so-seul timber; Chinese: chashou 叉手, tuojiao 托脚) show differences in the position of the fulcrum according to period, and the detailed approaches also vary across dynasties. Before the 15th century, the purlin and crossbeam were coupled directly to each other, but from the 15th century onward a technique developed in which the girder and crossbeam were combined, and the cross-sectional area of the crossbeam was increased dramatically to prevent buckling. In the tuojiao of the Chinese Tang-Wudai period (A.D. 618-979), the upper girder and the tuojiao were not directly coupled, which can be seen as preserving the safety of the inner structural members and preventing buckling of the purlin. Why, then, did the so-seul timber (tuojiao) not come into fashion in the wooden structures of the Korean Goryeo dynasty (A.D. 918-1391)? The technique of placing a backing member under the purlin to prevent buckling was the popular trend, presumably preferred as a stable way to maintain the structure. It appears that the detailed mechanisms for preventing purlin buckling in Korean and Chinese wooden structures since the 15th century share a universality that transcends regional and national divisions. In the middle and late Joseon dynasty, Deotgeolyi techniques and fleeting beams reduced purlin buckling by reducing the load transmitted from the purlin and crossbeam to the roof portion of the building; this craftsmanship of Joseon dynasty builders can be regarded as another technique for preventing purlin buckling. This study of the purlin-beam structure and so-seul timber in ancient Korean and Chinese architecture is expected to provide basic research for restoring and designing historic wooden architecture.

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.2, pp.125-148, 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and it attracts many analysts because it exists in very large quantities and is relatively easy to collect compared with other unstructured and structured data. Among the various applications of text analysis, active research topics include document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which condenses the main contents of one or several documents. In particular, text summarization is actively applied in business, for example in news summary and privacy policy summary services. Academic research has followed two approaches: the extraction approach, which selectively provides the main elements of the document, and the abstraction approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as automatic summarization itself. Most existing studies on summarization quality evaluation manually summarized documents, used these as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text using various techniques, and quality is measured by comparison with the reference document, which serves as an ideal summary. Reference documents are most commonly produced by manual summarization, in which a person creates an ideal summary by hand. Because this requires human intervention, writing the summaries takes considerable time and cost, and the evaluation result may differ depending on who writes them. To overcome these limitations, attempts have been made to measure summary quality without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary: the more the frequent terms of the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means compressing a large amount of content while minimizing omissions, a "good summary" judged only by term frequency does not always mean a good summary in this essential sense. To overcome the limitations of these previous studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the original content is missing from the summary.
We propose a method for automatic quality evaluation of text summarization based on these concepts of succinctness and completeness. To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor hotel reviews, the reviews were summarized for each hotel, and the quality of the summaries was evaluated according to the proposed methodology. We also provide a way to integrate completeness and succinctness, which stand in a trade-off relationship, into an F-score, and propose a method for performing optimal summarization by varying the sentence-similarity threshold.
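
One way to operationalize these two concepts (the paper defines its own measures; the similarity metric, threshold, and texts below are assumptions) is to score completeness as the fraction of source sentences covered by some summary sentence, score succinctness from the summary's internal redundancy, and combine the two with an F-score:

```python
# Illustrative operationalization (the paper defines its own measures):
# TF-IDF cosine similarity scores completeness (source coverage) and
# succinctness (low internal redundancy); an F-score combines the two.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = [
    "the room was clean and spacious",
    "staff were friendly and helpful at check in",
    "breakfast offered little variety",
    "the location is close to the beach",
]
summary = [
    "clean spacious rooms and friendly staff",
    "rooms were clean and the staff helpful",   # redundant on purpose
]

vec = TfidfVectorizer().fit(source + summary)
S, M = vec.transform(source), vec.transform(summary)
sim = cosine_similarity(S, M)                    # source x summary similarities

threshold = 0.2                                  # assumed similarity cutoff
completeness = np.mean(sim.max(axis=1) > threshold)  # covered source sentences

pair = cosine_similarity(M)                      # summary-internal similarity
dup = pair[np.triu_indices_from(pair, k=1)]      # pairwise, excluding diagonal
succinctness = 1.0 - dup.mean()                  # less duplication -> higher

f_score = 2 * completeness * succinctness / (completeness + succinctness)
print(f"completeness={completeness:.2f} succinctness={succinctness:.2f} F={f_score:.2f}")
```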

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.69-94, 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools, while the development of IT and the growing penetration of smart devices are producing vast amounts of data. Data analysis technology is accordingly becoming popular, and attempts to acquire insights through data analysis keep increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those demanding the analysis. However, growing interest in big data analysis has activated programming education and the development of many analysis programs, so the entry barriers are gradually lowering and the technology is spreading; as a result, big data analysis is increasingly expected to be performed by the demanders of the analysis themselves. Along with this, interest in various kinds of unstructured data, especially text, continues to increase. New web platforms and techniques are producing text data en masse and prompting active attempts to analyze it, and the results of text analysis are being utilized in various fields. Text mining embraces the various theories and techniques for text analysis; among its many techniques, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large number of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters; it is regarded as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This leads to long analysis times when topic modeling is applied to many documents, and to a scalability problem: processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed; it can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of the approach must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling methods. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
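
The divide-and-conquer idea can be sketched as follows: run LDA locally on each sub-cluster, run LDA on a reduced global set, and map each local topic to its most similar global topic by the similarity of topic-word distributions. This is a simplified illustration on a toy corpus; the paper's delegate-document selection and mapping procedure are more elaborate.

```python
# Sketch of divide-and-conquer topic modeling: local LDA per sub-cluster,
# LDA on a reduced global set (here, simply one sample document per theme),
# and local-to-global topic mapping by cosine similarity of the topic-word
# distributions. Toy corpus; delegate selection is simplified.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "stock market prices rise on earnings",
    "investors trade shares amid market rally",
    "team wins championship after final match",
    "coach praises players after the match victory",
    "new phone features faster chip and camera",
    "chipmaker unveils faster mobile processor",
]
local_sets = [corpus[:3], corpus[3:]]               # two sub-clusters (shards)
reduced_global = [corpus[0], corpus[2], corpus[4]]  # delegate documents

vec = CountVectorizer(stop_words="english").fit(corpus)  # shared vocabulary

def topics(docs, k):
    """Fit LDA and return row-normalized topic-word distributions."""
    lda = LatentDirichletAllocation(n_components=k, random_state=0)
    lda.fit(vec.transform(docs))
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

global_topics = topics(reduced_global, k=3)
for i, shard in enumerate(local_sets):
    local_topics = topics(shard, k=2)
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print(f"shard {i}: local topic -> global topic map = {mapping.tolist()}")
```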