• Title/Summary/Keyword: Verification System


Simultaneous Multiple Transmit Focusing Method with Orthogonal Chirp Signal for Ultrasound Imaging System (초음파 영상 장치에서 직교 쳐프 신호를 이용한 동시 다중 송신집속 기법)

  • 정영관;송태경
    • Journal of Biomedical Engineering Research
    • /
    • v.23 no.1
    • /
    • pp.49-60
    • /
    • 2002
  • Receive dynamic focusing with an array transducer can provide near-optimum resolution only in the vicinity of the transmit focal depth. A customary method to increase the depth of field is to combine several beams with different focal depths, with an accompanying decrease in the frame rate. In this paper, we present a simultaneous multiple transmit focusing method in which chirp signals focused at different depths are transmitted at the same time. These chirp signals are mutually orthogonal in the sense that the autocorrelation function of each signal has a narrow mainlobe width and low sidelobe levels, and the cross-correlation function of any pair of the signals has values smaller than the sidelobe levels of each autocorrelation function. This means that each chirp signal can be separated from the combined received signals and compressed into a short pulse, which is then individually focused on a separate receive beamformer. Next, the individually focused beams are combined to form a frame of image. Theoretically, any two chirp signals defined over two non-overlapping frequency bands are mutually orthogonal. In the present work, however, a fractional overlap of adjacent frequency bands is permitted to design more chirp signals within a given transducer bandwidth. The elevation of the cross-correlation values due to the frequency overlap could be reduced by alternating the direction of frequency sweep of the adjacent chirp signals. We also observe that the proposed method provides better images when the low-frequency chirp is focused at a near point and the high-frequency chirp at a far point along the depth; better lateral resolution is obtained in the far field with reasonable SNR due to the SNR gain in pulse compression imaging.
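
To illustrate the orthogonality property this abstract relies on, here is a minimal numpy/scipy sketch; the band edges, sweep duration, and sampling rate are illustrative assumptions, not the authors' design values. Two linear chirps occupy adjacent bands with opposite sweep directions, and the cross-correlation peak is compared to the autocorrelation mainlobe:

```python
import numpy as np
from scipy.signal import chirp

fs = 40e6                          # sampling rate (Hz), illustrative
t = np.arange(0, 10e-6, 1 / fs)    # 10 us chirp duration

# Two linear chirps in adjacent bands; the low-band chirp sweeps up and the
# high-band chirp sweeps down, which suppresses the cross-correlation rise
# when adjacent bands partially overlap (as noted in the abstract).
s1 = chirp(t, f0=2e6, t1=t[-1], f1=5e6)   # low band, up-sweep
s2 = chirp(t, f0=8e6, t1=t[-1], f1=5e6)   # high band, down-sweep

auto1 = np.correlate(s1, s1, mode="full")
cross = np.correlate(s1, s2, mode="full")

# A cross-correlation peak well below the autocorrelation mainlobe means each
# chirp can be separated from the summed echoes and pulse-compressed alone.
print("auto peak :", np.abs(auto1).max())
print("cross peak:", np.abs(cross).max())
print("ratio (dB):", 20 * np.log10(np.abs(cross).max() / np.abs(auto1).max()))
```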

Current Status of Cattle Genome Sequencing and Analysis using Next Generation Sequencing (차세대유전체해독 기법을 이용한 소 유전체 해독 연구현황)

  • Choi, Jung-Woo;Chai, Han-Ha;Yu, Dayeong;Lee, Kyung-Tai;Cho, Yong-Min;Lim, Dajeong
    • Journal of Life Science
    • /
    • v.25 no.3
    • /
    • pp.349-356
    • /
    • 2015
  • Thanks to recent advances in next-generation sequencing (NGS) technology, diverse livestock species have been dissected at the genome-wide sequence level. As for cattle, there are currently four Korean indigenous breeds registered with the Domestic Animal Diversity Information System of the Food and Agriculture Organization of the United Nations: Hanwoo, Chikso, Heugu, and Jeju Heugu. These native genetic resources were recently whole-genome resequenced using various NGS technologies, providing enormous single nucleotide polymorphism information across the genomes. The NGS application further provided biological insights, for example that Korean native cattle are genetically distant from some cattle breeds of European origin. In addition, NGS technology was successfully applied to detect structural variations, particularly copy number variations, which were previously difficult to identify at the genome-wide level with reasonable accuracy. Despite this success, those recent studies also showed an inherent limitation in sequencing only a representative individual of each breed. To elucidate the biological implications of the sequenced data, confirmatory studies should follow by sequencing or validating the population of each breed. Because NGS sequencing prices have consistently dropped, various population genomic theories can now be applied to sequencing data obtained from the population of each breed of interest. There are still few such population studies available for the Korean native cattle breeds, but this situation will soon improve with the recent initiative for NGS sequencing of diverse native livestock resources, including the Korean native cattle breeds.

THE EFFECT OF C-FACTOR AND VOLUME ON MICROLEAKAGE OF COMPOSITE RESIN RESTORATIONS WITH ENAMEL MARGINS (법랑질 변연으로 이루어진 복합레진 수복물의 체적과 C-factor가 미세누출에 미치는 영향)

  • Koo, Bong-Joo;Shin, Dong-Hoon
    • Restorative Dentistry and Endodontics
    • /
    • v.31 no.6
    • /
    • pp.452-459
    • /
    • 2006
  • Competition usually develops between the opposing cavity walls as the restorative resin shrinks during polymerization. The magnitude of this phenomenon may depend on cavity configuration and volume. The purpose of this study was to evaluate the effect of cavity configuration and volume on microleakage of composite resin restorations with margins on enamel only. The labial enamel of forty bovine teeth was ground using a model trimmer to expose a flat enamel surface. Four groups with cylindrical cavities were defined according to volume and configuration factor (depth × diameter / C-factor): Group I: 1.5 mm × 2.0 mm / 4.0; Group II: 1.5 mm × 6.0 mm / 2.0; Group III: 2.0 mm × 1.72 mm / 5.62; Group IV: 2.0 mm × 5.23 mm / 2.54. After treatment with a fifth-generation one-bottle adhesive, BC Plus™ (Vericom, AnYang, Korea), cavities were bulk-filled with a microhybrid composite resin, Denfill™ (Vericom). Teeth were stored in distilled water for one day at room temperature, then finished and polished with the Sof-Lex system. Specimens were thermocycled 500 times between 5°C and 55°C, for 30 seconds at each temperature. Teeth were isolated with two layers of nail varnish except for the restoration surface and 1 mm of surrounding margins. Electrical conductivity (µA) was recorded in distilled water by an electrochemical method. Microleakage scores were compared and analyzed using two-way ANOVA at the 95% level. The results were as follows: 1. Small cavity volume showed a lower microleakage score than large volume; however, the difference was not statistically significant. 2. There was no relationship between cavity configuration and microleakage. Cavity configuration and volume did not affect microleakage of resin restorations with enamel margins only.
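
The group definitions above are consistent with the standard configuration-factor formula for a cylindrical Class I cavity, C = bonded area / unbonded area = 1 + 4h/d. A small Python check (a reader's verification, not part of the study) reproduces the reported C-factors closely and makes the volume comparison explicit:

```python
import math

def cylindrical_cavity(depth_mm, diameter_mm):
    """C-factor and volume of a cylindrical Class I cavity.

    C-factor = bonded area (walls + floor) / unbonded area (opening)
             = (pi*d*h + pi*(d/2)**2) / (pi*(d/2)**2) = 1 + 4*h/d
    """
    r = diameter_mm / 2
    c_factor = 1 + 4 * depth_mm / diameter_mm
    volume_mm3 = math.pi * r**2 * depth_mm
    return c_factor, volume_mm3

groups = {"I": (1.5, 2.0), "II": (1.5, 6.0), "III": (2.0, 1.72), "IV": (2.0, 5.23)}
for name, (h, d) in groups.items():
    c, v = cylindrical_cavity(h, d)
    print(f"Group {name}: C-factor = {c:.2f}, volume = {v:.1f} mm^3")
```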

ATM Cell Encipherment Method using Rijndael Algorithm in Physical Layer (Rijndael 알고리즘을 이용한 물리 계층 ATM 셀 보안 기법)

  • Im Sung-Yeal;Chung Ki-Dong
    • The KIPS Transactions:PartC
    • /
    • v.13C no.1 s.104
    • /
    • pp.83-94
    • /
    • 2006
  • This paper describes an ATM cell encipherment method using the Rijndael algorithm, adopted as the AES (Advanced Encryption Standard) by NIST in 2001. ISO 9160 describes the requirements for physical-layer data processing in encryption/decryption. To demonstrate the ATM cell encipherment method, we implemented ATM data encipherment equipment that satisfies the requirements of ISO 9160 and verified the encipherment/decipherment processing at the ATM STM-1 rate (155.52 Mbps). The DES algorithm processes data in 64-bit blocks with a 64-bit key (56 effective bits), whereas the Rijndael algorithm processes data in 128-bit blocks with a selectable key length of 128, 192, or 256 bits. Rijndael is thus more flexible for high-bit-rate data processing and stronger in encryption strength than DES. For real-time encryption of a high-bit-rate data stream, the Rijndael algorithm was implemented in an FPGA in this experiment. The boundary of a serial UNI cell is detected by the CRC method, and in the case of a user data cell the payload of 48 octets (384 bits) is converted in parallel and transferred to three Rijndael encipherment modules in 128-bit blocks. After encryption is complete, the header stored in a buffer is attached to the enciphered payload and the cell is retransmitted. At the receiving end, the cell boundary is detected by the CRC method and the payload type is determined. If the payload is a user data cell, it is transferred to the three Rijndael decryption modules in 128-bit blocks for decryption. In the case of a maintenance cell, the payload is extracted without decryption processing.
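
As a rough software illustration of the per-cell processing described above (not the paper's FPGA design), the sketch below uses pycryptodome's AES, i.e., Rijndael with a 128-bit block, to encipher a 48-octet cell payload as three 128-bit blocks while the 5-octet header passes through in the clear; the key and cell contents are made-up demo values:

```python
from Crypto.Cipher import AES  # pycryptodome; AES is Rijndael with 128-bit blocks

key = bytes.fromhex("00112233445566778899aabbccddeeff")  # demo 128-bit key

cell = bytes(5) + bytes(range(48))       # 5-octet header + 48-octet payload
header, payload = cell[:5], cell[5:]

# The 48-octet (384-bit) payload splits exactly into three 128-bit Rijndael
# blocks, mirroring the three parallel encipherment modules in the paper.
enc = AES.new(key, AES.MODE_ECB)
enc_payload = b"".join(enc.encrypt(payload[i:i + 16]) for i in (0, 16, 32))

# The header is reattached unencrypted so the cell can still be routed.
enc_cell = header + enc_payload
assert len(enc_cell) == 53               # standard ATM cell length

# Receiving end: strip the header and decipher the three blocks.
dec = AES.new(key, AES.MODE_ECB)
assert b"".join(dec.decrypt(enc_payload[i:i + 16]) for i in (0, 16, 32)) == payload
```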

The Feasibility Study of MRI-based Radiotherapy Treatment Planning Using Look Up Table (Look Up Table을 이용한 자기공명영상 기반 방사선 치료계획의 타당성 분석 연구)

  • Kim, Shin-Wook;Shin, Hun-Joo;Lee, Young-Kyu;Seo, Jae-Hyuk;Lee, Gi-Woong;Park, Hyeong-Wook;Lee, Jae-Choon;Kim, Ae-Ran;Kim, Ji-Na;Kim, Myong-Ho;Kay, Chul-Seung;Jang, Hong-Seok;Kang, Young-Nam
    • Progress in Medical Physics
    • /
    • v.24 no.4
    • /
    • pp.237-242
    • /
    • 2013
  • In the intracranial region, accurate delineation of the target volume has been difficult with CT data alone due to the poor soft-tissue contrast of CT images. Therefore, magnetic resonance images (MRI) have been widely used for delineation of target volumes. To calculate dose distributions with MRI-based RTP, the concept of electron density (ED) mapping from diagnostic CT images and the pseudo-CT concept from MRI were introduced. In this study, a look-up table (LUT) built from fifteen patients' diagnostic brain MRI images was created to verify the feasibility of MRI-based RTP, and the dose distributions from the MRI-based calculations were compared to the original CT-based calculation. One MRI set carries ED information from the LUT (lMRI); another set was generated with voxel values assigned a homogeneous density of water (wMRI). A simple plan with a single anterior 6 MV portal was applied to the CT, lMRI, and wMRI sets. Depending on the patient's target geometry, the 3D conformal plans used 6 MV photon beams and two to five gantry portals. The differences in dose distribution and DVH between the lMRI-based and CT-based plans were smaller than for the wMRI-based plan. The dose difference of wMRI vs. lMRI was measured as 91 cGy vs. 57 cGy at maximum dose, 74 cGy vs. 42 cGy at mean dose, and 94 cGy vs. 53 cGy at minimum dose. The differences in maximum, minimum, and mean dose of the wMRI-based plan were larger than those of the lMRI-based plan because air cavities were not accounted for in the wMRI-based plan. These results demonstrate the feasibility of lMRI-based planning for brain tumor radiation therapy.
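
The lMRI step reduces to a voxelwise lookup from MR intensity to relative electron density. Below is a minimal numpy sketch of that mapping; the table values are entirely hypothetical (the study built its LUT from fifteen patients' diagnostic brain MRIs):

```python
import numpy as np

# Hypothetical LUT: MR voxel intensity -> electron density relative to water.
mr_intensity = np.array([0, 50, 200, 600, 1000])          # arbitrary MR units
electron_density = np.array([0.0, 0.2, 1.0, 1.04, 1.10])  # relative ED

def to_pseudo_ct(mri_volume):
    """Map an MR volume to electron density by LUT interpolation (lMRI)."""
    return np.interp(mri_volume, mr_intensity, electron_density)

volume = np.random.randint(0, 1000, size=(8, 8, 8))  # toy MR volume
lmri = to_pseudo_ct(volume)       # ED-mapped volume used for dose calculation
wmri = np.ones_like(lmri)         # wMRI variant: homogeneous water density
```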

A Case Study on Mechanism Factors for Result Creation of Informatization of IT Service Company (IT서비스 기업의 정보화 성과 창출을 위한 메커니즘 요인 사례 연구)

  • Choi, Hae-Lyong;Gu, Ja-Won
    • Management & Information Systems Review
    • /
    • v.36 no.5
    • /
    • pp.1-26
    • /
    • 2017
  • Research on corporate informatization has so far focused on the completeness of the information technology itself and its financial effects, with insufficient attention to whether information technology can support business strategy. It is necessary to verify whether a company's strategy implementation can be driven by enterprise informatization, and to clarify the relationship between the main mechanism factors and informatization performance. Using cases from a representative domestic IT service company, this study examines which mechanism factors are applied in the process of creating informatization results, from three mechanism perspectives (selecting, learning, and coordinating mechanisms), and which of these factors matter most. The study yields eight propositions. For company leadership, securing the organization's information capability was identified as the key to realizing informatization results, and informatization investment as the most important factor in resolving organizational decentralization problems. Additionally, as competition in the industry intensified, informatization investment shifted toward a means of strategy implementation, and investment decisions came to be made through official processes and information technology. Differentiated company capability was built on the acquisition of technical knowledge, and company information was disseminated to all employees through the information system. Informatization change management and management of outside subcontractors were also recognized as important coordinating factors. The first implication of this study is that, because it directly examines, through the experience of executives in charge of business and of informatization, the mechanism factors that preceding studies on informatization results did not cover empirically, it can provide practical guidance on the factors that should be managed for the informatization results of IT companies. Second, because the ser-M framework is applied to IT service companies for the first time, the study can contribute academically, through deeper understanding and empirical cases, to identifying the main mechanism factors for creating informatization results in companies in other fields as well.

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and even monetary damage occurs more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing them from SNS such as Twitter. The collected data were given to two researchers to decide whether they were related to cybercriminality, particularly financial fraud. We then selected keywords from among the nouns and symbols in the data. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs; more than 820,000 articles were collected. The collected articles were refined through preprocessing into learning data. The preprocessing is divided into morphological analysis, stop-word removal, and selection of valid parts of speech. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are kept: nouns express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. To turn the selected data into learning data, each item was labeled 'legal' or 'illegal'. The processed data were then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set (70%) and a test data set (30%). SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and Collective Intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, clearly superior to Term Frequency, MLE, and the others. Hence, the results suggest that the proposed method is valid and practically usable.
In this paper, we propose a framework for crisis management triggered by anomalies in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
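
A compact scikit-learn sketch of the classification stage described above: token counts form the document-term matrix, the data is split 70/30, and an RBF SVM is trained with gamma = 0.5 and cost C = 10 as in the paper. The toy documents and whitespace tokenization stand in for the Korean morphological-analysis pipeline:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder corpus: in the paper, documents are SNS posts reduced to
# nouns and symbols by Korean morphological analysis.
docs = ["loan fast approval $$ call now", "weekend hiking photos",
        "private loan no credit check !!", "recipe for dinner"] * 25
labels = [1, 0, 1, 0] * 25                   # 1 = illegal ad, 0 = legal

X = CountVectorizer().fit_transform(docs)    # document-term matrix
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)

# Parameters from the paper: RBF kernel, gamma = 0.5, cost C = 10.
clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```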

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.139-153
    • /
    • 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through successive economic crises, bankruptcies have increased and bankruptcy prediction models have become more and more important; corporate bankruptcy has therefore been regarded as one of the major topics of research in business management, and many studies in industry are also in progress. Previous studies attempted various methodologies to improve bankruptcy prediction accuracy and to resolve the overfitting problem, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), which are based on statistics. Recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN), and fuzzy theory and genetic algorithms have also been applied. As a result, many bankruptcy models have been developed and performance has improved. In general, a company's financial and accounting information changes over time, as does the market situation, so predicting bankruptcy from information at a single point in time is difficult. Although traditional research thus ignores the time effect, dynamic models have not been studied much; ignoring the time effect yields biased results, so a static model may not be suitable for predicting bankruptcy, and a dynamic model offers the possibility of improvement. In this paper, we propose the RNN (Recurrent Neural Network), one of the deep learning methodologies; the RNN learns time-series data and its performance is known to be good. For estimating the bankruptcy prediction model and comparing forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016. To avoid predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as delisting due to sluggish earnings, confirmed through KIND, a corporate stock information website. We then selected variables from previous papers: the first set consists of Z-score variables, which have become traditional in bankruptcy prediction, and the second is a dynamic variable set. We selected 240 normal companies and 226 bankrupt companies for the first variable set, and 229 normal companies and 226 bankrupt companies for the second. We created a model that reflects dynamic changes in time-series financial data, and by comparing the suggested model with existing bankruptcy prediction models, we found that it could help improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmarks. The experiment showed that the RNN's performance was better than the comparative models.
The accuracy of the RNN was high for both variable sets, and the Area Under the Curve (AUC) value was also high. The hit-ratio table also shows that the RNN's rate of correctly predicting a troubled company as bankrupt was higher than that of the other comparative models. A limitation of this paper is that an overfitting problem occurs during RNN learning, but we expect it can be addressed by selecting more learning data and appropriate variables. From these results, this research is expected to contribute to the development of bankruptcy prediction by proposing a new dynamic model.
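
A minimal Keras sketch of the dynamic model: each firm is represented as a short sequence of yearly financial ratios, and a simple RNN outputs a bankruptcy probability scored by AUC. The layer sizes, sequence length, and synthetic data are assumptions for illustration; the paper does not publish its architecture, and its real inputs came from KIS Value:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in: 466 firms x 3 years x 5 financial ratios.
rng = np.random.default_rng(0)
X = rng.normal(size=(466, 3, 5)).astype("float32")
y = rng.integers(0, 2, size=466)             # 1 = bankrupt, 0 = normal

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3, 5)),            # (timesteps, features)
    tf.keras.layers.SimpleRNN(16),           # learns the time-series pattern
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.3, verbose=0)
```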

Experimental Analysis of Nodal Head-outflow Relationship Using a Model Water Supply Network for Pressure Driven Analysis of Water Distribution System (상수관망 압력기반 수리해석을 위한 모의 실험시설 기반 절점의 압력-유량 관계 분석)

  • Chang, Dongeil;Kang, Kihoon
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.36 no.6
    • /
    • pp.421-428
    • /
    • 2014
  • For the analysis of water supply networks, demand-driven and pressure-driven analysis methods have been proposed. Of the two, demand-driven analysis (DDA) can only be used under normal operating conditions to evaluate the hydraulic status of a pipe network. Under abnormal conditions, i.e., unexpected pipe destruction or abnormally low pressure, the pressure-driven analysis (PDA) method should be used to estimate the suppliable flowrate at each node in a network. To carry out pressure-driven analysis, the head-outflow relationship (HOR), which estimates the flowrate at a given pressure at each node, must first be determined. Most previous studies empirically suggested that each node possesses its own characteristic head-outflow relationship, which therefore requires verification against actual field data for proper application in PDA modeling. In this study, a model pipe network was constructed, and various operation scenarios of normal and abnormal conditions, which cannot be realized in real pipe networks, were established. Using the model network, pressure and flowrate data at each node were obtained under each operating condition, and previously proposed HOR equations were evaluated against them. In addition, the head-outflow relationship at each node was analyzed, especially under multiple pipe-destruction events. The experimental data showed that the flowrate reduction corresponding to a given pressure drop (caused by pipe destruction at one or multiple points in the network) followed the intrinsic head-outflow relationship of each node. Comparing the experimentally obtained head-outflow relationship with the various HOR equations proposed by previous studies, the one proposed by Wagner et al., with the exponent parameter m set to 3.0, showed the best agreement.
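
For reference, the sketch below implements a common power-law form of the Wagner et al. head-outflow relationship, Q = Q_req · ((H − H_min)/(H_req − H_min))^(1/m), with the exponent m exposed as a parameter (m = 3.0 gave the best fit in this study). The exact parameterization used in the paper may differ, and the example heads and demand are made up:

```python
def wagner_outflow(H, H_min, H_req, Q_req, m=3.0):
    """Available nodal outflow under a Wagner-type head-outflow relation.

    Returns 0 below H_min, the full demand Q_req at or above H_req, and
    Q_req * ((H - H_min) / (H_req - H_min)) ** (1/m) in between.
    """
    if H <= H_min:
        return 0.0
    if H >= H_req:
        return Q_req
    return Q_req * ((H - H_min) / (H_req - H_min)) ** (1.0 / m)

# Example: a node needing 10 L/s at 20 m head, with zero outflow below 5 m.
for head in (4.0, 10.0, 15.0, 20.0):
    print(f"{head:4.1f} m -> {wagner_outflow(head, 5.0, 20.0, 10.0):5.2f} L/s")
```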

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.29-45
    • /
    • 2015
  • Response modeling is a well-known research issue for those who seek better performance in predicting customers' responses to marketing promotions. A response model reduces marketing cost by identifying prospective customers in a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy incurs unnecessary cost. In addition, the big data environment has accelerated the development of response models with data mining techniques such as CBR, neural networks, and support vector machines. CBR is one of the major tools in business because it is simple and robust to apply to response modeling, and it remains attractive for business data mining applications even though it has not shown high performance compared to other machine learning techniques. Thus, many studies have tried to improve CBR for business data mining with enhanced algorithms or with the support of other techniques such as genetic algorithms, decision trees, and AHP (Analytic Hierarchy Process). Ahn and Kim (2008) used logit, neural networks, and CBR to predict which customers would purchase the items promoted by a marketing department, and tried to optimize the number k of nearest neighbors with a genetic algorithm to improve the performance of the integrated model. Hong and Park (2009) noted that an integrated approach combining CBR with logit, neural networks, and Support Vector Machine (SVM) predicted customers' responses to marketing promotions better than each data mining model alone. This paper presents an approach to predicting customers' responses to marketing promotions with Case Based Reasoning, developed by applying a different weight to each feature. We fitted a logit model to a database containing the promotion and purchasing data of bath soap, and the estimated coefficients were then used as feature weights in CBR. We empirically compared the performance of the proposed weighted-CBR model to neural networks and a pure CBR model, and found that the weighted-CBR model performed better than pure CBR. Imbalanced data is a common problem when building classification models on real data, as in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling. Imbalanced data means that the number of instances in one class is remarkably small or large compared to the other classes. A classification model such as a response model has trouble learning patterns from such data because it tends to ignore the small class while classifying the large class correctly. To resolve the problem caused by an imbalanced data distribution, sampling is one of the most representative approaches, categorized into undersampling and oversampling. However, CBR is not sensitive to the data distribution because, unlike machine learning algorithms, it does not build a model from the data.
In this study, we investigated the robustness of our proposed model while changing the ratio of response to nonresponse customers, because the customers who respond to a promotion are always a small fraction of those who do not in the real world. We simulated the proposed model 100 times to validate its robustness under different response-to-nonresponse ratios in the imbalanced data distribution. We found that our proposed CBR-based model outperformed the compared models on the imbalanced data sets. Our study is expected to improve the performance of response models for promotion programs with CBR under the imbalanced data distributions found in the real world.
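
A sketch of the weighting idea on synthetic data: fit a logit model, take the absolute coefficients as feature weights, and run k-nearest-neighbor retrieval (the CBR step) in the weighted feature space. This is one plausible reading of the paper's procedure, not its exact implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic promotion data; responders (y = 1) are the minority class,
# as in real campaigns.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500) > 1.5).astype(int)

X_std = StandardScaler().fit_transform(X)

# Step 1: logit coefficients serve as feature-importance weights.
logit = LogisticRegression().fit(X_std, y)
weights = np.abs(logit.coef_[0])

# Step 2: rescaling each feature by its weight makes plain Euclidean k-NN
# behave as weighted-CBR retrieval of the most similar past cases.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_std * weights, y)
print("training accuracy:", knn.score(X_std * weights, y))
```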