• Title/Summary/Keyword: 성능검사시스템 (Performance Inspection System)

Search results: 530

Decision function for optimal smoothing parameter of asymmetrically reweighted penalized least squares (Asymmetrically reweighted penalized least squares에서 최적의 평활화 매개변수를 위한 결정함수)

  • Park, Aa-Ron;Park, Jun-Kyu;Ko, Dae-Young;Kim, Sun-Geum;Baek, Sung-June
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.3
    • /
    • pp.500-506
    • /
    • 2019
  • In this study, we present a decision function for the optimal smoothing parameter in baseline correction using asymmetrically reweighted penalized least squares (arPLS). Baseline correction is very important because it influences the performance of spectral analysis in spectroscopy applications. The baseline is often estimated by selecting the parameter through visual inspection of the analyte spectrum. This is a highly subjective procedure and can be tedious, especially with a large number of data sets. For these reasons, an objective procedure is necessary to determine the optimal parameter value for baseline correction. The proposed function is defined by modeling the median value of the feasible parameter range as a function of the length and polynomial order of the background signal: the median value increases with the length of the signal and decreases with its polynomial degree. The simulated data comprised a total of 112 signals, combining 7 signal lengths with analyte signals and linear, quadratic, cubic, and 4th-order curved baselines. According to the experimental results using the simulated data and real Raman spectra, we confirmed that the proposed function can be effectively applied to optimal parameter selection for baseline correction using arPLS.
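The arPLS estimator the abstract builds on can be sketched as a Whittaker smoother whose weights are updated asymmetrically so that peaks above the baseline are down-weighted. The following is a minimal illustration under our own naming (`lam` standing in for the smoothing parameter the paper's decision function selects); it is not the authors' code:

```python
import numpy as np

def arpls(y, lam=1e5, ratio=1e-6, niter=50):
    """Asymmetrically reweighted penalized least squares baseline estimate."""
    n = len(y)
    # Second-difference matrix D (n-2 x n): roughness penalty lam * ||D z||^2.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = (1.0, -2.0, 1.0)
    H = lam * (D.T @ D)
    w = np.ones(n)
    for _ in range(niter):
        # Weighted penalized least squares: (W + lam D'D) z = W y.
        z = np.linalg.solve(np.diag(w) + H, w * y)
        d = y - z
        dn = d[d < 0]                      # residuals below the baseline
        if dn.size == 0:
            break
        m, s = dn.mean(), dn.std() + 1e-12
        # Logistic reweighting: points far above the baseline get weight ~0.
        t = np.clip(2.0 * (d - (2.0 * s - m)) / s, -50.0, 50.0)
        w_new = 1.0 / (1.0 + np.exp(t))
        if np.linalg.norm(w - w_new) / max(np.linalg.norm(w), 1e-12) < ratio:
            break
        w = w_new
    return z
```

On a synthetic spectrum (linear baseline plus one Gaussian peak), the estimate tracks the baseline while ignoring the peak; the decision function proposed in the paper would automate the choice of `lam`.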

Development and Validation of AI Image Segmentation Model for CT Image-Based Sarcopenia Diagnosis (CT 영상 기반 근감소증 진단을 위한 AI 영상분할 모델 개발 및 검증)

  • Lee Chung-Sub;Lim Dong-Wook;Noh Si-Hyeong;Kim Tae-Hoon;Ko Yousun;Kim Kyung Won;Jeong Chang-Won
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.3
    • /
    • pp.119-126
    • /
    • 2023
  • As of 2021, sarcopenia was not yet classified as a disease in Korea, but it is recognized as a social problem in developed countries that have entered an aging society. The diagnosis of sarcopenia follows the international standard guidelines presented by the European Working Group on Sarcopenia in Older People (EWGSOP) and the Asian Working Group for Sarcopenia (AWGS). Recently, it has been recommended to evaluate muscle function using physical performance evaluation, walking speed measurement, and standing tests in addition to absolute muscle mass. As a representative method for measuring muscle mass, body composition analysis using DEXA has been formally adopted in clinical practice. In addition, various studies on measuring muscle mass from abdominal MRI or CT images are being actively conducted. In this paper, we develop an AI image segmentation model based on abdominal CT images, which have a relatively short imaging time, for the diagnosis of sarcopenia, and describe its multicenter validation. We developed an artificial intelligence model using U-Net that can automatically segment muscle, subcutaneous fat, and visceral fat after selecting the L3 region from the CT image. To evaluate the performance of the model, internal verification was performed by calculating the intersection over union (IoU) of the segmented regions, and the results of external verification using data from other hospitals are shown. Based on the verification results, we review the remaining problems and possible solutions.
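The internal-verification metric mentioned, intersection over union (IoU), has a standard definition for binary segmentation masks; a generic sketch (not the authors' code):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union
```

In a multiclass setting like muscle / subcutaneous fat / visceral fat, the same computation is applied per class and the results are typically averaged.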

Design of Translator for generating Secure Java Bytecode from Thread code of Multithreaded Models (다중스레드 모델의 스레드 코드를 안전한 자바 바이트코드로 변환하기 위한 번역기 설계)

  • 김기태;유원희
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2002.06a
    • /
    • pp.148-155
    • /
    • 2002
  • Multithreaded models improve the efficiency of parallel systems by combining inner parallelism, asynchronous data availability, and the locality of the von Neumann model. Such a model executes thread code generated by a compiler, and the quality of that code depends on the method of generation. But multithreaded models have the drawback that the execution model is restricted to a specific platform. Java, on the contrary, is platform independent, so if we can translate thread code into Java bytecode, we can exploit the advantages of multithreaded models on many platforms. Java executes Java bytecode, an intermediate language format for the Java virtual machine. In our translator, Java bytecode plays the role of the intermediate language and the Java virtual machine works as the back end. However, Java bytecode translated from multithreaded models has the drawback that it is not secure. In this paper, platform-independent multithread code is made executable on the Java virtual machine: we design and implement a translator that converts the thread code of multithreaded models into Java bytecode and checks the resulting bytecode for security problems.


RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1135-1147
    • /
    • 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the image acquisition time is required. In general, high-resolution satellites provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct the RPC errors. The representative method of acquiring GCPs is a field survey to obtain accurate ground coordinates. However, it is difficult to find GCPs in the satellite image due to image quality, land cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, it is possible to automate the collection of GCPs through an image matching algorithm. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for the extraction of matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, we compared the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with the GCP-based RPC correction result. The GCP-based method improved the correction accuracy by 2.14 pixels for the sample and 5.43 pixels for the line compared to the vendor-provided RPC. In the proposed method using SURF and phase correlation, the accuracy of the sample was improved by 0.83 and 1.49 pixels, and that of the line by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. The experimental results show that the proposed method using UAV imagery is a feasible alternative to the GCP-based method for RPC correction.
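Of the two matching methods compared, phase correlation (the area-based one) admits a compact sketch: the phase of the cross-power spectrum of two images encodes their relative translation, which appears as a peak in its inverse FFT. The integer-shift version below is an illustration under our own naming, not the authors' implementation (real pipelines add subpixel refinement and windowing):

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer (dy, dx) shift of `mov` relative to `ref`."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map wrapped indices to signed shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Matching points for RPC adjustment can then be generated by running such a correlation on local patches of the resampled UAV orthoimage against the satellite image.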

Performance Characteristics of Agitated Bed Manure Composting and Ammonia Removal from Composting Using Sawdust Biofiltration System (교반식 축분 퇴비화 및 톱밥 탈취처리 시스템의 퇴비화 암모니아 제거 성능)

  • Hong, J.H.;Park, K.J.
    • Journal of Animal Environmental Science
    • /
    • v.13 no.1
    • /
    • pp.13-20
    • /
    • 2007
  • Sawdust biofiltration is an emerging bio-technology for the control of ammonia emissions, including compost odors, from the composting of biological wastes. Although sawdust is widely used as a bulking agent in composting systems and as a medium for microbial attachment in biofiltration systems, the performance of agitated bed composting and sawdust biofiltration is not well established. A pilot-scale composting system for hog manure amended with sawdust, together with a sawdust biofiltration system for practical operation, was investigated using an aerated, agitated rectangular reactor with a compost turner and a sawdust biofilter operated under controlled conditions, with working capacities of approximately 40 m³ and 4.5 m³, respectively. These were used to investigate the effect of compost temperature, seed germination rate, and the C/N ratio of the compost on ammonia emissions, compost maturity, and sawdust biofiltration performance. Temperature profiles showed that the material in three runs reached temperatures of 55 to 65°C and above. The ammonia concentration in the exhaust gas of the sawdust biofilter media was below the maximum average value of 45 ppm. Seed germination rates of the final compost were maintained at 70 to 93%, and EC values of the finished compost varied between 2.8 and 4.8 dS/m, providing adequate conditions for plant growth.


ATM Cell Encipherment Method using Rijndael Algorithm in Physical Layer (Rijndael 알고리즘을 이용한 물리 계층 ATM 셀 보안 기법)

  • Im Sung-Yeal;Chung Ki-Dong
    • The KIPS Transactions:PartC
    • /
    • v.13C no.1 s.104
    • /
    • pp.83-94
    • /
    • 2006
  • This paper describes an ATM cell encipherment method using the Rijndael algorithm, adopted as the AES (Advanced Encryption Standard) by NIST in 2001. ISO 9160 describes the requirements for physical layer data processing in encryption/decryption. To demonstrate the ATM cell encipherment method, we implemented ATM data encipherment equipment which satisfies the requirements of ISO 9160 and verified encipherment/decipherment processing at the ATM STM-1 rate (155.52 Mbps). The DES algorithm processes data in a block size of 64 bits with a 64-bit key, whereas the Rijndael algorithm processes data in a block size of 128 bits with a selectable key length of 128, 192, or 256 bits, so it is more flexible for high-bit-rate data processing and stronger in encryption strength than DES. For real-time encryption of the high-bit-rate data stream, the Rijndael algorithm was implemented in an FPGA in this experiment. The boundary of the serial UNI cell is detected by the CRC method, and in the case of a user data cell, the 48-octet (384-bit) payload is converted in parallel and transferred to three Rijndael encipherment modules in 128-bit blocks. After completion of encryption, the header stored in the buffer is attached to the enciphered payload and the cell is retransmitted. At the receiving end, the cell boundary is detected by the CRC method and the payload type is determined. When the payload type is a user data cell, the payload is transferred to the three Rijndael decryption modules in 128-bit blocks for decryption; in the case of a maintenance cell, the payload is extracted without decryption processing.
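The cell-handling flow described (header passed through in the clear, 48-octet payload ciphered as three 128-bit blocks) can be sketched as below. A real implementation would plug in Rijndael/AES per 16-byte block; here a byte-wise XOR stands in as an explicitly labeled placeholder cipher purely so the example runs, and all names are ours:

```python
def process_cell(cell, cipher_block):
    """Apply a 16-byte block cipher to the payload of a 53-octet ATM cell."""
    header, payload = cell[:5], cell[5:]    # 5-octet header, 48-octet payload
    assert len(payload) == 48
    # Split the 384-bit payload into three 128-bit blocks, as in the paper.
    blocks = [payload[i:i + 16] for i in range(0, 48, 16)]
    return header + b"".join(cipher_block(b) for b in blocks)
```

With AES encrypting each block independently, the same structure applies; only `cipher_block` changes, and decryption uses the inverse block function on the same split.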

Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.493-500
    • /
    • 2021
  • In this paper, we propose the development of a deep learning structure to improve the quality of polygonal containers. The deep learning structure consists of a convolution layer, a bottleneck layer, a fully connected layer, and a softmax layer. The convolution layer obtains a feature image by performing 3x3 convolutions with several feature filters on the input image or on the feature image of the previous layer. The bottleneck layer selects only the optimal features from the feature image extracted through the convolution layer, reducing the channels with a 1x1 convolution and ReLU and then performing a 3x3 convolution with ReLU. The global average pooling operation performed after the bottleneck layer reduces the size of the feature image while retaining only those optimal features. The fully connected stage outputs the result through 6 fully connected layers. The softmax layer multiplies the values of the input nodes by the weights to each target node, sums them, and converts the result into a value between 0 and 1 through an activation function. After learning is completed, the recognition process classifies non-circular glass bottles by performing image acquisition with a camera, position detection, and non-circular glass bottle classification using deep learning, as in the learning process. In an experiment at an authorized testing institute to evaluate the performance of the proposed structure, the good/defective discrimination accuracy was measured at 99%, on par with the world's highest level. Inspection time averaged 1.7 seconds, within the operating time standards of production processes using non-circular machine vision systems. Therefore, the effectiveness of the deep learning structure proposed in this paper for improving the quality of polygonal containers was proven.
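The final softmax stage described in the abstract maps the last layer's activations to values between 0 and 1 that sum to one per sample; a standard numerically stable sketch (generic, not the authors' code):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

The class with the largest resulting probability is taken as the good/defective decision.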

The Design of Mobile Medical Image Communication System based on CDMA 1X-EVDO for Emergency Care (CDMA2000 1X-EVDO망을 이용한 이동형 응급 의료영상 전송시스템의 설계)

  • Kang, Won-Suk;Yong, Kun-Ho;Jang, Bong-Mun;Namkoong, Wook;Jung, Hai-Jo;Yoo, Sun-Kook;Kim, Hee-Joung
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2004.11a
    • /
    • pp.53-55
    • /
    • 2004
  • In emergency cases, such as severe trauma involving fracture of the skull, spine, or cervical bone from an auto accident or a fall, and/or pneumothorax, which cannot be diagnosed exactly by visual examination, radiological examination is necessary during transfer to the hospital for emergency care. The aim of this study was to design and evaluate a prototype mobile medical image communication system based on CDMA 1X-EVDO. The system consists of a laptop computer used as a transmitting DICOM client, linked to a cellular phone which supports the CDMA 1X-EVDO communication service, and a receiving DICOM server installed in the hospital. The DR images were stored in DICOM format in the storage of the transmitting client. These images were compressed into JPEG2000 format and progressively transmitted from the transmitting client to the receiving server, where they were displayed on the server monitor. To evaluate the image quality, the PSNR of each compressed image was measured. Also, several field tests were performed using a commercial CDMA2000 1X-EVDO reverse link with TCP/IP data segments, at several vehicle speeds in the Seoul area.
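The image-quality measure used here, PSNR of the JPEG2000-compressed image against the original, has a standard definition; a minimal sketch for 8-bit images (generic, not the authors' code):

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    orig = np.asarray(orig, dtype=float)
    recon = np.asarray(recon, dtype=float)
    mse = np.mean((orig - recon) ** 2)
    # Identical images have zero MSE, hence infinite PSNR.
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR at a given JPEG2000 compression ratio indicates less distortion introduced before transmission over the CDMA link.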


Measurement of Image Quality According to the Time of Computed Radiography System (시간에 따르는 CR장비의 영상의 질평가)

  • Son, Soon-Yong;Choi, Kwan-Woo;Kim, Jung-Min;Jeong, Hoi-Woun;Kwon, Kyung-Tae;Hwang, Sun-Kwang;Lee, Ik-Pyo;Kim, Ki-Won;Jung, Jae-Yong;Lee, Young-Ah;Son, Jin-Hyun;Min, Jung-Whan
    • Journal of radiological science and technology
    • /
    • v.38 no.4
    • /
    • pp.365-374
    • /
    • 2015
  • The regular quality assurance (RQA) of X-ray images is essential for maintaining a high accuracy of diagnosis. This study evaluated the modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) of a computed radiography (CR) system for various periods of use from 2006 to 2015. We measured the presampling MTF using the edge method with beam quality RQA 5, based on the International Electrotechnical Commission (IEC) standard. The spatial frequencies corresponding to the 50% MTF for the CR systems in 2006, 2009, 2012, and 2015 were 1.54, 1.14, 1.12, and 1.38 mm⁻¹, respectively, and those corresponding to the 10% MTF were 2.68, 2.44, 2.44, and 2.46 mm⁻¹, respectively. In the NPS results, the CR system showed the best noise distribution in 2006, with the quality of the distributions decreasing in the order 2015, 2009, and 2012. At peak DQE and at DQE at 1 mm⁻¹, the CR system showed the best efficiency in 2006, followed by 2015, 2009, and 2012. Because the eraser lamp in the CR system was replaced, the image quality in 2015 was superior to that in 2009 and 2012. This study can be incorporated into clinical QA procedures requiring evaluation of the performance of CR systems.
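The edge method referred to above differentiates the measured edge spread function (ESF) to obtain the line spread function (LSF) and takes its Fourier magnitude as the presampling MTF. The sketch below illustrates only that core chain under our own naming; a standards-conformant measurement additionally involves angled-edge oversampling, binning, and windowing:

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch=1.0):
    """Presampling MTF sketch: ESF -> LSF (derivative) -> |FFT|, normalized."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                              # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch)
    return freqs, mtf
```

The 50% and 10% MTF frequencies quoted in the abstract are then read off where the normalized curve crosses 0.5 and 0.1.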

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order. Also, current search tools cannot retrieve the documents related to a retrieved document from the gigantic amount of documents. The most important problem for many current search systems is to increase the quality of search, that is, to provide related documents and to keep the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the article with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords, or document. A citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article). But CiteSeer cannot index links that researchers do not make, because it indexes only the links created when researchers cite other articles; for the same reason, CiteSeer does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in the documents. A document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. We build a hierarchical graph of a document using this table and then integrate the graphs of the documents. Using the graph of the entire document collection, we calculate the area of each document relative to the integrated documents and mark the relations among documents by comparing their areas. We also propose a method for the structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, roughly 15% better than the baseline.
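The F-measure reported in the evaluation combines precision and recall over retrieved and relevant document sets; the standard definition, sketched generically (not the authors' code):

```python
def f_measure(retrieved, relevant, beta=1.0):
    """F-measure: harmonic mean of precision and recall when beta == 1."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)           # correctly retrieved documents
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```

Comparing this score for the graph-based retrieval against the Lucene baseline is what yields the roughly 15% improvement the abstract cites.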