• Title/Summary/Keyword: Software Quality Control


Counting Harmful Aquatic Organisms in Ballast Water through Image Processing (이미지처리를 통한 선박평형수 내 유해수중생물 개체수 측정)

  • Ha, Ji-Hun;Im, Hyo-Hyuk;Kim, Yong-Hyuk
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.6 no.3
    • /
    • pp.383-391
    • /
    • 2016
  • Ballast water provides stability and manoeuvrability to a ship. However, harmful aquatic organisms transferred in ballast water can disturb local ecosystems. To minimize the transfer of such foreign organisms, the IMO (International Maritime Organization) adopted the International Convention for the Control and Management of Ships' Ballast Water and Sediments in 2004. Once the convention takes effect, port authorities may need to verify that ballast water has been properly treated. In this paper, we propose a method of counting harmful aquatic organisms in ballast water through image processing. We extracted three samples from ballast water collected at Busan port in Korea, and produced three grey-scale images from each sample as experimental data. We compared the proposed method with CellProfiler, a well-known image-based cell-counting program, whose settings were chosen empirically from an expert's manual cell counts. After finding, for each image, a threshold at which our result matched that of CellProfiler, we used the average of these values as the final threshold. Our experiments showed that the proposed method is simple yet about ten times faster than CellProfiler, with no loss of output quality.
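The counting approach described above reduces to binarizing each grey-scale image at a global threshold and counting connected components. A minimal sketch in Python with OpenCV, where the threshold, minimum blob area, and file name are illustrative assumptions rather than the paper's values:

```python
import cv2

def count_organisms(image_path, threshold=128, min_area=20):
    """Count blobs in a grey-scale micrograph via global thresholding
    and connected-component labelling (assumes bright organisms on a
    darker background; threshold and min_area are illustrative)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Binarize: pixels above the threshold are treated as organism pixels.
    _, binary = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)
    # Label connected components; stats holds per-blob statistics.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Skip label 0 (background) and discard specks smaller than min_area.
    return sum(1 for i in range(1, n_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area)

print(count_organisms("sample1.png", threshold=128))  # hypothetical file
```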

Market in Medical Devices of Blockchain-Based IoT and Recent Cyberattacks

  • Shih-Shuan WANG;Hung-Pu (Hong-fu) CHOU;Aleksander IZEMSKI;Alexandru DINU;Eugen-Silviu VRAJITORU;Zsolt TOTH;Mircea BOSCOIANU
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.2
    • /
    • pp.39-44
    • /
    • 2023
  • The novelty of this thesis lies in its analysis of the significance of cyber security challenges in blockchain. A wide variety of enterprises, including those in the medical market, are targets of cyberattacks. Medical facilities such as hospitals and clinics, along with IoT-based medical devices like pacemakers, are easy targets for cybercriminals. Cyberattacks in the medical field not only endanger patients' lives but can also expose private and sensitive information. Because this equipment is sensitive, relevant, and of a medical character, it is crucial to review present and historical flaws and vulnerabilities in blockchain-based IoT and in the equipment of medical institutions. This study investigates recent and current weaknesses in medical equipment, blockchain-based IoT, and medical institutions. Security systems are becoming increasingly important in blockchain-based IoT medical devices and in digital adoption more broadly, and software is gaining importance as a standalone medical device. The use of software in the medical market is growing exponentially, and many countries have already set guidelines for its quality control. A key finding of the thesis is that blockchain-based IoT medical equipment no longer exists in a vacuum, thanks to technical improvements and the emergence of electronic health records (EHRs). Increased EHR use among providers, together with the demand for integration and connection technologies that improve clinical workflow, patient-care solutions, and overall hospital operations, will fuel significant growth in the blockchain-based IoT market for connected medical devices. The call for blockchain technology and IoT-based medical devices to strengthen health IT infrastructure and design and development techniques will only grow louder in the future. Blockchain technology will be essential to the future of cybersecurity, since it can be significantly strengthened by the adoption of IoT devices, e.g., for remote monitoring, reducing emergency-room waiting times, and tracking assets. This paper sheds light on the benefits of the blockchain-based IoT market.

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology
    • /
    • v.24 no.4
    • /
    • pp.294-304
    • /
    • 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT scans of the abdomen obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images at 40, 60, and 80 keV. A deep learning-based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-NET was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare the segmentation performance, in terms of the Dice similarity coefficient (DSC) and the difference ratio of the liver volume relative to the ground-truth volume, before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and the ground-truth volume. Results: The original CT images showed variable and poor segmentation performance. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC [original, 5.40%-91.27%] vs. [standardized, 93.16%-96.74%], all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006-0.964 vs. standardized, 0.990-0.998). Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation using CT images reconstructed with various methods. Deep learning-based CT image conversion may have the potential to improve the generalizability of the segmentation network.
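The Dice similarity coefficient (DSC) reported above measures voxel overlap between a predicted mask A and the ground-truth mask B as 2|A∩B|/(|A|+|B|). A minimal sketch, assuming binary NumPy masks rather than the commercial software's output format:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2*|A ∩ B| / (|A| + |B|), ranging from 0 to 1."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D example: two partially overlapping 4x4 square masks.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(dice_coefficient(a, b))  # 2*9 / (16+16) = 0.5625
```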

The Workflow for Computational Analysis of Single-cell RNA-sequencing Data (단일 세포 RNA 시퀀싱 데이터에 대한 컴퓨터 분석의 작업과정)

  • Sung-Hun WOO;Byung Chul JUNG
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.56 no.1
    • /
    • pp.10-20
    • /
    • 2024
  • RNA-sequencing (RNA-seq) is a technique used to profile global transcriptome patterns in samples. However, it can only report average gene expression across cells and does not address heterogeneity within a sample. Advances in single-cell RNA sequencing (scRNA-seq) technology have revolutionized our understanding of heterogeneity and the dynamics of gene expression at the single-cell level. For example, scRNA-seq allows us to identify the cell types in complex tissues, which can reveal how a cell population is altered by perturbations such as genetic modification. Since its initial introduction, scRNA-seq has rapidly become popular, leading to the development of a large number of bioinformatic tools. However, analyzing the large datasets generated by scRNA-seq requires a general understanding of dataset preprocessing and a variety of analytical techniques. Here, we present an overview of the workflow for analyzing scRNA-seq datasets. First, we describe preprocessing of the dataset, including quality control, normalization, and dimensionality reduction. Then, we introduce the downstream analyses provided by the most commonly used computational packages. This review aims to provide a workflow guideline for new researchers interested in this field.
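The preprocessing and downstream steps outlined above map onto a fairly standard pipeline. A minimal sketch using the widely used Scanpy package, with input path and all cutoffs chosen purely for illustration:

```python
import scanpy as sc

# Load a cells-by-genes count matrix (10x Genomics output assumed here).
adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")

# --- Quality control: drop low-quality cells and rarely detected genes.
sc.pp.filter_cells(adata, min_genes=200)   # cutoffs are illustrative
sc.pp.filter_genes(adata, min_cells=3)

# --- Normalization: per-cell depth scaling, then log transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# --- Dimensionality reduction on highly variable genes.
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.tl.pca(adata, n_comps=50)

# --- Downstream analysis: neighbour graph, clustering, 2D embedding.
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)          # cluster cells into putative cell types
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")
```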

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. For these services, reduced latency and high reliability are as critical as high data rates for real-time operation. 5G has therefore paved the way for service delivery with a peak rate of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, low delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straightness, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting indoor use; it is difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signalling from data-plane packets, must control the delay-sensitive tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since conventional centralized SDNs struggle to meet the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs thus need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even under worst-case conditions. In these SDN networks, where vehicles pass through small 5G cells very quickly, the information update cycle, the round-trip delay (RTD), and the SDN's data processing time are all highly correlated with the overall delay. Of these, RTD is not a significant factor, because the link is fast enough to keep it under 1 ms, but the information update cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, its correlation with the cell layer from which a vehicle should request relevant information according to the information flow.
For the simulation, since the 5G data rate is high enough, we assume that information supporting neighbouring vehicles reaches the car without errors. We further assume 5G small cells with radii of 50-250 m, and vehicle speeds of 30-200 km/h were considered, in order to examine the network architecture that minimizes the delay.
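As a back-of-the-envelope check on the simulation parameters above, the time a vehicle spends inside one small cell bounds how often its serving information must be refreshed. A minimal sketch (an illustration of the stated parameter ranges, not the authors' simulator):

```python
# Cell dwell time: how long a vehicle stays within one 5G small cell,
# approximated as driving a diameter-length chord at constant speed.
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return 2 * cell_radius_m / speed_ms

for radius in (50, 250):                # cell radii from the paper's range
    for speed in (30, 200):             # vehicle speeds from the paper's range
        t = dwell_time_s(radius, speed)
        print(f"radius {radius:>3} m, speed {speed:>3} km/h -> {t:6.2f} s in cell")
```

At the extreme (50 m cells, 200 km/h), a vehicle crosses a cell in under two seconds, which is why the information update cycle and SDN processing time dominate the delay budget.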

Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.147-157
    • /
    • 2013
  • With the advent of a digital environment that can be accessed anytime and anywhere over high-speed networks, the free distribution and use of digital content became possible. Ironically, this environment has given rise to a variety of copyright infringements, and product images used in online shopping malls are frequently pirated. Whether shopping mall images are creative works at all is a controversial issue. According to a Supreme Court decision in 2001, advertising photographs of ham products merely reproduce the appearance of the objects in order to convey product information and were therefore not recognized as creative expression; the photographer's losses were nevertheless acknowledged, and damages were estimated at the typical cost of an advertising photo shoot. According to a Seoul District Court precedent in 2003, if the photographer's personality and creativity appear in the selection of the subject, the composition of the set, the direction and amount of light, the camera angle, the shutter speed and shutter chance, and other methods of capturing, developing, and printing, then the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must do more than simply convey the state of the product; effort in which the photographer's personality and creativity can be recognized is required. Accordingly, the cost of producing mall images increases, and the need for copyright protection grows. Product images in online shopping malls have a very different configuration from general pictures such as portraits and landscape photos, so general image watermarking techniques cannot satisfy their requirements. Because the background of product images commonly used in shopping malls is white, black, or a grey-scale gradient, there is little space in which to embed a watermark, and these areas are very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed and a watermarking technique suited to them is proposed. The proposed technique divides a product image into smaller blocks, transforms each block by the DCT (Discrete Cosine Transform), and then inserts the watermark information by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes coefficients near block boundaries finely and coefficients in the centre of the block coarsely. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the safety of the algorithm, the blocks in which the watermark is embedded are selected at random, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal-to-Noise Ratio) of shopping mall images watermarked by the proposed algorithm is 40.7-48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked images are of high quality and the algorithm is robust to the JPEG compression generally used in online shopping malls, which typically employ QF higher than 90. The BER also remains 0 under a 40% change in size and a 40-degree rotation.
Because a pirated image is replicated from the original, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds. However, future work should enhance the robustness of the algorithm, because some robustness is lost after the mask process.
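The embedding step described above (block DCT plus coefficient quantization) can be illustrated with a stripped-down sketch. This is not the paper's algorithm — it omits the weighted quantization mask, random block selection, and turbo coding — but it shows the core quantization-index idea of snapping one mid-frequency DCT coefficient of an 8×8 block onto an even or odd lattice; the step size and coefficient position are illustrative assumptions:

```python
import numpy as np
from scipy.fftpack import dct, idct

STEP = 24          # quantization step; illustrative value
COEF = (3, 2)      # mid-frequency DCT coefficient used as carrier

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block, bit):
    """Quantization-index modulation: force one coefficient onto the
    even (bit 0) or odd (bit 1) quantizer lattice."""
    c = dct2(block.astype(float))
    q = np.round(c[COEF] / STEP)
    if int(q) % 2 != bit:
        q += 1
    c[COEF] = q * STEP
    return idct2(c)

def extract_bit(block):
    c = dct2(block.astype(float))
    return int(np.round(c[COEF] / STEP)) % 2

block = np.random.randint(0, 256, (8, 8))
marked = embed_bit(block, 1)
print(extract_bit(marked))   # -> 1
```

In practice, pixel rounding and JPEG compression perturb the coefficient, so the step size must be large enough to absorb that noise, which is the robustness-versus-quality trade-off the weighted mask in the paper is designed to manage.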

The Efficacy of Aspirin in Preventing the Recurrence of Colorectal Adenoma: a Renewed Meta-Analysis of Randomized Trials

  • Zhao, Tai-Yun;Tu, Jing;Wang, Yin;Cheng, Da-Wei;Gao, Xian-Kui;Luo, Hao;Yan, Bi-Chun;Xu, Xiao-Li;Zhang, Hong-Ling;Lu, Xing-Jun;Wang, Yao-Jun
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.17 no.5
    • /
    • pp.2711-2717
    • /
    • 2016
  • Background: By searching for eligible randomized controlled trials, we performed a renewed meta-analysis to assess the impact of aspirin on preventing the recurrence of colorectal adenoma. Materials and Methods: The Medline/PubMed, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), and Chinese biomedical literature service system (SinoMed) databases were searched for related randomized controlled trials up to April 2016. Three authors independently evaluated the quality of the studies and extracted the data. We used STATA software to analyze the data and investigate heterogeneity, and applied a fixed-effects model to pool the results. Results: Seven papers were included in the renewed meta-analysis; among these, two pairs were identified as reporting the same study populations, differing only in the duration of follow-up, so five distinct studies, including one Chinese paper, entered the analysis. Results were categorized by length of follow-up, population, and oral aspirin dose. The relative risk (RR) of adenoma in patients taking aspirin versus placebo was 0.73 (95% CI 0.55-0.98, P=0.039) at 1 year of follow-up and 0.84 (95% CI 0.72-0.98, P=0.484) beyond 1 year; for advanced adenoma, the RR was 0.68 (95% CI 0.49-0.94, P=0.582) at one year and 0.75 (95% CI 0.52-1.07, P=0.552) beyond one year. The white population could further be divided into two subgroups by length of follow-up. With follow-up of less than 3 years, the RRs of the two subgroups were 0.86 (95% CI 0.76-0.98, P=0.332, I²=0%) and 0.68 (95% CI 0.47-0.98, P=0.552, I²=64.6%); but with follow-up extended beyond 2 years, oral aspirin, regardless of dose, had no efficacy in preventing the recurrence of any adenoma in the white population (RR 0.86, 95% CI 0.71-1.05, P=0.302, I²=16.4%). Conclusions: This meta-analysis indicates that, regardless of dose, oral aspirin is associated with a remarkable decrease in the recurrence of any adenoma and of advanced adenomas in patients followed for 1 year. With follow-up extended beyond 1 year, oral aspirin remains effective in preventing the recurrence of any adenoma, but not of advanced adenoma. Grouping the included studies by ethnicity into yellow and white subgroups, oral aspirin, regardless of dose, showed significant efficacy in preventing the recurrence of any adenoma when follow-up was less than 3 years, but had no effect in the white population when follow-up exceeded 2 years.
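The fixed-effects pooling named in the Methods combines per-study effect sizes with inverse-variance weights on the log scale. A minimal sketch of the technique, with made-up study RRs purely for illustration (not values from this meta-analysis):

```python
import math

def fixed_effects_rr(studies):
    """Inverse-variance fixed-effects pooling of relative risks.
    `studies` is a list of (rr, ci_low, ci_high) tuples from 95% CIs."""
    num = den = 0.0
    for rr, lo, hi in studies:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * log_rr
        den += w
    se_pooled = math.sqrt(1.0 / den)
    mean = num / den
    ci = (math.exp(mean - 1.96 * se_pooled),
          math.exp(mean + 1.96 * se_pooled))
    return math.exp(mean), ci

# Hypothetical per-study RRs with 95% CIs, for illustration only.
print(fixed_effects_rr([(0.80, 0.60, 1.07),
                        (0.72, 0.50, 1.04),
                        (0.90, 0.70, 1.16)]))
```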

A Study on a Basic Model for GIS Audit, Based on Various Types of GIS Projects (GIS 사업유형을 고려한 GIS 감리의 기반 모델 연구)

  • Koh, Kwang-Chul;Kim, Eun-Hyung
    • Journal of Korea Spatial Information System Society
    • /
    • v.2 no.2 s.4
    • /
    • pp.5-23
    • /
    • 2000
  • Since 1995, national and local governments have competitively initiated many large GIS projects, and auditing these projects has become an important issue. So far, audits in the Information Technology (IT) area have tried to address the issue, but they have proved ineffective for successful GIS project management. Effective auditing is a critical element of project management. In order to establish a proper audit model for GIS projects and to promote auditing activities within them, this study constructs and tests two hypotheses: 1) a good GIS audit model requires identifying the unique characteristics of GIS projects, the audit items, and the scope of the audit; and 2) the scope of the audit needs to be classified according to the requirements of the tasks in the projects. To test the hypotheses, this study analyzes positive aspects of audits in IT and construction projects, clarifies the audit items in GIS projects by comparison with them, and classifies the scope of the GIS audit based on various types of GIS projects. As a result, five types of GIS audit are identified: (1) audit for project management, (2) audit focused on IT, (3) audit characterized by GIS technologies, (4) GIS database audit, and (5) consulting services for critical problems in the projects. In addition, four criteria for classifying GIS projects are suggested for the GIS audit: domain, scope, duration, and GIS application technologies. Notably, the GIS technology considered in this study includes GIS software, methodologies for GIS development, GIS databases, and quality control of GIS data, which are not usually reflected in existing studies of GIS audit. Because the GIS audit depends on the type of GIS project, the scope of the audit can be flexibly reconstructed according to project type; this is the key to effective and realistic auditing of future GIS projects. Strategies for effective GIS audit are also proposed in terms of GIS project management, goal establishment at each audit stage, documentation of the GIS audit, timing strategies for intensive GIS audit, and team structure design.
