• Title/Summary/Keyword: exact approach

Search results: 627

A Study on Automatic Vehicle Extraction within Drone Image Bounding Box Using Unsupervised SVM Classification Technique (무감독 SVM 분류 기법을 통한 드론 영상 경계 박스 내 차량 자동 추출 연구)

  • Junho Yeom
    • Land and Housing Review
    • /
    • v.14 no.4
    • /
    • pp.95-102
    • /
    • 2023
  • Numerous investigations have explored the integration of machine learning algorithms with high-resolution drone images for object detection in urban settings. However, a prevalent limitation in vehicle extraction studies involves the reliance on bounding boxes rather than instance segmentation. This limitation hinders the precise determination of vehicle direction and exact boundaries. Instance segmentation, while providing detailed object boundaries, necessitates labour-intensive labelling for individual objects, prompting the need for research on automating unsupervised instance segmentation in vehicle extraction. In this study, a novel approach was proposed for vehicle extraction utilizing unsupervised SVM classification applied to vehicle bounding boxes in drone images. The method aims to address the challenges associated with bounding box-based approaches and provide a more accurate representation of vehicle boundaries. The study showed promising results, demonstrating 89% accuracy in vehicle extraction. Notably, the proposed technique proved effective even when dealing with significant variations in spectral characteristics within the vehicles. This research contributes to advancing the field by offering a viable solution for automatic and unsupervised instance segmentation in the context of vehicle extraction from images.
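As an illustration of the kind of pipeline this abstract describes, here is a minimal sketch of unsupervised vehicle-pixel extraction inside one bounding box. The k-means pseudo-labelling step, the border heuristic, and the scikit-learn usage are assumptions for illustration, not the paper's exact method.

```python
# A minimal sketch of unsupervised, SVM-based pixel classification inside a
# vehicle bounding box: pseudo-labels from k-means stand in for manual labels.
# This is an assumed pipeline for illustration, not the paper's exact method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_vehicle_mask(box_rgb: np.ndarray) -> np.ndarray:
    """box_rgb: (H, W, 3) pixels cropped to one detected bounding box."""
    h, w, _ = box_rgb.shape
    pixels = box_rgb.reshape(-1, 3).astype(float)

    # 1) Unsupervised step: split pixels into two spectral clusters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

    # 2) Heuristic: the cluster dominating the box border is background.
    border = np.zeros((h, w), dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    bg = np.bincount(labels.reshape(h, w)[border]).argmax()

    # 3) Train an SVM on the pseudo-labels and predict a vehicle mask.
    svm = SVC(kernel="rbf").fit(pixels, labels != bg)
    return svm.predict(pixels).reshape(h, w)  # True = vehicle pixel
```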

Risk factors for orthodontic fixed retention failure: A retrospective controlled study

  • Kaat Verschueren;Amit Arvind Rajbhoj;Giacomo Begnoni;Guy Willems;Anna Verdonck;Maria Cadenas de Llano-Perula
    • The korean journal of orthodontics
    • /
    • v.53 no.6
    • /
    • pp.365-373
    • /
    • 2023
  • Objective: To investigate the potential correlation between fixed orthodontic retention failure and several patient- and treatment-related factors. Methods: Patients finishing treatment with fixed appliances between 2016 and 2017 were retrospectively included in this study. Those not showing fixed retention failure were considered the control group; patients with fixed retention failure were considered the experimental group. Additionally, patients with failure of fixed retainers between June 2019 and March 2021 were prospectively identified and included in the experimental group. The location of the first retention failure, sex, pretreatment dental occlusion, facial characteristics, posttreatment dental occlusion, treatment approach, and presence of oral habits were compared between groups before and after treatment separately, using Fisher's exact test and the Mann-Whitney U test. Results: In total, 206 patients with fixed retention failure were included: 169 with failure in the mandible and 74 in the maxilla. Significant correlations were observed between retention failure in the mandible and mandibular arch length discrepancy (P = 0.010), post-treatment growth pattern (P = 0.041), nail biting (P < 0.001), and abnormal tongue function (P = 0.002). Retention failure in the maxilla was more frequent in patients with IPR in the mandible (P = 0.005) and abnormal tongue function (P = 0.021). Conclusions: This study suggests a correlation between fixed retention failure and parafunctional habits such as nail biting and abnormal tongue function. Prospective studies with larger study populations could further confirm these results.
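The group comparisons above rest on two standard tests. A minimal scipy sketch follows; the counts and measurements are invented placeholders, not the study's data.

```python
# A minimal sketch of the two tests used to compare groups; the 2x2 counts
# and group measurements below are made up for illustration only.
from scipy.stats import fisher_exact, mannwhitneyu

# Categorical factor (e.g., nail biting) vs. retention failure: 2x2 table
#                failure   no failure
# habit present      30           10
# habit absent       40           60
odds_ratio, p_cat = fisher_exact([[30, 10], [40, 60]])

# Continuous factor (e.g., arch length discrepancy, mm) between groups
failure_group = [2.1, 3.4, 1.8, 4.0, 2.9]
control_group = [1.0, 0.8, 1.5, 2.0, 1.1]
stat, p_cont = mannwhitneyu(failure_group, control_group)

print(f"Fisher exact p = {p_cat:.4f}, Mann-Whitney p = {p_cont:.4f}")
```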

Effectiveness of BBV152 vaccine and ChAdOx1-S vaccine in preventing severe disease among vaccinated patients admitted to a designated COVID-19 hospital in India

  • Rajaraman Nivetha;Ramesh Anshul;Subbarayan Sarojini;Chinnaian Sivagurunathan;Chandrasekar Janaganbose Maikandaan
    • Clinical and Experimental Vaccine Research
    • /
    • v.13 no.1
    • /
    • pp.28-34
    • /
    • 2024
  • Purpose: Coronavirus disease 2019 (COVID-19) is a highly formidable disease. Globally, multiple vaccines have been developed to prevent and manage it. However, the periodic mutation of severe acute respiratory syndrome coronavirus 2 variants casts doubt on the effectiveness of commonly used vaccines in mitigating severe disease in the Indian population. This study aimed to assess the effectiveness of the BBV152 vaccine and the ChAdOx1-S vaccine in preventing severe forms of the disease. Materials and Methods: This retrospective study, based on hospital records, was conducted on 204 vaccinated COVID-19 patients using a consecutive sampling approach. Data on vaccination status, comorbidities, and computed tomography severity scores from high-resolution computed tomography lung reports were extracted from the medical records. Fisher's exact test and binomial logistic regression analysis were employed to assess the independent associations of various factors with the dependent variables. Results: Of the 204 records, 57.9% represented males, with a mean age of 61.5±9.8 years. Both vaccines demonstrated effective protection against severe illness (90.2%), with BBV152 offering slightly better protection than ChAdOx1-S. Male gender, partial vaccination, comorbid conditions, and the type of vaccine were identified as independent predictors of severe lung involvement. Conclusion: This study indicates that both vaccines were highly effective (90%) in preventing severe forms of the disease in fully vaccinated individuals. When comparing the two vaccines, BBV152 was slightly more effective than ChAdOx1-S in preventing severe COVID-19.
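A minimal sketch of the binomial logistic regression step described above; the dataframe columns and values are placeholders, not the study's records.

```python
# A minimal sketch of binomial logistic regression for severe lung involvement;
# the column names and rows are invented placeholders, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "severe":      [0, 1, 0, 1, 0, 1, 0, 0, 1, 1],  # CT severity above cutoff
    "male":        [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "full_dose":   [1, 0, 1, 0, 1, 1, 1, 0, 0, 1],  # fully vs. partially vaccinated
    "comorbidity": [0, 1, 0, 1, 1, 0, 0, 1, 1, 1],
    "bbv152":      [1, 0, 1, 0, 1, 1, 0, 1, 0, 0],  # vs. ChAdOx1-S
})

model = smf.logit("severe ~ male + full_dose + comorbidity + bbv152", data=df).fit()
print(model.summary())  # coefficients are log-odds of severe involvement
```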

Leveraging Reinforcement Learning for LLM-based Automated Software Vulnerability Repair (강화 학습을 활용한 대형 언어 모델 기반 자동 소프트웨어 취약점 패치 생성)

  • Woorim Han;Miseon Yu;Yunheung Paek
    • Annual Conference of KIPS
    • /
    • 2024.10a
    • /
    • pp.290-293
    • /
    • 2024
  • Software vulnerabilities impose a significant burden on developers, particularly in debugging and maintenance. Automated Software Vulnerability Repair has emerged as a promising solution to mitigate these challenges. Recent advances have introduced learning-based approaches that take vulnerable functions and their Common Weakness Enumeration (CWE) types as input and generate repaired functions as output. These approaches typically fine-tune large pre-trained language models to produce vulnerability patches, with performance evaluated using Exact Match (EM) and CodeBLEU metrics to assess similarity to ground-truth patches. However, current methods rely on teacher forcing during fine-tuning, where the model is trained with ground-truth inputs, but during inference, inputs are generated by the model itself, leading to exposure bias. Additionally, while models are trained using the cross-entropy loss function, they are evaluated using discrete, non-differentiable metrics, resulting in a mismatch between the training objective and the test objective. This mismatch can yield inconsistent results, as the model is not directly optimized to improve test-time performance metrics. To address these discrepancies, we propose the use of reinforcement learning (RL) to optimize patch generation. By directly using the CodeBLEU score as a reward signal during training, our approach encourages the generation of higher-quality patches that align more closely with evaluation metrics, thereby improving overall performance.
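A hedged sketch of the RL idea described above: sample a patch from the current policy, score it with CodeBLEU against the ground truth, and weight the sequence log-probability by the reward (a REINFORCE-style update). The base model name and the `codebleu_score` callable are placeholders, not the authors' training code.

```python
# A minimal REINFORCE-style sketch: sample a patch, score it with CodeBLEU,
# and scale the sequence negative log-likelihood by the reward.
# `codebleu_score` is a placeholder for a real CodeBLEU implementation.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")   # assumed base model
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

def rl_step(vulnerable_fn: str, ground_truth_patch: str, codebleu_score):
    inputs = tok(vulnerable_fn, return_tensors="pt", truncation=True)

    # Sample a candidate patch from the current policy (no teacher forcing).
    sampled = model.generate(**inputs, do_sample=True, max_new_tokens=256)
    patch = tok.decode(sampled[0], skip_special_tokens=True)
    reward = codebleu_score(patch, ground_truth_patch)  # in [0, 1]

    # out.loss is the mean token NLL of the sampled sequence, so minimizing
    # reward * out.loss maximizes reward-weighted log-likelihood (REINFORCE).
    out = model(**inputs, labels=sampled)
    loss = reward * out.loss

    opt.zero_grad()
    loss.backward()
    opt.step()
    return reward
```

In practice a reward baseline (e.g., the score of a greedy-decoded patch) is usually subtracted to reduce variance, and padding tokens are masked out of the loss.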

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieval of similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand results from an exact matching engine for querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we needed a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and for similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes. We use simple similarity algorithms for text retrieval, such as TF-IDF and Levenshtein edit distance, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method incorporating the TF-IDF measure and Levenshtein edit distance show better performance than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and derivatives of these measures, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance across these two experiments. For retrieving semantic processes, it is therefore better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. We generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository, suggest imprecise query algorithms that expand retrieval results from an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As for limitations and future work, we need to perform experiments with other datasets from other domains, and, since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
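A hedged sketch of three of the compared measures in their simplest form: TF-IDF cosine similarity over process names and descriptions, Levenshtein edit distance, and Jaccard/Dice overlap over sets of subcomponents. These are illustrative implementations only; the paper's actual measures also incorporate tree edit distance and richer process structure.

```python
# Minimal illustrative versions of three of the compared similarity measures.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_similarity(text_a: str, text_b: str) -> float:
    m = TfidfVectorizer().fit_transform([text_a, text_b])
    return float(cosine_similarity(m[0], m[1])[0, 0])

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def jaccard(parts_a: set, parts_b: set) -> float:
    return len(parts_a & parts_b) / len(parts_a | parts_b)

def dice(parts_a: set, parts_b: set) -> float:
    return 2 * len(parts_a & parts_b) / (len(parts_a) + len(parts_b))

print(tfidf_similarity("approve purchase order", "approve order for purchase"))
print(levenshtein("approve", "approval"))
print(jaccard({"receive", "check", "approve"}, {"receive", "approve", "ship"}))
```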

An Inquiry into Zhang Bo-duan's Writings (장백단(張伯端)의 저술고(著述考))

  • Kim, Kyeongsoo
    • The Journal of Korean Philosophical History
    • /
    • no.29
    • /
    • pp.255-280
    • /
    • 2010
  • Zhang Bo-duan wrote on internal alchemy (neidan) in Taoism. Although he lived in the mundane world, he sought the theory of Taoist internal alchemy, and after attaining enlightenment he explained that enlightenment was a state of rising above the world without needing to leave it. In later ages he was admired as the founder of the Southern school of Taoism, and his Oh Jin Peon, which concerns internal alchemy, was taken so seriously that more than 30 people annotated it up to the Ch'ing Empire. Its preface tells that at the age of 80 he met a realized person who transmitted to him the theory of internal alchemy, that he organized its main points, and that he then wrote Oh Jin Peon in 1075. Zhang Bo-duan is generally known to have left three books, Oh Jin Peon, Guem Dan Sa Baek Ja, and Cheung Hwa Bi Mun, and most critics have studied him on the basis of these works. However, it is not certain that all of them are his writings; on this point there has been no exact analysis, only simple belief. Accuracy and detail are indispensable in a philosophical approach: a study that does not verify its primary data is no more than a visionary project that soon collapses. The purpose of this study is therefore to add a detailed analysis and to establish an exact basis for the philosophical approach. Zhang Bo-duan became enlightened past the age of 80 and, in his old age, handed down to his students the secret teachings and the theory of internal alchemy in written form; his status soared not during his life but after his death. Because of his high status in internal-alchemy Taoism, interest in him grew, and some books were published that merely bore his name. In this study, I accept Oh Jin Peon as his authentic writing among the works of uncertain attribution, criticize it systematically, and classify its characteristics. I also demonstrate that Guem Dan Sa Baek Ja and Cheung Hwa Bi Mun could not be his authentic writings and may be forgeries by posterity, proposing some grounds for this argument.

Development of a Computation Code for the Verification of the Vulnerability Criteria for Surf-riding and Broaching Mode of IMO Second-Generation Intact Stability Criteria (IMO 2세대 선박 복원성 기준에 따른 서프라이딩/ 브로칭 취약성 기준 검증을 위한 계산 코드 개발)

  • Shin, Dong Min;Oh, Kyoung-gun;Moon, Byung Young
    • Journal of Ocean Engineering and Technology
    • /
    • v.33 no.6
    • /
    • pp.518-525
    • /
    • 2019
  • Recently, the Sub-Committee on Ship Design and Construction (SDC) of the IMO has actively discussed the technical issues associated with the second-generation intact stability criteria for ships. Generally, the second-generation intact stability criteria address five vulnerability modes of ship stability that occur when a ship navigates in rough seas. As waves pass the ship, dynamic roll motion affects ship stability and may lead to capsizing. The multi-tiered approach of the IMO second-generation intact stability criteria applies to all ships: each ship is checked for vulnerability to pure loss of stability, parametric roll, and surf-riding/broaching phenomena using the L1 (level 1) vulnerability criteria. If a possible vulnerability is detected, the L2 (level 2) criteria are used, followed by a direct stability assessment if necessary. In this study, we propose a new method to verify the criteria for the surf-riding/broaching mode of small ships. When the L1 vulnerability criterion, a relatively simple calculation based on the Froude number, is not satisfied, we present a calculation code for the L2 criterion that considers the hydrodynamics in waves and performs the more complicated calculation. The vulnerability criteria were then reviewed based on data for a given ship, and the value of C, the probability measure of the surf-riding/broaching vulnerability criterion, was calculated. In the new approach, the criterion value C is computed considering both the Froude-Krylov force and the diffraction force. The result shows lower values when both the Froude-Krylov force and the diffraction force are considered than when only the Froude-Krylov force is considered. This difference means that, for the dynamic roll motion of a ship, a more exact wave force needs to be considered in the second-generation intact stability criteria. This result will contribute to the basic ship design process according to the IMO Second-Generation Intact Stability Criteria.
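As an illustration of the L1 screening mentioned above, a minimal sketch follows. It assumes the commonly cited level-1 criterion that a ship is considered vulnerable to surf-riding/broaching only if its length is under 200 m and its Froude number exceeds 0.3; the paper's L2 calculation code is far more involved.

```python
# A minimal sketch of the level-1 (L1) surf-riding/broaching screening check,
# assuming the commonly cited criterion: vulnerable if L < 200 m and Fn > 0.3.
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(speed_ms: float, length_m: float) -> float:
    """Froude number Fn = V / sqrt(g * L)."""
    return speed_ms / math.sqrt(G * length_m)

def l1_surf_riding_vulnerable(speed_ms: float, length_m: float) -> bool:
    """Level-1 screening: vulnerable if L < 200 m and Fn > 0.3."""
    return length_m < 200.0 and froude_number(speed_ms, length_m) > 0.3

# Example: a 60 m ship at 12 knots (~6.17 m/s)
speed = 12 * 0.5144
print(froude_number(speed, 60.0))              # ~0.254
print(l1_surf_riding_vulnerable(speed, 60.0))  # False -> passes L1 screening
```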

The design of a single layer antireflection coating on the facet of buried channel waveguide devices using the angular spectrum method and field profiles obtained by the variational method (Variational 방법으로 구한 필드 분포와 Angular Spectrum 방법을 사용한 Buried채널 도파로 소자 단면의 단층 무반사 코팅 설계)

  • 김상택;김형주;김부균
    • Korean Journal of Optics and Photonics
    • /
    • v.13 no.1
    • /
    • pp.51-57
    • /
    • 2002
  • We have calculated the optimum refractive index and normalized thickness of a single layer antireflection coating on the facet of buried channel waveguides as a function of waveguide width for several waveguide depths, using the angular spectrum method with field profiles obtained by the effective index method (EIM) and the variational method (VM), respectively, and discussed the results. For large waveguide widths, the optimum parameters of a single layer antireflection coating obtained by the two methods are almost the same. However, as the waveguide width decreases, the parameters obtained by the VM approach those of a single layer antireflection coating between the cladding layer and air, while those obtained by the EIM do not, and the difference between the two sets of parameters becomes large. The tolerance maps of the quasi-TE and quasi-TM modes obtained by the VM for square waveguides are located in almost the same area regardless of refractive index contrast, while those obtained by the free space radiation mode (FSRM) method for a refractive index contrast of 10% are located in a different area. Thus, we think that the tolerance maps obtained by the VM are more exact than those obtained by the FSRM method.
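For context, here is a minimal plane-wave sketch of the quarter-wave design that the angular-spectrum calculation refines: at normal incidence, the ideal single-layer coating between a mode of effective index n_eff and air has index sqrt(n_eff) and quarter-wave thickness. The numeric values below are assumed examples, not the paper's results.

```python
# A minimal plane-wave sketch, not the paper's angular-spectrum method:
# ideal single-layer AR coating between a mode of effective index n_eff and air.
import math

def ar_coating(n_eff: float, wavelength_um: float, n_air: float = 1.0):
    """Quarter-wave single-layer AR coating at normal incidence."""
    n_c = math.sqrt(n_air * n_eff)    # optimum coating index
    d = wavelength_um / (4.0 * n_c)   # quarter-wave physical thickness
    return n_c, d

n_c, d = ar_coating(n_eff=3.2, wavelength_um=1.55)  # assumed example values
print(f"optimum index ~ {n_c:.3f}, thickness ~ {d:.4f} um")
```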

Target Identification for Metabolic Engineering: Incorporation of Metabolome and Transcriptome Strategies to Better Understand Metabolic Fluxes

  • Lindley, Nic
    • Proceedings of the Korean Society for Applied Microbiology Conference
    • /
    • 2004.06a
    • /
    • pp.60-61
    • /
    • 2004
  • Metabolic engineering is now a well established discipline, used extensively to determine and execute rational strategies of strain development to improve the performance of micro-organisms employed in industrial fermentations. The basic principle of this approach is that performance of the microbial catalyst should be adequately characterised metabolically so as to clearly identify the metabolic network constraints, thereby identifying the most probable targets for genetic engineering and the extent to which improvements can be realistically achieved. In order to harness correctly this potential, it is clear that the physiological analysis of each strain studied needs to be undertaken under conditions as close as possible to the physico-chemical environment in which the strain evolves within the full-scale process. Furthermore, this analysis needs to be undertaken throughout the entire fermentation so as to take into account the changing environment in an essentially dynamic situation in which metabolic stress is accentuated by the microbial activity itself, leading to increasingly important stress response at a metabolic level. All too often these industrial fermentation constraints are overlooked, leading to identification of targets whose validity within the industrial context is at best limited. Thus the conceptual error is linked to experimental design rather than inadequate methodology. New tools are becoming available which open up new possibilities in metabolic engineering and the characterisation of complex metabolic networks. Traditionally metabolic analysis was targeted towards pre-identified genes and their corresponding enzymatic activities within pre-selected metabolic pathways. Those pathways not included at the onset were intrinsically removed from the network, giving a fundamentally localised vision of pathway functionality. New tools from genome research extend this reductive approach so as to include the global characteristics of a given biological model, which can now be seen as an integrated functional unit rather than a specific sub-group of biochemical reactions, thereby facilitating the resolution of complex networks whose exact composition cannot be estimated at the onset. This global overview of whole cell physiology enables new targets to be identified which would classically not have been suspected previously. Of course, as with all powerful analytical tools, post-genomic technology must be used carefully so as to avoid expensive errors. This is not always the case, and the data obtained need to be examined carefully to avoid embarking on the study of artefacts due to poor understanding of cell biology. These basic developments and the underlying concepts will be illustrated with examples from the author's laboratory concerning the industrial production of commodity chemicals using a number of industrially important bacteria. The different levels of possible investigation and the extent to which the data can be extrapolated will be highlighted together with the extent to which realistic yield targets can be attained. Genetic engineering strategies and the performance of the resulting strains will be examined within the context of the prevailing experimental conditions encountered in the industrial fermentor. Examples used will include the production of amino acids, vitamins and polysaccharides. In each case metabolic constraints can be identified and the extent to which performance can be enhanced predicted.


Electromagnetic Traveltime Tomography with Wavefield Transformation (파동장 변환을 이용한 전자탐사 주시 토모그래피)

  • Lee, Tae-Jong;Suh, Jung-Hee;Shin, Chang-Soo
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.17-25
    • /
    • 1999
  • A traveltime tomography has been carried out by transforming electromagnetic data in the frequency domain to a wave-like domain. The transform uniquely relates a field satisfying a diffusion equation to an integral of the corresponding wavefield. However, the direct transform of frequency-domain magnetic fields to the wave-field domain is an ill-posed problem because the kernel of the integral transform is highly damped. In this study, instead of solving such an unstable problem, it is assumed that wave-fields in the transformed domain can be approximated by a sum of ray series and, for further simplicity, that reflected and refracted energy is weak enough compared with that of the direct wave to be neglected. The first arrival can then be approximated by calculating the traveltime of the direct wave only. These assumptions are valid only when the conductivity contrast between the background medium and the target anomalous body is low; hence this approach can be applied only to models with low conductivity contrast. To verify the algorithm, the traveltime calculated by this approach was compared with that of the direct transform method and with the exact traveltime, calculated analytically, for a homogeneous whole space. The error in the first arrival picked by this study was less than that of the direct transformation method, especially when the number of frequency samples was less than 10 or when the data were noisy. A layered earth model with varying conductivity contrasts and an inclined dyke model have been successfully imaged by applying nonlinear traveltime tomography in 30 iterations within three CPU minutes on an IBM Pentium Pro 200 MHz.
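As an illustration of the direct-wave approximation used above, a minimal sketch follows. It assumes the standard pseudo-velocity v = 1/sqrt(μ0σ) of the fictitious wave domain of the diffusion-to-wave transform, where the transformed time-like variable has units of √s; the study's actual first-arrival picking and nonlinear inversion are more involved.

```python
# A minimal sketch of the direct-wave traveltime used as a first-arrival
# approximation, assuming the standard pseudo-velocity v = 1 / sqrt(mu0 * sigma)
# of the fictitious wave domain. The resulting "traveltime" is the transformed
# variable q, with units of sqrt(seconds).
import math

MU0 = 4.0e-7 * math.pi  # magnetic permeability of free space, H/m

def pseudo_velocity(sigma_s_per_m: float) -> float:
    """Pseudo-velocity in the transformed (wave-like) domain."""
    return 1.0 / math.sqrt(MU0 * sigma_s_per_m)

def direct_traveltime(src, rcv, sigma_s_per_m: float) -> float:
    """Straight-ray pseudo-traveltime of the direct wave, homogeneous medium."""
    return math.dist(src, rcv) / pseudo_velocity(sigma_s_per_m)

# Example: 100 m offset in a 0.01 S/m (100 ohm-m) whole space
print(direct_traveltime((0.0, 0.0), (100.0, 0.0), 0.01))  # ~0.0112 sqrt(s)
```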
