• Title/Summary/Keyword: Software Evaluation

Effect of Induction Chemotherapy on Flap Survival Rate in Microsurgery (종양수술전 화학요법이 미세수술시 피판생존율에 미치는 영향)

  • Kim, Uk-Kyu;Kim, Yong-Deok;Byun, June-Ho;Shin, Sang-Hun;Chung, In-Kyo
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.29 no.6 / pp.421-429 / 2003
  • Purpose: Neoadjuvant chemotherapy is commonly used as an adjunct treatment for cancer patients, but if microvascular tissue transfer is performed simultaneously with cancer resection surgery, the induction chemotherapy might affect the survival rate of the vascularized free flap. Our study focuses on the effect of induction chemotherapy on free flaps made on the white rat abdomen after injection of 5-FU. Materials and Methods: The experimental rats (24 in total) were divided into three groups: a normal control group, a group operated on 24 hours after 5-FU injection, and a group operated on 3 days after 5-FU injection. Inferior abdominal island flaps were made on the 8 Sprague-Dawley rats in each group and immediately rendered ischemic by clamping the supplying inferior epigastric artery and vein with a microvascular clamp for an hour, to simulate free-flap conditions; the flaps were then reperfused by releasing the clamps, repositioned, and sutured. Flap survival was assessed from digital photographs analyzed with a computer image program comparing flap luminosity. The rats were sacrificed at 3, 5, and 7 days after flap preparation, and flap specimens were taken and stained with H-E staining. Microscopic examination was performed at 200x and 400x magnification. Results: 1. Gross findings showed that healing was best in the normal group, followed by the 24-hour post-chemotherapy group and then the 3-day post-chemotherapy group. 2. The flap luminosity values used to evaluate flap survival showed the same ordering as the gross healing findings. 3. Microscopic findings of epidermal necrosis, inflammation, dermal fibrosis, vessel changes, and thinning of the fatty tissue layer were compared among the groups; the 3-day group showed remarkably poor healing compared with the other groups.
Conclusion: Chemotherapy agents affected the healing of the free flaps, but healing recovered spontaneously as the post-injection period lengthened. Contrary to our expectation, the 3-day group showed worse flap condition than the 24-hour group, which was attributed to the chemotherapy agent not yet having fully entered the systemic circulation at 24 hours. If the 3-day period in rats is taken to correspond to 3 weeks of healing in humans, this suggests that 3 weeks after chemotherapy is not an appropriate timing for microvascular tissue transfer; a delay longer than 3 weeks may be required in the clinical application of free tissue transfer.
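The luminosity-based survival assessment described above can be sketched in a few lines. This is a hypothetical illustration: the study's actual image program and viability threshold are not specified in the abstract, so the threshold and function below are assumptions.

```python
import numpy as np

def flap_survival_rate(luminosity, viable_threshold=100):
    """Estimate flap survival as the percentage of pixels whose
    luminosity exceeds a viability threshold (necrotic tissue
    darkens, lowering luminosity). Threshold is hypothetical."""
    pixels = np.asarray(luminosity, dtype=float)
    viable = pixels > viable_threshold
    return 100.0 * viable.sum() / pixels.size

# Synthetic 4x4 "flap": top half bright (viable), bottom half dark (necrotic)
flap = np.array([[180, 170, 160, 150],
                 [155, 165, 175, 185],
                 [40,  35,  50,  45],
                 [30,  55,  60,  20]])
survival = flap_survival_rate(flap)  # 8 of 16 pixels viable -> 50.0
```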

Analysis of the ESD and DAP According to the Change of the Cine Imaging Condition of Coronary Angiography and Usefulness of SNR and CNR of the Images: Focusing on the Change of Tube Current (관상동맥 조영술(Coronary Angiography)의 씨네(cine) 촬영조건 변화에 따른 입사표면선량(ESD)과 흡수선량(DAP) 및 영상의 SNR·CNR 유용성 분석: 관전류 변화를 중점으로)

  • Seo, Young Hyun;Song, Jong Nam
    • Journal of the Korean Society of Radiology / v.13 no.3 / pp.371-379 / 2019
  • The purpose of this study was to investigate the effect of changes in the X-ray exposure conditions on the entrance surface dose (ESD) and dose area product (DAP) in the cine imaging of coronary angiography (CAG), and to analyze the usefulness of the condition change for dose and image quality by measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the angiographic images with the ImageJ program. Data were collected from 33 patients (24 males and 9 females) who underwent CAG at our hospital from November 2017 to March 2018. For the imaging conditions and data acquisition, the ESD and DAP of group A, with a high tube current of 397.2 mA, and group B, with a low tube current of 370.7 mA, were retrospectively obtained for comparison and analysis. For the SNR and CNR measurements via ImageJ, the result values were derived by substituting the measured data into the corresponding formulas. The correlations among ESD, DAP, SNR, and CNR according to the change in imaging conditions were analyzed using the SPSS statistical software. The differences between groups A and B in ESD (A: 483.5±60.1; B: 464.4±39.9) and DAP (A: 84.3±10.7; B: 81.5±7) were not statistically significant (p>0.05). For the ImageJ-based measurements, the SNR (5.451±0.529) and CNR (0.411±0.0432) of the images obtained from left coronary artery (LCA) imaging in group B differed by 0.475±0.096 and -0.048±0.0, respectively, from the SNR (4.976±0.433) and CNR (0.459±0.0431) of the LCA in group A; however, these differences were not statistically significant (p>0.05).
In the right coronary artery (RCA) images, the SNR (4.731±0.773) and CNR (0.354±0.083) of group A were higher by 1.491±0.405 and 0.188±0.005, respectively, than the SNR (3.24±0.368) and CNR (0.166±0.033) of group B. Of these, the CNR difference was statistically significant (p<0.05). In the correlation analysis, statistically significant relationships were found between SNR (LCA) and CNR (LCA); SNR (RCA) and CNR (RCA); ESD and DAP; ESD and exposure time (sec); DAP and CNR (RCA); and DAP and exposure time (p<0.05). From the analyses of image quality and the effect of the dose change, the SNR and CNR increased in the RCA images of CAG obtained with the higher mA. Given that the CNR difference was statistically significant, increasing the mA in RCA imaging is believed to further improve the image contrast.
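The SNR and CNR measurements described above can be illustrated with the common ROI-based definitions. The abstract does not give the exact formulas the authors substituted their data into, so the definitions below (signal ROI mean over background SD, and ROI-to-background contrast over background SD) are an assumption:

```python
import numpy as np

def snr_cnr(roi, background):
    """Common ROI-based definitions (assumed, not from the paper):
    SNR = mean(ROI) / SD(background)
    CNR = |mean(ROI) - mean(background)| / SD(background)"""
    roi = np.asarray(roi, dtype=float)
    bg = np.asarray(background, dtype=float)
    sd_bg = bg.std(ddof=1)          # sample standard deviation of background
    snr = roi.mean() / sd_bg
    cnr = abs(roi.mean() - bg.mean()) / sd_bg
    return snr, cnr

vessel = [120, 118, 122, 121, 119]  # pixel values in a contrast-filled vessel ROI
tissue = [60, 62, 58, 61, 59]       # pixel values in a surrounding-tissue ROI
snr, cnr = snr_cnr(vessel, tissue)
```

In ImageJ, the ROI means and standard deviations would come from its Measure command on user-drawn regions.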

Genetic Diversity of Korean Native Chicken Populations in DAD-IS Database Using 25 Microsatellite Markers (초위성체 마커를 활용한 가축다양성정보시스템(DAD-IS) 등재 재래닭 집단의 유전적 다양성 분석)

  • Roh, Hee-Jong;Kim, Kwan-Woo;Lee, Jinwook;Jeon, Dayeon;Kim, Seung-Chang;Ko, Yeoung-Gyu;Mun, Seong-Sil;Lee, Hyun-Jung;Lee, Jun-Heon;Oh, Dong-Yep;Byeon, Jae-Hyun;Cho, Chang-Yeon
    • Korean Journal of Poultry Science / v.46 no.2 / pp.65-75 / 2019
  • A number of Korean native chicken (KNC) populations are registered in the FAO (Food and Agriculture Organization) DAD-IS (Domestic Animal Diversity Information System, http://www.fao.org/dad-is), but there is a lack of scientific evidence that they are unique populations of Korea. For this reason, this study was conducted to demonstrate the KNC populations' uniqueness using 25 microsatellite markers. A total of 548 chickens from 11 KNC populations (KNG, KNB, KNR, KNW, KNY, KNO, HIC, HYD, HBC, JJC, LTC) and 7 introduced populations (ARA: Araucana; RRC and RRD: Rhode Island Red C and D; LGF and LGK: White Leghorn F and K; COS and COH: Cornish brown and Cornish black) were used. Allele sizes per locus were determined using GeneMapper Software (v 5.0). A total of 195 alleles were observed, ranging from 3 to 14 per locus. The mean number of alleles (MNA), expected heterozygosity (Hexp), observed heterozygosity (Hobs), and polymorphism information content (PIC) within populations were highest in KNY (4.60, 0.627, 0.648, and 0.563, respectively) and lowest in HYD (1.84, 0.297, 0.286, and 0.236, respectively). Genetic uniformity analysis suggested 15 clusters (ΔK = 66.22). Excluding JJC, each population was grouped into a certain cluster with high genetic uniformity; JJC was not grouped into a single cluster but was distributed across cluster 2 (44.3%), cluster 3 (17.7%), and cluster 8 (19.1%). The results of this study provide a scientific basis for the KNC populations' uniqueness and can be used as basic data for the genetic evaluation and management of KNC breeds.
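The within-population diversity statistics reported above (expected heterozygosity and PIC) have standard definitions in terms of allele frequencies at a locus; a minimal sketch with hypothetical frequencies:

```python
import numpy as np
from itertools import combinations

def expected_heterozygosity(freqs):
    """He = 1 - sum(p_i^2) over the allele frequencies at a locus."""
    p = np.asarray(freqs, dtype=float)
    return 1.0 - np.sum(p ** 2)

def pic(freqs):
    """Polymorphism information content (Botstein et al., 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2"""
    p = np.asarray(freqs, dtype=float)
    return (1.0 - np.sum(p ** 2)
            - sum(2 * p[i] ** 2 * p[j] ** 2
                  for i, j in combinations(range(len(p)), 2)))

# Hypothetical locus with three alleles
freqs = [0.5, 0.3, 0.2]
he = expected_heterozygosity(freqs)  # 1 - (0.25 + 0.09 + 0.04) = 0.62
```

PIC is always slightly below He because it also discounts uninformative matings between identical heterozygotes.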

A Study on Estimating Shear Strength of Continuum Rock Slope (연속체 암반비탈면의 강도정수 산정 연구)

  • Kim, Hyung-Min;Lee, Su-gon;Lee, Byok-Kyu;Woo, Jae-Gyung;Hur, Ik;Lee, Jun-Ki
    • Journal of the Korean Geotechnical Society / v.35 no.5 / pp.5-19 / 2019
  • Considering the natural phenomenon in which steep slopes (65°~85°) consisting of rock mass remain stable for decades, slopes steeper than 1:0.5 (the standard slope angle for blast rock) may be applied under similar geotechnical conditions at the design and initial construction stages. In analysing the stability of a good-to-fair continuum rock slope that can be designed as a steep slope, a general method of estimating rock mass strength properties from a design-practice perspective is required; practical and generalized engineering methods of determining the properties of a rock mass are important for such slopes. The Generalized Hoek-Brown (H-B) failure criterion and the GSI (Geological Strength Index), revised and supplemented by Hoek et al. (2002), are assessed as a rock mass characterization system that fully accounts for the effects of discontinuities, and are widely used to calculate the equivalent Mohr-Coulomb (M-C) shear strength (by balancing the areas) according to stress changes. Under this concept, the equivalent M-C shear strength is calculated for a given confining stress range; on a slope, however, the equivalent shear strength changes sensitively with the maximum confining stress (σ'3max, or normal stress), making it difficult to use in practical design. In this study, a method of estimating the strength properties (an iso-angle division method) that can be applied universally within the maximum confining stress range for a good-to-fair continuum rock mass slope is proposed by applying the H-B failure criterion.
To assess the validity and applicability of the proposed shear-strength estimation method (A), study slopes were selected by rock type (igneous, metamorphic, sedimentary) among steep slopes near existing working design sites, and the method was compared with the equivalent M-C shear strength (balancing the areas) proposed by Hoek. The equivalent M-C shear strengths of the balancing-the-areas method and the iso-angle division method were estimated using the RocLab program (geotechnical property calculation software based on the H-B failure criterion (2002)), with laboratory rock triaxial compression test data from the existing working design sites and face mapping of discontinuities on the rock slopes of the study area as inputs. The equivalent M-C shear strength calculated by the balancing-the-areas method showed interlinked, very large or very small cohesion and internal friction angles (the latter generally greater than 45°). The equivalent M-C shear strength of the iso-angle division method lies between the equivalent M-C shear properties of the balancing-the-areas method, with internal friction angles in the range of 30° to 42°. We compared the shear strength (A) of the iso-angle division method at the study area with the shear strength (B) of existing working design sites of similar or equal RMR grade. The applicability of the proposed iso-angle division method was evaluated indirectly through stability analyses (limit equilibrium analysis and finite element analysis) performed with these strength properties. The difference between shear strengths A and B is about 10%. The limit equilibrium results (in wet conditions) gave Fs(A) = 14.08~58.22 (average 32.9) and Fs(B) = 18.39~60.04 (average 32.2), which were similar for the same rock types.
The finite element analyses gave displacement (A) = 0.13~0.65 mm (average 0.27 mm) and displacement (B) = 0.14~1.07 mm (average 0.37 mm). Using the GSI and the Hoek-Brown failure criterion, significant results were obtained in the applicability evaluation. Therefore, the strength properties of rock mass estimated by the iso-angle division method can be applied as practical shear strengths.
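The Generalized Hoek-Brown criterion underlying both strength-estimation methods can be sketched directly from the Hoek et al. (2002) equations; the GSI, mi, and stress values below are hypothetical inputs, not data from the study:

```python
import math

def hoek_brown_params(gsi, mi, D=0.0):
    """Generalized Hoek-Brown material constants (Hoek et al., 2002):
    mb = mi * exp((GSI - 100) / (28 - 14D))
    s  = exp((GSI - 100) / (9 - 3D))
    a  = 1/2 + (exp(-GSI/15) - exp(-20/3)) / 6
    D is the disturbance factor (0 = undisturbed)."""
    mb = mi * math.exp((gsi - 100) / (28 - 14 * D))
    s = math.exp((gsi - 100) / (9 - 3 * D))
    a = 0.5 + (math.exp(-gsi / 15) - math.exp(-20 / 3)) / 6
    return mb, s, a

def sigma1_at_failure(sigma3, sigma_ci, mb, s, a):
    """Major principal stress at failure:
    sigma1' = sigma3' + sigma_ci * (mb * sigma3'/sigma_ci + s)^a"""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Hypothetical fair-quality rock mass: GSI = 50, mi = 10, sigma_ci = 50 MPa
mb, s, a = hoek_brown_params(gsi=50, mi=10)
strength = sigma1_at_failure(sigma3=1.0, sigma_ci=50.0, mb=mb, s=s, a=a)
```

Equivalent M-C parameters (whether by area balancing or an iso-angle division) are then fitted to this curve over the chosen σ'3 range, which is why the choice of σ'3max matters so much in practice.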

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on increasing speed to meet growing data demands, from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To provide those services, in addition to high data rates, reduced latency and high reliability are critical for real-time services. Thus, 5G targets service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high data rates thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable in delay.
Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs therefore need to be partitioned at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor, because the link is fast enough to keep it under 1 ms, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in a self-driving emergency linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; that is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assume 5G small cells with radii of 50~250 m and vehicle speeds of 30~200 km/h, in order to examine the network architecture that minimizes delay.
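The relation between cell size, vehicle speed, and the time available for information updates can be sketched with simple kinematics. The diameter-crossing assumption below is an illustration of the cell-dwell-time budget, not the paper's simulation model:

```python
def cell_dwell_time_s(radius_m, speed_kmh):
    """Time a vehicle spends inside a small cell, assuming it crosses
    the cell along its diameter at constant speed: t = 2r / v."""
    speed_ms = speed_kmh * 1000 / 3600  # km/h -> m/s
    return 2 * radius_m / speed_ms

# Worst case from the paper's assumed ranges:
# fastest vehicle (200 km/h) through the smallest cell (50 m radius)
dwell = cell_dwell_time_s(50, 200)  # seconds available per cell
```

Even in this worst case the vehicle stays in a cell for well over a second, so handover and SDN update cycles must be much shorter than the dwell time to keep emergency information current.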

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are context-relevant (to place and time, for example). Information technologies continue to mature, providing greatly improved performance: sensor networks and intelligent software can now obtain context data, the cornerstone for providing personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased, and those concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused on only a small subset of those characteristics, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limitations of users' knowledge and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of the technology even with scenarios, pictures, flash animations, and the like. A survey that assumes its participants have sufficient experience or understanding of the technologies it presents may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., considering only location information as context data). A better understanding of information privacy concern in context-aware personalized services is needed. Hence, the purpose of this paper is to identify a generic set of factors for information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics in order to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce the rank-ordered list; it lends itself well to obtaining a set of universal information privacy concern factors and their priorities. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. In the first round only, experts were treated as individuals rather than as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then asked to evaluate each sub-factor's suitability against the corresponding main factor, to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents were asked to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence: a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful, from the information privacy perspective, for future context-aware personalized service development. The final factors differ from those proposed in other studies in the following respects. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since context-aware services differ in their technical characteristics from other services, we selected the specific characteristics with the higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Last, for each factor, it correlated the level of importance with professionals' opinions as to the extent of users' privacy concerns. The traditional questionnaire method was not selected because of users' near-total lack of understanding and experience with the new technology of context-aware personalized services. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensor network as the most important technological characteristics of context-aware personalized services.
In creating context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, along with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Following up on the sub-factor evaluation, additional studies would be needed on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services under the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase service success rates and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
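The concordance analysis mentioned above is commonly performed with Kendall's coefficient of concordance W, which ranges from 0 (no agreement) to 1 (perfect agreement). A minimal sketch with hypothetical expert rankings (assuming no tied ranks; the abstract does not name the specific statistic used):

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an (m raters x n items)
    matrix of ranks with no ties:
    W = 12 * S / (m^2 * (n^3 - n)),
    where S is the sum of squared deviations of the item rank sums
    from their mean."""
    r = np.asarray(ranks, dtype=float)
    m, n = r.shape
    rank_sums = r.sum(axis=0)
    s = np.sum((rank_sums - rank_sums.mean()) ** 2)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking four privacy-concern factors
ranks = np.array([[1, 2, 3, 4],
                  [1, 3, 2, 4],
                  [2, 1, 3, 4]])
w = kendalls_w(ranks)  # high W -> strong agreement, stop iterating rounds
```

In a Delphi study, W is recomputed after each round; rounds stop once W is high enough to indicate stable group consensus.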