• Title/Summary/Keyword: process control techniques

Solution-Processed Indium-Gallium Oxide Thin-Film Transistors for Power Electronic Applications (전력반도체 응용을 위한 용액 공정 인듐-갈륨 산화물 반도체 박막 트랜지스터의 성능과 안정성 향상 연구)

  • Se-Hyun Kim;Jeong Min Lee;Daniel Kofi Azati;Min-Kyu Kim;Yujin Jung;Kang-Jun Baeg
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers
    • /
    • v.37 no.4
    • /
    • pp.400-406
    • /
    • 2024
  • Next-generation wide-bandgap semiconductors such as SiC, GaN, and Ga2O3 are being considered as potential replacements for current silicon-based power devices because they combine high mobility with the prospect of large, high-quality wafers at moderate cost. In this study, we investigate gradual modulation of the chemical composition in multi-stacked metal oxide semiconductor thin films to enhance the performance and bias stability of thin-film transistors (TFTs). We demonstrate that adjusting the Ga ratio in the indium gallium oxide (IGO) semiconductor allows precise control over the threshold voltage and enhances device stability. Moreover, employing multiple deposition steps addresses an inherent limitation of solution-processed amorphous oxide semiconductor TFTs by mitigating the porosity induced by solvent evaporation. We anticipate that solution-processed IGO semiconductors with a Ga ratio exceeding 50% can be used to produce wide-bandgap oxide semiconductors, which hold promise for power electronic applications requiring high voltage and current capabilities.
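Threshold-voltage control of the kind described above is typically verified from measured transfer curves. A minimal sketch of the standard linear-extrapolation method in the saturation regime — with invented, noiseless data, not the authors' measurements:

```python
import numpy as np

# Hypothetical transfer curve for a saturation-regime TFT:
# Id = k * (Vg - Vth)^2 for Vg > Vth, so sqrt(Id) is linear in Vg.
k, vth_true = 1e-4, 1.5          # A/V^2, V (assumed values)
vg = np.linspace(0.0, 5.0, 51)
id_sat = k * np.clip(vg - vth_true, 0.0, None) ** 2

# Linear extrapolation: fit sqrt(Id) vs Vg over the strongly-on region
# and take the x-intercept of the fit line as the threshold voltage.
on = vg > vth_true + 1.0
slope, intercept = np.polyfit(vg[on], np.sqrt(id_sat[on]), 1)
vth_extracted = -intercept / slope
print(round(vth_extracted, 2))   # recovers ~1.5 V on this noiseless data
```

On real data, the fit window is usually chosen around the point of maximum transconductance rather than by a fixed offset from the (unknown) Vth.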

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of project success. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. Since no standard ontology development methodology exists, one of the existing methodologies had to be chosen. The most important considerations in selecting the methodology for GSO were whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it explains each development task sufficiently. We evaluated various ontology development methodologies against the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach for building an ontology at the conceptual level under a centralized development environment. It consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial for developing coherent and useful ontological models of very complex domains. In addition, Protégé-OWL was chosen as the development tool because it is supported by METHONTOLOGY and, thanks to its platform-independent characteristics, is widely used. Based on the researchers' GSO development experience, several issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on the drawbacks of METHONTOLOGY and discuss how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies; in practice, it is still difficult for such experts to develop a sophisticated ontology, especially when they have insufficient background knowledge related to it. Second, METHONTOLOGY does not include a "feasibility study" stage. This pre-development stage helps developers ensure not only that a planned ontology is necessary and valuable enough to justify an ontology-building project, but also that the project is likely to succeed.
Third, METHONTOLOGY excludes any explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers might share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration: it needs to explain how specific tasks are allocated to different developer groups and how their results are combined once the given jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques applied in the conceptualization stage; introducing methods for extracting concepts from multiple informal sources or for identifying relations may enhance ontology quality. Sixth, METHONTOLOGY provides no evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal one, nor does it guarantee that the outcomes of the conceptualization stage are fully reflected in the implementation stage. Seventh, METHONTOLOGY needs criteria for user evaluation of the constructed ontology in actual user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage and can therefore be considered a heavy methodology; adopting an agile approach would reinforce active communication among developers and reduce the documentation burden. Finally, the study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt, and several lessons learned from the development experience are discussed.
This study also offers insights for researchers who want to design a more advanced ontology development methodology.
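The classification service that motivated the OWL-DL choice can be illustrated with a toy sketch: infer every class of an individual by following the subClassOf hierarchy. Class and individual names here are hypothetical stand-ins, not taken from the actual GSO:

```python
# Toy illustration of the kind of classification an OWL-DL reasoner
# performs: infer all classes of an individual by walking the
# subClassOf chain upward. Names are invented, not from the real GSO.
subclass_of = {
    "RequiredCourse": "Course",
    "Course": "AcademicUnit",
}
asserted_type = {"Calculus1": "RequiredCourse"}

def classify(individual):
    """Return the asserted class plus all inferred superclasses."""
    types = [asserted_type[individual]]
    while types[-1] in subclass_of:
        types.append(subclass_of[types[-1]])
    return types

print(classify("Calculus1"))  # ['RequiredCourse', 'Course', 'AcademicUnit']
```

A real OWL-DL reasoner additionally checks consistency and handles multiple inheritance and property restrictions; this sketch only shows the transitive-subsumption core of the idea.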

Prediction of field failure rate using data mining in the Automotive semiconductor (데이터 마이닝 기법을 이용한 차량용 반도체의 불량률 예측 연구)

  • Yun, Gyungsik;Jung, Hee-Won;Park, Seungbum
    • Journal of Technology Innovation
    • /
    • v.26 no.3
    • /
    • pp.37-68
    • /
    • 2018
  • Since the 20th century, automobiles, the most common means of transportation, have been evolving as the use of electronic control devices and automotive semiconductors has increased dramatically. Automotive semiconductors are a key component of automotive electronic control devices and provide consumers with safety, fuel efficiency, and operational stability. For example, automotive semiconductors are used in engine control, electric motor management, transmission control units, hybrid vehicle control, start/stop systems, automotive radar and LIDAR, smart headlamps, head-up displays, and lane keeping systems. Semiconductors are thus applied to almost all the electronic control devices that make up an automobile, creating more value than a simple combination of mechanical devices could. Because automotive semiconductors must handle high data rates, microprocessor units are increasingly used instead of microcontroller units; for example, semiconductors based on ARM processors are used in telematics, audio/video multimedia, and navigation. Automotive semiconductors require high reliability, durability, and long-term supply, considering that automobiles remain in use for more than 10 years, and their reliability is directly linked to automotive safety. The semiconductor industry uses the JEDEC and AEC standards to evaluate the reliability of automotive semiconductors. In addition, product life expectancy is estimated at the early stages of development and mass production using the reliability test methods and criteria standardized in the automobile industry. However, these methods are limited in predicting the failure rate caused by parameters such as customers' varied conditions and durations of use. To overcome these limitations, much research has been done in academia and industry.
Among that work, research using data mining techniques has been carried out in many semiconductor fields, but it has not yet been applied to automotive semiconductors. In this regard, this study investigates the relationships in the data generated during the semiconductor assembly and package test processes using data mining techniques, and selects a data mining technique suitable for predicting the potential failure rate from customer failure data.
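A minimal sketch of the kind of analysis described: learning a rule on a package-test parameter that separates units which later failed in the field from those that did not. The feature, data, and single-threshold model are invented for illustration; the paper's actual variables and mining technique are not reproduced here:

```python
# Decision-stump sketch: scan for the test-parameter threshold that
# best separates field-failing units from passing ones. Data invented:
# (leakage_uA, field_fail) pairs from a hypothetical package test.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 0), (0.5, 0),
        (0.8, 1), (0.9, 1), (1.0, 1), (1.1, 1), (1.3, 1)]

def best_threshold(samples):
    """Try midpoints between sorted values; return (accuracy, cut)."""
    xs = sorted(x for x, _ in samples)
    best = (0.0, None)
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2
        acc = sum((x > t) == bool(y) for x, y in samples) / len(samples)
        if acc > best[0]:
            best = (acc, t)
    return best

acc, t = best_threshold(data)
print(acc, t)  # a perfect split exists between 0.5 and 0.8
```

Real field-failure prediction would use many test parameters and a richer model (trees, ensembles, etc.); the stump only shows the supervised "mine test data, predict field outcome" shape of the problem.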

The Manufacture of Absorbents and Removal Characteristics of VOCs by Essential Oil and Photocatalyst (식물정유와 광촉매를 이용한 흡수제 제조 및 VOCs 제거특성에 관한 연구)

  • Jeong, Hae-Eun;Yang, Kyeong-Soon;Kang, Min-Kyoung;Cho, Joon-Hyung;Oh, Kwang-Joong
    • Clean Technology
    • /
    • v.23 no.1
    • /
    • pp.54-63
    • /
    • 2017
  • Volatile organic compounds (VOCs) are widely used in both industrial and domestic activities and are among the most unpleasant, frequently complaint-rousing pollutants around the world. It is now necessary to research and develop alternative technologies that can overcome the problems of existing odor-control and VOC-elimination techniques. In this study, an essential-oil and photocatalytic process was applied to the removal of benzene and toluene, typical VOCs in petrochemical plants. Experiments were conducted on the selection of an appropriate essential oil, photodegradation, and hydroxyl-radical generation capacity. Removal efficiency and reaction rate were measured to select the type and concentration of essential oil; Hinoki cypress oil showed a removal efficiency of approximately 70% and a high reaction rate. The photolysis experiments showed that the decomposition efficiency of VOCs in the photocatalytic oxidation process increased considerably with increasing UV lamp power, and the conversion of VOCs increased with photocatalyst loading up to 0.1 g/L. Hydroxyl-radical measurements were performed to determine the radical generation capacity; high TiO2 concentration and lamp power produced more hydroxyl radicals. Removal efficiency and reaction rate were then measured using the essential oil together with photooxidation; removal efficiency increased with temperature and reaction time. The activation energy, calculated from the reaction-rate equation at various temperatures, was approximately 18 kJ/mol.
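An activation energy like this is commonly obtained from an Arrhenius fit of rate constants measured at several temperatures. A minimal sketch with invented rate constants generated to be consistent with roughly 18 kJ/mol (not the paper's data):

```python
import math

# Invented rate constants k at temperatures T, generated from the
# Arrhenius law k = A * exp(-Ea / (R * T)) with an assumed Ea.
R = 8.314            # J/(mol K)
Ea_true = 18_000.0   # J/mol (assumed)
A = 5.0e3
temps = [293.0, 303.0, 313.0, 323.0]
ks = [A * math.exp(-Ea_true / (R * T)) for T in temps]

# Linear fit of ln k vs 1/T: the slope equals -Ea / R.
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in ks]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
Ea_fit = -slope * R
print(round(Ea_fit / 1000, 1), "kJ/mol")  # recovers ~18.0 on this clean data
```

With measured rate constants the same fit applies; scatter in ln k vs 1/T then determines the uncertainty of the extracted Ea.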

Review of Microbially Mediated Smectite-illite Reaction (생지화학적 스멕타이트-일라이트 반응에 관한 고찰)

  • Kim, Jin-Wook
    • Economic and Environmental Geology
    • /
    • v.42 no.5
    • /
    • pp.395-401
    • /
    • 2009
  • The smectite-illite (S-I) reaction is a ubiquitous process in siliciclastic sedimentary environments. For the last four decades the importance of the smectite-to-illite reaction has been described in research papers and reports, as the degree of the reaction, termed "smectite illitization", is linked to hydrocarbon exploration and serves as a geochemical/petrophysical indicator. The S-I transformation, explained either by a layer-by-layer mechanism in the solid state or by a dissolution/reprecipitation process, has long been thought to be entirely abiotic and to require burial, heat, and time to proceed; however, few studies have taken bacterial activity into account. Recent laboratory studies provide evidence that structural ferric iron (Fe(III)) in clay minerals can be reduced by microbial activity, with microorganisms linking organic-matter oxidation to metal reduction and thereby driving the S-I transformation. In abiotic systems, elevated temperatures are typically used in laboratory experiments to accelerate the smectite-to-illite reaction and compensate for the long geological times available in nature. In biotic systems, by contrast, bacteria may catalyze the reaction, so elevated temperature or prolonged time may not be necessary. Despite the important role of microbes in the S-I reaction, the factors controlling the reaction mechanism have not yet been clearly identified. This paper therefore reviews the current status of studies on the microbially mediated smectite-to-illite reaction and the associated characterization techniques.

Machine Learning Model to Estimate Nitrogen Ion State using Training Data from Plasma Sheath Monitoring Sensor (Plasma Sheath Monitoring Sensor 데이터를 활용한 질소이온 상태예측 모형의 기계학습)

  • Jung, Hee-jin;Ryu, Jinseung;Jeong, Minjoong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.27-30
    • /
    • 2022
  • The plasma process, which has many advantages in efficiency and environmental terms over conventional process methods, is widely used in semiconductor manufacturing. The plasma sheath is a dark region observed between the plasma bulk and the surrounding chamber wall or the electrode. The Plasma Sheath Monitoring Sensor (PSMS) measures, in real time, the voltage difference between the plasma and the electrode and the RF power applied to the electrode. The PSMS data are therefore expected to correlate strongly with the state of the plasma in the chamber. In this study, a model for predicting the state of nitrogen ions in the plasma chamber is trained by deep learning using PSMS data. PSMS data measured in experiments with different power and pressure settings were used as training data, and the ratio, flux, and density of nitrogen ions measured in the plasma bulk and at the Si substrate were used as labels. The results of this study are expected to form the basis of artificial intelligence technology for plasma process optimization and real-time precise control in the future.
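The supervised setup described above can be sketched minimally: fit a model mapping PSMS-style features (here, hypothetical sheath voltage and RF power) to an ion-density label. Plain least squares stands in for the paper's deep learning model, and all values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented PSMS-style features: [sheath voltage (V), RF power (W)].
X = rng.uniform([50.0, 100.0], [150.0, 500.0], size=(200, 2))
# Invented linear ground truth for the ion-density label (m^-3 scale).
w_true, b_true = np.array([2.0e15, 5.0e14]), 1.0e16
y = X @ w_true + b_true

# Ordinary least squares with a bias column — a stand-in for the
# deep learning model; the real labels were N-ion ratio/flux/density.
Xb = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ coef
rel_err = np.max(np.abs(pred - y) / y)
print(rel_err < 1e-6)  # noiseless linear data is fit essentially exactly
```

The deep learning model in the study plays the same role as `coef` here, but can capture the nonlinear dependence of ion state on power and pressure that a linear fit cannot.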


Energy Balancing Distribution Cluster With Hierarchical Routing In Sensor Networks (계층적 라우팅 경로를 제공하는 에너지 균등분포 클러스터 센서 네트워크)

  • Mary Wu
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.3
    • /
    • pp.166-171
    • /
    • 2023
  • Efficient energy management is a very important factor in sensor networks with limited resources, and clustering techniques have been studied extensively in this respect. However, energy use can become concentrated at the cluster headers, and when the headers are concentrated in a specific area rather than evenly distributed over the whole field, the transmission distances of the cluster members may be large or very uneven. Transmission distance is directly related to energy consumption: when the energy of a specific node is quickly exhausted, the lifetime of the sensor network is shortened and the efficiency of the entire network is reduced. Balanced energy consumption across sensor nodes is therefore a very important research task. In this study, the factors behind balanced energy consumption by cluster headers and sensor nodes are analyzed, and a balanced-distribution clustering method is proposed in which cluster headers are spread evenly throughout the sensor network. The proposed method uses multi-hop routing to reduce the energy that sensor nodes spend on long-distance transmission. Existing multi-hop cluster studies set up a multi-hop cluster path through a two-step process of cluster setup followed by routing-path setup, whereas the proposed method establishes a hierarchical cluster routing path during cluster-header selection, minimizing the overhead of control messages.
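The even-distribution idea can be sketched with a simple grid-based header selection: partition the field into cells and pick the highest-energy node in each occupied cell as its header. This is a hypothetical stand-in for the paper's selection rule, with invented node data:

```python
import random

random.seed(1)

# Invented field: 60 nodes, each (x, y, residual_energy) in a 100x100 area.
nodes = [(random.uniform(0, 100), random.uniform(0, 100),
          random.uniform(0.2, 1.0)) for _ in range(60)]

def grid_cluster_heads(nodes, cell=25.0):
    """One header per occupied grid cell: the node with the most energy.
    Gridding forces headers to spread across the whole field, keeping
    members' transmission distances short and roughly even."""
    heads = {}
    for x, y, e in nodes:
        key = (int(x // cell), int(y // cell))
        if key not in heads or e > heads[key][2]:
            heads[key] = (x, y, e)
    return heads

heads = grid_cluster_heads(nodes)
# At most one header per cell of the 4x4 grid over the field.
print(len(heads) <= 16)
```

Building the hierarchical routing path during this same pass (e.g., linking each header to the nearest header in the cell row toward the sink) is what lets the proposed method avoid a separate route-setup message exchange.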

A 14b 200KS/s 0.87 mm² 1.2 mW 0.18 um CMOS Algorithmic A/D Converter (14b 200KS/s 0.87 mm² 1.2 mW 0.18 um CMOS 알고리즈믹 A/D 변환기)

  • Park, Yong-Hyun;Lee, Kyung-Hoon;Choi, Hee-Cheol;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.12 s.354
    • /
    • pp.65-73
    • /
    • 2006
  • This work presents a 14b 200 KS/s 0.87 mm² 1.2 mW 0.18 um CMOS algorithmic A/D converter (ADC) for intelligent sensor control systems and battery-powered applications simultaneously requiring high resolution, low power, and small area. The proposed algorithmic ADC, which does not use a conventional sample-and-hold amplifier, employs efficient switched-bias power-reduction techniques in the analog circuits, clock-selective sampling-capacitor switching in the multiplying D/A converter, and ultra-low-power on-chip current and voltage references to optimize sampling rate, resolution, power consumption, and chip area. The prototype ADC, implemented in a 0.18 um 1P6M CMOS process, shows a measured DNL and INL of maximum 0.98 LSB and 15.72 LSB, respectively. The ADC demonstrates a maximum SNDR and SFDR of 54 dB and 69 dB, respectively, and consumes 1.2 mW at 200 KS/s and 1.8 V. The occupied active die area is 0.87 mm².

Real-time monitoring for blending uniformity of trimebutine CR tablets using near-infrared and Raman spectroscopy (근적외분광분석법과 라만분광분석법을 이용한 트리메부틴말레인산 서방정의 혼합 과정 모니터링)

  • Woo, Young-Ah
    • Analytical Science and Technology
    • /
    • v.24 no.6
    • /
    • pp.519-526
    • /
    • 2011
  • Chemometrics using near-infrared (NIR) and Raman spectroscopy has found significant use in a variety of quantitative and qualitative analyses of pharmaceutical products in complex matrixes. Most pharmaceuticals can be measured directly with little or no sample preparation using these spectroscopic methods. During pharmaceutical manufacturing, analytical techniques requiring little or no sample preparation are critical for confirming quality, and it is of utmost importance to evaluate critical quality-related parameters during processing. Blending is conventionally confirmed by off-line determination of the active pharmaceutical ingredient (API) by methods such as high-performance liquid chromatography (HPLC) and UV spectroscopy, which are time-consuming and ineffective for real-time control. This study showed that NIR and Raman spectroscopy with principal component analysis (PCA) are very effective for blending process control and demonstrated the possibility of determining the blend-uniformity end-point of CR tablets using both techniques. Samples were acquired from six positions in a U-type blender from 0 to 30 min of blending. Using the collected NIR and Raman spectral data, PCA was applied to follow the uniformity of blending and determine the end-point. The variation in homogeneity of the six samples during blending was clearly observed, and the blend-uniformity end-point was successfully confirmed in the domain of principal component (PC) scores.
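The PCA end-point logic can be sketched as follows: project spectra from each time point onto the first principal component and declare uniformity when the spread of scores across sampling positions falls below a tolerance. The spectra here are synthetic; the real study used measured NIR/Raman data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "spectra": 6 sampling positions x 50 wavelengths per time.
# Early in blending, positions differ; by the end they converge.
base = rng.normal(size=50)
def spectra_at(t_frac):          # t_frac in [0, 1]; 1 = fully blended
    spread = 1.0 - t_frac
    return base + spread * 0.5 * rng.normal(size=(6, 50))

times = [0.0, 0.5, 1.0]
snapshots = [spectra_at(t) for t in times]

# PCA basis from all spectra: mean-center, then SVD.
all_spec = np.vstack(snapshots)
mean_spec = all_spec.mean(axis=0)
_, _, vt = np.linalg.svd(all_spec - mean_spec, full_matrices=False)

def score_spread(spec):
    """Std-dev of PC1 scores across the 6 positions: low = uniform."""
    scores = (spec - mean_spec) @ vt[0]
    return scores.std()

spreads = [score_spread(s) for s in snapshots]
print(spreads[-1] < spreads[0])  # scores converge as blending proceeds
```

In practice the end-point is called when the score spread stays below a validated threshold for several consecutive time points, rather than from a single snapshot.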

Validation of Nursing-sensitive Patient Outcomes;Focused on Knowledge outcomes (지식결과에 대한 타당성 검증;간호결과분류(NOC)에 기초하여)

  • Yom, Young-Hee;Lee, Kyu-Eun
    • Journal of Korean Academy of Nursing Administration
    • /
    • v.6 no.3
    • /
    • pp.357-374
    • /
    • 2000
  • The purpose of this study was to validate the knowledge outcomes included in the Nursing Outcomes Classification (NOC) developed by Johnson and Maas at the University of Iowa. A sample of 71 nurse experts working in university-affiliated hospitals participated. They were asked to rate indicators that exemplified the outcomes on a scale of 1 (indicator is not at all characteristic) to 5 (indicator is very characteristic). A questionnaire adapting Fehring's methodology was used to establish the content validity of the outcomes. The results were as follows: 1. All indicators were considered 'supporting' and none 'nonsupporting'. 2. 'Knowledge: Treatment Regimen' attained an OCV score of 0.816, the highest among the outcomes. 3. 'Knowledge: Energy Conservation' attained an OCV score of 0.748, the lowest among the outcomes. 4. 'Knowledge: Breastfeeding' attained an OCV score of 0.790; its highest-rated indicator was 'description of benefits of breastfeeding'. 5. 'Knowledge: Child Safety' attained an OCV score of 0.778; its highest-rated indicator was 'demonstration of first aid techniques'. 6. 'Knowledge: Diet' attained an OCV score of 0.779; its highest-rated indicator was 'performance of self-monitoring activities'. 7. 'Knowledge: Disease Process' attained an OCV score of 0.815; its highest-rated indicator was 'description of signs and symptoms'. 8. 'Knowledge: Health Behaviors' attained an OCV score of 0.800; its highest-rated indicator was 'description of safe use of prescription drugs'. 9. 'Knowledge: Health Resources' attained an OCV score of 0.794; its highest-rated indicator was 'description of need for follow-up care'. 10. 'Knowledge: Infection Control' attained an OCV score of 0.793; its highest-rated indicator was 'description of signs and symptoms'. 11. 
'Knowledge: Medication' attained an OCV score of 0.789; its highest-rated indicator was 'description of correct administration of medication'. 12. 'Knowledge: Personal Safety' attained an OCV score of 0.804; its highest-rated indicator was 'description of measures to reduce risk of accidental injury'. 13. 'Knowledge: Prescribed Activity' attained an OCV score of 0.810; its highest-rated indicator was 'proper performance of exercise'. 14. 'Knowledge: Substance Use Control' attained an OCV score of 0.809; its highest-rated indicator was 'description of signs of dependence during substance withdrawal'. 15. 'Knowledge: Treatment Procedure(s)' attained an OCV score of 0.795; its highest-rated indicator was 'description of appropriate action for complications'. 16. 'Knowledge: Treatment Regimen' attained an OCV score of 0.816; its highest-rated indicator was 'description of self-care responsibilities for emergency situations'. More outcomes need to be validated, and outcomes sensitive to Korean culture need to be developed.
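An OCV score in the style of Fehring's methodology can be sketched as follows: ratings of 1-5 are mapped to weights, each indicator receives the mean weight across experts, and the outcome's OCV is the mean over its indicators. The ratings below are invented, and the weight map is the commonly cited version of Fehring's scheme, assumed rather than taken from this paper:

```python
# Sketch of a content-validity (OCV) computation, Fehring style.
# Commonly cited weight map for 1-5 ratings (an assumption here):
WEIGHTS = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def indicator_score(ratings):
    """Mean weight of one indicator's expert ratings."""
    return sum(WEIGHTS[r] for r in ratings) / len(ratings)

def ocv(indicator_ratings):
    """Outcome content validity: mean over its indicator scores."""
    scores = [indicator_score(r) for r in indicator_ratings]
    return sum(scores) / len(scores)

# Two hypothetical indicators, each rated by four experts.
example = [[5, 4, 4, 5], [4, 3, 5, 4]]
print(ocv(example))  # 0.8125
```

Under Fehring's original proposal, indicators scoring below 0.5 would additionally be discarded as nonsupporting before averaging; in this study all indicators were supporting, so no discarding was needed.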
