• Title/Summary/Keyword: making techniques

Search Results: 1,309

BuddyMirror: A Smart Mirror Supporting Image-Making Service (BuddyMirror: 이미지 메이킹 서비스를 지원하는 스마트 미러)

  • Jo, Yeon-Jeong;Sim, Chae-Lin;Jang, Hyo-Won;Jin, Jae-Hwan;Lee, Myung-Joon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.9 no.5 / pp.811-821 / 2019
  • Image making is a way for a person to improve the various factors through which one expresses oneself, such as appearance, impression, and confidence. Traditionally, people use a mirror or a camera to shape their own image or to practice presentations. Recently, as smart mirrors have come into wide use in various fields, attempts to use them as image-making tools in place of ordinary mirrors have become frequent. A smart mirror is considered a suitable tool for providing image-making services because, in addition to a mirror's main advantage of easy accessibility, various devices such as a camera and a microphone can be attached to it. In this paper, we present BuddyMirror, a smart mirror software system that provides image-making services to users, together with a dedicated mobile app for flexibly running the mirror software. Interworking with the dedicated mobile app, BuddyMirror provides presentation, mock interview, and styling services at the user's request. We also describe the techniques developed for implementing and activating each of the new services as a module of MagicMirror, a widely used smart mirror development platform. The developed mobile app enables users to deliver presentations to BuddyMirror or to download the recorded video for image-making services.

Partial Discharge Detection of High Voltage Switchgear Using an Ultra High Frequency Sensor

  • Shin, Jong-Yeol;Lee, Young-Sang;Hong, Jin-Woong
    • Transactions on Electrical and Electronic Materials / v.14 no.4 / pp.211-215 / 2013
  • Partial discharge diagnosis techniques using ultra high frequencies do not affect load operation, because there is no interruption of power; consequently, these techniques are popular among preventive diagnosis methods. This measurement technique was first applied to GIS and has since been tested on extra-high-voltage switchboards. The technique makes measurement easy in the live state and is not affected by noise, and by analyzing the causes of faults it makes risk analysis possible. It is reported that the analysis data and the evaluation of the risk level are improved, especially for fault location, and that measurement of ultra high frequency (UHF) partial discharge on live wires in industrial switchgear is remarkable. Partial discharge diagnosis techniques using the UHF sensor have recently been highlighted and have been verified by application to GIS, becoming one of the new diagnosis techniques for various power equipment. Diagnosis using a UHF sensor is easy to perform, and waveform analysis is already standardized thanks to numerous past case experiments. This technique is under active research and development, and commercialization is becoming a reality. It can also determine the occurrence and type of partial discharge through live-wire diagnosis of ultra-high-voltage switchgear. Data measured with the UHF partial discharge techniques for ultra-high-voltage switchgear were obtained from 200 sites in Gumi, Yeosu, Taiwan, and China's semiconductor plants, and partial discharge signals were found at 15 of those sites. It was confirmed that the partial discharge signals were eliminated by tightening junction bolts and reinforcing cable head insulation at 8 sites, making it possible to prevent service interruption. It was also confirmed that UHF partial discharge measurement is a practical preventive diagnosis method at actual industrial sites. The measured field data and their use in risk assessment of the live-wire status of power equipment form a valuable database for future improvements.

Automated Composition System of Web Services by Semantic and Workflow based Hybrid Techniques (시맨틱과 워크플로우 혼합기법에 의한 자동화된 웹 서비스 조합시스템)

  • Lee, Yong-Ju
    • The KIPS Transactions: Part D / v.14D no.2 / pp.265-272 / 2007
  • In this paper, we implement an automated web service composition system using hybrid techniques that merge the benefits of BPEL with the advantages of OWL-S. BPEL techniques have practical capabilities that fulfill the needs of the business environment, such as fault handling and transaction management. However, their main shortcoming is the static composition approach, in which service selection and flow management are done a priori and manually. In contrast, OWL-S techniques use ontologies to describe web service functionality in a machine-understandable form, making it possible to discover and integrate web services automatically. This allows compatible web services, possibly discovered at run time, to be integrated dynamically into the composition schema. However, the development of these approaches is still in its infancy and has been largely detached from the BPEL composition effort. In this work, we describe the design of the SemanticBPEL architecture, a hybrid system of BPEL4WS and OWL-S, and propose algorithms for web service search and integration. SemanticBPEL has been implemented based on open source tools. The proposed system is compared with existing BPEL systems by functional analysis; these comparisons show that our system outperforms the existing systems.

An Intelligent Framework for Test Case Prioritization Using Evolutionary Algorithm

  • Dobuneh, Mojtaba Raeisi Nejad;Jawawi, Dayang N.A.
    • Journal of Internet Computing and Services / v.17 no.5 / pp.89-95 / 2016
  • In the software testing domain, test case prioritization techniques improve the performance of regression testing by arranging test cases so that the maximum number of available faults is detected in a shorter time. User sessions and cookies are unique features of web applications that are useful in regression testing, because they carry precious information about the application state before and after changes to the software code. This approach is in fact a user-session-based technique: user sessions are collected from the database on the server side, and test cases are generated by small changes to the configuration of the user session data. The main challenges in existing techniques are the effectiveness of the Average Percentage of Faults Detected (APFD) rate and the time constraint, so this paper develops an intelligent framework with three new techniques that group and manage test cases by applying useful criteria for test case prioritization in web application regression testing. In the dynamic weighting approach, the hybrid criteria set an initial weight for each criterion, and the optimal weights of the combined criteria are determined by evolutionary algorithms; the weight of each criterion reflects its effectiveness at finding faults in the application. In this research, priority is given to test cases based on the most common HTTP requests in pages, the length of HTTP request chains, and the dependency of HTTP requests. To verify the new techniques, faults were seeded in a subject application, and the prioritization criteria were then applied to the test cases to compare their APFD rates with those of existing techniques.
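The APFD rate the abstract compares against has a standard closed form: for n test cases and m faults, APFD = 1 - (TF1 + ... + TFm)/(n·m) + 1/(2n), where TFi is the 1-based position of the first test that reveals fault i. A minimal sketch of that computation (the test ids and fault matrix below are hypothetical, and every fault is assumed to be detected by at least one test):

```python
def apfd(ordering, fault_matrix):
    """Average Percentage of Faults Detected for a test-case ordering.

    ordering: list of test-case ids in execution order.
    fault_matrix: dict mapping test id -> set of faults it detects.
    Assumes every fault is detected by at least one test in the ordering.
    """
    faults = set().union(*fault_matrix.values())
    n, m = len(ordering), len(faults)
    # TF_i: 1-based position of the first test that reveals fault i
    first_pos = {}
    for pos, test in enumerate(ordering, start=1):
        for f in fault_matrix.get(test, ()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in faults) / (n * m) + 1 / (2 * n)
```

An ordering that reveals faults earlier scores closer to 1, which is what a prioritization criterion is tuned to maximize.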

An autonomous synchronized switch damping on inductance and negative capacitance for piezoelectric broadband vibration suppression

  • Qureshi, Ehtesham Mustafa;Shen, Xing;Chang, Lulu
    • International Journal of Aeronautical and Space Sciences / v.17 no.4 / pp.501-517 / 2016
  • Synchronized switch damping (SSD) is a structural vibration control technique in which a piezoelectric patch attached to or embedded in the structure is connected to or disconnected from a shunt circuit in order to dissipate the vibration energy of the host structure. The switching is performed by a digital signal processor (DSP), which detects the displacement extrema and generates a command to operate the switch in synchrony with the structure's motion. Recently, autonomous SSD techniques have emerged in which the work of the DSP is taken over by a low-pass filter, making the whole system autonomous, or self-powered. The control performance of previous autonomous SSD techniques relied heavily on the electrical quality factor of the shunt circuit, which limited their damping performance. To reduce the influence of the electrical quality factor on the damping performance, a new autonomous SSD technique is proposed in this paper in which a negative capacitor is used along with the inductor in the shunt circuit. A negative capacitor alone could be used instead of an inductor, but without an inductor the high current generated during the switching process saturates the negative capacitor. The presence of an inductor in the shunt circuit limits the current supplied by the negative capacitance, thus improving the damping performance. To judge the control performance of the proposed autonomous SSDNCI, a comparison is made among the autonomous SSDI, autonomous SSDNC, and autonomous SSDNCI techniques for the control of an aluminum cantilever beam subjected to both single-mode and multimode excitation. A negative capacitance value slightly greater than the piezoelectric patch capacitance gave the optimum damping results. Experimental results confirmed the effectiveness of the proposed autonomous SSDNCI technique compared to the previous techniques. Some limitations and drawbacks of the proposed technique are also discussed.

Various Types and Manufacturing Techniques of Nano and Micro Capsules for Nanofood

  • Kim, Dong-Myong
    • Journal of Dairy Science and Biotechnology / v.24 no.1 / pp.53-63 / 2006
  • Nano and micro capsulation (NM capsulation) involves the incorporation of nanofood materials, enzymes, cells, or other materials into small capsules. Since Kim D. M. (2001) first introduced a new type of food named nanofood, meaning nanotechnology for food, applications of this technique in food have increased; the encapsulated materials can be protected from moisture, heat, or other extreme conditions, thus enhancing their stability and maintaining viability. NM capsules for nanofood are also utilized to mask odours or tastes. Various techniques are employed to form the capsules, including spray drying, spray chilling or spray cooling, extrusion coating, fluidized bed coating, liposome entrapment, coacervation, inclusion complexation, centrifugal extrusion, and rotational suspension separation. Each of these techniques is discussed in this review. A wide variety of nanofood ingredients are NM capsulated: flavouring agents, acids, bases, artificial sweeteners, colourants, preservatives, leavening agents, antioxidants, agents with undesirable flavours and odours, and nutrients, among others. The use of NM capsulation for sweeteners such as aspartame and for flavors in chewing gum is well known. Fats, starches, dextrins, alginates, proteins, and lipid materials can be employed as encapsulating materials. Various methods exist to release the ingredients from the capsules: release can be site-specific, stage-specific, or signaled by changes in pH, temperature, irradiation, or osmotic shock. For NM capsulation in nanofood, the most common method is solvent-activated release; the addition of water to dry beverages or cake mixes is an example. Liposomes have been applied in cheese-making, and their use in the preparation of nanofood emulsions such as spreads, margarine, and mayonnaise is a developing area. The most recent developments include NM capsulation for nanofood in the areas of controlled release, carrier materials, preparation methods, and sweetener immobilization. New markets are being developed, and current research is underway to reduce the high production costs and address the lack of food-grade materials.


Efficient and Low-Cost Metal Revision Techniques for Post Silicon Repair

  • Lee, Sungchul;Shin, Hyunchul
    • JSTS: Journal of Semiconductor Technology and Science / v.14 no.3 / pp.322-330 / 2014
  • New effective techniques to repair "small" design errors in integrated circuits are presented. As semiconductor chip complexity increases and design periods tighten, errors frequently remain in a fabricated chip, making revisions necessary. A full mask revision significantly increases cost and time-to-market. However, since many "small" errors can be repaired by modifying several connections among the circuit blocks and spare cells, errors can frequently be repaired by revising only the metal layers. Metal-only revision takes significantly less time and costs far less than a full mask revision: a mask revision costs multiple millions of dollars, while a metal revision costs tens of thousands of dollars. In our research, new techniques are developed to further reduce the number of metal layers to be revised. Specifically, we partition the circuit blocks with higher error probabilities and extend the terminals of the signals crossing the partition boundaries to preselected metal repair layers. This partitioning and pin extension to repair layers can significantly improve repairability by revising only the metal repair layers. Since pin extension may slightly increase delay, this method can be used for non-timing-critical parts of circuits. Experimental results on academic and industrial circuits show that revising two metal layers can repair many "small" errors at low cost and with short revision time. On average, when 11.64% of the spare cell area and 24.72% of the extended pins are added to the original circuits, 83.74% of single errors (and 72.22% of double errors) can be corrected by two-metal-layer revision. We also suggest methods for using our repair techniques with commercial vendor tools.

Credit Score Modelling in A Two-Phase Mathematical Programming (두 단계 수리계획 접근법에 의한 신용평점 모델)

  • Sung Chang Sup;Lee Sung Wook
    • Proceedings of the Korean Operations and Management Science Society Conference / 2002.05a / pp.1044-1051 / 2002
  • This paper proposes a two-phase mathematical programming approach that considers the classification gap to solve the proposed credit scoring problem and complement known theoretical shortcomings. Specifically, using a linear programming (LP) approach, phase 1 makes the associated decisions, such as granting or denying credit to applicants, or seeking additional information before making the final decision. Phase 2 uses a mixed-integer programming (MIP) approach to find a cut-off value that minimizes the misclassification penalty (cost) incurred by granting credit to a 'bad' loan applicant or denying credit to a 'good' one. This approach is expected to find appropriate classification scores and a cut-off value with respect to deviation and misclassification cost, respectively. Statistical discriminant analysis methods have commonly been used for classification problems in credit scoring. In recent years, much theoretical research has focused on applying mathematical programming techniques to discriminant problems. It has been reported that mathematical programming techniques can outperform statistical discriminant techniques in some applications, although they may suffer from theoretical shortcomings of their own. The performance of the proposed two-phase approach is evaluated with line data and loan applicant data, by comparison with three other approaches, Fisher's linear discriminant function, logistic regression, and existing mathematical programming approaches, which serve as performance benchmarks. The evaluation results show that the proposed two-phase mathematical programming approach outperforms the aforementioned statistical approaches, and in some cases it marginally outperforms both the statistical approaches and the other existing mathematical programming approaches.
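The phase-2 idea, choosing a cut-off that minimizes total misclassification cost, can be illustrated with a minimal sketch. Here the MIP is replaced by brute-force enumeration over observed scores, and the score values and cost weights are hypothetical, not taken from the paper:

```python
def best_cutoff(good_scores, bad_scores, deny_good_cost=1.0, grant_bad_cost=5.0):
    """Pick the cut-off minimizing total misclassification cost.

    Applicants scoring >= cutoff are granted credit. Denying a 'good'
    applicant costs deny_good_cost; granting a 'bad' one costs
    grant_bad_cost. Brute-force stand-in for the paper's MIP phase.
    """
    candidates = sorted(set(good_scores) | set(bad_scores))
    best = None
    for c in candidates:
        cost = (sum(deny_good_cost for s in good_scores if s < c)
                + sum(grant_bad_cost for s in bad_scores if s >= c))
        if best is None or cost < best[1]:
            best = (c, cost)
    return best  # (cutoff, total cost)
```

Asymmetric costs matter here: because granting a bad loan is weighted more heavily than denying a good one, the chosen cut-off shifts upward relative to a plain accuracy-maximizing threshold.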


Transplantation Immunology from the Historical Perspective (이식면역학의 역사적 고찰)

  • Park, Chung-Gyu
    • IMMUNE NETWORK / v.4 no.1 / pp.1-6 / 2004
  • Transplantation may be the only way to cure end-stage organ failure involving the heart, lung, liver, kidney, or pancreas. The replacement of parts of the body that have lost their function to damage or trauma has long been a dream of humankind. Human history is replete with chimeras, from sphinxes to mermaids, making one wonder whether the ancients might actually have dreamed of what is now called 'xenotransplantation'. In the 20th century, the transplantation of organs and tissues to cure disease became a clinical reality. Developments in the fields of surgical techniques, physiology, and immunology contributed to successful transplantation in humans. At the center of successful transplantation lies progress in understanding the cellular and molecular biology of the immune system, which led to the development of immunosuppressive drugs and the concept of immunological tolerance. The inevitable side effects of immunosuppressive drugs, including infection and cancer, have forced the search for alternative approaches alongside the development of new immunosuppressive agents. Among these alternatives, inducing a state of immunologic tolerance would be the most promising and the most generally applicable future therapy. Recent reports documenting long-term graft survival without immunosuppression suggest that tolerance-based therapies may become a clinical reality. Last year saw the epoch-making success of overcoming hyperacute rejection in porcine-to-primate xenotransplantation, which will bring porcine-to-human xenotransplantation closer to clinical reality. In this review, I summarize the development of transplantation immunology from a historical perspective.

Matrix-Based Intelligent Inference Algorithm Based On the Extended AND-OR Graph

  • Lee, Kun-Chang;Cho, Hyung-Rae
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.10a / pp.121-130 / 1999
  • The objective of this paper is to apply Extended AND-OR Graph (EAOG) techniques to extract knowledge from a specific problem domain and to support analysis in complicated decision-making areas. Expert systems use expertise about a specific domain as their primary source for solving problems in that domain. However, such expertise is complicated as well as uncertain, because most knowledge is expressed as causal relationships between concepts or variables. Therefore, if expert systems are to provide more intelligent support for decision making in complicated problems, they should be equipped with a real-time inference mechanism. We develop two kinds of EAOG-driven inference mechanisms: (1) EAOG-based forward chaining and (2) EAOG-based backward chaining. The EAOG method possesses the following three characteristics. 1. Real-time inference: the EAOG inference mechanism is suitable for real-time inference because its computation is based on matrix operations. 2. Matrix operation: all the subjective knowledge is delineated in matrix form, so that the inference process can proceed by matrix operations, which are computationally efficient. 3. Bi-directional inference: the traditional inference method of expert systems is based on either forward chaining or backward chaining, which are mutually exclusive in terms of logical process and computational efficiency; the proposed EAOG inference mechanism, however, is generically bi-directional without loss of speed or efficiency.
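The matrix-based inference idea can be sketched as repeated vector-matrix products over a 0/1 causal adjacency matrix: a concept becomes known once any known concept implies it, and the process stops at a fixpoint. This is a minimal OR-semantics sketch only; the paper's EAOG additionally distinguishes AND and OR nodes, which this illustration omits:

```python
def forward_chain(adj, facts):
    """Matrix-based forward chaining: adj[i][j] = 1 means concept i implies j.

    facts is a 0/1 vector of initially known concepts. Truth is propagated
    by repeated vector-matrix products until a fixpoint is reached.
    """
    n = len(adj)
    known = list(facts)
    while True:
        # one vector-matrix product: concept j becomes derivable if any
        # currently known concept i has an edge i -> j
        prod = [min(1, sum(known[i] * adj[i][j] for i in range(n)))
                for j in range(n)]
        derived = [max(k, p) for k, p in zip(known, prod)]
        if derived == known:
            return known
        known = derived
```

Backward chaining over the same knowledge can reuse this routine on the transposed matrix, which is one way the bi-directional property falls out of the matrix formulation.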
