• Title/Summary/Keyword: Semi-experimental technique (반실험적 기법)


Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

  • 박상현;원정임;윤지희;김상욱
    • Journal of KIISE:Databases / v.31 no.5 / pp.468-478 / 2004
  • It is essential in various application areas of data mining and bioinformatics to effectively retrieve the occurrences of interesting patterns from sequence databases. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g., a router). A typical query to find the temporal causal relationships among the network events is as follows: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUP and subsequently followed by TCPConnectionClose, under the constraint that the interval between the first two events is not larger than 20 seconds and the interval between the first and third events is not larger than 40 seconds.' This paper proposes an indexing method that enables such queries to be answered efficiently. Unlike previous methods that rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, which is proven to be efficient both in storage and search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the multi-dimensional spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of the event type Ei in W. Here, n is the number of event types that can occur in the system of interest. The 'curse of dimensionality' may arise when n is large; therefore, dimension selection or event type grouping is used to avoid this problem. The experimental results reveal that the proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
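A minimal sketch (not the paper's implementation) of the window-to-vector mapping described above: each sliding window of timestamped events is turned into an n-dimensional vector of intervals from the window's first event to the first occurrence of each event type. The event types, window contents, and the `window_vector` helper are illustrative, and the multi-dimensional spatial index (e.g., an R-tree) that would store these vectors is omitted.

```python
from collections import OrderedDict

def window_vector(window, event_types):
    """Map a sliding window of (timestamp, event_type) pairs to an
    n-dimensional vector whose i-th element is the gap between the
    window's first event and the first occurrence of event_types[i].
    A missing event type is left as None (a sentinel a real index
    would replace, e.g., with +infinity)."""
    window = sorted(window)                      # order events by timestamp
    t0 = window[0][0]                            # timestamp of the first event of W
    first_seen = OrderedDict((e, None) for e in event_types)
    for ts, etype in window:
        if etype in first_seen and first_seen[etype] is None:
            first_seen[etype] = ts - t0
    return [first_seen[e] for e in event_types]

# Hypothetical example with n = 3 event types.
events = [(100, "CiscoDCDLinkUp"), (112, "MLMStatusUP"), (135, "TCPConnectionClose")]
types = ["CiscoDCDLinkUp", "MLMStatusUP", "TCPConnectionClose"]
print(window_vector(events, types))              # -> [0, 12, 35]
```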

An adaptive digital watermark using the spatial masking (공간 마스킹을 이용한 적응적 디지털 워터 마크)

  • 김현태
    • Journal of the Korea Institute of Information Security & Cryptology / v.9 no.3 / pp.39-52 / 1999
  • In this paper, we propose a new watermarking technique for copyright protection of images. The proposed technique is based on a spatial masking method with a spatial scale parameter. In general, a watermark becomes more robust against various attacks, but degrades image quality, as its amplitude increases. On the other hand, it becomes perceptually less visible but more vulnerable to attacks as its amplitude decreases. Thus it is quite difficult to find a compromise between the robustness of the watermark and its visibility. We note that watermarking using the spread spectrum alone is not robust enough: some areas of the image can tolerate strong watermark signals, whereas large smooth areas cannot. In order to preserve the invisibility of the watermark in those smooth areas, the spatial masking characteristics of the HVS (Human Visual System) should be exploited: in textured regions the magnitude of the watermark can be large, whereas in smooth regions it should be small. As a result, the proposed watermarking algorithm is intended to satisfy both the robustness of the watermark and the quality of the image. The experimental results show that the proposed algorithm is robust to image deformations (such as compression, noise addition, image scaling, clipping, and collusion attacks).
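A minimal sketch of the HVS spatial-masking idea described above, assuming a block-wise local standard deviation as the texture measure and a simple amplitude scaling; the paper's actual masking model and spatial scale parameter are not reproduced here.

```python
import numpy as np

def embed_adaptive_watermark(image, watermark, base_alpha=2.0, block=8):
    """Embed a +/-1 spread-spectrum watermark whose local amplitude is
    scaled by a block-wise texture measure (local standard deviation):
    textured blocks receive a stronger watermark, smooth blocks a weaker one."""
    img = image.astype(np.float64)
    out = img.copy()
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y+block, x:x+block]
            weight = min(patch.std() / 32.0, 1.0)   # texture weight in [0, 1]
            out[y:y+block, x:x+block] += base_alpha * weight * watermark[y:y+block, x:x+block]
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage with a random image and a pseudo-random {-1, +1} pattern.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.choice([-1.0, 1.0], size=(64, 64))
marked = embed_adaptive_watermark(image, watermark)
```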

Assessment of Nitrogen Impaction on Watershed by Rice Cultivation (벼농사에서 질소유출이 수질에 미치는 영향평가)

  • Roh, Kee-An;Kim, Min-Kyeong;Lee, Byeong-Mo;Lee, Nam-Jong;Seo, Myung-Chul;Koh, Mun-Hwan
    • Korean Journal of Environmental Agriculture / v.24 no.3 / pp.270-279 / 2005
  • It is important to understand and evaluate the environmental impacts of rice cultivation for developing environmentally friendly agriculture, because rice is the main crop in Korea and rice cultivation can both pollute and purify water depending on environmental and cultivation conditions. This paper presents an evaluation of the impact of nitrogen from rice cultivation on the water system. A simple protocol was proposed to assess the potential amount of nitrogen outflow from paddy fields, considering most of the parameters that affect nitrogen outflow, such as the amount of fertilizer applied, the water balance, the quality and quantity of irrigation water, soil properties, nitrogen turnover in the soil, and the cultivation method. To develop the protocol, coefficients for the parameters affecting nitrogen turnover and outflow were obtained and summarized through comparison and analysis of all available related references, together with additional field and laboratory experiments. The potential amounts of nitrogen input and output by water in paddy fields were then estimated with the protocol for different combinations of the nitrogen content of irrigation water, the amount of fertilizer applied, and the irrigation method. Where the irrigation water was clean, with a nitrogen concentration below 1.0 mg $L^{-1}$, rice cultivation polluted the nearby watershed. At 2.0 mg $L^{-1}$ of nitrogen, 110 kg $ha^{-1}$ of nitrogen fertilizer, and flooding irrigation, rice cultivation acted as a water polluter, but it acted as a water purifier with intermittent irrigation. At 3.0 mg $L^{-1}$ of nitrogen and 110 kg $ha^{-1}$ of nitrogen fertilizer, rice cultivation acted as a water purifier, but it acted as a polluter with 120 kg $ha^{-1}$ of nitrogen. Where the irrigation water was polluted with over 6.0 mg $L^{-1}$ of nitrogen, rice cultivation was evaluated to have a water-purifying effect even when the nitrogen application was 120 kg $ha^{-1}$.
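For illustration only, the input-output accounting such a protocol implies can be written as a simple water-borne nitrogen balance; the symbols below are assumptions for exposition, not the paper's coefficients:

$$N_{\mathrm{in}} = 10^{-6}\, C_{\mathrm{irr}} V_{\mathrm{irr}}, \qquad N_{\mathrm{out}} = 10^{-6}\,(C_{\mathrm{drain}} V_{\mathrm{drain}} + C_{\mathrm{perc}} V_{\mathrm{perc}}), \qquad \Delta N = N_{\mathrm{out}} - N_{\mathrm{in}}$$

where the $C$ terms are nitrogen concentrations (mg $L^{-1}$) of the irrigation, surface drainage, and percolation water, the $V$ terms are the corresponding water volumes (L $ha^{-1}$), and the loads are in kg $ha^{-1}$; $\Delta N > 0$ indicates a net polluting effect on the receiving water and $\Delta N < 0$ a net purifying effect, with the outflow concentrations depending on the fertilizer rate, soil nitrogen turnover, and irrigation method.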

Continuous Removal of Organic Matters of Eutrophic Lake Using Freshwater Bivalves: Inter-specific and Intra-specific Differences (CROM를 이용한 부영양 저수지의 유기물 제어: 이매패의 종 특이성에 대하여)

  • Lee, Ju-Hwan;Hwang, Soon-Jin;Park, Sen-Gu;Hwang, Su-Ok;Yu, Chun-Man;Kim, Baik-Ho
    • Korean Journal of Ecology and Environment / v.42 no.3 / pp.350-363 / 2009
  • Inter- and intra-specific differences in removal activity, filtering rate (FR), and production of feces and pseudo-feces (PF) between two freshwater bivalves native to Korea, Anodonta woodiana Lea and Unio douglasiae Griffith et Pidgeon, were compared using a continuous removal of organic matters (CROM) system. The CROM system comprised five steps: input of polluted water, control of water flow, mussel treatment, analysis of water quality, and discharge of clean water. The study was designed to compare the organic-matter removal activity of A. woodiana and U. douglasiae, and the intra-specific differences by density and length within A. woodiana. The results clearly indicate that both mussels showed obvious removal of seston in the eutrophic reservoir. First, for similar shell lengths, there were no significant inter-specific differences in removal activity between A. woodiana and U. douglasiae (P>0.5), but the FR of U. douglasiae was relatively high owing to its low ash-free dry weight. Second, at the same animal density, the smaller mussels (1$\sim$2 years old) showed a higher filtering rate and production of feces and pseudo-feces, and less release of ammonium, than the larger mussels. Third, at the same biomass, the FR and PF of the mussels were higher in the low-density tank than in the high-density tank, while the concentrations of $NH_4$-N and $PO_4$-P released were similar to each other (P>0.5). Therefore, these results suggest that a CROM system using the young bivalve A. woodiana can be applied to control nuisance seston in eutrophic lake systems if a suitable species and density are selected. Additional pilot tests to optimize the age and density of domestic bivalves are needed before CROM operation can be generalized.
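For context, filtering (clearance) rates in a continuous flow-through setup such as CROM are commonly estimated from the drop in particle concentration across the mussel tank; a standard form, given here as background rather than as this study's exact method, is

$$FR = \frac{Q\,(C_{\mathrm{in}} - C_{\mathrm{out}})}{n\,C_{\mathrm{out}}}$$

where $Q$ is the flow rate through the tank, $n$ the number of mussels, and $C_{\mathrm{in}}$, $C_{\mathrm{out}}$ the seston (or chlorophyll-a) concentrations of the inflow and outflow water.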

Tegumental ultrastructure of juvenile and adult Echinostoma cinetorchis (이전고환극구흡충 유약충 및 성충의 표피 미세구조)

  • 이순형;전호승
    • Parasites, Hosts and Diseases / v.30 no.2 / pp.65-74 / 1992
  • The tegumental ultrastructure of juvenile and adult Echinostoma cinetorchis (Trematoda: Echinostomatidae) was observed by scanning electron microscopy. Three-day-old (juvenile) and 16-day-old (adult) worms were harvested from rats (Sprague-Dawley) experimentally fed metacercariae from the laboratory-infected freshwater snail Hippeutis cantori. The worms were fixed with 2.5% glutaraldehyde, processed routinely, and observed with an ISI Korea DS-130 scanning electron microscope. The 3-day-old juvenile worms were elongated and ventrally curved, with the ventral sucker near the anterior two-fifths of the body. The head crown bore 37∼38 collar spines arranged in a zigzag pattern. The lips of the oral and ventral suckers had 8 and 5 type II sensory papillae, respectively, and between the spines a few type III papillae were observed. Tongue- or spade-shaped spines were distributed anterior to the ventral sucker, whereas peg-like spines were distributed posteriorly and became sparse toward the posterior body. The spines of the dorsal surface were similar to those of the ventral surface. The 16-day-old adults were leaf-like, and their oral and ventral suckers were located very close to each other. The aspinous head crown and the oral and ventral suckers had type II and type III sensory papillae, and numerous type I papillae were distributed on the tegument anterior to the ventral sucker. Scale-like spines, with a broad base and round tip, were densely distributed on the tegument anterior to the ventral sucker but became sparse posteriorly. On the dorsal surface, spines were at times observed only on the anterior body. The results showed that the tegument of E. cinetorchis is similar to that of other echinostomes but differs in the number and arrangement of collar spines, the shape and distribution of tegumental spines, and the type and distribution of sensory papillae.


Video Assisted Thoracoscopic Sympathetic Ramus Clipping in Essential Hyperhidrosis -Cadaver Fitting Test and Clinical Application (다한증 환자에서 클립을 이용한 교감신경 교통가지 차단술 -사체 연구 및 임상적용-)

  • Lee, Sung-Ho;Cho, Seong-Joon;Jung, Jae-Seung;Kim, Tae-Sik;Son, Ho-Sung;Sun, Kyung;Kim, Kwang-Taik;Kim, Hyoung-Mook
    • Journal of Chest Surgery / v.36 no.8 / pp.595-601 / 2003
  • Background: It is known that the most effective treatment for hyperhidrosis is video-assisted thoracoscopic sympathetic nerve block. Postoperative compensatory hyperhidrosis and anhidrosis are the major factors that decrease postoperative satisfaction. Although sympathetic rami have been selectively blocked to decrease these complications, technical difficulties and excessive bleeding have prevented universal application. Material and Method: Three pre-fixative cadavers were dissected before clinical application. The bilateral sympathetic chains were exposed in the supine position after the whole anterior chest wall was removed. The second and third sympathetic rami were blocked using clips. After the sympathetic chains including the ganglia were removed, we evaluated the extent of the rami block. Twenty-five patients underwent the clinical application. Surgeries were performed in the semi-Fowler's position under general anesthesia with bilateral ventilation. A 2 mm thoracoscope and a 5 mm trocar were introduced through the third and fourth intercostal spaces, respectively. The second and third sympathetic rami were blocked using thoracoscopic clips. The postoperative complications, satisfaction, and compensatory hyperhidrosis rate were evaluated retrospectively. Result: The sympathetic rami were completely blocked in the cadaver dissection study. Hyperhidrosis symptoms improved in all patients without operative complications. The operative time was shorter than that of traditional ramicotomy. All patients except four were satisfied with the results for palmar hyperhidrosis. Compensatory hyperhidrosis occurred more severely in fifteen patients (60%); the remaining six patients had no complaints. Two patients had a minimal degree of gustatory hyperhidrosis. Conclusion: This operative method had a shorter operative time and a lower complication rate compared with traditional ramicotomy. The operative success rate was similar to that of traditional sympathicotomy, with a lower extent and occurrence rate of compensatory hyperhidrosis. Thoracic sympathetic rami clipping is suggested as an alternative method for the treatment of palmar hyperhidrosis.

Changes of Serum Fatty Acid and Carnitine Levels after Administration of L-carnitine in Rats (흰쥐에서 L-carnitine 투여 후에 혈청 지방산과 Carnitine의 농도 변화)

  • Lee, Jae Won;Hong, Young Mi
    • Clinical and Experimental Pediatrics / v.45 no.9 / pp.1075-1082 / 2002
  • Purpose: Obesity is known to be associated with hypertension, dyslipidemia, and fatty liver, and is thought to be associated with increased levels of free fatty acids. One strategy for decreasing free fatty acid levels is stimulation of hepatic lipid oxidation with L-carnitine. Carnitine is an essential cofactor for transporting long-chain fatty acids into mitochondria for oxidation. This study was designed to evaluate the changes in serum fatty acid and carnitine levels after exogenous injection of L-carnitine. Methods: Sprague-Dawley rats were divided into two groups. Group A was the control. Group B was given a daily intraperitoneal injection of L-carnitine (200 mg/kg) for two weeks. Serum lipid (total cholesterol, triglyceride, HDL-cholesterol, LDL-cholesterol) and fatty acid levels were analyzed on the first day of the first and second weeks after injection of L-carnitine. Total, free, and acyl carnitine levels were also measured by an enzymatic cycling technique at the same intervals. Results: There was no significant difference between the two groups in total cholesterol, HDL-cholesterol, or LDL-cholesterol levels before and after the administration of L-carnitine, but triglyceride levels were significantly decreased at the first week in group B compared with group A. Among the free fatty acids, linoleic acid showed a significant decrement (group A: $131.3{\pm}31.3mg/dL$ vs group B: $90.0{\pm}7.0mg/dL$) at the first week. Total, free, and acyl carnitine levels showed significant increments at all intervals, but only free carnitine showed significant increments according to the cumulative dose of carnitine. Conclusion: Plasma linoleic acid, a long-chain fatty acid, showed a significant decrement after administration of L-carnitine in the first week. This may suggest that L-carnitine can be used as an antilipidemic agent for obese patients. A prospective study of obese children is planned.

Changes in Immunogenicity of Preserved Aortic Allograft (보존된 동종동맥편 조직의 면역성 변화에 관한 연구)

  • 전예지;박영훈;강영선;최희숙;임창영
    • Journal of Chest Surgery / v.29 no.11 / pp.1173-1181 / 1996
  • The causes of degenerative changes in allograft cardiac valves are still not well known. Today's preserved allografts possess highly viable endothelial cells, and degeneration of allografts can be facilitated by immune reactions mediated by these viable cells. To test the antigenicity of the endothelial cells, pieces of aortic wall were obtained from fresh and cryopreserved rat allografts. Samples were taken prior to sterilization, after sterilization, and after 1, 2, 7, and 14 days of fresh preservation or cryopreservation. Endothelial cells were tested by immunohistochemical methods using monoclonal antibodies to MHC class I (MRC OX-18), class II (MRC OX-6), and ICAM-1 antigens. After transplantation of each group of aortic allografts into the subcutaneous layer of rats, the populations of CD4$^{+}$ and CD8$^{+}$ T cells were analyzed with monoclonal antibodies after 1, 2, 3, 4, 6, and 8 weeks. MHC class I expression was 23.95% before preservation and increased to 35.53~48.08% after preservation (p=0.0183). MHC class II expression was 9.72% before preservation and 10.13~13.39% after preservation (p=0.1599). ICAM-1 expression was 15.02% before preservation and increased to 19.85~35.33% after preservation (p=0.001). The proportion of CD4$^{+}$ T cells was 42.13% before transplantation; it was 49.23~36.8% after transplantation in the untreated group (p=0.955) and decreased to 29.56~32.80% in the other groups (p=0.0001~0.008). In all groups, the proportion of CD8$^{+}$ T cells increased from 25.57% before transplantation to 42.32~58.92% after transplantation (p=0.0001~0.0002). The CD4$^{+}$/CD8$^{+}$ ratio decreased from 1.22~2.28 at the first week to 0.47~0.95 at the eighth week (p=0.0001). The results revealed that the expression of MHC class I and ICAM-1 in the aortic allograft endothelium was increased whereas that of MHC class II was not changed, regardless of the preservation method. During the 8 weeks after transplantation of the aortic allografts, the CD4$^{+}$ T-cell subpopulations were unchanged or only slightly decreased, while the CD8$^{+}$ T-cell subpopulations progressively increased.


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have opened their internally developed AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help develop or use deep learning open source software in industry. This study therefore attempts to derive a strategy for adopting a deep learning open source framework through case studies. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, as well as several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support the research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by organizing developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, enterprise use of the deep learning framework, and enterprise-wide diffusion of the framework. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are cleared, the next two steps (enterprise use of the deep learning framework and enterprise-wide diffusion of the framework) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As the demand for nuclear power plant equipment continuously grows worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technologies is increasing dramatically, preadjudication (or prescreening, for short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying solely on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method uses TF-IDF, a widely used de facto standard for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term within the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, which is based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework of document analysis. The proposed algorithm considers both document-to-document similarity (${\alpha}$) and document-to-nuclear-system similarity (${\beta}$) in order to derive the final score (${\gamma}$) used to decide whether the presented case concerns strategic material. The final score (${\gamma}$) represents the similarity between the past cases and the new case. The score is induced not only by exploiting conventional TF-IDF but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered the most similar to the new case and provides them together with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system was conducted by developing a prototype and testing it with field data.
The system workflows and outcomes were verified by the field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can serve as a meaningful example of a knowledge service application.
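A minimal sketch of the kind of scoring the abstract describes, assuming cosine similarity over TF-IDF vectors for the document-to-document part (${\alpha}$) and a hypothetical weighted combination for the final score (${\gamma}$); the paper's nuclear-system similarity (${\beta}$) and its combination rule are not specified here, so the placeholder `betas` values and the 0.5 weight are illustrative.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build sparse TF-IDF vectors (term -> weight) for tokenized documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: (c / len(doc)) * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def final_score(alpha, beta, weight=0.5):
    """Hypothetical combination of document-to-document similarity (alpha)
    and document-to-nuclear-system similarity (beta) into gamma."""
    return weight * alpha + (1.0 - weight) * beta

# Hypothetical usage: rank past cases in the case base against a new case.
past_cases = [["reactor", "coolant", "pump", "export"],
              ["steam", "generator", "tube", "material"]]
new_case = ["reactor", "pump", "valve", "export"]
vectors = tf_idf_vectors(past_cases + [new_case])
alphas = [cosine(v, vectors[-1]) for v in vectors[:-1]]
betas = [0.8, 0.3]                       # placeholder nuclear-system similarities
ranked = sorted(((final_score(a, b), i) for i, (a, b) in enumerate(zip(alphas, betas))),
                reverse=True)
print(ranked)                            # most similar past cases first
```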