• Title/Summary/Keyword: field study


Study on 3D Printer Suitable for Character Merchandise Production Training (캐릭터 상품 제작 교육에 적합한 3D프린터 연구)

  • Kwon, Dong-Hyun
    • Cartoon and Animation Studies
    • /
    • s.41
    • /
    • pp.455-486
    • /
    • 2015
  • The 3D printing technology, which began with a patent registration in 1986, attracted little attention outside a handful of companies because awareness of it was low at the time. Today, however, as those patents expire after twenty years, the price of 3D printers has fallen to a level that allows purchase by individuals, and the technology is drawing attention from industry as well as the general public, who naturally accept 3D content and share 3D data thanks to the generalization of online information exchange and improved computer performance. Because 3D printers produce from digital data, which can be transmitted, revised, and supplemented digitally, and can manufacture without molds, they may bring groundbreaking change to manufacturing processes, and the same effect can be expected in the character merchandise sector. 3D printers are becoming a necessity in the production of the figure merchandise that stands at the forefront of the recently prominent kidult culture. Considering the expected demand from industrial sites related to character merchandise, and the lower prices resulting from patent expiration and technology sharing, it seems essential to introduce courses that train people to use 3D printers, thereby expanding the opportunities and sectors of employment and cultivating manpower capable of more creative work. However, the information that can be obtained when seeking to introduce 3D printers into school education is limited.
Because the press and information media mention only general points, such as the growth of the industry or the promising future value of 3D printers, academic research likewise remains at an introductory level: analyzing data on market size, surveying the applicable scope in industry, or introducing the printing technologies. This lack of information causes problems at the education site. If the technology is introduced without first examining practical information, such as a comparison of strengths and weaknesses, it can be used only after trial and error, incurring time and opportunity costs. In particular, if expensive equipment turns out not to suit the characteristics of school education, the losses are significant. This research targeted general users without a technical background rather than specialists. Instead of merely introducing 3D printing technology as previous work has done, it compared the strengths and weaknesses of the representative technologies and analyzed the problems and precautions in their use; it explains what features a 3D printer should have when used in education on figure merchandise development as optional cultural content in cartoon-related departments, and seeks to provide information of practical help for future education using 3D printers. In the main body, the technologies are classified from a new perspective, namely support (buttress) method, material type, two-dimensional printing method, and three-dimensional printing method. This classification was chosen to allow easy comparison of the practical problems encountered in use.
In conclusion, the most suitable 3D printer was the FDM type, which is comparatively cheap and has low repair, maintenance, and material costs, although its output quality is somewhat lower; it was additionally recommended to choose a supplier that provides good technical support.

Effect of Additional 1 hour T-piece Trial on Weaning Outcome to the Patients at Minimum Pressure Support (최소압력보조 수준에서 추가적 1시간 T-piece 시도가 이탈에 미치는 영향)

  • Hong, Sang-Bum;Koh, Youn-Suck;Lim, Chae-Man;Ann, Jong-Jun;Park, Wann;Shim, Tae-Son;Lee, Sang-Do;Kim, Woo-Sung;Kim, Dong-Soon;Kim, Won-Dong
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.4
    • /
    • pp.813-822
    • /
    • 1998
  • Background: Extubation is recommended to be performed at minimum pressure support (PSmin) during pressure support ventilation (PSV). In practice, physicians sometimes add a 1 hr T-piece trial at PSmin to reduce the risk of re-intubation. Although this confirms the patient's breathing reserve, weaning could be delayed by the increased airway resistance of the endotracheal tube. Methods: To investigate the effect of an additional 1 hr T-piece trial on weaning outcome, a prospective study was done in 44 consecutive patients who had received mechanical ventilation for more than 3 days. Respiratory mechanics, hemodynamic, and gas exchange measurements were made, and the level of PSmin was calculated from the equation (PSmin = peak inspiratory flow rate $\times$ total ventilatory system resistance) at 15 cm $H_2O$ of pressure support. At PSmin, the patients were randomized into an intervention group (additional 1 hr T-piece trial) and a control group (extubation at PSmin). The measurements were repeated at PSmin, during the weaning process (in the intervention group), and after extubation. Weaning success was defined as spontaneous breathing for more than 48 hr after extubation. In the intervention group, failure to continue the weaning process was also considered weaning failure. Results: Thirty-six patients with 42 weaning trials satisfied the protocol. The mean PSmin level was 7.6 (${\pm}1.9$) cm $H_2O$. There were no differences in total ventilation time (TVT), APACHE III score, nutritional indices, or respiratory mechanics at PSmin between the two groups. The weaning success rate and re-intubation rate did not differ between the intervention group (55% and 18%, respectively) and the control group (70% and 20%, respectively) at the first weaning trial.
Work of breathing, pressure-time product, and tidal volume were aggravated during the 1 hr T-piece trial compared with those at PSmin in the intervention group ($10.4{\pm}1.25$ and $1.66{\pm}1.08$ J/L in work of breathing; $191{\pm}232$ and $287{\pm}217$ cm $H_2O$ s/m in pressure-time product; $0.33{\pm}0.09$ and $0.29{\pm}0.09$ L in tidal volume) (P<0.05 for each). Overall, TVT and tidal volume at PSmin differed significantly between the patients with weaning success ($246{\pm}195$ hr, $0.43{\pm}0.11$ L) and those with weaning failure ($407{\pm}248$ hr, $0.35{\pm}0.10$ L) (P<0.05 for each). Conclusion: There was no advantage in weaning outcome from adding a 1 hr T-piece trial compared with prompt extubation at PSmin.
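The PSmin equation quoted in the Methods can be sketched as follows. This is our illustration only: the function name and the example numbers are ours (units assumed: flow in L/s, resistance in cmH2O per L/s), chosen so the result matches the study's mean PSmin of 7.6 cmH2O.

```python
# PSmin = peak inspiratory flow rate * total ventilatory system resistance,
# with resistance measured at 15 cmH2O of pressure support (per the abstract).
def ps_min(peak_insp_flow_lps, total_resistance):
    """peak_insp_flow_lps: L/s; total_resistance: cmH2O/(L/s). Returns cmH2O."""
    return peak_insp_flow_lps * total_resistance

# Illustrative: 0.8 L/s peak flow against 9.5 cmH2O/(L/s) of tube-plus-airway
# resistance gives a support level of 7.6 cmH2O.
level = ps_min(0.8, 9.5)
```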


Requirement and Perception of Parents on the Subject of Home Economics in Middle School (중학교 가정교과에 대한 학부모의 인식 및 요구도)

  • Shin Hyo-Shick;Park Mi-Soog
    • Journal of Korean Home Economics Education Association
    • /
    • v.18 no.3 s.41
    • /
    • pp.1-22
    • /
    • 2006
  • The purpose of this study is to find a desirable direction for home economics education by surveying the requirements and perceptions of parents of high school students who have completed the home economics course. Questionnaires were distributed to about 600 parents in Seoul, Busan, and the provinces of Gangwon, Gyeonggi, Chungcheong, Gyeongsang, Jeolla, and Jeju; of the 600, 560 suitable responses were selected for analysis. The questionnaire included 61 items on requirements for home economics content, items on the perception of the aims of home economics, and 11 items on the perception of home economics courses and their management. The data were analyzed for frequency, percentage, mean, and standard deviation, with t-tests, using the SAS program. The results are summarized as follows. 1. Agreement that boys and girls should learn together about healthy living, desirable character formation, and related knowledge was high. 2. Among the teaching aims of home economics, the item on scientific principles and knowledge for the improvement of home life scored 15.7% below the average value. 3. The quality of home economics was recognized as highly related to real life; regarding the curriculum, the class hours and content of the home economics field were perceived as lacking; and regarding the content taught, its attainability and applicability were rated high regardless of sex. 4. The area that should be emphasized most in home economics was the family unit. 5. Among the content requirements for first-year students, the middle unit on adolescent growth and development was highest at 4.11, while basic principles and practice was lowest at 3.70. 6. For second-year students, the middle unit on youth and consumer life was highest at 4.09. 7.
For third-year students, the middle unit on career choice and job ethics was highest at 4.16, and meal preparation and evaluation was lowest at 3.50.


Preparation of Vitamin E Acetate Nano-emulsion and In Vitro Research Regarding Vitamin E Acetate Transdermal Delivery System which Use Franz Diffusion Cell (Vitamin E Acetate를 함유한 Nano-emulsion 제조와 Franz Diffusion Cell을 이용한 Vitamin E Acetate의 경표피 흡수에 관한 In Vitro 연구)

  • Park, Soo-Nam;Kim, Jai-Hyun;Yang, Hee-Jung;Won, Bo-Ryoung;Ahn, You-Jin;Kang, Myung-Kyu
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.35 no.2
    • /
    • pp.91-101
    • /
    • 2009
  • In this study, vitamin E acetate (VEA, tocopheryl acetate), a lipid-soluble vitamin widely used in the cosmetics and medical supply fields as an antioxidant, was formulated into a stable nano-particle emulsion of skin-toner type. To evaluate skin permeation, experiments on VEA permeation through the skin of ICR outbred albino mice (12 weeks, about 50 g, female) and on differences in solubility as a function of receptor formulation were performed. Analysis of nano-emulsions containing 0.07 % VEA showed that the higher the ethanol content, the larger the emulsions formed, while the higher the surfactant content, the smaller the size. A certain ethanol content in the receptor phase increased VEA solubility in the nano-emulsion: at 10.0 % and 20.0 % ethanol, VEA solubility was higher than at 5.0 % and 40.0 %, respectively. The type of surfactant in the receptor solution also influenced VEA solubility. A comparison of three surfactants with different chemical structures and HLB values showed that VEA solubility increased in the order sorbitan sesquioleate (Arlacel 83; HLB 3.7) > POE (10) hydrogenated castor oil (HCO-10; HLB 6.5) > sorbitan monostearate (Arlacel 60; HLB 4.7). VEA solubility also differed with the type of antioxidant: at early time points, the solubility of the sample containing ascorbic acid was similar to those of samples containing other antioxidants, but after 24 h it was 2 times higher than the others. A Franz diffusion cell experiment using mouse skin was performed with four nano-emulsion samples of different VEA contents. The emulsion with 10 wt% ethanol was the most permeable, at 128.8 ${\mu}g/cm^2$.
Compared with the initial input of 220.057 ${\mu}g/cm^2$, the permeated amount at 10 % ethanol was 58.53 %, which was 45.0 % and 15.0 % higher than the results at 1.0 and 20.0 wt% ethanol, respectively. The emulsion particle size with 0.5 % surfactant (HCO-60) was 26.0 nm, about one twentieth of the size obtained with 0.007 % surfactant (HCO-60) at the same ethanol content. Transepidermal permeation of VEA at this size was 54.848 ${\mu}g/cm^2$, smaller than that of the 590.7 nm particles. From these results, the skin permeation of VEA-containing nano-emulsions and the differences in VEA solubility as a function of receptor phase formulation were determined, and optimal conditions for transepidermal permeation of VEA could be set up.
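As a quick arithmetic check (ours, using only the figures quoted above), the reported permeated percentage follows directly from the permeated amount and the initial input:

```python
# Values taken from the abstract: 10 wt% ethanol nano-emulsion on mouse skin.
initial_load = 220.057   # µg/cm², initial VEA input
permeated = 128.8        # µg/cm², VEA permeated through the skin

fraction_pct = permeated / initial_load * 100  # reported as 58.53 %
```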

DC Resistivity method to image the underground structure beneath river or lake bottom (하저 지반특성 규명을 위한 전기비저항 탐사)

  • Kim Jung-Ho;Yi Myeong-Jong;Song Yoonho;Cho Seong-Jun;Lee Seong-Kon;Son Jeongsul
    • Korean Society of Exploration Geophysicists: Conference Proceedings
    • /
    • 2002.09a
    • /
    • pp.139-162
    • /
    • 2002
  • Since weak zones and geological lineaments are easily eroded, weak zones may develop beneath rivers, so a careful evaluation of ground conditions is important when constructing structures that pass under a river. DC resistivity surveys, however, have seldom been applied to the investigation of water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because of the water layer overlying the underground structure to be imaged. Through numerical modeling and the analysis of case histories, we studied how to carry out resistivity surveys in water-covered areas, from the characteristics of the measured data, through the data acquisition method, to the interpretation method. We organized the discussion according to the location of the electrodes, i.e., floating on the water surface or installed at the water bottom, since the methods of data acquisition and interpretation vary with electrode location. Through this study we confirmed that the DC resistivity method can provide fairly reasonable subsurface images. It was also shown that installing electrodes at the water bottom gives a subsurface image with much higher resolution than floating them on the water surface. Since data acquired in water-covered areas have much lower sensitivity to the underground structure than data acquired on land, and can be contaminated by higher noise, such as streaming potential, it is very important to select an acquisition method and electrode array that provide a high signal-to-noise ratio as well as high resolving power.
The method of installing electrodes at the water bottom is suited to detailed surveys because of its much higher resolving power, whereas the floating method, especially the streamer DC resistivity survey, is suited to reconnaissance surveys owing to its very high speed of field work.
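For context, the basic quantity such a survey measures can be sketched as follows. This is the textbook apparent-resistivity formula for a Wenner array on a land surface, with illustrative numbers of our own (not from the paper); for electrodes at a water bottom, the geometric factor would need modification for the overlying water layer.

```python
import math

def wenner_apparent_resistivity(a_m, delta_v, current):
    """a_m: electrode spacing (m); delta_v: measured potential difference (V);
    current: injected current (A). Returns apparent resistivity in ohm-m."""
    k = 2.0 * math.pi * a_m  # geometric factor for a surface Wenner array
    return k * delta_v / current

# e.g. 10 m spacing, 25 mV measured at 100 mA injected current
rho_a = wenner_apparent_resistivity(10.0, 0.025, 0.1)
```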


Cultural Practices of In vitro Tuber of Pinellia ternata(Thunb.) Breit I. Effects of Planting Time on Growth, Tuber Formation and Yield (기내(器內) 대량(大量) 생산(生産) 반하(半夏) 종구(種球)의 포장(圃場) 재배기술(裁培技術) 연구(硏究) I. 파종시기(播種詩期)가 생육(生育)과 괴경형성(塊莖形成) 및 수량(收量)에 미치는 영향(影響))

  • Park, Ho-Ki;Kim, Tai-Soo;Park, Moon-Soo;Choi, In-Leok;Jang, Yeong-Sun;Park, Keun-Yong
    • Korean Journal of Medicinal Crop Science
    • /
    • v.1 no.2
    • /
    • pp.109-114
    • /
    • 1993
  • This study was carried out to determine the optimum planting time for in vitro-multiplied tubers of Pinellia ternata (Thunb.) Breit. The tubers were planted on April 20, May 20, June 20, July 20, August 20, and September 20 in 1990. Emergence ratios were 68 to 87% for all planting times except July 20. The numbers of tubers per $m^2$ at harvest for the May 20 and June 20 plantings were significantly higher, at 1,110 and 1,021 respectively, while for plantings after July 20 they decreased drastically. Compared with the fresh yield of the April 20 planting (352 kg/10a), that of May 20 was 109% and that of June 20 was 103%, while those after July 20 were from 41% down to 19%. There was a highly positive correlation between dry tuber yield and the number of tubers per $m^2(r=0.991^{**})$. Tuber yields for commercial use (diameter over 7.1 mm) were high for the plantings on May 20 (322 kg/10a) and June 20 (299 kg/10a). The optimum field planting time for in vitro-multiplied tubers of Pinellia ternata (Thunb.) Breit was suggested to be from May 20 to June 20.


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues in hardware implementation, due to the rather large memory space needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to triangular or trapezoidal shapes, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: for each fuzzy set, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, while not restricting the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are typical for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. Clearly, the numbers of bits necessary for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = fm * (dm(m) + dm(fm)), where fm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the bit dimension of a membership value, and dm(fm) is the bit dimension of the word representing the index of the corresponding membership function. In our case, then, Length = 24, and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to store on each memory row the membership value of every element; the fuzzy-set word dimension would be 8*5 bits, and the dimension of the memory would have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as shown. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index equals the bus value, then the corresponding non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all the weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
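The memory arithmetic of the abstract can be sketched in a few lines. The function names are ours, not the paper's; the bit widths are the ones quoted for the term set of figure 1.

```python
# Sparse scheme: each row of the antecedent memory stores, for one element of
# the universe of discourse, at most fm non-null membership values plus the
# index of the fuzzy set each value belongs to.
def sparse_row_bits(fm, dm_m, dm_fm):
    # Length = fm * (dm(m) + dm(fm))
    return fm * (dm_m + dm_fm)

def full_row_bits(n_sets, dm_m):
    # plain vectorial memorization: one membership value per fuzzy set
    return n_sets * dm_m

U = 128       # elements in the universe of discourse (memory depth)
fm = 3        # max non-null membership values per element
dm_m = 5      # bits per membership value (32 truth levels)
dm_fm = 3     # bits for the fuzzy-set index (8 sets)

sparse_bits = U * sparse_row_bits(fm, dm_m, dm_fm)  # 128 * 24 bits
full_bits = U * full_row_bits(8, dm_m)              # 128 * 40 bits
```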


A Study on the Meaning and Future of the Moon Treaty (달조약의 의미와 전망에 관한 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.21 no.1
    • /
    • pp.215-236
    • /
    • 2006
  • This article focuses on the meaning of the 1979 Moon Treaty and its future. Although the Moon Treaty is one of the five major space-related treaties, it was accepted by only 11 member states, all non-space powers, and thus has had the least influence in the field of space law. The article analyses the relationship between the 1979 Moon Treaty and the 1967 Space Treaty, the first treaty of principles, and examines the meaning of the "Common Heritage of Mankind" (hereinafter CHM) stipulated in the Moon Treaty in terms of international law. It also deals with the present and future problems arising from the Moon Treaty. As far as the 1967 Space Treaty is concerned, the main standpoint is that outer space, including the moon and the other celestial bodies, is res extra commercium, an area not subject to national appropriation, like the high seas; it proclaims the principle of non-appropriation concerning the celestial bodies in outer space. The concept of CHM stipulated in the Moon Treaty, however, created an entirely new category of territory in international law. This concept basically conveys the idea that the management, exploitation, and distribution of the natural resources of the area in question are matters to be decided by the international community and are not to be left to the initiative and discretion of individual states or their nationals. A similar provision is found in the 1982 Law of the Sea Convention, which operates the International Sea-bed Authority created under the concept of CHM. According to the Moon Treaty, an international regime will be established once the exploitation of the natural resources of celestial bodies other than the Earth is about to become feasible. Before the establishment of such a regime, one could imagine a moratorium upon the exploitation of the natural resources of the celestial bodies.
But the drafting history of the Moon Treaty indicates that no moratorium on the exploitation of natural resources was intended prior to the setting up of the international regime. So each State Party may exploit the natural resources, bearing in mind that those resources are CHM. In this respect it would be better for Korea, currently not a party to the Moon Treaty, to become a member state in the near future. According to the Moon Treaty, the efforts of those countries which have contributed either directly or indirectly to the exploitation of the moon shall be given special consideration. The Moon Treaty, which, although criticised by some space law experts, represents a solid basis upon which further space exploration can continue, expresses the common collective wisdom of all member states of the United Nations and responds to the needs and possibilities of those that have already carried their technologies into outer space.


A Study on Aviation Safety and Third Country Operator of EU Regulation in light of the Convention on international Civil Aviation (시카고협약체계에서의 EU의 항공법규체계 연구 - TCO 규정을 중심으로 -)

  • Lee, Koo-Hee
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.29 no.1
    • /
    • pp.67-95
    • /
    • 2014
  • Some contracting states of the Chicago Convention issue FAOCs (Foreign Air Operator Certificates) and conduct various safety assessments of the foreign operators that operate into their territory. These FAOCs and safety audits of foreign operators are being expanded to other parts of the world. While this trend strengthens aviation safety and contributes to the reduction of aircraft accidents, FAOC requirements also burden the other contracting states to the Chicago Convention with additional requirements and delayed permissions. EASA (European Aviation Safety Agency) is a body governed by the European Basic Regulation. EASA was set up in 2003 and conducts specific regulatory and executive tasks in the field of civil aviation safety and environmental protection. EASA's mission is to promote the highest common standards of safety and environmental protection in civil aviation. The tasks of EASA have been expanded from airworthiness to air operations, and currently include rulemaking and standardization for airworthiness, air crew, air operations, TCO, ATM/ANS safety oversight, aerodromes, etc. According to the Implementing Rule, Commission Regulation (EU) No 452/2014, EASA has the mandate to issue safety authorizations to commercial air carriers from outside the EU as from 26 May 2014. Third country operators (TCO) flying to any of the 28 EU Member States and/or to the 4 EFTA States (Iceland, Norway, Liechtenstein, Switzerland) must apply to EASA for a so-called TCO authorization. EASA takes over only the safety-related part of the foreign operator assessment; operating permits continue to be issued by the national authorities. A 30-month transition period ensures smooth implementation without interrupting the international air operations of foreign air carriers to the EU/EASA area. Operators currently flying to Europe can continue to do so, but must submit an application for a TCO authorization before 26 November 2014.
After the transition period, which lasts until 26 November 2016, a valid TCO authorization will be a mandatory prerequisite, in the absence of which an operating permit cannot be issued by a Member State. The European TCO authorization regime does not, in principle, differentiate between scheduled and non-scheduled commercial air transport operations: all TCOs performing commercial air transport need to apply for a TCO authorization. Operators with a potential need to operate to the EU at some time in the near future are advised to apply for a TCO authorization in due course, even when the date of operations is unknown. On the issues mentioned above, I have studied the functions of EASA and the EU regulations, including the newly introduced TCO Implementing Rule, and suggested some proposals. I hope that this paper will 1) help in the preparation of TCO authorizations, 2) aid understanding of the international issues, 3) help improve Korean aviation regulations and government organizations, and 4) support compliance with international standards, thereby contributing to the promotion of aviation safety.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the requesters of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is considered very useful in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, it is essential to analyze all documents at once to identify the topic of each document. This makes the analysis time long when topic modeling is applied to a large number of documents, and it creates a scalability problem: an exponential increase in processing time with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: dividing a large number of documents into sub-units and deriving topics by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources and can improve the processing speed. It can also significantly reduce analysis time and cost, since documents in each location can be analyzed in place without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each sub-unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming the global topics are the ideal answer, the deviation of a local topic from a global topic needs to be measured. Because of these difficulties, this approach has not been studied as thoroughly as other work on topic modeling. In this paper, we propose a topic modeling approach that addresses these two problems.
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling over the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
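As an illustration of the local-to-global mapping idea (our own sketch, not the authors' code), local topics can be matched to global topics by cosine similarity of their topic-term distributions over a shared vocabulary:

```python
import numpy as np

def map_local_to_global(local_topics, global_topics):
    """local_topics, global_topics: arrays of shape (n_topics, n_terms),
    rows are topic-term weight vectors over a shared vocabulary.
    Returns, for each local topic, the index of the nearest global topic."""
    def normalize(m):
        return m / np.linalg.norm(m, axis=1, keepdims=True)
    sim = normalize(local_topics) @ normalize(global_topics).T  # cosine similarities
    return sim.argmax(axis=1)

# toy example: 2 local topics over a 4-term vocabulary, 3 global topics
local_t = np.array([[0.7, 0.2, 0.05, 0.05],
                    [0.1, 0.1, 0.4, 0.4]])
global_t = np.array([[0.6, 0.3, 0.05, 0.05],
                     [0.05, 0.05, 0.45, 0.45],
                     [0.25, 0.25, 0.25, 0.25]])
mapping = map_local_to_global(local_t, global_t)  # → array([0, 1])
```

The same similarity matrix could also support the paper's second goal, measuring how far each local topic deviates from its matched global topic.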