• Title/Summary/Keyword: Structure Identification

Search Results: 1,710

A study on improvement of regular survey system of state-designated movable cultural heritage (국가지정 동산문화재의 정기조사제도 개선방안 연구)

  • Lee, Jong-Suk;Kim, Chang-Gyoo
    • Korean Journal of Heritage: History & Science
    • /
    • v.51 no.4
    • /
    • pp.146-169
    • /
    • 2018
  • Artificial or natural objects of historical, artistic, academic or scenic value as national, ethnic or global assets are designated as "cultural heritages" under the Act on the Protection of Cultural Heritage. Cultural heritages can be divided into tangible cultural heritages, intangible cultural heritages, and monument and folklore heritages. In addition, depending on the designating authority, a cultural heritage can be designated either as a city or provincial cultural heritage or as cultural heritage material by a city mayor or provincial governor, or as a state-designated heritage by the Administrator of the Cultural Heritage Administration. The regular survey is part of the policy for the preservation and management of state-designated heritages, and requires that surveys be undertaken every three to five years for the preservation, repair and maintenance of cultural heritages. It was stipulated in the Act on the Protection of Cultural Heritage in 2006 and has since contributed substantially to the preservation and management of state-designated heritages by identifying damage to cultural heritages and applying appropriate treatment measures. However, some parts of the guidelines on the regular survey, legislated in 2006, occasionally cause confusion in operating the regular survey system for state-designated movable cultural heritages, and need to be modified to facilitate systematic management and improvement of the system. This study analyzes the structure and operation of the regular survey system for state-designated movable cultural heritages, and proposes plans for improving the way the departments that lead, manage and execute the regular survey are specified, the process of entrusting the survey, and the survey's guidelines and forms. We hope that these plans will contribute to improving the quality and management of the regular survey system for state-designated movable cultural heritages.

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.221-238
    • /
    • 2019
  • Recently, companies and public institutions have been actively introducing chatbot services in the field of customer counseling and response. The introduction of chatbot services not only brings labor cost savings to companies and organizations, but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these chatbot services. Current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning. The advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-oriented chatbots can be inconsistent with what users inherently want, so chatbots need to be addressed in the area of user experience, not just in the area of technology. The Fourth Industrial Revolution highlights the importance of user experience as well as the advancement of artificial intelligence, big data, cloud, and IoT technologies. The development of IT technology and the growing importance of user experience have provided people with a variety of environments and changed lifestyles. This means that experiences in interactions with people, services (products), and the environment become very important. Therefore, it is time to develop user needs-based services (products) that can provide new experiences and value to people. This study proposes a chatbot development process based on user needs by applying the design thinking approach, a representative methodology in the field of user experience, to chatbot development. The process proposed in this study consists of four steps. The first step, 'setting up the knowledge domain', establishes the chatbot's area of expertise. The second step, 'knowledge accumulation and insight identification', accumulates the information corresponding to the configured domain and derives insights. The third step is 'opportunity development and prototyping', where full-scale development begins. Finally, the 'user feedback' step collects feedback from users on the developed prototype. This produces a user needs-based service (product) that meets the objectives of the process. Beginning with fact gathering through user observation, the process performs abstraction to derive insights and explore opportunities; it then materializes these insights by structuring the desired information and providing functions that fit the user's mental model, which is expected to yield a chatbot that meets the user's needs. In this study, we present an actual construction example for the domestic cosmetics market to confirm the effectiveness of the proposed process. We chose the domestic cosmetics market as the case because it strongly reflects users' experiences, so responses from users can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology. It differs from existing chatbot development research in that it focuses on user experience, not technology. It also has practical implications in that it proposes realistic methods that companies or institutions can apply immediately. In particular, the process proposed in this study can be used by anyone, since user needs-based chatbots can be developed even by non-experts. Because only one field was examined, further studies are needed. In addition to the cosmetics market, additional research should be conducted in various fields in which user experience is prominent, such as the smartphone and automotive markets. Through this, the proposal can mature into a general process for developing chatbots centered on user experience rather than technology.
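As a rough illustration of the four steps, the sketch below (Python, with a hypothetical cosmetics knowledge domain and made-up phrases, not the system built in the paper) shows how a configured knowledge domain, a keyword-matching prototype, and a user-feedback log could fit together.

```python
# Toy skeleton of a user-needs-based chatbot following the four steps above:
# a knowledge domain is configured, a simple prototype answers questions, and
# user feedback is collected for the next design-thinking iteration.
# All intents and phrases here are hypothetical.
knowledge_domain = {            # step 1: set up the chatbot's expertise
    "moisturizer": "For dry skin, a ceramide moisturizer is a common pick.",
    "sunscreen": "Reapply sunscreen every two to three hours outdoors.",
}

feedback_log: list[tuple[str, str, str]] = []   # step 4: user feedback store

def answer(question: str) -> str:
    """Step 3 prototype: match the question against the knowledge domain."""
    q = question.lower()
    for keyword, reply in knowledge_domain.items():
        if keyword in q:
            return reply
    return "Sorry, that is outside my current knowledge domain."

def record_feedback(question: str, reply: str, rating: str) -> None:
    """Collect ratings so the next iteration can refine the knowledge domain."""
    feedback_log.append((question, reply, rating))

reply = answer("Which moisturizer works for dry skin?")
record_feedback("Which moisturizer works for dry skin?", reply, "helpful")
print(reply)
```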

A Case Study on Introducing Vita Parcours as Forest Leisure Sports Facility in Saneum Healing Forest (산음 치유의 숲 Vita Parcours 도입 사례 연구 - Vita Parcours 도입을 사례로 -)

  • Lee, Jin-Kyu;Kim, Ki Weon
    • The Journal of the Korean Institute of Forest Recreation
    • /
    • v.22 no.4
    • /
    • pp.71-81
    • /
    • 2018
  • It is necessary to build forest leisure sports facilities that enhance quality of life, in line with values emphasizing quality of life and with national forest policy. Vita Parcours (fitness trails) were found to be highly convenient activities, so their introduction and promotion in Korea deserve serious consideration. A field survey was necessary to explore the possibility of installing Vita Parcours. Several sites were selected, namely Asean recreational forest, Unaksan recreational forest, Yumyeongsan recreational forest, Saneum recreational forest, Jungmisan recreational forest and the National Center for Forest Therapy. At these locations, we surveyed the current status of forest facilities and forest trails. A total of 31 exercise facilities were identified and surveyed, located on the trails (2), alongside the trails (9), along trail boundaries (2), or suitable for both outdoor and indoor exercise within the forest (18); together they provide locations for 44 different exercise routines (23 flexibility, 12 endurance, and 9 strength exercises). Field work also included identification of forest paths, 34 in total: 30 identified as trails, 2 as hiking trails, 1 as a forest path for relaxation and healing, and 1 as an exploratory path. Regarding structure and shape, 32 were straight and only 2 were designed as circular forest trails. The average length of these trails was 652.2 m, with an average elevation difference of 60 m between the highest and lowest points. Saneum recreational forest provides the most suitable site and environment for Vita Parcours, and it is therefore proposed as the location to support the promotion of these valuable forest fitness trails. Among the forest paths at this site, a path 1.84 km long with an elevation difference of 73.0 m between its highest and lowest points was selected as the most suitable, and may be equipped with the necessary exercise stations or obstacles. In addition, if these trails are to be introduced and welcomed by their users, they must be properly maintained.

Trends in QA/QC of Phytoplankton Data for Marine Ecosystem Monitoring (해양생태계 모니터링을 위한 식물플랑크톤 자료의 정도 관리 동향)

  • Yih, Wonho;Park, Jong Woo;Seong, Kyeong Ah;Park, Jong-Gyu;Yoo, Yeong Du;Kim, Hyung Seop
    • The Sea: Journal of the Korean Society of Oceanography
    • /
    • v.26 no.3
    • /
    • pp.220-237
    • /
    • 2021
  • Since the functional importance of marine phytoplankton was first advocated in the early 1880s, massive data on species composition and abundance have been produced by classical microscopic observation and by advanced auto-imaging technologies. Recently, pigment composition obtained from direct chemical analysis of phytoplankton samples, or indirectly from remote sensing, has also been used for group-specific quantification, which has diversified data production methods and improved spatiotemporal access to target data-gathering points. In quite a few long-term marine ecosystem monitoring programs, phytoplankton species composition and abundance are included as basic monitoring items. These data can serve as crucial evidence of long-term changes in phytoplankton community structure and ecological functioning at the monitoring stations. However, the usability of the data is sometimes limited by changes in data producers over the monitoring period: methods of sample treatment, analysis, and species identification can be inconsistent among different data producers and monitoring years. In-depth study to determine precise quantitative values of phytoplankton species composition and abundance may have begun with Victor Hensen in the late 1880s. International discussion on quality assurance of marine phytoplankton data began in 1969 with SCOR Working Group 33 of ICSU. The final report of the Working Group in 1974 (UNESCO Technical Papers in Marine Science 18) was later revised and published as UNESCO Monographs on Oceanographic Methodology 6. The BEQUALM project, the forerunner of the IPI (International Phytoplankton Intercomparison) for marine phytoplankton data QA/QC under the ISO standard, was initiated in the late 1990s. The IPI promotes international collaboration so that all participating countries can apply the QA/QC standard established through 20 years of experience and practice. In Korea, however, such a QA/QC standard for marine phytoplankton species composition and abundance data has not been established by law, whereas a standard for marine chemical measurement and analysis data has already been set up and is being managed. The first priority should be to establish a QA/QC standard system for species composition and abundance data of marine phytoplankton, which could then be extended to other functional groups at higher consumer levels of marine food webs.

A study for improvement of far-distance performance of a tunnel accident detection system by using an inverse perspective transformation (역 원근변환 기법을 이용한 터널 영상유고시스템의 원거리 감지 성능 향상에 관한 연구)

  • Lee, Kyu Beom;Shin, Hyu-Soung
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.3
    • /
    • pp.247-262
    • /
    • 2022
  • In domestic tunnels longer than 200 m, installation of CCTVs is mandatory, and installation of a CCTV-based automatic accident detection system is also recommended. In general, CCTVs in a tunnel are installed at a low height and close to the moving vehicles because of the spatial limitations of the tunnel structure, so a severe perspective effect arises between the installed CCTV and vehicles in the distance. Because of this effect, conventional CCTV-based accident detection systems in tunnels are generally known to have difficulty detecting unexpected events such as stopped or reversing vehicles, people on the road, and fires, especially beyond 100 m. Therefore, in this study, a region of interest is set up and an inverse perspective transformation technique is introduced. Since moving vehicles in the transformed image are enlarged in proportion to their distance from the CCTV, it is possible to achieve consistent object detection and identification of the actual speed of moving vehicles at a distance. To show this, two datasets were composed under the same conditions, one with the original and one with the transformed tunnel CCTV images, and the variation of the apparent speed and size of moving vehicles with distance was compared. Then the long-range object detection performance of the two trained deep-learning models was compared. As a result, the model trained on the transformed images was able to achieve consistent object and accident detection performance at distances of up to 200 m.
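As a rough illustration of the transformation described above (not the authors' implementation), the following Python/OpenCV sketch warps a tunnel CCTV frame into a bird's-eye view; the source quadrilateral, the video path "tunnel_cctv.mp4", and the output size are hypothetical and would need per-camera calibration in practice.

```python
import cv2
import numpy as np

# Minimal sketch of an inverse perspective ("bird's-eye") transform for a
# tunnel CCTV frame. The four source points are hypothetical corners of the
# road region of interest in image coordinates.
src = np.float32([[560, 400],    # far-left edge of the lane region
                  [720, 400],    # far-right edge
                  [1180, 720],   # near-right edge
                  [100, 720]])   # near-left edge

# Destination rectangle: distant vehicles are stretched so that object size
# stays roughly constant with distance, which is the point of the technique.
dst = np.float32([[200, 0], [1080, 0], [1080, 720], [200, 720]])

M = cv2.getPerspectiveTransform(src, dst)

cap = cv2.VideoCapture("tunnel_cctv.mp4")  # hypothetical input clip
ok, frame = cap.read()
if ok:
    birds_eye = cv2.warpPerspective(frame, M, (1280, 720))
    cv2.imwrite("birds_eye.png", birds_eye)  # feed this view to the detector
```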

A study on characteristics of palace wallpaper in the Joseon Dynasty - Focusing on Gyeongbokgung Palace, Changdeokgung Palace and Chilgung Palace - (조선시대 궁궐 도배지 특성 연구 - 경복궁, 창덕궁, 칠궁을 중심으로 -)

  • Kim, Jiwon;Kim, Jisun;Kim, Myoungnam;Jeong, Seonhwa
    • Korean Journal of Heritage: History & Science
    • /
    • v.56 no.1
    • /
    • pp.80-97
    • /
    • 2023
  • By taking wallpaper specimens from Gyeongbokgung Palace, Changdeokgung Palace, and Chilgung Palace, preserved from the late Joseon Dynasty to the present, this study set out to determine the types and characteristics of the paper used as wallpaper by the Joseon royal family. First, based on archival research of old literature on the wallpaper used by the royal family, we confirmed the features of paper hanging in the palaces. Second, we conducted a field survey of royal palace buildings whose construction periods are relatively clear, sampled specimens, and analyzed the first layer of wallpaper attached directly to the wall structure. We thereby confirmed that hanji was the main raw material of the wallpaper used by the royal family and, by analyzing the blue-colored paper, identified the types of substances (dyes and pigments) used to produce a blue color in spaces requiring formality. Based on the analysis results, we compared the existing wallpaper with the old literature on wallpaper records of the Joseon Dynasty palaces, and built a database to support the restoration of cultural properties when conserving wallpaper in the royal palaces. We examined the changes in wallpaper types by century, and their content according to place of use, by extracting wallpaper-related contents recorded in 36 cases of Uigwe from the 17th to the 20th centuries. As a result, it was found that the names used for document paper and wallpaper did not differ, so document paper and wallpaper were used without distinction during the Joseon Dynasty. Although the types of wallpaper differ by period, it was confirmed that the basic repertoire of wallpaper continued until the late Joseon Dynasty, with Baekji (white hanji), Hubaekji (thick white paper), Jeojuji (common hanji used to write documents), Chojuji (hanji used for document drafts), and Gakjang (a wide, thick hanji used as a pad). Fiber identification based on the morphological characteristics of the fibers and the standard color reaction (KS M ISO 9184-4, Graph "C" staining test) for the first layer of paper attached directly to the palace walls confirmed the main materials of the hanji used by the royal family and the raw materials used to make hanji in palace buildings of different construction periods. Also, by analyzing the coloring materials of the blue decorative paper with an optical microscope, ultraviolet-visible spectroscopy (UV-Vis), and X-ray diffraction (XRD), we determined the types of dyes and pigments used in the blue decorative paper of palace spaces requiring formality and identified the raw materials used to produce the blue color as natural indigo, lazurite, and cobalt blue.

Brief Introduction of Research Progresses in Control and Biocontrol of Clubroot Disease in China

  • He, Yueqiu;Wu, Yixin;He, Pengfei;Li, Xinyu
    • Korean Society of Mycology Newsletter: Conference Proceedings (한국균학회소식: 학술대회논문집)
    • /
    • 2015.05a
    • /
    • pp.45-46
    • /
    • 2015
  • Clubroot disease of crucifers has occurred in China since 1957. It has spread throughout the country, especially in the southwest and northeast, where it causes 30-80% losses in some fields. The disease has been expanding in recent years as seeds are imported and the floating seedling system is practiced. For its effective control, the Ministry of Agriculture of China set up a program in 2010 with a research team led by Dr. Yueqiu He of Yunnan Agricultural University. The team includes 20 main researchers from 11 universities and 5 institutions. After 5 years, the team has made much progress on the patterns of disease occurrence, resource collection, resistance identification and breeding, biological agent exploration, formulation, chemical evaluation, and control strategy. About 1,200 collections of local and commercial crucifers were evaluated for resistance in the field and by artificial inoculation in the laboratory, and 10 resistant cultivars were bred, including 7 Chinese cabbages and 3 cabbages. More than 800 antagonistic strains were isolated, including bacteria, Streptomyces and fungi. Around 100 chemicals were evaluated in the field and greenhouse for control effect; among them, 6 showed high control effect, and fluazinam and cyazofamid in particular controlled about 80% of the disease. However, fluazinam has a negative effect on soil microbes. Clubroot disease cannot be controlled by bioagents or chemicals once the pathogen Plasmodiophora brassicae has infected its host and established the parasitic relationship. We found that the earlier the pathogen infected its host, the more severe the disease was; therefore, early control was the most effective. For Chinese cabbage, all control measures should be taken within the first 30 days, because new infections after 30 days from seeding do not cause severe symptoms. For example, the biocontrol agent Bacillus subtilis strain XF-1 controlled the disease by 70%-85% on average when mixed with the seedling substrate and drenched 3 times after transplanting (immediately, at 7 days, and at 14 days). XF-1 has been studied in depth with respect to its control mechanisms, its genome, and the development and application of biocontrol formulations. It produces antagonistic proteins, enzymes, antibiotics and IAA, which promote rhizogenesis and growth. Its genome was sequenced with an Illumina/Solexa Genome Analyzer and assembled into 20 scaffolds; the gaps between scaffolds were then filled by long-fragment PCR amplification to obtain a complete genome of 4,061,186 bp. The whole genome has 43.8% GC content, 108 tandem repeats with an average of 2.65 copies, and 84 transposons. A total of 3,853 CDSs were predicted, of which 112 were assigned to secondary metabolite biosynthesis, transport and catabolism. Among these, five giant NRPS/PKS gene clusters responsible for the biosynthesis of polyketide (pksABCDEFHJLMNRS, 72.9 kb), surfactin (srfABCD, 26.148 kb), bacilysin (bacABCDE, 5.903 kb), bacillibactin (dhbABCEF, 11.774 kb) and fengycin (ppsABCDE, 37.799 kb) show high homology to functionally confirmed biosynthesis genes in other strains. Moreover, many key regulatory genes for secondary metabolites, such as comABPQKXZ, degQ, sfp, yczE, degU, ycxABCD and ywfG, were also predicted. Therefore, XF-1 has the potential to biosynthesize the secondary metabolites surfactin, fengycin, bacillibactin, bacilysin and bacillaene.
Thirty-two compounds were detected in cell extracts of XF-1 by MALDI-TOF-MS, including one macrolactin (m/z 441.06), two fusaricidins (m/z 850.493 and 968.515), one circulocin (m/z 852.509), nine surfactins (m/z 1044.656~1102.652), five iturins (m/z 1096.631~1150.57) and forty fengycins (m/z 1449.79~1543.805). The top three composition types (comprising 56.67% of the total extract) are surfactin, iturin and fengycin, of which the most abundant is the surfactin type at 30.37% of the total extract, followed by fengycin at 23.28% with a rich diversity of chemical structures, while the smallest is the iturin type at 3.02%. Moreover, the same main compositions were detected in Bacillus sp. 355, which is also an effective biocontrol bacterium against crucifer clubroot. Therefore, surfactin, iturin and fengycin may be the main active components of XF-1 against P. brassicae. Twenty-one fengycin-type compounds with antifungal activity were evaluated by LC-ESI-MS/MS, including fengycin A $C_{16}{\sim}C_{19}$, fengycin B $C_{14}{\sim}C_{17}$, fengycin C $C_{15}{\sim}C_{18}$, fengycin D $C_{15}{\sim}C_{18}$ and fengycin S $C_{15}{\sim}C_{18}$. Furthermore, one novel compound was identified as dehydroxyfengycin $C_{17}$ according to its MS and 1D and 2D NMR spectral data, with a molecular weight of 1488.8480 Da and the formula $C_{75}H_{116}N_{12}O_{19}$. The fengycin-type compounds (FTCPs, $250{\mu}g/mL$) were used to treat resting spores of P. brassicae ($10^7/mL$), and leakage of cytoplasmic components and cell destruction were monitored. After 12 h of treatment, the absorbances at 260 nm (A260) and 280 nm (A280) increased gradually toward their maxima, accompanied by the collapse of P. brassicae resting spores, and almost no intact cells were observed after 24 h of treatment. The results suggest that the cells can be lysed by the FTCPs of XF-1, and that the diversity of FTCPs contributes mainly to the mechanism of clubroot disease biocontrol. Among the five selected media (MOLP, PSA, LB, Landy and LD), the most suitable for growth of the strain was MOLP, and the least favorable for strain longevity was the Landy sucrose medium; however, the highest lipopeptide yield was obtained in the Landy sucrose medium. The lipopeptides produced in the five media were analyzed by HPLC, and the results showed that the lipopeptide components were the same while their contents from B. subtilis XF-1 differed among media. We therefore conclude that the medium affects the lipopeptide content rather than the composition, and that a lack of nutrition appears to promote lipopeptide secretion from XF-1. Volatile components with inhibitory activity against the fungus Cylindrocarpon spp., collected in sealed vessels, were detected by HS-SPME-GC-MS in eight biocontrol Bacillus species and in four positive mutant strains of XF-1 obtained by chemical mutagenesis. They share the same main volatile components, including pyrazines, aldehydes, oxazolidinone and sulfides, which together make up 91.62% in XF-1; the most abundant is the pyrazine type at 47.03%, followed by aldehydes at 23.84% and oxazolidinone at 15.68%, with sulfides the smallest at 5.07%.


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish contents involving their opinions and interests in social media such as blogs, forums, chatting rooms, and discussion boards, and the contents are released in real time on the Internet. For that reason, many researchers and marketers regard social media contents as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text contents, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conducting opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose target social media. Each target medium requires a different way for analysts to gain access: open APIs, search tools, DB-to-DB interfaces, purchasing contents, and so on. The second phase is pre-processing to generate useful materials for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, in which the cleansed social media content set is analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorite, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum contents and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified contents into more detailed categories such as marketing features, environment, reputation, etc. In these phases, we used free software such as the TM, KoNLP, ggplot2 and plyr packages in the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, to provide vivid, full-colored examples using open library software packages of the R project. Business actors can detect at a glance areas that are weak, strong, positive, negative, quiet or loud. The heat map can show the movement of sentiment or volume in a category-by-time matrix, where color density reflects intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the 'big picture' business situation through a hierarchical structure, since a tree map can present buzz volume and sentiment in a single visualized result for a given period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
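As a toy illustration of the four phases (collecting, qualifying, analyzing, visualizing), the Python sketch below stands in for the R toolchain used in the paper; the post list and sentiment lexicon are made up, and the final print is a placeholder for the charts and tree maps described above.

```python
import re
from collections import Counter

# Toy four-phase pipeline (collect -> qualify -> analyze -> visualize).
# "Collected" posts and the polarity lexicon are hypothetical examples.
posts = [
    "The new ramen flavor is great, really tasty",
    "Too salty and the price went up, disappointed",
    "Great value, my kids love it",
]
lexicon = {"great": 1, "tasty": 1, "love": 1, "salty": -1, "disappointed": -1}

def qualify(text: str) -> list[str]:
    """Pre-process: lowercase, strip non-letters, tokenize."""
    return re.findall(r"[a-z]+", text.lower())

def score(tokens: list[str]) -> int:
    """Sum lexicon polarities to get a post-level sentiment score."""
    return sum(lexicon.get(t, 0) for t in tokens)

scores = [score(qualify(p)) for p in posts]
volume = Counter("positive" if s > 0 else "negative" if s < 0 else "neutral"
                 for s in scores)

# "Visualization" stand-in: print the sentiment distribution that a bar chart
# or valence tree map would display.
for polarity, count in volume.items():
    print(polarity, count)
```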

A Conceptual Review of the Transaction Costs within a Distribution Channel (유통경로내의 거래비용에 대한 개념적 고찰)

  • Kwon, Young-Sik;Mun, Jang-Sil
    • Journal of Distribution Science
    • /
    • v.10 no.2
    • /
    • pp.29-41
    • /
    • 2012
  • This paper undertakes a conceptual review of transaction cost to broaden the understanding of the transaction cost analysis (TCA) approach. More than 40 years have passed since Coase's fundamental insight that transaction, coordination, and contracting costs must be considered explicitly in explaining the extent of vertical integration. Coase (1937) forced economists to identify previously neglected constraints on the trading process to foster efficient intrafirm, rather than interfirm, transactions. The transaction cost approach to economic organization study regards transactions as the basic units of analysis and holds that understanding transaction cost economy is central to organizational study. The approach applies to determining efficient boundaries, as between firms and markets, and to internal transaction organization, including employment relations design. TCA, developed principally by Oliver Williamson (1975, 1979, 1981a), blends institutional economics, organizational theory, and contract law. Further progress in transaction costs research awaits the identification of critical dimensions in which transaction costs differ and an examination of the economizing properties of alternative institutional modes for organizing transactions. The crucial investment distinction is: To what degree are transaction-specific (non-marketable) expenses incurred? Unspecialized items pose few hazards, since buyers can turn to alternative sources, and suppliers can sell output intended for one order to other buyers. Non-marketability problems arise when specific parties' identities have important cost-bearing consequences. Transactions of this kind are labeled idiosyncratic. The summarized results of the review are as follows. First, firms' distribution decisions often prompt examination of the make-or-buy question: Should a marketing activity be performed within the organization by company employees or contracted to an external agent? Second, manufacturers introducing an industrial product to a foreign market face a difficult decision. Should the product be marketed primarily by captive agents (the company sales force and distribution division) or independent intermediaries (outside sales agents and distribution)? Third, the authors develop a theoretical extension to the basic transaction cost model by combining insights from various theories with the TCA approach. Fourth, other such extensions are likely required for the general model to be applied to different channel situations. It is naive to assume the basic model applies across markedly different channel contexts without modifications and extensions. Although this study contributes to scholarly research, it is limited by several factors. First, the theoretical perspective of TCA has attracted considerable recent interest in the area of marketing channels. The analysis aims to match the properties of efficient governance structures with the attributes of the transaction. Second, empirical evidence about TCA's basic propositions is sketchy. Apart from Anderson's (1985) study of the vertical integration of the selling function and John's (1984) study of opportunism by franchised dealers, virtually no marketing studies involving the constructs implicated in the analysis have been reported. We hope, therefore, that further research will clarify distinctions between the different aspects of specific assets.
Another important line of future research is the integration of efficiency-oriented TCA with organizational approaches that emphasize specific assets' conceptual definition and industry structure. Finally, research on transaction costs, uncertainty, opportunism, and switching costs is critical to future study.


A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang;Cho, Kuentae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.101-123
    • /
    • 2013
  • Recently, due to the advancements of science and information technology, socio-economic business areas are shifting from an industrial economy to a knowledge economy. Furthermore, companies need to create new value through continuous innovation, development of core competencies and technologies, and technological convergence. Therefore, identifying major trends in technology research and predicting integrated technologies and promising techniques on an interdisciplinary knowledge basis are required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand the recent research trend in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business. Furthermore, this study intends to provide a clear way to find new technical value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is a metrical analysis for understanding the characteristics of a literature. Traditional bibliometrics is limited in its ability to relate trends in technology management to the technologies themselves, since it focuses on quantitative indices such as citation frequency. To overcome this issue, network-focused bibliometrics has been used instead; it mainly relies on co-citation and co-word analysis. In this study, a keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then conducted the keyword network analysis. Over the past 40 years, studies of social networks have attempted to understand social interactions through the network structure represented by connection patterns. In other words, social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis uses data in the form of a matrix. In our context, the matrix relates rows (papers) to columns (keywords), with the relations represented as binary. Even though there are no direct relations between published papers, relations between papers can be derived from the paper-keyword matrix, in which each cell is 1 if the paper includes the keyword and 0 otherwise. For example, a keyword network can be configured by connecting papers that share one or more keywords. After constructing the keyword network, we analyzed keyword frequency, structural characteristics of the network, preferential attachment and growth of new keywords, components, and centrality. The results of this study are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the past 10 years, and about 75% of keywords appeared only once. Second, the keyword network in MOT is a small-world network and a scale-free network in which a small number of keywords tend to monopolize connections. Third, the gap between the rich (nodes with more edges) and the poor (nodes with fewer edges) in the network is widening over time. Fourth, most newly entering keywords become poor nodes within about 2~3 years. Finally, keywords with high degree centrality, betweenness centrality, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME". We hope that these results will help researchers of MOT identify major trends in technology research, serve as useful reference information when they seek consilience with other fields of study, and guide the selection of new research topics.
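As a small illustration of the keyword co-occurrence network and the three centrality measures named above (not the dataset analyzed in the paper), the following Python sketch applies networkx to a handful of hypothetical keyword lists.

```python
import itertools
import networkx as nx

# Toy keyword lists for a few papers; real input would be the keywords of MOT
# papers published in international journals between 2002 and 2011.
papers = [
    ["innovation", "r&d", "patent"],
    ["innovation", "technology transfer"],
    ["patent", "forecast", "technology"],
    ["r&d", "sme", "innovation"],
]

# Build the keyword co-occurrence network: two keywords are connected if they
# appear together in at least one paper (the paper-keyword matrix projected
# onto keywords), with edge weights counting co-occurrences.
G = nx.Graph()
for kws in papers:
    for a, b in itertools.combinations(sorted(set(kws)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# The three centrality measures reported in the abstract.
print("degree:     ", nx.degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
print("closeness:  ", nx.closeness_centrality(G))
```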