• Title/Summary/Keyword: 기업공개 (initial public offering)

Search Results: 360 (processing time: 0.03 seconds)

Comparison of ESG Evaluation Methods: Focusing on the K-ESG Guideline (ESG 평가방법 비교: K-ESG 가이드라인을 중심으로)

  • Chanhi Cho;Hyoung-Yong Lee
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.1-25
    • /
    • 2023
  • ESG management has become a necessity of the times, but there are about 600 ESG evaluation indicators worldwide, causing confusion in the market because evaluation agencies assign different ESG ratings to the same company. In addition, since the methods of applying ESG were not disclosed, companies that wanted to introduce ESG management had few places to turn for help. Accordingly, the Ministry of Trade, Industry and Energy, jointly with other ministries, announced the K-ESG guideline. Previous studies rarely compared evaluation grades across ESG rating agencies or examined the application of evaluation diagnostic items. Therefore, this study assessed the ease of applying the K-ESG guideline, and possible improvements to it, by applying the guideline to companies that already have ESG ratings. The position of the K-ESG guideline is also confirmed by comparing the scores it produces with the ratings those companies received from global and domestic ESG evaluation agencies. The analysis shows the following. First, the K-ESG guideline provides clear and detailed standards by which individual companies can set their own ESG goals and the direction of their ESG practice. Second, the K-ESG guideline is well aligned with domestic and global ESG evaluation standards, as its 61 diagnostic items and 12 additional diagnostic items cover the evaluation indicators of globally representative ESG evaluation agencies and of KCGS in Korea. Third, the ESG rating under the K-ESG guideline was higher than that of a global ESG rating agency and lower than or similar to that of a domestic ESG rating agency. Fourth, the ease of applying the K-ESG guideline is judged to be high. Fifth, a point to improve is that the government needs to compile industry-average statistics on the diagnostic items in the K-ESG environmental area and publish them on the government's dedicated ESG site. In addition, the weights applied to E, S, and G by industry should be determined and disclosed. This study will help ESG evaluation agencies, corporate management, and ESG managers establish ESG management strategies, and it contributes improvements to be referenced when the K-ESG guideline is revised in the future.
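The weighted aggregation the abstract describes (diagnostic-item scores rolled up into E, S, and G category scores and combined with weights) can be sketched as follows; the item scores, the 0-100 scale, and the category weights are illustrative assumptions, not official K-ESG values.

```python
# Hypothetical sketch of K-ESG-style score aggregation: diagnostic-item
# scores are averaged within each of the E, S, and G categories, then
# combined with category weights. All numbers here are made up.

def kesg_score(item_scores, weights):
    """Weighted total from per-category diagnostic-item scores."""
    total = 0.0
    for category, scores in item_scores.items():
        category_avg = sum(scores) / len(scores)  # average over the items
        total += weights[category] * category_avg
    return round(total, 2)

scores = {"E": [80, 70, 90], "S": [60, 75], "G": [85, 95, 90, 70]}
weights = {"E": 0.35, "S": 0.30, "G": 0.35}  # must sum to 1.0
print(kesg_score(scores, weights))  # -> 78.0
```

The paper's fifth finding, that industry-average statistics and industry-specific E/S/G weights should be published, corresponds to choosing the `weights` argument per industry rather than fixing it globally.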

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.93-111
    • /
    • 2013
  • As Internet and information technology (IT) continues to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by existing conventional information systems and it also refers to the new technologies designed to effectively extract values from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data that are currently receiving the most attention include information available within companies, such as information on consumer characteristics, information on purchase records, logistics information and log information indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as information on the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. 
Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and in industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other focuses on using web search traffic information to observe consumer behavior, for example identifying the product attributes that consumers regard as important or tracking changes in consumers' expectations, but relatively little research has been done in this field. In particular, to the best of our knowledge, hardly any brand-related studies have yet attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand. Web search traffic data show that the quantity of simultaneous searches using certain keywords increases as the relation between them is closer in the consumer's mind, so the relations between keywords can be derived by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers, and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, an innovative product group.
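The co-search network idea in this abstract can be sketched in a few lines: treat each simultaneous-search pair as a weighted link, keep only the strong links, and read brand relations off the resulting graph. The brand names, volumes, and threshold below are invented for illustration, not taken from the paper's Google Trends data.

```python
# Illustrative sketch of deriving brand (and brand-attribute) relations
# from simultaneous-search volumes. All pairs and volumes are made up.

cosearch = {  # ("keyword A", "keyword B") -> simultaneous-search volume
    ("iPad", "Galaxy Tab"): 120,
    ("iPad", "Surface"): 45,
    ("Galaxy Tab", "Surface"): 30,
    ("iPad", "battery life"): 80,        # brand-attribute pair
    ("Galaxy Tab", "battery life"): 20,
}

THRESHOLD = 40  # keep only pairs searched together often enough

edges = [pair for pair, volume in cosearch.items() if volume >= THRESHOLD]

# degree = number of retained links per node; a crude centrality proxy
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

print(sorted(degree.items(), key=lambda kv: -kv[1]))
```

In this toy graph "iPad" ends up most central because it survives the threshold in the most pairs, which is the kind of positioning signal the paper extracts via network analysis.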

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than humans in many fields, including image and speech recognition. In particular, many efforts have been actively made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, service, and education. Major platforms on which complex AI algorithms for learning, reasoning, and recognition can be developed have been opened to the public as open source projects. As a result, technologies and services that utilize them have increased rapidly, which has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing AI-related OSS projects, which have been developed through the online collaboration of many parties. This study searched for and collected a list of major AI-related projects created on Github from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining techniques to topic information, which indicates the characteristics and technical fields of the collected projects. The analysis showed that the number of software development projects per year was below 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of AI-related open source projects increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects), and the number initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing remained at the top in all years, implying that its OSS had been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequently appearing topics. After 2016, however, programming languages other than Python disappeared from the top ten; in their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently. The topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was being developed to apply AI technology in the medical field. Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changed slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both measures have been high. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed. The results of this study can be used as a baseline dataset for more empirical analyses of future technology trends and their convergence.
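The study's two measures, topic appearance frequency and degree centrality on a topic co-occurrence network, can be illustrated with a toy project list; the projects and topics below are invented, not drawn from the study's GitHub data.

```python
from collections import Counter
from itertools import combinations

# Each project carries a list of topic labels, as on GitHub.
# These four projects are fabricated for illustration only.
projects = [
    ["python", "machine-learning", "tensorflow"],
    ["python", "deep-learning", "tensorflow"],
    ["machine-learning", "nlp"],
    ["python", "machine-learning", "nlp"],
]

# Appearance frequency: how often each topic occurs across projects.
frequency = Counter(t for topics in projects for t in topics)

# Co-occurrence network: topics sharing a project are linked.
neighbors = {}  # topic -> set of topics it co-occurs with
for topics in projects:
    for a, b in combinations(topics, 2):
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

# Degree centrality proxy: number of distinct co-occurring topics.
degree_centrality = {t: len(n) for t, n in neighbors.items()}

print(frequency.most_common(3))
print(degree_centrality["machine-learning"])
```

As in the paper, a topic can rank high on one measure and lower on the other: frequency counts raw occurrences, while degree counts distinct neighbors in the network.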

An Exploratory Study on Determinants Affecting R Programming Acceptance (R 프로그래밍 수용 결정 요인에 대한 탐색 연구)

  • Rubianogroot, Jennifer;Namn, Su Hyeon
    • Management & Information Systems Review
    • /
    • v.37 no.1
    • /
    • pp.139-154
    • /
    • 2018
  • R is a free and open-source programming system with a rich and ever-growing set of function libraries developed and contributed by independent end users. It is recognized as a popular tool for handling and analyzing big data sets, and, reflecting these characteristics, it has been gaining popularity among data analysts. However, the antecedents of R technology acceptance have not been studied yet. In this study we identify and investigate cognitive factors contributing to user acceptance of R in an educational environment. We extend the existing technology acceptance model by incorporating social norms and software capability. We found that subjective norm, perceived usefulness, and perceived ease of use positively affect the intention to accept R programming. In addition, perceived usefulness is related to subjective norm, perceived ease of use, and software capability. The main difference between this research and previous work is that the target system is neither stand-alone nor static in the sense of being a final version; R is an evolving, open-source system. We applied the Technology Acceptance Model (TAM) to a target system that is a platform on which diverse applications, such as statistical analysis, big data analysis, and visual rendering, can be performed. The model presented in this work can be useful both for colleges that plan to invest in new statistical software and for companies that need to pursue future installations of new technologies. In addition, we identified a modified version of the TAM, extended with constructs such as subjective norm and software capability. However, one weakness that might inhibit the reliability and validity of the model is the small sample size.

A personalized TV service under Open network environment (개방형 환경에서의 개인 맞춤형 TV 서비스)

  • Lye, Ji-Hye;Pyo, Sin-Ji;Im, Jeong-Yeon;Kim, Mun-Churl;Lim, Sun-Hwan;Kim, Sang-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2006.11a
    • /
    • pp.279-282
    • /
    • 2006
  • IPTV broadcasting services over IP networks are being recognized as a new revenue model, and Korean carriers such as KT and SKT are preparing or running IPTV trial services. Unlike the one-way broadcasting of the past, IPTV advocates two-way broadcasting that emphasizes interaction with the user, so innovative broadcasting services unlike anything to date are expected. However, although it would seem that many telecommunication companies and broadcasters could participate in IPTV services, in reality a few large telecom companies run limited businesses targeting only the subscribers of their own networks. This is because the infrastructure for IPTV services is not yet in place, and there are too many protocols a service developer must know to satisfy the concept of a converged broadcasting-telecommunication network. This paper therefore proposes Open APIs as a means of overcoming this situation. Scenarios for personalized broadcasting were reconstructed by referring to TV-Anytime benchmarking and user scenarios, and from these scenarios the basic and powerful functions of the converged network for IPTV broadcasting services were defined as Open API functions. The broadcasting services here are NDR, EPG, and personalized advertising services; the server for each service resides on the converged network, and because the APIs these servers expose are to be used by other application programs, the most basic functions are defined. In addition, the proposed Open API functions were verified by implementing a personalized broadcasting application service with them. Open APIs are functions published through web services, and they can be used from other networks through a gateway. Each Open API function definition consists of the function name, its function, and its input/output parameters. The user description and content description delivered for the personalized services were defined using the metadata schema specified by the TV-Anytime Forum.
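As a rough illustration of the function definitions the abstract describes (each Open API function is given by its name, its function, and its input/output parameters), the following sketch models one such entry. The `getEPGData` function and its parameters are hypothetical, not taken from the paper's actual API.

```python
from dataclasses import dataclass, field

# Sketch of the three-part Open API definition the abstract describes:
# function name, description of its function, and I/O parameters.
# The example entry below is invented for illustration.

@dataclass
class OpenApiFunction:
    name: str
    description: str
    inputs: dict = field(default_factory=dict)   # parameter name -> type
    outputs: dict = field(default_factory=dict)  # result name -> type

get_epg = OpenApiFunction(
    name="getEPGData",
    description="Return electronic program guide entries for a channel",
    inputs={"channelId": "str", "date": "str"},
    outputs={"programList": "list"},
)
print(get_epg.name, len(get_epg.inputs))
```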


Toward a Sociological Understanding of Koreans in Small Business in the United States (미국에서 한인 자영업에 관한 연구)

  • 최병목
    • Korea journal of population studies
    • /
    • v.19 no.2
    • /
    • pp.139-173
    • /
    • 1996
  • This study attempts to identify the factors behind Korean immigrants' concentration in small business enterprises in the middleman minority sector, including the periphery and core sectors, examining private-wage and self-employed workers in each sector, using the 5 percent public use sample from the 1980 United States census. One out of five Koreans aged 25-64 is engaged in self-employed small business, while the majority of Koreans (four out of five) are in the private wage sector. Contrary to expectations, English language difficulties and inferior education are not the prime factors driving self-employment in small business. Korean self-employed small business owners in both the periphery sector and the core sector occupied the 'middle' strata of the social structure in terms of industry, occupation, earnings, etc.


CIA-Level Driven Secure SDLC Framework for Integrating Security into SDLC Process (CIA-Level 기반 보안내재화 개발 프레임워크)

  • Kang, Sooyoung;Kim, Seungjoo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.5
    • /
    • pp.909-928
    • /
    • 2020
  • From the early 1970s, the US government began to recognize that penetration testing could not assure the security quality of products: results such as identified vulnerabilities and faults can vary depending on the capabilities of the testing team. In other words, no penetration testing team can provide such assurance, because "no vulnerabilities were found" is not the same as "the product has no vulnerabilities." The U.S. government therefore realized that, to improve the security quality of products, the development process itself should be managed systematically and strictly, and from the 1980s it began to publish various standards related to development methodology and evaluation/procurement systems embedding the "security-by-design" concept. Security-by-design means reducing a product's complexity by considering security from the initial phases of the development lifecycle, such as requirements analysis and design, to ultimately achieve product trustworthiness. The security-by-design concept spread to the private sector from 2002 under the name Secure SDLC, through Microsoft and IBM, and is currently used in various fields such as automotive and advanced weapon systems. The problem, however, is that it is not easy to implement in the field, because the standards and guidelines related to Secure SDLC contain only abstract and declarative content. Therefore, in this paper we present a new framework for specifying the level of Secure SDLC desired by an enterprise. Our proposed CIA (functional Correctness, safety Integrity, security Assurance)-level-based security-by-design framework combines the evidence-based security approach with the existing Secure SDLC. Using our methodology, one can, first, quantitatively show the gap in Secure SDLC process level between a company and its competitors. Second, the framework is very useful for building a Secure SDLC in the field, because the detailed activities and documents needed to reach the desired level can be easily derived.
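A minimal sketch of the quantitative gap comparison the framework enables follows; the three axes borrow the paper's CIA naming, but the integer level scale and the scores are illustrative assumptions, not the paper's actual diagnostic items.

```python
# Hypothetical sketch of comparing Secure SDLC process levels along the
# paper's three CIA axes. Levels and scores here are invented.

CIA_AXES = ("correctness", "integrity", "assurance")

def sdlc_gap(ours, competitor):
    """Per-axis level difference (competitor - ours); positive = we lag."""
    return {axis: competitor[axis] - ours[axis] for axis in CIA_AXES}

ours = {"correctness": 2, "integrity": 1, "assurance": 3}
competitor = {"correctness": 3, "integrity": 3, "assurance": 3}
print(sdlc_gap(ours, competitor))
```

A positive entry points to the axis where the desired level has not yet been reached, which is where the framework's detailed activities and documents would be derived.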

Current Development of Company Law in the European Union (유럽주식회사법의 최근 동향에 관한 연구)

  • Choi, Yo-Sop
    • Journal of Legislation Research
    • /
    • no.41
    • /
    • pp.229-260
    • /
    • 2011
  • European Union (EU) law has been a complex but at the same time fascinating subject of study due to its dynamic evolution. In particular, the Lisbon Treaty, which entered into force in December 2009, represents the culmination of a decade of attempts at Treaty reform and harmonisation in diverse sectors. Among the fields of EU private law, company law harmonisation has been one of the most hotly debated issues with regard to the freedom of establishment in the internal market. Due to the significant differences between national provisions on company law, harmonising company law seemed somewhat difficult. However, Council Regulation 2157/2001, legislated in 2001, now provides the basis for the Statute for a European Company (Societas Europaea: SE). The Statute is supplemented by Council Directive 2001/86 on the involvement of employees. The SE Statute is a legal measure intended to contribute to the internal market, and it provides a choice for companies that wish to merge, create a joint subsidiary, or convert a subsidiary into an SE. Through this option, the SE became a corporate form available only to existing companies incorporated in different Member States of the EU. The important question about the meaning of the SE Statute is whether the distinctive characteristics of the SE make it an attractive enough option to ensure significant numbers of SE registrations. In fact, the outcome of the SE Statute is an example of regulatory competition. Regulatory competition in the freedom of establishment has traditionally been between the national statutes of Member States. This time, however, the competition is not between Member States: the Union itself has entered the competition between legal orders and now competes with the company law systems of the Member States. Quite a number of scholars expect that the number of SEs will increase significantly. Of course, there is no evidence of regulatory competition that Korea currently faces. However, because of the increasing volume of international trade and the expansion of regional economic blocs, it is necessary to consider the development of EU company law as an example. In addition to the existing SE Statute, the EU Commission has also proposed a new corporate form, the Societas Privata Europaea (a European private limited-liability company). All of these developments in European company law will help firms make the best choice for company establishment. The Delaware-style development in the EU will foster the race to the bottom, thereby improving the contents of company law. To conclude, the study of the development of European company law is important for understanding the evolution of company law and harmonisation efforts in the EU.
    Key Words: European Union, EU Company Law, Societas Europaea, SE Statute, One-tier System, Two-tier System, Race to the Bottom

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used in various business industries. ABSA analyzes sentiment by aspect for the multiple aspects a text contains, and it is studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the expression in the text that causes the sentiment. For restaurant reviews, for example, the aspects could be food taste, food price, quality of service, mood of the restaurant, and so on. If a review says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, are the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or only on targets. However, even with the same aspects or targets, sentiment analysis may be inaccurate when aspects or sentiments are divided, or when sentiment exists without a target. Consider the sentence "Pizza and the salad were good, but the steak was disappointing." Although its aspect is limited to "food," conflicting sentiments coexist. Likewise, in a sentence such as "Shrimp was delicious, but the price was extravagant," the target is "shrimp," yet opposite sentiments coexist depending on the aspect. Finally, a sentence like "The food arrived too late and is cold now" has no target (NULL) but conveys a negative sentiment toward the aspect "service." Failure to consider both aspects and targets in such cases creates a dual dependency problem. To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified limitations of existing TASD research: local contexts are not fully captured, and the F1-score drops dramatically with certain epoch and batch-size settings. The existing model excels at capturing the overall context and the relations between words, but it struggles with phrases in the local context and is relatively slow to train. Therefore, this study tries to improve the model's performance. To this end, we added an auxiliary loss for aspect-sentiment classification, computed through CNN (Convolutional Neural Network) layers constructed in parallel with the existing model. Whereas existing models analyze aspect-sentiment through BERT encoding, pooler, and linear layers, this research adds CNN layers with adaptive average pooling, and training proceeds by adding the auxiliary aspect-sentiment loss to the existing loss. In other words, during training, the auxiliary loss computed through the CNN layers allows the local context to be captured more precisely. After training, the model performs aspect-sentiment analysis through the existing method. To evaluate the model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to the existing models. With a batch size of 8 and 5 epochs, the gap was largest: F1-scores of 29 for the existing models versus 45 for this study. Even when batch size and epochs were adjusted, the F1-scores remained higher than those of the existing models. Thus the model can be trained effectively even with small batch and epoch numbers, making it useful in situations where resources are limited. Through this study, aspect-based sentiments can be analyzed more accurately. Through various business uses, such as product development or marketing strategy, both consumers and sellers will be able to make efficient decisions. In addition, because the model builds on a pre-trained model and recorded a relatively high F1-score even with limited resources, it is believed that it can be fully trained and utilized by small businesses that do not have much data.
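The core training-time idea, adding a CNN-branch auxiliary loss to the existing aspect-sentiment loss, reduces to a weighted sum of two objectives. The sketch below shows only that combination step; the weighting factor and the loss values are illustrative, and the BERT encoder and parallel CNN layers that would produce these losses are omitted.

```python
# Minimal sketch of the auxiliary-loss scheme: the total training loss
# is the existing aspect-sentiment loss plus a weighted CNN-branch loss.
# The weight 0.5 and the loss values are illustrative assumptions.

def total_loss(main_loss, cnn_aux_loss, aux_weight=0.5):
    """Combine the main objective with the CNN auxiliary objective."""
    return main_loss + aux_weight * cnn_aux_loss

print(total_loss(0.8, 0.4))
```

At inference time only the main branch is used, which matches the abstract's statement that the model analyzes aspect-sentiment "through the existing method" after training.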

The Study on the Influence of Capstone Design & Field Training on Employment Rate: Focused on Leaders in INdustry-university Cooperation(LINC) (캡스톤디자인 및 현장실습이 취업률에 미치는 영향: 산학협력선도대학(LINC)을 중심으로)

  • Park Namgue
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.4
    • /
    • pp.207-222
    • /
    • 2023
  • To improve employment rates, most universities operate programs to strengthen students' employment and entrepreneurship capabilities, regardless of whether they are selected for the Leaders in INdustry-university Cooperation (LINC) program; non-metropolitan universities in particular are making all-out efforts to raise their employment rates. To overcome the limitations of university establishment type and university location, which strongly affect the employment rate, universities operate startup education & startup support programs to strengthen employment and entrepreneurship, together with capstone design & field training as industry-academia-linked education programs. Although previous studies have verified effectiveness centered on LINC, no longitudinal study based on public disclosure indicators has been reported that covers all of the factors affecting the employment rate: university factors, startup education & startup support, and capstone design & field training as industry-academia-linked education programs. This study targets 116 universities that satisfy the conditions, based on recently released university disclosure indicators from 2018 to 2020 covering university factors, startup education & startup support, and capstone design & field training. We analyzed the differences between the 51 LINC participating universities and the 64 non-participating universities. In addition, considering that public indicators contain no historical information on students' overlapping participation, we drew on exposure effect theory, which holds that long-term exposure to employment and entrepreneurship competency enhancement programs affects the employment rate through competency enhancement. On this basis, the effectiveness of the 2nd LINC+ (socially customized Leaders in Industry-University Cooperation) from 2017 to 2021 was verified through a longitudinal causal relationship analysis. The study found that the startup education & startup support programs and the capstone design & field training programs of the 2nd LINC+ did not affect the employment rate. The longitudinal analysis reconfirmed that universities in metropolitan areas still have higher employment rates than universities in non-metropolitan areas due to existing university factors, and that private universities have higher employment rates than national universities. Among the employment and entrepreneurship competency strengthening programs, the number of students completing entrepreneurship courses, the number completing capstone design, the amount of capstone design funding, and the number of dedicated faculty members partially affected the employment rate by year, while field training had no effect in any year. Long-term exposure to the entrepreneurship capacity building program did not affect the employment rate. Therefore, it was reconfirmed that, to improve university employment rates, the limitations of non-metropolitan locations and of national and public universities must be overcome. To this end, among programs for strengthening employment and entrepreneurship capabilities, it is important to strengthen entrepreneurship through participation in entrepreneurship lectures and to actively and confidently introduce the capstone design program, which reinforces the concept of problem-based learning (PBL). For field training to actually affect the employment rate, a substantive program must be pursued through reorganization of the overall academic system and organization.
