• Title/Summary/Keyword: Research Information Systems


Developing an Evaluation System for Certifying the Robot-Friendliness of Buildings through Focus Group Interviews and the Analytic Hierarchy Process (로봇 친화형 건축물 인증 지표 개발 : 초점집단면접(FGI)과 분석적 계층화 과정(AHP)의 활용)

  • Lee, Kwanyong;Gu, Hanmin;Lee, Yoonseo;Jung, Minseung;Yoon, Dongkeun;Kim, Kabsung
    • Journal of Cadastre & Land InformatiX / v.52 no.2 / pp.17-34 / 2022
  • With the rapid advances of the Fourth Industrial Revolution, human-robot interaction has been attracting increasing attention, and robots are being actively adopted in building systems and facilities. In this study, we developed certification indicators for robot-friendly buildings. Because these indicators were being developed for the first time, we focused only on commercial buildings. We conducted exploratory research using focus group interviews and the analytic hierarchy process (AHP). First, the concept of the robot-friendly building was defined through focus group interviews, and the requirements were categorized into the appropriateness of operating facilities and systems and the appropriateness of architectural and robot operating systems and networks. Next, the relative importance of the 23 evaluation items was calculated using the AHP; the average weight was 4.4, with a minimum of 2.0 and a maximum of 11.3. This study is significant because it collected the basic data needed to develop a first-of-its-kind evaluation system for certifying the robot-friendliness of buildings using scientific methods.
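
A minimal sketch of how AHP priority weights are typically derived from a pairwise-comparison matrix (the principal-eigenvector method) is shown below. The abstract does not disclose the paper's comparison data, so the 3x3 matrix and criterion names here are hypothetical illustrations, not the study's actual indicators or weights.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix on Saaty's 1-9 scale
# (NOT the paper's data; three illustrative criteria only).
criteria = ["robot operating systems", "building facilities", "networks"]
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal eigenvector -> relative importance weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights = weights / weights.sum()

# Consistency ratio; CR < 0.1 is the usual acceptance threshold.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # random index for n criteria
cr = ci / ri

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")
```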

A Case Study of New Franchise Brand Launching Through Proactive Market Response: BEERBARKET'S Successful Story of INTO FRANCHISE SYSTEMS (선행적 대응을 통한 프랜차이즈 뉴비즈니스 런칭 사례 : (주)인토외식산업의 맥주바켓 성공사례)

  • Seo, Min-Gyo
    • The Korean Journal of Franchise Management / v.3 no.1 / pp.111-129 / 2012
  • The domestic franchise industry is a promising business, growing at more than 10% per year and emerging as a core part of retail. In addition, owing to socio-cultural phenomena such as the retirement of the baby-boom generation, the growth of the franchise industry is expected to continue for some time. However, domestic franchising shows limits in securing new franchisees because franchisors and franchisees are concentrated in only a few industries. Franchisors whose industries have become saturated expand into new industries by launching second and third brands, but in reality these attempts fail more often than they succeed. This study therefore presents a new brand development approach and case analysis focused on the successful launch of BEERBARKET by INTO FRANCHISE SYSTEMS, Inc. The case analysis shows that a franchise headquarters should carry out environmental surveys and analyses continuously and objectively, using systematic research methods and information, and that, based on the results, the business model of a new brand should be established so that it gains recognition from both the industry and customers. Franchise headquarters operating in saturated, competitive environments particularly need the ability to respond sensitively to environmental changes in their business activities. The BEERBARKET launch by INTO FRANCHISE SYSTEMS, Inc. illustrates proactive market response and suggests the need for further study of how franchisors should respond to environmental changes.

Methodology for Estimating Highway Traffic Performance Based on Origin/Destination Traffic Volume (기종점통행량(O/D) 기반의 고속도로 통행실적 산정 방법론 연구)

  • Howon Lee;Jungyeol Hong;Yoonhyuk Choi
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.23 no.2 / pp.119-131 / 2024
  • Understanding accurate traffic performance is crucial for ensuring efficient highway operation and providing a sustainable mobility environment. However, immediate and precise estimation of highway traffic performance faces challenges because of infrastructure and technological constraints, data processing complexities, and limitations in using integrated big data. This paper introduces a framework for estimating traffic performance by analyzing real-time data sourced from toll collection systems and dedicated short-range communications used on highways. In particular, this study addresses data errors arising from segmented trip information, which distort individual vehicle travel trajectories, and establishes a more reliable Origin-Destination (OD) framework. The study revealed that, for accurate estimation, consecutive segments of an individual vehicle's travel must be linked into a single trip when they occur within a 20-minute window. By linking these trip ODs, the daily average highway traffic performance for South Korea was estimated to be 248,624 thousand vehicle-kilometers per day, an increase of approximately 458 thousand vehicle-kilometers per day over the 248,166 thousand vehicle-kilometers per day reported in the highway operations manual. This outcome highlights the potential for supplementing previously omitted traffic performance through the methodology proposed in this study.
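
The core of the estimation is a trip-linkage rule: consecutive travel segments of the same vehicle are merged into one trip when the gap between them is under 20 minutes, and traffic performance is then the sum of vehicle-kilometers. A minimal sketch of that rule follows; the record fields and data structures are assumptions for illustration, not the paper's actual pipeline over toll-collection and DSRC data.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical per-segment record reconstructed from toll/DSRC data.
@dataclass
class TripSegment:
    vehicle_id: str
    entry_time: datetime
    exit_time: datetime
    origin: str
    destination: str
    distance_km: float

LINK_WINDOW = timedelta(minutes=20)  # threshold reported in the abstract

def link_trips(segments: list[TripSegment]) -> list[list[TripSegment]]:
    """Group consecutive segments of one vehicle into a single trip
    when the time gap between them is under 20 minutes."""
    segments = sorted(segments, key=lambda s: (s.vehicle_id, s.entry_time))
    trips: list[list[TripSegment]] = []
    for seg in segments:
        last = trips[-1] if trips else None
        if (last
                and last[-1].vehicle_id == seg.vehicle_id
                and seg.entry_time - last[-1].exit_time < LINK_WINDOW):
            last.append(seg)          # same journey, continue the trip
        else:
            trips.append([seg])       # new trip begins
    return trips

def vehicle_kilometers(trips: list[list[TripSegment]]) -> float:
    """Total traffic performance in vehicle-kilometers."""
    return sum(seg.distance_km for trip in trips for seg in trip)
```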

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple web sources, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect documents relevant to a "subject-predicate" query from Wikipedia, Naver Encyclopedia, and Naver News, and classify the suitable documents; 2) determine whether each sentence is suitable for information extraction and derive a confidence score; 3) based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial-intelligence speaker; compared with the baseline, the proposed system shows higher performance indices. The contribution of this study is a sequence-tagging model based on a bidirectional LSTM-CRF that uses the predicate feature of the query, yielding a robust model that maintains high recall even across the heterogeneous document types collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from various types of unstructured documents compared to the baseline model; previous research suffered from poor performance when extracting information from document types different from the training data. In addition, by predicting the suitability of documents and sentences before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer, which is meaningful because it provides a way to maintain precision even in a real web environment. Because the target is unstructured documents on the real web, there is no guarantee that a document contains the correct answer, and previous machine reading comprehension studies show low precision in this setting because they frequently attempt to extract an answer even from documents without one; the policy of predicting document- and sentence-level extraction suitability therefore contributes to maintaining extraction performance on the real web. The limitations of this study and directions for future research are as follows. First, data preprocessing: the unit of knowledge extraction is derived through morphological analysis based on the open-source KoNLPy Python package, and extraction can go wrong when the morphological analysis is incorrect, so a more advanced morphological analyzer is needed to improve extraction performance. Second, entity ambiguity: the information extraction system cannot distinguish entities that share the same name but refer to different things; if several people with the same name appear in the news, the system may not extract information about the intended query, so future research needs measures for disambiguating entities with identical names. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial-intelligence speaker and built an evaluation data set of 800 documents (400 questions x 7 articles per question: 1 Wikipedia, 3 Naver Encyclopedia, 3 Naver News), judging whether each document contains a correct answer. To ensure external validity, it would be desirable to evaluate the system on more queries, but this is a costly manual activity; future research should evaluate the system on a larger query set, and it is also necessary to develop a Korean benchmark data set for information extraction over multi-source web documents so that results can be evaluated more objectively.
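
The abstract names a bidirectional LSTM-CRF sequence tagger that uses the query's predicate as a feature. Below is only a minimal sketch of that kind of model, assuming PyTorch and the third-party pytorch-crf package; concatenating a predicate embedding to every token embedding is one simple way to inject the predicate feature, and the paper's actual architecture and hyperparameters may differ.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party pytorch-crf package (assumed available)

class BiLSTMCRFTagger(nn.Module):
    """Minimal BiLSTM-CRF sequence tagger with a predicate feature."""

    def __init__(self, vocab_size, num_predicates, num_tags,
                 word_dim=100, pred_dim=25, hidden_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.pred_emb = nn.Embedding(num_predicates, pred_dim)
        self.lstm = nn.LSTM(word_dim + pred_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.to_tags = nn.Linear(2 * hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, tokens, predicate):
        # tokens: (batch, seq_len), predicate: (batch,)
        w = self.word_emb(tokens)
        # Broadcast the query predicate embedding to every token position.
        p = self.pred_emb(predicate).unsqueeze(1).expand(-1, tokens.size(1), -1)
        h, _ = self.lstm(torch.cat([w, p], dim=-1))
        return self.to_tags(h)

    def loss(self, tokens, predicate, tags, mask):
        # Negative log-likelihood of the gold tag sequence under the CRF.
        return -self.crf(self._emissions(tokens, predicate), tags, mask=mask)

    def predict(self, tokens, predicate, mask):
        # Viterbi decoding of the best tag sequence for each sentence.
        return self.crf.decode(self._emissions(tokens, predicate), mask=mask)
```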

The Impact of Perceived Risks Upon Consumer Trust and Purchase Intentions (인지된 위험의 유형이 소비자 신뢰 및 온라인 구매의도에 미치는 영향)

  • Hong, Il-Yoo B.;Kim, Woo-Sung;Lim, Byung-Ha
    • Asia Pacific Journal of Information Systems / v.21 no.4 / pp.1-25 / 2011
  • Internet-based commerce has undergone explosive growth over the past decade as consumers find it more economical and more convenient to shop online. Nevertheless, the shift from offline to online shopping has caused consumers to worry about issues such as private information leakage, online fraud, discrepancies in product quality and grade, and unsuccessful delivery. Numerous studies have examined the role of perceived risk as a chief barrier to online purchases and the theoretical relationships among perceived risk, trust, and purchase intentions. However, most studies empirically investigate the effects of trust on perceived risk, with little attention devoted to the effects of perceived risk on trust. While the influence of trust on perceived risk is worth studying, the influence in the opposite direction is equally important, as it offers insight into perceived risk as an inhibitor of trust. According to Pavlou (2003), the primary source of perceived risk is either the technological uncertainty of the Internet environment or the behavioral uncertainty of the transaction partner. Because of such uncertainty, greater worry over perceived risk may negatively affect trust. For example, if a consumer who sends sensitive transaction data over the Internet is concerned that his or her private information may leak out because of a lack of security, trust may decrease (Olivero and Lunt, 2004). By the same token, if the consumer feels that the online merchant could profit by behaving opportunistically, taking advantage of the remote, impersonal nature of online commerce, the merchant is unlikely to be trusted. That is, the more likely the potential danger is to occur, the less trust there is and the greater the need to control the transaction (Olivero and Lunt, 2004). In summary, a review of related studies indicates that while some researchers have examined the influence of overall perceived risk on trust, little attention has been given to the effects of different types of perceived risk. In this context, the present research addresses how trust is affected by different types of perceived risk. We classified perceived risk into six types based on the literature and empirically analyzed the impact of each type on consumer trust in an online merchant and, in turn, on purchase intentions. To meet these objectives, we developed a conceptual model depicting the nomological structure of the relationships among the research variables and formulated seven hypotheses. The model and hypotheses were tested with an empirical analysis based on a questionnaire survey of 206 college students. Reliability was evaluated via Cronbach's alpha, the minimum of which was 0.73, so the questionnaire items are all deemed reliable. In addition, the results of a confirmatory factor analysis (CFA) designed to check the validity of the measurement model indicate that the convergent, discriminant, and nomological validities of the model are all acceptable. The structural equation modeling analysis used to test the hypotheses yielded the following results. Of the first six hypotheses (H1-1 through H1-6), which examined the relationships between each risk type and trust, three were supported: H1-1 (performance risk → trust), H1-2 (psychological risk → trust), and H1-5 (online payment risk → trust), with path coefficients of -0.30, -0.27, and -0.16, respectively. Finally, H2 (trust → purchase intentions) was supported with a relatively high path coefficient of 0.73. The empirical study offers the following findings and implications. First, it was performance risk, psychological risk, and online payment risk that had a statistically significant influence on consumer trust in an online merchant. This implies that a consumer may find an online merchant untrustworthy if either the product quality or the product grade does not match his or her expectations. Online merchants, including digital storefronts and e-marketplaces, are therefore advised to pursue a strategy of identifying target customers and offering products that best meet those customers' performance and psychological needs, and to make it widely known that their products are of as good quality and grade as those purchased from offline department stores. In addition, it may be inferred that today's online consumers remain concerned about the security of the online commerce environment because of repeated incidents of hacking and private information leakage; online merchants should remove potential vulnerabilities and post notices emphasizing that their websites are secure. Second, consumers' overall trust was found to have a statistically significant influence on purchase intentions. This finding, consistent with numerous prior studies, suggests that increased sales will become a reality only with enhanced consumer trust.
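
The reliability figure the abstract reports (a minimum Cronbach's alpha of 0.73) comes from the standard alpha formula. A small sketch of that computation is shown below with hypothetical Likert responses; it is not the study's survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses (5 respondents x 4 items), not the paper's data.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```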

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.99-118 / 2015
  • Object-oriented programming languages have been widely selected for developing modern information systems. The use of object-oriented (OO) concepts has reduced the effort of reusing pre-existing code, and OO concepts have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support features of object-oriented programming. The Unified Modeling Language (UML) has become a de facto standard for information system designers because it provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider relationships between classes; based on an explicit and clear representation of classes, the UML conceptual model captures the attributes and methods needed to guide software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. Representing part-whole relationships is natural in real-world domains, since many physical objects are perceived as part-whole structures, and even abstract concepts such as roles are easily identified through part-whole perception. Representing part-whole relationships in UML therefore seems reasonable and useful. However, the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregate or a composite association. Research on developing such procedural knowledge is meaningful and timely, because a misleading perception of a part-whole relationship is difficult to filter out during initial conceptual modeling and ultimately degrades system usability. The current method for identifying and classifying part-whole relationships relies mainly on linguistic expressions: the idea is that a phrase expressing "has-a" constructs a part-whole perception between objects, and if the relationship is strong, the association is classified as a composite association; otherwise, it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general; nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not yet accumulated sufficient results to resolve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization: whereas traditional approaches aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the modeling process itself. Traditional theories on evaluating part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations; such qualification remains important, but without a practical alternative, after-the-fact inspection is of limited help to modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings, and it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating different forms of guidelines, including plugins for integrated development environments.
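
For readers unfamiliar with the aggregation/composition distinction the experiment asks modelers to make, the sketch below illustrates it in plain object-oriented code: under composition the part's lifecycle is bound to the whole, while under aggregation the part exists independently. The Car/Engine/Wheel example is a common textbook illustration and is not taken from the paper or its experimental material.

```python
class Engine:
    """A part whose lifecycle is bound to its whole (composition)."""
    def __init__(self, power_kw: float):
        self.power_kw = power_kw

class Wheel:
    """A part that exists independently of any whole (aggregation)."""
    def __init__(self, diameter_in: float):
        self.diameter_in = diameter_in

class Car:
    def __init__(self, power_kw: float, wheels: list[Wheel]):
        # Composition: the Car creates and owns its Engine; in UML this is
        # drawn with a filled diamond, and the Engine should not outlive the Car.
        self.engine = Engine(power_kw)
        # Aggregation: the wheels are supplied from outside and may be shared
        # or reused; in UML this is drawn with a hollow diamond.
        self.wheels = wheels

spare_wheels = [Wheel(17.0) for _ in range(4)]
car = Car(power_kw=110.0, wheels=spare_wheels)
del car  # the wheels still exist if the car is discarded; the engine does not
print(len(spare_wheels), "wheels remain")
```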

Supporting Policy for GeoSpatial Information Convergence Industry by Comparing Laws about Convergence Industry (융합산업 관련 법제도 비교를 통한 공간정보융합산업 지원방안)

  • Song, Ki Sung;Woo, Hee Sook;Kim, Byung Guk;Hwang, Jeong Rae
    • Spatial Information Research / v.23 no.6 / pp.9-17 / 2015
  • A convergence industry combines technologies or industries of the same or different types, thereby maintaining or expanding existing value or creating new value. As convergence industries draw greater attention around the world, each country is making substantial efforts to support them. Geospatial information is a representative convergence industry: it serves as a basis for other industrial fields by being linked and fused with other industries and technologies, and it is widely recognized as a promising industry likely to lead the national economy in the future. For the geospatial information industry, it is necessary to analyze the distinctive features and obstacles of convergence industries, because such analysis can induce smooth convergence among different industries. In this paper, we identify support elements through a comparative analysis of the legal systems governing related convergence industries (nanotechnology, information and communication technology, culture technology, etc.) and, based on this analysis, propose policy support for the geospatial information convergence industry. We expect this study to serve as basic data for policies that effectively support the geospatial information convergence industry.

Analyzation and Improvements of the Revised 2015 Education Curriculum for Information Science of Highschool: Focusing on Information Ethics and Multimedia (고등학교 정보과학의 2015 개정 교육과정에 대한 분석 및 개선 방안: 정보윤리와 멀티미디어를 중심으로)

  • Jeong, Seungdo;Cho, Jungwon
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.8 / pp.208-214 / 2016
  • With rising interest in intelligent information technology built on artificial intelligence and big data, countries around the world, including the United States, the United Kingdom, and Japan, have launched national investment programs in preparation for the fourth industrial revolution, centered on the software industry. Korea belatedly recognized the importance of software and introduced the 2015 revised educational curriculum for elementary and secondary informatics subjects. This paper thoroughly analyzes the new information science curriculum for high schools and then suggests improvements in the areas of information ethics and multimedia. The analyzed curriculum applies to more than twenty science high schools and schools for gifted students, which are expected to play a leading role in scientific research in Korea. In the coming artificial intelligence era, in which dependence on information technology will only increase, information ethics education for the talented students who will build and use artificial intelligence systems should be strongly emphasized, with a focus different from that of the existing curriculum. Multimedia education centered on digital representation principles and compression techniques for images, sound, and video, which are commonly used in everyday life, should also be included in the 2015 revised curriculum. In this way, the goal of the 2015 revised educational curriculum can be achieved: fostering innovation and the efficient resolution of problems in real life and diverse academic fields based on the fundamental concepts, principles, and technology of computer science.

The Study on Threats of Information Security and Their Solutions in the Fourth Industrial Revolution (4차 산업혁명 시대에 정보보안의 위협요인과 대응방안에 대한 연구)

  • Cho, Sung-Phil
    • Korean Security Journal / no.51 / pp.11-35 / 2017
  • The third industrial revolution, characterized by factory automation and informatization, is giving way to the fourth industrial revolution, the era of superintelligence and hyperconnectivity, through rapid technological innovation. The most important resources in the fourth industrial revolution are information and data, since most industrial and economic activities will be shaped by information. We can therefore expect more information than ever to be utilized, shared, and transferred through networks and systems in real time, so the significance of information management and security will also grow. As the importance of information resource management and security, the core of the fourth industrial revolution, increases, threats to information security are also growing, and security incidents such as data breaches occur more often. Varied and thorough countermeasures are needed to protect information resources from security risks, because information breaches and incidents seriously damage brand image and cause huge financial losses to organizations. The purpose of this study is to survey general trends in data breaches and incidents that pose serious threats to information security, and to provide reasonable countermeasures for protecting data against nine attack patterns and other risk factors after identifying the characteristics of each of these attack patterns.

A Quality Management Model for Consumer-oriented Spatial Information (사용자 관점의 융·복합 공간정보 품질관리 방안 연구)

  • Choi, Jae-Yeon;Kim, Eun-Hyung
    • Journal of Cadastre & Land InformatiX / v.50 no.1 / pp.47-62 / 2020
  • As the demand for and applications of spatial information increase, quality management has been raised as an important issue from several angles. This study suggests a quality management method for consumer-oriented spatial information that can deliver consumer satisfaction. Despite this demand, relatively little attention has been paid to the quality of spatial information: most spatial information producers have so far kept their own independent quality management systems and standards, and this siloed structure of systems and regulations has made it difficult to reflect consumers' varied and changing demands. The explosive increase in spatial information products also creates quality-related problems, such as limited time and budget for consumer-oriented quality management. To solve these problems, this study suggests managing a minimum set of basic spatial information products that can guarantee better quality when other products are combined with them. Because each spatial information product can include several sub-products, it has intrinsic characteristics relevant to quality management, such as geographic-accuracy relationships between products when they are combined and a hierarchical structure within each product. To demonstrate the usability of the model, the case of the National Spatial Data Infrastructure Center is examined, because the Center collects and distributes an enormous amount of spatial information to the public and private sectors.