• Title/Summary/Keyword: Service providing system

Busan Tourism Industry applying OECD Tourism Policy and ICT Convergence Platform (OECD 관광정책과 ICT 융합 플랫폼을 적용한 부산관광산업)

  • Lim, Yong-Suk;Jung, Ho-Jin;Lee, Jung-Won
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.12 / pp.871-879 / 2017
  • The purpose of this study is to propose a direction for the Busan tourism industry that applies the 2016 OECD tourism policy and an ICT convergence platform. The OECD proposed three policies to promote the tourism industry: first, to maintain the competitiveness of the tourism industry and improve its efficiency and sustainability; second, to establish a seamless traffic system; and third, to build a response to the sharing economy. Centering on these three OECD policies, we outline the developmental possibilities of tourism in Busan. At the same time, we argue for the need to build an ICT convergence platform that will help foster the industry. In building the ICT convergence platform, we focus in particular on the need for: 1. sharing and creating experience-based interactive content on the software side, and 2. developing a high-quality user experience (UX) and providing data-analysis-based customized services on the hardware side. In addition, we call for the establishment of a Tourism Promotion Agency for the continuous performance management of the Busan tourism industry. The study ultimately suggests that building an ICT convergence platform based on the OECD tourism policy can be expected to deliver high impact at low cost for both consumers and suppliers in the tourism industry.

A study on the development of a ship-handling simulation system based on actual maritime traffic conditions (선박조종 시뮬레이터를 이용한 연안 해역 디지털 트윈 구축에 연구)

  • Eunkyu Lee;Jae-Seok Han;Kwang-Hyun Ko;Eunbi Park;Kyunghun Park;Seong-Phil Ann
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2023.05a / pp.200-201 / 2023
  • Digital twin technology creates a virtual counterpart of the real world to minimize the cost of solving real-world problems, and it is used in many fields; in the maritime domain it is actively applied to large-scale systems such as ships and offshore plants. In this paper, we build a digital twin of coastal waters using a ship-handling simulator. The resulting digital twin can be used to safely manage Korea's coastal waters, where maritime traffic is complex, by providing actual maritime traffic data. It can also be used to develop and advance technologies related to maritime autonomous surface ships and intelligent maritime traffic information services in coastal waters. In addition, it can serve as 3D-based monitoring equipment for areas where physical monitoring is difficult but real-time maritime traffic monitoring is necessary, and it can provide functions for safely managing maritime traffic situations, such as aerial views of ports and control areas and bridge views or blind-sector views of ships in operation.
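
The abstract describes feeding actual maritime traffic data into a simulator-based coastal digital twin that exposes views such as aerial or blind-sector views. As a rough, hypothetical sketch only (class names, fields, and values below are assumptions, not taken from the paper), recorded traffic reports might be mirrored into a twin scene like this:

```python
# Hypothetical sketch (not the paper's system): replaying recorded maritime
# traffic into a digital-twin scene for monitoring. All names are assumptions.
from dataclasses import dataclass

@dataclass
class TrafficReport:              # one AIS-style position report
    ship_id: str
    lat: float
    lon: float
    speed_knots: float
    heading_deg: float

class CoastalTwinScene:
    """Minimal stand-in for a simulator scene that mirrors real traffic."""
    def __init__(self):
        self.ships = {}

    def update(self, report: TrafficReport):
        # Place or move the virtual ship to match the real-world report.
        self.ships[report.ship_id] = report

    def aerial_view(self):
        # A real twin would render aerial/bridge/blind-sector views;
        # here we simply list current positions.
        return [(s.ship_id, s.lat, s.lon) for s in self.ships.values()]

if __name__ == "__main__":
    scene = CoastalTwinScene()
    scene.update(TrafficReport("KR-001", 35.08, 129.04, 12.3, 87.0))
    scene.update(TrafficReport("KR-002", 35.10, 129.10, 8.1, 190.0))
    print(scene.aerial_view())
```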

Present Status on the Pesticide Residue Monitoring Program of South Korea and Its Improvement (한국의 잔류농약 모니터링 프로그램 현황과 개선)

  • Lee, Mi-Gyung
    • Journal of Food Hygiene and Safety / v.34 no.3 / pp.219-226 / 2019
  • This study was conducted to understand the overall status of South Korea's monitoring program for pesticide residues in foods, and propositions for its improvement were made. From this study, the status of the program can be summarized as follows. In South Korea, the Ministry of Food and Drug Safety (MFDS) is responsible for overall control of pesticide residue monitoring. Depending on the time of monitoring (sampling at the distribution or production step), the responsible government agency differs: the MFDS, the Regional Offices of Food and Drug Safety, and local governments are responsible for monitoring foods at the distribution step, while the National Agricultural Products Quality Management Service (NAQS) and local governments are responsible for monitoring foods at the production step (and partially at the sale and distribution steps). According to the purpose of monitoring, domestic monitoring programs can be divided into two types: the MFDS's "Residue Survey" and the NAQS's "National Residue Survey" are conducted mainly for risk assessment, while the various monitoring programs of the Regional Offices of Food and Drug Safety and local governments are conducted mainly for regulation. For imported foods, monitoring should be conducted at both the customs clearance and distribution steps: the MFDS and the Regional Offices of Food and Drug Safety are responsible for the former, and local governments are also responsible for the latter. However, it appeared that systematic and consistent monitoring programs are not being conducted for imported foods at the distribution step. Based on the information described above and the more detailed information included in this paper, the following proposals for improving the monitoring program were put forward: i) further clarification of the purpose of each monitoring program, ii) strengthening of the monitoring program for imported foods, and iii) providing the public with monitoring results through publication of an annual report and a database. An exhaustive review of the pesticide residue monitoring program and efforts for its improvement are needed in order to assure both food safety and the success of the recently introduced positive list system (PLS).

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to providing customized services to users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for a massive amount of unstructured log data, and executing the considerable number of functions needed to categorize and analyze the stored unstructured log data, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process using the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and it can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases make it difficult to expand nodes by distributing the stored data across them when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log data insert performance evaluation for various chunk sizes.
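
As a minimal illustration of why a schema-free document store suits unstructured logs, the sketch below (not the authors' code; the connection string, database and collection names, and log fields are assumptions) stores differently shaped bank log records in one MongoDB collection via pymongo:

```python
# Illustrative sketch only: a toy 'log collector' that tags and stores
# unstructured log records in MongoDB. All names and fields are assumptions.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
log_store = client["bank_logs"]["raw_logs"]          # hypothetical db/collection

def collect(record: dict) -> None:
    """Tag one log record with a timestamp and store it."""
    record.setdefault("collected_at", datetime.now(timezone.utc))
    # In the paper's design, logs needing real-time graphs also go to MySQL;
    # that path is omitted to keep the sketch focused on the MongoDB side.
    # Documents with different fields can coexist in a single collection,
    # which is what makes the schema-free model convenient for unstructured logs.
    log_store.insert_one(record)

collect({"type": "transaction", "branch": "A01", "amount": 15000})
collect({"type": "login", "user": "u123", "ip": "10.0.0.7", "ok": True})
```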

Overview of Real-time Visibility System for Food (Livestock Products) Transportation Systems on HACCP Application and Systematization (축산물 유통단계의 HACCP 적용과 체계화를 위한 실시간 관제시스템에 대한 현황)

  • Kim, Hyoun-Wook;Lee, Joo-Yeon;Hong, Wan-Soo;Hwang, Sun-Min;Lee, Victor;Rhim, Seong-Ryul;Paik, Hyun-Dong
    • Food Science of Animal Resources / v.30 no.6 / pp.896-904 / 2010
  • HACCP is a scientific and systematic program that identifies specific hazards and specifies measures to control them in order to ensure the safety of foods. Transportation of livestock and livestock products is one of the vulnerable sectors for food safety in Korea, as meats are transported by truck in the form of carcasses or packaged meat in boxes. Applying HACCP to distribution, in particular transportation, and accelerating its adoption are regarded as important for ultimately providing consumers with safe livestock products. To achieve this goal, practical tools for HACCP application should be developed. Supply chain management (SCM) is a holistic and strategic approach to demand, operations, procurement, and logistics process management. SCM has been applied beneficially in several industries, notably vehicle manufacturing and the retail trade. The HACCP-based real-time visibility system for livestock distribution using a wireless application (WAP) is a centralized management system that enables temperature control and HACCP management in real time during livestock transportation. With it, the application of HACCP to livestock distribution (transportation, storage, and sale) can be activated. Using this system, HACCP management can be made easier, and the distribution of safe livestock products can be achieved.
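
As a toy illustration of the real-time monitoring idea (not the system described in the paper; the critical limit and record fields are assumptions for the example), a deviation check on transport temperature readings might look like this:

```python
# Minimal sketch: check truck temperature readings against an assumed HACCP
# critical limit and flag deviations as they arrive.
CRITICAL_LIMIT_C = 5.0   # assumed chilled-transport limit, for illustration only

def check_reading(truck_id: str, temperature_c: float) -> str:
    """Return a status line; a real system would alert the operator in real time."""
    if temperature_c > CRITICAL_LIMIT_C:
        return f"ALERT: {truck_id} at {temperature_c:.1f} C exceeds {CRITICAL_LIMIT_C} C"
    return f"OK: {truck_id} at {temperature_c:.1f} C"

for truck, temp in [("TRK-01", 3.8), ("TRK-02", 6.4)]:
    print(check_reading(truck, temp))
```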

A Study on Practices and Improvement Factors of Financial Disclosures in early stages of IFRS Adoption - An Integrative Approach of Korean Cases: Embracing Views of Reporting Entities and Users of Financial Statements (IFRS 공시 실태 개선방안에 대한 소고 - 보고기업, 정보이용자 요인을 고려한 통합적 접근 -)

  • Kim, Hee-Suk
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.7 no.2 / pp.113-127 / 2012
  • From the end of the first quarter of 2012, Korean mandatory firms began releasing financial reports conforming to the K-IFRS (Korean adopted International Financial Reporting Standards). Major characteristics of IFRS, such as its 'principles-based' features, consolidated reporting, 'fair value' measurement, and increased pressure for non-financial disclosures, have resulted in brief and varied disclosure practices in the main body of each statement and a vast amount of required note descriptions. Meanwhile, a host of previous studies on IFRS disclosures have taken regulatory and/or 'complete information' perspectives, mainly focusing on suggesting further enforcement of strengthened requirements and providing guidelines for specific treatments. As an extension of those findings and suggestions, this study takes an integrative approach embracing the views of reporting entities and users of financial information. In spite of the state-driven efforts for faithful representation and comparability of corporate financial reports, a review of disclosure practices for fiscal years 2010 and 2011 revealed numerous cases of insufficiency and discordance with respect to mandatory norms and market expectations. As to the causes of such shortcomings, this study identified several factors on the side of reporting entities and users of the information: inherent aspects of IFRS, industry- and company-specific context, the expenditures required to internalize IFRS systems, the reduced time frame for presentation, and a lack of the clarity and detail needed to meet the qualities of information, such as understandability and comparability, commonly requested by the user group. To improve current disclosure practices, a dual approach is suggested. First, to encourage and facilitate implementation: (1) further segmentation and differentiation of mandates among companies, (2) redefining the scope and depth of note descriptions, (3) diversification and coordination of reporting periods, and (4) providing support for equipping disclosure systems and granting incentives for best practices. Second, as harder measures: (5) regularizing active involvement of corporate and user-group delegations in the establishment and amendment process of K-IFRS, and (6) enforcing detailed and standardized disclosure by reporting entities.

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.109-122 / 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is utilized for marketing and for solving social problems by analyzing data that are currently open or collected directly. In Korea, various companies and individuals are attempting big data analysis, but they struggle from the initial stage of analysis due to limits on big data disclosure and difficulties in collection. System improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly services that open public data, such as the domestic Government 3.0 portal (data.go.kr). In addition to the efforts made by the government, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the lack of shared data. In addition, big traffic problems can occur because it is necessary to download and examine entire datasets in order to grasp the attributes of and simple information about the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper to solve the problem of sharing big data; it means providing users with results generated by analyzing the data in advance. Through pre-analysis, it is possible to improve the usability of big data by providing information that conveys the properties and characteristics of big data when a data user searches for it. In addition, by sharing the summary data or sample data generated through pre-analysis, it is possible to mitigate the security problems that may occur when the original data are disclosed, thereby enabling big data sharing between the data provider and the data user. Second, it is necessary to quickly generate appropriate preprocessing results according to the level of disclosure or the network status of the raw data, and to provide the results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time. When preprocessing the data requested by the user, the data are reduced to a size transferable on the current network and transmitted to the user so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis. This method is expected to generate lower traffic volume than the conventional method of sharing only raw data across a large number of systems. We describe how to solve the problems that occur when big data are released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, with a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent is the agent needed by the data provider; it performs pre-analysis of big data to generate a Data Descriptor with information on Sample Data, Summary Data, and Raw Data. In addition, it performs fast and efficient big data preprocessing through distributed big data processing and continuously monitors network traffic. The Client Agent is the agent placed on the data user's side.
It can search big data through the Data Descriptor, which is the result of the pre-analysis, and thus find data quickly. The desired data can then be requested from the server and downloaded according to the level of disclosure. The Server Agent and the Client Agent are separated so that data published by a data provider can be used by a data user. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes the data in the desired direction or preprocesses them into new data. By analyzing and disclosing the newly processed data through the Server Agent, the data user takes on the role of data provider. The data provider can also obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analyses on the sample data. In this way, raw data are processed and the processed big data are utilized by users, naturally forming a shared environment. The roles of data provider and data user are not fixed, providing an ideal shared service in which everyone can be both a provider and a user. The client-server model solves the problem of sharing big data, provides a free sharing environment for secure big data disclosure, and offers an ideal shared service that makes big data easy to find.
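
As a rough sketch of the pre-analysis idea under stated assumptions (the file path, column handling, and descriptor fields are illustrative, not the authors' implementation), a Server Agent-style job could build a Data Descriptor with PySpark roughly as follows:

```python
# Illustrative sketch: build a 'Data Descriptor' (schema info, summary statistics,
# and a small sample) so users can judge a dataset without downloading the raw data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pre-analysis-sketch").getOrCreate()
df = spark.read.csv("shared/raw_data.csv", header=True, inferSchema=True)  # hypothetical path

data_descriptor = {
    "row_count": df.count(),
    "schema": df.dtypes,                                        # [(column, type), ...]
    "summary": df.describe().toPandas().to_dict("records"),     # count/mean/stddev/min/max
    "sample": [row.asDict() for row in df.limit(10).collect()], # small shareable sample
}
print(data_descriptor["row_count"], len(data_descriptor["sample"]))
```

Sharing only this descriptor, rather than the raw file, is what lets a user judge relevance while keeping network traffic and disclosure exposure low.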

Beliefs About Gifted Education and Classroom Practices of the Science Teachers at Science Academy in Korea (과학영재학교 과학교사들의 영재교육에 대한 신념과 교수활동 유형)

  • Kim, Kyung-Jin;Kwon, Byung-Doo;Kim, Chan-Jong;Choe, Seung-Um
    • Journal of The Korean Association For Science Education / v.25 no.4 / pp.514-525 / 2005
  • The most important factor in providing education to gifted students, as to students in general, is the teacher. At present in Korea, however, most teachers in charge of education for the gifted are prepared only through short in-service training programs. It is doubtful whether teachers who have taught ordinary students can teach gifted students effectively after completing only such a short course. This research investigated the relationship between teachers' beliefs about educating the gifted and their classroom practices at a Science Academy through case studies. The guiding questions for this study are as follows: First, what beliefs do the participating teachers hold about education for the gifted? Second, how are the participants' beliefs reflected in their classroom practices? Of the five participants, two are physics teachers, two are biology teachers, and one is an earth science teacher. We observed and videotaped four classroom sessions for each participant and conducted an in-depth interview with each one. Further data were collected through e-mails with the participants. All data were carefully transcribed and analyzed. The results are as follows. Beliefs about education for the gifted do not exist independently; they form a belief system connected with beliefs about teaching and learning and about subject matter. The belief systems of the participants can be classified as "student-centered," "teacher-centered," and "conflict-chaos." In the classes of the participants with a "student-centered" belief system, students' questions and opinions played an important role and the participation structure in the classroom was determined by the students. In contrast, participants with a "teacher-centered" belief system focused on covering as much content as possible in their classes. These teachers played a dominant role and formed a participation structure in which students depended on the teacher's intellectual authority and therefore participated passively. The participant with a "conflict-chaos" belief had not yet formed a firm belief system, and traditional beliefs about teaching and learning were strongly reflected in her classes. The results imply that teachers' beliefs play an important role in classroom practices, and that beliefs about teaching and learning and about subject matter, as well as beliefs about education for the gifted, are important factors for teachers who guide gifted students. Additionally, we make some suggestions for the improvement of teacher education for the gifted.

International Case Studies on the Eco-friendly Energy Towns with Hybrid Thermal Energy Supply System and Borehole Thermal Energy Storage (BTES) (친환경에너지타운에서 보어홀지중열 저장(BTES) 활용 융복합 열에너지 공급 시스템 사례 연구)

  • Shim, Byoung Ohan
    • Economic and Environmental Geology / v.51 no.1 / pp.67-76 / 2018
  • This study reviews three eco-friendly energy towns in Canada and Denmark with hybrid thermal energy supply systems and borehole thermal energy storage (BTES). Their district heating and cooling systems were designed around multiple energy sources for higher efficiency and reliability as well as environmental benefit. ADEU (Alexandra District Energy Utility), located in a developing area of the city of Richmond, Canada, was designed to supply district energy through 726 borehole heat exchangers (BHEs) and a backup boiler using natural gas. DLSC (Drake Landing Solar Community), located in the town of Okotoks, Canada, is a district system that stores solar thermal energy underground during the summer season in a seasonal BTES with 144 BHEs. The Brædstrup Solpark district heating system in Denmark supplies energy from multiple sources: solar thermal, heat pumps, boiler plants, and a seasonal BTES with 48 BHEs. These systems are designed for social and economic benefit as well as nature-friendly living space from a city-based energy perspective. Each system has an energy center that distributes the stored thermal energy to each house for heating during the winter season. The BHE depth and ground thermal storage volume are designed according to the heating and cooling load as well as the groundwater flow conditions and the thermophysical properties of the ground. These systems have proved their reliability and economic benefit by providing a consistent energy supply at competitive energy prices for many years. In addition, several expansions of the service areas of ADEU and Brædstrup Solpark have proceeded based on energy supply master plans. To implement this kind of project in Korea, regulatory and policy support from the government or related national organizations is required, and the government should establish an energy management agency tied to a long-term energy supply plan.
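
As a back-of-the-envelope illustration of how a storage volume relates to a heating load through the sensible-heat relation Q = ρc·V·ΔT (all numbers below are assumptions for illustration, not values from the case studies):

```python
# Rough sizing sketch: storage volume needed so that the recoverable stored heat
# covers an assumed seasonal heating demand. Values are placeholders.
annual_heat_demand_kwh = 800_000        # assumed seasonal heating load to cover
storage_efficiency = 0.6                # assumed fraction of stored heat recovered
vol_heat_capacity_kwh_m3K = 0.6         # ~2.2 MJ/(m3*K) for saturated ground, in kWh/(m3*K)
delta_t_k = 30.0                        # assumed storage temperature swing

required_storage_kwh = annual_heat_demand_kwh / storage_efficiency
volume_m3 = required_storage_kwh / (vol_heat_capacity_kwh_m3K * delta_t_k)
print(f"indicative storage volume: {volume_m3:,.0f} m3")
```

In practice the papers' designs also account for groundwater flow and detailed ground thermophysical properties, which this simple sensible-heat estimate ignores.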

An Institutional Approach for Application of the Contracting-out in City Parks - Focused on the Case Study of City Park Management of Seongnam City - (도시공원의 민간위탁 적용을 위한 제도적 방안 - 성남시 도시공원 운영사례를 중심으로 -)

  • Byeon, Jae-Sang;Kim, In-Ho;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture / v.39 no.5 / pp.33-47 / 2011
  • One of the most fundamental tasks of contemporary government is to look into various ways of providing citizens with the best services. This study aims to establish a procedure for consigning the management of city parks to private companies, thereby inviting the participation and satisfaction of citizens. In particular, this procedure includes creating a system for selecting private managing companies, for instance by specifying standards of selection and assembling selection committees. The results of this study can be summarized as follows. First, city parks can be managed better by private companies than by local governments in terms of cost cuts, personnel training, business efficiency, and know-how accumulation; the legal basis for this is found in central and local legal articles. Second, it is recommended that the selection committee be composed of 6 to 9 members, both insiders and outsiders. In addition to selecting private managing companies for contracting-out, the committee should undertake the role of consulting on how to apply and revise the selection standards so that the procedures continue to improve. Third, the decision on private management should be announced in advance and be made against standards that reflect each local government's conditions. These standards should cover the public good, cost saving, quality of service, management supervision, and citizen participation, and the committee's assessment should take into account both the qualitative and quantitative aspects of the standards. Fourth, contracting-out of city park management should follow this order: announcing the consignment and receiving applicants, organizing selection committees and assessing applications, selecting and contracting, midterm evaluation, and re-announcement and re-consignment. Running city parks through contracting-out is expected to increase the number of park visitors. Additionally, private consignment will involve the participation of diverse citizens, thus playing an important role in building a green-culture community around city parks.