• Title/Summary/Keyword: 저장 서버 (storage server)

Search Result 1,652

Smart Cart System for Commodity Browsing and Automatic Calculation (물품검색과 자동계산이 가능한 스마트카트 시스템)

  • Park, Cha-Hun;Hwang, Seong-Hun;Choi, Geon-Woo;Park, Jae-Hwi;Lee, Seung-Hyun;Kim, Sung-Hyeon;Jung, Ui-Hoon
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.669-670 / 2020
  • As the Fourth Industrial Revolution progresses, we are entering an era in which autonomous functions are added to most objects. As these technologies advance, people can solve more problems on their own and feel their pace of life quicken. With this in mind, we considered how the quality of shopping in a mart could be raised above its current level. In this work, smart functions are added to the shopping cart and the checkout counter so that consumers can shop more conveniently. With a display and a barcode scanner attached to the cart, consumers can search for information such as the price and shelf location of desired items and see the total price of the items currently in the cart. An automatic checkout function also relieves the inconvenience of long checkout lines: when a consumer finishes shopping and brings the cart to the counter, the shopping information stored in the cart is shown on the counter's display, and if the cart's and the counter's information match, the consumer charges the fee to the cart and completes the payment unaided. These automated smart functions should improve convenience and save consumers time. (A minimal sketch of the cart-side bookkeeping follows this entry.)

  • PDF
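  • A minimal Python sketch, under assumed names and data, of the cart-side bookkeeping this entry describes: barcode lookup for price and shelf location, a running total, and the cart/counter matching check before payment. All product data and the matching rule are illustrative assumptions.

      # Hypothetical product table; barcode -> (name, price in KRW, aisle).
      PRODUCTS = {
          "8801234567890": ("instant noodles", 1200, "A3"),
          "8809876543210": ("milk 1L", 2500, "B1"),
      }

      class SmartCart:
          def __init__(self):
              self.items = []          # scanned (barcode, name, price) entries

          def scan(self, barcode):
              # Show price and shelf location on the cart display after each scan.
              name, price, aisle = PRODUCTS[barcode]
              self.items.append((barcode, name, price))
              return f"{name}: {price} KRW (aisle {aisle})"

          def total(self):
              # Running total of everything currently in the cart.
              return sum(price for _, _, price in self.items)

      def counter_checkout(cart, counter_scan):
          # Payment proceeds only if the counter's reading matches the
          # shopping information stored in the cart, as described above.
          return sorted(cart.items) == sorted(counter_scan)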

Development of Android App to Record and Manage Travel Routes for Location Information Protection (위치정보 보호를 위한 이동 경로 기록 및 관리 서비스 앱 개발)

  • Seoyeon Kim;Ah Young Kim;Minjung Oh;Saem Oh;Sungwook Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.10 / pp.437-444 / 2023
  • Location-based services play a vital role in our daily lives. While these services enhance user convenience, they put users' privacy at risk by driving a rapid surge in the collection and use of personal location information. In this paper, we design and implement an application that securely records and manages user location information, strengthening privacy protection through several features. Using Room DB, collected personal location information is stored in a local database on the user's device instead of on the location-based service provider's server. Furthermore, users can start and stop recording at their own discretion, further protecting the personal information carried in location data. Giving users control over their own location information reduces their unease about exposing their movement paths.
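  • The paper keeps location records on the device with Room DB; since Room is Android-specific, the sketch below illustrates the same local-first, user-controlled design with Python's built-in sqlite3 instead. Table and column names are assumptions.

      import sqlite3, time

      db = sqlite3.connect("locations.db")   # local file; nothing is sent to a provider's server
      db.execute("CREATE TABLE IF NOT EXISTS route (ts REAL, lat REAL, lon REAL)")

      recording = False                      # the user starts/stops recording at will

      def set_recording(on):
          global recording
          recording = on

      def on_location_fix(lat, lon):
          # Persist a fix only while the user has opted in to recording.
          if recording:
              db.execute("INSERT INTO route VALUES (?, ?, ?)",
                         (time.time(), lat, lon))
              db.commit()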

The Design of Mobile Medical Image Communication System based on CDMA 1X-EVDO for Emergency Care (CDMA2000 1X-EVDO망을 이용한 이동형 응급 의료영상 전송시스템의 설계)

  • Kang, Won-Suk;Yong, Kun-Ho;Jang, Bong-Mun;Namkoong, Wook;Jung, Hai-Jo;Yoo, Sun-Kook;Kim, Hee-Joung
    • Proceedings of the Korean Society of Medical Physics Conference / 2004.11a / pp.53-55 / 2004
  • In emergency cases, such as severe trauma involving fracture of the skull, spine, or cervical bone from an auto accident or a fall, or a pneumothorax that cannot be diagnosed accurately by visual examination, radiological examination is necessary while the patient is being transferred to the hospital for emergency care. The aim of this study was to design and evaluate a prototype mobile medical image communication system based on CDMA 1X-EVDO. The system consists of a laptop computer used as a transmitting DICOM client, linked to a cellular phone supporting the CDMA 1X-EVDO communication service, and a receiving DICOM server installed in the hospital. DR images were stored in DICOM format on the transmitting client, compressed into JPEG2000 format, and transmitted to the receiving server, where all images were progressively received and displayed on the server monitor. To evaluate image quality, the PSNR of the compressed images was measured. Several field tests were also performed over a commercial CDMA2000 1X-EVDO reverse link carrying TCP/IP data segments, at several vehicle speeds in the Seoul area. (A PSNR computation sketch follows this entry.)

  • PDF
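  • The image-quality measure used above is PSNR; the standard computation for 8-bit images (MAX = 255) is sketched below with NumPy.

      import numpy as np

      def psnr(original, compressed):
          # PSNR = 10 * log10(MAX^2 / MSE), in dB; higher means less distortion.
          diff = original.astype(np.float64) - compressed.astype(np.float64)
          mse = np.mean(diff ** 2)
          if mse == 0:
              return float("inf")            # images are identical
          return 10 * np.log10(255.0 ** 2 / mse)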

Location Management & Message Delivery Protocol for Multi-region Mobile Agents in Multi-region Environment (다중 지역 환경에서 이동 에이전트를 위한 위치 관리 및 메시지 전달 기법)

  • Choi, Sung-Jin;Baik, Maeng-Soon;Song, Ui-Sung;Hwang, Chong-Sun
    • Journal of KIISE:Computer Systems and Theory / v.34 no.11 / pp.545-561 / 2007
  • Location management and message delivery protocols are fundamental to the further development of mobile agent systems in a multi-region mobile agent computing environment, in order to control mobile agents and guarantee message delivery between them. However, previous works have several problems when applied to a multi-region environment. First, the cost of location management and message delivery increases considerably. Second, a tracking problem arises. Finally, cloned mobile agents and parent-child mobile agents are not handled with respect to location management and message delivery. In this paper, we present the HB (Home-Blackboard) protocol, a new location management and message delivery protocol for a multi-region mobile agent computing environment. The HB protocol places a region server in each region and manages the location of mobile agents using intra-region and inter-region migration. It also places a blackboard in each region server and delivers messages to mobile agents when a region server receives location updates from them. The HB protocol decreases the cost of location updates and message passing and solves the tracking problem with low communication cost. It also handles the location management and message passing of cloned and parent-child mobile agents, guaranteeing their message delivery without sending duplicate messages.
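  • A minimal Python sketch of the blackboard idea: messages wait at the region server instead of chasing a migrating agent, and are flushed when the agent reports its location. Names and structures are assumptions; the full HB protocol (home region, cloned and parent-child agents) is not reproduced here.

      from collections import defaultdict

      class RegionServer:
          def __init__(self):
              self.locations = {}                  # agent_id -> current host
              self.blackboard = defaultdict(list)  # agent_id -> pending messages

          def send_message(self, agent_id, message):
              # Buffer on the blackboard rather than tracking the moving agent.
              self.blackboard[agent_id].append(message)

          def location_update(self, agent_id, host):
              # Called on intra- or inter-region migration; pending messages
              # are handed over exactly once.
              self.locations[agent_id] = host
              pending, self.blackboard[agent_id] = self.blackboard[agent_id], []
              return pending                       # deliver to the agent at `host`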

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business, so a separate log processing system is needed to gather, store, categorize, and analyze them. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as storage extension or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the system offers automatic restore functions that keep it operating after recovery from a malfunction. Finally, by establishing a distributed database on NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data, and their strict schemas make it hard to expand nodes by distributing stored data when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when data grow rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented store MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes unstructured log data easy to process, it facilitates node expansion when data increase rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated across each bank's client business processes are sent to the cloud server, the log collector module collects and classifies them according to log type and distributes them to the MongoDB and MySQL modules. The log graph generator module generates the log analysis results of the MongoDB, Hadoop-based analysis, and MySQL modules per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions, and are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert performance evaluations over various chunk sizes.
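  • A small pymongo sketch of the schema-free log storage described above; host, database, collection, and field names are illustrative assumptions, and the Hadoop module and sharded deployment are outside its scope.

      from datetime import datetime, timezone
      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")
      logs = client["banklogs"]["raw"]       # document-oriented, no fixed schema

      # Heterogeneous records can be inserted as-is; fields may differ per record.
      logs.insert_one({"ts": datetime.now(timezone.utc), "type": "transfer",
                       "branch": "main", "elapsed_ms": 84})
      logs.insert_one({"ts": datetime.now(timezone.utc), "type": "login",
                       "channel": "mobile"})

      # Aggregate per type, as the log graph generator module does per unit time.
      print(list(logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}])))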

Automated-Database Tuning System With Knowledge-based Reasoning Engine (지식 기반 추론 엔진을 이용한 자동화된 데이터베이스 튜닝 시스템)

  • Gang, Seung-Seok;Lee, Dong-Joo;Jeong, Ok-Ran;Lee, Sang-Goo
    • Proceedings of the Korean Information Science Society Conference / 2007.06a / pp.17-18 / 2007
  • Database tuning generally refers to a series of activities that make database applications run "faster" [1]. For a database administrator, grasping all the rules of thumb needed for tuning and applying them to each situation is expensive and time-consuming, so complex services in which different applications are interlocked inevitably require automated database performance management and tuning. In this paper, we propose a system that derives automated database tuning principles based on a knowledge domain. Individual database tuning theories serve as knowledge in this domain: the factors affecting performance are organized into objects and concepts, and tuning principles are inferred through a reasoning system, so that a tuning methodology suited to the current situation can be applied quickly and easily. Automated database tuning has been studied across several fields; examples include Microsoft's AutoAdmin project [2], Oracle's SQL tuning architecture [3], COLT [4], DBA Companion [5], and SQUASH [6]. Classified by functional methodology, these optimization techniques fall broadly into design tuning, logical structure tuning, sentence tuning, SQL tuning, server tuning, and system/network tuning. Of these, SQL tuning relies on numerically determined, already existing information, so it is easy to express as a structured model and readily accommodates conditions that change with users' varied demands; we therefore focused on it when addressing performance problems. The objects, attributes, and relationships composing the DBMS are modeled along the database system's processing pipeline, and the system is structured into three levels: Application, Query, and DBMS. In this paper, these objects, attributes, and relationships, together with the rules of thumb used in database tuning, were analyzed and converted into knowledge that includes tuning principles. A tuning principle is a kind of golden rule for solving problems that arise in a database system, expressed through the Facts and Rules on which the knowledge domain rests: a Fact expresses the modeled system as a single knowledge entity of the domain, while a Rule expresses a tuning principle as knowledge grounded in Facts. Rules come in two types, those predefined through system modeling and those used to infer tuning principles, and most Rules act as branches that select different solutions depending on input values. Users can infer tuning principles from the automatically generated Facts and Rules and apply them to the database system, and can also add situation-specific Facts and Rules manually through a GUI. To infer tuning principles over the knowledge domain, JESS, a Java-based inference engine, is used. JESS is an expert system shell [7] with a script language that represents knowledge as declarative rules and performs inference over them; its knowledge representation readily expresses and accommodates tuning principles, and its small footprint and fast inference make it suitable for tuning applications processed in real time. The principal role of the knowledge-based module is to generate and store the new knowledge required, given the model of a database system. To this end, Facts and Rules are expressed as triples, the basic unit of knowledge representation. A triple consists of three elements, Subject, Property, and Object; most Facts and Rules take the basic triple form, or combine triples into a Condition part and an Action part. By representing the objects, attributes, and relationships of the database system model in this way, the knowledge can function as the Facts and Rules of the inference engine. To implement and test the system, a web-based server-client architecture was assumed: the server consists of a Process Controller, Parser, Rule Database, and JESS Reasoning Engine, and the client consists of a Rule Manager Interface and a Result Viewer. The system's utility was judged by comparing performance measures, such as execution time, before and after applying tuning principles; in the experiments, applying tuning principles added less than one second of preprocessing overhead while improving processing time by a factor of roughly 1.5 to 3. The proposed system automatically generates tuning principles and casts them into knowledge, so that new tuning principles can be derived and provided, and it supports customized tuning by letting users add Facts and Rules directly, along with the factors affecting performance. Future work on automating processes such as tuning of the queries themselves and index optimization, on efficiently defining and adding Rules, and on effectively organizing the system modeling could further improve this research. (A toy illustration of the triple-based Facts and Rules follows this entry.)

  • PDF
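  • A toy Python illustration of the Subject-Property-Object triples and the branching Rules described above. The paper uses the JESS engine; this stand-in only mirrors the Condition/Action form, and the threshold and names are made up.

      facts = {                                 # (subject, property, object) triples
          ("buffer_cache", "hit_ratio", 0.62),
          ("query_42", "uses_index", False),
      }

      def rule_low_cache_hit(facts):
          # Condition: a hit_ratio fact below a threshold.
          # Action: emit a tuning principle.
          for subject, prop, obj in facts:
              if prop == "hit_ratio" and obj < 0.9:
                  return f"tuning principle: increase {subject} size"
          return None

      print(rule_low_cache_hit(facts))  # -> tuning principle: increase buffer_cache size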

Satellite Imagery and AI-based Disaster Monitoring and Establishing a Feasible Integrated Near Real-Time Disaster Monitoring System (위성영상-AI 기반 재난모니터링과 실현 가능한 준실시간 통합 재난모니터링 시스템)

  • KIM, Junwoo;KIM, Duk-jin
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.3 / pp.236-251 / 2020
  • As remote sensing technologies evolve and more satellites are put into orbit, demand for using satellite data in disaster monitoring is rapidly increasing. Although natural and social disasters have been monitored using satellite data, the constraints on establishing an integrated satellite-based near real-time disaster monitoring system have not yet been identified, and no framework for establishing such a system has been presented. This research identifies those constraints by devising and testing a new conceptual framework of disaster monitoring, and then presents a feasible monitoring system that relies mainly on acquirable satellite data. Near real-time disaster monitoring by satellite remote sensing is constrained by technological and economic factors and, more significantly, by interactions between organisations and policy that hamper timely acquisition of appropriate satellite data, as well as by institutional factors related to satellite data analysis. Such constraints could be eased by employing an integrated computing platform, such as Amazon Web Services (AWS), which enables obtaining, storing, and analysing satellite data, and by developing a toolkit for analysing which satellites' sensors and orbits suit the monitoring of specific disaster types. The findings of this research can serve as a meaningful reference when establishing a satellite-based near real-time disaster monitoring system in any country.
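  • A hedged boto3 sketch of the AWS-based acquisition step suggested above; the bucket and object key are placeholders, since real buckets, paths, and access terms vary by mission and provider.

      import boto3
      from botocore import UNSIGNED
      from botocore.config import Config

      # Public imagery buckets are often readable without credentials.
      s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
      s3.download_file("example-satellite-bucket",     # hypothetical bucket
                       "scenes/2020/flood_event.tif",  # hypothetical key
                       "flood_event.tif")
      # The same platform can then store results and host the analysis toolkit.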

A Study on the Cloud Service Model of CaaS Based on the Object Identification, ePosition, with a Structured Form of Texts (문자열로 구조화된 사물식별아이디 이포지션(ePosition) 기반의 클라우드 CaaS(Contents as a Service) 서비스 모델에 관한 연구)

  • Lee, Sang-Zee;Kang, Myung-Su;Cho, Won-Hee
    • Information Systems Review / v.15 no.3 / pp.129-139 / 2013
  • The Internet of Things (IoT), which refers to uniquely identifiable objects and their virtual representations in an Internet-like structure, is becoming a reality today. The amount of data on the IoT is expected to increase sharply, raising key issues such as usability and interoperability across multiple distributed systems, services, and databases. In this paper, a methodology is proposed to realize a recently developed cloud service model, Contents as a Service (CaaS), a content delivery model referred to as 'on-demand contents'. In the proposed method, the global object identifier ePosition, a structured form of two text strings joined by a separator symbol such as #, is applied to identify a specific content item and is registered together with the content on the same server. The approach is easy to realize and solves the interoperability problem systematically and logically. APIs for the proposed CaaS service can be combined to provide upgraded cloud service models such as 'CaaS supported SaaS' and 'CaaS supported PaaS'. (A short identifier-parsing sketch follows this entry.)

  • PDF
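  • A short parsing sketch for the ePosition identifier described above, two text strings joined by a separator such as '#'. The exact layout (server part before the separator, content part after) is an assumption for illustration, not the specification.

      def parse_eposition(identifier, sep="#"):
          server_part, local_part = identifier.split(sep, 1)
          return {"server": server_part, "content_id": local_part}

      eid = "contents.example.org#lecture/2013/db-intro"   # hypothetical ID
      loc = parse_eposition(eid)
      # Resolve against the server where both the ID and the content are
      # registered, as in the CaaS model above.
      url = f"https://{loc['server']}/caas/{loc['content_id']}"
      print(url)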

A Dynamic Video Adaptation Scheme based on Size and Quality Predictions (동영상 스트림 크기 및 품질 예측에 기반한 동적 동영상 적응변환 방법)

  • Kim Jonghang;Nang Jongho
    • Journal of KIISE:Computer Systems and Theory / v.32 no.2 / pp.95-105 / 2005
  • This paper proposes a new dynamic video adaptation scheme that can generate an adapted video stream customized to the requesting mobile device and the current network status without repeated decode-encode cycles. In the proposed scheme, the characteristics of video codecs such as MPEG-1/-2/-4 are analyzed in advance, focusing on the relationship between the size and quality of the encoded video stream, and stored in the proxy as a codec-dependent characteristic table. When a mobile device requests a video stream, the stream is dynamically decoded and re-encoded in the proxy at the highest quality to extract its content-dependent attributes. By comparing these attributes with the codec-dependent characteristic table, the size and quality of the requested stream, as adapted for the target device, can be predicted. With this prediction, a version of the adapted stream that meets the device's size constraints while keeping quality as high as possible can be selected without repeated decode-encode cycles. Experimental results show that the prediction error of the proposed scheme is less than 5% and that it produces an appropriate adapted stream very quickly. It could be used to build a proxy server for mobile devices that quickly transcodes the video streams widely spread across the Internet, encoded with various video codecs.
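  • A toy version of the table-driven prediction above: a codec-dependent characteristic table maps encoding parameters to relative size and quality, and the proxy picks the highest-quality version that fits the device's size budget without repeated decode-encode cycles. All numbers are made up.

      CHARACTERISTICS = {  # (codec, bitrate_kbps) -> (size factor, quality score)
          ("mpeg4", 128): (0.10, 0.55),
          ("mpeg4", 256): (0.19, 0.72),
          ("mpeg4", 512): (0.37, 0.86),
      }

      def pick_version(codec, source_size_mb, size_budget_mb):
          best = None
          for (c, kbps), (size_f, quality) in CHARACTERISTICS.items():
              if c != codec:
                  continue
              predicted_size = source_size_mb * size_f
              if predicted_size <= size_budget_mb and (best is None or quality > best[1]):
                  best = ((c, kbps), quality, predicted_size)
          return best   # (version, predicted quality, predicted size in MB)

      print(pick_version("mpeg4", source_size_mb=40, size_budget_mb=10))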

A Study on the Validation of Vector Data Model for River-Geospatial Information and Building Its Portal System (하천공간정보의 벡터데이터 모델 검증 및 포털 구축에 관한 연구)

  • Shin, Hyung-Jin;Chae, Hyo-Sok;Hwang, Eui-Ho
    • Journal of the Korean Association of Geographic Information Studies / v.17 no.2 / pp.95-106 / 2014
  • In this study, the applicability of a standard vector model was evaluated using RIMGIS vector data, and a portal-based river-geospatial information web service system was developed using XML- and JSON-based data linkage between server and client. RIMGIS vector data comprising points, lines, and polygons were converted to the Geospatial Data Model (GDM) developed in this study and validated layer by layer; after conversion, the attribute data of the shapefiles were confirmed to remain without loss. The GeoServer GDB (GeoDataBase) that manages the portal's database was developed as a management module. The XML-based Geography Markup Language (GML) standard of the OGC was used for accessing and managing vector layers and for encoding spatial data. The separation of data content and presentation in GML allows different renderings of the same data, convenient data revision and update, and improved extensibility. In the future, the access, exchange, and storage of river-geospatial information should be improved through user-customized services and better Internet accessibility.
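  • A minimal sketch of the JSON-based server-client linkage described above: a vector layer is fetched from the portal's server (for example, a GeoServer WFS endpoint) as GeoJSON and consumed by the client. The URL and layer name are placeholder assumptions.

      import json
      import urllib.request

      wfs = ("https://example.org/geoserver/wfs?service=WFS&version=2.0.0"
             "&request=GetFeature&typeNames=river:centerline"
             "&outputFormat=application/json")    # hypothetical endpoint

      with urllib.request.urlopen(wfs) as resp:
          collection = json.load(resp)            # a GeoJSON FeatureCollection

      # Content (geometry and attributes) stays separate from presentation,
      # so each client view can render the same data differently.
      for feature in collection["features"]:
          print(feature["properties"], feature["geometry"]["type"])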