• Title/Summary/Keyword: 한국정보관리 (Korean Information Management)


A Study on Network Effect by using System Dynamics Analysis: A Case of Cyworld (시스템 다이내믹스 기법을 이용한 네트워크 효과 분석: 싸이월드 사례)

  • Kim, Ga-Hye;Yang, Hee-Dong
    • Information Systems Review
    • /
    • v.11 no.1
    • /
    • pp.161-179
    • /
    • 2009
  • Nowadays an increasing number of Internet users run personal websites such as blogs or Cyworld. As this type of personal media strongly influences communication among people, businesses have come to care about the Network Effect, network software, and social networks. For instance, Cyworld created the web service called 'Minihompy' for individual web-logs and acquired 2.4 million users in 2007. Although many people assumed that the popularity of Minihompy or blogs would be a passing fad, Cyworld has improved its service and expanded its network with various contents. This kind of expansion reflects survival efforts amid intense competition among ISPs (Internet Service Providers), with a focus on enhancing usability for users. However, Cyworld's Network Effect has gradually diminished in recent years. The low production cost for service vendors and the low searching and switching costs for users combine to make it hard for ISPs to sustain their market share. To overcome this lackluster trend, Cyworld has adopted new strategies and tries to lock its users into its service. Whether such efforts improve the continuance and expansion of the Network Effect remains unclear and uncertain. If we could understand beforehand how a service would improve the Network Effect, and which service would bring the greater effect, ISPs would get substantial help in launching their new business strategies. Despite many diverse ideas for increasing users' time online, ISPs cannot guarantee how new service strategies will end up in terms of profitability. Therefore, this research studies the Network Effect of Cyworld's Minihompy using the System Dynamics method, which can analyze the dynamic relation between users and ISPs. Furthermore, the research aims to predict changes in the Network Effect resulting from new service strategies. 'Page View' and 'Duration Time' can be enhanced in the short term because the new services enhance functionality; however, these services cannot grow the network in the long run.
Limitations of this research include that we predict the future merely from limited data. We also limit the independent variables of the Network Effect to two: increasing the number of users and increasing service functionality. Despite these limitations, this study may give some insight to policy makers and others facing stiff competition in the network business.
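
The stock-and-flow reasoning described above can be sketched in a few lines; this is a minimal illustration of a System Dynamics style adoption model, not the paper's actual model, and all parameter values (market size, contact rate, churn rate) are illustrative assumptions:

```python
# A minimal stock-flow sketch of a network effect: users join at a rate
# amplified by the existing user base (the network effect) and leave at
# a constant churn rate.  Parameter values are illustrative only.

def simulate_network_effect(steps=50, dt=1.0,
                            users0=1_000, market=2_400_000,
                            contact_rate=0.02, churn_rate=0.01):
    """Euler-integrate a Bass-style adoption stock with churn."""
    users = users0
    history = [users]
    for _ in range(steps):
        potential = market - users                            # remaining non-users
        adoption = contact_rate * users * potential / market  # network-driven inflow
        churn = churn_rate * users                            # outflow
        users += dt * (adoption - churn)
        history.append(users)
    return history

history = simulate_network_effect()
```

The resulting S-shaped curve levels off as the market saturates, which mirrors the dynamic the paper probes: service-functionality upgrades can raise the contact rate (short-term 'Page View' and 'Duration Time' gains) without raising the saturation ceiling of the network.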

R-lambda Model based Rate Control for GOP Parallel Coding in A Real-Time HEVC Software Encoder (HEVC 실시간 소프트웨어 인코더에서 GOP 병렬 부호화를 지원하는 R-lambda 모델 기반의 율 제어 방법)

  • Kim, Dae-Eun;Chang, Yongjun;Kim, Munchurl;Lim, Woong;Kim, Hui Yong;Seok, Jin Wook
    • Journal of Broadcast Engineering
    • /
    • v.22 no.2
    • /
    • pp.193-206
    • /
    • 2017
  • In this paper, we propose a rate control method based on the $R-{\lambda}$ model that supports a GOP-level or IDR-period-level parallel encoding structure for 4K UHD input video in real time. For this, a slice-level bit allocation method is proposed for parallel encoding instead of sequential encoding. When a rate control algorithm is applied with GOP-level or IDR-period-level parallelism, the number of bits consumed cannot be shared among frames belonging to the same frame level, except for the lowest frame level of the hierarchical B structure. Therefore, it is impossible to manage the bit budget with the existing bit allocation method. To solve this problem, we improve on the conventional bit allocation procedure, which allocates target bits sequentially in encoding order. That is, the proposed strategy first assigns the target bits to GOPs, and then distributes each GOP's assigned bits from the lowest depth level to the highest depth level of the HEVC hierarchical B structure. In addition, we propose a preprocessing method that improves subjective image quality by allocating bits according to the coding complexity of each frame. Experimental results show that the proposed bit allocation method works well for frame-level parallel HEVC software encoders, and confirm that the performance of our rate controller can be further improved with a more elaborate bit allocation strategy based on the preprocessing results.
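
The two-stage allocation described above can be sketched as follows; the level weights and GOP structure here are illustrative assumptions, not the paper's actual $R-{\lambda}$ parameters:

```python
# Simplified two-stage bit allocation for parallel GOP encoding:
# stage 1 splits the total budget equally across GOPs; stage 2 splits
# each GOP's budget across the hierarchical-B depth levels so that
# lower depths (more frequently referenced frames) receive more bits.

def allocate_bits(total_bits, num_gops, level_weights):
    """Return per-GOP, per-level lists of per-frame bit targets.

    level_weights: list of (weight, frames_at_level), lowest depth first.
    """
    gop_budget = total_bits / num_gops                # stage 1: equal GOP split
    weight_sum = sum(w * n for w, n in level_weights)
    plan = []
    for _ in range(num_gops):
        gop_plan = []
        for weight, num_frames in level_weights:
            per_frame = gop_budget * weight / weight_sum
            gop_plan.append([per_frame] * num_frames)
        plan.append(gop_plan)
    return plan

# Example: an 8-frame GOP with 4 depth levels of a hierarchical-B
# structure; (weight, frames_at_level), level 0 weighted most.
levels = [(8, 1), (4, 1), (2, 2), (1, 4)]
plan = allocate_bits(total_bits=8_000_000, num_gops=4, level_weights=levels)
```

Because each GOP's budget is fixed up front, the allocation for one GOP never depends on how many bits another GOP actually consumed, which is what makes GOP-level parallel encoding possible.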

Design of a Crowd-Sourced Fingerprint Mapping and Localization System (군중-제공 신호지도 작성 및 위치 추적 시스템의 설계)

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.9
    • /
    • pp.595-602
    • /
    • 2013
  • WiFi fingerprinting is well known as an effective localization technique for indoor environments. However, this technique requires a large number of pre-built fingerprint maps covering the entire space. Moreover, due to environmental changes, these maps have to be rebuilt or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping has attracted much interest from researchers. This approach lets many volunteer users share the WiFi fingerprints they collect in a specific environment; crowd-sourced fingerprinting can therefore keep fingerprint maps automatically up to date. In most previous systems, however, individual users were asked to enter their positions manually to build their local fingerprint maps. Moreover, those systems had no principled mechanism to keep fingerprint maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization (CMAL) system. The proposed system can not only automatically build and/or update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track the users' positions using the up-to-date maps. The CMAL system consists of multiple clients running on individual smartphones to collect fingerprints, and a central server that maintains a database of fingerprint maps. Each client contains a particle-filter-based WiFi SLAM engine that tracks the smartphone user's position and builds a local fingerprint map. The server adopts a Gaussian interpolation-based error filtering algorithm to maintain the integrity of the fingerprint maps. Through various experiments, we show the high performance of our system.
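
The server-side filtering step can be sketched as below. This is a hedged illustration of Gaussian interpolation-based error filtering; the paper's exact kernel width and rejection threshold are not given, so the values here are assumptions:

```python
# Each fingerprint is (x, y, rssi).  A reading is kept only if it
# agrees with a Gaussian-weighted interpolation of its neighbours;
# sigma and max_residual are illustrative, not the paper's values.
import math

def gaussian_filter_fingerprints(fps, sigma=2.0, max_residual=15.0):
    kept = []
    for i, (x, y, rssi) in enumerate(fps):
        num = den = 0.0
        for j, (xj, yj, rj) in enumerate(fps):
            if i == j:
                continue
            w = math.exp(-((x - xj) ** 2 + (y - yj) ** 2) / (2 * sigma ** 2))
            num += w * rj
            den += w
        # keep the reading if it has no neighbours or a small residual
        if den == 0 or abs(rssi - num / den) <= max_residual:
            kept.append((x, y, rssi))
    return kept

# Four consistent readings around -60 dBm plus one -20 dBm outlier:
fps = [(0, 0, -60), (1, 0, -62), (0, 1, -59), (1, 1, -61), (0.5, 0.5, -20)]
clean = gaussian_filter_fingerprints(fps)
```

The outlier is rejected because its interpolated estimate (about -60 dBm) disagrees with its reported value by far more than the threshold, while the four consistent readings survive.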

A Study on the Establishment of desirable Model for Licensed Private Investigation Service System (공인탐정제도의 올바른 모델설정에 관한 연구)

  • Lee, Sang-Hun
    • Korean Security Journal
    • /
    • no.20
    • /
    • pp.249-270
    • /
    • 2009
  • There has been great demand for various private search and information-collecting activities, but in Korea it is still prohibited to supply private investigation services or to use the term 'private investigation'. The establishment of a desirable model for a private investigation service system is therefore an essential factor in a strategic approach to the privatization of policing. In most developed countries, private investigation services are generally permitted, and various methods for addressing their side effects have been devised. It is necessary to further revise the Security Business Law to introduce a private investigation service system and thereby settle the dispute over what should be done and how. The police appear to agree with the introduction of such a system, both because it could be a career option for their members after retirement and because the system would be helpful to their own work. The Korean government has in fact tried to enact legislation for a private investigation service system since 1999, but has failed. This study focuses on implementing a suitable private investigation service system in Korea, including consideration of the logical validity of its introduction by comparison with foreign private investigation service systems. We should research and prepare for, among other things, partial amendments addressing the problems and side effects that can arise in the initial stage of the system's trial.

The Effect of Information Service Quality on Customer Loyalty: A Customer Relationship Management Perspective (정보서비스품질이 고객로열티에 미치는 영향에 관한 연구: 고객관계관리 관점)

  • Kim, Hyung-Su;Gim, Seung-Ha;Kim, Young-Gul
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.1-23
    • /
    • 2008
  • As managing customer relationships becomes more important, companies are strengthening information services to their customers through multiple channels as part of their customer relationship management (CRM) initiatives. In other words, companies now treat such information services not as simple information-delivering tools but as strategic initiatives for acquiring and maintaining customer loyalty. In this paper, we attempt to validate whether such information services impact organizational performance in terms of CRM strategy. More specifically, our research objective is to answer three questions: first, how can we construct instruments that measure information service quality rather than mere information quality?; second, which attributes of information service quality influence corporate image and customer loyalty?; finally, does each information service type have unique characteristics compared with the others in how it influences corporate image and customer loyalty? Previous studies were limited with respect to these questions in that they failed to consider the variety of information service types or restricted information service quality to information quality. An appropriate research model should reflect the fact that most companies utilize multiple channels for their information services, and should include recent strategic information services such as online customer communities. Moreover, since a corporate information service can be regarded as a type of product or service delivered to the customer, it is necessary to adopt criteria for assessing the customer's perceived value when measuring its quality. Considering both multiple channels and multiple traits thus enables us to trace the detailed causal routes showing which quality attributes of which information service affect corporate image and customer loyalty.
As information service channels, we include not only the homepage and DM (direct mail), the two most frequently used channels, but also the online community, which has gained strategic importance in recent years. With respect to information service quality, we abstract information quality, convenience, and timeliness from a wide range of relevant literature. As our dependent variables, we consider corporate image and customer loyalty, both critical determinants of organizational performance, and we also attempt to grasp the relationship between the two constructs. We conducted a large online survey on the homepage of one of the representative dairy companies in Korea and gathered 367 valid samples from 407 customers. The reliability and validity of our measurements were tested using Cronbach's alpha coefficient and principal factor analysis respectively, and seven hypotheses were tested through correlation tests and multiple regression analysis. The results demonstrated that the timeliness and convenience of the homepage have positive effects on both corporate image and customer loyalty. For DM, its information quality influenced both corporate image and customer loyalty, but its convenience had a positive effect only on corporate image. With respect to the online community, its timeliness contributed significantly to both corporate image and customer loyalty. Finally, as expected, corporate image exerted a strong influence on customer loyalty. This paper provides several academic and practical implications. First, we believe our research reinforces the CRM literature by developing instruments for measuring information service quality; previous relevant studies mainly depended on measurements of information quality or service quality that had been developed independently.
Second, the fact that we conducted our research in a real situation may enable academics and practitioners to understand the effects of information services more clearly. Finally, since our study involved the three types of information service most frequently applied in recent years, its results may provide operational guidelines to companies delivering information to their customers through multiple channels. In other words, since we found that the key quality areas for customer loyalty differ according to the type of information service, our analysis can help with decisions such as selecting which points to strengthen or how to allocate resources across information service channels.
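
The reliability test mentioned above, Cronbach's alpha, can be computed directly from item scores. This is a generic sketch with made-up Likert responses, not the study's actual survey data:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals),
# using sample variances (n-1 denominator).

def cronbach_alpha(items):
    """items: one inner list of respondent scores per survey item."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Three highly consistent 5-point Likert items from five respondents:
items = [[5, 4, 3, 4, 2], [5, 4, 3, 5, 2], [4, 4, 3, 4, 1]]
alpha = cronbach_alpha(items)
```

Items that move together across respondents, as here, yield an alpha close to 1, which is the condition a scale must meet before hypothesis testing such as the regression analysis above.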

$H_2$ Receptor Antagonists and Gastric Cancer in the Elderly: A Nested Case-Control Study (노인 인구에서 $H_2$ Receptor Antagonist와 위암과의 관련성: 코호트 내 환자-대조군 연구)

  • Kim, Yoon-I;Heo, Dae-Seog;Lee, Seung-Mi;Youn, Kyoung-Eun;Koo, Hye-Won;Bae, Jong-Myon;Park, Byoung-Joo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.35 no.3
    • /
    • pp.245-254
    • /
    • 2002
  • Objective: To test whether the intake of $H_2$ receptor antagonists ($H_2$-RAs) increases the risk of gastric cancer in the elderly. Methods: The source population for this study was drawn from the responders to a questionnaire survey administered to the Korea Elderly Pharmacoepidemiological Cohort (KEPEC), who were beneficiaries of the Korean Medical Insurance Corporation, at least 65 years old, and residing in Busan in 1999. Information on $H_2$-RA exposure was obtained from a drug prescription database compiled between Jan. 1993 and Dec. 1994. The cases consisted of 76 gastric cancer patients, as confirmed from the KMIC claims data, the National Cancer Registry and the Busan Cancer Registry. The follow-up period was from Jan. 1993 to Dec. 1998. Cancer-free controls were randomly selected by 1:4 individual matching, which took into consideration the year of birth and gender. Information on confounders was collected by a mail questionnaire survey. The odds ratios, and their 95% confidence intervals, were calculated using a conditional logistic regression model. Results: After adjusting for a history of gastric ulcer symptoms, medication history, and body mass index, the adjusted OR (aOR) was 4.6 (95% CI=1.72-12.49). The odds ratio for long-term use (more than 7 days) was 2.3 (95% CI=1.07-4.82), and that for short-term use was 4.6 (95% CI=1.26-16.50). The odds ratio for parenteral use was 4.4 (95% CI=1.16-17.05), and combined oral and parenteral use (aOR 16.8; 95% CI=1.21-233.24) carried the highest risk of gastric cancer. The aOR of cimetidine was 1.7 (95% CI=1.04-2.95), that of ranitidine 2.0 (95% CI=1.21-3.40), and that of famotidine 1.7 (95% CI=0.98-2.80). Conclusion: The intake of $H_2$-RAs might increase the risk of gastric cancer in the elderly through achlorhydria.
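
As a hedged arithmetic aside, the basic odds-ratio-with-CI computation underlying such results can be sketched from a 2x2 exposure table. The counts below are illustrative, not the study's, and the study itself used conditional logistic regression to respect the 1:4 matching, which this simple unmatched formula does not:

```python
# Unmatched odds ratio with a Woolf (log-normal) 95% confidence interval.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = exposed/unexposed cases; c,d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical table: 30 of 76 cases exposed, 40 of 304 controls exposed.
or_, (lo, hi) = odds_ratio_ci(a=30, b=46, c=40, d=264)
```

A confidence interval excluding 1, as here, is what licenses statements such as "the adjusted OR was 4.6 (95% CI=1.72-12.49)" in the abstract.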

Improvement of Energy Efficiency of Plants Factory by Arranging Air Circulation Fan and Air Flow Control Based on CFD (CFD 기반의 순환 팬 배치 및 유속조절에 의한 식물공장의 에너지 효율 향상)

  • Moon, Seung-Mi;Kwon, Sook-Youn;Lim, Jae-Hyun
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.57-65
    • /
    • 2015
  • As information technology convergence accelerates, research to improve the quality and productivity of crops inside plant factories is actively progressing. Advanced growth environment management technology that can provide the thermal environment and air flow suited to the growth of crops, while considering the characteristics of the facility, is necessary to maximize productivity inside a plant factory. Currently operating plant factories are designed by relying on experience or personal judgment; hence, design and operation technology specific to plant factories has not been established, producing problems such as uneven crop production due to deviations in temperature and air flow, and additional increases in energy consumption after prolonged cultivation. To resolve these problems, the arrangement of air flow devices and their operating parameters should be optimized in advance using computational fluid dynamics (CFD) during the design stage of a plant factory facility. In this study, the optimum arrangement and air flow of air circulation fans were investigated using CFD, with the goal of saving energy while minimizing the temperature deviation at each point inside a plant factory. The simulation conditions were categorized into a total of 12 types according to the installation location, quantity, and air flow settings of the circulation fans, while the other boundary conditions were held at the same level. The analysis results showed that an average temperature of 296.33 K, matching the set temperature, and an average air flow velocity of 0.51 m/s, suitable for plant growth, were well maintained under the Case 4 condition, in which two sets of air circulation fans were installed above the plant cultivation beds. Furthermore, among the fan control settings of Case D, the best results were obtained under the Case D-3 condition, in which the air velocity at the outlet was adjusted to 2.9 m/s.

Selection Model of System Trading Strategies using SVM (SVM을 이용한 시스템트레이딩전략의 선택모형)

  • Park, Sungcheol;Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.59-71
    • /
    • 2014
  • System trading has recently become more popular among Korean traders. System traders use automatic order systems driven by system-generated buy and sell signals, which come from predetermined entry and exit rules coded by the traders. Most research on system trading has focused on designing profitable entry and exit rules using technical indicators. However, market conditions, strategy characteristics, and money management also influence the profitability of system trading. Unexpected price deviations from the predetermined trading rules can incur large losses for system traders. Therefore, most professional traders use strategy portfolios rather than a single strategy. Building a good strategy portfolio is important because trading performance depends on it. Despite the importance of designing strategy portfolios, rule-of-thumb methods have typically been used to select trading strategies. In this study, we propose an SVM-based strategy portfolio management system. SVMs were introduced by Vapnik and are known to be effective in the data mining area; they can build good portfolios within a very short period of time. Since the SVM minimizes structural risk, it is well suited to the futures trading market, in which prices do not move exactly as they did in the past. Our system trading strategies include a moving-average cross system, MACD cross system, trend-following system, buy-dips-and-sell-rallies system, DMI system, Keltner channel system, Bollinger Bands system, and Fibonacci system. These strategies are well known and frequently used by many professional traders. We programmed these strategies to generate automated entry and exit signals, and we propose an SVM-based strategy selection system together with a portfolio construction and order routing system. The strategy selection system is a portfolio training system: it generates training data and builds an SVM model from the optimal portfolio.
We form an $m{\times}n$ data matrix by dividing the KOSPI 200 index futures data into equal periods. The optimal strategy portfolio is derived by analyzing the performance of each strategy, and the SVM model is generated from this data and the optimal portfolio. We use 80% of the data for training and the remaining 20% for testing. For training, we select the two strategies that show the highest profit on the next day; selection method 1 always selects two strategies, while method 2 selects at most two strategies whose profit exceeds 0.1 point. We use the one-against-all method, which has a fast processing time. We analyze the daily data of KOSPI 200 index futures contracts from January 1990 to November 2011, using the price change rates of the preceding 50 days as SVM input. The training period runs from January 1990 to March 2007 and the test period from March 2007 to November 2011. We suggest three benchmark portfolios: BM1 holds two contracts of KOSPI 200 index futures over the testing period; BM2 consists of the two strategies with the largest cumulative profit during the 30 days before testing starts; BM3 consists of the two strategies with the best profits during the testing period. Trading costs include brokerage commission and slippage. The proposed strategy portfolio management system earns more than double the profit of the benchmark portfolios: after deducting trading costs, BM1 shows a profit of 103.44 points, BM2 488.61 points, and BM3 502.41 points, so the best benchmark is the portfolio of the two most profitable strategies during the test period. The proposed system 1 shows a profit of 706.22 points and system 2 a profit of 768.95 points after deducting trading costs, and the equity curves over the entire period show a stable pattern. Together with the higher profit, this suggests a good trading direction for system traders. Portfolios could be made even more stable and profitable by adding a money management module to the system.
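
One of the strategies listed above, the moving-average cross system, can be sketched as a simple signal generator. The window lengths here are illustrative assumptions, not the paper's parameters:

```python
# Moving-average cross signals: +1 when the fast SMA crosses above the
# slow SMA (golden cross, enter long), -1 when it crosses below
# (dead cross, exit / enter short), 0 otherwise.

def sma(prices, window, i):
    """Simple moving average of `window` prices ending at index i."""
    return sum(prices[i - window + 1:i + 1]) / window

def ma_cross_signals(prices, fast=5, slow=20):
    signals = [0] * len(prices)
    for i in range(slow, len(prices)):
        prev_diff = sma(prices, fast, i - 1) - sma(prices, slow, i - 1)
        diff = sma(prices, fast, i) - sma(prices, slow, i)
        if prev_diff <= 0 < diff:
            signals[i] = 1          # golden cross
        elif prev_diff >= 0 > diff:
            signals[i] = -1         # dead cross
    return signals

# A downtrend followed by an uptrend produces exactly one golden cross:
prices = [60 - i for i in range(30)] + [30 + i for i in range(50)]
signals = ma_cross_signals(prices)
```

Each of the eight strategies is a signal generator of this shape; the SVM layer then chooses which two generators' signals to trade in the next period.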

A Study on the Field Data Applicability of Seismic Data Processing using Open-source Software (Madagascar) (오픈-소스 자료처리 기술개발 소프트웨어(Madagascar)를 이용한 탄성파 현장자료 전산처리 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration
    • /
    • v.21 no.3
    • /
    • pp.171-182
    • /
    • 2018
  • We processed seismic field data using open-source software (Madagascar) to verify whether it is applicable to field data, which have a low signal-to-noise ratio and high uncertainties in velocities. Madagascar, whose workflows are scripted in Python, is considered well suited to the development of processing technologies thanks to its capabilities for multidimensional data analysis and reproducibility. However, this open-source software has not been widely used for field data processing so far because of its complicated interfaces and data structure system. To verify the effectiveness of Madagascar on field data, we applied it to a typical seismic data processing flow comprising data loading, geometry build-up, F-K filtering, predictive deconvolution, velocity analysis, normal moveout correction, stacking, and migration. The field data for the test were acquired in the Gunsan Basin, Yellow Sea, using a streamer consisting of 480 channels and 4 air-gun arrays. The results at every processing step are compared with those produced by Landmark's ProMAX (SeisSpace R5000), a commercial processing package. Madagascar shows relatively high efficiency in data I/O and management, as well as reproducibility. Additionally, it performs quick and exact calculations in some automated procedures such as stacking velocity analysis. There were no remarkable differences in the results after applying the signal enhancement flows of the two packages. For the deeper part of the subsurface image, however, the commercial software shows better results than the open-source software, simply because the commercial package offers various de-multiple flows and an interactive environment for delicate processing work that Madagascar lacks.
Considering that many researchers around the world are developing data processing algorithms for Madagascar, we expect that open-source software such as Madagascar can come to be used for commercial-level processing, with the strengths of expandability, cost effectiveness and reproducibility.
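
The normal moveout (NMO) correction in the flow above has a simple kinematic core that can be shown self-contained; this is a generic textbook sketch (nearest-neighbour resampling, illustrative velocity and geometry), not Madagascar's or ProMAX's implementation:

```python
# For offset x and stacking velocity v, a reflection at zero-offset
# time t0 arrives at t(x) = sqrt(t0^2 + (x/v)^2); NMO correction maps
# each sample of a far-offset trace back to its zero-offset time.
import math

def nmo_time(t0, offset, velocity):
    """Hyperbolic travel time (s) for zero-offset time t0 (s)."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

def nmo_correct(trace, dt, offset, velocity):
    """Resample a trace onto the zero-offset time axis."""
    corrected = []
    for i in range(len(trace)):
        t0 = i * dt
        t = nmo_time(t0, offset, velocity)
        j = int(round(t / dt))            # nearest-neighbour resampling
        corrected.append(trace[j] if j < len(trace) else 0.0)
    return corrected
```

Velocity analysis amounts to searching for the velocity that best flattens these hyperbolas across offsets before stacking, which is the automated stacking-velocity-analysis step the abstract mentions.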

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the use of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and huge costs. Many studies have therefore been conducted on the automatic creation of categories. Unfortunately, most of these methods cannot categorize complex documents with multiple topics, because they assume that each document belongs to exactly one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories. However, these are limited in turn because their learning process requires training on a multi-categorized document set; they cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by performing topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate the matching score of each document against multiple categories.
A document is then classified into a category if and only if its matching score is higher than a predefined threshold; for example, a document can be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology improves the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and their use of vulgar language and slang is rarer than in other everyday text documents. We collected news articles from July 2012 to June 2013. The articles exhibit large variations in the number per category, because readers have different levels of interest in each category and because events occur with different frequencies across categories. To minimize distortion of the results by the differing numbers of articles, we extracted 3,000 articles equally from each of eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could propose two additional categories for each of 23,089 documents.
Precision, recall, and F-score were 0.605, 0.629, and 0.617 respectively when only the top-1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, precision, recall, and F-score varied widely across the eight categories.
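
The score-chaining step described above can be sketched in miniature; the topic names, weights, and threshold below are illustrative assumptions, not values from the study:

```python
# Document/category score = sum over topics of
# (document/topic weight) * (topic/category correspondence);
# a document gains every category whose score clears the threshold.

def doc_category_scores(doc_topic, topic_category):
    """Chain doc->topic weights through topic->category weights."""
    scores = {}
    for cat in next(iter(topic_category.values())):
        scores[cat] = sum(doc_topic[t] * topic_category[t][cat]
                          for t in doc_topic)
    return scores

def assign_categories(scores, threshold=0.3):
    return sorted(c for c, s in scores.items() if s >= threshold)

# A document whose topic mixture spans two categories:
doc_topic = {"t1": 0.6, "t2": 0.4}
topic_category = {"t1": {"Economy": 0.8, "Politics": 0.2},
                  "t2": {"Economy": 0.1, "Politics": 0.9}}
scores = doc_category_scores(doc_topic, topic_category)
cats = assign_categories(scores)
```

Here a document originally filed under a single category ends up with both "Economy" and "Politics" because both chained scores exceed the threshold, which is exactly how a single-categorized document is extended to multiple categories.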