• Title/Summary/Keyword: Communication Errors


Fast Median Filtering Algorithms for Real-Valued 2-dimensional Data (실수형 2차원 데이터를 위한 고속 미디언 필터링 알고리즘)

  • Cho, Tai-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.11
    • /
    • pp.2715-2720
    • /
    • 2014
  • Median filtering is very effective at removing impulse-type noise, so it has been widely used in many signal processing applications. However, due to the high time complexity caused by its non-linearity, median filtering is often applied with a small filter window size. A lot of work has been done on devising fast median filtering algorithms, but most of them are efficient only for input data with finite integer values, such as images. Little work has been carried out on fast 2-D median filtering algorithms that can handle real-valued 2-D data. In this paper, a fast and simple 2-D median filter is presented, and its performance is compared with MATLAB's 2-D median filter and a heap-based 2-D median filter. The proposed algorithm is shown to be much faster than MATLAB's 2-D median filter and consistently faster than the heap-based algorithm, which is much more complicated than the proposed one. Also, a more efficient median filtering scheme for 2-D real-valued data with a finite range of values is presented, which uses higher-bit integer 2-D median filtering with negligible quantization error.
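
The kind of 2-D median filter the abstract contrasts against can be sketched as follows. This is the straightforward sort-per-window baseline that fast algorithms improve on, not the paper's proposed method; the function and variable names are my own:

```python
# Baseline 2-D median filter for real-valued data: each k x k window is
# collected and its median taken independently, which is the slow approach
# that fast median-filtering algorithms accelerate.
from statistics import median

def median_filter_2d(data, k=3):
    """Apply a k x k median filter to a 2-D list of floats.

    Border pixels are handled by clamping the window to the array,
    so the output has the same shape as the input.
    """
    h, w = len(data), len(data[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [
                data[y][x]
                for y in range(max(0, i - r), min(h, i + r + 1))
                for x in range(max(0, j - r), min(w, j + r + 1))
            ]
            out[i][j] = median(window)
    return out
```

On a 3x3 patch of 1.0s with a single impulse of 99.0 in the center, the filtered center value becomes 1.0, which is the impulse-removal behavior the abstract describes.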

Parallel Computation For The Edit Distance Based On The Four-Russians' Algorithm (4-러시안 알고리즘 기반의 편집거리 병렬계산)

  • Kim, Young Ho;Jeong, Ju-Hui;Kang, Dae Woong;Sim, Jeong Seop
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.2 no.2
    • /
    • pp.67-74
    • /
    • 2013
  • Approximate string matching problems have been studied in diverse fields. Recently, fast approximate string matching algorithms have been used to reduce the time and cost of next-generation sequencing. To measure the amount of error between two strings, we use a distance function such as the edit distance. Given two strings X(|X| = m) and Y(|Y| = n) over an alphabet ${\Sigma}$, the edit distance between X and Y is the minimum number of edit operations needed to convert X into Y. The edit distance between X and Y can be computed using the well-known dynamic programming technique in O(mn) time and space. The edit distance can also be computed using the Four-Russians' algorithm, whose preprocessing step runs in $O((3{\mid}{\Sigma}{\mid})^{2t}t^2)$ time and $O((3{\mid}{\Sigma}{\mid})^{2t}t)$ space and whose computation step runs in O(mn/t) time and O(mn) space, where t represents the block size. In this paper, we present a parallelized version of the computation step of the Four-Russians' algorithm. Our algorithm computes the edit distance between X and Y in O(m+n) time using m/t threads. We then implemented both the sequential version and our parallelized version of the Four-Russians' algorithm using CUDA to compare their execution times. When t = 1 and t = 2, our algorithm runs about 10 times and 3 times faster than the sequential algorithm, respectively.
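
The O(mn) dynamic-programming recurrence mentioned above can be written directly in Python. This is the textbook sequential formulation that the Four-Russians' algorithm and its parallelization accelerate, not the authors' CUDA code:

```python
# Classic edit-distance DP: dp[i][j] is the minimum number of insertions,
# deletions, and substitutions needed to turn x[:i] into y[:j].
def edit_distance(x, y):
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of x[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    return dp[m][n]
```

For example, `edit_distance("kitten", "sitting")` returns 3 (two substitutions and one insertion), the standard illustration of this recurrence.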

Concurrency Control Using the Update Graph in Replicated Database Systems (중복 데이터베이스 시스템에서 갱신그래프를 이용한 동시성제어)

  • Choe, Hui-Yeong;Lee, Gwi-Sang;Hwang, Bu-Hyeon
    • The KIPS Transactions:PartD
    • /
    • v.9D no.4
    • /
    • pp.587-602
    • /
    • 2002
  • Replicated database systems emerged to resolve the reduced availability and reliability caused by the communication failures and site errors that arise in centralized database systems. However, when many update transactions occur, each update must be applied identically to all replicated data. This causes problems such as message overhead from synchronization and reduced concurrency due to delayed transactions. In this paper, I propose a new concurrency control algorithm for enhancing the degree of parallelism of transactions in a fully replicated database designed to improve availability and reliability. To improve system performance in a replicated database, the final operations should be performed at the site where a transaction is submitted, and update-only transactions composed of write-only operations should be executed independently at all sites. I propose a concurrency control method that maintains the consistency of the replicated database and reflects the results of update-only transactions at all sites. The superiority of the proposed method has been tested in terms of response and withdrawal rates. The results confirm the superiority of the proposed technique over the classical correlation-based method.

Case of Service Design Process for Medical Space Focused on Users (사용자중심 의료공간을 위한 서비스디자인 프로세스의 적용사례)

  • Noh, Meekyung
    • Journal of The Korea Institute of Healthcare Architecture
    • /
    • v.21 no.4
    • /
    • pp.27-36
    • /
    • 2015
  • Purpose: Of late, the focus of service design is moving toward emphasizing customer satisfaction and taking users' experience more seriously. In addition to this change of perspective in service design, scholars in this area are paying more attention to service design methodology and process, as well as to its theory and real-world case studies. In the case of medical space, there have been few studies attempting to apply service design methods useful for deriving user-focused results. The author of this paper believes, however, that case study-oriented approaches are more needed in this area than ones focusing on theoretical aspects. The author hopes thereby to expand the horizon toward practical application of spatial design beyond service design methodology. Methods: In order to incorporate the strengths of service design methodology that can reflect a variety of user opinions, this study introduces diverse tools within the framework of the double diamond process. In addition, it presents field cases that successfully brought about the best results in medical space design. It ends by summarizing an ideal process for medical space design that is reasonable and comprehensive. Results: Medical service encompasses preventive medicine as well as the treatment of existing medical conditions. A study establishing the platform of medical service design consists of a wide range of trend research, followed by a two-matrix design classification summarized from the results of that research. The draft design process is divided into five stages composed of basic tools for establishing spatial flow lines, created by matching service design tools with each stage of the space design process. Throughout, the most important elements to consider are communication and empathy. When service design is actually applied to space design, one can see that the output reflects the users' needs very well. The service design process for user-oriented medical space can thus be established through iteration on the final outcome and feedback on the results. Implications: One can see that service design with the hospital at its center produces results that best encompass users' needs. If the user-focused service design process for medical space can be extended to other space designs, the author believes that it would enhance users' level of satisfaction and minimize trial and error.

Extraction of Water Depth in Coastal Area Using EO-1 Hyperion Imagery (EO-1 Hyperion 영상을 이용한 연안해역의 수심 추출)

  • Seo, Dong-Ju;Kim, Jin-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.4
    • /
    • pp.716-723
    • /
    • 2008
  • With the rapid development of science and technology and the recent widening of mankind's range of activities, the development of coastal waters and their environment have emerged as global issues. In relation to this, the use of satellite images has been on the increase to allow more extensive analyses. This study aims at utilizing hyperspectral satellite images to determine the depth of coastal waters more efficiently. For this purpose, a partial image of the research area was first extracted from an EO-1 Hyperion satellite image, and atmospheric and geometric corrections were made. Minimum noise fraction (MNF) transformation was then performed to compress the bands, and the band most suitable for analyzing the characteristics of the water body was selected. Within the chosen band, the diffuse attenuation coefficient Kd was determined. By deciding the end-members of pixels with pure spectral properties and conducting mapping based on the linear spectral unmixing method, the depth of water in the coastal area in question was ultimately determined. The research findings showed that the calculated water depth differed by an average of 1.2 m from that given on the digital sea map; the errors grew larger as the water being measured got deeper. If the accuracy of atmospheric correction, end-member determination, and Kd calculation is enhanced in the future, it will likely be possible to determine water depths more economically and efficiently.
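
The role of the diffuse attenuation coefficient Kd can be illustrated with the standard exponential attenuation relation, L(z) = L0·exp(-Kd·z), which can be inverted for depth z. This is a sketch of the underlying physics only, not the paper's full unmixing workflow, and the numeric values below are invented placeholders:

```python
# Depth inversion from exponential light attenuation in water.
# L0 = radiance just below the surface, L = radiance after a path of
# length z through the water column, Kd = diffuse attenuation coefficient.
import math

def depth_from_radiance(L0, L, Kd):
    """Invert L = L0 * exp(-Kd * z) for depth z (metres)."""
    if L0 <= 0 or L <= 0 or Kd <= 0:
        raise ValueError("radiances and Kd must be positive")
    return math.log(L0 / L) / Kd
```

With hypothetical values L0 = 100, Kd = 0.2 /m, a measured radiance of 100·exp(-1.0) recovers a depth of 5.0 m; in practice larger depths leave less signal above the noise floor, consistent with the abstract's finding that errors grow in deeper water.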

Correlation Analysis Between O/D Trips and Call Detail Record: A Case Study of Daegu Metropolitan Area (모바일 통신 자료와 O/D 통행량의 상관성 분석 - 대구광역시 사례를 중심으로)

  • Kim, Keun-uk;Chung, Younshik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.39 no.5
    • /
    • pp.605-612
    • /
    • 2019
  • Traditionally, travel demand forecasts have been based on data collected through surveys of individual travel behavior, and their limitations, such as the accuracy of the resulting forecasts, have also been raised. In recent years, advancements in information and communication technologies have enabled new datasets for travel demand forecasting research. Such datasets include data from global positioning system (GPS) devices, mobile phone signalling, and call detail records (CDR), and they are used to reduce errors in travel demand forecasts. Against this background, the objective of this study is to assess the feasibility of CDR as base data for travel demand forecasts. To this end, CDR data collected in the Daegu Metropolitan area over four days in April 2017, including weekdays and weekend days, were used. Based on these data, we analyzed the correlation between CDR and the travel demand derived from travel survey data. The result showed that a correlation exists and that it tends to be stronger for discretionary trips such as non-home-based business, non-home-based shopping, and non-home-based other trips.
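
The correlation analysis referred to above reduces to computing a Pearson correlation coefficient between paired values, e.g. CDR counts and surveyed trip volumes per zone pair. A minimal sketch from the definition (the function name is my own and the data values in the test are invented placeholders):

```python
# Pearson correlation coefficient, computed directly from its definition:
# r = cov(x, y) / (std(x) * std(y)), here via sums of centered products.
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A perfectly proportional pair of series yields r = 1.0; values closer to 0 would indicate that CDR volumes track the surveyed O/D trips poorly for that trip purpose.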

Effects of Knowledge Management Activities on Transaction Satisfaction and Business Performance (지식전달체계가 거래만족과 사업성과에 미치는 영향)

  • LEE, Chang Won
    • The Korean Journal of Franchise Management
    • /
    • v.12 no.4
    • /
    • pp.1-11
    • /
    • 2021
  • Purpose: The franchise system, started by Singer Sewing Machine in the US, acts as a national economic growth engine in terms of job creation and economic growth. In China, the franchise system was introduced in the mid-1980s, and since the country joined the WTO it has grown by 5-6% every year. However, compared to the growth rate of franchises, studies on shared growth between chain headquarters and franchisees have been insufficient. Accordingly, recent studies on shared growth between chain headquarters and franchisees have become active in China. The purpose of this study is to examine the knowledge transfer system among knowledge creation, knowledge sharing, and the use of knowledge by franchise chain headquarters in China. In addition, the relationship between franchise satisfaction and performance is identified. Research design, data, and methodology: The data were collected from franchise stores in Sichuan, China, with the help of ○○ Incubation, a Sichuan Province-certified incubator. From November 2020 to January 2021, 350 copies of the questionnaire were distributed in China, and 264 copies were returned. Of these, 44 copies with insincere answers and response errors were excluded, and 222 copies were used for analysis. The data were analyzed with the SPSS 22.0 and AMOS 22.0 statistical packages. Result: The results of this study are as follows. First, knowledge creation was shown to have a statistically significant impact on knowledge sharing and knowledge utilization. In particular, the effect of knowledge creation was stronger on knowledge sharing than on knowledge utilization. Knowledge sharing also has a statistically significant effect on knowledge utilization. Second, knowledge sharing was not significant for transaction satisfaction and business performance, whereas knowledge utilization was significant for both. These results suggest lower interdependence within the Chinese franchise system. Finally, transaction satisfaction was statistically significant for business performance. The purpose of this study was to examine the importance of knowledge management in securing long-term competitive advantage for Chinese franchises. This study shows that knowledge sharing is important for long-term franchise growth, and that franchises in China lack knowledge sharing methods. In addition, it was found that the growth of Chinese franchises requires the systematization of communication, information sharing measures and timing, help from chain headquarters, and mutual awareness of responsibility.

Evaluation of International Quality Control Procedures for Detecting Outliers in Water Temperature Time-series at Ieodo Ocean Research Station (이어도 해양과학기지 수온 시계열 자료의 이상값 검출을 위한 국제 품질검사의 성능 평가)

  • Min, Yongchim;Jun, Hyunjung;Jeong, Jin-Yong;Park, Sung-Hwan;Lee, Jaeik;Jeong, Jeongmin;Min, Inki;Kim, Yong Sun
    • Ocean and Polar Research
    • /
    • v.43 no.4
    • /
    • pp.229-243
    • /
    • 2021
  • Quality control (QC) of observed time series has become more critical as the types and amount of observed data have increased along with the development of ocean observing sensors and communication technology. International ocean observing institutions have developed and operated automatic QC procedures for these observed time series. In this study, the performance of the automated QC procedures proposed by the U.S. IOOS (Integrated Ocean Observing System), NDBC (National Data Buoy Center), and OOI (Ocean Observatory Initiative) was evaluated for observed time series, particularly from the Yellow and East China Seas, by taking advantage of a confusion matrix. We focused on detecting additive outliers (AO) and temporary change outliers (TCO) based on ocean temperature observations from the Ieodo Ocean Research Station (I-ORS) in 2013. Our results show that the IOOS variability check tends to classify normal data as AO or TCO. The NDBC variability check tracks outliers well but also tends to classify much normal data as abnormal, particularly for rapidly fluctuating time series. The OOI procedure seems to detect AO and TCO most effectively, and its rate of classifying normal data as abnormal is also the lowest among the international checks. However, all three checks need additional scrutiny because they often fail to classify outliers when observations are intermittent or when outliers result from systematic errors, and they tend to classify normal data as outliers when the observed data change abruptly because a sensor sits within a sharp boundary between two water masses, a common feature in shallow-water observations. Therefore, this study underlines the necessity of developing a new QC algorithm for time series observed in a shallow sea.
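
Evaluating a QC check against labeled data with a confusion matrix, as done in the study, can be sketched as follows; the helper names and the example labels are assumptions for illustration, not the authors' code:

```python
# Confusion-matrix evaluation of an outlier-flagging QC check:
# ground-truth outlier labels are compared against the flags the check raised.
def confusion_matrix(truth, flagged):
    """truth/flagged: equal-length lists of booleans, True = outlier."""
    tp = sum(t and f for t, f in zip(truth, flagged))          # outlier, flagged
    fp = sum((not t) and f for t, f in zip(truth, flagged))    # normal, flagged
    fn = sum(t and (not f) for t, f in zip(truth, flagged))    # outlier, missed
    tn = sum((not t) and (not f) for t, f in zip(truth, flagged))
    return tp, fp, fn, tn

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

In these terms, a check that "classifies normal data as abnormal" has a high false-positive count (low precision), while one that "fails to classify outliers" has a high false-negative count (low recall).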

Problems of Applying Information Technologies in Public Governance

  • Goshovska, Valentyna;Danylenko, Lydiia;Hachkov, Andrii;Paladiiichuk, Sergii;Dzeha, Volodymyr
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.71-78
    • /
    • 2021
  • The relevance of this research stems from the need to identify the basic problems in the relationship between public governance and information technology, since understanding such interconnections can indicate the consequences of developing and spreading information technologies. The purpose of the research is to outline the issues of applying information technologies in the public governance sphere. 500 civil servants took part in the survey (Ukraine). A two-stage study was conducted in order to obtain practical results. The first stage involved collecting and analyzing the responses of civil servants on the Mentimeter online platform. In the second stage, the administrator used the SWOT-analysis system. The tendencies in using information technologies have been determined as follows: the development of institutional support; the creation of analytical portals for ensuring public control; the accountability, transparency, and activity of civil servants; the implementation of e-government projects; and a change in the philosophy of electronic services development. Considering the threats and risks to the public governance system in the context of applying information technologies, the following aspects generated by societal requirements have been identified: the creation of a digital bureaucracy system; the preservation of information and digital inequality; and an insufficient level of knowledge and skills in the field of digital technologies, reducing the publicity of the state and municipal governance system.
Weaknesses of modern public governance in the context of IT implementation have been highlighted, namely: "digitization for digitalization's sake"; lack of necessary legal regulation; inefficiency of electronic document management (issues caused by the imperfect interface of interactive reporting forms, frequent changes in the composition of indicators in reporting forms, and the desire of higher authorities to solve the problem of their introduction); lack of data analysis infrastructure (due to imperfect organization of interaction between departments, poor capacity of information resources, and a lack of analytical databases); and a lack of necessary digital competencies among civil servants. Based on the results of the SWOT analysis, the strengths have been identified as the possibility of continuous communication and constant self-learning; the weaknesses as age restrictions for civil servants and insufficient acquisition of knowledge; the threats as system errors in the provision of services through automation; and the opportunities for introducing IT in the public governance system as broad global trends and facilitation of the document management system. The practical significance of the research lies in providing recommendations for eliminating the problems of IT implementation in the public governance sphere outlined by civil servants.

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.3
    • /
    • pp.71-80
    • /
    • 2021
  • The selection of an appropriate neural network algorithm is an important step for accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data. These networks include deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) neural networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing their errors and processing times. Each neural network model was trained using a tax dataset, and the trained model was used for data prediction to compare accuracies among the various algorithms. Furthermore, the effects of activation functions and various optimizers on the performance of the models were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75 respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst in terms of processing time. The findings of this study are thus expected to be useful for scientists and developers.
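
The two error metrics reported above, RMSE and the R2 score, follow directly from their definitions; a reference sketch in plain Python, not the authors' TensorFlow code:

```python
# RMSE: root of the mean squared error between predictions and targets.
# R2:   1 - (residual sum of squares) / (total sum of squares);
#       1.0 means perfect prediction, 0.0 means no better than the mean.
import math

def rmse(y_true, y_pred):
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2_score(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A model that always predicts the target mean scores R2 = 0.0, so the reported scores of 0.78 and 0.75 indicate the GRU and LSTM models explain most of the variance in the tax dataset.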