Journal of the Korea Institute of Information and Communication Engineering, v.18, no.11, pp.2715-2720, 2014
Median filtering is very effective at removing impulse-type noise, so it has been widely used in many signal processing applications. However, because of the time complexity caused by its non-linearity, median filtering is often applied with only a small filter window. Much work has been done on devising fast median filtering algorithms, but most of them can be applied efficiently only to input data with finite integer values, such as images. Little work has been carried out on fast 2-D median filtering algorithms that can handle real-valued 2-D data. In this paper, a fast and simple 2-D median filter is presented, and its performance is compared with Matlab's 2-D median filter and a heap-based 2-D median filter. The proposed algorithm is shown to be much faster than Matlab's 2-D median filter and consistently faster than the heap-based algorithm, which is far more complicated than the proposed one. In addition, a more efficient median filtering scheme for 2-D real-valued data with a finite range of values is presented; it applies higher-bit integer 2-D median filtering with negligible quantization error.
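The baseline that fast median filters aim to beat can be sketched as a naive sort-based sliding-window filter; this is a minimal reference implementation for real-valued data, not the paper's optimized algorithm:

```python
import numpy as np

def median_filter_2d(img, k=3):
    """Naive 2-D median filter over a k x k window (k odd).

    Edges are handled by reflect-padding; each output pixel is the
    median of its local window. Cost is O(k^2 log k) per pixel,
    which is why large windows motivate faster algorithms.
    """
    r = k // 2
    padded = np.pad(img, r, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single impulse is removed while the flat background is preserved.
noisy = np.ones((5, 5))
noisy[2, 2] = 100.0  # impulse noise
clean = median_filter_2d(noisy, k=3)
```

The impulse at the center disappears because the median of its window is dominated by the eight surrounding values.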
Kim, Young Ho;Jeong, Ju-Hui;Kang, Dae Woong;Sim, Jeong Seop
KIPS Transactions on Computer and Communication Systems, v.2, no.2, pp.67-74, 2013
Approximate string matching problems have been studied in diverse fields. Recently, fast approximate string matching algorithms have been used to reduce the time and cost of next-generation sequencing. To measure the amount of error between two strings, we use a distance function such as the edit distance. Given two strings X (|X| = m) and Y (|Y| = n) over an alphabet $\Sigma$, the edit distance between X and Y is the minimum number of edit operations needed to convert X into Y. The edit distance between X and Y can be computed with the well-known dynamic programming technique in O(mn) time and space. It can also be computed with the Four-Russians algorithm, whose preprocessing step runs in $O((3|\Sigma|)^{2t}t^2)$ time and $O((3|\Sigma|)^{2t}t)$ space and whose computation step runs in O(mn/t) time and O(mn) space, where t is the block size. In this paper, we present a parallelized version of the computation step of the Four-Russians algorithm. Our algorithm computes the edit distance between X and Y in O(m+n) time using m/t threads. We implemented both the sequential version and our parallelized version of the Four-Russians algorithm in CUDA to compare their execution times. When t = 1 and t = 2, our algorithm runs about 10 times and 3 times faster than the sequential algorithm, respectively.
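The standard O(mn) dynamic-programming recurrence the abstract refers to can be sketched as follows (a plain sequential version, not the Four-Russians or CUDA variant):

```python
def edit_distance(x: str, y: str) -> int:
    """Levenshtein edit distance via the classic O(mn) DP.

    d[i][j] = minimum edits converting x[:i] into y[:j], using
    insertion, deletion, and substitution, each of cost 1.
    Keeping one row at a time reduces space to O(n).
    """
    m, n = len(x), len(y)
    prev = list(range(n + 1))  # row for i = 0: converting "" to y[:j]
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete x[i-1]
                          curr[j - 1] + 1,     # insert y[j-1]
                          prev[j - 1] + cost)  # substitute (or match)
        prev = curr
    return prev[n]
```

For example, `edit_distance("kitten", "sitting")` is 3 (two substitutions and one insertion).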
Replicated database systems emerged to resolve the reduced availability and reliability caused by communication failures and site errors in centralized database systems. However, when many update transactions occur, each update must be executed equally on all replicas, which causes problems such as the message overhead generated by synchronization and reduced concurrency due to delayed transactions. In this paper, I propose a new concurrency control algorithm for enhancing the degree of parallelism of transactions in a fully replicated database designed to improve availability and reliability. To improve system performance in a replicated database, the final operations should be performed at the site where the transaction was submitted, and update-only transactions composed of write-only operations should be executed independently at all sites. I propose a concurrency control method that maintains the consistency of the replicated database and reflects the results of update-only transactions at all sites. The superiority of the proposed method has been tested in terms of response and abort rates, and the results confirm its superiority over the classical correlation-based method.
Journal of The Korea Institute of Healthcare Architecture, v.21, no.4, pp.27-36, 2015
Purpose: Of late, the focus of service design has been moving toward emphasizing customer satisfaction and taking users' experience more seriously. Along with this change in perspective, scholars in the area are paying more attention to service design methodology and process, as well as its theory and real-world case studies. In the case of medical space, there have been few studies attempting to apply service design methods useful for deriving user-focused results. The author believes, however, that case-study-oriented approaches are needed in this area more than ones focusing on theoretical aspects, and hopes thereby to expand the horizon of service design methodology to the practical application of spatial design. Methods: In order to incorporate the strengths of service design methodology, which can reflect a variety of user opinions, this study introduces diverse tools within the framework of the double diamond process. In addition, it presents field cases that successfully produced the best results in medical space design, and it ends by summarizing an ideal process of medical space design that is reasonable and comprehensive. Results: Medical service encompasses preventive medicine as well as the treatment of existing medical conditions. Establishing the platform of medical service design begins with wide-ranging trend research, followed by a two-matrix design classification summarized from the trend research results. The draft design process is divided into five stages composed of basic tools for establishing spatial flow lines, created by matching service design tools with each stage of the space design process. Throughout, the most important elements to consider are communication and empathy. When service design is actually applied to space design, one can see that the output reflects users' needs very well. The service design process for user-oriented medical space can thus be established through interaction on the final outcome and feedback on the results. Implications: Service design with the hospital at its center produces results that encompass users' needs best. If the user-focused service design process for medical space can be extended to other space designs, the author believes it would enhance user satisfaction and minimize trial and error.
Journal of the Korea Institute of Information and Communication Engineering, v.12, no.4, pp.716-723, 2008
With rapid development of science and technology and recent widening of mankind's range of activities, development of coastal waters and the environment have emerged as global issues. In relation to this, to allow more extensive analyses, the use of satellite images has been on the increase. This study aims at utilizing hyperspectral satellite images in determining the depth of coastal waters more efficiently. For this purpose, a partial image of the research subject was first extracted from an EO-1 Hyperion satellite image, and atmospheric and geometric corrections were made. Minimum noise fraction (MNF) transformation was then performed to compress the bands, and the band most suitable for analyzing the characteristics of the water body was selected. Within the chosen band, the diffuse attenuation coefficient Kd was determined. By deciding the end-member of pixels with pure spectral properties and conducting mapping based on the linear spectral unmixing method, the depth of water at the coastal area in question was ultimately determined. The research findings showed the calculated depth of water differed by an average of 1.2 m from that given on the digital sea map; the errors grew larger when the water to be measured was deeper. If accuracy in atmospheric correction, end-member determination, and Kd calculation is enhanced in the future, it will likely be possible to determine water depths more economically and efficiently.
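The linear spectral unmixing step mentioned above solves, for each pixel, a least-squares problem for end-member abundances. A minimal sketch follows; the end-member spectra here are made up for illustration and are not the study's Hyperion data:

```python
import numpy as np

# Hypothetical end-member spectra (rows = bands, columns = end-members).
# In the study, these would come from pixels with pure spectral properties.
E = np.array([[0.10, 0.80],
              [0.30, 0.60],
              [0.70, 0.20]])  # 3 bands, 2 end-members

# A synthetic mixed pixel: 25% of end-member 0 and 75% of end-member 1.
pixel = 0.25 * E[:, 0] + 0.75 * E[:, 1]

# Unconstrained least-squares abundances a solving E @ a ~ pixel.
a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Practical unmixing usually adds non-negativity and sum-to-one constraints on the abundances; this unconstrained version shows only the core least-squares idea.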
KSCE Journal of Civil and Environmental Engineering Research, v.39, no.5, pp.605-612, 2019
Traditionally, travel demand forecasts have been based on data collected through surveys of individual travel behavior, and limitations such as the accuracy of the resulting forecasts have long been raised. Recently, advancements in information and communication technologies have made new datasets available for travel demand forecasting research. Such datasets include data from global positioning system (GPS) devices, mobile phone signalling data, and call detail records (CDR), and they are used to reduce errors in travel demand forecasts. Against this background, the objective of this study is to assess the feasibility of CDR as base data for travel demand forecasts. To this end, we used CDR data collected in the Daegu Metropolitan area over four days in April 2017, covering both weekdays and weekend days. Based on these data, we analyzed the correlation between CDR and the travel demand given by travel survey data. The results show that a correlation exists and that it tends to be stronger for discretionary trips such as non-home-based business, non-home-based shopping, and non-home-based other trips.
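The correlation analysis described above can be illustrated with a Pearson correlation coefficient between zone-level counts; the counts below are synthetic placeholders, not the Daegu data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-zone CDR activity counts vs. surveyed trip counts.
cdr_counts   = [120, 340, 560, 210, 780]
survey_trips = [100, 300, 500, 250, 700]
r = pearson_r(cdr_counts, survey_trips)
```

A value of r near 1 would indicate that CDR activity tracks surveyed travel demand closely, which is the kind of relationship the study tests trip purpose by trip purpose.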
Purpose: The franchise system, started by Singer Sewing Machine in the US, acts as a national economic growth engine in terms of job creation and economic growth. In China, the franchise system was introduced in the mid-1980s, and since China joined the WTO it has grown by 5-6% every year. However, compared to the growth rate of franchises, studies on shared growth between chain headquarters and franchisees have been insufficient; accordingly, such studies have recently become active in China. The purpose of this study is to examine the knowledge transfer system among knowledge creation, knowledge sharing, and knowledge utilization by franchise chain headquarters in China, and to identify the relationship between franchise satisfaction and performance. Research design, data, and methodology: The data were collected from franchise stores in Sichuan, China, with the help of ○○ Incubation, a Sichuan Province-certified incubator. From November 2020 to January 2021, 350 copies of the questionnaire were distributed in China, and 264 were returned. Of these, 44 copies with insincere answers and response errors were excluded, and 222 copies were used for analysis. The data were analyzed with the SPSS 22.0 and AMOS 22.0 statistical packages. Results: First, knowledge creation was shown to have a statistically significant impact on knowledge sharing and knowledge utilization; in particular, the effect of knowledge creation was greater on knowledge sharing than on knowledge utilization, and knowledge sharing also had a statistically significant effect on knowledge utilization. Second, knowledge sharing was not significant for transaction satisfaction and business performance, whereas knowledge utilization was significant for both. These results can be taken to indicate the low interdependence of the Chinese franchise system. Finally, transaction satisfaction was statistically significant for business performance. This study examined the importance of knowledge management in securing a long-term competitive advantage for Chinese franchises. It shows that knowledge sharing is important for long-term franchise growth, and that franchises in China lack knowledge sharing methods. In addition, it was found that the growth of Chinese franchises requires the systematization of communication, information sharing measures and timing, help from chain headquarters, and mutual awareness of responsibility.
Min, Yongchim;Jun, Hyunjung;Jeong, Jin-Yong;Park, Sung-Hwan;Lee, Jaeik;Jeong, Jeongmin;Min, Inki;Kim, Yong Sun
Ocean and Polar Research, v.43, no.4, pp.229-243, 2021
Quality control (QC) of observed time series has become more critical as the types and amount of observed data have increased along with the development of ocean observing sensors and communication technology. International ocean observing institutions have developed and operate automatic QC procedures for such time series. In this study, the performance of the automated QC procedures proposed by the U.S. IOOS (Integrated Ocean Observing System), NDBC (National Data Buoy Center), and OOI (Ocean Observatory Initiative) was evaluated on observed time series from the Yellow and East China Seas using a confusion matrix. We focused on detecting additive outliers (AO) and temporary change outliers (TCO) in ocean temperature observations from the Ieodo Ocean Research Station (I-ORS) in 2013. Our results show that the IOOS variability check tends to classify normal data as AO or TCO. The NDBC variability check tracks outliers well but also tends to classify much normal data as abnormal, particularly in rapidly fluctuating time series. The OOI procedure detects AO and TCO most effectively, and its rate of classifying normal data as abnormal is the lowest among the international checks. However, all three checks need additional scrutiny: they often fail to flag outliers arising from intermittent observations or systematic errors, and they tend to flag normal data as outliers when the observed values change abruptly because a sensor sits within a sharp boundary between two water masses, a common feature in shallow-water observations. This study therefore underlines the necessity of developing a new QC algorithm for time series from shallow seas.
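Scoring a QC flagging procedure against labeled data with a confusion matrix, as described above, can be sketched as follows (the labels here are hypothetical, not the I-ORS observations):

```python
def confusion_matrix(truth, flagged):
    """2x2 confusion matrix for outlier QC.

    truth/flagged are boolean sequences: True = outlier (e.g. AO/TCO).
    Returns (tp, fp, fn, tn): true/false positives and negatives.
    """
    tp = sum(t and f for t, f in zip(truth, flagged))
    fp = sum((not t) and f for t, f in zip(truth, flagged))
    fn = sum(t and (not f) for t, f in zip(truth, flagged))
    tn = sum((not t) and (not f) for t, f in zip(truth, flagged))
    return tp, fp, fn, tn

# A check that over-flags normal data (like the variability checks
# discussed above) shows up as a high false-positive count.
truth   = [False, False, True, False, True, False, False, False]
flagged = [False, True,  True, True,  True, False, True,  False]
tp, fp, fn, tn = confusion_matrix(truth, flagged)
```

From these counts one can derive the usual rates (e.g. precision tp/(tp+fp) and recall tp/(tp+fn)) used to compare the IOOS, NDBC, and OOI procedures.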
International Journal of Computer Science & Network Security, v.21, no.8, pp.71-78, 2021
The relevance of this research stems from the need to identify the basic problems at the intersection of public governance and information technology, since understanding such interconnections can indicate the consequences of the development and spread of information technologies. The purpose of the research is to outline the issues of applying information technologies in the public governance sphere. 500 civil servants took part in the survey (Ukraine). A two-stage study was conducted to obtain practical results. The first stage involved collecting and analyzing the responses of civil servants on the Mentimeter online platform; in the second stage, the administrator applied SWOT analysis. The tendencies in using information technologies were determined as follows: development of institutional support; creation of analytical portals for ensuring public control; the level of accountability, transparency, and activity of civil servants; implementation of e-government projects; and a change in the philosophy of electronic service development. Considering the threats and risks to the public governance system in the context of applying information technologies, the following aspects generated by societal requirements were identified: creation of a digital bureaucracy system; persistence of information and digital inequality; and an insufficient level of knowledge and skills in the field of digital technologies, reducing the publicity of the state and municipal governance system.
Weaknesses of modern public governance in the context of IT implementation were highlighted, namely: "digitization for digitalization"; lack of necessary legal regulation; inefficiency of electronic document management (issues caused by the imperfect interface of interactive reporting forms, frequent changes in the composition of indicators in reporting forms, and the desire of higher authorities to solve the problem of their introduction); lack of a data analysis infrastructure (due to imperfect interaction between departments, the poor capacity of information resources, and a lack of analytical databases); and a lack of necessary digital competencies among civil servants. Based on the results of the SWOT analysis, strengths were identified (the possibility of continuous communication; constant self-learning), along with weaknesses (age restrictions for civil servants; insufficient acquisition of knowledge), threats (system errors in the provision of services through automation), and opportunities for introducing IT into the public governance system (broad global trends; facilitation of the document management system). The practical significance of the research lies in providing recommendations for eliminating the problems of IT implementation in the public governance sphere outlined by civil servants.
KIPS Transactions on Computer and Communication Systems, v.10, no.3, pp.71-80, 2021
The selection of an appropriate neural network algorithm is an important step for accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to predict future data efficiently, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing their errors and processing times. Each neural network model was trained on a tax dataset, and the trained models were used for data prediction to compare accuracy across the algorithms. Furthermore, the effects of activation functions and various optimizers on model performance were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the shortest processing time but the highest average RMSE, 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst in terms of processing time. The findings of this study are thus expected to be useful to scientists and developers.
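The two error metrics used in the comparison above, RMSE and the R2 score, can be computed as follows (the sample values are illustrative, not the tax dataset):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    1.0 means perfect prediction; 0.0 means no better than
    predicting the mean of y_true.
    """
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
err = rmse(y_true, y_pred)
fit = r2_score(y_true, y_pred)
```

Lower RMSE and higher R2 both indicate a better fit, which is how the GRU/LSTM results (RMSE 0.12, R2 0.78/0.75) compare favorably with the DNN baseline (RMSE 0.163).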