• Title/Summary/Keyword: Computation Efficiency

Efficient RSA-Based PAKE Protocol for Low-Power Devices (저전력 장비에 적합한 효율적인 RSA 기반의 PAKE 프로토콜)

  • Lee, Se-Won;Youn, Taek-Young;Park, Yung-Ho;Hong, Seok-Hie
    • Journal of the Korea Institute of Information Security & Cryptology, v.19 no.6, pp.23-35, 2009
  • Password-Authenticated Key Exchange (PAKE) protocols are a useful tool for secure communication over open networks without a pre-shared secret key or a public key infrastructure (PKI). Designing efficient PAKE protocols from RSA has proven difficult, so many PAKE protocols are based on the Diffie-Hellman key exchange (DH-PAKE) instead. Nevertheless, it is important to design an efficient PAKE based on the RSA function, since that function is well suited to imbalanced communication environments. In this paper, we propose a computationally efficient key exchange protocol based on the RSA function that is suitable for low-power devices in an imbalanced environment. Our protocol is more efficient than previous RSA-PAKE protocols in both theoretical computation cost and measured execution time in the same environment; in particular, it performs key exchange up to 84% more efficiently than CEPEK, the most efficient secure RSA-PAKE protocol. The performance can be improved further by moving some costly operations to an offline step. We prove the security of our protocol under a firmly formalized security model in the random oracle model.
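
For illustration, the offline/online split mentioned in the abstract can be sketched with a standard fixed-base precomputation trick for modular exponentiation. This is a generic Python sketch of the idea only, not the paper's protocol; the toy modulus and exponent below are far too small for real RSA.

```python
# Generic offline/online split for modular exponentiation with a fixed base.
# NOT the paper's protocol; names and numbers are illustrative only.

def precompute_powers(g: int, N: int, max_bits: int) -> list:
    """Offline step: table[i] = g**(2**i) mod N, by repeated squaring."""
    table = [g % N]
    for _ in range(1, max_bits):
        table.append(table[-1] * table[-1] % N)
    return table

def online_exp(table: list, e: int, N: int) -> int:
    """Online step: only one modular multiplication per set bit of e."""
    acc, i = 1, 0
    while e:
        if e & 1:
            acc = acc * table[i] % N
        e >>= 1
        i += 1
    return acc

# Sanity check against Python's built-in pow (toy parameters):
N, g, e = 3233, 7, 1789
table = precompute_powers(g, N, e.bit_length())
assert online_exp(table, e, N) == pow(g, e, N)
```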

Verifiable Cloud-Based Personal Health Record with Recovery Functionality Using Zero-Knowledge Proof (영지식 증명을 활용한 복원 기능을 가진 검증 가능한 클라우드 기반의 개인 건강기록)

  • Kim, Hunki;Kim, Jonghyun;Lee, Dong Hoon
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.6, pp.999-1012, 2020
  • As the use of personal health records has increased in recent years, research on cryptographic protocols for protecting the personal information in those records has been actively conducted. Currently, personal health records are commonly encrypted and outsourced to the cloud. However, this approach makes it difficult to verify the integrity of the records, and data availability is poor because decryption is required for every use. To solve these problems, this paper proposes a verifiable cloud-based personal health record management scheme that uses a redactable signature scheme together with zero-knowledge proofs. The redactable signature scheme preserves privacy by allowing sensitive information to be deleted while the integrity of the original document can still be verified, and the zero-knowledge proof shows that nothing other than the redacted parts of the original document has been deleted or modified. In addition, the scheme is designed to offer higher data availability than existing management schemes by allowing deleted parts to be recovered, only when necessary, through a Redact Recovery Authority. We also propose a verifiable cloud-based personal health record management model based on the proposed scheme and analyze its efficiency through an implementation.
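
For illustration, the redact-then-verify idea can be sketched with a simple hash-list construction in Python. This is a minimal sketch only: the HMAC stands in for a real digital signature, the zero-knowledge proof and the Redact Recovery Authority are omitted entirely, and all names are illustrative.

```python
# Hash-list sketch of redactable verification. The HMAC below is a
# stand-in for a real signature (e.g. RSA/ECDSA); the ZK component of
# the paper is omitted. Illustrative only.
import hashlib
import hmac

def _leaf(i: int, block: bytes) -> bytes:
    return hashlib.sha256(i.to_bytes(4, "big") + block).digest()

def _doc_hash(leaves: list) -> bytes:
    return hashlib.sha256(b"".join(leaves)).digest()

def sign(key: bytes, blocks: list) -> bytes:
    """Sign the hash over all leaf hashes (stand-in signature via HMAC)."""
    leaves = [_leaf(i, b) for i, b in enumerate(blocks)]
    return hmac.new(key, _doc_hash(leaves), hashlib.sha256).digest()

def redact(blocks: list, hidden: set) -> list:
    """Replace each hidden block by its leaf hash; others stay readable."""
    return [("hash", _leaf(i, b)) if i in hidden else ("clear", b)
            for i, b in enumerate(blocks)]

def verify(key: bytes, redacted: list, sig: bytes) -> bool:
    """Check integrity of a redacted document against the signature."""
    leaves = [v if tag == "hash" else _leaf(i, v)
              for i, (tag, v) in enumerate(redacted)]
    expected = hmac.new(key, _doc_hash(leaves), hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

record = [b"name: Hong Gildong", b"diagnosis: ...", b"visit: 2020-06-01"]
sig = sign(b"secret-key", record)
assert verify(b"secret-key", redact(record, {1}), sig)  # diagnosis redacted
```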

Numerical comparative investigation on blade tip vortex cavitation and cavitation noise of underwater propeller with compressible and incompressible flow solvers (압축성과 비압축성 유동해석에 따른 수중 추진기 날개 끝 와류공동과 공동소음에 대한 수치비교 연구)

  • Ha, Junbeom;Ku, Garam;Cho, Junghoon;Cheong, Cheolung;Seol, Hanshin
    • The Journal of the Acoustical Society of Korea, v.40 no.4, pp.261-269, 2021
  • Because of its computational efficiency, most previous studies on cavitation flow and its noise have used numerical methods based on the incompressible Reynolds-Averaged Navier-Stokes (RANS) equations, without validating the incompressible assumption. In this study, to investigate the effects of flow compressibility on Tip Vortex Cavitation (TVC) flow and noise, both incompressible and compressible simulations of the TVC flow are performed, and the Ffowcs Williams-Hawkings (FW-H) integral equation is used to predict the TVC noise. The DARPA Suboff submarine body with an underwater propeller of 17-degree skew angle is targeted to account for the effects of upstream disturbance. The computational domain is set to match the test section of the large cavitation tunnel at the Korea Research Institute of Ships and Ocean Engineering so that the predictions can be compared with the measurements. To predict the TVC accurately, the Delayed Detached Eddy Simulation (DDES) technique is used in combination with adaptive grid techniques. The acoustic spectrum obtained with the compressible flow solver shows closer agreement with the measured one.

Text Classification Using Heterogeneous Knowledge Distillation

  • Yu, Yerin;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information, v.27 no.10, pp.29-41, 2022
  • Recently, with the development of deep learning technology, a variety of huge models with excellent performance have been built by pre-training on massive amounts of text data. For such a model to be applied to real-life services, however, its inference speed must be fast and its computational cost low, so model compression techniques are attracting attention. Knowledge distillation, a representative model compression technique, transfers the knowledge already learned by a teacher model to a relatively small student model and can be applied in a variety of ways. However, conventional knowledge distillation has a limitation: because the teacher model learns only the knowledge needed to solve the given task and distills it to the student from that same point of view, the student struggles on problems with low similarity to the training data. We therefore propose a heterogeneous knowledge distillation method in which the teacher model learns a higher-level concept than the knowledge required for the student's target task and then distills this knowledge to the student model. Through classification experiments on about 18,000 documents, we confirmed that heterogeneous knowledge distillation outperformed traditional knowledge distillation in both learning efficiency and accuracy.
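
For reference, the standard (homogeneous) knowledge distillation loss of Hinton et al. can be sketched in a few lines of numpy; the heterogeneous variant proposed in the paper changes what the teacher learns, not this basic loss form.

```python
# Standard knowledge distillation loss: temperature-softened KL term from
# the teacher plus a hard cross-entropy term from the true labels.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * T^2 * KL(teacher_T || student_T) + (1 - alpha) * hard CE."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = softmax(student_logits)[np.arange(len(labels)), labels]
    ce = -np.log(hard + 1e-12)
    return np.mean(alpha * T * T * kl + (1 - alpha) * ce)

student = np.array([[2.0, 0.5, -1.0]])
teacher = np.array([[3.0, 1.0, -2.0]])
print(distillation_loss(student, teacher, labels=np.array([0])))
```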

A Case Study on the Introduction and Use of Artificial Intelligence in the Financial Sector (금융권 인공지능 도입 및 활용 사례 연구)

  • Byung-Jun Kim;Sou-Bin Yun;Mi-Ok Kim;Sam-Hyun Chun
    • Industry Promotion Research, v.8 no.2, pp.21-27, 2023
  • This study examines government and financial-sector policies and use cases for artificial intelligence and seeks to derive future policy tasks for the financial sector. According to Gartner, the noteworthy technologies leading the financial industry in 2022 were 'generative AI', 'autonomous systems', and 'Privacy-Enhanced Computation (PEC)'. New technologies such as artificial intelligence, big data, and blockchain are spurring innovation in the financial sector. As interest in data loss, data sharing, and personal information protection grows with the spread of telecommuting after the COVID-19 pandemic, companies are expected to shift toward new digital technologies. Global financial companies are likewise expanding IT spending to develop products and to promote process innovation in the management and operation of existing businesses using new digital technology. The financial sector is applying new digital technologies to prevent money laundering, improve work efficiency, and strengthen personal information protection. In the era of the Big Blur, where the boundaries between industries are disappearing, financial institutions must actively utilize new technologies in their work in order to preoccupy the market and maintain a competitive edge against new entrants.

A Study on the Shape and Movement in Dissolved Air Flotation for the Algae Removal (수중조류제거(水中藻類除去)를 위한 가압부상(加壓浮上)에 있어서 기포(氣泡)의 양태(模態)에 관한 연구(研究))

  • Kim, Hwan Gi;Jeong, Tae Seop
    • KSCE Journal of Civil and Environmental Engineering Research, v.4 no.4, pp.79-93, 1984
  • Dissolved air flotation (DAF) has been shown to be an efficient process for the removal of algae from water. The efficiency of DAF is affected by the volume ratio of pressurized liquid to sample, the pressure of the pressurized liquid, the contact time, the choice and dose of coagulant, the water temperature, the turbulence of the reactor, and the bubble size and rising velocity. The purpose of this paper is to compare the observed bubble rising velocity with the theoretical one, to investigate the adhesion of bubbles to floc, and to assess the influence of bubble size and velocity on the process. The results of the theoretical review and experimental investigation are as follows. Ives' equation is more suitable than Stokes' equation for computing the bubble rising velocity. The collection of bubbles and algae floc is of the convective collection type and results from absorption rather than adhesion or collision. Treatment efficiency is excellent when the bubbles are smaller than $100{\mu}m$ and the turbulence of the reactor is small. Under optimum conditions for continuous DAF, the volume ratio of pressurized liquid to sample is 15%, the contact time in the reactor is 15 minutes, the pressure of the pressurized liquid is $4kg/cm^2$, and the distance from the jet needle to the inlet is 30 cm.
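
For illustration, the Stokes-regime rising velocity that the study compares against can be computed directly; Ives' equation, which the study found more suitable, is not reproduced here. A minimal Python sketch with textbook water properties:

```python
# Terminal rise velocity of a small bubble in the Stokes (creeping-flow)
# regime, treating the bubble as a rigid sphere. Textbook comparison only;
# Ives' correction used in the study is not reproduced.
G = 9.81            # gravitational acceleration, m/s^2
RHO_WATER = 998.0   # density of water, kg/m^3 (about 20 degrees C)
RHO_AIR = 1.2       # density of air, kg/m^3
MU = 1.0e-3         # dynamic viscosity of water, Pa*s

def stokes_rise_velocity(d: float) -> float:
    """Rise velocity (m/s) of a bubble of diameter d (m) by Stokes' law."""
    return G * d**2 * (RHO_WATER - RHO_AIR) / (18.0 * MU)

# A 100-micrometre bubble, the size threshold reported in the abstract:
print(stokes_rise_velocity(100e-6))   # about 5.4e-3 m/s
```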

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems, v.18 no.4, pp.117-127, 2012
  • Due to the recent expansion of Web 2.0-based services and the widespread adoption of smartphones, online social network services (SNS) are being popularized among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand human relationships. In a social network service, each relation between users is represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS data are actively utilized in enterprise marketing, analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. A social network is generally modeled with nodes and arcs and is often depicted in a social network diagram, where nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy, intensity of connection, and classification into groups. Ever since social networking services drew the attention of millions of users, numerous studies have analyzed their user relationships and messages. Representative SNA methods include degree centrality, betweenness centrality, and closeness centrality. Degree centrality does not consider the shortest path between nodes, but the shortest path is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a major concern because the networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to ever-growing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; with 10,000 nodes there can be up to 49,995,000 links, so analyzing the network becomes very expensive. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest-path-finding method, we show how efficient our approach can be by conducting betweenness centrality and closeness centrality analyses, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and rapidly find shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Because a large number of links is concentrated on only a few nodes in online social networks, most nodes have relatively few connections, and a node with many connections functions as a hub. When searching for a particular node, following users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between distinct nodes.
With this heuristic evaluation function, the worst case occurs when the target node sits at the bottom of a skewed tree, so a preprocessing step is conducted to handle such target nodes. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled 160,000 people online, constructed the social network, and compared our method with the previous breadth-first-search-based approach in search and analysis time. The suggested method takes 240 seconds to search for nodes, whereas the breadth-first-search-based method takes 1,781 seconds, making ours about 7.4 times faster. Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The proposed method thus shows that a large social network can be analyzed with better time performance. As a result, our method should improve the efficiency of social network analysis, making it particularly useful for studying social trends and phenomena.
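
For illustration, the degree-based best-first search described in the abstract can be sketched in a few lines of Python; the preprocessing step and the crawled 160,000-user graph are omitted, and the toy graph below is illustrative.

```python
# Greedy best-first search using node degree as the heuristic: hub nodes
# (many links) are expanded first. Degree is not an admissible heuristic,
# so the path is found quickly but is not guaranteed to be the shortest.
import heapq

def degree_best_first_path(graph, start, goal):
    """graph: dict mapping node -> set of neighbor nodes."""
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier, (-len(graph[nb]), nb, path + [nb]))
    return None   # no path between start and goal

g = {"a": {"b", "h"}, "b": {"a", "h"}, "h": {"a", "b", "c", "d"},
     "c": {"h"}, "d": {"h"}}
print(degree_best_first_path(g, "a", "d"))   # routes through hub "h"
```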

A Study on the Method of Minimizing the Bit-Rate Overhead of H.264 Video when Encrypting the Region of Interest (관심영역 암호화 시 발생하는 H.264 영상의 비트레이트 오버헤드 최소화 방법 연구)

  • Son, Dongyeol;Kim, Jimin;Ji, Cheongmin;Kim, Kangseok;Kim, Kihyung;Hong, Manpyo
    • Journal of the Korea Institute of Information Security & Cryptology, v.28 no.2, pp.311-326, 2018
  • This paper reports experiments on the News sample video at QCIF ($176{\times}144$) resolution using the JM v10.2 reference code of H.264/AVC-MPEG. Encrypting a region of interest (ROI) causes drift, because the motion prediction and compensation of the H.264 standard make successive frames reference the encrypted region unnecessarily. To mitigate the drift, the most recent related work re-inserts an encrypted I-picture at a fixed interval, but this increases the amount of additional computation and thus the bit-rate overhead of the entire video. Therefore, in the motion prediction and compensation for each frame, we restrict the reference search range for blocks and frames inside the ROI to be encrypted, while leaving the reference search range in the non-ROI unrestricted to maintain normal coding efficiency. After encoding the video with this restricted reference search range, we apply RC4 bit-stream encryption to an identifiable ROI, such as a face, in order to protect the personal information in the video. We then compare the unencrypted original video, the most recent related method, and the proposed method under the same conditions. The bit-rate overhead of the proposed method is 2.35% higher than the original video and 14.93% lower than the latest related method, while still mitigating temporal drift. These improved results were verified by the experiments in this study.
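
For illustration, RC4 applied to only a selected byte range can be sketched in Python. In the paper the cipher is applied to the ROI portion of the H.264 bit-stream inside the codec; this byte-level sketch shows only the mechanism, and RC4 is considered weak by modern standards.

```python
# RC4 stream cipher applied to a sub-range of a buffer (ROI-style).
def rc4_keystream(key: bytes):
    """RC4 key scheduling (KSA) followed by keystream generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def crypt_region(data: bytes, key: bytes, start: int, end: int) -> bytes:
    """XOR bytes in [start, end) with the keystream; because it is XOR,
    the same call both encrypts and decrypts."""
    ks = rc4_keystream(key)
    out = bytearray(data)
    for idx in range(start, end):
        out[idx] ^= next(ks)
    return bytes(out)

frame = bytes(range(64))                      # stand-in for coded ROI bytes
enc = crypt_region(frame, b"roi-key", 16, 48)
assert crypt_region(enc, b"roi-key", 16, 48) == frame
```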

A Study on Design of Agent based Nursing Records System in Attending System (에이전트기반 개방병원 간호기록시스템 설계에 관한 연구)

  • Kim, Kyoung-Hwan
    • Journal of Intelligence and Information Systems, v.16 no.2, pp.73-94, 2010
  • The attending system is a medical system that allows doctors in clinics to use the extra equipment in hospitals (beds, laboratory, operating room, etc.) for their patients' care under a contract between the doctors and hospitals. The system is therefore very beneficial in terms of the efficient use of medical resources. However, it is necessary to develop a strong support system that strengthens its weaknesses and builds on its merits. If doctors use hospital beds under the attending system, they can check a patient's condition often and provide nursing care services. However, the current attending system lacks delivery and assistance support, so for it to perform successfully, a networking system should be developed to facilitate communication between doctors and nurses. In particular, the nursing records in the attending system could help doctors monitor a patient's condition and the nursing care provided. A nursing record is the formal documentation associated with nursing care. It is not merely a data repository that helps nurses track their activities; nursing records represent a resource of primary information that can be reused. In order to maximize their usefulness, nursing records have been introduced as part of computerized patient records. However, nursing records are internal data that are not disclosed by hospitals, and the lack of standardization of record lists makes nursing records difficult to share. Under the attending system, nurses want to minimize the effort required to maintain additional records, so they try to keep the current level of nursing records in the form of record lists and record attributes, while doctors require more detailed, real-time information about their patients in order to monitor their condition. Therefore, this study developed a system for assisting in the maintenance and sharing of nursing records under the attending system. In contrast to previous research on the functionality of computer-based nursing records, we emphasize the practical usefulness of nursing records from the viewpoint of an actual implementation of the attending system. We suggest that nurses can design a nursing record dictionary for their convenience, and that doctors and nurses can confirm the definitions they look up in the dictionary through negotiations with intelligent agents. Such an agent-based system can facilitate networking among medical institutes. Multi-agent systems are a widely accepted paradigm for the distribution and sharing of computation workloads in the scientific community, and agent-based systems have been developed with differences in functional cooperation, coordination, and negotiation. To support such communication, this study proposes a framework for a multi-agent based system. The agent-based approach is useful for developing a system that supports trade-offs in transactions involving multiple attributes. A brief summary of our contributions follows. First, we propose an efficient and accurate utility representation and acquisition mechanism based on a preference scale that minimizes user interactions with the agent; trade-offs between various transaction attributes can also be computed easily.
Second, by providing a multi-attribute negotiation framework based on this attribute utility evaluation mechanism, we allow the doctors in charge and the nurses to negotiate over the various transaction attributes in the nursing record lists defined by the nurses. Third, we designed the architecture of the nursing record management server and a system of agents that supports the doctors and nurses within the framework and mechanisms proposed above. A formal protocol was also developed to create and control the communication required for negotiations. We verified the realization of the system by developing a web-based prototype, implemented using ASP and IIS 5.1.
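
For illustration, the kind of additive multi-attribute utility such negotiations typically compute can be sketched in Python; the attribute names, weights, and scoring functions below are hypothetical, not taken from the paper.

```python
# Additive multi-attribute utility: U(offer) = sum_a w_a * u_a(offer[a]),
# with weights summing to 1 and each scoring function mapping an attribute
# value into [0, 1]. All attributes and numbers here are hypothetical.
def additive_utility(weights, scores, offer):
    return sum(w * scores[a](offer[a]) for a, w in weights.items())

# Hypothetical negotiation between a doctor agent and a nurse agent over
# how often a record item is updated and how detailed it is (scale 1-5):
weights = {"updates_per_day": 0.6, "detail_level": 0.4}
scores = {"updates_per_day": lambda f: min(f / 24.0, 1.0),
          "detail_level":    lambda d: (d - 1) / 4.0}
offer = {"updates_per_day": 6, "detail_level": 3}
print(additive_utility(weights, scores, offer))   # 0.6*0.25 + 0.4*0.5 = 0.35
```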

A stratified random sampling design for paddy fields: Optimized stratification and sample allocation for effective spatial modeling and mapping of the impact of climate changes on agricultural system in Korea (농지 공간격자 자료의 층화랜덤샘플링: 농업시스템 기후변화 영향 공간모델링을 위한 국내 농지 최적 층화 및 샘플 수 최적화 연구)

  • Minyoung Lee;Yongeun Kim;Jinsol Hong;Kijong Cho
    • Korean Journal of Environmental Biology, v.39 no.4, pp.526-535, 2021
  • Spatial sampling design plays an important role in GIS-based modeling studies because it increases modeling efficiency while reducing the cost of sampling. In the field of agricultural systems, research demand for modeling based on high-resolution spatial data to predict and evaluate climate change impacts is growing rapidly, and the need for and importance of spatial sampling design are increasing accordingly. The purpose of this study was to design a spatial sampling of the paddy fields in Korea (11,386 grid cells at 1 km spatial resolution) for use in agricultural spatial modeling. A stratified random sampling design was developed and applied for the 2030s, 2050s, and 2080s under the two RCP scenarios 4.5 and 8.5. Twenty-five weather and four soil characteristics were used as stratification variables. Stratification and sample allocation were optimized to ensure a minimum sample size under given precision constraints for 16 target variables such as crop yield, greenhouse gas emission, and pest distribution. The precision and accuracy of the sampling were evaluated through sampling simulations based on the coefficient of variation (CV) and relative bias, respectively. As a result, the paddy fields could be optimally stratified into 5 to 21 strata with 46 to 69 samples. The evaluation showed that the target variables stayed within the precision constraints (CV < 0.05, except for crop yield) with low bias (below 3%). These results can contribute to reducing sampling cost and computation time while retaining high predictive power, and the resulting sample grids are expected to be widely used as representative samples in various agricultural spatial modeling studies.
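
For illustration, a standard way to allocate a total sample size across strata is Neyman allocation, sketched below in Python; the stratum sizes and standard deviations are hypothetical, not the study's optimized strata, and rounding means the allocations may not sum exactly to n.

```python
# Neyman allocation: n_h proportional to N_h * S_h (stratum size times
# stratum standard deviation), rounded and floored at one sample each.
import numpy as np

def neyman_allocation(N_h, S_h, n):
    w = N_h * S_h
    return np.maximum(1, np.round(n * w / w.sum())).astype(int)

N_h = np.array([4000, 3000, 2500, 1886])   # grid cells per stratum (hypothetical)
S_h = np.array([0.8, 1.2, 0.5, 2.0])       # per-stratum SD of a target variable
print(neyman_allocation(N_h, S_h, 60))     # e.g. [16 18  6 19]
```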