• Title/Summary/Keyword: Computational Information (계산정보)

Search Results: 9,135, Processing Time: 0.04 seconds

A DB Pruning Method in a Large Corpus-Based TTS with Multiple Candidate Speech Segments (대용량 복수후보 TTS 방식에서 합성용 DB의 감량 방법)

  • Lee, Jung-Chul;Kang, Tae-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.572-577
    • /
    • 2009
  • Large corpus-based concatenating Text-to-Speech (TTS) systems can generate natural synthetic speech without additional signal processing. To prune the redundant speech segments in a large speech segment DB, we can utilize a decision-tree-based triphone clustering algorithm widely used in the speech recognition field. However, conventional methods have problems in representing the acoustic transitional characteristics of phones and in applying context questions with hierarchic priority. In this paper, we propose a new clustering algorithm to downsize the speech DB. First, three 13th-order MFCC vectors from the first, medial, and final frames of a phone are combined into a 39-dimensional vector to represent the transitional characteristics of the phone. Then, three hierarchically grouped question sets are used to construct the triphone trees. For the performance test, we used the DTW algorithm to calculate the acoustic similarity between the target triphone and the triphone from the tree search result. Experimental results show that the proposed method can reduce the size of the speech DB by 23% and select better phones with higher acoustic similarity. Therefore the proposed method can be applied to build a small-sized TTS system.
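The abstract's 39-dimensional transition representation and DTW similarity check can be sketched roughly as follows. This is a minimal illustration only: the function names, the use of plain Euclidean frame distance, and the choice of the middle frame as "medial" are our own assumptions, not details from the paper.

```python
import numpy as np

def transition_vector(mfcc_frames):
    """Concatenate the first, medial, and final 13-dim MFCC frames of a
    phone into one 39-dim vector representing its transitional shape."""
    first = mfcc_frames[0]
    medial = mfcc_frames[len(mfcc_frames) // 2]
    final = mfcc_frames[-1]
    return np.concatenate([first, medial, final])

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two MFCC sequences,
    used to score acoustic similarity between triphones."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

A triphone selected from the tree search can then be compared against the target by `dtw_distance` over their MFCC sequences, with lower values meaning higher acoustic similarity.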

Effects of Ultrasonic Scanner Setting Parameters on the Quality of Ultrasonic Images (초음파 진단기의 설정 파라미터가 영상의 질에 미치는 효과)

  • Yang, Jeong-Hwa;Lee, Kyung-Sung;Kang, Gwan-Suk;Paeng, Dong-Guk;Choi, Min-Joo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.2
    • /
    • pp.57-65
    • /
    • 2008
  • Setting parameters of ultrasonic scanners influence the quality of ultrasonic images, so sonographers need to understand their effects in order to obtain optimized images. The present study considered four typical parameters: TGC (Time Gain Control), Gain, Frequency, and DR (Dynamic Range). LCS (low contrast sensitivity) was chosen to compare image quality quantitatively. In the experiment, LCS targets of a standard ultrasonic test phantom (539, ATS, USA) were imaged using a clinical ultrasonic scanner (SA-9000 PRIME, Medison, Korea). While altering the scanner's parameter settings, six LCS target images (+15 dB, +6 dB, +3 dB, -3 dB, -6 dB, -15 dB) were obtained for each setting, and their LCS values were calculated. The results show that the mean pixel value (LCS) is highest at the max setting in TGC, mid to max in Gain, Pen mode in Frequency, and 40-66 dB in DR. Among all images, the one with the highest LCS was obtained at the DR 40 dB setting. The results are expected to be of use in setting the parameters when ultrasonically examining masses often found clinically in either solid lesions (similar to the +15, +6, +3 dB targets) or cystic lesions (similar to the -15, -6, -3 dB targets).

Design of Authentication Mechanism for Command Message based on Double Hash Chains (이중 해시체인 기반의 명령어 메시지 인증 메커니즘 설계)

  • Park Wang Seok;Park Chang Seop
    • Convergence Security Journal
    • /
    • v.24 no.1
    • /
    • pp.51-57
    • /
    • 2024
  • Although industrial control systems (ICSs) keep evolving with the introduction of the Industrial IoT, which converges information technology (IT) and operational technology (OT), this also brings a variety of threats and vulnerabilities not experienced in past ICSs that had no connection to external networks. Since various control command messages are sent to ICS field devices to monitor and control operational processes, both message integrity and control-center authentication must be guaranteed. Conventional message integrity codes and signature schemes, based on symmetric keys and public keys respectively, are not suitable considering the asymmetry between the control center and the field devices; in particular, compromised-node attacks can be mounted against symmetric-key-based schemes. In this paper, we propose a message authentication scheme based on double hash chains constructed from a cryptographic hash function without introducing other primitives, and then an extension scheme using a Merkle tree for multiple uses of the double hash chains. The proposed scheme is shown to be far more efficient in computational complexity than conventional schemes.
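As a rough illustration of the one-way property such schemes rely on, here is a minimal single hash chain in Python. This is our own sketch only: the paper's double-chain construction and its Merkle-tree extension are not reproduced, and SHA-256 is an assumed choice of hash function.

```python
import hashlib

def make_chain(seed: bytes, length: int):
    """Build a hash chain h0 = H(seed), h_i = H(h_{i-1}). The last element
    serves as the public commitment; earlier elements are revealed later,
    in reverse order, to authenticate successive messages."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(revealed: bytes, commitment: bytes, steps: int) -> bool:
    """Check that hashing `revealed` `steps` times reaches the commitment,
    which a field device can do with only the public chain head."""
    h = revealed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h == commitment
```

The asymmetry the abstract mentions is visible here: verification needs only repeated hashing from public data, while forging an earlier chain value would require inverting the hash.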

Literature Review of AI Hallucination Research Since the Advent of ChatGPT: Focusing on Papers from arXiv (챗GPT 등장 이후 인공지능 환각 연구의 문헌 검토: 아카이브(arXiv)의 논문을 중심으로)

  • Park, Dae-Min;Lee, Han-Jong
    • Informatization Policy
    • /
    • v.31 no.2
    • /
    • pp.3-38
    • /
    • 2024
  • Hallucination is a significant barrier to the utilization of large language models and multimodal models. In this study, we collected 654 computer science papers with "hallucination" in the abstract from arXiv, from December 2022 to January 2024, following the advent of ChatGPT, and conducted frequency analysis, knowledge network analysis, and a literature review to explore the latest trends in hallucination research. The results showed that research was active in the fields of "Computation and Language," "Artificial Intelligence," "Computer Vision and Pattern Recognition," and "Machine Learning." We then analyzed research trends in these four major fields, focusing on the main authors and dividing the work into data, hallucination detection, and hallucination mitigation. The main trends included hallucination mitigation through supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), inference enhancement via chain of thought (CoT), and growing interest in hallucination mitigation within multimodal AI. This study provides insights into the latest developments in hallucination research through a technology-oriented literature review and is expected to help subsequent research in engineering as well as the humanities and social sciences.

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the significant data mining techniques: a method of discovering useful patterns from such databases. Frequent pattern mining, one such technique, extracts patterns whose frequencies exceed a minimum support threshold; these are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly assumes that all items in the database have the same nature. In real-world applications, however, items can have distinct characteristics, so a pattern mining technique that reflects those characteristics is required. In the single-support framework, where the natures of items are not considered, the threshold must be set very low to mine patterns containing rare items, which yields too many patterns including meaningless ones; conversely, no such pattern can be mined if the threshold is too high. This dilemma is called the rare item problem. To solve it, initial research proposed approximate approaches that split the data into several groups according to item frequencies or group related rare items. However, these methods cannot find all frequent patterns, including rare frequent patterns, because they are approximate. Hence, a pattern mining model with multiple minimum supports was proposed. In this model, each item has its own minimum support threshold, called MIS (Minimum Item Support), calculated from item frequencies in the database.
By applying the MIS, the multiple minimum supports model finds all rare frequent patterns without generating meaningless patterns or losing significant ones. Meanwhile, candidate patterns are extracted during mining, and in the single-support model only the single minimum support is compared with the frequencies of the candidates, so the characteristics of the items composing a candidate pattern are not reflected and the rare item problem recurs. To address this, the multiple minimum supports model uses the minimum MIS value among the items of a candidate pattern as that candidate's support threshold, thereby reflecting its characteristics. To efficiently mine frequent patterns, including rare ones, with this concept, tree-based algorithms of the multiple minimum supports model sort the items in the tree in MIS-descending order, in contrast to single-support algorithms, which order items in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple-minimum-supports-based algorithm outperforms the single-support-based one while demanding more memory for the MIS information; moreover, both algorithms show good scalability.
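The MIS idea above can be sketched in a few lines of Python. This is a toy enumeration-based illustration, not the tree-based algorithm the paper evaluates; the MIS formula `max(beta * f(i), LS)` follows the common formulation in the multiple-minimum-supports literature, and `beta` and `ls` are assumed tuning values.

```python
from collections import Counter
from itertools import combinations

def mine_with_mis(transactions, beta=0.5, ls=1):
    """Toy multiple-minimum-supports mining over patterns of size 1 and 2:
    MIS(i) = max(beta * f(i), LS), and a candidate pattern is frequent if
    its support >= the minimum MIS among its items."""
    freq = Counter(item for t in transactions for item in t)
    mis = {i: max(beta * f, ls) for i, f in freq.items()}
    patterns = {}
    items = sorted(freq)
    for r in (1, 2):
        for cand in combinations(items, r):
            support = sum(1 for t in transactions if set(cand) <= set(t))
            # per-candidate threshold: the smallest MIS of its items
            if support >= min(mis[i] for i in cand):
                patterns[cand] = support
    return patterns
```

With a low MIS for rare items and a higher MIS for common ones, patterns containing rare items survive without flooding the output with incidental combinations of frequent items.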

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.95-118
    • /
    • 2017
  • Recently, centered on the downtown area, transactions of row housing and multiplex housing have increased, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing remains a blind spot for real estate information, creating social problems due to changes in market size and information asymmetry driven by shifting demand. Moreover, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were established along administrative boundaries and have been used in existing real estate studies. Because these are urban-planning zones, they are not a district classification suited to real estate research. Building on prior work, this study found that Seoul's spatial structure needs to be reset when estimating future housing prices. This study therefore attempted to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing; in other words, the simple division by existing administrative districts has proved inefficient, and the aim here is to cluster Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to actual transaction price data of row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The data were actual transaction prices of row and multiplex housing in Seoul from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Preprocessing comprised removal of underground transactions, price standardization per area, and removal of outlier transactions (above 5 and below -5).
Through preprocessing, the data were reduced from 132,707 to 126,759 cases; the R program was used for analysis. After preprocessing, the data model was constructed: K-means clustering was performed first, followed by a regression analysis using the hedonic model and a cosine similarity analysis. Based on the constructed model, clustering was performed on the longitude and latitude of Seoul and compared with the existing districts. The results indicate that the goodness of fit of the model was above 75% and the variables used in the hedonic model were significant. In other words, the existing 5 or 25 administrative districts were reorganized into 16 clusters. This study thus derives a clustering method for row and multiplex housing in Seoul using the K-Means clustering algorithm and a hedonic model reflecting price characteristics, and presents academic and practical implications, the limitations of the study, and directions for future research. The academic implications are that clustering by price characteristics improves on the areas used by the Seoul Metropolitan Government, the KAB, and existing real estate research, and that, whereas apartments have been the main subject of prior studies, this study proposes a method of classifying areas in Seoul using public information (i.e., actual transaction data from MOLIT) under Government 3.0. The practical implications are that the results can serve as basic data for research on row and multiplex housing, are expected to activate such research, and should increase the accuracy of models based on actual transactions.
Future research should conduct various analyses to overcome the limitations of the thresholds used and pursue deeper investigation.
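The spatial clustering step described above can be sketched as a minimal K-means over (longitude, latitude) points. This is an illustration only: the paper clusters 126,759 preprocessed transactions in R, whereas here the first-k-points initialization and the tiny input are our own simplifications.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal K-means over (longitude, latitude) points, echoing the
    paper's clustering of Seoul (16 clusters in the study). Initialization
    uses the first k points for determinism -- a simplification."""
    centers = points[:k].astype(float).copy()
    for _ in range(iters):
        # distance of every point to every center, then nearest assignment
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):  # guard against empty clusters
                centers[c] = points[labels == c].mean(axis=0)
    return labels, centers
```

In the study's pipeline, the resulting cluster labels would replace the administrative-district variable before fitting the hedonic regression.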

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks within a mode. In this paper, we address the HW/SW partitioning problem for multi-mode multi-task embedded applications with task timing constraints. The objective is to find a minimal total system cost for the allocation and mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. Success in solving the problem depends closely on how fully the potential parallelism among module executions is exploited. However, because the search space of that parallelism is inherently very large, and to keep schedulability analysis tractable, prior HW/SW partitioning methods have not fully exploited the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems simultaneously: (1) allocating the processing resources, (2) mapping the processing resources to the modules in tasks, and (3) determining an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise-refinement partitioning technique for single-mode multi-task applications, and then extend it to multi-mode multi-task applications.
Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, compared with the conventional method.
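To give a feel for the cost-versus-deadline trade-off at the heart of HW/SW partitioning, here is a toy greedy heuristic. It is emphatically not the paper's method: the paper solves allocation, mapping, and scheduling jointly with parallelism analysis, whereas this sketch assumes a purely sequential schedule and made-up per-module numbers.

```python
def greedy_partition(modules, deadline):
    """Toy HW/SW partitioning: start with all modules in software and,
    while the sequential schedule misses the deadline, move to hardware
    the module with the best time-saved-per-cost ratio."""
    impl = {name: "SW" for name in modules}

    def total_time():
        return sum(m["hw_time"] if impl[n] == "HW" else m["sw_time"]
                   for n, m in modules.items())

    cost = 0
    while total_time() > deadline:
        # software modules that would actually get faster in hardware
        cand = [(n, (m["sw_time"] - m["hw_time"]) / m["hw_cost"])
                for n, m in modules.items()
                if impl[n] == "SW" and m["sw_time"] > m["hw_time"]]
        if not cand:
            return None  # no move can meet the deadline: infeasible
        best = max(cand, key=lambda x: x[1])[0]
        impl[best] = "HW"
        cost += modules[best]["hw_cost"]
    return impl, cost
```

The paper's contribution lies precisely where this sketch is weakest: exploiting parallel execution of modules rather than a sequential sum of execution times.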

Comparison of Two Methods for Estimating the Appearance Probability of Seawater Temperature Difference for the Development of Ocean Thermal Energy (해양온도차에너지 개발을 위한 해수온도차 출현확률 산정 방법 비교)

  • Yoon, Dong-Young;Choi, Hyun-Woo;Lee, Kwang-Soo;Park, Jin-Soon;Kim, Kye-Hyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.13 no.2
    • /
    • pp.94-106
    • /
    • 2010
  • Understanding the amount of energy resources and selecting a site are required prior to developing Ocean Thermal Energy (OTE), and this requires calculating the appearance probability of the seawater temperature difference (${\Delta}T$) between the sea surface layer and underwater layers. This research calculates the appearance probability of ${\Delta}T$ using frequency analysis (FA) and harmonic analysis (HA) and compares the advantages and weaknesses of the two methods as applied to the South Sea of Korea. The spatial scale of the comparison was divided into global and local scales, related respectively to estimating the amount of energy resources and to site selection. At the global scale, the probability differences (PD) of ${\Delta}T$ calculated by the two methods were mapped as spatial distributions and the areas of PD were compared. At the local scale, the methods were compared using not only the PD at the region of highest probability but also the bimonthly probabilities in the regions of highest and lowest PD. The strong relationship between the probabilities of the two methods (Pearson r = 0.96, ${\alpha}$ = 0.05) showed that both are useful. At the global scale, the area with PD above 10% was less than 5% of the whole area, which means both methods can be applied to estimate the amount of OTE resources; in practice, however, HA was considered more pragmatic owing to its capability of calculating under various ${\Delta}T$ conditions. At the local scale, there was no significant difference between the high-probability areas identified by the two methods (difference under 5%). However, while FA could detect the whole range of probability, HA had the disadvantage of being unable to detect probabilities below 10%. Therefore, HA is more suitable for estimating the amount of energy resources, and FA is more suitable for selecting the site for OTE development.
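The frequency-analysis side of the comparison reduces to a simple empirical estimate: the fraction of observations whose surface-to-depth difference meets a threshold. The sketch below assumes paired temperature time series and a fixed threshold; the actual study works on gridded data over the South Sea.

```python
def appearance_probability(surface_temps, deep_temps, threshold):
    """Frequency-analysis (FA) estimate of the appearance probability of
    the temperature difference dT = surface - depth: the fraction of
    paired observations with dT >= threshold."""
    diffs = [s - d for s, d in zip(surface_temps, deep_temps)]
    hits = sum(1 for dt in diffs if dt >= threshold)
    return hits / len(diffs)
```

Harmonic analysis would instead fit seasonal harmonics to the dT series and derive the exceedance probability from the fitted curve, which is what lets HA evaluate arbitrary dT conditions.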

The Effects of Enterprise Value and Corporate Tax on Credit Evaluation Based on the Corporate Financial Ratio Analysis (기업 재무비율 분석을 토대로 기업가치 및 법인세가 신용평가에 미치는 영향)

  • Yoo, Joon-soo
    • Journal of Venture Innovation
    • /
    • v.2 no.2
    • /
    • pp.95-115
    • /
    • 2019
  • In today's business environment, the credit rating of a nation or company is considered very important not only domestically but also in international transactions. At a time when the importance and reliability of credit evaluation are growing at home and abroad, this study analyzes financial ratios related to corporate profitability, safety, activity, financial growth, and profit growth in order to study the impact of these financial indicators on enterprise value and corporate tax, and of these in turn on credit evaluation. To this end, the financial ratios of 465 KOSPI-listed companies in 2017 were calculated, and the impact of enterprise value and corporate tax on credit evaluation was analyzed. A further study also analyzed the financial data of KOSPI-listed companies over the eight years from 2011, the first year of K-IFRS adoption, to 2018, in order to derive a reliable and consistent conclusion. The research showed that the significance levels among the variables representing profitability, safety, activity, financial growth, and profit growth were significant at the 99% level, except for profit growth. Validation of the research hypotheses found that while the profitability of KOSPI-listed companies significantly affects corporate value and income tax, indicators such as the safety ratio and growth ratio do not. The activity ratio has a significant effect on enterprise value but not on income tax. In addition, enterprise value has a significant effect on the company's credit and corporate income tax, and corporate income tax in turn has a significant effect on credit evaluation, indicating a mediating function of corporate tax.
The further study of the eight-year financial ratios from 2011 to 2018 found that the two variables KARA and LTAX are significant for KISC at the 1% level, whereas the LEVE variable is not. A limitation of this study is that the credit rating score and financial score cannot be regarded as reliable indicators normally obtainable by investors in the capital market, compared with the ranking criteria for corporate bonds or corporate bills directly related to a company's capital procurement costs. Above all, it is necessary to develop credit rating and financial scores reflecting financial indicators such as operating cash flow or the market value of net assets, and non-financial indicators such as industry growth potential or production efficiency.

Applications of Fuzzy Theory on The Location Decision of Logistics Facilities (퍼지이론을 이용한 물류단지 입지 및 규모결정에 관한 연구)

  • 이승재;정창무;이헌주
    • Journal of Korean Society of Transportation
    • /
    • v.18 no.1
    • /
    • pp.75-85
    • /
    • 2000
  • In existing optimization models, crisp data have been used in the objective function or constraints to derive the optimal solution, and subjective factors are eliminated because complex and uncertain circumstances are regarded as probabilistic ambiguity. In other words, the optimal solutions of existing models are treated as completely satisfying the objective function in the process of applying industrial engineering methods to minimize the risks of decision-making. As a result, decision-makers in location problems could not respond appropriately to variations in demand and other variables, and could not be offered a wide range of choices because of insufficient information. Under these circumstances, this study develops a model for the location and size decision problem of logistics facilities using fuzzy theory, with the intention of enabling the most reasonable decision from a subjective point of view under ambiguous circumstances, building on existing decision-making problems that must satisfy constraints while optimizing an objective function under strictly given conditions. After establishing a general mixed integer programming (MIP) model, based on the results of existing studies, to decide location and size simultaneously, a fuzzy mixed integer programming (FMIP) model was developed using fuzzy theory. The general linear programming software LINDO 6.01 was used to simulate and evaluate the developed model with examples and to judge the appropriateness and adaptability of the FMIP model for the real world.
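The move from crisp to fuzzy constraints can be illustrated with the linear membership functions commonly used when fuzzifying an MIP (the Zimmermann max-min approach). This is a generic sketch under assumed numbers, not the paper's specific FMIP formulation.

```python
def linear_membership(value, target, tolerance):
    """Linear membership for a fuzzy <= constraint: degree 1 if fully
    satisfied (value <= target), 0 if violated beyond the tolerance,
    and linear in between."""
    if value <= target:
        return 1.0
    if value >= target + tolerance:
        return 0.0
    return (target + tolerance - value) / tolerance

def overall_satisfaction(values_targets_tols):
    """Max-min decision rule: the overall degree of satisfaction of a
    candidate solution is the minimum membership over all fuzzified
    constraints (and the fuzzified objective)."""
    return min(linear_membership(v, t, p) for v, t, p in values_targets_tols)
```

In an FMIP, one then introduces an auxiliary variable for this minimum degree and maximizes it, which is how a solver such as LINDO can handle the fuzzified problem as an ordinary MIP.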