• Title/Summary/Keyword: 스피커시스템 (speaker system)


Transmitted Noise Reduction Performance of Piezoelectric Single Panel through Piezo-damping (압전감쇠를 통한 압전단일패널의 전달 소음저감성능)

  • 이중근;김재환;김기선;이형식
    • Journal of the Korea Academia-Industrial cooperation Society / v.2 no.2 / pp.49-56 / 2001
  • The noise reduction capability of piezoelectric single panels is studied experimentally. A piezoelectric single panel is basically a plate structure on which a piezoelectric patch with a shunt circuit is mounted. Piezoelectric shunt damping can reduce sound transmission at the resonance frequencies of the panel structure. Piezo-damping is implemented using a newly proposed tuning method, based on an electrical impedance model, that maximizes the energy dissipated at the shunt circuit. By measuring the electrical impedance at the piezoelectric patch bonded on the structure, an equivalent electrical model is constructed near the system resonance frequency. The resonant shunt circuit for piezoelectric shunt damping is composed of a resistor and an inductor in series, whose values are determined by maximizing the energy dissipated throughout the circuit. The transmitted noise reduction performance of the single panel is tested in an acoustic tunnel: a tube of square cross section with a loudspeaker mounted at one end as a sound source. Panels are mounted in the middle of the tunnel and the sound pressure transmitted across them is measured. With piezoelectric shunt damping enabled, noise reduction is achieved at the resonance frequencies as well. The piezoelectric single panel with piezoelectric shunt damping is a promising technology for broadband noise reduction.

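The tuning idea above lends itself to a short numerical illustration. The Python sketch below follows the paper's recipe in simplified form: represent the patch and the structural resonance with an equivalent electrical model, tune the shunt inductor to the patch capacitance, and sweep the shunt resistor for maximum dissipated energy. The two-branch (Van Dyke-style) circuit and all component values are assumptions standing in for the paper's measured impedance model.

```python
import numpy as np

# Minimal sketch of resonant-shunt tuning by maximizing dissipated
# energy. The equivalent circuit and all values are assumptions,
# not the paper's measured impedance model.
C_p = 47e-9    # patch capacitance [F]        (assumed)
L_m = 120.0    # motional inductance [H]      (assumed)
C_m = 15e-12   # motional capacitance [F]     (assumed)
R_m = 500.0    # motional resistance [ohm]    (assumed)

w_n = 1.0 / np.sqrt(L_m * C_m)                # approximate resonance
w = np.linspace(0.8 * w_n, 1.2 * w_n, 2000)   # band around resonance

def shunt_power(R_s, L_s):
    """Band-averaged power dissipated in the series R-L shunt."""
    Z_mot = R_m + 1j * w * L_m + 1.0 / (1j * w * C_m)    # motional branch
    Z_cp = 1.0 / (1j * w * C_p)                          # patch capacitance
    Z_sh = R_s + 1j * w * L_s                            # resonant shunt
    I_tot = 1.0 / (Z_mot + Z_cp * Z_sh / (Z_cp + Z_sh))  # unit-voltage drive
    I_sh = I_tot * Z_cp / (Z_cp + Z_sh)                  # current divider
    return np.mean(0.5 * R_s * np.abs(I_sh) ** 2)

L_s = 1.0 / (w_n ** 2 * C_p)          # tune inductor to C_p at resonance
R_grid = np.logspace(1, 6, 300)
R_opt = R_grid[np.argmax([shunt_power(R, L_s) for R in R_grid])]
print(f"shunt: L = {L_s * 1e3:.1f} mH, R_opt ~ {R_opt:.0f} ohm")
```

Both limits of the resistor sweep dissipate nothing (a short carries current at zero voltage, an open carries none), so an interior optimum always exists; that is what makes the energy-maximization criterion well posed.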

Artificial Intelligence and College Mathematics Education (인공지능(Artificial Intelligence)과 대학수학교육)

  • Lee, Sang-Gu;Lee, Jae Hwa;Ham, Yoonmee
    • Communications of Mathematical Education / v.34 no.1 / pp.1-15 / 2020
  • Today's healthcare, intelligent robots, smart home systems, and car sharing are already being transformed by cutting-edge information and communication technologies such as Artificial Intelligence (AI), the Internet of Things, the Internet of Intelligent Things, and big data, and these technologies deeply affect our lives. In factories, robots have worked in place of humans for decades (factory and office automation); AI doctors now work in hospitals (Dr. Watson); and AI speakers (GiGA Genie) and AI assistants (Siri, Bixby, Google Assistant) continue to advance natural language processing. To understand AI today, knowledge of mathematics is essential, not optional. Mathematicians have therefore been given the role of explaining the mathematics that makes AI possible. Accordingly, the authors wrote the textbook 'Basic Mathematics for Artificial Intelligence', arranging the mathematical concepts and tools needed to understand AI and machine learning into one or two semesters, and organized lectures for undergraduate and graduate students of various majors who wish to explore careers in artificial intelligence. In this paper, we share our experience of conducting this class; the full contents are available at http://matrix.skku.ac.kr/math4ai/.

Real-time Implementation of the AMR Speech Coder Using OakDSPCore® (OakDSPCore®를 이용한 적응형 다중 비트 (AMR) 음성 부호화기의 실시간 구현)

  • 이남일;손창용;이동원;강상원
    • The Journal of the Acoustical Society of Korea / v.20 no.6 / pp.34-39 / 2001
  • The adaptive multi-rate (AMR) speech coder was adopted as a W-CDMA standard by 3GPP and ETSI. The AMR coder is based on the CELP algorithm operating at rates ranging from 12.2 kbps down to 4.75 kbps, and it is a source-controlled codec that adapts to channel error conditions and traffic loading. In this paper, we implement the DSP software of the AMR coder on the OakDSPCore. The implementation is based on the CSD17C00A chip developed by C&S Technology, and it is verified for bit exactness using the AMR speech codec test vectors provided by ETSI. The DSP software requires 20.6 MIPS for the encoder and 2.7 MIPS for the decoder. The AMR coder requires 21.97 kwords for code, 6.64 kwords for data sections, and 15.1 kwords for data ROM. An actual sound input/output test using a microphone and speaker demonstrates proper real-time operation without distortion or delay.

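Bit-exact verification of the kind described above amounts to comparing the coder's output word-for-word against the reference vectors. A minimal sketch in Python, assuming plain binary files of 16-bit words; the actual ETSI vector format and file layout differ:

```python
import sys
from pathlib import Path

# Sketch of a bit-exactness check against reference test vectors.
# The 16-bit word layout is an assumption, not the ETSI format.

def bit_exact(out_path: str, ref_path: str, word_bytes: int = 2) -> bool:
    out = Path(out_path).read_bytes()
    ref = Path(ref_path).read_bytes()
    if len(out) != len(ref):
        print(f"length mismatch: {len(out)} vs {len(ref)} bytes")
        return False
    # Report the first few mismatching words to localize the error.
    mismatches = [i // word_bytes
                  for i in range(0, len(out), word_bytes)
                  if out[i:i + word_bytes] != ref[i:i + word_bytes]]
    for w in mismatches[:5]:
        print(f"word {w} differs")
    return not mismatches

if __name__ == "__main__":
    ok = bit_exact(sys.argv[1], sys.argv[2])
    print("bit exact" if ok else "NOT bit exact")
```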

Development of Hardware for the Architecture of A Remote Vital Sign Monitor (무선 체온 모니터기 아키텍처 하드웨어 개발)

  • Jang, Dong-Wook;Jang, Sung-Whan;Jeong, Byoung-Jo;Cho, Hyun-Seob
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.7 / pp.2549-2558 / 2010
  • The Remote Vital Sign Monitor is an in-home healthcare system designed to wirelessly monitor core-body temperature. It provides accuracy and features comparable to hospital equipment while minimizing cost and remaining easy to use. It has two parts, a bandage and a monitor. Both use the Chipcon CC2430, which contains an integrated 2.4 GHz direct-sequence spread spectrum radio; the CC2430 allows the Remote Vital Sign Monitor to operate over an indoor radius of more than 100 feet. A simple user interface lets the user set upper and lower temperature bounds against which the core-body temperature is monitored. If the core-body temperature crosses either of the two defined bounds, an alarm sounds; the alarm is driven by a low-voltage audio amplifier circuit connected to a speaker. To calculate the core-body temperature accurately, the monitor must use an accurate temperature sensing device, and the thermistor selected from GE Sensing satisfies the need for a sensitive and accurate reading. The LCD screen measures 64.5 mm long by 16.4 mm wide and is backlit, which should allow the user to clearly read the monitor from at least 3 feet away in both light and dark conditions.
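
The monitor's core loop, converting a thermistor reading to temperature and checking it against the user-set bounds, can be illustrated with a short sketch. The divider topology, ADC width, and beta-model constants below are assumptions for illustration, not the GE Sensing part's datasheet values.

```python
import math

# Sketch of the temperature-and-alarm logic. Divider topology, ADC
# width, and thermistor constants are illustrative assumptions.
R_FIXED = 10_000.0          # series resistor in the divider [ohm] (assumed)
R0, T0 = 10_000.0, 298.15   # thermistor reference: 10 kohm at 25 C
BETA = 3950.0               # beta constant [K]                    (assumed)
ADC_MAX = 4095              # 12-bit ADC                           (assumed)

def adc_to_celsius(adc: int) -> float:
    """Thermistor on the low side of a divider, beta-equation model."""
    ratio = adc / ADC_MAX                     # V_out / V_ref
    r_therm = R_FIXED * ratio / (1.0 - ratio)
    inv_t = 1.0 / T0 + math.log(r_therm / R0) / BETA
    return 1.0 / inv_t - 273.15

def check_alarm(temp_c: float, lower: float, upper: float) -> bool:
    """True if the core-body temperature crosses either bound."""
    return temp_c < lower or temp_c > upper

reading = 2048                                # pretend ADC sample
t = adc_to_celsius(reading)
print(f"{t:.1f} C, alarm: {check_alarm(t, 35.0, 38.5)}")
```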

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records fall into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors; this goes beyond simple word-based similarity calculation and explores a new way to detect misinterpretation errors. The detection model was trained and developed on real user utterance records by applying patterns of misinterpretation error causes. The results show that initial-consonant extraction produced the most significant results for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are hard to detect because they are not flagged as recognition failures, the study proposed diverse text separation methods and found one that improved performance remarkably. Second, when this approach is applied to conversational agents or voice recognition services that require neologism detection, the patterns of errors arising at the voice recognition stage can be specified. The study proposed and verified that, even for interactions not categorized as errors, services can be provided according to the results users actually want.
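
The initial-consonant (choseong) extraction that performed best in the paper relies on the arithmetic layout of precomposed Hangul syllables in Unicode. A minimal sketch in Python: the similarity measure here (bigram Jaccard over choseong strings) is an illustrative stand-in for the paper's embedding-based similarity, and the utterance pair is invented.

```python
# Choseong extraction via Unicode syllable arithmetic, plus a simple
# bigram Jaccard similarity as a stand-in for embedding similarity.
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")

def extract_choseong(text: str) -> str:
    """Map each precomposed Hangul syllable to its initial consonant."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:          # Hangul syllable block
            out.append(CHOSEONG[(code - 0xAC00) // 588])
        elif not ch.isspace():
            out.append(ch)                    # keep non-Hangul as-is
    return "".join(out)

def bigram_jaccard(a: str, b: str) -> float:
    """Jaccard similarity over character bigrams."""
    A = {a[i:i + 2] for i in range(len(a) - 1)}
    B = {b[i:i + 2] for i in range(len(b) - 1)}
    return len(A & B) / len(A | B) if A | B else 1.0

# Consecutive utterances: a re-asked request with matching initial
# consonants may signal a misinterpreted neologism. (Invented pair.)
u1, u2 = "사딸라 틀어줘", "사딸라 노래 틀어줘"
s = bigram_jaccard(extract_choseong(u1), extract_choseong(u2))
print(extract_choseong(u1), extract_choseong(u2), f"{s:.2f}")
```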

An Influence of Artificial Intelligence Attributes on the Adoption Level of Artificial Intelligence-Enabled Products (인공지능 기반 제품 수용 정도에 인공지능 속성이 미치는 영향 연구)

  • Kwonsang Sohn;Kun Woo Yoo;Ohbyung Kwon
    • Information Systems Review / v.21 no.3 / pp.111-129 / 2019
  • Recently, AI-enabled products and services such as smartphones, smart speakers, and chatbots have been released thanks to advances in AI technology, and researchers have tried to explain consumers' intention to adopt AI-enabled products. Yet little is known about such adoption, because most studies have not considered the utility consumers perceive in each attribute when the attributes are classified according to the characteristics of AI-enabled products. The purpose of this study is therefore to investigate the differences in importance among the attributes that affect the intention to adopt AI-enabled products. To this end, we first identified and classified the attributes of AI-enabled products based on DeLone and McLean's IS Success Model. Second, we measured the utility of each attribute for adoption through conjoint analysis, and employed construal level theory to see whether the relative importance of the attributes differs with temporal distance. Third, we segmented the market by clustering respondents on their utility values and examined the characteristics and needs of consumers in each segment. We expect to provide theoretical implications for conceptually structuring the attributes and factors of AI-enabled products, and practical implications for how development efforts should be directed to meet consumer needs in each segment.
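
The conjoint-analysis step can be illustrated with a toy example: estimate part-worth utilities from one respondent's ratings of dummy-coded product profiles. The attributes, levels, and ratings below are invented for illustration; the paper derives its attributes from DeLone and McLean's IS Success Model and uses actual respondent data.

```python
import numpy as np

# Toy conjoint analysis: part-worth utilities via least squares over
# dummy-coded full-factorial profiles. All inputs are invented.
profiles = [(q, p, c) for q in (0, 1) for p in (0, 1) for c in (0, 1)]
X = np.array([[1, q, p, c] for q, p, c in profiles], dtype=float)

# Hypothetical ratings (1-10) of the 8 profiles by one respondent.
y = np.array([6, 4, 7, 6, 8, 6, 9, 8], dtype=float)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
names = ["intercept", "quality:high", "privacy:strong", "price:high"]
for n, b in zip(names, beta):
    print(f"{n:15s} {b:+.2f}")

# Relative attribute importance: each attribute's utility range as a
# share of the total range, a standard conjoint summary statistic.
ranges = np.abs(beta[1:])
for n, share in zip(names[1:], ranges / ranges.sum()):
    print(f"importance of {n:15s} {share:.0%}")
```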

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research is actively conducted in fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest in the technology and work on various algorithms the field has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence; it aims to let AI agents make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been combined with statistical artificial intelligence such as machine learning. A further purpose of modern knowledge bases is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases support intelligent processing in many areas of AI, for example the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. Much recent research in knowledge-based AI uses DBpedia, one of the largest knowledge bases, which extracts structured content from Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects. This knowledge is created through mapping rules between infobox structures and the DBpedia ontology schema, defined in the DBpedia Extraction Framework. Because it generates knowledge from semi-structured infobox data created by users, DBpedia can expect high accuracy. However, since only about 50% of the pages in Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate its appropriateness, we describe a knowledge extraction model that follows the DBpedia ontology schema and is trained on Wikipedia infoboxes. Our model consists of three steps: classifying documents into ontology classes, selecting sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infobox structures are defined by infobox templates, which provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mappings, we classify the input document into an infobox category, which corresponds to an ontology class. We then classify sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and roughly 2,500 relations. We also ran comparative experiments with CRF and Bi-LSTM-CRF models for the extraction step. The proposed process makes it possible to use structured knowledge extracted from text documents according to the ontology schema, and it can significantly reduce the expert effort needed to construct instances that conform to the schema.
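
The final step described above, turning a BIO-tagged sentence into an RDF-style triple, can be sketched compactly. The tag set, the example sentence, and the dbo:birthPlace relation below are illustrative stand-ins; the paper trains CRF and Bi-LSTM-CRF taggers over roughly 2,500 relations derived from Wikipedia infoboxes.

```python
# Sketch: collect entity spans from BIO tags, then assemble a triple.
# Tags, sentence, and relation are invented for illustration.

def spans_from_bio(tokens, tags):
    """Collect (label, text) spans from BIO-tagged tokens."""
    spans, cur_label, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)
        else:
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = None, []
    if cur_label:
        spans.append((cur_label, " ".join(cur_toks)))
    return spans

# Hypothetical tagger output for the DBpedia-style relation
# dbo:birthPlace on one extracted sentence.
tokens = ["Yi", "Sun-sin", "was", "born", "in", "Hanseong", "."]
tags = ["B-SUBJ", "I-SUBJ", "O", "O", "O", "B-OBJ", "O"]

spans = dict(spans_from_bio(tokens, tags))
triple = (spans["SUBJ"], "dbo:birthPlace", spans["OBJ"])
print(triple)   # ('Yi Sun-sin', 'dbo:birthPlace', 'Hanseong')
```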