• Title/Summary/Keyword: Technology System


The Evolution of Cyber Singer Viewed from the Coevolution of Man and Machine (인간과 기계의 공진화적 관점에서 바라본 사이버가수의 진화과정)

  • Kim, Dae-Woo
    • Cartoon and Animation Studies
    • /
    • s.39
    • /
    • pp.261-295
    • /
    • 2015
  • Cyber singers appeared in the late 1990s and disappeared shortly after; despite a few attempts in the 2000s, none achieved significant success. The cyber singer was born from the technical development of the IT industry and the emergence of the idol-training system in the music industry, and developed from 'Adam' to the Vocaloid 'Seeyou'. Unlike typical digital characters in cartoons or games, cyber singers can be idolized through music as a medium, and they can form multiple fandoms. These repeated attempts and failures could be dismissed as a passing fashion, yet the ongoing attempts at content creation that exploit new media such as Vocaloid show a continuing expectation for a true cyber-born singer. Early cyber singers were made only to resemble human appearance, but 'Sciart' and 'Seeyou' have been evolving toward human-like capabilities. This paper argues that the stylized cyber singers that disappeared in the past do not end as cases of failure: in the process of technological development toward autonomous artificial life, they were attempts that gradually changed the public's perception of the human-looking machine. The direction of this evolution is for the machine to acquire human functions, to exchange fun and mutual feelings with humans, and thereby to become an artificial life form that evolves in function as well as in appearance. To support this argument, I draw on Bruce Mazlish's study of the coevolution of man and machine, and from that perspective I analyze the development of the eight cyber singers that have appeared since the late 1990s, examining their planning, the design of their cyber characters, and their voices (vocals), the element by which they are judged as singers. The machine has been coevolving with humans.
Cyber singers are recognized as ambivalent objects of development, but the effort to create new artificial creatures continues, driven by the human desire for, and horror of, the idea of death. A new cyber organism is therefore likely to take the same style as 'Seeyou': a cartoon form and a synthesized voice may not in themselves be the signifier of real human desire, but such a form can be created precisely at the point where the contemporary public's desire and technical development intersect.

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a lot of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, often written in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, resulting in incorrect search results that are far from users' intentions. Although much progress has been made over recent years in enhancing the performance of search engines, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of examples in existing dictionaries, avoiding expensive sense-tagging processes. It tests the effectiveness of the method with a Naïve Bayes model, one of the supervised learning algorithms, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the two resources were evaluated both combined and as separate entities, using cross validation. Only nouns, the target subjects of word sense disambiguation, were selected.
93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it was tagged with the sense indices defined by the dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named-entity dictionary of the Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense-vector model built in the pre-processing stage, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. The experiments show that better precision and recall are obtained with the merged corpus, confirming the effectiveness of merging examples from the Korean standard unabridged dictionary with the Sejong Corpus. The study suggests that the method can practically enhance the performance of Internet search engines and help extract the more accurate meaning of a sentence in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, with the assumption that all senses are independent. Although this assumption is unrealistic and ignores the correlation between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of all senses in a sentence.
Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
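The vector-space sense-tagging step described in this abstract can be illustrated with a small sketch. This is not the authors' implementation: the toy senses, example sentences, and the raw term-frequency/cosine-similarity scoring are all assumptions for demonstration.

```python
from collections import Counter
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def build_sense_vectors(tagged_examples):
    # tagged_examples: {sense_id: [tokenized example sentences]}.
    # Each sense vector is the term-frequency profile of its examples,
    # analogous to building sense vectors from dictionary examples.
    return {sense: Counter(t for sent in sents for t in sent)
            for sense, sents in tagged_examples.items()}

def disambiguate(context_tokens, sense_vectors):
    # Pick the sense whose vector is most similar to the input context.
    ctx = Counter(context_tokens)
    return max(sense_vectors, key=lambda s: cosine(ctx, sense_vectors[s]))

# Toy example: two senses of the noun "bank".
examples = {
    "bank_finance": [["deposit", "money", "account", "loan"],
                     ["interest", "loan", "money"]],
    "bank_river":   [["river", "water", "shore"],
                     ["fishing", "river", "mud"]],
}
vectors = build_sense_vectors(examples)
print(disambiguate(["borrow", "money", "loan"], vectors))  # bank_finance
```

In the paper's setting the example sentences would come from the merged dictionary/corpus resource rather than being hand-written as here.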

Attention to the Internet: The Impact of Active Information Search on Investment Decisions (인터넷 주의효과: 능동적 정보 검색이 투자 결정에 미치는 영향에 관한 연구)

  • Chang, Young Bong;Kwon, YoungOk;Cho, Wooje
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.117-129
    • /
    • 2015
  • As the Internet becomes ubiquitous, a large volume of information is posted on it, growing exponentially every day. Accordingly, it is not unusual for investors in stock markets to gather and compile firm-specific or market-wide information through online searches. Importantly, powerful search tools on the Internet make it easier for investors to acquire value-relevant information for their investment decisions. Our study examines whether the Internet helps investors assess a firm's value better, using firm-level data over the long period from January 2004 to December 2013. To this end, we construct weekly search volumes for information technology (IT) services firms on the Internet. We limit our focus to IT firms since they often hold intangible assets and are relatively less recognized by the public, which makes them hard to measure. To obtain information on such firms, investors are more likely to consult the Internet and use the information to appraise the firms more accurately, eventually improving their investment decisions. Prior studies have shown that changes in search volumes can reflect various aspects of complex human behavior and forecast near-term values of economic indicators, including automobile sales and unemployment claims. Moreover, the search volume of firm names or stock ticker symbols has been used as a direct proxy for individual investors' attention in financial markets since, unlike indirect measures such as turnover and extreme returns, it can reveal and quantify the interest of investors in an objective way. Following this line of research, this study gauges whether the information retrieved from the Internet is value-relevant in assessing a firm. We also use search volume for our analysis but, in contrast to prior studies, explore its impact on return comovements with market returns.
Given that a firm's returns tend to comove excessively with market returns when investors are less informed about the firm, we empirically test the value of information by examining the association between Internet searches and the extent to which a firm's returns comove. Our results show that Internet searches are negatively associated with return comovements, as expected. When the sample is split by firm size, the impact of Internet searches on return comovements is greater for large firms than for small ones. Interestingly, we find a greater impact of Internet searches on return comovements for the years 2009 to 2013 than for earlier years, possibly due to more aggressive and better-informed use of Internet searches in obtaining financial information. We complement our analyses by examining the association between return volatility and Internet search volumes. If Internet searches capture investors' attention associated with a change in firm-specific fundamentals such as new product releases or stock splits, a firm's return volatility is likely to increase while search results provide value-relevant information to investors. Our results suggest that, in general, an increase in the volume of Internet searches is not positively associated with return volatility. However, we find a positive association between Internet searches and return volatility when the sample is limited to larger firms. The stronger result for larger firms implies that investors still pay less attention to the information obtained from Internet searches for small firms, although the information is value-relevant in assessing stock values. We do not, however, find any systematic differences by time period in the magnitude of the impact of Internet searches on return volatility. Taken together, our results shed new light on the value of information searched from the Internet in assessing stock values.
Given the informational role of the Internet in stock markets, we believe the results will guide investors to exploit Internet search tools to become better informed, thereby improving their investment decisions.
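Return comovement in this literature is commonly measured as the R² from a regression of firm returns on market returns; the sketch below illustrates that idea on simulated data (the regression form and the simulated series are assumptions, not taken from the paper).

```python
import random

def r_squared(firm, market):
    # R^2 of the OLS regression firm_t = a + b * market_t + e_t.
    # For a single regressor this equals the squared correlation.
    n = len(firm)
    mf, mm = sum(firm) / n, sum(market) / n
    cov = sum((f - mf) * (m - mm) for f, m in zip(firm, market))
    var_m = sum((m - mm) ** 2 for m in market)
    var_f = sum((f - mf) ** 2 for f in firm)
    return (cov * cov) / (var_m * var_f) if var_m and var_f else 0.0

random.seed(0)
market = [random.gauss(0, 0.01) for _ in range(250)]
# A "less informed" firm: returns driven mostly by the market.
herd = [0.9 * m + random.gauss(0, 0.002) for m in market]
# A firm with rich firm-specific information flow.
info = [0.3 * m + random.gauss(0, 0.01) for m in market]
print(r_squared(herd, market) > r_squared(info, market))  # True
```

The paper's finding corresponds to the `info`-like case: more firm-specific information (here proxied by higher idiosyncratic variance) lowers the comovement R².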

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.55-75
    • /
    • 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of the data elements in a sequence is considered, so simple sequential patterns are found easily, but more interesting sequential patterns of the kind widely used in real-world applications remain out of reach. One essential research topic for overcoming this limit is weighted sequential pattern mining, in which not only the generation order of data elements but also their weights are considered in order to obtain more interesting sequential patterns. Recently, data in many application fields has increasingly taken the form of continuous data streams rather than finite stored data sets, and the database research community has therefore focused its attention on processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, and the memory used for analysis should remain finitely bounded even though new data elements are generated continuously. Moreover, newly generated data elements should be processed as fast as possible, so that an up-to-date analysis result of the stream can be produced and instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering these changes in the form of data generated in real-world application fields, much research has been actively performed to find various kinds of knowledge embedded in data streams.
Such research mainly focuses on efficient mining of frequent itemsets and sequential patterns over data streams, which have proven useful in conventional data mining on finite data sets. In addition, mining algorithms have been proposed to efficiently reflect the changes of data streams over time in their mining results. However, these works target naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, and take no interest in mining novel patterns that better express the characteristics of the target data streams. Therefore, defining novel interesting patterns and developing mining methods to find them is a valuable research topic in the field of data stream mining, and such methods can be used effectively to analyze recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a mining method for weighted sequential patterns over sequence data streams that uses this weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps between the data elements in the pattern, without any predefined weight information. That is, in this approach, the gaps between the data elements of each sequential pattern, as well as their generation orders, are used to obtain the weight of the pattern, which helps to find more interesting and useful sequential patterns. Since most computer application fields now generate data as data streams rather than as finite data sets, the proposed method focuses mainly on sequence data streams.
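The abstract does not state the exact gap-based weight formula, so the sketch below assumes a simple inverse-gap weight, the mean of 1/(gap+1) over adjacent matched elements, purely to illustrate the idea that occurrences with smaller gaps yield heavier patterns.

```python
def occurrence_positions(sequence, pattern):
    # Greedy leftmost match: positions of the pattern's elements in the
    # sequence, or None if the pattern is not a subsequence of it.
    pos, start = [], 0
    for item in pattern:
        try:
            i = sequence.index(item, start)
        except ValueError:
            return None
        pos.append(i)
        start = i + 1
    return pos

def gap_weight(sequence, pattern):
    # Assumed weight: mean of 1/(gap+1) over adjacent matched elements,
    # so patterns whose elements occur close together weigh more.
    pos = occurrence_positions(sequence, pattern)
    if pos is None or len(pos) < 2:
        return 0.0
    gaps = [b - a - 1 for a, b in zip(pos, pos[1:])]
    return sum(1.0 / (g + 1) for g in gaps) / len(gaps)

seq = ["a", "x", "b", "y", "y", "c"]
print(gap_weight(seq, ["a", "b"]))  # one intervening element -> 0.5
print(gap_weight(seq, ["b", "c"]))  # two intervening elements -> ~0.333
```

In the paper's streaming setting this weight would be maintained incrementally per pattern rather than recomputed over stored sequences as here.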

Design and Implementation of Web Based Instruction Based on Constructivism for Self-Directed Learning Ability (구성주의 이론에 기반한 자기주도적 웹 기반 교육의 설계와 구현)

  • Kim Gi-Nam;Kim Eui-Jeong;Kim Chang-Suk
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2006.05a
    • /
    • pp.855-858
    • /
    • 2006
  • Developing information technology makes it possible to change the paradigm of all kinds of areas, including education. Students can choose learning goals and objectives themselves and acquire not an accumulation of knowledge but a method of learning. Teachers become advisers, and students play the key role in learning; that is, the subject of learning is the student. Constructivism emphasizes a student-oriented educational environment, which corresponds to the characteristics of hypermedia, and the Internet allows us to make a practical plan for constructivism. Web-based instruction provides a proper environment for putting constructivism into practice and causes the education system to change. Web-based instruction motivates students to learn more, and they can gain plenty of information regardless of place or time. Besides, they are able to consult up-to-date information on their learning using hypermedia such as images, audio, video, and text, and communicate effectively with their instructor through a board, e-mail, chatting, and so on. Schools and instructors have been making efforts to develop new models of teaching to cope with this new environment. In this thesis, 'Design and Implementation of Web Based Instruction Based on Constructivism' provides online, learner-oriented, indexed video lessons so that learners get the chance of self-directed learning. In addition, learners do not have to cover all the contents of a lesson: they can choose the contents they want from an indexed list of the lesson, and they can search for the contents they want with a 'Keyword Search' on the main page, which can improve learners' achievement.


MARGINAL MICROLEAKAGE AND SHEAR BOND STRENGTH OF COMPOSITE RESIN ACCORDING TO TREATMENT METHODS OF ARTIFICIAL SALIVA-CONTAMINATED SURFACE AFTER PRIMING (접착강화제 도포후 인공타액에 오염된 표면의 처리방법에 따른 복합레진의 번연누출과 전단결합강도)

  • Cho, Young-Gon;Ko, Kee-Jong;Lee, Suk-Jong
    • Restorative Dentistry and Endodontics
    • /
    • v.25 no.1
    • /
    • pp.46-55
    • /
    • 2000
  • During the bonding procedure of composite resin, the prepared cavity can be contaminated by saliva. In this study, the marginal microleakage and shear bond strength of a composite resin to primed enamel and dentin treated with artificial saliva (Taliva®) were evaluated. For the marginal microleakage test, Class V cavities were prepared in the buccal surfaces of fifty molars. The samples were randomly assigned into 5 groups with 10 samples each. The control group was treated with a bonding system (Scotchbond™ Multi-Purpose Plus) according to the manufacturer's directions, without saliva contamination. The experimental samples were divided into 4 groups and contaminated with artificial saliva for 30 seconds after priming: Experimental group 1, the artificial saliva was dried with compressed air only; Experimental group 2, the artificial saliva was rinsed and dried; Experimental group 3, the cavities were etched with 35% phosphoric acid for 15 seconds after rinsing and drying the artificial saliva; Experimental group 4, the cavities were etched with 35% phosphoric acid for 15 seconds and the primer was reapplied after rinsing and drying the artificial saliva. All cavities were treated with a bonding agent and filled with a composite resin (Z-100™). Specimens were immersed in 0.5% basic fuchsin dye for 24 hours, embedded in transparent acrylic resin, and sectioned buccolingually with a diamond wheel saw. Four sections were obtained from each specimen. The degree of marginal leakage was scored under a stereomicroscope, and the scores from the four sections were averaged. The data were analyzed by the Kruskal-Wallis test and Fisher's LSD. For the shear bond strength test, the buccal or occlusal surfaces of one hundred molar teeth were ground to expose enamel (n=50) or dentin (n=50) using a diamond wheel saw, and the surfaces were smoothed with a lapping and polishing machine (South Bay Technology Co., U.S.A.). Samples were divided into 5 groups.
Treatment of the saliva-contaminated enamel and dentin surfaces was the same as in the marginal microleakage test, and composite resin was bonded via a gelatin capsule. All specimens were stored in distilled water for 48 hours. Shear bond strengths were measured with a universal testing machine (AGS-1000 4D, Shimadzu Co., Japan) at a crosshead speed of 5 mm/minute. The failure mode at the fracture sites was examined under a stereomicroscope. The data were analyzed by ANOVA and Tukey's studentized range test. The results of this study were as follows: 1. Enamel marginal microleakage showed no significant difference among groups. 2. Dentinal marginal microleakage of the control group and experimental groups 2 and 4 was lower than that of experimental groups 1 and 3 (p<0.05). 3. The shear bond strength to enamel was highest in the control group (20.03±4.47 MPa) and lowest in experimental group 1 (13.28±6.52 MPa); there were significant differences between experimental group 1 and the other groups (p<0.05). 4. The shear bond strength to dentin was higher in the control group (17.87±4.02 MPa) and experimental group 4 (16.38±3.23 MPa) than in the other groups, and low in experimental group 1 (3.95±2.51 MPa) and experimental group 2 (6.72±2.26 MPa) (p<0.05). 5. The failure mode at fractured sites on enamel was mostly adhesive failure in experimental groups 1 and 3. 6. The failure mode at fractured sites on dentin showed no adhesive failures in the control group, but mostly adhesive failures in the experimental groups. In summary, if the primed tooth surface is contaminated with artificial saliva, the primer should be reapplied after re-etching the surface.


Measurement of shoulder motion fraction and motion ratio (견관절 운동 분율의 측정)

  • Kang, Yeong-Han
    • Journal of radiological science and technology
    • /
    • v.29 no.2
    • /
    • pp.57-62
    • /
    • 2006
  • Purpose: This study examined the measurement of shoulder motion fraction and motion ratio, and proposed a radiological criterion for the glenohumeral-to-scapulothoracic movement ratio. Materials and Methods: We measured the motion fraction of glenohumeral and scapulothoracic movement using a CR (computed radiography) system during arm elevation at neutral, 90°, and full elevation. The central ray was angled 15°, 19°, and 22° cephalad to parallel the scapular spine, and the torso was rotated into 40°, 36°, and 22° of external oblique so that the beam was perpendicular to the glenohumeral surface. One hundred healthy donors were divided into 5 groups by age (20s, 30s, 40s, 50s, 60s). The angles of glenohumeral and scapulothoracic motion were derived from the gross arm angle and the radiological arm angle. We acquired 3 images at the neutral, 90°, and full-elevation positions and measured the radiographic angles of glenohumeral and scapulothoracic movement, respectively. Results: At 90° of arm elevation, the shoulder motion fraction was 1.22 (men) and 1.70 (women) for the right arm, and 1.31 and 1.54 for the left. At full elevation, the right-arm fraction was 1.63 and 1.84, and the left 1.57 and 1.32. For the right dominant arm (78%), the 90° and full-elevation motion fractions were 1.58 and 1.43; for the left (22%), 1.82 and 1.94. In the 20s age group, the 90° and full-elevation motion fractions were 1.56 and 1.52; in the 30s, 1.82 and 1.43; in the 40s, 1.23 and 1.16; in the 50s, 1.80 and 1.28; in the 60s, 1.24 and 1.75. There were no significant differences by gender, dominant arm, or age. Conclusion: The motion fraction criterion is a useful reference for the clinical diagnosis of shoulder instability.
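The motion fraction above can be computed from the gross arm angle and the radiographically measured glenohumeral angle, using the standard decomposition gross = glenohumeral + scapulothoracic; a minimal sketch (the example angles are illustrative, not from the study):

```python
def motion_fraction(gross_angle, glenohumeral_angle):
    # Gross arm elevation is shared between glenohumeral (GH) and
    # scapulothoracic (ST) motion: gross = GH + ST.
    # The fraction reported above is GH / ST.
    scapulothoracic = gross_angle - glenohumeral_angle
    if scapulothoracic <= 0:
        raise ValueError("scapulothoracic motion must be positive")
    return glenohumeral_angle / scapulothoracic

# Example: 90 degrees of elevation, 50 degrees measured at the GH joint.
print(round(motion_fraction(90, 50), 2))  # 1.25
```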


Usefulness of Acoustic Noise Reduction in Brain MRI Using Quiet-T2 (뇌 자기공명영상에서 Quiet-T2 기법을 이용한 소음감소의 유용성)

  • Lee, SeJy;Kim, Young-Keun
    • Journal of radiological science and technology
    • /
    • v.39 no.1
    • /
    • pp.51-57
    • /
    • 2016
  • Acoustic noise during magnetic resonance imaging (MRI) is the main source of patient discomfort. We report our preliminary experience with the Quiet-T2 (Q-T2) technique in neuroimaging with regard to subjective and objective noise levels and image quality. 60 patients (29 males, 31 females, average age 60.1) underwent routine brain MRI on a 3.0 Tesla system (MAGNETOM Tim Trio; Siemens, Germany) with a 12-channel head coil. Q-T2 and T2 sequences were performed, and the sound pressure level (SPL) and heart rate were measured for each. Quantitative analysis was carried out by measuring the SNR, CNR, and SIR values of Q-T2 and T2, with statistical analysis by independent-sample t-test. Qualitative analysis was a visual evaluation of the overall image quality of Q-T2 and T2 on a 5-point scale: excellent (5), good (4), fair (3), poor (2), and unacceptable (1). The average and peak noise decreased by 15 dB(A) and 10 dB(A), respectively, from T2 to Q-T2. The average heart rate was also lower during the 120 seconds of each Q-T2 scan, but without statistical significance. The quantitative analysis showed no significant difference in CNR and SIR, while SNR was significantly lower on Q-T2 (p<0.05). In the qualitative analysis, the overall image quality of T2 and Q-T2 was evaluated as excellent (5 points) in 59 cases and as good (4 points) in 1 case, due to a motion artifact. Q-T2 is a promising technique for acoustic noise reduction and improved patient comfort.
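The SNR and CNR figures above are conventionally computed from region-of-interest (ROI) statistics; a minimal sketch, where the toy pixel values and the use of the background standard deviation as the noise estimate are assumptions:

```python
from statistics import mean, stdev

def snr(signal_roi, noise_roi):
    # Signal-to-noise ratio: mean signal over noise standard deviation.
    return mean(signal_roi) / stdev(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    # Contrast-to-noise ratio between two tissue ROIs.
    return abs(mean(roi_a) - mean(roi_b)) / stdev(noise_roi)

# Toy pixel values for three ROIs.
tissue = [100, 102, 98, 101, 99]
lesion = [140, 138, 142, 139, 141]
background = [2, 4, 3, 5, 1]
print(round(snr(tissue, background), 1))            # 63.2
print(round(cnr(lesion, tissue, background), 1))    # 25.3
```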

Evaluation on Organ Dose and Image Quality of Lumbar Spine Radiography Using Glass Dosimeter (유리선량계를 이용한 요추검사의 장기선량 및 영상의 평가)

  • Kim, Jae-Kyeom;Kim, Jeong-Koo
    • Journal of radiological science and technology
    • /
    • v.39 no.1
    • /
    • pp.1-11
    • /
    • 2016
  • The purpose of this study was to provide resources for reducing medical exposure by evaluating organ dose and image quality around the lumbar spine according to collimator size in a DR system. The collimator size was varied from 8"×17" to 14"×17" in 1" steps for AP and lateral projections of lumbar spine radiography with a RANDO phantom. The organ doses for the liver, stomach, pancreas, kidney, and gonads were measured with glass dosimeters, and image quality was analyzed with the ImageJ program. In the AP projection, organ doses around the lumbar spine decreased as the collimator size decreased; in the gonads, the rate of decrease did not change significantly with each 1" reduction of collimator size. In the lateral projection, organ doses were higher for the liver and kidney, which lie near the surface; the rate of decrease per 1" reduction was less than 5% for the liver and kidney but 24.34% for the gonads. The organ dose difference between the inside and outside of the collimated field was 549.8 μGy for the liver and 264.6 μGy for the stomach; for the gonads, the difference of 1,135.1 μGy showed no significant change. Image quality showed no difference, since SNR and PSNR remained above 30 dB when the collimator size was smaller than 9"×17" for the AP projection and 10"×17" for the lateral projection. Therefore, we consider that recommended criteria for collimator control should be established in order to reduce unnecessary X-ray exposure while maintaining good image quality, because lumbar spine radiography includes more peripheral organs than radiography of other regions.
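The 30 dB threshold above refers to the usual PSNR definition, 10·log10(MAX²/MSE); a minimal sketch assuming 8-bit pixel values (MAX = 255) and toy 1-D "images":

```python
from math import log10

def psnr(original, processed, max_value=255):
    # Peak signal-to-noise ratio in dB between two equal-size images,
    # here flattened to 1-D lists of pixel values.
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / len(original)
    return float("inf") if mse == 0 else 10 * log10(max_value ** 2 / mse)

ref = [50, 80, 120, 200]
test = [52, 78, 121, 199]
print(round(psnr(ref, test), 1))  # 44.2
```

Values above roughly 30 dB, as in the study, are conventionally taken to indicate negligible visible degradation.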

Can We Hear the Shape of a Noise Source? (소음원의 모양을 들어서 상상할 수 있을까?)

  • Kim, Yang-Hann
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.14 no.7
    • /
    • pp.586-603
    • /
    • 2004
  • One of the subtle problems that make noise control difficult for engineers is “the invisibility of noise or sound.” The visual image of noise often helps to determine an appropriate means of noise control, and there have been many attempts to fulfill this rather challenging objective. Theoretical and numerical means of visualizing the sound field have been attempted, and as a result a great deal of progress has been accomplished, for example in the visualization of turbulent noise. However, most of the numerical methods are not quite ready to be applied practically to noise control issues. In the meantime, rapid progress in multiple-microphone arrays and fast signal processing systems has made visualization possible instrumentally; these systems are not perfect, but they are useful. State-of-the-art systems have recently become available but still pose many problematic issues: for example, how to interpret the visualized noise field. The constructed noise or sound picture always contains bias and random errors, and consequently it is often difficult to determine the origin of the noise and the spatial shape of the noise, as highlighted in the title. The first part of this paper introduces a brief history of “sound visualization,” from Leonardo da Vinci's famous drawing of a vortex street (Fig. 1) to modern acoustic holography and what has been accomplished with line and surface arrays. The second part introduces the difficulties and recent studies, including de-Dopplerization and de-reverberation methods. The former is essential for visualizing a moving noise source, such as a car or a train; the latter relates to what produces noise in a room or closed space. Another major issue associated with sound/noise visualization is whether or not we can distinguish the mutual dependence of noise in space: for example, we are asked to answer the question, “Can we see two birds singing, or one bird with two beaks?”