• Title/Summary/Keyword: unit speed


The Mirror-based real-time dynamic projection mapping design and dynamic object detection system research (미러 방식의 실시간 동적 프로젝션 매핑 설계 및 동적 사물 검출 시스템 연구)

  • Soe-Young Ahn;Bum-Suk Seo;Sung Dae Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.2
    • /
    • pp.85-91
    • /
    • 2024
  • In this paper, we study projection mapping, which is used as a digital canvas transcending space and time in theme parks, mega events, and exhibition performances. Existing projection technology targets fixed objects and is difficult to apply to moving ones, so there is an urgent need for technology that can track and map moving objects, and for a real-time dynamic projection mapping system built around dynamically moving objects, in order to serve markets such as performances, exhibitions, and theme parks. We propose a system that tracks moving objects in real time and eliminates latency through dedicated hardware and high-speed image processing. Specifically, we develop a real-time object image analysis and projection focusing control unit, an integrated operating system for the real-time object tracking system, and an image processing library for projection mapping. This research is expected to find wide application in technology-intensive industries that rely on real-time machine-vision detection, as well as in industries where cutting-edge science and technology converge. (A minimal sketch of the object detection step follows.)
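A minimal sketch of the dynamic object detection step only, assuming an OpenCV background-subtraction pipeline; the paper's mirror-based hardware, focusing control unit, and projector mapping are out of scope here, and the camera index and thresholds below are illustrative assumptions, not values from the paper.

```python
import cv2

# Minimal sketch: locate moving objects per frame via background
# subtraction. The bounding boxes would feed the projection-mapping
# transform in a full system.
cap = cv2.VideoCapture(0)                      # assumed camera index
subtractor = cv2.createBackgroundSubtractorMOG2(history=300,
                                                varThreshold=25,
                                                detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # foreground = moving pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:           # assumed noise threshold
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("dynamic object detection", frame)
    if cv2.waitKey(1) == 27:                   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```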

Simulation analysis and evaluation of decontamination effect of different abrasive jet process parameters on radioactively contaminated metal

  • Lin Zhong;Jian Deng;Zhe-wen Zuo;Can-yu Huang;Bo Chen;Lin Lei;Ze-yong Lei;Jie-heng Lei;Mu Zhao;Yun-fei Hua
    • Nuclear Engineering and Technology
    • /
    • v.55 no.11
    • /
    • pp.3940-3955
    • /
    • 2023
  • A new method of numerical simulation prediction and decontamination effect evaluation for abrasive jet decontamination of radioactively contaminated metal is proposed. Based on a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) simulation, the motion patterns and distribution of abrasives can be predicted, and the decontamination effect can be evaluated by image processing and recognition technology. The impact of three key parameters (impact distance, inlet pressure, abrasive mass flow rate) on the decontamination effect is revealed, and verification experiments were conducted to confirm the reliability of the decontamination effect and of the numerical simulation method. The results show that 60Co and other homogeneous solid-solution radioactive pollutants can be removed by abrasive jet, with an average Co removal rate exceeding 80%. The proposed numerical simulation and evaluation method is reliable, given the good agreement between predicted and actual values: the predicted and actual abrasive distribution diameters are Ф57 and Ф55; the total coverage rates are 26.42% and 23.50%; the average impact velocities are 81.73 m/s and 78.00 m/s. Further analysis shows that the impact distance significantly affects the distribution of abrasive particles on the target surface: the coverage rate of the core area first increases and then decreases as the nozzle impact distance grows, reaching a maximum of 14.44% at 300 mm. It is recommended to set the impact distance around 300 mm, because the core-area coverage of the abrasive is then largest and the impact velocity is stable at its highest value of 81.94 m/s. The nozzle inlet pressure mainly affects the impact kinetic energy of the abrasive and has little effect on the distribution: the greater the inlet pressure, the greater the impact kinetic energy and the stronger the decontamination ability of the abrasive, but the higher the energy consumption as well. For the decontamination of radioactively contaminated metals, it is recommended to set the nozzle inlet pressure at around 0.6 MPa, since most of the Co can be removed at this pressure. Appropriately increasing the abrasive mass and flow can enhance decontamination effectiveness; a total abrasive mass of 50 g per unit decontamination area is suggested, because the core-area coverage rate is then relatively large and the nozzle wear remains acceptable. (A toy coverage-rate computation follows.)
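The image-based evaluation step can be illustrated with a toy computation of coverage rates from a binary impact map; the circular "core area" definition and the synthetic data below are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def coverage_rates(impact_mask: np.ndarray, core_radius_px: int):
    """Toy coverage metrics from a binary impact map (True = abrasive hit).

    Returns (total coverage %, core-area coverage %); the circular core
    area centered on the jet axis is an assumed definition.
    """
    h, w = impact_mask.shape
    total = impact_mask.mean() * 100.0               # % of all pixels hit
    yy, xx = np.ogrid[:h, :w]
    core = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= core_radius_px ** 2
    return total, impact_mask[core].mean() * 100.0   # % of core pixels hit

# Synthetic example: random impact map with ~25% overall coverage.
rng = np.random.default_rng(0)
mask = rng.random((400, 400)) < 0.25
print(coverage_rates(mask, core_radius_px=60))
```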

Multi-Dimensional Analysis Method of Product Reviews for Market Insight (마켓 인사이트를 위한 상품 리뷰의 다차원 분석 방안)

  • Park, Jeong Hyun;Lee, Seo Ho;Lim, Gyu Jin;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.57-78
    • /
    • 2020
  • With the development of the Internet, consumers can easily check product information through E-Commerce. Product reviews consulted in the purchasing process are based on user experience, allowing consumers to act as producers of information as well as consumers of it. This can make purchasing decisions more efficient from the consumer's perspective, and from the seller's point of view it can inform product development and strengthen competitiveness. However, it takes a great deal of time and effort to read the vast number of product reviews that E-Commerce sites offer for the products consumers want to compare, and to grasp the overall assessment along the assessment dimensions each consumer considers important. This is because product reviews are unstructured information, and their sentiment and assessment dimensions cannot be read off immediately. For example, consumers who want to purchase a laptop would like to check the assessment of comparable products along each dimension, such as performance, weight, delivery, speed, and design. In this paper, we therefore propose a method to automatically generate multi-dimensional assessment scores from the reviews of the products to be compared. The proposed method consists of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are built from reviews of the large product category. By combining word embedding with association analysis, the dimension classification model compensates for the limitation of embedding-only approaches in prior work, which capture only the distance between words in sentences when relating dimensions and words. The sentiment analysis model is a CNN trained on phrase-level data tagged as positive or negative, for accurate polarity detection. In the individual product scoring phase, these pre-built models are applied to phrase-level review units: phrases judged to describe a specific dimension are grouped, and multi-dimensional assessment scores are obtained by aggregating their sentiment within each assessment dimension in proportion to the reviews so grouped. In the experiments, approximately 260,000 reviews of the large product category were collected to build the dimension classification model and the sentiment analysis model, and reviews of laptops from companies S and L sold via E-Commerce were collected as test data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word embedding method with an association analysis measuring the frequency between words and dimensions; this combination increased the model's accuracy by 13.7%. The sentiment analysis model assessed polarity more closely when trained on phrase units rather than sentences, with accuracy 29.4% higher than the sentence-based model. Through this study, both sellers and consumers can expect more efficient decision making in purchasing and product development, given that multi-dimensional product comparisons become possible. In addition, unstructured text reviews were transformed into objective values such as frequencies and morphemes and analyzed jointly with word embedding and association analysis, improving the objectivity of the multi-dimensional analysis. This makes the model attractive not only for deploying more effective services in the evolving and fiercely competitive E-Commerce market, but also for satisfying both sellers and customers. (A toy sketch of the scoring phase follows.)
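A toy stand-in for the scoring phase, assuming keyword matching in place of the paper's embedding-plus-association dimension classifier and a tiny lexicon in place of its CNN, purely to show how phrase-level results aggregate into per-dimension scores; all keyword and lexicon entries are invented for illustration.

```python
from collections import defaultdict

# Illustrative stand-ins for the paper's trained models.
DIMENSION_KEYWORDS = {
    "performance": {"fast", "slow", "lag", "cpu"},
    "weight":      {"heavy", "light", "portable"},
    "design":      {"look", "design", "color"},
}
SENTIMENT_LEXICON = {"fast": 1, "light": 1, "portable": 1, "great": 1,
                     "slow": -1, "heavy": -1, "lag": -1, "ugly": -1}

def score_product(phrases: list[str]) -> dict[str, float]:
    """Aggregate phrase-level sentiment into per-dimension scores."""
    sums, counts = defaultdict(float), defaultdict(int)
    for phrase in phrases:
        words = set(phrase.lower().split())
        polarity = sum(SENTIMENT_LEXICON.get(w, 0) for w in words)
        for dim, keys in DIMENSION_KEYWORDS.items():
            if words & keys:                 # phrase mentions this dimension
                sums[dim] += polarity
                counts[dim] += 1
    return {d: sums[d] / counts[d] for d in counts}

print(score_product(["boot is fast", "a bit heavy to carry", "great design"]))
# e.g. {'performance': 1.0, 'weight': -1.0, 'design': 1.0}
```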

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used on the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output is categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a successful color recognition rate of 94.67% by using a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors, and by utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 per color. The average time for generating a feature vector is 0.509 ms for a 150×113 image. After the feature vector is constructed, GPU-based color recognition takes 2.316 ms on average, 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to images of general objects. (A minimal histogram sketch follows.)
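A minimal sketch of the feature-vector step, assuming a joint hue-saturation histogram and a stock similarity measure; the paper's saturation weighting, its likelihood function, and the GPU reference-texture comparison are not reproduced here, and the bin counts are assumptions.

```python
import cv2
import numpy as np

H_BINS, S_BINS = 18, 8        # assumed bin counts, not the paper's

def hs_histogram(bgr_image: np.ndarray) -> np.ndarray:
    """L1-normalized joint hue-saturation histogram as a feature vector."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [H_BINS, S_BINS],
                        [0, 180, 0, 256])
    cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
    return hist

def recognize_color(image: np.ndarray,
                    references: dict[str, np.ndarray]) -> str:
    """Return the reference color whose histogram is most similar
    (correlation stands in for the paper's likelihood function)."""
    feat = hs_histogram(image)
    return max(references,
               key=lambda c: cv2.compareHist(references[c], feat,
                                             cv2.HISTCMP_CORREL))
```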

Elementary Schooler's Recognition and Understanding of the Scientific Units in Daily Life (초등학교 학생들의 생활 속 과학단위 인식과 이해)

  • Kim, Sung-Kyu
    • Journal of Science Education
    • /
    • v.36 no.2
    • /
    • pp.235-250
    • /
    • 2012
  • This paper aims to find out whether elementary school students recognize and understand the scientific units they encounter in everyday life. To select appropriate units for the survey, scientific units in elementary textbooks of science and other science-related subjects were first analyzed, and it was then examined how these units relate to the learners' daily life. The participants were 320 elementary school 6th graders. The questionnaire covered 11 scientific units often seen and used in daily life: kg for mass, km for distance, L for volume, V for voltage, s for time, °C for temperature, km/h for speed, kcal for heat, % for percentage, W for electric power, and pH for acidity. The students were asked to do four tasks: (1) look at the presented pictures and select the appropriate scientific units, (2) write the reasons for choosing those units, (3) answer what the units are used for, and (4) indicate where the units can be found. The data were analyzed in terms of the percentage of students who appeared to recognize and understand the units well, using the SPSS 17.0 statistical program. The results are as follows. Regarding the general use of the units, almost the same units were repeated in science and other subject textbooks of the same grade, with more difficult units appearing as grade level increased. As for individual units, the students appeared to understand relatively well what kg, km, L, °C, kcal, km/h, and W stand for, with more than 91% answering correctly; however, V and s, and in particular % and pH, did not seem to be well understood. With respect to recognition, most students did not recognize units such as L for volume and pH for acidity, probably because these units are difficult at the elementary level compared to the other scientific units. The students indicated that school was the best place to learn and encounter science units related to life, followed by shops/marts, newspapers/broadcasting, streets/roads, homes, and others in that order. The results show that scientific unit learning should be conducted systematically at school and that teachers can play a major role in improving students' understanding and use of the units.


EFFECT OF THE EXPONENTIAL CURING OF COMPOSITE RESIN ON THE MICROTENSILE DENTIN BOND STRENGTH OF ADHESIVES (복합레진의 exponential 중합법이 상아질접착제의 미세인장접착강도에 미치는 영향)

  • Seong, So-Rae;Seo, Duck-kyu;Lee, In-Bog;Son, Ho-Hyun;Cho, Byeong-Hoon
    • Restorative Dentistry and Endodontics
    • /
    • v.35 no.2
    • /
    • pp.125-133
    • /
    • 2010
  • Objectives: Rapid polymerization of the overlying composite resin causes high polymerization shrinkage stress at the adhesive layer. To alleviate this shrinkage stress, ramping the light intensity up over the first 5 seconds was suggested as an exponential curing mode in an LED light curing unit (Elipar FreeLight2, 3M ESPE). In this study, the effectiveness of the exponential curing mode in reducing stress was evaluated by measuring the microtensile bond strength of three adhesives after the overlying composite resin was polymerized with either the continuous or the exponential curing mode. Methods: Scotchbond Multipurpose Plus (MP, 3M ESPE), Single Bond 2 (SB, 3M ESPE), and Adper Prompt (AP, 3M ESPE) were applied onto the flat occlusal dentin of extracted human molars. The overlying hybrid composite (Denfil, Vericom, Korea) was cured under one of the two exposure modes of the curing unit. At 48 h after bonding, microtensile bond strength was measured at a crosshead speed of 1.0 mm/min, and the fractured surfaces were observed under FE-SEM. Results: There was no statistically significant difference in the microtensile bond strengths of each adhesive between curing methods (two-way ANOVA, p > 0.05). The microtensile bond strengths of MP and SB were significantly higher than that of AP (p < 0.05). Mixed failures were observed on most of the fractured surfaces, and no differences in failure mode were observed among groups. Conclusion: The exponential curing method had no beneficial effect on the microtensile dentin bond strengths of the three adhesives compared to the continuous curing method.

Influence of Spring Warming in the Arctic-East Asia Region on the Arctic Oscillation and Dust Days in Korea Attributed to Dust Storms (북극-동아시아 지역의 봄철 온난화가 북극 진동-한국의 황사 사례일의 종관 기상에 미치는 영향 분석)

  • Ji-Sun Kim;Jae-Hee Cho;Hak-Sung Kim
    • Journal of the Korean earth science society
    • /
    • v.45 no.2
    • /
    • pp.121-135
    • /
    • 2024
  • This study examined the influence of near-surface atmospheric warming in the Arctic-East Asia region during spring (March-May) from 1991 to 2020 on the synoptic-scale meteorology of dust storm-induced dust days in Seoul, Korea, in relation to the Arctic Oscillation. Increased springtime warming in the Arctic-East Asia region correlated with a reduction of six days in the occurrence of dust storm-induced dust days in Seoul, along with a decline in their intensity of -1.6 ㎍ m⁻³ yr⁻¹ in PM10 mass concentration. Synoptic-scale meteorological analysis attributed the declining number of dust storm-induced dust days in Korea during the 2010s to increased high-pressure activity, as indicated by negative potential vorticity units. Moreover, a distinct pattern emerged in the distribution of dust storm-induced dust days depending on the Arctic Oscillation Index (AOI), with an increase on negative-AOI days and a decrease on positive-AOI days. Although the northward shift of the polar jet weakened southerly low-pressure activity over Mongolia and northern China, a reinforced high-pressure system formed over the Chinese continent on dust storm-induced dust days with a negative AOI; this resulted in both a decrease in the frequency of dust storm-induced dust days and a reduction in wind speeds, weakening their transport from the source regions to Korea. Conversely, on positive-AOI days, an extensive warm and stagnant high-pressure system dominated mainland China, accompanied by further cooling of the northern segment of the polar jet. A notable decline in lower-tropospheric wind speed across the Mongolia-northern China-Korea region diminished the occurrence of dust storm-induced dust days and also weakened their long-range transport.

The Integer Number Divider Using Improved Reciprocal Algorithm (개선된 역수 알고리즘을 사용한 정수 나눗셈기)

  • Song, Hong-Bok;Park, Chang-Soo;Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.7
    • /
    • pp.1218-1226
    • /
    • 2008
  • With the development of semiconductor integration technology and the increasing use of multimedia functions in computers, more functions have been implemented in hardware. Nowadays, most microprocessors beyond 32 bits implement an integer multiplier in hardware. As for division, however, only certain microprocessors implement the traditional SRT algorithm in hardware, owing to its implementation complexity and slow speed. This paper suggests an algorithm that uses a $w$-bit $\times$ $w$-bit $=$ $2w$-bit multiplier to compute the integer division $\frac{N}{D}$: the reciprocal of the divisor $D$ is calculated first and then multiplied by the dividend $N$. Writing the divisor as $D = 0.d \times 2^{L}$ with $0.5 < 0.d < 1.0$, the approximation $1.g \times 2^{-L}$ of $\frac{1}{D}$ satisfying $0.d \times 1.g = 1 + e$, $e < 2^{-w}$, is defined as the over-reciprocal, and an algorithm for computing it is suggested. The division algorithm then multiplies the dividend $N$ by the over-reciprocal $1.g \times 2^{-L}$ to obtain $\frac{N}{D}$. The suggested algorithm requires no additional correction step, because it computes a sufficiently accurate reciprocal. Moreover, it uses only a multiplier, so no extra division hardware is needed in the microprocessor. It is also faster than the conventional SRT algorithm and operates word by word, which makes it more suitable for compiler implementation than existing division algorithms. In conclusion, the results of this study could be widely used in implementing SoCs (Systems on Chip) and other designs constrained by microprocessor capability and hardware size. (A short sketch of the reciprocal-multiply trick follows.)
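The reciprocal-multiply trick can be sketched in a few lines of Python; the parameterization below follows the standard round-up ("over") reciprocal construction for dividing any $0 \le N < 2^w$ with a single multiply and shift, which mirrors the paper's idea but is not necessarily its exact formulation.

```python
def over_reciprocal(d: int, w: int) -> tuple[int, int]:
    """Round-up fixed-point reciprocal of d.

    Returns (m, shift) such that (n * m) >> shift == n // d for all
    0 <= n < 2**w. Rounding m up plays the role of the paper's
    'over reciprocal': no correction step is needed afterwards.
    """
    ell = (d - 1).bit_length()          # ceil(log2(d))
    shift = w + ell
    m = (1 << shift) // d + 1           # reciprocal rounded up
    return m, shift

def divide(n: int, d: int, w: int = 32) -> int:
    m, shift = over_reciprocal(d, w)
    return (n * m) >> shift             # one multiply + shift, no divide

# Quick check against Python's exact integer division.
for d in (3, 7, 10, 641):
    for n in (0, 1, 99, 2**32 - 1):
        assert divide(n, d) == n // d
```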

A Study on the Development of High Sensitivity Collision Simulation with Digital Twin (디지털 트윈을 적용한 고감도 충돌 시뮬레이션 개발을 위한 연구)

  • Ki, Jae-Sug;Hwang, Kyo-Chan;Choi, Ju-Ho
    • Journal of the Society of Disaster Information
    • /
    • v.16 no.4
    • /
    • pp.813-823
    • /
    • 2020
  • Purpose: In order to maximize the stability and productivity of high-risk, high-cost work, such as dismantling the facilities inside a reactor, through prior simulation, we intend to use digital twin technology that can be closely controlled by simulating the specifications of the actual control equipment. Motion control errors, caused by the time gap between the precision control equipment and the simulation when applying digital twin technology, can create hazards such as collisions between hazardous facilities and control equipment; prior research is needed to eliminate and control these situations. Method: Unity 3D is currently the most popular engine for developing simulations. However, control errors can arise from time correction inside the Unity 3D engine; such errors are expected in many environments and may vary with the development environment, for example the system specifications. To demonstrate this, we develop a collision simulation using the Unity 3D engine, conduct collision experiments under various conditions, organize and analyze the results, and derive tolerances for precision control equipment from them. Result: In the collision simulation experiments, a 1/1000-second time correction in an internal engine function call produces a per-unit-time distance error in the movement control of the colliding objects, and this distance error is proportional to the collision velocity. Conclusion: Remote dismantling simulators using digital twin technology appear to require limits on movement speed that reflect the precision required of the control devices in the given hardware and software environment and under manual control. In addition, the system development environment, the hardware specifications, the size of the modeling data for the simulated control equipment and facilities, the allowable error of the operational control equipment, and the speed required by the work must all be taken into account. (A toy error model follows.)
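A toy model of the reported effect, assuming the 1/1000-second correction acts once per simulated second; it only illustrates that the displacement error scales linearly with velocity, and all numbers are illustrative rather than from the paper.

```python
def position_after(v_mm_s: float, t_s: float, dt_err_s: float) -> float:
    """Position when each second of simulated time is stretched by dt_err_s."""
    return v_mm_s * t_s * (1.0 + dt_err_s)

# Error after 1 s for three speeds, with a 1/1000 s correction per second.
for v in (10.0, 100.0, 1000.0):             # mm/s (assumed speeds)
    ideal = v * 1.0                          # exact position after 1 s
    actual = position_after(v, 1.0, 1e-3)    # with the time correction
    print(f"v={v:7.1f} mm/s  error={actual - ideal:.3f} mm")
# error scales with v: 0.010, 0.100, 1.000 mm
```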

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the rising penetration rate of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis keep increasing; big data analysis will thus become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting it. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs; accordingly, the entry barriers to big data analysis are gradually falling, the technology is spreading, and analysis is expected to be performed by the requesting parties themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually increasing. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis; among the many text mining techniques used for research, topic modeling is one of the most widely applied and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters. It is considered very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to a large number of documents, and it raises a scalability problem: processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method allows topic modeling over a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed where they reside without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics must be measured. Because of these difficulties, this approach has been studied less than other topic modeling approaches. In this paper, we propose a topic modeling approach that addresses both problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, reduced global set) consisting of delegate documents extracted from each local set. We address the first problem by mapping local topics to RGS topics. Along with this, we verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology; an additional experiment confirmed that it can provide results similar to topic modeling over the entire collection. We also propose a reasonable method for comparing the results of the two approaches. (A minimal sketch of the local-to-global topic mapping follows.)
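A minimal sketch of the local-to-global mapping idea, assuming scikit-learn LDA on a toy corpus and cosine similarity between topic-word distributions; the paper's RGS construction and delegate-document extraction are not reproduced here, and the corpus and topic counts are invented for illustration.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus with two obvious themes (travel vs. finance).
docs = ["cheap flight hotel deal", "stock market price fall",
        "hotel booking travel plan", "bank interest rate rise"] * 50
vec = CountVectorizer()                       # shared vocabulary
X = vec.fit_transform(docs)

# "Global" topics from the full collection (stand-in for RGS topics).
global_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Divide into local sets, fit LDA per set, map local -> global topics
# by cosine similarity of the topic-word distributions.
half = X.shape[0] // 2
for i, X_local in enumerate((X[:half], X[half:])):
    local_lda = LatentDirichletAllocation(n_components=2,
                                          random_state=0).fit(X_local)
    sim = cosine_similarity(local_lda.components_, global_lda.components_)
    mapping = sim.argmax(axis=1)              # local topic -> global topic
    print(f"local set {i}: topic mapping {mapping.tolist()}")
```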