• Title/Summary/Keyword: speed functions

Search Results: 1,113

Efforts to Improve the E-Learning Center of the Korean Society of Radiology: Survey on User Experience and Satisfaction (대한영상의학회 이러닝 센터 발전을 위한 노력: 대한영상의학회 회원 설문조사)

  • Yong Eun Chung;Hyun Cheol Kim
    • Journal of the Korean Society of Radiology, v.83 no.6, pp.1259-1272, 2022
  • Purpose: As part of ongoing efforts to improve the current e-learning center, a survey on user experience and satisfaction was conducted to identify areas for improvement. Materials and Methods: Radiologists (n = 454/617) and radiology residents (n = 163/617) of the Korean Society of Radiology were asked to answer a survey via email. The questionnaire asked for basic user information as well as user experiences with the e-learning center, such as workplace, frequency of use, overall satisfaction, reasons for satisfaction or dissatisfaction, and other suggestions for improvement. Results: Annual members and all-member users of the e-learning center reported above-average satisfaction at rates of 67% and 42%, respectively. Approximately 30% of respondents viewed e-learning center lectures more than five times a month, with residents showing particularly high usage. There was high demand for additional lectures covering more diverse specialties (e-learning for annual members only: n = 28/97; e-learning for all members: n = 72/166), a smoother and more convenient search platform/interface (n = 37/97 and n = 58/166, respectively), and regular content updates. Many members also suggested user-friendly functions such as playback speed control and a way to save viewing history, along with improved system stability. Conclusion: Based on the survey results, the educational committee plans to continue improving the e-learning center by increasing the quality and quantity of available lectures and by strengthening technical support to improve the stability and convenience of the e-learning platform.

A Study on the Improvement of Entity-Based 3D Artwork Data Modeling for Digital Twin Exhibition Content Development (디지털트윈 전시형 콘텐츠 개발을 위한 엔티티 기반 3차원 예술작품 데이터모델링 개선방안 연구)

  • So Jin Kim;Chan Hui Kim;An Na Kim;Hyun Jung Park
    • Smart Media Journal, v.13 no.1, pp.86-100, 2024
  • Recently, a number of virtual reality exhibition-type content services have been produced from archived visual art records as a means of promoting cultural policy by public institutions. However, accumulating 3D works of art as data is by no means easy. A review of the current state of metadata in public institutions showed that, because the schemas were built on outdated international standards, resources were not digitized in a form suitable for digital twin development. The data model must therefore evolve to connect multidimensional data at a capacity and speed beyond what existing systems support. Accordingly, this study first reviewed the elements and concepts of data modeling design in previous studies. For virtual reality content designed around the migration of 3D modeling data, the existing metadata was analyzed to identify the upper-level elements that must be added for 3D modeling. Furthermore, the study demonstrated feasibility by directly implementing the process of using the newly created metadata in virtual reality content, following the proposed data modeling process. If this work is developed further, metadata-based data modeling can make public data more useful than it is today.
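
To make the abstract's notion of an entity-based record concrete, here is a minimal illustrative sketch in Python of what an entity-style metadata record for a 3D artwork might look like; every field name here is hypothetical and is not taken from the paper or from any particular metadata standard.

```python
from dataclasses import dataclass, field

@dataclass
class Artwork3DEntity:
    """Hypothetical entity-based metadata record for a 3D-scanned artwork.
    Field names are illustrative only, not the paper's actual schema."""
    entity_id: str                 # stable identifier for the entity
    title: str
    creator: str
    # Upper-level elements specific to 3D modeling that a legacy,
    # document-oriented schema would typically lack:
    mesh_format: str               # e.g. "glTF" or "OBJ"
    polygon_count: int
    texture_resolution: str        # e.g. "4096x4096"
    related_entities: list = field(default_factory=list)  # links to other entities

# Example record ready for migration into virtual reality content:
record = Artwork3DEntity(
    entity_id="art-0001", title="Untitled", creator="Unknown",
    mesh_format="glTF", polygon_count=250_000, texture_resolution="4096x4096",
)
```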

Research on Radiation Shielding Film for Replacement of Lead(Pb) through Roll-to-Roll Sputtering Deposition (롤투롤 스퍼터링 증착을 통한 납(Pb) 대체용 방사선 차폐필름 개발)

  • Sung-Hun Kim;Jung-Sup Byun;Young-Bin Ji
    • Journal of the Korean Society of Radiology, v.17 no.3, pp.441-447, 2023
  • Lead (Pb), the material mainly used for shielding in medical radiation settings, has excellent radiation shielding performance, but it is harmful to the human body in itself and inconvenient because of its heavy weight, while users remain directly or indirectly exposed to radiation. Research into human-friendly, lightweight, and convenient shielding materials that can block radiation risks and replace lead is therefore ongoing. In this study, multi-layer thin films were realized by sputtering (vacuum deposition) bismuth, tungsten, and tin, metals capable of shielding radiation, onto commonly used polyethylene terephthalate (PET) film and onto the fabric used in actual radiation protective clothing. The resulting shielding film was evaluated for its applicability as a radiation shielding material. The film was manufactured under optimized conditions for each shielding material, established by controlling the applied voltage, roll driving speed, and gas supply rate. Adhesion between the substrate and the metal thin film was confirmed as 100/100 in a cross-cut test, and the stability of the thin film over time was confirmed by a one-hour hot water test. The shielding performance of the final film was measured by the Korea Association for Radiation Application (KARA) under inverse broad beam test conditions (tube voltage 50 kV, half-value layer 1.828 mmAl), yielding an attenuation ratio of 16.4 (initial value 0.300 mGy/s, measured value 0.018 mGy/s) and an attenuation ratio of 4.31 (initial value 0.300 mGy/s, measured value 0.069 mGy/s). By securing process efficiency for future commercialization, this work with light shielding films and fabrics lays the foundation for applying such films to radiation protective clothing or to construction materials with shielding functions.
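
For reference, the reported figures match the usual definition of the attenuation ratio as the unshielded dose rate divided by the shielded dose rate; the definition below is the standard one and is not stated explicitly in the abstract:

```latex
\text{attenuation ratio} = \frac{\dot{D}_{\text{unshielded}}}{\dot{D}_{\text{shielded}}},
\qquad
\frac{0.300~\text{mGy/s}}{0.018~\text{mGy/s}} \approx 16.7,
\qquad
\frac{0.300~\text{mGy/s}}{0.069~\text{mGy/s}} \approx 4.3
```

The small differences from the reported 16.4 and 4.31 presumably reflect rounding of the measured dose rates.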

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.1-17, 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and GPU utilization. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's TensorFlow a year earlier. Early deep learning frameworks were developed mainly for research at universities; beginning with the release of TensorFlow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's TensorFlow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially all mathematical expressions in deep learning models can be represented as computational graphs, which consist of nodes and edges, and partial derivatives can be obtained on each edge of the graph. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus (see the sketch after this abstract). First, coding convenience ranks in the order CNTK, TensorFlow, Theano. This criterion is based simply on code length; the learning curve and ease of coding were not the main concern. By this measure, Theano was the most difficult to implement with, while CNTK and TensorFlow were somewhat easier. With TensorFlow, weight variables and biases must be defined explicitly. CNTK and TensorFlow are easier to implement with because they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives coding flexibility. With low-level coding, as in Theano, one can implement and test any new deep learning model or search method one can think of. As for execution speed, we found no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and TensorFlow were very similar, although the experiment was limited to a CNN model. For CNTK, the experimental environment could not be kept identical: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, TensorFlow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important; for those learning deep learning, the availability of sufficient examples and references matters as well.
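
As a concrete illustration of the automatic differentiation described above, here is a minimal reverse-mode sketch in Python over a computational graph of nodes and edges. It mirrors the abstract's formulation (partial derivatives on edges, combined by the chain rule) but is not the API of Theano, TensorFlow, or CNTK.

```python
class Node:
    """One node of a computational graph."""
    def __init__(self, value, parents=()):
        self.value = value        # forward value at this node
        self.parents = parents    # (parent_node, local_partial) pairs, one per edge
        self.grad = 0.0           # accumulated d(output)/d(this node)

def add(a, b):
    # d(a+b)/da = 1, d(a+b)/db = 1
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    # d(a*b)/da = b, d(a*b)/db = a
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(output):
    """Push gradients back along every edge via the chain rule.
    (A full framework would traverse in reverse topological order;
    this simple stack suffices for the small example below.)"""
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local_partial in node.parents:
            parent.grad += node.grad * local_partial
            stack.append(parent)

# f(x, y) = (x + y) * y  at x = 2, y = 3
x, y = Node(2.0), Node(3.0)
f = mul(add(x, y), y)
backward(f)
print(f.value, x.grad, y.grad)  # 15.0, df/dx = y = 3.0, df/dy = x + 2y = 8.0
```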

A Comparative Analysis between Photogrammetric and Auto Tracking Total Station Techniques for Determining UAV Positions (무인항공기의 위치 결정을 위한 사진 측량 기법과 오토 트래킹 토탈스테이션 기법의 비교 분석)

  • Kim, Won Jin;Kim, Chang Jae;Cho, Yeon Ju;Kim, Ji Sun;Kim, Hee Jeong;Lee, Dong Hoon;Lee, On Yu;Meng, Ju Pil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.35 no.6, pp.553-562, 2017
  • Among the various sensors mounted on a UAV (Unmanned Aerial Vehicle), the GPS (Global Positioning System) receiver supports functions such as hovering and waypoint flight based on GPS signals. A GPS receiver works well only where signals are received cleanly. Recently, however, UAV applications have diversified into fields such as facility monitoring, delivery services, and leisure. As a result, a UAV may fly through shadow areas where GPS signals are blocked, and multipath can add noise to the signals when flying in dense areas such as among high-rise buildings. In this study, we used analytical photogrammetry and the auto-tracking total station technique for 3D positioning of a UAV. Analytical photogrammetry is based on bundle adjustment using the collinearity equations, the geometric principle of central projection. The auto-tracking total station technique is based on tracking a 360-degree prism target at intervals of a second or less. In both techniques, the target used for positioning is mounted on top of the UAV, and there is a geometric separation between the targets in the x, y, and z directions. Data were acquired at flight speeds of 0.86 m/s, 1.5 m/s, and 2.4 m/s to examine the effect of UAV flight speed. Accuracy was evaluated with respect to the geometric separation of the targets. As a result, errors ranged from 1 mm to 12.9 cm in the x and y directions of UAV flight. In the z direction, where movement was relatively small, an error of approximately 7 cm occurred regardless of flight speed.
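
For context, the collinearity equations underlying the bundle adjustment mentioned above are, in standard photogrammetric notation, with f the focal length, (X_S, Y_S, Z_S) the projection center, and r_ij the elements of the image rotation matrix:

```latex
x = -f \, \frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
               {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
\qquad
y = -f \, \frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
               {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
```

Each observed image point contributes two such equations, and the bundle adjustment solves the resulting system for the unknown object coordinates, here the position of the UAV-mounted target.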

Minimizing Estimation Errors of a Wind Velocity Forecasting Technique That Functions as an Early Warning System in the Agricultural Sector (농업기상재해 조기경보시스템의 풍속 예측 기법 개선 연구)

  • Kim, Soo-ock;Park, Joo-Hyeon;Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology, v.24 no.2, pp.63-77, 2022
  • Our aim was to reduce the estimation errors of a wind velocity model used in an early warning system for weather risk management in the agricultural sector. The Rural Development Administration (RDA) agricultural weather observation network's wind velocity data and the corresponding estimated data from January to December 2020 were used to calculate linear regression equations (Y = aX + b). In each regression, the wind estimation error at 87 points and eight time slots per day (00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00, and 21:00) is the dependent variable (Y), and the estimated wind velocity is the independent variable (X). When the correlation coefficient exceeded 0.5, the regression equation was used as the wind velocity correction equation; when it was less than 0.5, the mean error (ME) at the corresponding point and time slot was substituted as the correction value instead. To enable use of the wind velocity model at a national scale, a distribution map with a grid resolution of 250 m was created by spatially interpolating the regression coefficients (a and b), the correlation coefficient (R), and the ME values for the 87 points and eight time slots with an inverse distance weighted (IDW) technique. Interpolated grid values for 13 weather observation points in rural areas were then extracted. The wind velocity estimation errors for these 13 points from January to December 2019 were corrected and compared with the system's values. After correction, the mean ME of the wind velocities decreased from 0.68 m/s to 0.45 m/s, and the mean RMSE decreased from 1.30 m/s to 1.05 m/s. In conclusion, the system's wind velocities were overestimated across all time slots; after the correction model was applied, the overestimation was reduced in all time slots except 15:00. The ME and RMSE improved by 33% and 19.2%, respectively. In our system, the warning for wind damage risk to crops is driven by the daily maximum wind speed derived from the daily mean wind speed obtained eight times per day, so this approach is expected to reduce false alarms for strong wind risk by reducing the overestimation of wind velocities.
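
A minimal sketch in Python of the per-point, per-time-slot correction rule described above. The sign convention for the error (observed minus estimated) and the synthetic input arrays are assumptions for illustration; the abstract specifies only the regression form Y = aX + b and the 0.5 correlation threshold.

```python
import numpy as np

def build_correction(estimated, observed):
    """Fit the correction for one station and one time slot.

    estimated, observed: 1-D arrays of wind speeds (m/s) over one year.
    Returns a function mapping an estimated speed to a corrected speed.
    """
    error = observed - estimated                 # assumed sign convention
    r = np.corrcoef(estimated, error)[0, 1]
    if abs(r) > 0.5:                             # abstract: correlation above 0.5 -> regression
        a, b = np.polyfit(estimated, error, 1)   # error ~= a*X + b
        return lambda x: x + (a * x + b)
    me = error.mean()                            # otherwise substitute the mean error (ME)
    return lambda x: x + me

# Hypothetical usage for one station/time slot:
rng = np.random.default_rng(0)
est = rng.uniform(0.5, 8.0, 365)
obs = 0.7 * est + rng.normal(0.0, 0.4, 365)      # system that overestimates the wind
correct = build_correction(est, obs)
print(correct(est[:5]))                          # corrected estimates
```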

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.107-118, 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services such as the Internet of Things (IoT) and vehicle-to-infrastructure/vehicle/nomadic-device (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, high reliability and delay sensitivity combined with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for example for sporting events or emergencies. The second covers support for e-Health, vehicle reliability, and similar services; the third relates to VR games with delay sensitivity and real-time requirements. These groups have recently been reaching agreement on the requirements for such scenarios and the target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, standardized by the ONF, basically refers to a structure that separates control-plane signaling from data-plane packets. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical example of high delay sensitivity. 5G has to support high reliability and delay-sensitivity requirements for V2X in the field of traffic control, and for these reasons V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) covers all communication methods applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure; V2I), between vehicles (vehicle-to-vehicle; V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N), with further variants expected in various fields. Because the SDN structure is under consideration as the next-generation network architecture, its properties are significant here. However, SDN's centralized architecture can be unfavorable for delay-sensitive services, because the central controller must communicate with many nodes and provide processing power for them. Therefore, for emergency V2X communications, delay-related control functions require a tree-like supporting structure, and the architecture of the network that processes vehicle information becomes a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN domain for processing this information is needed. This study examined the SDN architecture in light of the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over vehicle speed, cell radius, and cell tier to derive the range of cells suitable for information transfer in the SDN network. In the simulation, because 5G provides a sufficiently high data rate, the information supporting neighboring vehicles was assumed to be delivered to the car without errors. Furthermore, the 5G small cell was assumed to have a radius of 50-100 m, and vehicle speeds of 30-200 km/h were considered, in order to examine the network architecture that minimizes delay.
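
As a back-of-envelope check of why V2X is delay-critical in small cells, the dwell time of a vehicle crossing a cell can be computed from the parameter ranges stated above (cell radius 50-100 m, speed 30-200 km/h). This is a rough bound for intuition, not the paper's system-level simulation.

```python
# Dwell time of a vehicle passing straight through a 5G small cell.
for radius_m in (50, 100):
    for speed_kmh in (30, 200):
        v = speed_kmh / 3.6            # km/h -> m/s
        dwell_s = 2 * radius_m / v     # chord through the cell center
        print(f"radius {radius_m:>3} m, {speed_kmh:>3} km/h -> {dwell_s:5.2f} s in cell")
# At 200 km/h in a 50 m cell the dwell time is only ~1.8 s, so emergency
# messages plus any SDN control-plane round trips must fit well inside it.
```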

A Study on the Effect of Product Design and the Primary Factors of Quality Competitiveness (제품 디자인의 파급효과와 품질경쟁력의 결정요인에 관한 연구)

  • Lim, Chae-Suk;Yoon, Jong-Young
    • Archives of design research, v.18 no.4 s.62, pp.95-104, 2005
  • The purpose of this study is to estimate the determinants of product design and analyze the impact of product design on quality competitiveness, product reliability, and consumer satisfaction, in an attempt to provide a foundation for the theory of design management. For this empirical analysis, the study derived the relevant measurement variables from a survey of 400 Korean manufacturing firms conducted from August to October 2003. The empirical findings are summarized as follows. First, the determinants of product design are estimated, very significantly (at p<0.001), to be R&D capability, the level of R&D expenditure, and the level of innovative activities (5S, TQM, Six Sigma, QC, etc.). This result supports Pawar and Driva's (1999) two principles, by which the performance of product design and product development can be evaluated simultaneously in the context of concurrent engineering (CE) in new product development (NPD) activities. Second, the hypothesized causal chain product design → quality competitiveness → customer satisfaction → customer loyalty is accepted very significantly (at p<0.001). This implies that product design affects consumer satisfaction positively, not directly but indirectly, through its influence on quality competitiveness. This result also supports studies such as Flynn et al. (1994), Ahire et al. (1996), and Ahire and Dreyfus (2000), which conclude that design management is a significant determinant of product quality. These results are important in the following sense: the finding that quality competitiveness plays a bridging role between product design and consumer satisfaction can reconcile the traditional debate between the QFD (quality function deployment) approach asserted by product developers and the conjoint analysis maintained by marketers; the first result relates to the QFD approach, whereas the second relates to conjoint analysis. At the same time, the results support the rationale of Ettlie's (1997) design integration (DI), i.e., the coordination of the timing and substance of product development activities performed by the various disciplines and organizational functions over a product's life cycle. Finally, the corporate-level policy implication is that successful design management (DM) requires not only the support of top management but also the removal of communication barriers (i.e., the adoption of cross-functional teams), so that concurrent engineering (CE), the simultaneous development of product and process designs, can ensure product development speed, design quality, and market success.


Modeling of Sensorineural Hearing Loss for the Evaluation of Digital Hearing Aid Algorithms (디지털 보청기 알고리즘 평가를 위한 감음신경성 난청의 모델링)

  • 김동욱;박영철
    • Journal of Biomedical Engineering Research, v.19 no.1, pp.59-68, 1998
  • Digital hearing aids offer many advantages over conventional analog hearing aids, and with the advent of high-speed digital signal processing chips, new digital techniques have been introduced to them. However, the evaluation of new ideas in hearing aids is necessarily accompanied by intensive subject-based clinical tests, which require much time and cost. In this paper, we present an objective method to evaluate and predict the performance of hearing aid systems without such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing impairment model is established from auditory test data of the impaired subject being simulated. The nonlinear behavior of loudness recruitment is defined using hearing loss functions generated from the measurements. To transform natural input sound into its impaired counterpart, a frequency sampling filter is designed; the filter is continuously refreshed with the level-dependent frequency response function provided by the impairment model. To assess performance, the HIS algorithm was implemented in real time using a floating-point DSP. Signals processed with the real-time system were presented to normal-hearing subjects, and their auditory data as modified by the system were measured. Sensorineural hearing impairment was simulated and tested; hearing threshold and speech discrimination tests demonstrated the system's effectiveness for hearing impairment simulation. Using the HIS system, we evaluated three typical hearing aid algorithms.
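
A minimal sketch in Python of a frequency sampling filter of the kind the abstract describes, using SciPy's firwin2 (FIR design from a sampled frequency response). The sample rate and hearing-loss curve below are hypothetical illustration values, not the paper's measured data; in the HIS algorithm the response would be refreshed continuously from the level-dependent impairment model.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 16000                                   # sample rate in Hz (assumed)
# Hypothetical hearing-loss curve: attenuation in dB at audiometric frequencies.
freqs_hz = [0, 250, 500, 1000, 2000, 4000, 8000]
loss_db  = [0,   5,  10,   20,   35,   50,   60]
gain = [10 ** (-loss / 20) for loss in loss_db]

# Frequency sampling design: the FIR filter interpolates the sampled
# magnitude response (frequencies normalized to Nyquist = fs/2).
taps = firwin2(257, [f / (fs / 2) for f in freqs_hz], gain)

# Apply the "impairment" to one second of input sound (noise as a stand-in).
impaired = lfilter(taps, 1.0, np.random.default_rng(0).standard_normal(fs))
```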


Feasibility of Ocean Survey by using Ocean Acoustic Tomography in the southwestern part of the East Sea (동해 남서해역에서 해양음향 토모그래피 운용에 의한 해양탐사 가능성)

  • Han, Sang-Kyu;Na, Jung-Yul
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.6
    • /
    • pp.75-82
    • /
    • 1994
  • The ray paths and travel times of sound waves in the ocean depend on the physical properties of the propagating medium. Ocean Acoustic Tomography (OAT) inversely estimates the physical properties of the medium from variations in travel time between fixed sources and receivers. To apply OAT to ocean surveying, the tomographic procedure requires a forward problem in which travel time variations are identified with the variability of the medium. Received signals must also satisfy the necessary conditions of ray path stability, identification, and resolution for OAT to work. A canonical ocean was determined from historical data, and its travel times and ray paths are used as reference values. The sound speed of the canonical ocean in the East Sea is about 1523 m/s at the surface and 1458 m/s at the sound channel axis (400 m). Sound speeds in the East Sea are perturbed by a warm eddy whose horizontal extent is more than 100 km and whose depth scale exceeds 200 m. In this study, an acoustic source and receiver are placed at a depth of 350 m, above the sound channel axis, separated by a range of 200 km. Ray paths are identified by the ray theory method in a range-dependent medium whose sound speed is a function of range and depth. The eigenray information, obtained by interpolating between the rays bracketing the receiver, is used to simulate the received signal by convolving the source signal with the eigenray arrivals. The source signal is a 400 Hz rectangular pulse with a bandwidth of 16 Hz and a pulse length of 64 ms. According to the analysis of the received signals and the identified ray paths using a numerical model of underwater sound propagation, the simulated signals satisfy the necessary conditions for OAT as applied in the East Sea.
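
A minimal sketch in Python of the received-signal simulation step described above: the 400 Hz, 64 ms source pulse (parameters from the abstract) is convolved with a set of eigenray arrivals. The eigenray delays and amplitudes below are hypothetical placeholders; in the study they come from ray tracing in the range-dependent medium (at roughly 1500 m/s, a 200 km path gives travel times near 134 s).

```python
import numpy as np

fs = 4000                      # sampling rate in Hz (assumed)
f0, T = 400.0, 0.064           # 400 Hz tone pulse, 64 ms long (from the abstract)
t = np.arange(0.0, T, 1.0 / fs)
source = np.sin(2 * np.pi * f0 * t)          # rectangular-envelope pulse

# Hypothetical eigenrays: (travel time in s, relative amplitude).
eigenrays = [(134.20, 1.0), (134.27, 0.6), (134.41, 0.35)]

# Channel impulse response: one weighted spike per eigenray arrival.
n = int(135.0 * fs)
impulse = np.zeros(n)
for delay, amp in eigenrays:
    impulse[int(delay * fs)] += amp

received = np.convolve(impulse, source)      # received = source (*) channel IR
```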
