• Title/Summary/Keyword: Variable Input

Search Results: 1,444 (processing time: 0.032 seconds)

A Variable Latency Goldschmidt's Floating Point Number Divider (가변 시간 골드스미트 부동소수점 나눗셈기)

  • Kim Sung-Gi;Song Hong-Bok;Cho Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.2
    • /
    • pp.380-389
    • /
    • 2005
  • The Goldschmidt iterative algorithm computes a floating point quotient by performing a fixed number of multiplications. In this paper, a variable latency Goldschmidt division algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To calculate a floating point quotient '$\frac{N}{F}$', both the numerator and the denominator are multiplied by '$T=\frac{1}{F}+e_t$', giving '$\frac{TN}{TF}=\frac{N_0}{F_0}$'. The algorithm then repeats the operations '$R_i=2-e_r-F_i,\;N_{i+1}=N_i{\ast}R_i,\;F_{i+1}=F_i{\ast}R_i$, $i\in\{0,1,\ldots,n-1\}$'. The bits to the right of $p$ fractional bits in the intermediate multiplication results are truncated, and this truncation error is less than '$e_r=2^{-p}$'. The value of $p$ is 29 for single precision and 59 for double precision floating point. Let '$F_i=1+e_i$'; then '$F_{i+1}=1-e_{i+1}$'. If '$|F_i-1|<2^{\frac{-p+3}{2}}$' holds, then '$e_{i+1}<16e_r$', which is less than the smallest number representable as a floating point number, so '$N_{i+1}$' approximates '$\frac{N}{F}$'. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($T=\frac{1}{F}+e_t$) of varying sizes. The superiority of this algorithm is demonstrated by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a divider; it can also be used to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that use floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
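The variable-latency stopping rule above can be sketched in ordinary Python doubles, with an 8-bit rounded seed standing in for the reciprocal table $T=\frac{1}{F}+e_t$ (the table size, the use of floats instead of the paper's truncated fixed-point datapath, and the normalization of $F$ to $[1, 2)$ are assumptions of this sketch, not the paper's hardware):

```python
def goldschmidt_divide(n, f, p=29, max_iters=10):
    """Variable-latency Goldschmidt division: keep multiplying until the
    denominator error falls below the bound, instead of using a fixed
    iteration count.  f is assumed normalized to [1, 2)."""
    e_r = 2.0 ** -p                       # truncation error bound e_r = 2^-p
    t = round((1.0 / f) * 256) / 256      # 8-bit seed, stand-in for T = 1/F + e_t
    num, den = n * t, f * t               # N0 = T*N, F0 = T*F (den is close to 1)
    iters = 0
    # Iterate R_i = 2 - e_r - F_i; stop once |F_i - 1| < 2^((-p+3)/2),
    # at which point the remaining error is below representable precision.
    while abs(den - 1.0) >= 2.0 ** ((-p + 3) / 2) and iters < max_iters:
        r = 2.0 - e_r - den
        num, den = num * r, den * r
        iters += 1
    return num, iters
```

A coarser seed table forces more loop iterations and a finer one fewer, which is exactly the table-size versus average-latency trade-off the paper quantifies.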

A Study on Implementation and Performance of the Power Control High Power Amplifier for Satellite Mobile Communication System (위성통신용 전력제어 고출력증폭기의 구현 및 성능평가에 관한 연구)

  • 전중성;김동일;배정철
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.1
    • /
    • pp.77-88
    • /
    • 2000
  • In this paper, a 3-mode variable gain high power amplifier for a transmitter of INMARSAT-B operating at L-band (1626.5-1646.5 MHz) was developed. This SSPA can amplify to 42 dBm in high power mode, 38 dBm in medium power mode, and 36 dBm in low power mode for INMARSAT-B. The allowable error sets +1 dBm as the upper limit and -2 dBm as the lower limit, respectively. To simplify the fabrication process, the whole system is designed in two parts, a driving amplifier and a high power amplifier. HP's MGA-64135 and Motorola's MRF-6401 were used for the driving amplifier, and ERICSSON's PTF-10114 and PTF-10021 for the high power amplifier. The SSPA was fabricated with RF circuits, temperature compensation circuits, 3-mode variable gain control circuits, and a 20 dB parallel coupled-line directional coupler in an aluminum housing. In addition, a gain control method using a digital attenuator was proposed for the 3-mode amplifier. It has been experimentally verified that the gain is controlled for a single tone signal as well as for two tone signals. In this case, the SSPA detects the output power through the 20 dB parallel coupled-line directional coupler and a phase non-splitter amplifier. The realized SSPA has small signal gains of 41.6 dB, 37.6 dB, and 33.2 dB within a 20 MHz bandwidth, and the VSWR of the input and output ports is less than 1.3:1. The minimum value of the 1 dB compression point is more than 12 dBm for the 3-mode variable gain high power amplifier. A typical two tone intermodulation level is a maximum of 36.5 dBc with the single carrier backed off 3 dB from the 1 dB compression point. A maximum output power of 43 dBm was achieved at 1636.5 MHz. These results correspond to a high power of 20 Watt, which was the design target.
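The power levels above are quoted in dBm (decibels referenced to 1 mW); a quick conversion, written here as a small illustrative helper, confirms that the measured 43 dBm maximum corresponds to the 20 W design target:

```python
import math

def dbm_to_watts(dbm):
    """Convert power in dBm (dB referenced to 1 mW) to watts: P = 10^(dBm/10) mW."""
    return 10 ** (dbm / 10) / 1000.0

def watts_to_dbm(w):
    """Inverse conversion: watts back to dBm."""
    return 10 * math.log10(w * 1000.0)

# The three INMARSAT-B output modes and the measured maximum:
for dbm in (42, 38, 36, 43):
    print(f"{dbm} dBm -> {dbm_to_watts(dbm):.2f} W")   # 43 dBm is ~19.95 W, i.e. ~20 W
```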


A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Computation (가변 시간 뉴톤-랍손 부동소수점 역수 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.2 s.92
    • /
    • pp.95-102
    • /
    • 2005
  • The Newton-Raphson iterative algorithm for finding a floating point reciprocal, which is widely used in floating point division, calculates the reciprocal by performing a fixed number of multiplications. In this paper, a variable latency Newton-Raphson reciprocal algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the reciprocal of a floating point number $F$, the algorithm repeats the operation '$X_{i+1}=X_i{\ast}(2-e_r-F{\ast}X_i),\;i\in\{0,1,\ldots,n-1\}$' with the initial value '$X_0=\frac{1}{F}{\pm}e_0$'. The bits to the right of $p$ fractional bits in the intermediate multiplication results are truncated, and this truncation error is less than '$e_r=2^{-p}$'. The value of $p$ is 27 for single precision and 57 for double precision floating point. Let '$X_i=\frac{1}{F}+e_i$'; then '$X_{i+1}=\frac{1}{F}-e_{i+1}$', where $e_{i+1}$ is less than the smallest number representable as a floating point number, so $X_{i+1}$ approximates '$\frac{1}{F}$'. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($X_0=\frac{1}{F}{\pm}e_0$) of varying sizes. The superiority of this algorithm is demonstrated by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal unit. Also, it can be used to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that utilize floating point numbers, such as digital signal processing, computer graphics, multimedia, scientific computing, etc.
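The iteration can be sketched the same way as the companion Goldschmidt divider: plain doubles, an 8-bit rounded seed standing in for the table $X_0=\frac{1}{F}{\pm}e_0$, and a residual-based stopping rule (the seed size, the float arithmetic, the normalization of $F$ to $[1, 2)$, and the exact stopping threshold are assumptions of this sketch):

```python
def newton_raphson_reciprocal(f, p=27, max_iters=10):
    """Variable-latency Newton-Raphson reciprocal: iterate
    X_{i+1} = X_i * (2 - e_r - F*X_i) until the residual |F*X_i - 1|
    drops below a bound, instead of a fixed number of times."""
    e_r = 2.0 ** -p                       # truncation error bound e_r = 2^-p
    x = round((1.0 / f) * 256) / 256      # 8-bit seed, stand-in for X0 = 1/F +- e0
    iters = 0
    while abs(f * x - 1.0) >= 2.0 ** ((-p + 3) / 2) and iters < max_iters:
        x = x * (2.0 - e_r - f * x)       # one Newton-Raphson refinement step
        iters += 1
    return x, iters
```

Because Newton-Raphson converges quadratically, each extra step roughly doubles the number of correct bits, so the loop count depends directly on how good the table seed was for the given input.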

Export Prediction Using Separated Learning Method and Recommendation of Potential Export Countries (분리학습 모델을 이용한 수출액 예측 및 수출 유망국가 추천)

  • Jang, Yeongjin;Won, Jongkwan;Lee, Chaerok
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.69-88
    • /
    • 2022
  • One of the characteristics of South Korea's economic structure is that it is highly dependent on exports. Thus, many businesses are closely tied to the global economy and the diplomatic situation. In addition, small and medium-sized enterprises(SMEs) specialized in exporting are struggling due to the spread of COVID-19. Therefore, this study aimed to develop a model to forecast exports for the next year to support SMEs' export strategies and decision making. This study also proposed a strategy to recommend promising export countries for each item based on the forecasting model. We analyzed important variables used in previous studies, such as country-specific, item-specific, and macro-economic variables, and collected those variables to train our prediction model. Next, through exploratory data analysis(EDA) it was found that exports, the target variable, have a highly skewed distribution. To deal with this issue and improve predictive performance, we suggest a separated learning method, in which the whole dataset is divided into homogeneous subgroups and a prediction algorithm is applied to each group. The characteristics of each group can thus be trained more precisely using different input variables and algorithms. In this study, we divided the dataset into five subgroups based on the exports to decrease the skewness of the target variable. After the separation, we found that each group has different characteristics in countries and goods. For example, in Group 1, most of the exporting countries are developing countries and the majority of exported goods are low-value products such as glass and prints. On the other hand, major export partners of South Korea such as China, the USA, and Vietnam are included in Group 4 and Group 5, and most exported goods in these groups are high-value products. We then used LightGBM(LGBM) and Exponential Moving Average(EMA) for prediction.
Considering the characteristics of each group, models were built using LGBM for Groups 1 to 4 and EMA for Group 5. To evaluate the performance of the model, we compared different model structures and algorithms. As a result, it was found that the separated learning model performed best. After the model was built, we also provided the variable importance of each group using SHAP values to add explainability to our model. Based on the prediction model, we proposed a two-stage recommendation strategy for potential export countries. In the first phase, a BCG matrix was used to find Star and Question Mark markets that are expected to grow rapidly. In the second phase, we calculated scores for each country and made recommendations according to ranking. Using this recommendation framework, potential export countries were selected and information about those countries for each item was presented. There are several implications of this study. First of all, most of the preceding studies have focused on a specific situation or country, whereas this study uses a wide set of variables and develops a machine learning model covering a broad range of countries and items. Second, to our knowledge, it is the first attempt to adopt a separated learning method for export prediction. By separating the dataset into five homogeneous subgroups, we could enhance the predictive performance of the model, and a more detailed explanation of the models by group is provided using SHAP values. Lastly, this study has several practical implications. There are some platforms that serve trade information, including KOTRA, but most of them are based on past data, so it is not easy for companies to predict future trends. By utilizing the model and recommendation strategy in this research, the trade-related services of each platform can be improved so that companies, including SMEs, can fully utilize them when making export strategies and decisions.
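A minimal sketch of the two building blocks described above, the separated-learning split and the EMA forecaster used for Group 5 (the equal-count quantile split and the smoothing factor are illustrative assumptions, not the paper's exact grouping):

```python
def split_by_quantile(records, key, n_groups=5):
    """Separated-learning split: partition records into n_groups of roughly
    equal size ordered by the target value, so each subgroup is far more
    homogeneous than the skewed whole and can get its own model."""
    ordered = sorted(records, key=key)
    size = -(-len(ordered) // n_groups)        # ceiling division
    return [ordered[i * size:(i + 1) * size] for i in range(n_groups)]

def ema_forecast(series, alpha=0.5):
    """Exponential moving average: the one-step-ahead forecast is the
    recursively smoothed value s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s
```

In the paper's setup, a gradient-boosting model would be fitted independently to each of the first four subgroups, while the top-export subgroup is forecast with the EMA.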

Evaluation of Tumor Registry Validity in Samsung Medical Center Radiation Oncology Department (삼성서울병원 방사선종양학과 종양등록 정보의 타당도 평가)

  • Park Won;Huh Seung Jae;Kim Dae Yong;Shin Seong Soo;Ahn Yong Chan;Lim Do Hoon;Kim Seonwoo
    • Radiation Oncology Journal
    • /
    • v.22 no.1
    • /
    • pp.33-39
    • /
    • 2004
  • Purpose: A tumor registry system for patients treated with radiotherapy at Samsung Medical Center has been in use since the hospital opened in 1994. In this study, the tumor registry system is introduced and the validity of the tumor registration is analyzed. Materials and Methods: The tumor registry system is composed of three parts: patient demographic, diagnostic, and treatment information. All data were input on a screen using only a mouse. Among the 10,000 cases registered in the tumor registry system until August 2002, 199 were randomly selected and their registration data were compared with the patients' medical records. Results: Input errors were detected in 15 cases (7.5%). There were 8 error items in the part relating to diagnostic information: tumor site 3, pathology 2, AJCC staging 2, and performance status 1. In the part relating to treatment information there were 9 mistaken items: combination treatment 4, date of initial treatment 3, and radiation completeness 2. The error ratio varied considerably with the assigned doctor: doctors who did no double-checks showed more errors than those who did (15.6% vs. 3.7%). Conclusion: Our tumor registry had errors within 2% for each item. Although the overall data quality was high, further improvement might be achieved through promoting diligence, continuing training, periodic validity tests, and maintaining double-checks. Also, some items associated with the hospital information system will be input automatically in the next step.

Development of Deep-Learning-Based Models for Predicting Groundwater Levels in the Middle-Jeju Watershed, Jeju Island (딥러닝 기법을 이용한 제주도 중제주수역 지하수위 예측 모델개발)

  • Park, Jaesung;Jeong, Jiho;Jeong, Jina;Kim, Ki-Hong;Shin, Jaehyeon;Lee, Dongyeop;Jeong, Saebom
    • The Journal of Engineering Geology
    • /
    • v.32 no.4
    • /
    • pp.697-723
    • /
    • 2022
  • Data-driven models to predict groundwater levels 30 days in advance were developed for 12 groundwater monitoring stations in the middle-Jeju watershed, Jeju Island. Stacked long short-term memory (stacked-LSTM), a deep learning technique suitable for time series forecasting, was used for model development. Daily time series data from 2001 to 2022 for precipitation, groundwater usage amount, and groundwater level were considered. Various models were proposed that used different combinations of the input data types and varying lengths of previous time series data for each input variable. A general procedure for deep-learning-based model development is suggested based on consideration of the comparative validation results of the tested models. A model using precipitation, groundwater usage amount, and previous groundwater level data as input variables outperformed any model neglecting one or more of these data categories. Using extended sequences of these past data improved the predictions, possibly owing to the long delay time between precipitation and groundwater recharge, which results from the deep groundwater level in Jeju Island. However, limiting the range of considered groundwater usage data that significantly affected the groundwater level fluctuation (rather than using all the groundwater usage data) improved the performance of the predictive model. The developed models can predict the future groundwater level based on the current amount of precipitation and groundwater use. Therefore, the models provide information on the soundness of the aquifer system, which will help to prepare management plans to maintain appropriate groundwater quantities.
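The data preparation described above, turning the daily series into samples that pair a window of past inputs with the groundwater level 30 days ahead, can be sketched as follows (the 180-day default input length is an illustrative assumption; the paper compares models with varying lengths of past data per input variable):

```python
def make_sequences(precip, usage, level, in_len=180, horizon=30):
    """Build (input, target) pairs for a horizon-day-ahead level forecaster:
    each sample stacks the past in_len days of precipitation, groundwater
    usage, and groundwater level, and targets the level horizon days later.
    This is the windowing step that feeds a stacked-LSTM model."""
    samples = []
    n = len(level)
    for t in range(in_len, n - horizon + 1):
        x = [list(series[t - in_len:t]) for series in (precip, usage, level)]
        y = level[t + horizon - 1]        # level `horizon` days after the window
        samples.append((x, y))
    return samples
```

Lengthening `in_len` lets the model see the delayed response of the deep Jeju aquifer to precipitation, at the cost of fewer usable samples per station.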

Prediction of Key Variables Affecting NBA Playoffs Advancement: Focusing on 3 Points and Turnover Features (미국 프로농구(NBA)의 플레이오프 진출에 영향을 미치는 주요 변수 예측: 3점과 턴오버 속성을 중심으로)

  • An, Sehwan;Kim, Youngmin
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.263-286
    • /
    • 2022
  • This study acquires NBA statistical information for a total of 32 years, from 1990 to 2022, using web crawling, observes variables of interest through exploratory data analysis, and generates related derived variables. Unused variables were removed through a cleaning process on the input data, and correlation analysis, t-tests, and ANOVA were performed on the remaining variables. For each variable of interest, the difference in means between the teams that advanced to the playoffs and those that did not was tested; to complement this, the differences in means among three groups (upper/middle/lower) based on ranking were also confirmed. Of the input data, only the current season's data was used as the test set, and 5-fold cross-validation was performed by dividing the remainder into training and validation sets for model training. Overfitting was ruled out by comparing the cross-validation results with the final results on the test set and confirming that there was no difference in the performance metrics. Because the quality of the raw data is high and the statistical assumptions are satisfied, most of the models showed good results despite the small dataset. This study not only predicts NBA game results and classifies playoff advancement using machine learning, but also examines whether the variables of interest are among the most important variables by assessing the importance of the input attributes. Through the visualization of SHAP values, it was possible to overcome the limitation that results could not be interpreted from feature importance alone, and to compensate for the lack of consistency in importance calculation when variables are added or removed. A number of variables related to three-pointers and turnovers, the subjects of interest in this study, were found to be among the major variables affecting playoff advancement in the NBA.
This study is similar to existing sports data analysis work in that it covers topics such as match results, playoffs, and championship predictions, and in that it comparatively analyzes several machine learning models; it differs in that the features of interest were set in advance and statistically verified, and then compared against the machine learning results. It is also differentiated from existing studies by presenting explanatory visualization results using SHAP, one of the XAI methods.
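The 5-fold splitting described above can be sketched as a plain index generator (a generic sketch of k-fold cross-validation, not the authors' code; in the study the current season is held out entirely as the test set before the folds are formed):

```python
def k_fold_indices(n, k=5):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation:
    the n samples are dealt into k folds, and each fold serves exactly
    once as the validation set while the rest form the training set."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val
```

Comparing the averaged validation score across the five folds with the held-out test score is the overfitting check the abstract describes: a large gap between the two would indicate overfitting.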

Evaluation of the Bending Moment of FRP Reinforced Concrete Using Artificial Neural Network (인공신경망을 이용한 FRP 보강 콘크리트 보의 휨모멘트 평가)

  • Park, Do Kyong
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.10 no.5
    • /
    • pp.179-186
    • /
    • 2006
  • In this study, the Multi-Layer Perceptron (MLP), one of the Artificial Neural Network (ANN) models, is used to develop a model that evaluates the bending capacity of concrete beams reinforced with FRP rebar, with data from existing studies used to train the ANN model. The main determinants of bending capacity, namely width, effective depth, compressive strength, FRP reinforcement ratio, and FRP balanced reinforcement ratio, are used as the independent variables of the input layer, and the moment capacity measured in the experiments is used as the dependent variable of the output layer. The developed ANN model can be applied to GFRP, CFRP, and AFRP rebar, and it is verified using the data of other previous researchers. The ANN(0.05) model yields comparatively precise estimates of bending capacity, while considerable errors are observed in the ANN(0.1) model. Verification of the ANN model shows that the estimated values correspond fairly well to the experimental data. In addition, a sensitivity analysis of the variables determining bending performance shows that effective depth has the greatest influence, followed by FRP reinforcement ratio, balanced reinforcement ratio, compressive strength, and width.
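The MLP mapping described above, from the five section variables to a moment estimate, can be sketched as a single-hidden-layer forward pass (the layer sizes, sigmoid activation, linear output, and the weights in the example are illustrative assumptions, not the trained model):

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """Forward pass of a single-hidden-layer perceptron: sigmoid hidden
    units followed by a linear output unit.  x would hold the five input
    variables (width, effective depth, compressive strength, FRP ratio,
    FRP balanced ratio); the output plays the role of the moment estimate."""
    hidden = [
        1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(row, x)) + b)))
        for row, b in zip(w1, b1)
    ]
    return sum(w * h for w, h in zip(w2, hidden)) + b2
```

In practice the weights come from training against the experimental moment data; the sensitivity analysis in the abstract amounts to perturbing one input at a time and observing the change in this output.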

Influence of Heat Treatment Conditions on the Temperature Control Parameter (t1) for Shape Memory Alloy (SMA) Actuator Wire in Nucleoplasty (수핵성형술용 형상기억합금(SMA) 액추에이터 와이어의 열처리 조건 변화가 온도제어 파라미터(t1)에 미치는 영향)

  • Oh, Dong-Joon;Kim, Cheol-Woong;Yang, Young-Gyu;Kim, Tae-Young;Kim, Jay-Jung
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.34 no.5
    • /
    • pp.619-628
    • /
    • 2010
  • Shape Memory Alloy (SMA) has recently received attention for developing implantable surgical equipment, and it is expected to lead the future medical device market by adequately imitating a surgeon's flexible and delicate hand movements. However, SMA actuators have not been widely used because of their nonlinear behavior, called hysteresis, which makes them difficult to control. Hence, we propose a parameter, $t_1$, which is necessary for temperature control, by analyzing the open-loop step response between current and temperature and comparing it with the solutions of linear differential equations. $t_1$ is the pole of the transfer function in a time-invariant linear model whose input and output are current and temperature, respectively; hence, $t_1$ is related to the state variable used for temperature control. Examining the parameter under different heat treatment conditions, $T_{max}$ was found to assume the lowest value, and $t_1$ was found to be insensitive to the heat treatment.
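Since $t_1$ is the pole of a first-order current-to-temperature model, the step response and a simple way of reading $t_1$ off it can be sketched as follows (the $T_{max}$ value, sampling grid, and the 63.2%-rise rule are illustrative assumptions; the paper estimates $t_1$ from its own measurements):

```python
import math

def step_response_temp(t, t_max, t1):
    """Step response of a first-order current-to-temperature model:
    T(t) = T_max * (1 - exp(-t / t1)), where t1 is the time constant,
    i.e. the (negative reciprocal of the) pole of the transfer function."""
    return t_max * (1.0 - math.exp(-t / t1))

def estimate_t1(times, temps, t_max):
    """Read t1 off a measured step response: a first-order system reaches
    about 63.2% of its final value at t = t1."""
    target = t_max * (1.0 - math.exp(-1.0))   # ~63.2% of the final value
    for t, temp in zip(times, temps):
        if temp >= target:
            return t
    return None
```

On a clean first-order response this recovers the time constant directly; on real SMA data the hysteresis the abstract mentions is what makes such linear identification only locally valid.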

Estimation and assessment of natural drought index using principal component analysis (주성분 분석을 활용한 자연가뭄지수 산정 및 평가)

  • Kim, Seon-Ho;Lee, Moon-Hwan;Bae, Deg-Hyo
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.6
    • /
    • pp.565-577
    • /
    • 2016
  • The objective of this study is to propose a method for computing a Natural Drought Index (NDI) that does not consider man-made water facilities. Principal Component Analysis (PCA) was used to estimate the NDI. Three-month moving cumulative runoff, soil moisture, and precipitation during 1977~2012 were selected as input data for the NDI. Observed precipitation data were collected from the KMA ASOS (Korea Meteorological Administration Automated Synoptic Observing System), while model-driven runoff and soil moisture from the Variable Infiltration Capacity (VIC) model were used. Time series analysis, drought characteristic analysis, and spatial analysis were used to assess the utility of the NDI and to compare it with the existing SPI, SRI, and SSI. The NDI precisely reflected the onset and termination of past drought events, with a mean absolute error of 0.85 in the time series analysis, and described drought duration and inter-arrival time well, with errors of 1.3 and 1.0 respectively, in the drought characteristic analysis. The NDI also reflected regional drought conditions well in the spatial analysis. Accuracy rankings for drought onset, termination, duration, and inter-arrival time were calculated for the NDI, SPI, SRI, and SSI; the results showed that the NDI is more precise than the others. The NDI overcomes the limitation of univariate drought indices and can be useful for drought analysis as a representative measure of different types of drought, such as meteorological, hydrological, and agricultural droughts.
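The PCA step, extracting a single composite index from the three standardized inputs, can be sketched via power iteration on the 3×3 covariance matrix (a generic sketch of the method in pure Python, not the study's implementation):

```python
def standardize(xs):
    """Zero-mean, unit-variance scaling of one variable's series."""
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / s for x in xs]

def first_pc_scores(columns, iters=100):
    """Scores on the leading principal component, found by power iteration
    on the covariance matrix of the standardized inputs.  With runoff,
    soil moisture, and precipitation as columns, these scores play the
    role of a composite drought index like the NDI."""
    z = [standardize(c) for c in columns]          # variables as rows
    k, n = len(z), len(z[0])
    cov = [[sum(z[i][t] * z[j][t] for t in range(n)) / n for j in range(k)]
           for i in range(k)]
    v = [1.0] * k                                  # power-iteration start vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                  # converges to leading eigenvector
    return [sum(v[i] * z[i][t] for i in range(k)) for t in range(n)]
```

Because the first component captures the largest shared variance of the three hydrologic variables, low scores line up with periods when runoff, soil moisture, and precipitation are jointly deficient, which is exactly what a multivariate drought index is meant to flag.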