A Study on the Construction and Landscape Characteristics of Munam Pavilion in Changnyeong(聞巖亭) (창녕 문암정(聞巖亭)의 조영 및 경관특성에 관한 연구)

  • Lee, Won-Ho;Kim, Dong-Hyun;Kim, Jae-Ung;Ahn, Gye-Bog
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.32 no.2
    • /
    • pp.27-41
    • /
    • 2014
  • This study investigates the history and prototypical cultural values of Munam Pavilion (聞巖亭) in Changnyeong through literature analysis, and its construction, location, spatial structure, and landscape characteristics using ArcGIS. The results were as follows. First, Shin-cho (辛礎, 1549~1618) built the pavilion, and his view of nature was one of returning to nature. Documents recording his retirement from politics and the construction of the pavilion place its formation between 1608 and 1618. Second, the pavilion is surrounded by mountains, sits at the top of a steep slope, and was known as a scenic site of the area; however, the historical landscape has been damaged by a nearby bridge, agricultural facilities, the adjacent town, and the masonry embankments and beams along the Kye-sung stream. Third, the grounds are divided into the main space around the pavilion itself, the spaces east and west of the pavilion, and the space around Youngjeonggak. The original form of Munam Pavilion was a simple composition of the pavilion and Munam rock, so direct entry appears to have been possible at the time of construction, unlike the current entrance. Fourth, the spatial composition comprises vegetation, such as the Lagerstroemia indica trees of Sa-ri in Changnyeong; ornaments, such as the letters carved on the rocks; and structures, namely the pavilion. The vegetation around the building is classified as inside or outside the precincts: planting within the precincts was limited, while the outside area consists of a Lagerstroemia indica forest in front of the pavilion and a Pinus densiflora forest behind it. Of these, the large Lagerstroemia indica forest corresponds to natural heritage as a historical record of a rare species resource associated with the builder. The letters carved on the rocks mark the boundary of the space close to the pavilion and are ornaments associated with the builder; those in front of the pavilion are a rare case of inscriptions made sequentially, in a constant direction and order, as acts of record by the family to honor its achievements. Fifth, 'the eight famous spots of Munam' divide into landscape elements unrelated to bearing and landscape elements related to bearing. The elements unrelated to bearing are the Lagerstroemia indica trees of Sa-ri, the Pinus densiflora forest behind the pavilion, Okcheon valley, Gwanryongsa temple, and Daeheungsa temple. The elements related to absolute bearing are Daeheungsa temple, Hwawangsan mountain, the Kye-sung stream, and Yeongchwisan mountain; those related to relative bearing are Gwanryongsa temple, Yeongchwisan mountain, and Gongjigi hill on the Kye-sung stream. The Lagerstroemia indica trees of Sa-ri, the Pinus densiflora forest behind the pavilion, the Kye-sung stream, and Okcheon valley still exist; the remaining landscape elements are difficult to confirm because they are generic elements whose targets and locations cannot be reliably identified. Munam Pavilion was built by Shin-cho (辛礎) as a return to nature, and records relating to its construction remain, such as the Munamzip (聞巖集) and the Munamchungueirok (聞巖忠義錄). Given its distinctive siting, the archival culture preserved in the rock inscriptions and the large Lagerstroemia indica forest, and the eight famous spots, its cultural landscape elements can be assumed to remain.

A Conceptual Review of the Transaction Costs within a Distribution Channel (유통경로내의 거래비용에 대한 개념적 고찰)

  • Kwon, Young-Sik;Mun, Jang-Sil
    • Journal of Distribution Science
    • /
    • v.10 no.2
    • /
    • pp.29-41
    • /
    • 2012
  • This paper undertakes a conceptual review of transaction costs to broaden understanding of the transaction cost analysis (TCA) approach. More than 40 years have passed since Coase's fundamental insight that transaction, coordination, and contracting costs must be considered explicitly in explaining the extent of vertical integration. Coase (1937) forced economists to identify previously neglected constraints on the trading process to foster efficient intrafirm, rather than interfirm, transactions. The transaction cost approach to the study of economic organization regards transactions as the basic units of analysis and holds that understanding transaction cost economizing is central to organizational study. The approach applies to determining efficient boundaries, as between firms and markets, and to the internal organization of transactions, including the design of employment relations. TCA, developed principally by Oliver Williamson (1975, 1979, 1981a), blends institutional economics, organizational theory, and contract law. Further progress in transaction cost research awaits the identification of the critical dimensions in which transaction costs differ and an examination of the economizing properties of alternative institutional modes for organizing transactions. The crucial investment distinction is: to what degree are transaction-specific (non-marketable) expenses incurred? Unspecialized items pose few hazards, since buyers can turn to alternative sources and suppliers can sell output intended for one order to other buyers. Non-marketability problems arise when specific parties' identities have important cost-bearing consequences; transactions of this kind are labeled idiosyncratic. The summarized results of the review are as follows. First, firms' distribution decisions often prompt examination of the make-or-buy question: should a marketing activity be performed within the organization by company employees or contracted to an external agent? Second, manufacturers introducing an industrial product to a foreign market face a difficult decision: should the product be marketed primarily by captive agents (the company sales force and distribution division) or by independent intermediaries (outside sales agents and distributors)? Third, the authors develop a theoretical extension to the basic transaction cost model by combining insights from various theories with the TCA approach. Fourth, other such extensions are likely required for the general model to be applied to different channel situations; it is naive to assume the basic model applies across markedly different channel contexts without modifications and extensions. Although this study contributes to scholarly research, it is limited by several factors. First, the theoretical perspective of TCA has attracted considerable recent interest in the area of marketing channels; the analysis aims to match the properties of efficient governance structures with the attributes of the transaction. Second, empirical evidence on TCA's basic propositions is sketchy: apart from Anderson's (1985) study of the vertical integration of the selling function and John's (1984) study of opportunism by franchised dealers, virtually no marketing studies involving the constructs implicated in the analysis have been reported. We hope, therefore, that further research will clarify distinctions between the different aspects of specific assets. Another important line of future research is the integration of efficiency-oriented TCA with organizational approaches that emphasize the conceptual definition of specific assets and industry structure. Finally, research on transaction costs, uncertainty, opportunism, and switching costs is critical to future study.

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.119-133
    • /
    • 2020
  • Fama asserted that in an efficient market we cannot devise a trading rule that consistently outperforms average stock market returns. This study suggests a machine learning algorithm to improve the trading performance of an intraday short volatility strategy that exploits the asymmetric volatility spillover effect, and analyzes the resulting improvement. Generally, stock market volatility is negatively related to stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: rises and falls in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index, and V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200, and documented a strong volatility spillover effect from the VIX to the V-KOSPI 200. Interestingly, the spillover was indeed asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected in the opening volatility, and its influence lasts until the Korean market close. If the stock market were efficient, there would be no reason for an asymmetric volatility spillover effect to exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous spillover pattern, we analyzed an intraday short volatility selling (SVS) strategy that sells the Korean volatility market short in the morning after the US volatility market closes down and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, with a percent profitable of 68%, and showed a higher average annual return of 129% relative to the benchmark's 33%. Its maximum drawdown (MDD) of -41% is smaller than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08; since the Sharpe ratio is calculated as return divided by risk, it considers return and risk simultaneously, and a high value indicates good performance when comparing strategies with different risk-return structures. Real-world trading incurs trading costs, including brokerage and slippage; when these are considered, the performance difference between average annual returns of 76% and -10% becomes clear. To improve the suggested strategy's performance, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the VK open return at day t; the output is the up/down classification of the VK open-to-close return at day t. The training period is 2008 to 2014 and the testing period 2015 to 2018; the kernel functions are the linear, radial basis, and polynomial kernels. We suggest a modified short volatility (m-SVS) strategy that sells the VK in the morning when the SVM output is Down and takes no position when the output is Up. The trading performance improved remarkably: over the 5-year testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy. Its annual return of 123% is higher than that of the SVS strategy, and the risk factor, MDD, also improved significantly, from -41% to -29%.
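  The SVM classification step lends itself to a compact sketch. The following is a minimal, hypothetical illustration (not the authors' code) of training an SVM on the three stated inputs to emit the Up/Down signal; the data frame and its column names are placeholder assumptions filled with random numbers.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data; the study's actual VIX / V-KOSPI 200 series are not included.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "vix_cc": rng.normal(size=2500),   # VIX close-to-close return, day t-1
    "vix_oc": rng.normal(size=2500),   # VIX open-to-close return, day t-1
    "vk_open": rng.normal(size=2500),  # V-KOSPI 200 open return, day t
    "vk_oc": rng.normal(size=2500),    # V-KOSPI 200 open-to-close return, day t
})
X = df[["vix_cc", "vix_oc", "vk_open"]].to_numpy()
y = (df["vk_oc"] > 0).astype(int)      # 1 = Up, 0 = Down, at day t

split = 1500                           # earlier period trains, later period tests
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:split], y[:split])

# m-SVS rule: sell volatility in the morning only on days the SVM predicts Down.
signal = clf.predict(X[split:])
sell_days = signal == 0
```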

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models, such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It structurally avoids investment risk, so it is stable for managing large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only handles billions of examples in limited-memory environments but also trains much faster than traditional boosting methods, and it is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation step. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model; in doing so it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test, we used Korean stock market price data covering 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions with a moving-window method of 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and estimation error: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 of the 10 industry sectors. The reduced estimation errors increase the stability of the model and make it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in changing financial markets. This study, however, not only retains the advantages of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risk with a recent algorithm. Various studies address parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new, machine-learning-based method for doing so in an optimized asset allocation model. The study is thus meaningful in proposing an advanced artificial intelligence asset allocation model for fast-developing financial markets.
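  As a rough illustration of the proposed pipeline, the sketch below (hypothetical, not the authors' implementation) predicts each asset's next-period volatility with XGBRegressor, injects the predictions into the covariance estimate, and solves for equal-risk-contribution weights; the data, the naive lag-1 volatility feature, and all parameters are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(1020, 10))   # placeholder daily returns, 10 sectors

# Step 1: predict each asset's next-period volatility with XGBoost.
window = 1000
pred_vol = np.empty(10)
for i in range(10):
    r = returns[:window, i]
    rv = np.array([r[t - 20:t].std() for t in range(20, window)])  # 20-day realized vol
    model = XGBRegressor(n_estimators=200, max_depth=3)
    model.fit(rv[:-1].reshape(-1, 1), rv[1:])       # lag-1 feature -> next-step vol
    pred_vol[i] = model.predict(rv[-1:].reshape(1, -1))[0]

# Step 2: inject the predicted volatilities into the covariance estimate.
corr = np.corrcoef(returns[:window].T)
cov = np.outer(pred_vol, pred_vol) * corr

# Step 3: risk parity -- find weights whose risk contributions are all equal.
def objective(w):
    rc = w * (cov @ w) / (w @ cov @ w)              # relative risk contributions
    return np.sum((rc - 1.0 / len(w)) ** 2)

res = minimize(objective, np.full(10, 0.1), bounds=[(0.0, 1.0)] * 10,
               constraints=({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},))
weights = res.x
```

  Inverse-volatility weighting is sometimes used as a cheaper approximation, but the optimizer above targets equal risk contributions directly, which is the defining property of risk parity.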

A Study on Developing Customized Bolus using 3D Printers (3D 프린터를 이용한 Customized Bolus 제작에 관한 연구)

  • Jung, Sang Min;Yang, Jin Ho;Lee, Seung Hyun;Kim, Jin Uk;Yeom, Du Seok
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.27 no.1
    • /
    • pp.61-71
    • /
    • 2015
  • Purpose : 3D printers create three-dimensional models from digital blueprints. Based on this capability, it is feasible to fabricate a bolus that minimizes the air gap between the skin and the bolus in radiotherapy. This study compares and analyzes the air gap and target dose of a commercial 1 cm bolus against a customized bolus fabricated with a 3D printer. Materials and Methods : A RANDO phantom with a protruding tumor was imaged on a CT simulator. The CT DICOM file was converted into an STL file compatible with 3D printers. From this, a customized bolus molding box (maintaining a 1 cm thickness) was produced on a 3D printer, and paraffin was melted into it to form the customized bolus. The air gaps of the customized bolus and the commercial 1 cm bolus were measured, and the differences were used to compare $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95\%}$, and $V_{95\%}$ in treatment plans generated with Eclipse. Results : Production of the customized bolus took about 3 days. The total air gap volume averaged $3.9\,cm^3$ for the customized bolus versus $29.6\,cm^3$ for the commercial 1 cm bolus, so the 3D-printed customized bolus was more effective at minimizing the air gap. In the 6 MV photon plan, $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95\%}$, and $V_{95\%}$ of the GTV were 102.8%, 88.1%, 99.1%, 95.0%, and 94.4% with the customized bolus, and 101.4%, 92.0%, 98.2%, 95.2%, and 95.7% with the commercial 1 cm bolus, respectively. In the proton plan, the corresponding values were 104.1%, 84.0%, 101.2%, 95.1%, and 99.8% with the customized bolus, and 104.8%, 87.9%, 101.5%, 94.9%, and 99.9% with the commercial bolus. Thus, the treatment plans showed no significant difference between the customized and 1 cm boluses, although the normal tissue near the GTV received a relatively lower dose. Conclusion : The customized bolus fabricated with a 3D printer was effective in minimizing the air gap, especially over treatment areas with irregular surfaces. The air gap between the commercial bolus and the skin, however, was not large enough to change the target dose, whereas in the chest wall a dose decrease with the smaller air gap could be confirmed. Production of the customized bolus took about 3 days and was quite expensive; commercialization of 3D-printed customized boluses therefore requires low-cost 3D-printing materials suitable for use as a bolus.
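  The DICOM-to-STL conversion at the heart of this workflow can be sketched with open-source tools. The snippet below is a minimal, hypothetical example (not the authors' pipeline): it reads a CT series with SimpleITK, extracts the skin surface by thresholding with marching cubes, and writes an STL mesh. The directory path and the HU threshold are assumptions.

```python
# Requires: SimpleITK, scikit-image, numpy-stl
import numpy as np
import SimpleITK as sitk
from skimage import measure
from stl import mesh

# Hypothetical input directory holding the CT simulator's DICOM series.
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_dicom_dir"))
image = reader.Execute()

volume = sitk.GetArrayFromImage(image)     # voxel array, (z, y, x), in HU
spacing = image.GetSpacing()[::-1]         # reorder spacing to (z, y, x)

# Extract the skin surface near the air/tissue boundary (~ -300 HU, assumed).
verts, faces, _, _ = measure.marching_cubes(volume, level=-300, spacing=spacing)

# Pack the triangles into a binary STL for the printer or mold-design software.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
surface.vectors[:] = verts[faces]
surface.save("skin_surface.stl")
```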

A Study on Market Expansion Strategy via Two-Stage Customer Pre-segmentation Based on Customer Innovativeness and Value Orientation (고객혁신성과 가치지향성 기반의 2단계 사전 고객세분화를 통한 시장 확산 전략)

  • Heo, Tae-Young;Yoo, Young-Sang;Kim, Young-Myoung
    • Journal of Korea Technology Innovation Society
    • /
    • v.10 no.1
    • /
    • pp.73-97
    • /
    • 2007
  • R&D into future technologies should be conducted in conjunction with technological innovation strategies that are linked to corporate survival within a framework of information- and knowledge-based competitiveness, and future technology strategies should be pursued through open R&D organizations. The development of future technologies should not rest on forecasts alone but should take customer needs into account in advance and reflect them in the technologies or services developed. This research selects as segmentation variables customers' attitudes toward accepting future telecommunication technologies and their value orientation in everyday life, as these factors will have the greatest effect on demand for future telecommunication services, and uses them to segment the future telecom service market. The aim is to segment the market from the stage of technology R&D and to employ the results in formulating technology development strategies. Based on customer attitudes toward accepting new technologies, two groups were derived, and a hierarchical customer segmentation model was used to further segment each group by customer value orientation. A survey was conducted in June 2006 on 800 consumers aged 15 to 69, residing in Seoul and five other major South Korean cities, through one-on-one interviews. The sample was divided into two sub-groups by level of new-technology acceptance: one with high technology acceptance (39.4%) and another with comparatively lower acceptance (60.6%). Each of these was then divided into 5 smaller sub-groups (10 in total) through a second round of segmentation. The ten sub-groups were analyzed in detail, including general demographic characteristics; usage patterns of existing telecom services such as mobile, broadband internet, and wireless internet; ownership of computing or information devices; and intention to purchase one. Through these steps, we were able to show statistically that each of the 10 sub-groups responds to telecom services as an independent market. Through correspondence analysis, the target segments were positioned so as to facilitate the market entry of future telecommunication services, as well as their diffusion and transfer.
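  The two-stage pre-segmentation could be prototyped along the following lines. This is a hypothetical sketch with synthetic stand-ins for the survey items (the paper's actual instrument and clustering procedure are not reproduced here): respondents are split on technology acceptance first, then each group is clustered on value orientation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 800                                    # survey size used in the study
tech_acceptance = rng.normal(size=n)       # hypothetical acceptance score
values = rng.normal(size=(n, 4))           # hypothetical value-orientation items

# Stage 1: split into high (top ~39.4%) and low technology-acceptance groups.
high = tech_acceptance >= np.quantile(tech_acceptance, 0.606)
segments = np.empty(n, dtype=int)

# Stage 2: cluster each group separately on value orientation, 5 segments each.
for g, mask in enumerate([high, ~high]):
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(values[mask])
    segments[mask] = g * 5 + km.labels_    # 10 segments in total
```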

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study develops a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into model-design studies, mostly for predicting cardiovascular disease, and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease predominated; domestic studies were not much different, though they focused mainly on hypertension and diabetes. Since hyperlipidemia is, like hypertension and diabetes, a chronic disease of high importance, this study selected it as the disease to be analyzed and developed a prediction model using SVM and meta-learning algorithms, which are already known for excellent predictive power. We used the 2012 Korea Health Panel data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. From its 2012 inpatient, outpatient, emergency, and chronic disease data, 1,088 patients with hyperlipidemia were randomly selected, along with 1,088 randomly extracted non-patients, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression: of the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group, and six variables (age, BMI, education level, marital status, smoking status, and gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, the C4.5 decision tree algorithm was used; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used: the input variables selected for the SVM were six (age, marital status, education level, economic activity, smoking period, and physical activity status), and those selected for the artificial neural network were three (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, comparing classification performance using TP rate and precision. The main results are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network 86.7%. Second, classification models using the input variables selected by the stepwise method were slightly more accurate than models using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as inputs. With the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network 87.9%. Finally, stacking, the meta-learning algorithm proposed in this study, performed best when it used the predicted outputs of the SVM and MLP as input variables of an SVM meta-classifier. As a result, the classification accuracy for hyperlipidemia with stacking as the meta-learner was higher than with other meta-learning algorithms, although the predictive performance of the proposed meta-learning algorithm equals that of the best-performing single model, the SVM (88.6%). The limitations of this study are as follows. Although various variable selection methods were tried, most variables used were categorical dummy variables; with many categorical variables, the results may differ if continuous variables are used, because models such as decision trees suit categorical variables better than models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and in improving model accuracy by applying various variable selection techniques. We expect the proposed model to be effective for the prevention and management of hyperlipidemia.
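  The stacking setup the abstract describes, SVM and MLP as base learners whose predictions feed an SVM meta-classifier, maps directly onto scikit-learn's StackingClassifier. Below is a minimal sketch on synthetic placeholder data, not the study's Korea Health Panel variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for the 2,176-subject Korea Health Panel sample.
X, y = make_classification(n_samples=2176, n_features=6, random_state=0)

base_learners = [
    ("svm", make_pipeline(StandardScaler(), SVC())),
    ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))),
]
# The SVM and MLP outputs become the inputs of an SVC meta-classifier,
# with cross-validated base predictions to avoid leakage.
stack = StackingClassifier(estimators=base_learners, final_estimator=SVC(), cv=5)
stack.fit(X, y)
print(f"training accuracy: {stack.score(X, y):.3f}")
```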

Effect of the Angle of Ventricular Septal Wall on Left Anterior Oblique View in Multi-Gated Cardiac Blood Pool Scan (게이트 심장 혈액풀 스캔에서 심실중격 각도에 따른 좌전사위상 변화에 대한 연구)

  • You, Yeon Wook;Lee, Chung Wun;Seo, Yeong Deok;Choi, Ho Yong;Kim, Yun Cheol;Kim, Yong Geun;Won, Woo Jae;Bang, Ji-In;Lee, Soo Jin;Kim, Tae-Sung
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.1
    • /
    • pp.13-19
    • /
    • 2016
  • Purpose In order to calculate the left ventricular ejection fraction (LVEF) accurately, it is important to acquire the best septal view of the left ventricle in the multi-gated cardiac blood pool (GBP) scan. This study acquires the best septal view by measuring the angle of the ventricular septal wall ($\theta$) on enhanced CT and compares it with the conventional method using the left anterior oblique (LAO) 45 view. Materials and Methods From March to July 2015, we analyzed 253 patients who underwent both enhanced chest CT and a GBP scan in the department of nuclear medicine at the National Cancer Center. The angle ($\theta$) between the ventricular septum and an imaginary midline was measured on the transverse image of the enhanced chest CT, and patients for whom $\theta$ differed from 45 degrees by more than 10 degrees were included. GBP scans were acquired in both LAO 45 and LAO $\theta$ views, and the LVEFs measured in automated and manual region-of-interest modes (Auto-ROI and Manual-ROI) were analyzed. Results The mean±SD of $\theta$ across all 253 patients was $37.0\pm8.5^{\circ}$. Among them, 88 patients ($29.3\pm6.1^{\circ}$) differed from 45 degrees by more than ±10 degrees. In Auto-ROI mode, there was a statistically significant difference between LAO 45 and LAO $\theta$ (LVEF $62.0\pm6.6\%$ vs. $64.0\pm5.6\%$; P = 0.001). In Manual-ROI mode, the difference was also statistically significant (LVEF $66.7\pm7.2\%$ vs. $69.0\pm6.4\%$; P < 0.001). Intraclass correlation coefficients of both methods were more than 95%. Comparing Auto-ROI with Manual-ROI within each of LAO 45 and LAO $\theta$, there was no statistically significant difference. Conclusion We could measure the angle of the ventricular septal wall accurately using the transverse image of enhanced chest CT and apply it to LAO acquisition in the GBP scan. This may be an effective alternative method for acquiring the best septal view, and a significant difference between the conventional LAO 45 view and the LAO $\theta$ view was identified.
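  The view comparison above rests on paired statistics. As a small illustrative sketch (with simulated stand-in values, not the study's measurements), a paired t-test over the two views could be run as follows.

```python
import numpy as np
from scipy import stats

# Hypothetical paired LVEF values (%) for the same 88 patients in both views.
rng = np.random.default_rng(1)
lvef_lao45 = rng.normal(62.0, 6.6, size=88)
lvef_lao_theta = lvef_lao45 + rng.normal(2.0, 1.5, size=88)  # simulated shift

# Paired t-test, the comparison reported between LAO 45 and LAO theta.
t, p = stats.ttest_rel(lvef_lao45, lvef_lao_theta)
print(f"t = {t:.2f}, p = {p:.4f}")
```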

Development of Supplemental Equipment to Reduce Movement During Fusion Image Acquisition (융합영상(Fusion image)에서 움직임을 줄이기 위한 보정기구의 개발)

  • Cho, Yong Gwi;Pyo, Sung Jae;Kim, Bong Su;Shin, Chae Ho;Cho, Jin Woo;Kim, Chang Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.2
    • /
    • pp.84-89
    • /
    • 2013
  • Purpose: Patient movement during the long image acquisition time of a PET-CT (Positron Emission Tomography-Computed Tomography) fusion scan results in misregistration between the two images and greatly affects image quality and diagnosis. The arm support fixtures provided by medical device companies are not designed for patient convenience and safety, and horizontal and vertical arm and head movements during a PET/CT scan cause defects in the basal ganglia images and often require retaking. This study therefore aims to develop a patient-compensation device that minimizes head and arm movement during PET/CT scans, provides comfort and safety, and reduces retakes. Materials and Methods: From June to July 2012, 20 patients with no movement-related problems and another 20 patients who had difficulty raising their arms due to shoulder pain were recruited among those visiting the nuclear medicine department for a PET Torso scan. Using the Patient Holding System (PHS), different arm ranges of motion (ROM) ($25^{\circ}$, $27^{\circ}$, $29^{\circ}$, $31^{\circ}$, $33^{\circ}$, $35^{\circ}$) were applied to find the most comfortable angle and posture. The manufacturer was consulted on the permeability of the support material, and the comfort of velcro-type bands fixing the patient's head and arms was evaluated. To determine the retake frequency due to movement, PET Torso scan data collected from January to December 2012 were analyzed for retake cases before and after introduction of the compensation device. Results: Among the patients without movement disorders, 18 answered that the PHS with a $29^{\circ}$ arm ROM was most comfortable, while one each answered $27^{\circ}$ and $31^{\circ}$. Among the patients with shoulder pain, 15 picked $31^{\circ}$ as the most comfortable angle, 2 picked $33^{\circ}$, and 3 picked $35^{\circ}$. The handle was manufactured to be vertically adjustable. The permeability of the device material was verified, and the PHS and the compensation device were fixed with velcro-type bands to prevent device movement. A furrow was cut into the head rest to minimize head and neck movement, and fixing bands were attached for the head, wrist, forearm, and upper arm to limit movement. The retake frequency of PET Torso scans due to patient movement was 11.06% (191 cases/1,808 patients) before the movement control device was used and 2.65% (48 cases/1,732 patients) afterwards, a reduction of 8.41 percentage points. Conclusion: Recent change and innovation in the medical environment have made expensive medical imaging common, and providing differentiated services for customers is essential. To secure patient comfort and safety during PET/CT scans, ergonomic patient-compensation devices are needed. This study therefore manufactured a patient-compensation device with an ergonomic, vertically adjustable ROM suited to the patient's body shape and condition during PET Torso scans. Defects in the basal ganglia images due to arm movement were reduced, and retaking decreased.

Relationships on Magnitude and Frequency of Freshwater Discharge and Rainfall in the Altered Yeongsan Estuary (영산강 하구의 방류와 강우의 규모 및 빈도 상관성 분석)

  • Rhew, Ho-Sang;Lee, Guan-Hong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.16 no.4
    • /
    • pp.223-237
    • /
    • 2011
  • Intermittent freshwater discharge has a critical influence on the biophysical environment and ecosystems of the Yeongsan Estuary, where the estuary dam has cut off the continuous mixing of saltwater and freshwater. Though freshwater discharge is controlled by humans, the extreme events are mainly driven by heavy rainfall in the river basin, and their impacts vary with magnitude and frequency. This research evaluates the magnitude and frequency of extreme freshwater discharges and establishes magnitude-frequency relationships between basin-wide rainfall and freshwater inflow. Daily discharge and daily basin-averaged rainfall from Jan 1, 1997 to Aug 31, 2010 were used to determine the relations between discharge and rainfall. Consecutive daily discharges were grouped into independent events using a well-defined event-separation algorithm, and partial duration series were extracted to obtain a proper probability distribution for extreme discharges and the corresponding rainfall events. Extreme discharge events over the threshold of 133,656,000 $m^3$ number 46 over the 13.7 years, following a Weibull distribution with k = 1.4. The 3-day accumulated rainfall ending one day before the peak discharge (the 1day-before-3day-sum rainfall) was chosen as the control variable for discharge, because its magnitude correlates best with that of the extreme discharge events. The minimum corresponding 1day-before-3day-sum rainfall, 50.98 mm, was initially set as the threshold for selecting discharge-inducing rainfall cases; however, the number of rainfall cases selected this way exceeds the number of extreme discharge events. Canonical discriminant analysis indicates that water level above the target level (-1.35 m EL.) usefully divides the 1day-before-3day-sum rainfall cases into discharge-induced and non-discharge ones, and shows that a newly set threshold of 104 mm separates the two cases without error. The magnitude-frequency relationships between rainfall and discharge were then established with the newly selected 1day-before-3day-sum rainfalls: $D = 1.111\times10^{8} + 1.677\times10^{6}\,\overline{r_{3day}}$ ($\overline{r_{3day}} \geq 104$, $R^2 = 0.459$), $T_d = 1.326\,T_{r3}^{0.683}$, and $T_d = 0.117\exp[0.0155\,\overline{r_{3day}}]$, where $D$ is the discharge volume, $\overline{r_{3day}}$ the 1day-before-3day-sum rainfall, and $T_{r3}$ and $T_d$ the return periods of the 1day-before-3day-sum rainfall and the freshwater discharge, respectively. These relations provide a framework for evaluating the effects of freshwater discharge on estuarine flow structure, water quality, and ecosystem responses from the perspective of magnitude and frequency.
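  The fitted relations above are straightforward to apply numerically. A minimal sketch evaluating the reported regression and return-period formulas (coefficients hard-coded from the abstract; only valid within the stated range):

```python
import numpy as np

def discharge_volume(r3day):
    """Discharge volume D (m^3) from the 1day-before-3day-sum rainfall (mm);
    the regression is valid for r3day >= 104 mm (R^2 = 0.459)."""
    return 1.111e8 + 1.677e6 * np.asarray(r3day)

def discharge_return_period(t_r3):
    """Return period of discharge T_d (years) from that of rainfall T_r3."""
    return 1.326 * np.asarray(t_r3) ** 0.683

# Example: a 150 mm 3-day rainfall one day before the discharge peak.
print(discharge_volume(150.0))        # ~3.63e8 m^3
print(discharge_return_period(2.0))   # ~2.13 years
```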