Performance Evaluation of Siemens CTI ECAT EXACT 47 Scanner Using NEMA NU2-2001 (NEMA NU2-2001을 이용한 Siemens CTI ECAT EXACT 47 스캐너의 표준 성능 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.3
    • /
    • pp.259-267
    • /
    • 2004
  • Purpose: NEMA NU2-2001 was proposed as a new standard for the performance evaluation of whole-body PET scanners. In this study, the system performance of the Siemens CTI ECAT EXACT 47 PET scanner, including spatial resolution, sensitivity, scatter fraction, and count rate performance in 2D and 3D mode, was evaluated using this new standard method. Methods: The ECAT EXACT 47 is a BGO crystal based PET scanner covering an axial field of view (FOV) of 16.2 cm. Retractable septa allow 2D and 3D data acquisition. All PET data were acquired according to the NEMA NU2-2001 protocols (coincidence window: 12 ns, energy window: $250{\sim}650$ keV). For the spatial resolution measurement, an F-18 point source was placed at the center of the axial FOV and at one fourth of the axial FOV from the center, at transaxial positions of (a) x=0, y=1 cm, (b) x=0, y=10 cm, and (c) x=10, y=0 cm, where x and y are the transaxial horizontal and vertical directions and z is the scanner's axial direction. Images were reconstructed using FBP with a ramp filter without any post-processing. To measure the system sensitivity, the NEMA sensitivity phantom filled with F-18 solution and surrounded by $1{\sim}5$ aluminum sleeves was scanned at the center of the transaxial FOV and at 10 cm offset from the center. Attenuation-free values of sensitivity were estimated by extrapolating the data to zero wall thickness. The NEMA scatter phantom, 70 cm in length, was filled with F-18 or C-11 solution (2D: 2,900 MBq, 3D: 407 MBq), and coincidence count rates were measured for 7 half-lives to obtain the noise equivalent count rate (NECR) and scatter fraction. We confirmed that the dead time loss of the last frame was below 1%. The scatter fraction was estimated by averaging the true-to-background (scatter+random) ratios of the last 3 frames, in which the fraction of the random rate is negligibly small. Results: Axial and transverse resolutions at 1 cm offset from the center were 0.62 and 0.66 cm (FBP in 2D and 3D) and 0.67 and 0.69 cm (FBP in 2D and 3D), respectively. Axial, transverse radial, and transverse tangential resolutions at 10 cm offset from the center were 0.72 and 0.68 cm, 0.63 and 0.66 cm, and 0.72 and 0.66 cm (FBP in 2D and 3D), respectively. Sensitivity values were 708.6 (2D) and 2,931.3 (3D) counts/sec/MBq at the center, and 728.7 (2D) and 3,398.2 (3D) counts/sec/MBq at 10 cm offset from the center. Scatter fractions were 0.19 (2D) and 0.49 (3D). Peak true count rate and NECR were 64.0 kcps at 40.1 kBq/mL and 49.6 kcps at 40.1 kBq/mL in 2D, and 53.7 kcps at 4.76 kBq/mL and 26.4 kcps at 4.47 kBq/mL in 3D. Conclusion: The information about the performance of the CTI ECAT EXACT 47 PET scanner reported in this study will be useful for the quantitative analysis of data and for determining optimal image acquisition protocols for this widely used scanner in clinical and research applications.
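
A minimal sketch of how the NECR and scatter fraction figures reported above are conventionally computed from per-frame true (T), scatter (S), and random (R) count rates. This is illustrative Python, not the authors' analysis; the frame values are placeholders, and some protocols use 2R rather than R in the NECR denominator.

```python
# Illustrative only: NEMA NU2-style NECR and scatter fraction from per-frame
# true (T), scatter (S), and random (R) count rates.  Values are placeholders.

frames = [
    # (true_cps, scatter_cps, random_cps), low-activity frames last
    (50_000.0, 12_000.0, 30_000.0),
    (20_000.0, 4_800.0, 5_000.0),
    (8_000.0, 1_900.0, 800.0),
    (3_200.0, 760.0, 130.0),
]

def necr(t, s, r):
    """One common definition: NECR = T^2 / (T + S + R)."""
    return t * t / (t + s + r)

# Scatter fraction SF = S / (T + S), averaged over the last frames where
# the random rate is negligible (as described in the abstract).
last = frames[-3:]
scatter_fraction = sum(s / (t + s) for t, s, _ in last) / len(last)

peak_necr = max(necr(t, s, r) for t, s, r in frames)
print(f"scatter fraction ~ {scatter_fraction:.2f}, peak NECR ~ {peak_necr:.0f} cps")
```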

Dosimetry of the Low Fluence Fast Neutron Beams for Boron Neutron Capture Therapy (붕소-중성자 포획치료를 위한 미세 속중성자 선량 특성 연구)

  • Lee, Dong-Han;Ji, Young-Hoon;Lee, Dong-Hoon;Park, Hyun-Joo;Lee, Suk;Lee, Kyung-Hoo;Suh, So-Heigh;Kim, Mi-Sook;Cho, Chul-Koo;Yoo, Seong-Yul;Yu, Hyung-Jun;Gwak, Ho-Shin;Rhee, Chang-Hun
    • Radiation Oncology Journal
    • /
    • v.19 no.1
    • /
    • pp.66-73
    • /
    • 2001
  • Purpose: For research into Boron Neutron Capture Therapy (BNCT), fast neutrons generated from the MC-50 cyclotron at the Korea Cancer Center Hospital, with a maximum energy of 34.4 MeV, were moderated by 70 cm of paraffin, and their dose characteristics were then investigated. Using these results, we hope to establish a protocol for dose measurement of epi-thermal neutrons, to provide a basis for the dose characteristics of epi-thermal neutrons emitted from a nuclear reactor, and to examine the feasibility of accelerator-based BNCT. Method and Materials: To measure the absorbed dose and dose distribution of the fast neutron beams, we used a Unidos 10005 (PTW, Germany) electrometer with IC-17 (Far West, USA), IC-18, and EIC-1 ion chambers made of A-150 tissue-equivalent plastic, and an IC-17M ion chamber made of magnesium for the gamma dose. These chambers were flushed with tissue-equivalent gas and argon gas, respectively, at a flow rate of 5 cc per minute. Using the Monte Carlo N-Particle (MCNP) code, a transport program for mixed neutron, photon, and electron fields, the two-dimensional dose and energy fluence distributions were calculated, and these results were compared with the measured results. Results: The absorbed dose of the fast neutron beams was $6.47\times10^{-3}$ cGy per 1 MU at the 4 cm depth of the water phantom, which is assumed to be the effective depth for BNCT. The magnitude of the gamma contamination intermingled with the fast neutron beams was $65.2{\pm}0.9\%$ at the same depth. In the depth dose distribution in water, the neutron dose decreased linearly and the gamma dose decreased exponentially with increasing depth. The factor expressing the energy level, $D_{20}/D_{10}$, of the total dose was 0.718. Conclusion: Through direct measurement using the two ion chambers made of different wall materials and computer calculation of the isodose distribution using the MCNP simulation method, we determined the dose characteristics of low fluence fast neutron beams. If a power supply and target material capable of generating high voltage and current are developed, and the gamma contamination is reduced by lead or bismuth, accelerator-based BNCT may become feasible.
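
The two-chamber measurement mentioned in the conclusion is conventionally written as a pair of response equations in mixed-field dosimetry; the following is a generic sketch of that formalism, with generic symbols rather than the paper's own notation:

\[
M_{\mathrm{TE}} = k_{\mathrm{TE}}\,D_n + h_{\mathrm{TE}}\,D_\gamma, \qquad
M_{\mathrm{Mg}} = k_{\mathrm{Mg}}\,D_n + h_{\mathrm{Mg}}\,D_\gamma,
\]

where $M_{\mathrm{TE}}$ and $M_{\mathrm{Mg}}$ are the calibrated readings of the tissue-equivalent (A-150) and magnesium chambers, and $k$ and $h$ denote their relative neutron and photon sensitivities. Solving this 2×2 system gives the neutron dose $D_n$ and the gamma dose $D_\gamma$ separately, which is how a gamma contamination fraction such as the $65.2\%$ quoted above can be obtained.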

Performance Characteristics of 3D GSO PET/CT Scanner (Philips GEMINI PET/CT) (3차원 GSO PET/CT 스캐너(Philips GEMINI PET/CT)의 특성 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Byeong-Il;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.4
    • /
    • pp.318-324
    • /
    • 2004
  • Purpose: The Philips GEMINI is a newly introduced whole-body GSO PET/CT scanner. In this study, the performance of the scanner, including spatial resolution, sensitivity, scatter fraction, and noise equivalent count rate (NECR), was measured using the NEMA NU2-2001 standard protocol and compared with the performance of LSO and BGO crystal scanners. Methods: The GEMINI is composed of the Philips ALLEGRO PET and MX8000 D multi-slice CT scanners. The PET scanner has 28 detector segments, each with an array of 29 by 22 GSO crystals ($4{\times}6{\times}20$ mm), covering an axial FOV of 18 cm. PET data to measure spatial resolution, sensitivity, scatter fraction, and NECR were acquired in 3D mode according to the NEMA NU2 protocols (coincidence window: 8 ns, energy window: $409{\sim}664$ keV). For the measurement of spatial resolution, images were reconstructed with FBP using a ramp filter and with an iterative reconstruction algorithm, 3D RAMLA. Data for the sensitivity measurement were acquired using the NEMA sensitivity phantom filled with F-18 solution and surrounded by $1{\sim}5$ aluminum sleeves, after we confirmed that the dead time loss did not exceed 1%. To measure NECR and scatter fraction, 1,110 MBq of F-18 solution was injected into a NEMA scatter phantom with a length of 70 cm, and a dynamic scan with 20-min frame duration was acquired for 7 half-lives. Oblique sinograms were collapsed into transaxial slices using the single slice rebinning method, and the true-to-background (scatter+random) ratio for each slice and frame was estimated. The scatter fraction was determined by averaging the true-to-background ratios of the last 3 frames, in which the dead time loss was below 1%. Results: Transverse and axial resolutions at 1 cm radius were (1) 5.3 and 6.5 mm (FBP) and (2) 5.1 and 5.9 mm (3D RAMLA). Transverse radial, transverse tangential, and axial resolutions at 10 cm radius were (1) 5.7, 5.7, and 7.0 mm (FBP) and (2) 5.4, 5.4, and 6.4 mm (3D RAMLA). Attenuation-free sensitivity values were 3,620 counts/sec/MBq at the center of the transaxial FOV and 4,324 counts/sec/MBq at 10 cm offset from the center. The scatter fraction was 40.6%, and the peak true count rate and NECR were 88.9 kcps at 12.9 kBq/mL and 34.3 kcps at 8.84 kBq/mL, respectively. These characteristics are better than those of the ECAT EXACT PET scanner with BGO crystal. Conclusion: The results of this field test demonstrate the high resolution, sensitivity, and count rate performance of this 3D PET/CT scanner with GSO crystal. The data provided here will be useful for comparative studies with other 3D PET/CT scanners using BGO or LSO crystals.
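
The "attenuation-free" sensitivity quoted above is obtained, in the NEMA NU2 sensitivity test, by measuring the count rate with an increasing number of aluminum sleeves and extrapolating to zero wall thickness. A minimal sketch of that log-linear extrapolation is shown below; the rates and thicknesses are placeholders, not the measured values.

```python
# Illustrative only: extrapolating measured sensitivity to zero sleeve-wall
# thickness, as in the NEMA NU2 sensitivity test.  Numbers are placeholders.
import numpy as np

wall_thickness_cm = np.array([0.125, 0.250, 0.375, 0.500, 0.625])  # cumulative sleeve wall
rate_cps_per_mbq = np.array([3400.0, 3200.0, 3010.0, 2830.0, 2660.0])

# Log-linear model: ln(R_j) = ln(R_0) - 2*mu*X_j; the intercept gives the
# attenuation-free sensitivity R_0 at zero wall thickness.
slope, intercept = np.polyfit(wall_thickness_cm, np.log(rate_cps_per_mbq), 1)
attenuation_free_sensitivity = np.exp(intercept)
print(f"attenuation-free sensitivity ~ {attenuation_free_sensitivity:.0f} cps/MBq")
```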

The Increased Expression of Gelatinolytic Proteases Due to Cigarette Smoking Exposure in the Lung of Guinea Pig (기니픽에서 흡연 노출에 의한 젤라틴 분해 단백 효소의 발현 양상에 관한 연구)

  • Kang, Min-Jong;Lee, Jae-Ho;Yoo, Chul-Gyu;Lee, Choon-Taek;Chung, Hee-Soon;Seo, Jeong-Wook;Kim, Young-Whan;Han, Sung-Koo;Shim, Young-Soo
    • Tuberculosis and Respiratory Diseases
    • /
    • v.50 no.4
    • /
    • pp.426-436
    • /
    • 2001
  • Background: Chronic obstructive pulmonary disease (COPD) is one of the major contributors to morbidity and mortality in the adult population. Cigarette smoking (CS) is undoubtedly the single most important factor in the pathogenesis of COPD; however, its mechanism is unclear. The current hypothesis regarding the pathogenesis of COPD postulates that an imbalance between proteases and antiproteases leads to the destructive changes in the lung parenchyma. This study had two aims: first, to evaluate the effect of CS exposure on histologic changes of the lung parenchyma, and second, to evaluate the effect of CS exposure on the expression of gelatinolytic enzymes in BAL fluid cells in guinea pigs. Methods: Two groups of five guinea pigs were exposed to the whole smoke of 20 commercial cigarettes per day, 5 hours/day, 5 days/week, for 6 weeks and 12 weeks, respectively, using a smoking apparatus. Five age-matched guinea pigs exposed to room air were used as controls. Five or more sections were examined microscopically (${\times}400$), and the number of cells infiltrating the alveolar wall was counted to evaluate the effect of CS exposure on the histologic changes of the lung parenchyma; statistical significance was analyzed by a linear regression method. To evaluate the expression of gelatinolytic enzymes in intraalveolar cells, BAL fluid was obtained and the intraalveolar cells were separated by centrifugation (500 g for 10 min at $4^{\circ}C$). Two sets of culture plates were loaded with $1{\times}10^6$ intraalveolar cells. One plate contained 0.1 mM EDTA, an inhibitor of matrix metalloproteases (MMPs), and the other plate had no EDTA. Both plates were incubated for 48 hours at $37^{\circ}C$. After incubation, gelatinolytic protease expression in the supernatants was analyzed by gelatin zymography. Results: At the end of CS exposure, the level of blood carboxy-Hb had increased significantly (4.1 g/dl in the control group; 24 g/dl immediately after CS exposure; 18 g/dl 30 min after CS exposure; 15 g/dl 1 hour after CS exposure). Alveolar inflammatory cells were identified in the CS-exposed guinea pigs. The number of alveolar wall cells observed per microscopic field (${\times}400$) was $121.4{\pm}7.2$, $158.0{\pm}20.2$, and $196.8{\pm}32.8$ in the control, 6-week, and 12-week groups, respectively. The extent of inflammatory cellular infiltration of the lung parenchyma showed a statistically significant linear relationship with the duration of CS exposure (p=0.001, $r^2=0.675$). Several types of gelatinolytic enzymes were expressed in the intraalveolar cells of CS-exposed guinea pigs, some of which were inhibited by EDTA; gelatinolytic enzymes were not expressed in the control group. Conclusion: CS exposure increases inflammatory cellular infiltration of the alveolar wall and the expression of gelatinolytic proteases in guinea pigs, and EDTA inhibits some of these gelatinolytic proteases. These findings suggest the possibility that CS exposure may increase MMP expression in the lungs of guinea pigs.
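
The linear-regression result quoted above (p=0.001, $r^2=0.675$) can be reproduced in spirit with a few lines of code. The sketch below regresses only the three reported group means on exposure duration for illustration; the actual analysis presumably used per-animal counts, so the statistics here are not the study's.

```python
# Illustrative only: linear regression of alveolar-wall cell counts on
# smoke-exposure duration.  Only the reported group means are used here;
# the study presumably regressed per-animal data.
from scipy import stats

weeks = [0, 6, 12]
mean_cells_per_field = [121.4, 158.0, 196.8]

result = stats.linregress(weeks, mean_cells_per_field)
print(f"slope = {result.slope:.1f} cells/field per week, r^2 = {result.rvalue**2:.3f}")
```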

$Hg^{2+}$-promoted Aquation and Chelation of cis-[Co(en)$_2$(L)Cl]$^{2+}$ (L = Amines) Complexes ($Hg^{2+}$에 의한 cis-[Co(en)$_2$(L)Cl]$^{2+}$ (L = 아민류) 착물의 아쿠아화 및 킬레이트화 반응)

  • Chang Eon Oh;Doo Cheon Yoon;Bok Jo Kim;Myung Ki Doh
    • Journal of the Korean Chemical Society
    • /
    • v.36 no.4
    • /
    • pp.565-578
    • /
    • 1992
  • It is suggested, on the basis of kinetic data, circular dichroism spectra, product analyses, and the values of m (Grunwald-Winstein plot) using Y (solvent ionizing power) in aqueous solution and in mixed aqueous-organic solvents, that the Hg$^{2+}$-promoted reaction of a series of cis-[Co(en)$_2$(L)Cl]$^{2+}$ (en = 1,2-diaminoethane) with L = NH$_3$, NH$_2$CH$_3$, glyOC$_2$H$_5$, glyOCH$_3$, dl-alaOC$_2$H$_5$, NH$_2$CH$_2$CONH$_2$, and NH$_2$CH$_2$CN proceeds by a dissociative interchange (I$_d$) mechanism. It was found that chloride replacement by water (aquation) occurs for the series with L = NH$_3$ and NH$_2$CH$_3$, whereas chelation of the ligand L to Co(III) occurs for the series with L = glyOC$_2$H$_5$, glyOCH$_3$, dl-alaOC$_2$H$_5$, NH$_2$CH$_2$CONH$_2$, and NH$_2$CH$_2$CN. The rate constants for the Hg$^{2+}$-induced reactions of the series, except cis-[Co(en)$_2$(NH$_2$CH$_2$CN)Cl]$^{2+}$, increased with increasing ethanol content in mixed water-ethanol solvents. In mixed water-30% organic solvents, the rate constants of the series except cis-[Co(en)$_2$(NH$_2$CH$_2$CN)Cl]$^{2+}$ also followed the order 30% 2-propanol-water > 30% ethanol-water > water; for cis-[Co(en)$_2$(NH$_2$CH$_2$CN)Cl]$^{2+}$, however, this order was reversed. The rate constants of the series with L = NH$_3$ and NH$_2$CH$_3$ were related to the ligand field parameter (${\Delta}$), but those of the series with L = glyOC$_2$H$_5$, glyOCH$_3$, dl-alaOC$_2$H$_5$, NH$_2$CH$_2$CONH$_2$, and NH$_2$CH$_2$CN were not. The reaction between the series and Hg$^{2+}$ in aqueous media containing NO$_3^-$ was also investigated; the presence of NO$_3^-$ did not alter the mechanism, only the rate.
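
The m values referred to above come from the Grunwald-Winstein correlation, which in its usual single-parameter form reads

\[
\log\!\left(\frac{k}{k_0}\right) = m\,Y,
\]

where $k$ and $k_0$ are the rate constants in a given solvent and in the reference solvent (conventionally 80% aqueous ethanol), $Y$ is the solvent ionizing power, and the slope $m$ is taken as a measure of how dissociative the transition state is, with values approaching 1 indicating a limiting dissociative pathway.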

DC Resistivity method to image the underground structure beneath river or lake bottom (하저 지반특성 규명을 위한 전기비저항 탐사)

  • Kim Jung-Ho;Yi Myeong-Jong;Song Yoonho;Cho Seong-Jun;Lee Seong-Kon;Son Jeongsul
    • Korean Society of Earth and Exploration Geophysicists: Conference Proceedings
    • /
    • 2002.09a
    • /
    • pp.139-162
    • /
    • 2002
  • Since weak zones or geological lineaments are likely to be eroded, weak zones may develop beneath rivers, and a careful evaluation of ground conditions is important when constructing structures that pass beneath a river. DC resistivity surveys, however, have seldom been applied to the investigation of water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because of the water layer overlying the underground structure to be imaged. Through numerical modeling and the analysis of case histories, we studied how to carry out resistivity surveys in water-covered areas, from the characteristics of the measured data, through the data acquisition method, to the interpretation method. We organized our discussion according to the location of the electrodes, i.e., floating on the water surface or installed on the water bottom, since the methods of data acquisition and interpretation vary depending on the electrode location. Through this study, we confirmed that the DC resistivity method can provide fairly reasonable subsurface images. It was also shown that installing electrodes on the water bottom gives subsurface images with much higher resolution than floating them on the water surface. Since data acquired in a water-covered area have much lower sensitivity to the underground structure than data acquired on land, and can be contaminated by higher noise, such as streaming potential, it is very important to select an acquisition method and electrode array that provide high signal-to-noise data as well as high resolving power. The method of installing electrodes on the water bottom is suitable for detailed surveys because of its much higher resolving power, whereas the floating-electrode method, especially the streamer DC resistivity survey, is suited to reconnaissance surveys owing to its very high speed of field work.
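
For readers unfamiliar with how a single resistivity datum is formed, the sketch below shows the generic four-electrode apparent-resistivity calculation. The geometric factor used is the standard half-space expression for electrodes on a flat ground surface and is only an illustration; as the abstract stresses, surveys over or under water require modeling that accounts for the water layer, and the electrode spacings and readings here are hypothetical.

```python
# Generic illustration (not the authors' processing): apparent resistivity
# from a four-electrode (A, B current; M, N potential) measurement using the
# standard half-space geometric factor.  Water-covered surveys need
# additional modeling of the water layer.
import math

def geometric_factor(am, bm, an, bn):
    """Half-space geometric factor K for surface electrodes (distances in metres)."""
    return 2.0 * math.pi / (1.0 / am - 1.0 / bm - 1.0 / an + 1.0 / bn)

def apparent_resistivity(voltage_v, current_a, am, bm, an, bn):
    """rho_a = K * V / I (ohm-metres)."""
    return geometric_factor(am, bm, an, bn) * voltage_v / current_a

# Example: dipole-dipole array, dipole length a = 5 m, separation index n = 2
# (hypothetical spacings, voltage, and current).
a, n = 5.0, 2
rho_a = apparent_resistivity(0.012, 0.5,
                             am=n * a, bm=(n + 1) * a,
                             an=(n + 1) * a, bn=(n + 2) * a)
print(f"apparent resistivity ~ {rho_a:.1f} ohm-m")
```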

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the growing penetration of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually being lowered, data analysis technology is spreading, and big data analysis is increasingly expected to be performed by the analysis demanders themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for different research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster; it is regarded as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This requirement causes long analysis times when topic modeling is applied to a large number of documents, and it creates a scalability problem: the processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large collection of documents is divided into sub-units, and topics are derived by repeating topic modeling for each unit. This method makes topic modeling on a large number of documents possible with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed at each location without combining all the documents to be analyzed. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, assuming the global topics are the ideal answer, the difference between a local topic and the corresponding global topic needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that addresses the two problems above. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, reduced global set) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. In addition, we verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire collection, and we also proposed a reasonable method for comparing the results of the two approaches.
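
The divide-and-conquer idea described above can be sketched roughly as follows. This is a deliberately simplified illustration using scikit-learn LDA, not the authors' implementation: the delegate-document selection (first document of each local set) and the topic mapping (cosine similarity of topic-word distributions) are naive stand-ins for the paper's actual procedures.

```python
# Simplified illustration (not the authors' implementation): run LDA on local
# sub-sets, build a reduced global set (RGS) from delegate documents, then map
# each local topic to its most similar RGS ("global") topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def lda_topics(docs, n_topics, vectorizer):
    dtm = vectorizer.transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(dtm)
    # Row-normalized topic-word distributions.
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

documents = ["stocks fell on rate fears", "central bank raises rates",
             "team wins championship game", "star player injured in match",
             "new phone released this week", "chip maker unveils processor"]

vectorizer = CountVectorizer().fit(documents)           # shared vocabulary
local_sets = [documents[0:2], documents[2:4], documents[4:6]]

# Delegate documents (here simply the first of each local set) form the RGS.
rgs = [local[0] for local in local_sets]
rgs_topics = lda_topics(rgs, n_topics=3, vectorizer=vectorizer)

# Map each local topic to the most similar RGS topic.
for i, local in enumerate(local_sets):
    local_topics = lda_topics(local, n_topics=2, vectorizer=vectorizer)
    mapping = cosine_similarity(local_topics, rgs_topics).argmax(axis=1)
    print(f"local set {i}: local topic -> RGS topic mapping = {mapping.tolist()}")
```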

Excavation of Kim Jeong-gi and Korean Archeology (창산 김정기의 유적조사와 한국고고학)

  • Lee, Ju-heun
    • Korean Journal of Heritage: History & Science
    • /
    • v.50 no.4
    • /
    • pp.4-19
    • /
    • 2017
  • Kim Jeong-gi (pen-name: Changsan, Mar. 31, 1930 - Aug. 26, 2015) made a major breakthrough in the history of cultural property excavation in Korea: In 1959, he began to develop an interest in cultural heritage after starting work as an employee of the National Museum of Korea. For about thirty years until he retired from the National Research Institute of Cultural Heritage in 1987, he devoted his life to the excavation of our country's historical relics and artifacts and compiled countless data about them. He continued striving to identify the unique value and meaning of our cultural heritage in universities and excavation organizations until he passed away in 2015. Changsan spearheaded all of Korea's monumental archeological excavations and research. He is widely known at home and abroad as a scholar of Korean archeology, particularly in the early years of its existence as an academic discipline. As such, he has had a considerable influence on the development of Korean archeology. Although his multiple activities and roles are meaningful in terms of the country's archaeological history, there are limits to his contributions nevertheless. The Deoksugung Palace period (1955-1972), when the National Museum of Korea was situated in Deoksugung Palace, is considered to be a time of great significance for Korean archeology, as relics with diverse characteristics were researched during this period. Changsan actively participated in archeological surveys of prehistoric shell mounds and dwellings, conducted surveys of historical relics, measured many historical sites, and took charge of photographing and drawing such relics. He put to good use all the excavation techniques that he had learned in Japan, while his countrywide archaeological surveys are highly regarded in terms of academic history as well. What particularly sets his perspectives apart in archaeological terms is the fact that he raised the possibility of underwater tombs in ancient times, and also coined the term "Haemi Culture" as part of a theory of local culture aimed at furthering understanding of Bronze Age cultures in Korea. His input was simply breathtaking. In 1969, the National Research Institute of Cultural Heritage (NRICH) was founded and Changsan was appointed as its head. Despite the many difficulties he faced in running the institute with limited financial and human resources, he gave everything he had to research and field studies of the brilliant cultural heritages that Korea has preserved for so long. Changsan succeeded in restoring Bulguksa Temple, and followed this up with the successful excavation of the Cheonmachong Tomb and the Hwangnamdaechong Tomb in Gyeongju. He then explored the Hwangnyongsa Temple site, Bunhwangsa Temple, and the Mireuksa Temple site in order to systematically evaluate the Buddhist culture and structures of the Three Kingdoms Period. We can safely say that the large excavation projects that he organized and carried out at that time not only laid the foundations for Korean archeology but also made significant contributions to studies in related fields. Above all, in terms of the developmental process of Korean archeology, the achievements he generated with his exceptional passion during the period are almost too numerous to mention, but they include his systematization of various excavation methods, cultivation of archaeologists, popularization of archeological excavations, formalization of survey records, and promotion of data disclosure. 
On the other hand, although this "Excavation King" devoted himself to excavations, kept precise records, and paid keen attention to every detail, he failed to overcome the limitations of his era in the process of defining the nature of cultural remains and interpreting historical sites and structures. Despite his many roles in Korean archeology, the fact that he left behind a controversy over the identity of the occupant of the Hwangnamdaechong Tomb remains a sore spot in his otherwise perfect reputation.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but such approaches can be vulnerable in practice because whether the discovered patterns are suitable for trading is a separate question. When a meaningful pattern is found, these studies locate points that match the pattern and then measure performance after n days, assuming a purchase at that point in time; since this approach calculates virtual returns, there can be large disparities with reality. Whereas existing research tries to find patterns with price-predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some patterns have been reported to have price predictability, there have been no performance reports from the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects a realistic situation because performance is measured assuming that both the buy and the sell orders were actually executed. We tested three ways of calculating the turning points, as sketched below. The first method, the minimum change rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second method, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases was far too large to search exhaustively for patterns with high success rates in this simulation, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize a stock portfolio rather than each individual stock, because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to form, but more volatility is not necessarily better.
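
As a rough illustration of the swing wave turning-point rule described above (not the authors' code): a bar is a peak if its high exceeds the highs of the n bars on each side, and a valley if its low is below the lows of the n bars on each side. The price series in the example is hypothetical.

```python
# Rough illustration (not the authors' code) of the swing wave turning-point
# rule: peak = high above the highs of the n bars on each side;
#       valley = low below the lows of the n bars on each side.
def swing_turning_points(highs, lows, n):
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        neighbor_highs = highs[i - n:i] + highs[i + 1:i + n + 1]
        neighbor_lows = lows[i - n:i] + lows[i + 1:i + n + 1]
        if highs[i] > max(neighbor_highs):
            peaks.append(i)
        if lows[i] < min(neighbor_lows):
            valleys.append(i)
    return peaks, valleys

# Hypothetical daily highs/lows, just to show the call.
highs = [10, 11, 13, 12, 11, 12, 14, 13, 12, 11]
lows  = [ 9, 10, 12, 11, 10, 11, 13, 12, 11, 10]
print(swing_turning_points(highs, lows, n=2))   # -> ([2, 6], [4])
```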

School Experiences and the Next Gate Path : An analysis of Univ. Student activity log (대학생의 학창경험이 사회 진출에 미치는 영향: 대학생활 활동 로그분석을 중심으로)

  • YI, EUNJU;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.149-171
    • /
    • 2020
  • The university period is when students make decisions about getting an actual job. As our society develops rapidly and to a high level, jobs are becoming diversified, subdivided, and specialized, and students' job preparation periods are getting longer and longer. This study analyzed the log data of college students to see how the various activities they experience inside and outside of school might influence employment. For this experiment, students' various activities were systematically classified, recorded as activity data, and divided into six core competencies (job reinforcement competency, leadership & teamwork competency, globalization competency, organizational commitment competency, job exploration competency, and autonomous implementation competency). The effect of the six competency levels on employment status (employed group vs. unemployed group) was then analyzed. As a result, it was confirmed that the difference in level between the employed group and the unemployed group was significant for all six competencies, so it can be inferred that activities at school are meaningful for employment. Next, in order to analyze the impact of the six competencies on the qualitative performance of employment, we conducted an ANOVA analysis after dividing each competency level into two groups (low and high) and creating six groups according to the range of first annual salary. Students with high levels of globalization competency, job exploration competency, and autonomous implementation competency were found to belong to higher annual salary groups. The theoretical contributions of this study are as follows. First, it connects the competencies that can be extracted from the school experience with competencies in the human resource management field, and adds the job exploration and autonomous implementation competencies that university students need for their own successful careers and lives. Second, the analysis was conducted with competency data measured from actual activities, together with outcome data collected from interviews and surveys. Third, it analyzed not only quantitative performance (employment rate) but also qualitative performance (annual salary level). The practical uses of this study are as follows. First, it can serve as a guide when establishing career development plans for college students: it is better to prepare for a job that can express one's strengths, based on an analysis of the world of work and of specific jobs, than to engage in an unplanned, unbalanced competition of accumulating excessive credentials. Second, those in charge of designing experiences for college students at organizations such as schools, businesses, local governments, and the national government can refer to the six competencies suggested in this study to design experiences that are useful to students and may motivate greater participation; in this way, a single event can bring mutual benefits to both the event designers and the students. Third, in the era of digital transformation, government policy makers who envision the balanced development of the country can craft policies that harness the curiosity and energy of college students toward that balanced development. A great deal of manpower is required to launch novel platform services that have not existed before or to digitize existing analog products, services, and corporate culture.
The activities of the current digital generation of college students are not only a catalyst for all industries, but are also highly beneficial and necessary for the students themselves in pursuing their own successful career development.
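
A minimal sketch of the kind of group comparison described above: a one-way ANOVA testing whether a competency score differs across first-salary bands. The scores below are made up solely to show the call; they are not the study's data, and the study compared several competencies across six salary groups rather than the single three-group example shown here.

```python
# Minimal sketch (not the authors' analysis): one-way ANOVA testing whether a
# competency score differs across first-annual-salary bands.  Scores are made up.
from scipy import stats

salary_band_1 = [62, 58, 65, 60, 59]   # competency scores of students in band 1
salary_band_2 = [66, 70, 64, 68, 71]
salary_band_3 = [74, 72, 77, 70, 75]

f_stat, p_value = stats.f_oneway(salary_band_1, salary_band_2, salary_band_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```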