• Title/Summary/Keyword: Graph


An Analysis on the Level of Evidence used in Gifted Elementary Students' Debate (초등과학 영재의 논증활동에서 사용된 증거의 수준 분석)

  • Cho, Hyun-Jun;Yang, Il-Ho;Lee, Hyo-Nyong;Song, Yun-Mi
    • Journal of The Korean Association For Science Education / v.28 no.5 / pp.495-505 / 2008
  • The purpose of this study was to analyze the level of evidence used in gifted elementary students' argumentation. The subjects were 15 fifth- and sixth-grade students selected from the Science Education Institute for Gifted Youth at K University. The argumentation task was given to the students two weeks in advance; the students then grouped themselves into affirmative and negative sides and took part in a two-hour debate. Their argumentation process was observed, recorded, and transcribed for analysis. The transcribed data were assigned protocol numbers in order and examined to identify the characteristics of the students' participation in the task. The evidence used in the argumentation was graded from Level 1 to Level 6 according to Perella's Hierarchy of Evidence, and the frequency rate of each level was expressed as a graph. More than 50% of the evidence the students used was at Level 1 or Level 2, regardless of whether they argued for or against the proposition. Their argumentation was weak, relying on low-level evidence such as personal experience, opinion, and another person's experience rather than objective evidence. On the other hand, students pointed out the lack of an opponent's evidence when they could not trust it. If one team asked the other to present more evidence and the other could not, the latter disregarded the question and turned to another topic. And when the opposing team rebutted with high-level evidence, the other team simply repeated its claim or evaded the rebuttal. The students tended to end the argument, with some interruptions, without reaching a shared conclusion. The results show that educational programs including scientific argumentation are needed for science-gifted elementary school students.

A Study of Psychometric Function Curve for Korean Standard Monosyllabic Word Lists for Preschoolers (KS-MWL-P) (한국표준 학령전기용 단음절어표 (Korean Standard Monosyllabic Word Lists for Preschoolers, KS-MWL-P)의 심리음향기능곡선 연구)

  • Shin, Hyun-Wook;Kim, Jin-Sook
    • The Journal of the Acoustical Society of Korea / v.28 no.6 / pp.534-541 / 2009
  • Word recognition tests (WRT) for children are useful for diagnosing the degree of communication disability, prescribing hearing instruments, planning aural rehabilitation and speech therapy, and determining the site of lesions. The Korean standard monosyllabic word lists for preschoolers (KS-MWL-P) were developed according to criteria given in the literature. However, the authors of KS-MWL-P suggested that more children should be included to verify the homogeneity of the lists using psychometric function curves, since only 8 children participated in the development process. The purpose of this study was to explore the homogeneity of KS-MWL-P, supplementing the limitations of the lists through psychometric analysis. The 100 monosyllabic KS-MWL-P words were administered with pictures to 23 preschoolers with normal hearing. Psychometric function curves were obtained by computing the recognition scores of each monosyllabic word at intensities from -10 to 40 dB HL, and the linear slopes between the 20% and 80% correct rates were analyzed. As a result, an s-shaped psychometric function curve was obtained, with the correct rate increasing with intensity, and no statistically significant differences appeared among the words and lists. The congruent graph shapes among the lists also indicated good homogeneity, and the average slopes of lists 1, 2, 3, and 4 were 4.48, 3.86, 4.65, and 4.50, respectively. The homogeneity was judged suitable because an analysis of variance showed no statistical significance among the lists (p>0.05). However, the slopes for the item subsets 1~10, 1~20, and 1~25 showed no ordering effect, with p-values of 0.93, 0.59, 0.91, and 0.70 for lists 1, 2, 3, and 4, respectively. Although KS-MWL-P assumed that the lower-numbered items were easier for testing younger ages, the results of this study could not support that assumption. Considering this, the items should be rearranged according to the slope analysis suggested by this study so that younger children are tested with easier items first. Otherwise, KS-MWL-P proved to be a useful clinical and rehabilitative tool for evaluating and training preschoolers.
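
For readers reproducing this kind of analysis, here is a minimal Python sketch (assuming numpy and scipy are available) of fitting an s-shaped psychometric function to word-recognition scores and reading off the slope between the 20% and 80% correct points. All data values below are hypothetical placeholders, not the KS-MWL-P results.

```python
# Fit a logistic psychometric function and estimate the 20%-80% slope.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x50, k):
    """S-shaped psychometric function: correct rate (0..1) vs. intensity (dB HL)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x50)))

intensity = np.arange(-10, 45, 5)                   # dB HL, the study's test range
correct = np.array([0.0, 0.02, 0.10, 0.30, 0.55,    # hypothetical correct rates
                    0.75, 0.90, 0.96, 0.99, 1.0, 1.0])

(x50, k), _ = curve_fit(logistic, intensity, correct, p0=[10.0, 0.3])

# Invert the logistic at 20% and 80% correct, then take the linear slope between them.
x20 = x50 - np.log(4) / k        # logistic(x20) = 0.2
x80 = x50 + np.log(4) / k        # logistic(x80) = 0.8
slope = (0.8 - 0.2) / (x80 - x20) * 100              # %/dB between the two points
print(f"20%: {x20:.1f} dB HL, 80%: {x80:.1f} dB HL, slope: {slope:.2f} %/dB")
```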

Establishment of Test Conditions and Interlaboratory Comparison Study of Neuro-2a Assay for Saxitoxin Detection (Saxitoxin 검출을 위한 Neuro-2a 시험법 조건 확립 및 실험실 간 변동성 비교 연구)

  • Youngjin Kim;Jooree Seo;Jun Kim;Jeong-In Park;Jong Hee Kim;Hyun Park;Young-Seok Han;Youn-Jung Kim
    • Journal of Marine Life Science / v.9 no.1 / pp.9-21 / 2024
  • Paralytic shellfish poisoning (PSP), caused by toxins including saxitoxin (STX), originates from harmful algae, and poisoning occurs when contaminated seafood is consumed. The mouse bioassay (MBA), a standard test method for detecting PSP, is being restricted in many countries because of its detection limit and animal welfare concerns. An alternative to the MBA is the Neuro-2a cell-based assay. This study aimed to establish test conditions for the Neuro-2a assay, including cell density, culture conditions, and STX treatment conditions, suited to the domestic laboratory environment. As a result, the initial cell density was set to 40,000 cells/well and the incubation time to 24 hours. Additionally, the concentration of ouabain and veratridine (O/V) was set to 500/50 μM, at which most cells died. In this study, we identified eight concentrations of STX, ranging from 368 to 47,056 fg/μl, which produced an s-shaped dose-response curve when treated with O/V. Through an inter-laboratory variability comparison of the Neuro-2a assay, we established five Quality Control Criteria to verify the appropriateness of the experiments and six Data Criteria (top and bottom OD, EC50, EC20, Hill slope, and R2 of the graph) to determine the reliability of the experimental data. The Neuro-2a assay conducted under the established conditions showed an EC50 value of approximately 1,800~3,500 fg/μl. The intra- and inter-laboratory variability comparison showed that the coefficients of variation (CVs) for the Quality Control and Data values ranged from 1.98% to 29.15%, confirming the reproducibility of the experiments. This study presented Quality Control Criteria and Data Criteria to assess the appropriateness of the experiments and confirmed the excellent repeatability and reproducibility of the Neuro-2a assay. To apply the Neuro-2a assay as an alternative method for detecting PSP in domestic seafood, it is essential to establish methods for extracting and quantifying toxins from seafood and to perform correlation analyses with the MBA and instrumental analysis methods.
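
The Data Criteria named above (top/bottom OD, EC50, EC20, Hill slope, R2) are the parameters of a four-parameter logistic dose-response fit. A minimal Python sketch of that fit follows, assuming scipy; the OD readings are hypothetical placeholders, with only the concentration range taken from the abstract.

```python
# Four-parameter logistic (Hill) fit for a Neuro-2a style dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Sigmoidal dose-response: optical density vs. STX concentration (fg/ul)."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

conc = np.geomspace(368, 47056, 8)                  # eight doses, fg/ul (from abstract)
od = np.array([1.05, 1.00, 0.85, 0.55, 0.30, 0.15, 0.10, 0.08])  # hypothetical ODs

p, _ = curve_fit(four_pl, conc, od, p0=[0.1, 1.0, 2000.0, 1.0], maxfev=10000)
bottom, top, ec50, hill = p

# EC20: concentration producing 20% of the maximal effect.
ec20 = ec50 * (0.20 / 0.80) ** (1.0 / hill)

residuals = od - four_pl(conc, *p)
r2 = 1 - np.sum(residuals**2) / np.sum((od - od.mean())**2)
print(f"EC50={ec50:.0f} fg/ul, EC20={ec20:.0f} fg/ul, Hill={hill:.2f}, R2={r2:.3f}")
```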

Distributional Characteristics of Fault Segments in Cretaceous and Tertiary Rocks from Southeastern Gyeongsang Basin (경상분지 남동부 일대의 백악기 및 제3기 암류에서 발달하는 단층분절의 분포특성)

  • Park, Deok-Won
    • The Journal of the Petrological Society of Korea / v.27 no.3 / pp.109-120 / 2018
  • The distributional characteristics of fault segments in Cretaceous and Tertiary rocks from the southeastern Gyeongsang Basin were derived. A total of 267 sets of linear fault segments were extracted from the curved fault lines delineated on the regional geological map. First, the directional angle (θ)-length (L) chart for the whole set of fault segments was made. From this chart, the general distribution pattern of fault segments was derived. The distribution curve in the chart was divided into four sections according to its overall shape. The NNE, NNW and WNW directions, corresponding to the peaks of these sections, indicate the Yangsan, Ulsan and Gaeum fault systems, respectively. The fault segment population shows a nearly symmetrical distribution with respect to the N19°E direction corresponding to the maximum peak. Second, the directional angle-frequency (N), mean length (Lm), total length (Lt) and density (ρ) chart was made. From this chart, the whole domain was divided into 19 domains in terms of the phases of the distribution curve. The directions corresponding to the peaks of these domains suggest the directions of the representative stresses that acted on the rock body. Third, length-cumulative frequency graphs for the 18 sub-populations were made. In these graphs, the value of the exponent (λ) increases in the clockwise direction (N10~20°E → N50~60°E) and the counterclockwise direction (N10~20°W → N50~60°W), while the width of the length distribution and the mean length decrease. The chart for these sub-populations, which have mutually different evolution characteristics, reveals a cross-section of the evolutionary process. Fourth, the general distribution chart for the 18 graphs was made, and the graphs were classified into five groups (A~E) according to their distribution area. The lengths of fault segments increase in the order of group E (N80~90°E · N70~80°E · N80~90°W · N50~60°W · N30~40°W · N40~50°W) < D (N70~80°W · N60~70°W · N60~70°E · N50~60°E · N40~50°E · N0~10°W) < C (N20~30°W · N10~20°W) < B (N0~10°E · N30~40°E) < A (N20~30°E · N10~20°E). In particular, the forms of the graphs gradually transition from a uniform distribution to an exponential one. Lastly, the values of the six parameters for fault-segment length were divided into five groups. Among the six parameters, the mean length and the length of the longest fault segment decrease in the order of group III (N10°W~N20°E) > IV (N20~60°E) > II (N10~60°W) > I (N60~90°W) > V (N60~90°E). The frequency, longest length, total length, mean length and density of the fault segments belonging to group V show the lowest values. This order of arrangement among the five groups suggests an interrelationship with the relative formation ages of the fault segments.
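
For illustration of the length-cumulative frequency analysis, here is a minimal Python sketch, assuming the common exponential form N(≥L) ∝ exp(-λL) for one directional sub-population; the segment lengths are hypothetical, not the mapped data.

```python
# Build a length-cumulative frequency graph for one directional sub-population
# and estimate the exponent (lambda) of an exponential fit.
import numpy as np

lengths_km = np.array([0.4, 0.6, 0.7, 1.1, 1.3, 1.8, 2.2, 3.0, 4.1, 6.5])

L = np.sort(lengths_km)
N = np.arange(len(L), 0, -1)      # cumulative count of segments with length >= L

# Linear regression of ln(N) on L gives the exponent of the exponential form.
slope, intercept = np.polyfit(L, np.log(N), 1)
lam = -slope
print(f"lambda = {lam:.2f} /km")  # larger lambda -> shorter, narrower length range
```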

Factors influencing the axes of anterior teeth during SWA on masse sliding retraction with orthodontic mini-implant anchorage: a finite element study (교정용 미니 임플랜트 고정원과 SWA on masse sliding retraction 시 전치부 치축 조절 요인에 관한 유한요소해석)

  • Jeong, Hye-Sim;Moon, Yoon-Shik;Cho, Young-Soo;Lim, Seung-Min;Sung, Sang-Jin
    • The Korean Journal of Orthodontics / v.36 no.5 / pp.339-348 / 2006
  • Objective: With the development of skeletal anchorage systems, orthodontic mini-implant (OMI) assisted en masse sliding retraction has become part of general orthodontic treatment. But compared to the emphasis on successful anchorage preparation, control of the anterior teeth axis has not been emphasized enough. Methods: A 3-D finite element Base model of the maxillary dental arch and a Lingual tipping model with lingually inclined anterior teeth were constructed. To evaluate factors influencing the axis of the anterior teeth when an OMI was used as anchorage, the models were simulated with 2 mm or 5 mm retraction hooks and/or the addition of a 4 mm compensating curve (CC) on the main archwire. The stress distribution on the roots and an axis graph magnified 25,000 times were evaluated. Results: The intrusive component of the retraction force, directed postero-superiorly from the 2 mm hook, did not reduce the lingual tipping of the anterior teeth. When the hook height was increased to 5 mm, the lateral incisor showed crown-labial and root-lingual torque, and uncontrolled tipping of the canine increased. A 4 mm CC added to the main archwire also induced crown-labial and root-lingual torque of the lateral incisor, but uncontrolled tipping of the canine decreased. The Lingual tipping model showed very similar results to the Base model. Conclusion: The results of this study showed that the height of the hook and a compensating curve on the main archwire can influence the axis of the anterior teeth. These data can be used as guidelines for clinical application.
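
The underlying biomechanics of the hook-height effect can be shown with a toy moment calculation: the tipping moment is the retraction force crossed with the lever arm from the anterior segment's center of resistance (CR) to the point of force application. All geometry and force values below are hypothetical; the study itself used a full 3-D finite element model.

```python
# Moment of a retraction force about the center of resistance for two hook heights.
import numpy as np

force = np.array([-1.5, 0.5])      # N: posterior (x) and superior (y) components
cr = np.array([0.0, 10.0])         # mm: assumed center of resistance above bracket

for hook_height in (2.0, 5.0):     # mm: the two hook heights compared in the study
    point = np.array([0.0, hook_height])         # point of force application
    r = point - cr                               # lever arm from CR to that point
    moment = r[0] * force[1] - r[1] * force[0]   # 2-D cross product, N*mm
    print(f"hook {hook_height:.0f} mm: moment about CR = {moment:+.1f} N*mm")
```

Raising the hook toward the level of the CR shortens the lever arm, so the tipping moment shrinks, which is the mechanical reason hook height can change the axis of the anterior teeth.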

The Plan of Dose Reduction by Measuring and Evaluating Occupationally Exposed Dose in vivo Tests of Nuclear Medicine (핵의학 체내검사 업무 단계 별 피폭선량 측정 및 분석을 통한 피폭선량 감소 방안)

  • Kil, Sang-Hyeong;Lim, Yeong-Hyeon;Park, Kwang-Youl;Jo, Kyung-Nam;Kim, Jung-Hun;Oh, Ji-Eun;Lee, Sang-Hyup;Lee, Su-Jung;Jun, Ji-Tak;Jung, Eui-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.26-32 / 2010
  • Purpose: The aim was to find ways to minimize the occupational exposure dose for in vivo nuclear medicine workers at each working stage, within a working environment that does not compromise the examination or performance efficiency. Materials and Methods: The process of in vivo nuclear medicine tests using a radioactive isotope consists of radioisotope distribution, radioisotope injection (99mTc, 18F-FDG), scanning, and guiding patients. Using a RadEye-G10 gamma survey meter (Thermo SCIENTIFIC), the exposure doses at each working stage were measured and evaluated. Before the radioisotope injection, the patients were given an explanation of the examination and education about the precautions to be observed, in order to reduce the contact time with the patients. In addition, the workers were educated about external exposure and wore protective devices. When the radioisotope was injected, the exposure dose was measured both with and without the protective devices, and both with and without the prior explanation and education. The total exposure doses were graphed using Microsoft Office Excel 2007, and the differences were analyzed with the Wilcoxon signed-rank test in SPSS (Statistical Package for the Social Sciences) 12.0, with p<0.01 considered statistically significant. Results: The exposure dose when injecting 99mTc-DPD 20 mCi while wearing the protective devices was 88% lower than without them, a statistically significant difference. However, the 26% decrease observed when injecting 18F-FDG 10 mCi with the protective devices was not statistically significant. Educating patients before injecting 99mTc-DPD 20 mCi dropped the exposure dose to 63% of that measured when the education followed the injection, and for 18F-FDG 10 mCi the corresponding dose was 52% lower; both differences were statistically significant. Conclusion: For examinations using 99mTc, wearing protective devices is more effective in reducing the exposure dose; for 18F-FDG, reducing the contact time with patients is more effective. Therefore, tailoring radiation protection to the characteristics of each radioisotope can make the shielding of workers more effective.
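
The paired comparison described above can be reproduced with scipy's Wilcoxon signed-rank test; a minimal sketch follows, with hypothetical dose values rather than the study's measurements.

```python
# Wilcoxon signed-rank test on paired exposure doses, with vs. without shielding.
from scipy.stats import wilcoxon

dose_without = [12.1, 10.4, 15.2, 9.8, 11.6, 13.3, 10.9, 14.0]  # uSv per task
dose_with    = [ 1.5,  1.2,  1.9, 1.1,  1.4,  1.6,  1.3,  1.7]  # same workers

stat, p = wilcoxon(dose_without, dose_with)
print(f"W = {stat}, p = {p:.4f}")   # p < 0.01 was the study's significance level
```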


Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive, since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance the accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that attempts to simulate the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. In particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or near-optimal solution can be found. GA has been popularly applied to search for optimal parameters or feature subsets for AI techniques including MSVM. For these reasons, we also adopt GA as an optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is in bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, and their credit ratings. Using various statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e. credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). 80 percent of the data for each class was used for training, and the remaining 20 percent for validation. To overcome the small sample size, we applied five-fold cross validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models including MDA, MLOGIT, CBR, ANN and MSVM.
In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial GA package. The other comparative models were run using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). The experimental results showed that the proposed model, GAMSVM, outperformed all the competing models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than those of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
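
To make the GAMSVM idea concrete, here is a minimal Python sketch in which one chromosome encodes both the RBF kernel parameters (C, gamma) and a binary feature mask, with cross-validated accuracy of a one-against-one multiclass SVM as the fitness. This is a generic reconstruction on synthetic data, assuming scikit-learn, not the authors' LIBSVM/Evolver implementation or their 14 financial ratios.

```python
# GA-optimized multiclass SVM: joint kernel-parameter and feature-subset search.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))            # stand-ins for 14 candidate ratios
y = rng.integers(0, 4, size=200)          # four rating classes: A1/A2/A3/B-C

POP, GENS, N_FEAT = 20, 10, X.shape[1]

def fitness(chrom):
    mask = chrom[2:] > 0.5                            # feature subset selection
    if not mask.any():
        return 0.0
    C = 10 ** (chrom[0] * 4 - 2)                      # decode C in [1e-2, 1e2]
    gamma = 10 ** (chrom[1] * 4 - 3)                  # decode gamma in [1e-3, 1e1]
    clf = SVC(C=C, gamma=gamma, decision_function_shape="ovo")  # one-against-one
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

pop = rng.random((POP, 2 + N_FEAT))                   # genes in [0, 1)
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]     # selection: keep best half
    a = parents[rng.integers(len(parents), size=POP - len(parents))]
    b = parents[rng.integers(len(parents), size=POP - len(parents))]
    kids = (a + b) / 2                                # arithmetic crossover
    kids += rng.normal(0, 0.1, kids.shape)            # mutation vs. local optima
    pop = np.vstack([parents, np.clip(kids, 0.0, 1.0)])

best = max(pop, key=fitness)
print("best cross-validated accuracy:", round(fitness(best), 3))
```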

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers / v.7 no.1 / pp.861-876 / 1965
  • During my stay in the Netherlands, I studied the following, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report.
1. Unit hydrograph at Naju. There are many ways to construct a unit hydrograph, but I explain here how to derive one from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with a rainfall intensity averaging 9.4 mm per hour for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on the rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated and the measured discharge below 10%. The discharge period of a unit graph depends on the length of the catchment area.
2. Determination of sluice dimensions. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice per half hour, the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can then be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. The mean tide is adequate for determining the sluice dimensions, because the spring tide is the worst case and the neap tide the best case for the result of the calculation.
3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction, until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise in the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is that point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h=\frac{V^2}{2g}$, and this must equal the velocity determined from the current. If there is a difference between the two velocities, a new estimate of the inner water level must be made and the entire procedure repeated. When the head of the lower water level above the crest of the dam is equal to or less than 2/3 of the head of the higher water level above the crest, we speak of a "free weir." The flow over the weir then depends on the higher water level only, and not on the difference between the high and low water levels. When the weir is "submerged," that is, when the lower head exceeds 2/3 of the higher head, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, owing to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. As the calculated maximum velocities are higher than this limit, we must use other construction methods in closing the gap. This can be done with dump-cars from each side or by using a cableway.
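
The half-hour storage-routing iteration described in section 2 can be sketched in a few lines of Python: estimate the inner (reservoir) level, compute the sluice discharge from the average head, update the stored volume, and repeat until the estimated and calculated levels agree. The geometry, discharge coefficient, inflow, and tide values below are hypothetical placeholders.

```python
# One half-hour storage-routing step for a tidal sluice, iterated to convergence.
import math

G = 9.81           # m/s^2
AREA = 30e6        # m^2: reservoir storage area (assumed constant here)
A_SLUICE = 120.0   # m^2: cross-sectional area of the sluice (assumed)
CD = 0.8           # discharge coefficient (assumed)
DT = 1800.0        # s: half-hour interval

def step(inner0, tide, inflow):
    """Return the reservoir level (m) at the end of one half-hour interval."""
    inner1 = inner0                           # first estimate of end-of-step level
    for _ in range(50):                       # iterate until estimate == calculation
        head = (inner0 + inner1) / 2 - tide   # average head over the interval
        out = CD * A_SLUICE * math.copysign(math.sqrt(2 * G * abs(head)), head)
        new = inner0 + (inflow - out) * DT / AREA
        if abs(new - inner1) < 1e-6:
            return new
        inner1 = new
    return inner1

level = 2.0                                          # m: initial reservoir level
for tide in [0.5, -0.2, -0.8, -0.3, 0.4]:            # outer (tidal) levels per step
    level = step(level, tide, inflow=400.0)          # 400 m^3/s hypothetical inflow
    print(f"tide {tide:+.1f} m -> reservoir {level:.3f} m")
```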


Comparison and Evaluation of the Effectiveness between Respiratory Gating Method Applying The Flow Mode and Additional Gated Method in PET/CT Scanning. (PET/CT 검사에서 Flow mode를 적용한 Respiratory Gating Method 촬영과 추가 Gating 촬영의 비교 및 유용성 평가)

  • Jang, Donghoon;Kim, Kyunghun;Lee, Jinhyung;Cho, Hyunduk;Park, Sohyun;Park, Youngjae;Lee, Inwon
    • The Korean Journal of Nuclear Medicine Technology / v.21 no.1 / pp.54-59 / 2017
  • Purpose: The present study aimed to assess the effectiveness of the respiratory gating method used in the flow mode, which differs from the step-and-go method, together with additional localized respiratory-gated imaging. Materials and Methods: Respiratory-gated imaging in the flow mode was performed on twenty patients with lung cancer (10 with stable respiratory signals and 10 with unstable signals) who underwent torso PET/CT scanning on a Biograph mCT Flow PET/CT at Seoul National University Bundang Hospital from June 2016 to September 2016. Additional images of the lungs were obtained using the respiratory gating method. The SUVmax, SUVmean, and tumor volume (cm³) of the non-gating, gating, and additional lung gating images were obtained with syngo.via (Siemens, Germany). A paired t-test was performed with GraphPad Prism 6, and the changes in the width of the amplitude range were compared between the two types of gating images. Results: Across all patients, the non-gating images showed SUVmax = 9.43 ± 3.93, SUVmean = 1.77 ± 0.89, and tumor volume = 4.17 ± 2.41; the gating images showed SUVmax = 10.08 ± 4.07, SUVmean = 1.75 ± 0.81, and tumor volume = 3.56 ± 2.11; and the additional lung gating images showed SUVmax = 10.86 ± 4.36, SUVmean = 1.77 ± 0.85, and tumor volume = 3.36 ± 1.98. No statistically significant difference in SUVmean was found between the non-gating and gating images or between the gating and lung gating images (P>0.05), while significant differences in SUVmax and tumor volume were found between these groups (P<0.05). The width of the amplitude range was smaller for the lung gating images than for the gating images in 12 of the 20 patients (3 with stable signals, 9 with unstable signals). Conclusion: In PET/CT scanning using the respiratory gating method in the flow mode, lesion movements caused by respiration were adjusted; therefore, more accurate measurements of SUVmax and tumor volume could be obtained from the gating images than from the non-gating images. In addition, the width of the amplitude range decreased with the stability of respiration, to a more significant degree in the additional lung gating images than in the gating images. We found that gating images provide more useful diagnostic information than non-gating images. For patients with irregular respiratory signals, it may be helpful to perform additional localized scanning if time allows.
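
For reference, the paired comparison reported above can be reproduced with a paired t-test; a minimal sketch with scipy follows, using hypothetical SUVmax values rather than the study's measurements.

```python
# Paired t-test on SUVmax from the same patients, gating vs. non-gating images.
from scipy.stats import ttest_rel

suvmax_nongating = [7.2, 11.5, 6.8, 13.0, 9.4, 8.1, 10.2, 12.6]
suvmax_gating    = [7.9, 12.3, 7.1, 13.8, 10.1, 8.8, 10.9, 13.5]  # same patients

t, p = ttest_rel(suvmax_gating, suvmax_nongating)
print(f"t = {t:.2f}, p = {p:.4f}")   # P < 0.05 was treated as significant
```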


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming more important as content generation accelerates. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields expected to benefit from text data analysis, because it constantly generates new information and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, it becomes more difficult for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, it provides a practical and simple automatic knowledge extraction method. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using KKMA as the named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, using the neural tensor network, the same number of score functions as stocks are trained.
Thus, when a new entity from the testing set appears, its score can be calculated by putting it into every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm the prediction power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance of the model for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average. This result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain to be addressed; most notably, the model's especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used to semantically match new text information with the related stocks.
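
To illustrate the scoring step, here is a minimal Python sketch of a neural tensor network score function in the spirit of the paper, adapted to the single-entity scoring the abstract describes: one score function per stock, applied to an entity's one-hot vector. Dimensions and the random (untrained) weights are illustrative; the original work trains these parameters on analyst-report entities.

```python
# One NTN-style score function per stock; a new entity is scored against all stocks.
import numpy as np

rng = np.random.default_rng(1)
D, K, N_STOCKS = 100, 4, 30      # entity dim (top-100 one-hot), tensor slices, stocks

class StockNTN:
    """Bilinear tensor score: u^T tanh(e^T W[1:K] e + V e + b)."""
    def __init__(self):
        self.W = rng.normal(0, 0.1, (K, D, D))   # tensor slices (bilinear term)
        self.V = rng.normal(0, 0.1, (K, D))      # linear term
        self.b = np.zeros(K)
        self.u = rng.normal(0, 0.1, K)

    def score(self, e):
        bilinear = np.einsum('i,kij,j->k', e, self.W, e)
        return self.u @ np.tanh(bilinear + self.V @ e + self.b)

models = [StockNTN() for _ in range(N_STOCKS)]   # one score function per stock

entity = np.zeros(D)
entity[17] = 1.0                                 # one-hot vector of a new entity
scores = [m.score(entity) for m in models]
print("predicted stock index:", int(np.argmax(scores)))  # highest-scoring stock
```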