• Title/Summary/Keyword: Dynamic Graph


Design Graphs for Asphalt Concrete Track with Wide Sleepers Using Performance Parameters (성능요소를 반영한 광폭 침목형 아스팔트콘크리트 궤도 설계그래프)

  • Lee, SeongHyeok;Lim, Yujin;Song, Geunwoo;Cho, Hojin
    • Journal of the Korean Society for Railway
    • /
    • v.19 no.3
    • /
    • pp.331-340
    • /
    • 2016
  • Wheel load, design velocity, traffic amount (MGT), and the stiffness and thickness of the sub-layers of asphalt concrete track are selected as performance design parameters in this study. A pseudo-static wheel load, computed using a dynamic amplification factor (DAF) based on the design velocity of the KTX, was applied to the top of the asphalt concrete track for full three-dimensional structural analysis using the ABAQUS program. Tensile strains at the bottom of the asphalt concrete layer and vertical strains at the top of the subgrade were computed from the structural FEA for different combinations of performance parameter values for one asphalt concrete track section. Utilizing these computed structural analysis results, it was possible to develop design graphs for investigating proper track sections under different combinations of the performance parameters, including wheel load, design velocity, traffic amount (MGT), and the stiffness and thickness of the asphalt concrete layers, for any given design life. By analyzing the proposed design graphs for asphalt concrete track, it was possible to propose simple design tables that engineers can use for effective and fast track design.
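The pseudo-static load described above is the static wheel load amplified by a velocity-dependent DAF. A minimal sketch, assuming a generic linear DAF form (the coefficients below are illustrative placeholders, not the values used in the paper):

```python
def dynamic_amplification_factor(velocity_kmh, phi0=0.5, v_ref=300.0):
    """Illustrative linear DAF that grows from 1.0 with design velocity.

    phi0 and v_ref are hypothetical coefficients, not taken from the paper.
    """
    return 1.0 + phi0 * (velocity_kmh / v_ref)

def pseudo_static_wheel_load(static_load_kn, velocity_kmh):
    """Pseudo-static load = static wheel load x DAF."""
    return static_load_kn * dynamic_amplification_factor(velocity_kmh)

# With these illustrative coefficients, a 100 kN static wheel load
# at 300 km/h becomes a 150 kN pseudo-static load.
load = pseudo_static_wheel_load(100.0, 300.0)
```

The amplified load can then be applied as a static surface load in the FE model, which is what makes a full transient dynamic analysis unnecessary at the design-graph stage.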

Comparative Analysis of Influential Factors on Computer-Based Mathematics Assessment between Korea and Singapore (우리나라와 싱가포르의 컴퓨터 기반 수학 평가 결과에 대한 영향 요인 비교 분석)

  • Rim, Haemee;Jung, Hyekyun
    • Journal of Educational Research in Mathematics
    • /
    • v.27 no.2
    • /
    • pp.157-170
    • /
    • 2017
  • Mathematics was the main domain of PISA 2012, and both paper-based and computer-based assessment of mathematics (CBAM) were conducted. PISA 2012 was the first large-scale computer-based mathematics assessment in Korea, and it is meaningful in that it evaluated students' mathematical literacy in problem situations using dynamic geometry, graph, and spreadsheet. Although Korea ranked third in CBAM, the use of ICT in mathematics lessons appeared to be low. On the other hand, this study focused on Singapore, which ranked first in CBAM. The Singapore Ministry of Education developed online programs such as AlgeTools and AlgeDisc, and implemented the programs in classes by specifying them in mathematics curriculum and textbooks. Thus, this study investigated influential factors on computer-based assessment of mathematics by comparing the results of Korea and Singapore, and aimed to provide meaningful evidence on the direction of Korea's ICT-based mathematics education. The results showed that ICT use at home for school related tasks, attitudes towards computers as a tool for school learning, and openness and perseverance of problem solving were positively associated with computer-based mathematics performance, whereas the use of ICT in mathematics class by teacher demonstration was negatively related. Efforts are needed to improve computer use and enhance teaching techniques related to ICT use in Korean math classes. Future research is recommended to examine how effectively teachers use ICT in mathematics class in Singapore.

A Collision detection from division space for performance improvement of MMORPG game engine (MMORPG 게임엔진의 성능개선을 위한 분할공간에서의 충돌검출)

  • Lee, Sung-Ug
    • The KIPS Transactions:PartB
    • /
    • v.10B no.5
    • /
    • pp.567-574
    • /
    • 2003
  • Application fields of 3D graphics have been diversifying rapidly with the fast development of hardware. The detailed techniques needed to build games such as a 3D MMORPG (Massively Multi-player Online Role Playing Game), which immerses players in a virtual city, must be mastered; this paper is concerned with detection speed in game engine design. A 3D MMORPG game engine has many factors that influence speed besides rendering, because it must express a huge 3D city's many buildings and characters effectively in real time. This paper formulates the concept of collision in a 3D MMORPG and improves the detection speed of the game engine through an improved detection method. Space division is needed to process quickly the wide, dynamic outdoor scenes that are the main detection targets of a 3D MMORPG. The geometry dataset given through the scene graph is organized into a tree of the objects that need collision handling; by using hierarchical bounding boxes as detection volumes, the objects that require a collision test can be found quickly and the collision detection speed improved. An octree is mainly used to represent static objects correctly, but this paper uses a limited OSP (Octree Space Partitioning) structure so that space division can be applied in a dynamic environment; limited OSP classifies the objects of a typically complicated 3D space using limited rectangular subspaces. Through this approach, the paper proposes the following: first, collisions can be judged at an early stage without examining every polygon; second, the detection efficiency of the game engine improves because the bounding-box collision test reduces detection time.
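The hierarchical bounding-box idea described above rests on a cheap broad phase: axis-aligned bounding boxes (AABBs) are compared first, and only overlapping pairs go on to the expensive per-polygon tests. A minimal sketch of that broad phase (the paper's limited-OSP partitioning is not reproduced here):

```python
class AABB:
    """Axis-aligned bounding box used as a cheap broad-phase volume."""

    def __init__(self, min_pt, max_pt):
        self.min = min_pt  # (x, y, z) lower corner
        self.max = max_pt  # (x, y, z) upper corner

    def overlaps(self, other):
        # Boxes are disjoint if separated along any axis; otherwise overlap.
        return all(self.min[i] <= other.max[i] and other.min[i] <= self.max[i]
                   for i in range(3))

def broad_phase(boxes):
    """Return index pairs whose bounding boxes overlap; only these pairs
    need the expensive per-polygon narrow-phase collision test."""
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if boxes[i].overlaps(boxes[j]):
                pairs.append((i, j))
    return pairs

boxes = [AABB((0, 0, 0), (1, 1, 1)),
         AABB((0.5, 0.5, 0.5), (2, 2, 2)),
         AABB((5, 5, 5), (6, 6, 6))]
# Only the first two boxes overlap, so only that pair is tested further.
candidate_pairs = broad_phase(boxes)
```

A space-partitioning structure such as the paper's limited OSP replaces the quadratic pair loop above by only testing boxes that share a subspace.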

Fermentative Water Purification based on Bio-hydrogen (생물학적 수소 발효를 통한 수처리 시스템)

  • Lee, Jung-Yeol;Chen, Xue-Jiao;Min, Kyung-Sok
    • Journal of Korean Society on Water Environment
    • /
    • v.27 no.6
    • /
    • pp.926-931
    • /
    • 2011
  • Among various techniques for hydrogen production from organic wastewater, dark fermentation is considered the most feasible process due to its rapid hydrogen production rate. However, its main drawback is a low hydrogen production yield caused by intermediate products such as organic acids. To improve the hydrogen production yield, a co-culture system of dark- and photo-fermentation bacteria was applied in this research. The maximum specific growth rate of R. sphaeroides was determined to be 2.93 h⁻¹ when acetic acid was used as the carbon source, which was quite high compared to that obtained with a mixture of volatile fatty acids (VFAs). Acetic acid was the most favorable for the cell growth of R. sphaeroides; however, it was not correspondingly efficient for hydrogen production. In the co-culture system with glucose, hydrogen could be produced steadily without any lag phase. There were distinguishable inflection points in the cumulative hydrogen production graph, resulting from the dynamic production and consumption of VFAs through the interaction between the dark- and photo-fermentation bacteria. Lastly, the hydrogen production rate of a repeated fed-batch run was 15.9 mL H₂/L/h, demonstrating that sustainable hydrogen production was achievable.
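The maximum specific growth rate quoted above follows from the standard exponential-growth relation X(t) = X₀·e^(μt), so μ can be estimated from two biomass measurements. A minimal sketch (variable names and sample values are illustrative, not the paper's data):

```python
import math

def specific_growth_rate(x1, x2, t1, t2):
    """mu = ln(X2 / X1) / (t2 - t1), from exponential growth X(t) = X0*exp(mu*t).

    x1, x2: biomass measurements (e.g. optical density) at times t1, t2 (hours).
    """
    return math.log(x2 / x1) / (t2 - t1)

# Illustrative: a culture whose biomass rises from 1.0 to e**2.93 in one hour
# has mu = 2.93 1/h, the maximum reported for R. sphaeroides on acetic acid.
mu = specific_growth_rate(1.0, math.exp(2.93), 0.0, 1.0)
```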

Image Quality Analysis According to the Change in the Visual Area of Interest of a Linear Transducer (선형 탐촉자에서 관심 시각 영역 변화에 따른 화질 분석)

  • Ji-Na, Park;Jae-Bok, Han;Jong-Gil, Kwak;Jong-Nam, Song
    • Journal of the Korean Society of Radiology
    • /
    • v.16 no.7
    • /
    • pp.975-984
    • /
    • 2022
  • Since a linear transducer has an area of interest equal to the length of the transducer, the area of interest can be expanded using the virtual convex function built into the device. However, the change in the direction of ultrasound propagation that accompanies the change in the visual area of interest was expected to affect image quality, so this was confirmed objectively. For this study, image evaluation and SNR/CNR measurements of an ultrasound quality-control phantom were performed. In the phantom image evaluation, structures could be identified in both images in terms of functional resolution, grayscale, and dynamic range; however, the standard image was superior in reproducing the size and shape of the structures. In the SNR/CNR evaluation, the SNR and CNR of most trapezoidal images were low, except for structures at specific locations. In addition, the statistical analysis graph confirmed that the SNR and CNR at each depth decreased as the size of the cystic structure decreased. This study confirmed that using the function has the advantage of providing a wide visual area of interest but affects image quality. Therefore, when using the virtual convex function, the examiner should use it in appropriate situations and conduct various studies to acquire high-quality images and to improve understanding and proficiency with the equipment.
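SNR and CNR in phantom evaluations like this are commonly computed from region-of-interest (ROI) pixel statistics. A minimal sketch of one common set of definitions (the exact formulas used in the paper are not specified in the abstract, and the sample pixel values are illustrative):

```python
import statistics

def snr(roi):
    """Signal-to-noise ratio: mean pixel value over its standard deviation."""
    return statistics.mean(roi) / statistics.pstdev(roi)

def cnr(roi_obj, roi_bg):
    """Contrast-to-noise ratio: object/background mean difference over
    background standard deviation (one common convention)."""
    return abs(statistics.mean(roi_obj) - statistics.mean(roi_bg)) / statistics.pstdev(roi_bg)

roi_object = [10, 12, 8, 10]   # pixel samples inside a cystic structure (illustrative)
roi_background = [2, 4, 2, 4]  # pixel samples in the surrounding region (illustrative)

snr_value = snr(roi_object)                  # 10 / sqrt(2)
cnr_value = cnr(roi_object, roi_background)  # |10 - 3| / 1 = 7.0
```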

Analysis Program for Offshore Wind Energy Substructures Embedded in AutoCAD (오토캐드 환경에서 구현한 해상풍력 지지구조 해석 프로그램)

  • James Ban;Chuan Ma;Sorrasak Vachirapanyakun;Pasin Plodpradit;Goangseup Zi
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.4
    • /
    • pp.33-44
    • /
    • 2023
  • Wind power is one of the most efficient and reliable energy sources in the transition to a low-carbon society. In particular, offshore wind power provides a higher-quality and more stable wind resource than onshore wind power, while both offer a higher installed capacity than other renewables. In this paper, we present our new X-WIND program, well suited to the assessment of the substructures of offshore wind turbines. We developed this program to increase the usability of analysis programs for offshore wind energy substructures by addressing the shortcomings of existing programs. Unlike existing programs, which cannot perform the substructure analyses on their own or lack pre/post processors, the X-WIND program can complete the assessment analysis for offshore wind turbines by itself. The X-WIND program is embedded in AutoCAD, so both design and analysis are performed on a single platform. It also performs static and dynamic analysis for the wind, wave, and current loads essential for offshore wind power structures, and includes pre/post processors for design, mesh development, graph plotting, and code checking. With this expertise, our program enhances the usability of analysis programs for offshore wind energy substructures, promoting convenience and efficiency.
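The X-WIND internals are not shown in the abstract; as an illustrative sketch of the kind of wave-load calculation such tools perform on slender substructure members, the classical Morison equation gives the in-line force per unit length as a drag term plus an inertia term (the density and drag/inertia coefficients below are generic textbook defaults, not X-WIND values):

```python
import math

RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density (assumed)

def morison_force_per_length(u, dudt, D, cd=1.0, cm=2.0):
    """Morison equation: in-line wave force per unit length on a slender
    vertical cylinder of diameter D.

    u    : horizontal water-particle velocity (m/s)
    dudt : horizontal water-particle acceleration (m/s^2)
    cd/cm: drag and inertia coefficients (illustrative defaults)
    """
    drag = 0.5 * RHO_SEAWATER * cd * D * u * abs(u)
    inertia = RHO_SEAWATER * cm * (math.pi * D ** 2 / 4.0) * dudt
    return drag + inertia

# Pure-drag case: u = 1 m/s, zero acceleration, 1 m diameter -> 512.5 N/m.
force = morison_force_per_length(u=1.0, dudt=0.0, D=1.0)
```

Integrating this force over the wetted length of each member, per time step, is the basis of the static and dynamic wave-load analyses such programs carry out.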

A Dynamic Exploration of Constructivism Research Based on CiteSpace Software in the Field of Education (교육학 분야에서 CiteSpace에 기초한 구성주의 연구 동향 탐색)

  • Jiang, Yuxin;Song, Sun-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.5
    • /
    • pp.576-584
    • /
    • 2022
  • As an important branch of cognitive psychology, "constructivism" is called a "revolution" in contemporary educational psychology, which has a profound influence on the field of pedagogy and psychology. Based on "WOS" database, this study selects "WOS Core database" and "KCI database", uses CiteSpace visualization software as analysis tool, and makes knowledge map analysis on the research literature of "constructivism" in the field of education in recent 35 years. Analysis directions include annual analysis, network connection analysis by country(region) branch, author, institution or University, and keyword analysis. The purpose of the analysis is to grasp the subject areas, research hotspots and future trends of the research on constructivism, and to provide theoretical reference for the research on constructivism. There are three conclusions from the study. 1. Studies on the subject of constructivism have continued from the 1980s to the present. It is now in a period of steady development. 2. Countries concerned with the subject of constructivism mainly include the United States, Canada, Britain, Australia and the Netherlands. The main research institutions and authors are mainly located in these countries. 3. Currently, the keywords constructivism research focus on the clusters of "instructional strategies", and the development of science and technology is affecting individual learning. In the future, instructional strategies will become the focus of structural constructivism research. With the development of instructional technology, it is necessary to conduct research related to the development of new teaching models.

Rheological properties of dental resin cements during polymerization (치과용 레진 시멘트의 유변학적 성질)

  • Lee, Jae-Rim;Lee, Jai-Bong;Han, Jung-Suk;Kim, Sung-Hun;Yeo, In-Sung;Ha, Seung-Ryong;Kim, Hee-Kyung
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.52 no.2
    • /
    • pp.82-89
    • /
    • 2014
  • Purpose: The purpose of this study was to observe the change in viscoelastic properties of dental resin cements during polymerization. Materials and methods: Six commercially available resin cement materials (Clearfil SA luting, Panavia F 2.0, Zirconite, Variolink N, RelyX Unicem clicker, RelyX U200) were investigated in this study. A dynamic oscillation-time sweep test was performed with an AR1500 stress-controlled rheometer at 32 °C. The changes in shear storage modulus (G'), shear loss modulus (G"), loss tangent (tan δ) and displacement were measured for twenty minutes and repeated three times for each material. The data were analyzed using one-way ANOVA and Tukey's post hoc test (α = 0.05). Results: After mixing, all materials demonstrated an increase in G' with time, reaching a plateau in the end. RelyX U200 demonstrated the highest G' value, while RelyX Unicem (clicker type) and Variolink N demonstrated the lowest G' values at the end of the experimental time. Tan δ was maintained at some level and reached zero at the point where G' began to increase. The tan δ and displacement of the tested materials showed similar patterns in the graph over time. The displacement of all six materials approached zero within six minutes. Conclusion: Compared to the other resin cements used in this study, RelyX U200 maintained plastic properties for a longer period of time. When it completed the curing process, RelyX U200 had the highest stiffness. This makes it convenient for clinicians to cement multiple units of dental prostheses simultaneously.
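The loss tangent tracked in this study is the ratio of the loss modulus to the storage modulus. A minimal sketch, including a crude cure-onset estimate from the point where G' overtakes G" (the threshold-crossing heuristic is an assumption, not the paper's analysis method):

```python
def loss_tangent(g_storage, g_loss):
    """tan(delta) = G'' / G'. Near zero means predominantly elastic (cured);
    large values mean predominantly viscous (still workable)."""
    return g_loss / g_storage

def cure_onset_index(g_storage_series, g_loss_series):
    """Crude estimate of cure onset: first sample where G' exceeds G''
    (i.e. tan delta drops below 1). Returns None if never reached."""
    for i, (gp, gpp) in enumerate(zip(g_storage_series, g_loss_series)):
        if gp > gpp:
            return i
    return None

# Illustrative values in Pa: a paste with G' = 2 MPa, G'' = 1 MPa.
tan_delta = loss_tangent(2.0e6, 1.0e6)               # 0.5, partly viscous
onset = cure_onset_index([0.1, 0.5, 2.0], [1.0, 1.0, 1.0])  # index 2
```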

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, the retrieval of similar semantic business processes is necessary in order to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people: (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple similarity algorithms for text retrieval, such as TF-IDF and Levenshtein edit distance, to devise our approaches, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. Also, we design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify the relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance show better performance than the other devised methods; these two measures focus on similarity between the names and descriptions of processes. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and derivatives of these measures, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in both experiments. For retrieving semantic processes, it is therefore better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
We generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository. We suggest imprecise query algorithms that expand the retrieval results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As for limitations and future work, we need to perform experiments with datasets from other domains. And, since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
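Several of the similarity measures named above have compact standard definitions. A minimal sketch of Jaccard, Dice, and Levenshtein edit distance over illustrative process features (the paper's actual feature extraction from the Process Handbook is not reproduced here):

```python
def jaccard(a, b):
    """Jaccard similarity: |A ∩ B| / |A ∪ B| over feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def dice(a, b):
    """Dice's coefficient: 2|A ∩ B| / (|A| + |B|) over feature sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

def levenshtein(s, t):
    """Edit distance between process names/descriptions, via the
    standard dynamic-programming recurrence (O(len(s) * len(t)))."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

# Illustrative subcomponent sets for two processes (hypothetical labels).
p1 = {"goal:pay-supplier", "part:create-invoice"}
p2 = {"goal:pay-supplier", "part:ship-goods"}
j = jaccard(p1, p2)   # 1/3: one shared feature out of three distinct
d = dice(p1, p2)      # 0.5
lev = levenshtein("kitten", "sitting")  # 3
```

Combined measures like the paper's Lev-TFIDF-JaccardAll weight such set-overlap scores together with text-based scores over names and descriptions.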