• Title/Summary/Keyword: data complexity

Analyzing an elementary school teacher's difficulties and mathematical modeling knowledge improvement in the process of modifying a mathematics textbook task to a mathematical modeling task: Focused on an experienced teacher (수학 교과서 과제의 수학적 모델링 과제로의 변형 과정에서 겪는 초등학교 교사의 어려움과 수학적 모델링 과제 개발을 위한 지식의 변화: 한 경력 교사의 사례를 중심으로)

  • Jung, Hye-Yun
    • The Mathematical Education
    • /
    • v.62 no.3
    • /
    • pp.363-380
    • /
    • 2023
  • This study analyzed the difficulties and the improvement in mathematical modeling knowledge that an elementary school teacher experienced while modifying a mathematics textbook task into a mathematical modeling task. To this end, an elementary school teacher with 10 years of experience participated in the repeated discussions of a teacher-researcher community and modified an average task from the data and pattern domain of the 5th grade. The results are as follows. First, in the process of task modification, the teacher had difficulties in reflecting reality, setting an appropriate cognitive level for the mathematical modeling task, and presenting detailed tasks according to the mathematical modeling process. Second, through repeated task modifications, the teacher became able to develop realistic tasks that consider mathematical content knowledge and students' cognitive level, to set the cognitive level of a task by adjusting its complexity and openness, and to present detailed tasks through thought experiments on students' task-solving process, which shows that the teacher's mathematical modeling knowledge, including the concept of mathematical modeling and the characteristics of mathematical modeling tasks, improved. In terms of mathematical modeling teacher education, the findings suggest that teachers should be given opportunities to improve their task development competency through textbook task modification rather than through the direct provision of mathematical modeling tasks, to experience mathematical modeling theory and practice together, and to participate in teacher-researcher communities.

FunRank: Finding 1-Day Vulnerability with Call-Site and Data-Flow Analysis (FunRank: 함수 호출 관계 및 데이터 흐름 분석을 통한 공개된 취약점 식별)

  • Jaehyu Lee;Jihun Baek;Hyungon Moon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.2
    • /
    • pp.305-318
    • /
    • 2023
  • The complexity of software products has led many manufacturers to stitch together open-source software when composing a product. Using open-source software helps reduce development cost, but the differences between development life cycles make it difficult to keep a product up to date. For this reason, even patches for known vulnerabilities are not adopted quickly enough, leaving the entire product under threat. Existing studies propose binary diffing techniques to determine whether a product remains vulnerable to a particular vulnerability. Despite their effectiveness in finding real-world vulnerabilities, they often fail to locate the evidence of a vulnerability when it lies in a small function that is usually inlined at compile time. This work presents FunRank, a tool designed to identify such short functions. Our experiments with synthesized and real-world software products show that FunRank can identify short, inlined functions and thereby reveal that a program remains vulnerable to a particular vulnerability.
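
The abstract's core idea, ranking candidate functions in a binary by call-relationship and data-flow evidence, can be illustrated with a toy sketch. The feature set, scoring, and function names below are hypothetical placeholders invented for illustration; they are not FunRank's actual design.

```python
# Toy illustration only (not FunRank's algorithm): rank functions of a target
# binary by how closely their call-site and data-flow features match a known
# vulnerable short function that may have been inlined elsewhere.
from dataclasses import dataclass

@dataclass
class FuncFeatures:
    name: str
    num_callers: int         # call sites that reference this function
    num_callees: int         # distinct functions it calls
    num_dataflow_facts: int  # e.g., parameters/globals it reads or writes

def similarity(a: FuncFeatures, b: FuncFeatures) -> float:
    """Inverse L1 distance over the (hypothetical) feature vector."""
    d = (abs(a.num_callers - b.num_callers)
         + abs(a.num_callees - b.num_callees)
         + abs(a.num_dataflow_facts - b.num_dataflow_facts))
    return 1.0 / (1.0 + d)

def rank_candidates(known_vuln: FuncFeatures, targets: list[FuncFeatures]):
    """Sort target functions by similarity to the known vulnerable function."""
    return sorted(targets, key=lambda f: similarity(known_vuln, f), reverse=True)

if __name__ == "__main__":
    vuln = FuncFeatures("parse_len", num_callers=3, num_callees=1, num_dataflow_facts=4)
    targets = [FuncFeatures("sub_4010a0", 3, 1, 4),
               FuncFeatures("sub_4022f0", 7, 5, 12),
               FuncFeatures("sub_4031c8", 2, 1, 3)]
    for f in rank_candidates(vuln, targets):
        print(f.name, round(similarity(vuln, f), 3))
```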

Development of Geotechnical Information Input System Based on GIS on Standardization of Geotechnical Investigation Result-format and Metadata (지반조사성과 양식 및 메타데이터 표준화를 통한 GIS기반의 지반정보 입력시스템 개발)

  • Jang, YongGu;Lee, SangHoon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.4D
    • /
    • pp.545-551
    • /
    • 2008
  • The MOCT (Ministry of Construction & Transportation) issued a directive, "The guideline for computerization and application of geotechnical investigation results," to an affiliated organization in March 2007. A pilot project for building a geotechnical information database is now in progress to stabilize the system after applying this guideline and to train related users in entering investigation data. We developed a standard format for geotechnical investigation results and metadata for distributing geotechnical information, with coordinates based on the world geodetic system. We also reviewed how the input system is being used and compiled statistics on the entered contents, and from these results we propose improvements to the input system. On-site training for the developed GIIS showed that most users could put it to practical use easily, but problems remained, such as the complexity of the metadata structure, errors when moving part of the information window, and installation-program recognition errors depending on the computer OS environment. In particular, parts of the GIIS need improvement because of the use of a KNHC (Korea National Housing Corporation)-specific format and differences in the input process from the MOCT guideline. These problems and improvements are planned to be addressed when the Geotechnical Information DB center is operated and managed in 2008.

Quality Visualization of Quality Metric Indicators based on Table Normalization of Static Code Building Information (정적 코드 내부 정보의 테이블 정규화를 통한 품질 메트릭 지표들의 가시화를 위한 추출 메커니즘)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.5
    • /
    • pp.199-206
    • /
    • 2023
  • Today's software has grown to a huge volume of source code, which makes static analysis increasingly important and necessary for a high-quality product. Static analysis of the code needs to identify defects and the complexity of the code, and visualizing these problems makes it easier for developers and stakeholders to understand them in the source code. Our previous visualization research focused only on storing the results of static analysis into database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and finally visualizing the extracted information. This approach has limitations in that analyzing a code base with the information extracted through static analysis takes a lot of time and space: because the tables are not normalized, joining the tables (classes, functions, attributes, etc.) to extract information about the code wastes space and time. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for quality metric indicators inside the code, and a visualization of the extracted quality indicators on the code. Through this mechanism, we expect that the code visualization process will be optimized and that developers will be guided toward the modules that need refactoring. In the future, we will apply learning to some parts of this process.
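
The proposed normalization can be illustrated with a minimal sketch: static-analysis facts are kept in normalized tables, and a quality indicator is derived by a single join/aggregation query. The two-table schema and sample rows below are simplified assumptions for illustration, not the paper's actual database design.

```python
# Minimal sketch: normalized tables for static-analysis facts, plus one query
# that derives a CK metric (WMC, Weighted Methods per Class) by join/aggregate.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE classes (
    class_id INTEGER PRIMARY KEY,
    name     TEXT NOT NULL
);
CREATE TABLE methods (
    method_id  INTEGER PRIMARY KEY,
    class_id   INTEGER NOT NULL REFERENCES classes(class_id),
    name       TEXT NOT NULL,
    cyclomatic INTEGER NOT NULL   -- per-method complexity from static analysis
);
""")
cur.executemany("INSERT INTO classes VALUES (?, ?)",
                [(1, "OrderService"), (2, "Invoice")])
cur.executemany("INSERT INTO methods VALUES (?, ?, ?, ?)",
                [(1, 1, "placeOrder", 7), (2, 1, "cancel", 3), (3, 2, "total", 2)])

# WMC per class: sum of method complexities, highest (refactoring candidates) first.
cur.execute("""
SELECT c.name, SUM(m.cyclomatic) AS wmc
FROM classes c JOIN methods m ON m.class_id = c.class_id
GROUP BY c.class_id
ORDER BY wmc DESC
""")
for name, wmc in cur.fetchall():
    print(name, wmc)
```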

An Institutionalization and Legislation Productivity of Korean Metropolitan Councils: Panel Data Analysis (광역의회제도화와 입법생산성: 패널데이터 분석)

  • Jung, SungEun
    • Korean Journal of Legislative Studies
    • /
    • v.26 no.1
    • /
    • pp.105-145
    • /
    • 2020
  • This study analyzes the effect of the institutionalization of Korean metropolitan councils on legislation productivity. Based on the theory of the institutionalization of legislatures, three independent variables (stability, complexity, and adaptability) and nine sub-indicators were selected to measure the level of institutionalization of a metropolitan council. The main results are as follows. First, the factors that determine the number of reported bills were the ratio of first-term lawmakers, the average number of times the chairmen had been elected, the number of special committees, the number of legislative experts, the actual age of the metropolitan council, and the number of voters per lawmaker. Second, the factors that determine the rate of reported bills were the average number of times the chairmen had been elected, the number of special committees, the number of legislative experts, the actual age of the metropolitan council, and the number of voters per lawmaker. Third, the factors that determine the number of reported bills per lawmaker were the average number of times the chairmen had been elected, the actual age of the metropolitan council, and the number of voters per lawmaker. These results indicate that differences in the legislation productivity of past metropolitan councils can be understood as differences arising from their levels of legislative institutionalization, and several policy considerations can be drawn for enhancing the legislation productivity of metropolitan councils.
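
As a rough illustration of the kind of panel estimation described above, the sketch below fits a council fixed-effects regression of bill counts on institutionalization indicators with statsmodels. The variable names and the synthetic panel are placeholders, not the study's actual data or specification.

```python
# Hedged sketch (placeholder data and variable names, not the paper's model):
# council fixed-effects OLS of bill counts on institutionalization indicators,
# with standard errors clustered by council.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
councils = [f"c{i}" for i in range(17)]
years = list(range(2010, 2018))
df = pd.DataFrame([(c, y) for c in councils for y in years],
                  columns=["council", "year"])
n = len(df)
df["first_term_ratio"] = rng.uniform(0.2, 0.8, n)
df["chair_terms"] = rng.uniform(1, 4, n)            # avg. times chairmen elected
df["n_special_committees"] = rng.integers(0, 10, n)
df["n_experts"] = rng.integers(0, 30, n)            # legislative experts
df["council_age"] = df["year"] - df["council"].map(
    {c: int(rng.integers(1991, 1999)) for c in councils})
df["voters_per_member"] = rng.uniform(5e4, 3e5, n)
df["bills"] = (50 + 20 * df["chair_terms"] + 0.5 * df["n_experts"]
               + rng.normal(0, 10, n))

model = smf.ols(
    "bills ~ first_term_ratio + chair_terms + n_special_committees"
    " + n_experts + council_age + voters_per_member + C(council)",
    data=df,
).fit(cov_type="cluster",
      cov_kwds={"groups": df["council"].astype("category").cat.codes})
print(model.summary())
```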

Application of Multiple Linear Regression Analysis and Tree-Based Machine Learning Techniques for Cutter Life Index(CLI) Prediction (커터수명지수 예측을 위한 다중선형회귀분석과 트리 기반 머신러닝 기법 적용)

  • Ju-Pyo Hong;Tae Young Ko
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.594-609
    • /
    • 2023
  • The TBM (Tunnel Boring Machine) method is gaining popularity in urban and underwater tunneling projects due to its ability to ensure excavation face stability and minimize environmental impact. Among the prominent models for predicting disc cutter life, the NTNU model uses the Cutter Life Index (CLI) as a key parameter, but the complexity of the testing procedure and the rarity of the equipment make measurement challenging. In this study, CLI was predicted from rock properties using multiple linear regression analysis and tree-based machine learning techniques. Through a literature review, a database including rock uniaxial compressive strength, Brazilian tensile strength, equivalent quartz content, and Cerchar abrasivity index was built, and derived variables were added. The multiple linear regression analysis selected input variables based on statistical significance and multicollinearity, while the machine learning prediction models chose variables based on their importance. The data were divided into 80% for training and 20% for testing, a comparative analysis of predictive performance was conducted, and XGBoost was identified as the optimal model. The validity of the multiple linear regression and XGBoost models derived in this study was confirmed by comparing their predictive performance with prior research.
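
The modeling workflow described above, an 80/20 split with multiple linear regression compared against a tree-based model, can be sketched as follows. The rock-property features mirror those named in the abstract, but the data are synthetic placeholders and the hyperparameters are illustrative rather than the study's tuned settings.

```python
# Sketch of the workflow in the abstract: an 80/20 split, multiple linear
# regression vs. a tree-based model (XGBoost) for CLI prediction from rock
# properties. Data are synthetic placeholders; hyperparameters are illustrative.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_error
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
n = 200
X = pd.DataFrame({
    "ucs": rng.uniform(40, 250, n),   # uniaxial compressive strength (MPa)
    "bts": rng.uniform(3, 20, n),     # Brazilian tensile strength (MPa)
    "eqc": rng.uniform(10, 80, n),    # equivalent quartz content (%)
    "cai": rng.uniform(0.5, 5.5, n),  # Cerchar abrasivity index
})
y = 120 - 0.2 * X["ucs"] - 8 * X["cai"] + 0.3 * X["eqc"] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
xgb = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05,
                   random_state=0).fit(X_tr, y_tr)

for label, model in [("MLR", mlr), ("XGBoost", xgb)]:
    pred = model.predict(X_te)
    print(label, "R2 =", round(r2_score(y_te, pred), 3),
          "MAE =", round(mean_absolute_error(y_te, pred), 3))
```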

Spatial Characteristics of Media Cluster in Seoul: Co-Evolution and Changes in Film and Broadcast TV Production (서울 영상산업 클러스터의 공간적 특성: 영화산업과 방송산업의 성장과 집적지 변화)

  • Kyung Won Lee;U-Seok Seo
    • Journal of the Economic Geographical Society of Korea
    • /
    • v.26 no.3
    • /
    • pp.202-222
    • /
    • 2023
  • This study traces the growth and the changes in the spatial distribution and characteristics of media clusters in Seoul by focusing on the co-evolution of film and broadcast TV production. To identify the spatial distribution and agglomeration of film and broadcast TV production, we measure their spatial autocorrelation with Moran's I and LISA, using data from the Census on Establishments of the National Statistical Office. In addition, eleven semi-structured interviews conducted with workers in the media industries, such as film crews and TV drama producers, help clarify the complexity and dynamics of the diverse factors that affect the spatial distribution of media clusters. This multi-method study shows the increasing polycentricity of the media cluster over the last decade. Gangnam, Mapo, Yeouido, Gangseo-Yeongdeungpo, and Seongsu have emerged as key hubs for the media industries, particularly in light of changes in the transportation system and the real estate market. The findings indicate the co-evolution of film and broadcast TV production, demonstrating how the characteristics of the creative industry and metropolitan changes are intertwined in shaping the geographical pattern of the media cluster.
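
The spatial-autocorrelation step (global Moran's I and local LISA) can be sketched with the PySAL stack (libpysal + esda). A synthetic 10x10 lattice stands in for the district-level establishment counts from the Census on Establishments; the actual analysis would build spatial weights from the real geometries.

```python
# Hedged sketch of the Moran's I / LISA step using PySAL; synthetic lattice data
# stand in for district-level establishment counts.
import numpy as np
from libpysal.weights import lat2W
from esda.moran import Moran, Moran_Local

rng = np.random.default_rng(1)
y = rng.poisson(lam=20, size=100).astype(float)  # placeholder counts per area

w = lat2W(10, 10)      # contiguity weights on a 10x10 lattice
w.transform = "r"      # row-standardize

mi = Moran(y, w)       # global spatial autocorrelation
print("Moran's I =", round(mi.I, 3), "pseudo p =", mi.p_sim)

lisa = Moran_Local(y, w)  # local indicators of spatial association (LISA)
print("significant local clusters:", int((lisa.p_sim < 0.05).sum()))
# lisa.q gives the quadrant (1=HH, 2=LH, 3=LL, 4=HL) for mapping hot/cold spots
```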

Long-term and multidisciplinary research networks on biodiversity and terrestrial ecosystems: findings and insights from Takayama super-site, central Japan

  • Hiroyuki Muraoka;Taku M. Saitoh;Shohei Murayama
    • Journal of Ecology and Environment
    • /
    • v.47 no.4
    • /
    • pp.228-240
    • /
    • 2023
  • Growing complexity in ecosystem structure and functions, under the impacts of climate and land-use changes, requires interdisciplinary understanding of processes and of the whole system, as well as accurate estimates of the changing functions. Over the last three decades, observation networks for biodiversity, ecosystems, and ecosystem functions under climate change have been developed by interested scientists, research institutions, and universities. In this paper we review (1) the development and ongoing activities of those observation networks, (2) some outcomes from forest carbon cycle studies at our super-site, the "Takayama site" in Japan, and (3) a few ideas on how we connect in-situ and satellite observations and fill observation gaps in the Asia-Oceania region. There have been many intensive research and networking efforts to promote investigations of ecosystem change and functions (e.g., the Long-Term Ecological Research Network), measurements of greenhouse gas, heat, and water fluxes (flux networks), and biodiversity from the genetic to the ecosystem level (Biodiversity Observation Network). Combining these in-situ field research data with modeling analysis and satellite remote sensing allows the research communities to scale up spatially from local to global and temporally from the past to the future. These observation networks often use different methodologies and target different scientific disciplines. However, growing needs for comprehensive observations to understand the response of biodiversity and ecosystem functions to climate and societal changes at local, national, regional, and global scales are providing opportunities and expectations to network these networks. Among the challenges in producing and sharing integrated knowledge on climate, ecosystem functions, and biodiversity, filling scale gaps in space and time among the phenomena is crucial. To showcase such efforts, the interdisciplinary research at the Takayama super-site is reviewed, focusing on studies of the forest carbon cycle and phenology. A key approach to answering multidisciplinary questions is to integrate in-situ field research, ecosystem modeling, and satellite remote sensing by developing cross-scale methodologies at long-term observation field sites called "super-sites". The research approach at the Takayama site in Japan showcases this response to the needs of multidisciplinary questions and the further development of terrestrial ecosystem research to address environmental change issues from local to national, regional, and global scales.

Algorithm for Maximum Degree Vertex Partition of Cutwidth Minimization Problem (절단 폭 최소화 문제의 최대차수 정점 분할 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.37-42
    • /
    • 2024
  • This paper suggests a polynomial-time algorithm for the cutwidth minimization problem, which is classified as NP-complete because no polynomial-time algorithm that finds the optimal solution is yet known. To find the minimum cutwidth CW_f(G) = max_{v \in V} CW_f(v) for a given graph G=(V,E) with m=|V| and n=|E|, the proposed algorithm first divides the neighborhood N_G[v_i] of the maximum-degree vertex v_i in G into left and right parts and decides the vertical cut plane with the minimum number of edges passing through v_i. The left and right parts of N_G[v_i] are then split into horizontal sections with the minimum number of pass-through edges. Second, the vertices within each section are connected into a line graph, and the lines between sections are connected into a single linear layout. Finally, an optimization process based on vertex moving is performed to obtain the minimum cutwidth. Although the proposed algorithm requires O(n^2) time complexity, it obtains the optimal solutions for all of the various experimental data.
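
To make the objective concrete, here is a minimal sketch (not the proposed algorithm) that computes the cutwidth of a graph under a given linear layout: for every gap between consecutive positions, count the edges crossing it and take the maximum.

```python
# Minimal sketch: cutwidth of a graph under a given linear layout. The layout
# algorithm itself is the subject of the paper; this only evaluates a layout.
def cutwidth(edges, layout):
    """layout: list of vertices in left-to-right order."""
    pos = {v: i for i, v in enumerate(layout)}
    width = 0
    for gap in range(len(layout) - 1):  # gap i lies between layout[i] and layout[i+1]
        crossing = sum(1 for u, v in edges
                       if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
        width = max(width, crossing)
    return width

if __name__ == "__main__":
    E = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]  # a 4-cycle
    print(cutwidth(E, ["a", "b", "c", "d"]))  # optimal layout: cutwidth 2
    print(cutwidth(E, ["a", "d", "b", "c"]))  # worse layout: cutwidth 4
```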

Effects of Cognitive Attention on Human Multitasking Behaviors (인지적 주의가 다중 작업 행위에 미치는 영향)

  • Minsoo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.501-506
    • /
    • 2024
  • Humans have been shown to engage in multitasking behavior when searching for information on two or more topics or when searching an information system for more than one task at the same time. When processing multiple information tasks, priorities must be established because there are cognitive and physical limits on processing multiple information tasks at once. The level of cognitive attention involved in multitasking behavior can vary depending on the complexity and importance of the information task. The objectives of this study are to understand (a) the relationship between attention and information task prioritization behavior when people interact with information retrieval systems to find information for multiple tasks, and (b) the effect of the degree of attention on that prioritization behavior. A review of the relevant literature shows that when people interact with information retrieval systems to find information for multiple tasks, their level of attention affects how they prioritize the multiple information tasks, and that people pay more attention to things they find interesting or important. Human-centered system design based on a conceptual understanding of multitasking is discussed.