• Title/Summary/Keyword: Complexity.


Predicting CEFR Levels in L2 Oral Speech, Based on Lexical and Syntactic Complexity

  • Hu, Xiaolin
    • Asia Pacific Journal of Corpus Research / v.2 no.1 / pp.35-45 / 2021
  • With the wide spread of the Common European Framework of Reference (CEFR) scales, many studies attempt to apply them in routine teaching and rater training, while more evidence regarding criterial features at different CEFR levels is still urgently needed. The current study explores complexity features that distinguish and predict CEFR proficiency levels in oral performance. Using a quantitative, corpus-based approach, it analyzed lexical and syntactic complexity features in 80 transcriptions (covering the A1, A2, and B1 CEFR levels as well as native speakers) drawn from an interview test, the Standard Speaking Test (SST). ANOVA and correlation analysis were conducted to exclude insignificant complexity indices before the discriminant analysis. Distinctive differences in complexity between CEFR speaking levels were observed, and with a combination of six major complexity features as predictors, 78.8% of the oral transcriptions were classified into the appropriate CEFR proficiency levels. This further confirms that the CEFR level of L2 learners can be predicted from objective linguistic features. The study can serve as an empirical reference in language pedagogy, especially for L2 learners' self-assessment and teachers' prediction of students' proficiency levels, and it offers implications for validating rating criteria and improving rating systems.
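The pipeline this abstract describes (screen complexity indices with ANOVA, then classify with discriminant analysis) can be sketched as follows. The feature matrix, the number of indices, and the significance threshold are invented for illustration; only the 80-transcription, four-group design comes from the abstract.

```python
# Sketch: ANOVA-based feature screening followed by linear discriminant
# analysis. Features and data are synthetic, not the study's indices.
import numpy as np
from scipy.stats import f_oneway
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
levels = np.repeat(["A1", "A2", "B1", "NS"], 20)   # 80 transcriptions
X = rng.normal(size=(80, 10))                      # 10 hypothetical indices

# Keep only indices that differ significantly across levels (one-way ANOVA).
keep = []
for j in range(X.shape[1]):
    groups = [X[levels == g, j] for g in ("A1", "A2", "B1", "NS")]
    if f_oneway(*groups).pvalue < 0.05:
        keep.append(j)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X[:, keep] if keep else X, levels, cv=5).mean()
print(f"{len(keep)} indices retained, CV accuracy = {acc:.1%}")
```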

The Study of Convergence on Lexical Complexity, Syntax Complexity, and Correlation among Language Variables (한국어 학습자의 어휘복잡성, 구문복잡성 및 언어능력 변인들 간의 상관에 관한 융합 연구)

  • Lee, Mi-Kyung;Noh, Byungho;Kang, Anyoung
    • Journal of the Korea Convergence Society / v.8 no.4 / pp.219-229 / 2017
  • This study examined the lexical and syntactic complexity of Korean learners' speech, elicited by having them tell stories about pictures. The results were as follows. First, there was no significant difference according to nationality. Second, when differences in lexical and syntactic complexity were examined by length of Korean study, only the number of different words (NDW) showed a significant difference among the lexical-complexity sub-variables, while no differences appeared among the syntactic-complexity sub-variables. Third, correlations among length of stay in Korea, length of Korean study, and the other language-related variables were examined: length of stay in Korea correlated significantly with the other language-related variables, except between the Korean study period and TTR. Based on these results, directions for teaching Korean learners were suggested from a convergence perspective.
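Two of the lexical-complexity sub-variables named above, number of different words (NDW) and type-token ratio (TTR), are simple to compute. A minimal sketch, assuming naive whitespace tokenization:

```python
# Minimal illustration of two lexical-complexity indices from the abstract:
# number of different words (NDW) and type-token ratio (TTR).
def lexical_complexity(transcript: str) -> dict:
    tokens = transcript.lower().split()   # naive whitespace tokenization
    types = set(tokens)
    return {
        "tokens": len(tokens),
        "NDW": len(types),                               # number of different words
        "TTR": len(types) / len(tokens) if tokens else 0.0,
    }

print(lexical_complexity("the boy sees the dog and the dog sees the boy"))
# {'tokens': 11, 'NDW': 5, 'TTR': 0.4545...}
```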

Adaptive De-interlacing Algorithm using Method Selection based on Degree of Local Complexity (지역 복잡도 기반 방법 선택을 이용한 적응적 디인터레이싱 알고리듬)

  • Hong, Sung-Min;Park, Sang-Jun;Jeong, Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.4C / pp.217-225 / 2011
  • In this paper, we propose an adaptive de-interlacing algorithm based on the degree of local complexity. Conventional intra-field de-interlacing algorithms perform differently depending on how they find the edge direction. In particular, the FDD (Fine Directional De-interlacing) algorithm outperforms the others, but its computational complexity is too high. To alleviate these problems, the proposed algorithm selects the most efficient de-interlacing algorithm among LA (Line Average), MELA (Modified Edge-based Line Average), and LCID (Low-Complexity Interpolation Method for De-interlacing), all of which combine low complexity with good performance. The selection is trained on the DoLC (Degree of Local Complexity). Simulation results show that the proposed algorithm not only has low complexity but also achieves better objective and subjective image quality than conventional intra-field methods.
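A hedged sketch of the selection idea: estimate local complexity around each missing pixel and use a cheap line average in flat areas, switching to a simple edge-directed interpolation in complex areas. The threshold, the complexity score, and the simplified interpolators are assumptions; the paper's trained DoLC-based selector over LA, MELA, and LCID is not reproduced here.

```python
import numpy as np

def deinterlace_row(above: np.ndarray, below: np.ndarray,
                    thresh: float = 10.0) -> np.ndarray:
    """Interpolate one missing field line from its neighbours (sketch only)."""
    out = np.empty_like(above, dtype=np.float64)
    for x in range(1, len(above) - 1):
        # crude local-complexity score: vertical difference at this pixel
        dolc = abs(float(above[x]) - float(below[x]))
        if dolc < thresh:
            out[x] = (float(above[x]) + float(below[x])) / 2   # line average (LA)
        else:
            # edge-directed: average along the direction of least difference
            diffs = {d: abs(float(above[x - d]) - float(below[x + d]))
                     for d in (-1, 0, 1)}
            d = min(diffs, key=diffs.get)
            out[x] = (float(above[x - d]) + float(below[x + d])) / 2
    out[0], out[-1] = out[1], out[-2]   # replicate at the borders
    return out

above = np.array([10, 10, 10, 200, 200, 200], dtype=np.uint8)
below = np.array([10, 10, 200, 200, 200, 200], dtype=np.uint8)
print(deinterlace_row(above, below))
```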

Distributed video coding complexity balancing method by phase motion estimation algorithm (단계적 움직임 예측을 이용한 분산비디오코딩(DVC)의 복잡도 분배 방법)

  • Kim, Chul-Keun;Kim, Min-Geon;Suh, Doug-Young;Park, Jong-Bin;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering / v.15 no.1 / pp.112-121 / 2010
  • Distributed video coding is a coding paradigm that, in contrast with conventional video coding, allows complexity to be shared between encoder and decoder. We propose a method for balancing encoder/decoder complexity using a phased motion estimation algorithm. The encoder performs a partial motion estimation, transfers its result to the decoder, and the decoder then performs motion estimation within a narrow range around it. When the encoder can afford some complexity, this balancing becomes possible, and the proposed method exposes the relationship between complexity balancing and coding efficiency: coding efficiency grows faster with added encoder complexity than with added decoder complexity. The proposed method can thus trade off complexity against coding efficiency according to the devices' resources and channel conditions.
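The phased search can be illustrated with plain SAD block matching: the encoder runs a coarse, large-step search (phase 1) and transmits the coarse vector, and the decoder refines it within a narrow window (phase 2). Block size, search radii, and step sizes below are illustrative, not the paper's settings.

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def search(ref, block, cy, cx, radius, step):
    """Block-matching search around (cy, cx) with the given radius/step."""
    B = block.shape[0]
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            y, x = cy + dy, cx + dx
            if 0 <= y <= ref.shape[0] - B and 0 <= x <= ref.shape[1] - B:
                cost = sad(ref[y:y + B, x:x + B], block)
                if cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv

# Phase 1 (encoder): coarse partial search, large step -> low encoder load.
# Phase 2 (decoder): fine search confined to a narrow window around the
# transmitted coarse vector.
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur_block, cy, cx = ref[20:28, 20:28], 20, 20
coarse = search(ref, cur_block, cy, cx, radius=8, step=4)   # sent to decoder
fine = search(ref, cur_block, cy + coarse[0], cx + coarse[1], radius=2, step=1)
print("motion vector:", (coarse[0] + fine[0], coarse[1] + fine[1]))
```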

Regression Analysis of the Relationships between Complexity Metrics and Faults on the Telecommunication Program (통신 소프트웨어의 프로그램 결함과 복잡도의 관련성 분석을 위한 회귀분석 모델)

  • Lee, Gyeong-Hwan;Jeong, Chang-Sin;Hwang, Seon-Myeong;Jo, Byeong-Gyu;Park, Ji-Hun;Kim, Gang-Tae
    • Journal of KIISE: Software and Applications / v.26 no.11 / pp.1282-1287 / 1999
  • Switching software requires high reliability, functionality, extendability, and maintainability. To evaluate the relationship between program faults and complexity, we built regression models from program test results and McCabe complexity measurements and analyzed their significance. The switching software studied comprises more than 500 blocks, each composed of about 10 files, implementing 59 switching functions; blocks whose extreme complexity would bias the statistics were excluded, leaving 394 blocks that were analyzed with testing tools and the SAS package. We designed regression models based on McCabe's cyclomatic complexity, block design complexity, design complexity, and integration complexity, and tested the significance of the equations with the t-distribution. The experiments show that the number of faults is most strongly influenced by McCabe's complexity and the design complexity, and that fault density depends on the complexity and size of the function unit. These results provide useful design and maintenance information.
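The analysis reduces to an ordinary least-squares regression of fault counts on complexity measures, with t-tests on the coefficients. A sketch with synthetic data (the 394 switching blocks are not public):

```python
# Sketch of the paper's analysis: regress fault counts on McCabe's
# cyclomatic complexity and design complexity, then t-test the
# coefficients. Data below is synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 394
cyclomatic = rng.poisson(12, n)
design = rng.poisson(6, n)
faults = 0.4 * cyclomatic + 0.3 * design + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([cyclomatic, design]))
model = sm.OLS(faults, X).fit()
print(model.summary())   # t statistics / p-values per coefficient
```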

Measurement of Classes Complexity in the Object-Oriented Analysis Phase (객체지향 분석 단계에서의 클래스 복잡도 측정)

  • Kim, Yu-Kyung;Park, Jai-Nyun
    • Journal of KIISE: Software and Applications / v.28 no.10 / pp.720-731 / 2001
  • Complexity metrics developed for the structured paradigm of software development are not suitable for the object-oriented (OO) paradigm, because they do not support key OO concepts such as inheritance, polymorphism, message passing, and encapsulation. Much research exists on OO software metrics such as program complexity and design metrics, but metrics that measure the complexity of classes at the OO analysis phase are also needed, because they provide earlier feedback to the development project, and earlier feedback means more effective development and less costly maintenance. In this paper, we propose new metrics to measure the complexity of analysis classes identified during analysis based on RUP (Rational Unified Process). The collaboration complexity, denoted CC, is the maximum number of collaborations that can be achieved with each collaborator and captures the potential complexity; the interface complexity, denoted IC, reflects the difficulty of understanding the collaborators' interfaces. We verify the suggested metrics theoretically against Weyuker's nine properties, and we report computation results for the analysis classes of a system that automatically answers user questions using text-mining techniques. Compared with the CBO and WMC metrics suggested by Chidamber and Kemerer, classes with high values of the proposed metrics remain highly complex at the design phase, and the complexity is represented better by CC and IC than by CBO and WMC. We therefore expect these metrics to provide earlier feedback, making it possible to predict the effort, cost, and time required by the remaining processes and to develop cost-effective OO software by reviewing the complexity of analysis classes in the first stage of the SDLC (Software Development Life Cycle).
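A hedged reading of the two metrics: if an analysis class's collaborations are recorded as messages per collaborator, something like CC can be computed from the number of possible collaborations and something like IC from the distinct interface operations involved. The data structure and the aggregation below are assumptions for illustration, not the paper's formal definitions.

```python
# Hedged sketch of analysis-phase metrics in the spirit of CC and IC.
collaborations = {            # analysis class -> {collaborator: [messages]}
    "OrderController": {
        "Catalog": ["findItem", "price"],
        "Cart":    ["add", "remove", "total"],
        "Payment": ["authorize"],
    },
}

for cls, collab in collaborations.items():
    cc = sum(len(msgs) for msgs in collab.values())          # collaboration complexity
    ic = len({m for msgs in collab.values() for m in msgs})  # interface complexity
    print(f"{cls}: CC={cc}, IC={ic}")
```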


COMPLEXITY, HEEGAARD DIAGRAMS AND GENERALIZED DUNWOODY MANIFOLDS

  • Cattabriga, Alessia;Mulazzani, Michele;Vesnin, Andrei
    • Journal of the Korean Mathematical Society / v.47 no.3 / pp.585-598 / 2010
  • We deal with the Matveev complexity of compact orientable 3-manifolds represented via Heegaard diagrams. This leads us to the definition of the modified Heegaard complexity of Heegaard diagrams and of manifolds. We define a class of manifolds generalizing Dunwoody manifolds, including cyclic branched coverings of two-bridge knots and links, torus knots, some pretzel knots, and some theta-graphs. Using the modified Heegaard complexity, we obtain upper bounds for their Matveev complexity that depend linearly on the order of the covering. Moreover, using homology arguments due to Matveev and Pervova, we obtain lower bounds.
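A hedged restatement of the bound: for the cyclic coverings $M_n$ considered, the Matveev complexity $c$ is bounded above by the modified Heegaard complexity $\widetilde{c}$, which grows linearly in the covering order $n$; the constants $A$ and $B$ below stand in for quantities depending on the branching set, not on $n$.

```latex
\[
  c(M_n) \;\le\; \widetilde{c}(M_n) \;\le\; A\,n + B .
\]
```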

DISTRIBUTED ALGORITHMS SOLVING THE UPDATING PROBLEMS

  • Park, Jung-Ho;Park, Yoon-Young;Choi, Sung-Hee
    • Journal of Applied Mathematics & Informatics / v.9 no.2 / pp.607-620 / 2002
  • In this paper, we consider the updating problems of reconstructing the biconnected components and the weighted shortest paths in response to topology changes of the network. We propose two distributed algorithms. The first solves the updating problem of reconstructing the biconnected components after several processors and links are added or deleted. Its bit complexity is O((n'+a+d) log n'), its message complexity is O(n'+a+d), its ideal time complexity is O(n'), and its space complexity is O(e log n + e' log n'). The second algorithm solves the updating problem of reconstructing the weighted shortest paths; its message complexity and ideal time complexity are both $O(u^2+a+n')$.

NEW COMPLEXITY ANALYSIS OF PRIMAL-DUAL IPMS FOR P* LCPS BASED ON LARGE UPDATES

  • Cho, Gyeong-Mi;Kim, Min-Kyung
    • Bulletin of the Korean Mathematical Society / v.46 no.3 / pp.521-534 / 2009
  • In this paper we present new large-update primal-dual interior point algorithms for $P_*$ linear complementarity problems (LCPs) based on a class of kernel functions, ${\psi}(t)={\frac{t^{p+1}-1}{p+1}}+{\frac{1}{\sigma}}(e^{{\sigma}(1-t)}-1)$, $p \in [0, 1]$, ${\sigma}{\geq}1$. This is the first use of this class of kernel functions in the complexity analysis of interior point methods (IPMs) for $P_*$ LCPs. We show that if a strictly feasible starting point is available, then the new large-update primal-dual interior point algorithms for $P_*$ LCPs have an $O((1+2\kappa)n^{\frac{1}{p+1}}\log n\log{\frac{n}{\varepsilon}})$ complexity bound. When p = 1, we obtain $O((1+2\kappa)\sqrt{n}\log n\log\frac{n}{\varepsilon})$ complexity, which is so far the best known complexity for large-update methods.
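As a quick check, setting p = 1 in the general bound recovers the stated $\sqrt{n}$ bound, since $n^{\frac{1}{p+1}} = n^{\frac{1}{2}} = \sqrt{n}$:

```latex
\[
  O\!\Big((1+2\kappa)\,n^{\frac{1}{p+1}}\log n\,\log\frac{n}{\varepsilon}\Big)
  \;\Big|_{p=1}
  \;=\;
  O\!\Big((1+2\kappa)\,\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon}\Big).
\]
```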

The Use of VFG for Measuring the Slice Complexity (슬라이스 복잡도 측정을 위한 VFG의 사용)

  • 문유미;최완규;이성주
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.1 / pp.183-191 / 2001
  • We develop a new data slice representation, called the data value flow graph (VFG), for modeling the information flow on data slices. We then define a slice complexity measure, using an existing flow complexity measure, to quantify the complexity of the information flow on the VFG. We show how the slice complexity of a slice relates to the slice complexity of a procedure, and we demonstrate the measurement-scale factors through a set of atomic modifications and a concatenation operator on the VFG.
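A minimal sketch of the idea, assuming the VFG is a directed graph whose nodes are definition/use points of data values in a slice and whose edges are value flows; counting flow edges here stands in for the paper's flow-complexity measure, which is not reproduced.

```python
# Hedged sketch of a value flow graph (VFG) and a simple edge-count
# complexity standing in for the paper's flow-complexity measure.
from collections import defaultdict

class VFG:
    def __init__(self):
        self.edges = defaultdict(set)   # node -> set of successor nodes

    def add_flow(self, src: str, dst: str) -> None:
        self.edges[src].add(dst)

    def slice_complexity(self) -> int:
        return sum(len(v) for v in self.edges.values())

g = VFG()
for src, dst in [("x@1", "y@2"), ("y@2", "z@3"), ("x@1", "z@3")]:
    g.add_flow(src, dst)
print(g.slice_complexity())   # 3
```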
