• Title/Summary/Keyword: Kernel-ART

Search Result 39, Processing Time 0.028 seconds

The Origin and Formation of Korean Public Art Theories in the 1980s (1980년대 민중미술론의 기원과 형성)

  • Choi, Youl
    • The Journal of Art Theory & Practice
    • /
    • no.7
    • /
    • pp.37-64
    • /
    • 2009
  • Korean public art theory originated with artists who opposed the dictatorship and allied themselves with democratic politicians. They criticized the fine art establishment supported by the regime and worked to revive 'resistance painting', 'proletarian painting', and 'realist painting'. They also took part in the new democratic movement and demonstrated for artists' rights; these activities became the kernel from which public art theory was initiated. The public artists split into several factions, participating both in the democratic social movement and in the art movement for creative freedom. They organized exhibitions across genres and in diverse spaces: exhibition halls, factories, universities, and public squares. The public art theorists, for their part, published their divergent views through newspapers, broadcasting, and unauthorized printed materials. Most of the artists and theorists maintained close ties until 1985, when the 'National Arts Association' was founded. In 1983 and 1984 they divided clearly into two camps: artists, who worked only within art museums, and activists, who worked in public spaces such as schools and convention squares. This ideological separation also raised national questions. The division between professional artists and amateurs became a social issue, read as a matter of social stratification. Conflicts also arose over creative method, between critical realism and public realism, and between Western and traditional painting. These separations and conflicts produced public artists' associations under divergent names: 'Reality and Speak' (R&S), 'KwangJu Art Association', 'Durung', 'Dang (Land)', and 'Local Youth Students Association'. Their ideologies and aims for the art movement also differed considerably.
However, these differences and conflicts weakened when the new dictatorship under President Chun Doo-hwan began suppressing democratic education. In August 1985 the government unveiled the so-called draft 'School Stabilization Law' (Hakwon Anjeong Bup) to control teachers' rights, which triggered larger street demonstrations and clashes between police and educators. In November 1985 the founding assembly of the National Arts Association opened as one combined organization. In this presentation, I summarize the stream of the art movement up to 1984 and clarify the main theories that led the public art movements of the 1980s. These theories are crucial because they became the origin of public art theory. The presentation begins with Oh Yoon's first 'Hyunsil Dongin' declaration and the absence of practice in the 1970s, and then discusses Won Dong-suk's theory as a summation of the theoretical struggles before 1980. The founding declarations of GA and R&S in the 1970s marked the start of the public art theorists' activities, and this article traces what followed: first, realism based on a consciousness of reality; second, the practice of art democratization based on that ideology; third, the subject of the public art movement, grounded in an understanding of the people's social stratification; fourth, the question of national forms and creative methods for depicting reality; and fifth, the strengths in art that the practitioners embraced. For the public art theories around 1984, I discuss the turning point of public art theory as it appeared in the practitioners' 'generation theory', 'organization theory', and 'popularization theory'. The theory of public realism, which takes the contradictions of reality as its subject and points out the limits of critical realism, not only opened new creative paths but also gave the public art activist groups a sense of solidarity.
Thereafter, the public art movements openly called for the 'dismantlement of capitalism' and a 'public revolution', and the direction of the movements was firmly established. Many opinions and views circulated during the start and formation of the public art theories. The theorists' activities were founded on practitioners whose thinking was grounded in stratification and nationalism, and the strong trend toward group division was spread by practitioners who worked together in factories, universities, squares, and rural areas. Today many of the once-active practitioners have left for fields unrelated to art, while others have, for reasons unknown, moved into the professional art world rather than public art; the theorists are in the same situation. To me this means that theory must always be grounded in practice.


Non-uniform Deblur Algorithm using Gyro Sensor and Different Exposure Image Pair (자이로 센서와 노출시간이 다른 두 장의 영상을 이용한 비균일 디블러 기법)

  • Ryu, Ho-hyeong;Song, Byung Cheol
    • Journal of Broadcast Engineering
    • /
    • v.21 no.2
    • /
    • pp.200-209
    • /
    • 2016
  • This paper proposes a non-uniform de-blur algorithm that uses an IMU sensor and a long/short exposure-time image pair to efficiently remove blur. Conventional blur-kernel estimation algorithms that rely on sensor information do not provide acceptable performance because of the limitations of the sensors. To overcome this limitation, we present a kernel refinement step, based on images with different exposure times, which improves the accuracy of the estimated kernel. Also, to address the severe degradation of visual quality that conventional non-uniform de-blur algorithms suffer with large blur kernels, this paper proposes a homography-based residual de-convolution that minimizes quality degradation such as ringing artifacts during de-convolution. Experimental results show that the proposed algorithm is superior to state-of-the-art methods in terms of both subjective and objective visual quality.
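The core idea of refining a blur kernel from a sharp/blurred pair can be illustrated in one dimension. This is not the paper's algorithm (which refines a gyro-derived, spatially varying 2-D kernel); it is a minimal least-squares sketch under the simplifying assumptions that the blur is uniform, 1-D, and that the short-exposure signal is a clean stand-in for the sharp image:

```python
# Sketch: estimate a small 1-D blur kernel h from a sharp signal s (stand-in
# for the short-exposure image) and its blurred version b (long-exposure
# image), by solving b = h * s in the least-squares sense.

def convolve(signal, kernel):
    """Valid-mode 1-D correlation: out[i] = sum_j kernel[j] * signal[i+j]."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def estimate_kernel(sharp, blurred, k=3):
    """Least-squares fit of a k-tap kernel minimizing ||blurred - h*sharp||^2."""
    n = len(sharp) - k + 1
    # Normal equations (A^T A) h = A^T b, where A[i][j] = sharp[i + j].
    ata = [[sum(sharp[i + r] * sharp[i + c] for i in range(n)) for c in range(k)]
           for r in range(k)]
    atb = [sum(sharp[i + r] * blurred[i] for i in range(n)) for r in range(k)]
    # Solve the k x k system by Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    h = [0.0] * k
    for r in range(k - 1, -1, -1):
        h[r] = (atb[r] - sum(ata[r][c] * h[c] for c in range(r + 1, k))) / ata[r][r]
    return h

sharp = [0, 1, 4, 2, 5, 3, 7, 1, 0, 6, 2, 8]
true_h = [0.25, 0.5, 0.25]
blurred = convolve(sharp, true_h)
est_h = estimate_kernel(sharp, blurred)
print([round(x, 3) for x in est_h])   # recovers [0.25, 0.5, 0.25]
```

With noise-free synthetic data the kernel is recovered exactly; the paper's refinement step matters precisely because real sensor-derived kernels and noisy images make this inverse problem ill-conditioned.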

Iterative mesh partitioning strategy for improving the efficiency of parallel substructure finite element computations

  • Hsieh, Shang-Hsien;Yang, Yuan-Sen;Tsai, Po-Liang
    • Structural Engineering and Mechanics
    • /
    • v.14 no.1
    • /
    • pp.57-70
    • /
    • 2002
  • This work presents an iterative mesh partitioning approach to improve the efficiency of parallel substructure finite element computations. The proposed approach employs an iterative strategy with a set of empirical rules derived from numerical experiments on a number of different finite element meshes. It also utilizes state-of-the-art partitioning techniques in its iterative partitioning kernel, together with a cost function that estimates the computational cost of each submesh and a mechanism that adjusts element weights to redistribute elements among submeshes during iterative partitioning, so that a mesh is partitioned into submeshes (or substructures) with balanced computational workloads. In addition, actual parallel finite element structural analyses on several test examples demonstrate the effectiveness of the approach. The results show that the proposed approach can effectively improve the efficiency of parallel substructure finite element computations.
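The load-balancing goal can be sketched in isolation. The paper wraps a graph partitioner in an iterative loop with adjusted element weights; the greedy "longest processing time" heuristic below is only a simple stand-in that shows what balancing unequal per-element costs across submeshes means:

```python
# Sketch of the balancing idea only: assign elements with unequal
# computational costs to submeshes so per-submesh workloads are even.
# (Hypothetical costs; a real cost function would model substructure
# condensation work, and real partitioning must also respect mesh adjacency.)

import heapq

def balance(costs, n_parts):
    """Greedy LPT: place each element (heaviest first) on the lightest part."""
    heap = [(0.0, p, []) for p in range(n_parts)]  # (load, part id, elements)
    heapq.heapify(heap)
    for elem in sorted(range(len(costs)), key=lambda e: -costs[e]):
        load, p, members = heapq.heappop(heap)
        members.append(elem)
        heapq.heappush(heap, (load + costs[elem], p, members))
    return sorted(heap, key=lambda t: t[1])  # one (load, part, elements) per part

costs = [9.0, 7.0, 6.0, 5.0, 5.0, 4.0, 3.0, 1.0]
parts = balance(costs, 2)
print([round(load, 1) for load, _, _ in parts])   # [21.0, 19.0]: total 40 split nearly evenly
```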

Enhance Health Risks Prediction Mechanism in the Cloud Using RT-TKRIBC Technique

  • Konduru, Venkateswara Raju;Bharamgoudra, Manjula R
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.166-174
    • /
    • 2021
  • A large volume of patient data is generated by the various devices used in healthcare applications. As the volume of data generated in the healthcare industry grows, more wellness monitoring is required, along with cloud-enabled analysis of healthcare data that predicts patient risk factors. Machine learning techniques have been developed to address these medical care problems. A novel technique called radix-trie-based Tanimoto kernel regressive infomax boost classification (RT-TKRIBC) is introduced to analyze heterogeneous health data in the cloud, predict health risks, and send alerts. The infomax boost ensemble technique improves prediction accuracy by finding the maximum mutual information, thereby minimizing the mean square error. The performance of the proposed RT-TKRIBC technique is evaluated through extensive simulations in a cloud environment, which show better prediction accuracy and shorter prediction time than the state-of-the-art methods.
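The Tanimoto kernel named in the technique has a simple closed form. The sketch below shows only that kernel; the radix-trie indexing, regression, and infomax boosting stages of RT-TKRIBC are not reproduced here:

```python
# Tanimoto kernel: k(x, y) = <x,y> / (<x,x> + <y,y> - <x,y>).
# On binary feature vectors this reduces to |intersection| / |union|.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def tanimoto_kernel(x, y):
    d = dot(x, y)
    return d / (dot(x, x) + dot(y, y) - d)

a = [1, 1, 0, 1, 0]
b = [1, 0, 0, 1, 1]
print(tanimoto_kernel(a, a))  # 1.0 (identical vectors)
print(tanimoto_kernel(a, b))  # 2 / (3 + 3 - 2) = 0.5
```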

A Novel Cross Channel Self-Attention based Approach for Facial Attribute Editing

  • Xu, Meng;Jin, Rize;Lu, Liangfu;Chung, Tae-Sun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.6
    • /
    • pp.2115-2127
    • /
    • 2021
  • Although significant progress has been made in synthesizing visually realistic face images with Generative Adversarial Networks (GANs), effective approaches that provide fine-grained control over the generation process for semantic facial attribute editing are still lacking. In this work, we propose a novel cross-channel self-attention based generative adversarial network (CCA-GAN), which weights the importance of multiple feature channels and achieves pixel-level feature alignment and conversion, reducing the impact on irrelevant attributes while editing the target attributes. Evaluation results show that CCA-GAN outperforms state-of-the-art models on the CelebA dataset, reducing Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) by 15~28% and 25~100%, respectively. Furthermore, visualization of generated samples confirms the disentanglement achieved by the proposed model.
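The KID metric cited above is the squared Maximum Mean Discrepancy between two feature sets under a cubic polynomial kernel. The sketch below shows that estimator on toy vectors; real KID is computed on Inception network features (and usually averaged over subsets), which is omitted here:

```python
# KID core: unbiased squared MMD with kernel k(x, y) = (x.y / d + 1)^3,
# where d is the feature dimension. Toy 2-D "features" stand in for
# Inception activations.

def poly_kernel(x, y):
    d = len(x)
    return (sum(a * b for a, b in zip(x, y)) / d + 1.0) ** 3

def mmd2_unbiased(xs, ys):
    m, n = len(xs), len(ys)
    kxx = sum(poly_kernel(xs[i], xs[j])
              for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    kyy = sum(poly_kernel(ys[i], ys[j])
              for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    kxy = sum(poly_kernel(x, y) for x in xs for y in ys) / (m * n)
    return kxx + kyy - 2.0 * kxy

real  = [[0.0, 0.1], [0.1, 0.0], [0.05, 0.05], [0.1, 0.1]]
close = [[0.05, 0.1], [0.1, 0.05], [0.0, 0.05], [0.05, 0.0]]
far   = [[2.0, 2.0], [2.1, 1.9], [1.9, 2.1], [2.0, 2.2]]
print(mmd2_unbiased(real, close) < mmd2_unbiased(real, far))  # True
```

A lower KID means the generated feature distribution is closer to the real one, which is why the percentage reductions reported above indicate improvement.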

Providing scalable single-operating-system NUMA abstraction of physically discrete resources

  • Baik Song An;Myung Hoon Cha;Sang-Min Lee;Won Hyuk Yang;Hong Yeon Kim
    • ETRI Journal
    • /
    • v.46 no.3
    • /
    • pp.501-512
    • /
    • 2024
  • With an explosive increase in the amount of data produced annually, researchers have been attempting to develop systems that can effectively handle large amounts of data. Single-operating-system (OS) non-uniform memory access (NUMA) abstraction technology is important because it ensures the compatibility of single-node programming interfaces across multiple nodes, at higher cost efficiency than scale-up systems. However, existing technologies have not been successful in optimizing user performance. In this paper, we introduce a single-OS NUMA abstraction technology that ensures full compatibility with the existing OS while improving performance at both the hypervisor and guest levels. Benchmark results show that the proposed technique can improve performance by up to 4.74× on average in terms of execution time compared with the existing state-of-the-art open-source technology.

Recent Research Trends of Process Monitoring Technology: State-of-the-Art (공정 모니터링 기술의 최근 연구 동향)

  • Yoo, ChangKyoo;Choi, Sang Wook;Lee, In-Beum
    • Korean Chemical Engineering Research
    • /
    • v.46 no.2
    • /
    • pp.233-247
    • /
    • 2008
  • Process monitoring technology can detect faults and process changes that occur unpredictably in a process, making it possible to find their causes and remove them, resulting in stable process operation and high-quality products. Data-driven statistical process monitoring has the merit of easily supervising a process with simple statistics, and it can be applied to the analysis of process data whenever high-quality data are available. However, because real processes exhibit nonlinearity, non-Gaussianity, multiple operating modes, sensor faults, and process changes, conventional multivariate statistical process monitoring often yields inefficient results, degraded supervision performance, or unreliable monitoring results. Since the conventional methods cannot properly supervise such processes, several advanced monitoring methods have recently been developed. This review introduces the theories and application results of several notable methods that address these weaknesses: nonlinear monitoring with kernel principal component analysis (KPCA), adaptive models for process changes, mixture models for multiple operating modes, and sensor fault detection and reconstruction.
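The KPCA machinery behind the nonlinear monitoring methods surveyed can be sketched compactly: build an RBF Gram matrix, double-center it (mean removal in feature space), and extract the leading nonlinear component. A real monitoring scheme would go on to compute T²/SPE statistics for new samples; that part, and all parameter choices below, are illustrative assumptions:

```python
# Minimal kernel-PCA sketch: centered RBF Gram matrix + power iteration
# for the leading eigenpair. Pure stdlib; no fault statistics computed.

import math, random

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kpca_scores(data, gamma=0.5, iters=200):
    n = len(data)
    k = [[rbf(data[i], data[j], gamma) for j in range(n)] for i in range(n)]
    # Double-center the Gram matrix: Kc = K - 1K/n - K1/n + 1K1/n^2.
    row = [sum(r) / n for r in k]
    tot = sum(row) / n
    kc = [[k[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]
    # Power iteration. Start from e_0, not the all-ones vector: centering
    # makes Kc annihilate the constant vector exactly.
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        w = [sum(kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(kc[i][j] * v[j] for j in range(n)) for i in range(n))
    # Score of training sample i along the first kernel principal component.
    return [lam ** 0.5 * v[i] for i in range(n)], lam

random.seed(0)
data = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(30)]
scores, lam = kpca_scores(data)
print(lam > 0, abs(sum(scores)) < 1e-6)  # leading variance positive, scores centered
```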

Lightweight Attention-Guided Network with Frequency Domain Reconstruction for High Dynamic Range Image Fusion

  • Park, Jae Hyun;Lee, Keuntek;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.205-208
    • /
    • 2022
  • Multi-exposure high dynamic range (HDR) image reconstruction, the task of reconstructing an HDR image from multiple low dynamic range (LDR) images of a dynamic scene, often produces ghosting artifacts caused by camera motion and moving objects, and also cannot handle regions washed out by over- or under-exposure. While there have been many deep-learning-based methods with motion estimation to alleviate these problems, they still have limitations for severely moving scenes. They also require large parameter counts, especially the state-of-the-art methods that employ attention modules. To address these issues, we propose a frequency-domain approach based on the idea that transform-domain coefficients inherently carry global information from all image pixels and can thus cope with large motions. Specifically, we adopt Residual Fast Fourier Transform (RFFT) blocks, which allow global interaction among pixels. We also employ Depthwise Over-parameterized convolution (DO-conv) blocks, a convolution in which each input channel is convolved with its own 2D kernel, for faster convergence and performance gains. We call this LFFNet (Lightweight Frequency Fusion Network); experiments on the benchmarks show reduced ghosting artifacts and improvements of up to 0.6 dB in tonemapped PSNR compared to recent state-of-the-art methods. Our architecture also requires fewer parameters and converges faster in training.
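The per-channel convolution the abstract describes can be illustrated directly. This sketch shows only the depthwise operation, with toy integer inputs; DO-conv additionally over-parameterizes each kernel during training and folds the extra weights away at inference, which is not shown:

```python
# Depthwise convolution: each input channel gets its own 2-D kernel,
# with no mixing across channels (valid mode, no padding).

def depthwise_conv2d(image, kernels):
    """image: [C][H][W], kernels: [C][kh][kw] -> [C][H-kh+1][W-kw+1]."""
    out = []
    for ch, k in zip(image, kernels):
        kh, kw = len(k), len(k[0])
        h_out = len(ch) - kh + 1
        w_out = len(ch[0]) - kw + 1
        out.append([[sum(k[a][b] * ch[i + a][j + b]
                         for a in range(kh) for b in range(kw))
                     for j in range(w_out)] for i in range(h_out)])
    return out

# Two channels, each convolved with its own 2x2 kernel.
img = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]],          # channel 0
       [[1, 0, 1], [0, 1, 0], [1, 0, 1]]]          # channel 1
ks = [[[1, 0], [0, 1]],                            # diagonal-sum kernel
      [[0.25, 0.25], [0.25, 0.25]]]                # 2x2 box blur
res = depthwise_conv2d(img, ks)
print(res[0])  # [[6, 8], [12, 14]]
print(res[1])  # [[0.5, 0.5], [0.5, 0.5]]
```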


Detection of Multiple Salient Objects by Categorizing Regional Features

  • Oh, Kang-Han;Kim, Soo-Hyung;Kim, Young-Chul;Lee, Yu-Ra
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.272-287
    • /
    • 2016
  • Recently, various effective contrast-based models for detecting a single salient object have been proposed. However, detection of multiple objects has received little attention, and it is a more challenging task than the single-target case: with multiple targets we face new difficulties caused by the distinct differences between the objects' properties. The reliance of existing models on the global maximum of the data distribution becomes a drawback for detecting multiple objects. In this paper, after analyzing the limitations of existing methods, we devise a three-stage process for detecting multiple salient objects. In the first stage, regional features are extracted from over-segmented regions. In the second stage, the regional features are categorized into homogeneous clusters using the mean-shift algorithm with kernel functions of various sizes. In the final stage, we compute saliency scores for the categorized regions using only spatial features, without contrast features, and integrate all scores into the final salient regions. In the experimental results, the scheme achieved superior detection accuracy on the SED2 and MSRA-ASD benchmarks, with both higher precision and better recall than state-of-the-art approaches. In particular, given multiple objects with different properties, our model significantly outperforms all existing models.
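The mean-shift clustering step in the second stage can be sketched in one dimension. This is a minimal flat-kernel version on made-up scalar features; the paper clusters multi-dimensional regional features and varies the kernel bandwidth, which is not reproduced here:

```python
# Mean-shift mode seeking with a flat kernel: repeatedly move each point
# to the mean of the data points within `bandwidth` of it, then merge
# points that converge to (nearly) the same mode.

def mean_shift(points, bandwidth, iters=50, merge_tol=1e-2):
    shifted = list(points)
    for _ in range(iters):
        new = []
        for p in shifted:
            neigh = [q for q in points if abs(q - p) <= bandwidth]
            new.append(sum(neigh) / len(neigh))
        shifted = new
    modes, labels = [], []
    for p in shifted:
        for i, m in enumerate(modes):
            if abs(p - m) < merge_tol:
                labels.append(i)
                break
        else:
            modes.append(p)
            labels.append(len(modes) - 1)
    return modes, labels

feats = [1.0, 1.1, 0.9, 1.05, 5.0, 5.2, 4.9, 5.1]   # two well-separated groups
modes, labels = mean_shift(feats, bandwidth=1.0)
print(len(modes))   # 2 clusters found
print(labels)       # [0, 0, 0, 0, 1, 1, 1, 1]
```

Because the bandwidth controls which neighbors attract each point, varying the kernel size (as the paper does) changes how coarsely the regional features are grouped.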

Fully nonlinear time-domain simulation of a backward bent duct buoy floating wave energy converter using an acceleration potential method

  • Lee, Kyoung-Rok;Koo, Weoncheol;Kim, Moo-Hyun
    • International Journal of Naval Architecture and Ocean Engineering
    • /
    • v.5 no.4
    • /
    • pp.513-528
    • /
    • 2013
  • A floating Oscillating Water Column (OWC) wave energy converter, a Backward Bent Duct Buoy (BBDB), was simulated using a state-of-the-art, two-dimensional, fully nonlinear Numerical Wave Tank (NWT) technique, and the hydrodynamic performance of the floating OWC device was evaluated in the time domain. The acceleration potential method, with a fully updated kernel matrix calculation associated with a mode decomposition scheme, was implemented to obtain accurate estimates of the hydrodynamic force and displacement of the freely floating BBDB. The developed NWT was based on potential theory and the boundary element method with constant panels on the boundaries. The mixed Eulerian-Lagrangian (MEL) approach was employed to capture the nonlinear free surfaces inside the chamber, which interact with the pneumatic pressure induced by the time-varying airflow velocity at the air duct. A special viscous damping was applied to the chamber free surface to represent the viscous energy loss due to the BBDB's shape and motions, and the viscous damping coefficient was selected by comparison with experimental data. The calculated surface elevations inside and outside the chamber, with the tuned viscous damping, correlated reasonably well with the experimental data for various incident wave conditions. Conservation of the total wave energy in the computational domain was confirmed over the entire range of wave frequencies.