• Title/Summary/Keyword: 영상 제거 (image removal)

Search Results: 2,943

Low-complexity Local Illuminance Compensation for Bi-prediction mode (양방향 예측 모드를 위한 저복잡도 LIC 방법 연구)

  • Choi, Han Sol; Byeon, Joo Hyung; Bang, Gun; Sim, Dong Gyu
    • Journal of Broadcast Engineering, v.24 no.3, pp.463-471, 2019
  • This paper proposes a method for reducing the complexity of Local Illuminance Compensation (LIC) in bi-directional inter prediction. LIC improves inter-prediction accuracy by compensating local illumination using the neighboring reconstructed samples of the current block and the reference block. Because the weight and offset required for this compensation are derived from reconstructed samples at both the encoder and the decoder, coding efficiency improves without signaling any additional information; however, deriving them in both the encoding prediction step and the decoding step increases encoder and decoder complexity. Two low-complexity LIC methods are proposed: the first applies illumination compensation with an offset only in bi-directional prediction, and the second applies LIC after the weighted-averaging step of the reference blocks obtained by bi-directional prediction. To evaluate the proposed methods, BD-rate was compared against BMS-2.0.1 on classes B, C, and D of the MPEG standard test sequences under the Random Access (RA) condition. Experimental results show average BD-rate losses of 0.29%, 0.23%, and 0.04% for Y, U, and V, respectively, with almost unchanged encoding/decoding time. Although some BD-rate is lost, the computational complexity of LIC is greatly reduced: multiplication operations are removed and additions are halved in the LIC parameter derivation.
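The first proposed method (offset-only compensation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the simple mean-difference offset are assumptions, but they show why the multiplications of full weight-plus-offset LIC disappear.

```python
import numpy as np

def lic_offset_only(ref_block, cur_neighbors, ref_neighbors):
    """Offset-only local illumination compensation (hypothetical sketch).

    Instead of solving for a weight w and an offset o (which requires
    multiplications), use only an offset: the mean difference between the
    current block's and the reference block's neighboring reconstructed
    samples. Parameter derivation then needs only additions, as in the
    paper's first low-complexity method.
    """
    offset = int(round(cur_neighbors.mean() - ref_neighbors.mean()))
    return ref_block + offset
```

Because the same neighboring reconstructed samples are available at the decoder, the offset can be re-derived there without any signaling.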

Rendering Quality Improvement Method based on Depth and Inverse Warping (깊이정보와 역변환 기반의 포인트 클라우드 렌더링 품질 향상 방법)

  • Lee, Heejea; Yun, Junyoung; Park, Jong-Il
    • Journal of Broadcast Engineering, v.26 no.6, pp.714-724, 2021
  • Point cloud content is immersive content that records real environments and objects as points with three-dimensional position and color information. When such content is enlarged and rendered, the gaps between points widen and empty holes appear. This paper proposes a method for improving the rendering quality of point cloud content through inverse-warping-based interpolation that uses depth information for the holes created by these gaps. Back-surface points rendered through the holes hinder interpolation, so the points corresponding to the back side of the point cloud are first removed. Next, a depth map is extracted at the viewpoint where the holes occur. Finally, an inverse warp is performed to fetch the corresponding pixels from the original data. Rendering content with the proposed method improved average PSNR by 1.2 dB over the conventional approach of enlarging point size to fill the blank areas.
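The final step, fetching source pixels for hole positions, can be sketched as below. This is a deliberately simplified illustration: the paper's warp is depth-guided, whereas this sketch assumes a plain uniform scale and nearest-neighbour sampling; the function name and arguments are hypothetical.

```python
import numpy as np

def fill_holes_inverse_warp(enlarged, holes, source, scale):
    """Fill hole pixels by inverse-mapping them to the source image.

    `holes` is a boolean mask of empty pixels in the enlarged render.
    Each hole position is mapped back to source coordinates (here by
    dividing by the scale factor; the paper uses depth information to
    drive this mapping) and filled with the sampled source pixel.
    """
    out = enlarged.copy()
    ys, xs = np.nonzero(holes)
    src_y = np.clip((ys / scale).astype(int), 0, source.shape[0] - 1)
    src_x = np.clip((xs / scale).astype(int), 0, source.shape[1] - 1)
    out[ys, xs] = source[src_y, src_x]
    return out
```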

The Visual Aesthetics of Drone Shot and Hand-held Shot based on the Representation of Place and Space : focusing on World Travel 'Peninsula de Yucatán' Episode (장소와 공간의 재현적 관점에서 본 드론 쇼트와 핸드헬드 쇼트의 영상 미학 : <세계테마기행> '유카탄 반도'편을 중심으로)

  • Ryu, Jae-Hyung
    • Journal of Korea Entertainment Industry Association, v.14 no.3, pp.251-265, 2020
  • The drone shot is a moving image captured by a remotely controlled unmanned aerial vehicle, usually taking a bird's-eye view. The hand-held shot is a moving image recorded by literal handheld shooting, specialized for on-the-spot filming; it takes a walker's viewpoint through the vivid realism of its self-reflexive camera movements. The purpose of this study is to comparatively analyze the aesthetic functions of the drone shot and the hand-held shot. To this end, the study draws on Certeau's concepts of 'place' and 'space,' chooses the World Travel 'Peninsula de Yucatan' episode as its research object, and applies the two concepts analytically to scenes that clearly present the two shots' aesthetic characteristics. As a result, the drone shot took an authoritative viewpoint providing general information and atmosphere, overlooking the city with silent movements stripped of self-reflexivity. This aesthetic function was reinforced by the narration and subtitles, which mediated prior knowledge about the proper rules and orders of the place; the drone shot thus tended to project the location as a place. Conversely, the hand-held shot experienced the space practically through free walking, unbound by the rules and orders inherent in the city. The aesthetics of hand-held images represented the tactic of resisting the strategy of a subject of will and power, in that the hand-held shot practiced anthropological walking by noticing the everyday lives of the small town and countryside rather than the main tourist attractions. In opposition to the drone shot, the hand-held shot tended to reflect the location as a space.

Digital color practice using Adobe AI intelligence research on application method - Focusing on color practice through Adobe Sensei - (어도비 AI 지능을 활용한 디지털 색채 실습에 관한 적용방식 연구 -쎈쎄이(Adobe Sensei)을 통한 색채 실습을 중심으로-)

  • Cho, Hyun Kyung
    • The Journal of the Convergence on Culture Technology, v.8 no.6, pp.801-806, 2022
  • Color capability in the digital era is a demand of the times, and research is needed on improving color practice across the four subdivided digital areas not covered by existing practice. Digital majors, whose needs are difficult to meet with traditional paint-based color practice, require classes in digital color practice in these more specialized areas, so the use of efficient artificial intelligence for teaching digitized color and color sense was studied. This paper seeks to expand the domain of color practice by proposing digital color and color-matching practice based on Photoshop's artificial intelligence and big-data technology, going beyond the existing exercises that only CMYK could support. In addition, based on the per-user color quantification data provided by the artificial intelligence of the latest Adobe Sensei program, the practice aims to improve learners' prediction of actual color combinations and arbitrary colors using filter effects. In conclusion, this is a study on using programs that eliminate the ambiguity of the mixing process in traditional paint practice, secure digital color detail, and offer an effective learning method for beginners and intermediate learners to develop their color sense with artificial-intelligence support. The Adobe program practice methods needed for coloration and main colors, together with teaching-skill improvements over existing paint practice, are presented through theoretical consideration.

Automatic 3D data extraction method of fashion image with mannequin using watershed and U-net (워터쉐드와 U-net을 이용한 마네킹 패션 이미지의 자동 3D 데이터 추출 방법)

  • Youngmin Park
    • The Journal of the Convergence on Culture Technology, v.9 no.3, pp.825-834, 2023
  • The demands of people who purchase fashion products through Internet shopping are gradually increasing, and attempts are being made to provide user-friendly 3D content and web 3D software instead of the pictures and videos currently provided. This has emerged as the most important issue in the fashion web-shopping industry because complaints that the received product differs from the image shown at the time of purchase have grown. Various image-processing technologies have been introduced to address this, but 2D images have inherent quality limits. In this study, we propose an automatic conversion technology that converts 2D images into 3D, grafts them onto web 3D technology that lets customers examine products from various viewpoints, and reduces the cost and computation time required for conversion. We developed a system that photographs a mannequin on a rotating turntable using only 8 cameras. To extract only the clothing from the captured images, markers are removed with U-net, and an algorithm is proposed that isolates the clothing area by identifying the color-feature information of the background and mannequin areas. With this algorithm, extracting the clothing area takes 2.25 seconds per image, or 144 seconds (2 minutes and 24 seconds) in total for the 64 images of one piece of clothing. The system extracts 3D objects with very good performance compared to existing systems.
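The color-feature step can be illustrated with a minimal sketch. This is an assumption-laden simplification of the abstract's description: the U-net marker removal is omitted, the function name and the fixed reference colors are hypothetical, and a plain Euclidean distance in RGB stands in for whatever color features the paper actually uses.

```python
import numpy as np

def extract_clothing(image, bg_color, mannequin_color, tol=30):
    """Keep only pixels whose color is far from both the background and
    mannequin reference colors (hypothetical simplification of the
    paper's color-feature segmentation; marker removal via U-net is
    not shown here). Returns a boolean clothing mask.
    """
    dist_bg = np.linalg.norm(image.astype(int) - bg_color, axis=-1)
    dist_mq = np.linalg.norm(image.astype(int) - mannequin_color, axis=-1)
    return (dist_bg > tol) & (dist_mq > tol)
```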

A Case of Metastatic Ampulla of Vater Cancer Treated with Chemotherapy Followed by Pylorus Preserving Pancreaticoduodenectomy (항암화학요법과 수술을 통해 완전 관해를 획득한 진행성 십이지장 유두암 증례)

  • Hae Ryong Yun; Moon Jae Chung; Seungmin Bang; Seung Woo Park; Si Young Song
    • Journal of Digestive Cancer Research, v.2 no.2, pp.75-77, 2014
  • Ampulla of Vater (AOV) cancer is a rare malignant tumor arising within the vicinity of the AOV. Metastatic AOV adenocarcinoma has a poor prognosis, with a 2-year overall survival rate of 5 to 10%. The Surveillance, Epidemiology, and End Results Program of the National Cancer Institute indicated that lymph node metastasis, present in as many as half of patients, was associated with poor prognosis, and that the liver was the second most common site of distant metastasis in AOV cancer. In this case report, we describe complete resolution of an AOV cancer that had already spread to retroperitoneal lymph nodes and the liver. The patient underwent gemcitabine plus cisplatin chemotherapy with palliative aim. After 12 months of chemotherapy, imaging showed partial remission, so intraoperative radiofrequency ablation and pylorus-preserving pancreaticoduodenectomy were performed. The AOV cancer was completely resected, and the patient was followed up without recurrence for 7 months.

Exergetic Analysis of Ammonia-fueled Solid Oxide Fuel Cell Systems for Power Generation (암모니아 활용 고체산화물 연료전지 발전시스템의 엑서지 분석)

  • Thai-Quyen Quach; Young Gyun Bae; Kook Young Ahn; Sun Youp Lee; Young Sang Kim
    • Journal of the Korean Institute of Gas, v.27 no.3, pp.27-34, 2023
  • Using ammonia as fuel for solid oxide fuel cells (SOFC) has become an attractive topic due to its high efficiency, environmental friendliness, and ease of storage and transportation. Several configurations of ammonia-fed SOFC systems have been proposed and investigated, demonstrating high electrical efficiency. However, to further enhance efficiency, it is crucial to identify the inefficient components of the system. The exergy concept is well suited to this purpose, making exergetic analysis essential for ammonia-fed SOFC systems. This study conducts an exergetic analysis of three selected systems: a simple fuel cell system (FC), an anode off-gas recirculation system (RC-FC), and a recirculation system with water removal (RC-WR-FC). The results reveal that the exergetic efficiencies of the FC, RC-FC, and RC-WR-FC systems are 48.7%, 51.6%, and 58.4%, respectively. In all three systems, the SOFC stack is the main source of exergy destruction; however, other components with relatively low exergetic efficiency, such as the burner, air heat exchanger, and cooler/condenser, offer greater opportunities for improvement.
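The quantity being compared across the three systems is the second-law (exergetic) efficiency, which can be written as a one-line calculation. The function name and the illustrative numbers below are assumptions; the 48.7% figure merely echoes the FC system's reported efficiency for a 100 kW exergy input.

```python
def exergetic_efficiency(exergy_in_kw, useful_exergy_out_kw):
    """Second-law efficiency: useful exergy output over exergy input.
    The shortfall is the total exergy destruction across the system's
    components (stack, burner, heat exchangers, cooler/condenser).
    Returns (efficiency, destruction_kw)."""
    destruction = exergy_in_kw - useful_exergy_out_kw
    return useful_exergy_out_kw / exergy_in_kw, destruction
```

Component-by-component, the same ratio applied to each unit's exergy streams is what singles out the stack as the dominant destruction source.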

Improving target recognition of active sonar multi-layer processor through deep learning of a small amounts of imbalanced data (소수 불균형 데이터의 심층학습을 통한 능동소나 다층처리기의 표적 인식성 개선)

  • Young-Woo Ryu; Jeong-Goo Kim
    • The Journal of the Acoustical Society of Korea, v.43 no.2, pp.225-233, 2024
  • Active sonar transmits sound waves to detect covertly maneuvering underwater objects and receives the signals reflected from the target. However, the received signal mixes the target's echo with seafloor and sea-surface reverberation, biological noise, and other noise, making target recognition difficult. Conventional techniques that detect signals above a threshold not only produce false detections or miss targets depending on the chosen threshold, but also require setting an appropriate threshold for each underwater environment. To overcome this, automatic threshold calculation through techniques such as Constant False Alarm Rate (CFAR) and the application of advanced tracking filters and association techniques have been studied, but they are limited in environments where a large number of detections occur. With the recent development of deep learning, efforts have been made to apply it to underwater target detection, but active sonar data for classifier training is very difficult to acquire: the data is scarce, and targets are far outnumbered by non-targets, creating a severe class imbalance. In this paper, images of the energy distribution of the detection signal are used, and a classifier is trained in a way that accounts for the data imbalance to distinguish targets from non-targets; it is then added to the existing multi-layer processing. The proposed technique minimizes target misclassification and eliminates non-targets, making target recognition easier for active sonar operators, and its effectiveness was verified with sea-experiment data obtained in the East Sea.
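One common way to make training "account for the imbalance" is to weight the loss by inverse class frequency, so the few target samples count as much as the many non-targets. The abstract does not specify the paper's exact scheme, so the sketch below is a generic, hypothetical illustration of that idea.

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency class weights for an imbalanced training set
    (e.g. a handful of sonar targets vs. many non-targets). A class
    appearing half as often gets twice the weight, so the weighted loss
    treats both classes equally. The paper's actual imbalance handling
    may differ; this is a common generic remedy."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))
```

These weights would typically be passed to the classifier's loss function (e.g. as per-class weights in a cross-entropy loss).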

Development of a Multi-Camera Inline System using Machine Vision System for Quality Inspection of Pharmaceutical Containers (의약 용기의 품질 검사를 위한 머신비전을 적용한 다중 카메라 인라인 검사 시스템 개발)

  • Tae-Yoon Lee; Seok-Moon Yoon; Seung-Ho Lee
    • Journal of IKEEE, v.28 no.3, pp.469-473, 2024
  • This paper proposes the development of a multi-camera inline inspection system using machine vision for the quality inspection of pharmaceutical containers. The proposed technique captures the containers from multiple angles with several cameras, allowing more accurate quality assessment. Based on the captured data, the system inspects the dimensions and defects of the containers and, upon detecting a defect, notifies the user and automatically removes the defective container, thereby enhancing inspection efficiency. Development proceeded in four stages: first, the design and production of a control unit that fixes or rotates the containers via suction; second, the design and production of the main system body that moves, captures, and ejects defective products; third, the design and development of the control logic for the embedded board that controls the entire system; and finally, the design and development of a graphical user interface (GUI) that detects container defects using image processing of the captured images. The system's performance was evaluated in experiments conducted by a certified testing agency. The dimensional measurement error ranged from -0.30 to 0.28 mm (outer diameter) and from -0.11 to 0.57 mm (overall length), superior to the global standard of 1 mm, and operational stability was measured at 100%, demonstrating the system's reliability. The efficacy of the proposed multi-camera inline inspection system for the quality inspection of pharmaceutical containers has thus been validated.
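The pass/fail decision implied by those error ranges can be sketched as a simple tolerance check. The function name, nominal dimensions, and return shape are assumptions for illustration; the 1 mm tolerance is the global standard cited in the abstract, and camera I/O and image processing are omitted.

```python
def inspect_container(outer_diameter_mm, length_mm, ref_od_mm, ref_len_mm,
                      tol_mm=1.0):
    """Compare measured container dimensions against nominal values.
    A container passes when both the outer-diameter error and the
    overall-length error fall within +/- tol_mm (1 mm, the cited global
    standard). Returns (passed, od_error, length_error)."""
    od_err = outer_diameter_mm - ref_od_mm
    len_err = length_mm - ref_len_mm
    passed = abs(od_err) <= tol_mm and abs(len_err) <= tol_mm
    return passed, od_err, len_err
```

In the reported results, the worst-case errors (0.30 mm and 0.57 mm) sit comfortably inside this tolerance.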

Development of Radiosynthetic Methods of 18F-THK5351 for tau PET Imaging (타우 PET영상을 위한 18F-THK5351의 표지방법 개발)

  • Park, Jun-Young; Son, Jeong-Min; Chun, Joong-Hyun
    • The Korean Journal of Nuclear Medicine Technology, v.22 no.1, pp.51-54, 2018
  • Purpose: 18F-THK5351 is a newly developed PET probe for tau imaging in Alzheimer's disease. The purpose of this study was to establish automated production of 18F-THK5351 on a commercial module. Materials and Methods: Two approaches to the synthesis of 18F-THK5351 were evaluated. The first (method I) comprised nucleophilic 18F-fluorination of the tosylate precursor, followed by pre-HPLC purification of the crude reaction mixture with an SPE cartridge. In the second (method II), the crude reaction mixture was introduced directly to a semi-preparative HPLC without SPE purification. The radiosynthesis of 18F-THK5351 was performed on a commercial GE TRACERlab FX-FN module. Quality control of 18F-THK5351 was carried out to meet the criteria outlined in the USP for PET radiopharmaceuticals. Results: The overall radiochemical yield of method I was 23.8 ± 1.9% (n=4) as the decay-corrected yield at end of synthesis (EOS), with a total synthesis time of 75 ± 3 min. The radiochemical yield of method II was 31.9 ± 6.7% (decay-corrected, n=10), with a total preparation time of 70 ± 2 min. Radiochemical purity was > 98%. Conclusion: This study shows that method II provides a higher radiochemical yield and shorter production time than the pre-SPE purification of method I. Synthesis by method II will be ideal for routine clinical application, considering the short physical half-life of fluorine-18 (t1/2 = 110 min).
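The decay-corrected yields quoted above follow from the standard exponential decay correction: EOS activity is scaled back over the synthesis time using the fluorine-18 half-life. The function name and the example activities below are illustrative assumptions; only the 110 min half-life comes from the abstract.

```python
import math

def decay_corrected_yield(activity_eos_mbq, activity_start_mbq,
                          synthesis_time_min, half_life_min=110.0):
    """Decay-corrected radiochemical yield in percent.

    The activity measured at end of synthesis (EOS) is corrected back
    to the start of synthesis by the factor exp(ln 2 * t / t_half),
    then divided by the starting 18F activity.
    """
    corrected = activity_eos_mbq * math.exp(
        math.log(2) * synthesis_time_min / half_life_min)
    return 100.0 * corrected / activity_start_mbq
```

For example, with a 110 min synthesis (one half-life), an EOS activity of one quarter of the starting activity corresponds to a 50% decay-corrected yield.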