• Title/Summary/Keyword: 3D applications


New Yellow Aromatic Imine Derivatives Based on Organic Semiconductor Compounds for Image Sensor Color Filters (이미지 센서 컬러 필터용 유기반도체 화합물 기반의 신규 황색 아로마틱 이민 유도체)

  • Sunwoo Park;Joo Hwan Kim;Sangwook Park;Godi Mahendra;Jaehyun Lee;Jongwook Park
    • Applied Chemistry for Engineering
    • /
    • v.34 no.6
    • /
    • pp.590-595
    • /
    • 2023
  • Novel yellow aromatic imine derivatives were designed and synthesized for potential application in color filters for image sensors. The synthesized compounds possessed chemical structures based on aromatic imine groups. The materials were evaluated thoroughly for their optical and thermal properties under conditions similar to commercial device manufacturing processes. Following this performance evaluation, it was found that (E)-3-methyl-4-((3-methyl-5-oxo-1-phenyl-1H-pyrazol-4(5H)-ylidene)methyl)-1-phenyl-1H-pyrazol-5(4H)-one, abbreviated as MOPMPO, exhibited a solubility of 0.5 wt% in propylene glycol monomethyl ether acetate, the solvent predominantly used in the industry. Furthermore, MOPMPO showed excellent suitability as a color filter material for image sensors, having a high decomposition temperature of 290 °C. These data establish MOPMPO as a viable yellow dye additive for coloring materials in image sensor applications.

Forage Yields of Corn-Oats Cropping System and Soil Properties as Affected by Liquid Cattle Manure (옥수수-연맥조합의 사초수량과 토양특성에 미치는 소 액상분뇨)

  • Shin, D.E.;Kim, D.A.;Park, G.J.;Kim, J.D.;Park, H.S.;Kim, S.G.
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.19 no.4
    • /
    • pp.325-332
    • /
    • 1999
  • A manure management plan is important for all dairy operations. This experiment was conducted to determine the effect of different nitrogen (N) application rates of liquid cattle manure on forage quality, N recovery, total forage yields of a corn-oats cropping system, and soil properties at the National Livestock Research Institute, RDA, Suweon in 1997. Eight treatments, consisting of no fertilizer, chemical fertilizer at 320 kg N/ha as urea, continuous applications of 320, 640, and 960 kg N/ha as liquid cattle manure (LCM), and the residual effects of 200, 400, and 600 kg N/ha as liquid cattle manure, were arranged in a randomized complete block design with three replications. Mean plant height of fall-sown oats was 70 and 61 cm in the continuous-application and residual-effect plots, respectively. Mean dry matter percentage of fall-sown oats in the residual-effect plots was higher by 0.9% than that in the continuous-application plots, but there were no differences among treatments. Mean crude protein (CP), acid detergent fiber (ADF), and neutral detergent fiber (NDF) contents of fall-sown oats in the continuous-application plots were higher by 1.0, 1.6, and 3.1%, respectively, than those in the residual-effect plots, and there were significant differences among treatments (P<0.05). Total forage dry matter yields of the corn-oats cropping system ranged from 11,365 to 25,668 kg/ha among treatments; the yields ranked LCM 960 kg N/ha > LCM 600 kg N/ha > LCM 640 kg N/ha > LCM 400 kg N/ha (P<0.05). Compared with the control, the manurial value (MV) was 158 and 139% for the LCM 960 kg N/ha and LCM 600 kg N/ha plots, respectively. N recovery of fall-sown oats was highest (50%) in the LCM 200 kg N/ha plot, followed in order by the LCM 400, LCM 600, and LCM 320 kg N/ha plots. The exchangeable cation content of the soil in the residual-effect plots was higher than that in the continuous-application plots. These results suggest that LCM at 600 kg N/ha may be the most effective rate for total forage dry matter yield, manurial value, N recovery, and liquid manure N utilization under the corn-oats double cropping system.
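As a quick aid to the percentages above, the sketch below (Python) shows how manurial value and apparent N recovery are typically computed; the formulas are assumed from common agronomic usage, since the abstract does not state them, and the numbers are placeholders rather than data from the study.

```python
# Hedged sketch: the abstract reports manurial value (MV) and N recovery as
# percentages but does not spell out its formulas, so common agronomic
# definitions are assumed here. All numbers are illustrative placeholders.

def manurial_value(yield_manure_kg_ha: float, yield_reference_kg_ha: float) -> float:
    """MV (%) as dry-matter yield of a manure treatment relative to a reference
    plot (the unfertilized control or the chemical-fertilizer plot, depending
    on the definition used)."""
    return 100.0 * yield_manure_kg_ha / yield_reference_kg_ha

def apparent_n_recovery(uptake_treated_kg_ha: float,
                        uptake_control_kg_ha: float,
                        n_applied_kg_ha: float) -> float:
    """Apparent N recovery (%) = extra N uptake over the control / N applied."""
    return 100.0 * (uptake_treated_kg_ha - uptake_control_kg_ha) / n_applied_kg_ha

# Illustrative calls: a plot yielding ~1.58x the reference gives MV ~ 158%,
# and recovering 100 of 200 kg applied N gives 50% recovery.
print(manurial_value(18_000, 11_400))      # ~158%
print(apparent_n_recovery(150, 50, 200))   # 50%
```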


The Pharmacological Activity of Coffee Fermented Using Monascus purpureus Mycelium Solid-state Culture Depends on the Cultivation Area and Green Coffees Variety (원산지 및 품종에 따라 조제된 홍국균 균사체-고체발효 원두커피의 생리활성)

  • Kim, Hoon;Yu, Kwang-Won;Lee, Jun-Soo;Baek, Gil-Hun;Shin, Ji-Young
    • Korean Journal of Food Science and Technology
    • /
    • v.46 no.1
    • /
    • pp.79-86
    • /
    • 2014
  • In previous work, we fermented coffee beans using solid-state culture with various fungal mycelia to enhance the physiological activity of the coffee. The coffee fermented with Monascus sp. showed a higher physiological activity than non-fermented coffee or other coffees fermented with mushroom mycelium. The aim of this study was to characterize the various fermented coffees with respect to their area of cultivation and their variety using Monascus purpureus (MP) mycelium solid-state culture. Thirty types of green coffee beans, which varied in terms of their cultivation area or variety, were purchased from different suppliers and fermented with MP under optimal conditions. Each MP-fermented coffee was medium roasted and extracted further using hot water (HW) under the same conditions. Of the HW extracts, those derived from MP-Mandheling coffees had the highest yield (13.6-15.5%), and MP-Robusta coffee showed a significantly higher polyphenolic content (3.03 mg gallic acid equivalent/100 mg) and 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulphonic acid) free radical scavenging activity (27.11 mg ascorbic acid equivalent antioxidant capacity/100 mg). Furthermore, in comparison to other MP-fermented coffees at 1,000 μg/mL, MP-Robusta coffee showed not only the most effective inhibition of tumor necrosis factor-α (TNF-α) production in LPS-stimulated RAW 264.7 cells (67.1% of that in LPS-stimulated control cells), but also an effective inhibition of lipogenesis in 3T3-L1 adipose cells (22.2% of that in differentiated control cells). In conclusion, these results suggest that Vietnam Robusta coffee beans solid-state fermented with MP mycelium are amenable to industrial applications as a functional coffee beverage or material.

Progress of Composite Fabrication Technologies with the Use of Machinery

  • Choi, Byung-Keun;Kim, Yun-Hae;Ha, Jin-Cheol;Lee, Jin-Woo;Park, Jun-Mu;Park, Soo-Jeong;Moon, Kyung-Man;Chung, Won-Jee;Kim, Man-Soo
    • International Journal of Ocean System Engineering
    • /
    • v.2 no.3
    • /
    • pp.185-194
    • /
    • 2012
  • A macroscopic combination of two or more distinct materials is commonly referred to as a "composite material", designed to be mechanically and chemically superior in function and characteristics to its individual constituent materials. Composite materials are used not only in aerospace and military applications but also heavily in boat and ship building and in the general composites industry, which continues to grow. Despite the wide range of applications for composite materials, the industry is still limited and requires better fabrication technology and methodology in order to expand. For example, the majority of nearby fabrication facilities still use an antiquated wet lay-up process in which fabrication requires manual hand labor in a 3D environment, impeding productivity and the advancement of composite product design. As an expert in the advanced composites field, I have developed fabrication skills involving the use of machinery, based on my past composite experience. In autumn 2011, the Korean government confirmed funding for my project, the development of a composite sanding machine. I began developing this semi-robotic prototype in 2009. It has the potential to replace or augment the exhausting and difficult jobs performed by human hands, such as sanding, grinding, blasting, and polishing, which are most often carried out in very awkward conditions. It will also boost productivity, improve surface quality, cut abrasive costs, eliminate vibration injuries, and protect workers from exposure to dust and airborne contamination. Ease of control and operation of the equipment from inside or outside the sanding room is a key benefit to end users. It will prove to be much more economical than conventional robotics and will minimize errors that commonly occur in factories. The key components and their technologies are a 360-degree rotational shoulder and a wrist controlled by a PLC controller with a joystick manual mode. Development of both key modules is complete, and they are now operational. The Korean government funding accelerated my development, and I expect to complete full-scale development no later than the third quarter of 2012. Even with the advantages of composite materials, there is still a need to repair and maintain composite products with a higher level of technology. I have learned many composite repair skills on composite airframes, since many composite fabrication skills, including repair, require training for non-aerospace applications. The wind energy market now requires much larger blades in order to generate more electrical energy for wind farms; a single blade is now commonly 50 meters or longer. When a wind blade is damaged by external forces, on-site repair is required on the column, even under strong wind and freezing temperatures. In order to achieve correct polymerization, the repair must be performed on the damaged area within a very limited time. The use of pre-impregnated glass fabric, a heating silicone pad, and a hot bonder providing precise heating control is therefore required.

Initial results from spatially averaged coherency, frequency-wavenumber, and horizontal to vertical spectrum ratio microtremor survey methods for site hazard study at Launceston, Tasmania (Tasmania 의 Launceston 시의 위험 지역 분석을 위한 공간적 평균 일관성, 주파수-파수, 수평과 수직 스펙트럼의 비율을 이용한 상신 진동 탐사법의 일차적 결과)

  • Claprood, Maxime;Asten, Michael W.
    • Geophysics and Geophysical Exploration
    • /
    • v.12 no.1
    • /
    • pp.132-142
    • /
    • 2009
  • The Tamar rift valley runs through the City of Launceston, Tasmania. Damage has occurred to city buildings due to earthquake activity in Bass Strait. The ancient Tamar valley, in-filled with soft sediments whose thickness varies rapidly from 0 to 250 m over a few hundred metres, is thought to induce a 2D resonance pattern, amplifying surface motions over the valley and in Launceston. Spatially averaged coherency (SPAC), frequency-wavenumber (FK) and horizontal to vertical spectrum ratio (HVSR) microtremor survey methods are combined to identify and characterise site effects over the Tamar valley. Passive seismic array measurements acquired at seven selected sites were analysed with SPAC to estimate shear wave velocity (slowness) depth profiles. SPAC was then combined with HVSR to improve the resolution of these profiles in the sediments to an approximate depth of 125 m. Results show that sediment thicknesses vary significantly throughout Launceston. The top layer is composed of as much as 20 m of very soft Quaternary alluvial sediments with velocities from 50 m/s to 125 m/s. Shear-wave velocities in the deeper Tertiary sediment fill of the Tamar valley, with thicknesses from 0 to 250 m, vary from 400 m/s to 750 m/s. Results obtained using SPAC are presented for two selected sites (GUN and KPK) and agree well with dispersion curves interpreted by FK analysis. The FK interpretation is, however, limited to a narrower range of frequencies than SPAC and seems to overestimate the shear wave velocity at lower frequencies. Observed HVSR curves are also compared with the results obtained by SPAC, assuming a layered earth model, and provide additional constraints on the shear wave slowness profiles at these sites. The combined SPAC and HVSR analysis confirms the hypothesis of a layered geology at the GUN site and indicates the presence of a 2D resonance pattern across the Tamar valley at the KPK site.
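For readers new to the SPAC method referenced above: its core relation, well established in the microtremor literature, is that the azimuthally averaged coherency of vertical-component noise on a circular array of radius r equals J0(2πfr/V(f)). The Python sketch below computes such a theoretical curve for a toy two-layer dispersion; it is not the authors' code, and the velocity values are illustrative only.

```python
# Minimal SPAC sketch (not the authors' code). The standard SPAC relation is
#   coh(f, r) = J0(2*pi*f*r / V(f)),
# where V(f) is the Rayleigh-wave phase velocity and r the interstation distance.
import numpy as np
from scipy.special import j0

def theoretical_spac(freqs_hz, radius_m, phase_velocity_m_s):
    """Azimuthally averaged coherency predicted for a circular array."""
    return j0(2.0 * np.pi * freqs_hz * radius_m / phase_velocity_m_s)

# Toy dispersion: slow shallow cover sampled at high frequency, stiffer fill at
# low frequency (velocities loosely echo the ranges quoted in the abstract).
freqs = np.linspace(0.5, 10.0, 200)                  # Hz
velocity = np.where(freqs > 3.0, 100.0, 500.0)       # m/s, illustrative two-layer model
coh = theoretical_spac(freqs, radius_m=20.0, phase_velocity_m_s=velocity)

# In practice one would grid-search layered models so that this curve matches
# the coherencies observed on the field array.
print(coh[:5])
```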

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality; many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process; it usually runs a single task or a small, fixed set of tasks, so creating an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME FOR 51 RULES

  Time              MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences   125 s                  49 s                        0.0038 s
  1 inference       20.8 ms                8.2 ms                      6.4 µs
  FLIPS             48                     122                         156,250
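To make the inference mechanism concrete, here is a minimal software sketch (Python) of Mamdani-style max-min inference with table-lookup fuzzification and centroid defuzzification, i.e., the operations the chips implement in hardware; it is an illustration only, not the chips' microcode or the ORNL board's C API.

```python
# Minimal Mamdani max-min inference sketch (illustrative only; not the
# AT&T/UNC chip implementation). Fuzzy sets are 64-element arrays, matching
# the discretization mentioned in the abstract.
import numpy as np

N = 64
universe = np.linspace(0.0, 1.0, N)

def tri(a, b, c):
    """Triangular membership function sampled on the universe (lookup-table style)."""
    return np.clip(np.minimum((universe - a) / (b - a + 1e-12),
                              (c - universe) / (c - b + 1e-12)), 0.0, 1.0)

# Two toy rules of the simple format "IF A and B THEN Do E":
rules = [
    {"A": tri(0.0, 0.2, 0.5), "B": tri(0.0, 0.3, 0.6), "E": tri(0.0, 0.2, 0.4)},
    {"A": tri(0.4, 0.7, 1.0), "B": tri(0.3, 0.6, 1.0), "E": tri(0.5, 0.8, 1.0)},
]

def infer(x_a, x_b):
    """Max-min composition with Mamdani implication and centroid defuzzification."""
    idx_a = int(round(x_a * (N - 1)))          # fuzzification by table lookup
    idx_b = int(round(x_b * (N - 1)))
    aggregate = np.zeros(N)
    for r in rules:
        strength = min(r["A"][idx_a], r["B"][idx_b])                     # AND = min
        aggregate = np.maximum(aggregate, np.minimum(strength, r["E"]))  # implication, then OR = max
    return float(np.sum(universe * aggregate) / (np.sum(aggregate) + 1e-12))  # centroid

print(infer(0.25, 0.35))
```

As a sanity check on Table I, the FLIPS row is simply the reciprocal of the single-inference time (1 / 6.4 µs ≈ 156,250) or, equivalently, the inference count divided by the total time (6000 / 125 s = 48).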


System Development and IC Implementation of High-quality and High-performance Image Downscaler Using 2-D Phase-correction Digital Filters (2차원 위상 교정 디지털 필터를 이용한 고성능/고화질의 영상 축소기 시스템 개발 및 IC 구현)

  • 강봉순;이영호;이봉근
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.3
    • /
    • pp.93-101
    • /
    • 2001
  • In this paper, we propose an image downscaler for multimedia video applications such as DTV, TV-PIP, PC video, camcorders, and videophones. The proposed image downscaler provides scaled images of high quality with high performance. This paper explains the scaling theory using two-dimensional digital filters; the method removes aliasing noise and decreases hardware complexity compared with pixel-drop and upsampling approaches. We also show that it improves scaling precision and decreases the loss of data compared with the Scaler32, the Bt829 of Brooktree, and the SAA7114H of Philips. The proposed downscaler consists of four blocks: line memory, vertical scaler, horizontal scaler, and FIFO memory. In order to reduce hardware complexity, the digital filters are implemented in a multiplexer-adder scheme, and all of their coefficients can be implemented simply using shifters and adders. The design also decreases the loss of high-frequency data because it provides a wider bandwidth of 6 MHz by adding a compensation filter. The proposed downscaler is modeled in Verilog HDL, and the model is verified using the Cadence simulator. After verification, the model is synthesized into gates using Synopsys tools. The synthesized downscaler is placed and routed with the Mentor tools using the IDEC-C632 0.65 μm library for IC implementation. The IC master is fixed in size at 4,500 μm × 4,500 μm, and the active layout size of the proposed downscaler is 2,528 μm × 3,237 μm.
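The argument against pixel-drop scaling above is that decimating without band-limiting aliases high-frequency detail, whereas filtering first removes it. The 1-D Python sketch below illustrates the principle with a generic windowed-sinc low-pass; it is not the paper's 2-D phase-correction filter or its shift-and-add coefficient set.

```python
# Illustrative comparison of pixel-drop versus filtered downscaling (1-D case).
# The filter here is a generic windowed-sinc low-pass, not the paper's
# 2-D phase-correction filter.
import numpy as np

def pixel_drop(row, factor):
    """Naive decimation: keeps every `factor`-th sample, so it aliases."""
    return row[::factor]

def filtered_downscale(row, factor, taps=21):
    """Low-pass (cutoff at 1/factor of Nyquist) followed by decimation."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / factor) * np.hamming(taps)   # windowed-sinc anti-alias filter
    h /= h.sum()
    return np.convolve(row, h, mode="same")[::factor]

# A high-frequency test signal: alternating bright/dark pixels.
row = np.tile([0.0, 1.0], 64)
print(pixel_drop(row, 2)[:8])            # collapses to a constant: aliasing artifact
print(filtered_downscale(row, 2)[10:18]) # interior samples settle near the 0.5 local average
```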


Usefulness of Image Registration in Brain Perfusion SPECT (Brain Perfusion SPECT에서 Image Registration의 유용성)

  • Song, Ho-June;Lim, Jung-Jin;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.2
    • /
    • pp.60-64
    • /
    • 2011
  • Purpose: Brain perfusion SPECT is an examination that provides information related to brain disorders, but it also has a high failure rate due to patient motion. In such cases a two-day method has to be used, and patients must put up with many disadvantages. We expect that the two-day method would not be necessary in brain perfusion SPECT if an image registration method could be used instead, so this study examined the application of image registration to brain perfusion SPECT. Materials and Methods: Jaszczak, Hoffman, and cylindrical phantoms were used to acquire SPECT image data at varying offsets along the x, y, and z axes. The phantoms were filled with 99mTc solution at a radioactive concentration of 111 MBq/mL. Phantom images were acquired by scanning for 5 seconds per frame using a Triad XLT9 triple-head gamma camera (TRIONIX, USA). We drew ROIs on the registered brain data and calculated the ROI ratio, that is, the difference between the original image counts and the registered image counts. Results: When the experiments were carried out under the same conditions, the total-count differential was from 3.5% to 5.7% (the mean-count differential was from 3.4% to 6.8%) in the phantom and patient data. We also ran the experiments under a doubled-activity condition, in which the total-count differential was from 2.6% to 4.9% (mean counts from 4.1% to 4.9%) in the phantom and patient data. Conclusion: The original and registered data differ little in image analysis. If the image registration method is used, the disadvantages of the two-day method in brain perfusion SPECT can be reduced. However, image registration must take into account the distance differences along the x, y, and z axes.
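The evaluation here reduces to comparing ROI counts before and after registration. The sketch below (Python/NumPy) shows only that count bookkeeping; the registration step itself is assumed to have been done by some toolkit, and the arrays and ROI are hypothetical stand-ins rather than the study's data.

```python
# Minimal sketch of the count comparison used to judge registration quality.
# Only the bookkeeping is shown; the registration step (e.g. with a toolkit
# such as SimpleITK) is assumed to have produced `registered`. Array sizes,
# values, and the ROI below are hypothetical, not from the study.
import numpy as np

def percent_differential(original, registered, roi):
    """Percent difference in total and mean counts inside an ROI mask."""
    total_diff = 100.0 * abs(original[roi].sum() - registered[roi].sum()) / original[roi].sum()
    mean_diff = 100.0 * abs(original[roi].mean() - registered[roi].mean()) / original[roi].mean()
    return total_diff, mean_diff

rng = np.random.default_rng(0)
original = rng.poisson(100, size=(64, 64, 64)).astype(float)  # toy SPECT volume
registered = original * 0.955                                 # mimic ~4.5% count change after resampling
roi = np.zeros(original.shape, dtype=bool)
roi[20:44, 20:44, 20:44] = True                               # hypothetical cubic ROI

print(percent_differential(original, registered, roi))        # ~ (4.5, 4.5)
```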


Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택;전병환
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.153-163
    • /
    • 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for pose transformation of a face using only a frontal face image and its mesh, without a three-dimensional model. To substitute for the 3D geometric model, we first build a standard mesh set of a reference person for several face views: front, left, right, half-left, and half-right. For a given person, we compose only the frontal mesh of the frontal face image to be transformed; the other meshes are generated automatically based on the standard mesh set. The frontal face image is then geometrically transformed to give different views using the Invertible Meshwarp Algorithm, which is improved to tolerate overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate opening and closing effects for the eyes and mouth. To evaluate the transformation performance, we captured dynamic images of 10 people rotating their heads horizontally and measured the location error of 14 main features between the corresponding original and transformed facial images. That is, the average difference was calculated between the distances from the center of both eyes to each feature point in the corresponding original and transformed images. As a result, the average error in feature location is about 7.0% of the distance from the center of both eyes to the center of the mouth.
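The error metric described at the end of the abstract is easy to restate in code. The sketch below is a hedged reconstruction of that metric with hypothetical coordinates, not the authors' evaluation script; for simplicity a single eye-center point is reused for both images.

```python
# Sketch of the feature-location error metric described in the abstract:
# compare the distance from the eye center to each feature point in the
# original vs. the transformed image, and normalize by the eye-center-to-mouth
# distance. All coordinates below are hypothetical placeholders.
import numpy as np

def feature_location_error(orig_pts, warped_pts, eye_center, mouth_center):
    """Mean |d_orig - d_warped| over features, as a fraction of the eye-to-mouth distance."""
    d_orig = np.linalg.norm(orig_pts - eye_center, axis=1)
    d_warped = np.linalg.norm(warped_pts - eye_center, axis=1)
    norm = np.linalg.norm(mouth_center - eye_center)
    return np.mean(np.abs(d_orig - d_warped)) / norm

# 14 hypothetical feature points (x, y) in the original and transformed images.
rng = np.random.default_rng(1)
orig = rng.uniform(0, 200, size=(14, 2))
warped = orig + rng.normal(0, 3, size=(14, 2))   # small simulated localization error
print(feature_location_error(orig, warped,
                             eye_center=np.array([100.0, 80.0]),
                             mouth_center=np.array([100.0, 150.0])))
```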

Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.21 no.6
    • /
    • pp.957-964
    • /
    • 2016
  • Acquisition of reliable depth maps is a critical requirement in many applications such as 3D video and free-viewpoint TV. Depth information can be obtained from the object directly using physical sensors such as infrared (IR) sensors. Recently, Time-of-Flight (ToF) range cameras, including the Kinect depth camera, have become popular alternatives for dense depth sensing. Although ToF cameras can capture depth information for objects in real time, the resulting depth maps are noisy and of low resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information. However, these methods often lead to texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth map upsampling. We reduce the texture-copying problem of the upsampled depth map by using an edge-weighting term derived from the edge information. Experimental results show that our scheme produces more reliable depth maps than previous methods.
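For reference, a second-order TGV regularizer of the kind this upsampling builds on couples an edge-weighted first-order term with a second-order term through an auxiliary vector field. The energy below is an illustrative sketch in that spirit, not the paper's exact formulation: u is the upsampled depth map, v the auxiliary field, $\mathcal{E}$ the symmetrized gradient, w the edge-weighting map computed from the color image, d the input depth samples on the sparse grid $\Omega_s$, and $\alpha_0$, $\alpha_1$, $\lambda$ are weights.

$$\min_{u,\,\mathbf{v}} \;\; \alpha_1 \int_\Omega w(x)\,\lvert \nabla u(x) - \mathbf{v}(x) \rvert \, dx \;+\; \alpha_0 \int_\Omega \lvert \mathcal{E}\mathbf{v}(x) \rvert \, dx \;+\; \frac{\lambda}{2} \sum_{x \in \Omega_s} \bigl( u(x) - d(x) \bigr)^2$$

Setting w small across strong color edges lets depth discontinuities survive there, which is how the edge-weighting term counteracts texture copying.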