• Title/Summary/Keyword: Level Set method


Effects of the Salting of Chinese cabbage on Taste and Fermentation of Kimchi (배추 절임 방법이 김치의 맛과 숙성에 미치는 영향)

  • 송주은;김명선;한재숙
    • Korean journal of food and cookery science
    • /
    • v.11 no.3
    • /
    • pp.226-232
    • /
    • 1995
  • This study reports experiments to determine the optimum salting conditions of Chinese cabbage (Baechu) for making tasty Kimchi. Several salting methods were first compared; with the best method, Kimchi was prepared to identify the best kind of salt, and with the best combination of salting method and salt kind, the salting time was then investigated. Kimchi was prepared for each combination of the three conditions, refrigerated, and evaluated by sensory tests, and its salt concentration, pH, and acidity were measured. The results are as follows. In all three cases, the salt concentration in the Kimchi solids was maintained at 2.4-3.0° from beginning to end, while in the Kimchi liquid it was high at first and gradually decreased. The pH of both the solids and the liquid dropped quickly at the beginning of the fermentation period and then changed more slowly. The acidity increased slightly at first, rose sharply until the third day of preservation, and then increased gradually. The experiments show that Kimchi tastes best when the Chinese cabbage is soaked in brine-free natural salt and kept for five hours.


Ligand Based Pharmacophore Identification and Molecular Docking Studies for Grb2 Inhibitors

  • Arulalapperumal, Venkatesh;Sakkiah, Sugunadevi;Thangapandian, Sundarapandian;Lee, Yun-O;Meganathan, Chandrasekaran;Hwang, Swan;Lee, Keun-Woo
    • Bulletin of the Korean Chemical Society
    • /
    • v.33 no.5
    • /
    • pp.1707-1714
    • /
    • 2012
  • Grb2 is an adapter protein involved in signal transduction and cell communication. Grb2 is responsible for initiating kinase signaling through Ras activation, which leads to changes in transcription. A ligand-based pharmacophore approach was applied to build a suitable pharmacophore model for Grb2. The best pharmacophore model was selected based on its statistical values and then validated by Fischer's randomization method and a test set. Hypo1 was selected as the best pharmacophore model based on its high cost difference (182.22), lowest RMSD (1.273), and total cost (80.68). It contains four chemical features: one hydrogen bond acceptor (HBA), two hydrophobic (HY), and one ring aromatic (RA). Fischer's randomization also shows that Hypo1 is significant at the 95% level. The correlation coefficient for the test set was 0.97, close to the training-set value (0.94). Hypo1 was therefore used for virtual screening to find potent inhibitors in various chemical databases. The screened compounds were filtered by Lipinski's rule of five and ADMET properties and then subjected to molecular docking studies; an illustrative rule-of-five filter is sketched below. In total, 11 compounds were selected as the best potent leads from the docking studies, based on the consensus scoring function and critical interactions with amino acids in the Grb2 active site.
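
The rule-of-five filtering step mentioned in the abstract can be illustrated with open-source cheminformatics tools. The sketch below is not the authors' screening pipeline; it only demonstrates the Lipinski criteria using RDKit (assumed available), and the example SMILES string is a placeholder.

```python
# Minimal sketch of a Lipinski rule-of-five filter using RDKit (assumed available).
# Illustrative only; it does not reproduce the authors' virtual-screening pipeline.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    """Return True if the molecule violates at most one Lipinski criterion."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    violations = sum([
        Descriptors.MolWt(mol) > 500,        # molecular weight
        Descriptors.MolLogP(mol) > 5,        # lipophilicity (cLogP)
        Lipinski.NumHDonors(mol) > 5,        # hydrogen bond donors
        Lipinski.NumHAcceptors(mol) > 10,    # hydrogen bond acceptors
    ])
    return violations <= 1

# Placeholder example: aspirin
print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # True
```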

Designation the Gray Region and Evaluating Concentration of Radionuclide in Kori-1 by Using Derived Concentration Guideline Level (고리 1호기의 잔류방사능 유도농도(DCGL)를 적용한 회색영역 설정과 핵종농도평가)

  • Jeon, Yeo Ryeong;Park, Sang June;Ahn, Seokyoung;Kim, Yongmin
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.3
    • /
    • pp.297-304
    • /
    • 2018
  • The U.S. nuclear power plant decommissioning guidelines (MARSSIM and MARLAP) recommend using Data Quality Objectives (DQOs) when planning and conducting site surveys. The DQOs constructed in the site-survey planning stage provide a way to make the best use of data and help obtain the information needed to make decisions. Steps five through seven of the DQO process design the site survey, using the data and information collected in the previous steps, so that reasonable and reliable decisions can be made. The gray region set up during this process is defined as the range of concentrations over which the consequences of Type II decision errors are relatively small. The gray region can be set using the DCGL and the average radionuclide concentration in the samples collected at the survey unit. By setting up the gray region, the site survey plan can be made more resource-efficient and the consequences of decision errors can be minimized. In this study, we set up the gray region using the DCGL of Kori-1 derived in previous research, and we propose a method to assess the radionuclide concentration in samples so that decisions can be made correctly. A sketch of how a gray region is typically parameterized appears below.
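
As a rough illustration of how a gray region is typically parameterized in MARSSIM-style planning, the sketch below takes the DCGL_W as the upper bound, assumes a lower bound (LBGR), and computes the relative shift used when planning sample sizes. All numerical values, and the choice of LBGR, are placeholders rather than values from the Kori-1 study.

```python
# Minimal sketch: gray-region bounds and relative shift in MARSSIM-style planning.
# DCGL_W, LBGR, and sigma are placeholders, not values from the Kori-1 study.

def relative_shift(dcgl_w: float, lbgr: float, sigma: float) -> float:
    """Relative shift (DCGL_W - LBGR) / sigma, used when choosing sample sizes."""
    return (dcgl_w - lbgr) / sigma

dcgl_w = 0.11   # derived concentration guideline level (Bq/g), placeholder
lbgr = 0.055    # lower bound of the gray region, assumed here as 50% of DCGL_W
sigma = 0.02    # expected standard deviation of measurements, placeholder

print(f"Gray region: {lbgr} to {dcgl_w} Bq/g")
print(f"Relative shift: {relative_shift(dcgl_w, lbgr, sigma):.2f}")
```

MARSSIM-style planning generally adjusts the LBGR so that the relative shift falls roughly between one and three; concentrations inside the gray region are those for which a Type II decision error has relatively minor consequences.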

An analysis of errors in problem solving of the function unit in the first grade highschool (고등학교 1학년 함수단원 문제해결에서의 오류에 대한 분석)

  • Mun, Hye-Young;Kim, Yung-Hwan
    • Journal of the Korean School Mathematics Society
    • /
    • v.14 no.3
    • /
    • pp.277-293
    • /
    • 2011
  • The purpose of mathematics education is to develop the ability to transform various problems arising in general situations into mathematical problems and then solve them mathematically. Various teaching-learning methods can be tried to improve mathematical problem-solving ability, but an appropriate method should be chosen only after identifying students' level of understanding and their problem-solving strategies. Error analysis supports mathematics learning by giving teachers more efficient teaching strategies and by letting students know the cause of their failure and find a correct way. The following research questions were set up and analyzed. First, an error classification scheme was established. Second, errors in the solving process of function problems were analyzed according to that scheme. For this study, a survey was conducted with 90 first-grade students of ○○ high school in Chungnam, who were asked to solve 8 problems on functions. The following error categories were set up by referring to preceding studies on errors and to the error patterns shown in the survey: (1) misused data, (2) misinterpreted language, (3) logically invalid inference, (4) distorted theorem or definition, (5) unverified solution, (6) technical errors, and (7) discontinuance of the solving process. The results of the analysis were as follows. First, students do not fully understand the concept of a function, and even when they do, they lack the ability to apply it. Second, students make many mistakes when they translate a mathematics problem into other representations such as equations, symbols, graphs, and figures. Third, students misuse or ignore the data given in the problem. Fourth, students often give up or never attempt the solving process. Further research on error analysis is needed because it provides useful information for the teaching-learning process.


A Study on the Sustainability of New SMEs through the Analysis of Altman Z-Score: Focusing on New and Renewable Energy Industry in Korea (알트만 Z-스코어를 이용한 신생 중소기업의 지속가능성 분석: 신재생에너지산업을 중심으로)

  • Oh, Nak-Kyo;Yoon, Sung-Soo;Park, Won-Koo
    • Journal of Technology Innovation
    • /
    • v.22 no.2
    • /
    • pp.185-220
    • /
    • 2014
  • The purpose of this study is to obtain an overall picture of the financial condition of the rapidly growing new and renewable energy sector and to predict bankruptcy risk quantitatively. There has been much research on methodologies for predicting company failure, such as financial ratios as predictors of failure, analysis of corporate governance, risk factors, and survival analysis. The research method used here is the Altman Z-score, which is widely used worldwide; an illustrative calculation is sketched below. The data set consists of 121 companies with financial statements from KIS-Value, covering the years 2006 to 2011. We found that 38% of the companies fall into the "Distress" zone (on alert) and another 38% are on watch; the combined 76% can be interpreted as companies whose sustainability is in doubt. The average score of SMEs in the wind energy sector was worse than that of SMEs in the solar energy sector, and the average of SMEs in the "Distress" zone (on alert) was worse than that of the large-group companies in the same zone. In conclusion, the Altman Z-score proved effective for the new and renewable energy industry in Korea. The importance of this study lies in its empirical demonstration that the majority of solar and wind enterprises face the risk of bankruptcy; studying the relationship between SMEs and large companies, in addition to advancing research on new start-up companies, is also meaningful.
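
For reference, the sketch below computes the classic Altman (1968) Z-score and its conventional zone cutoffs. The abstract does not state which Altman variant and cutoffs the study applies to Korean SMEs, so the coefficients and thresholds here are the textbook public-company values, used only for illustration.

```python
# Minimal sketch of the classic Altman (1968) Z-score for public manufacturers.
# The study may use a different variant for SMEs; coefficients here are illustrative.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets        # liquidity
    x2 = retained_earnings / total_assets      # cumulative profitability
    x3 = ebit / total_assets                   # operating efficiency
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                  # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z: float) -> str:
    if z < 1.81:
        return "Distress"
    if z < 2.99:
        return "Grey"
    return "Safe"

# Placeholder financials (same currency unit throughout)
z = altman_z(working_capital=50, retained_earnings=30, ebit=20,
             market_value_equity=120, sales=200,
             total_assets=300, total_liabilities=150)
print(round(z, 2), zone(z))  # about 1.71 -> "Distress"
```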

Hierarchical Browsing Interface for Geo-Referenced Photo Database (위치 정보를 갖는 사진집합의 계층적 탐색 인터페이스)

  • Lee, Seung-Hoon;Lee, Kang-Hoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.16 no.4
    • /
    • pp.25-33
    • /
    • 2010
  • With the popularization of digital photography, people now capture and store far more photos than ever before. However, the enormous number of photos often makes it difficult for users to identify the photos they want. In this paper, we present a novel method for fast and intuitive browsing through large collections of geo-referenced photographs. Given a set of photos, we construct a hierarchy of clusters such that each cluster contains a set of spatially adjacent photos and its sub-clusters partition the photo set disjointly. For each cluster, we pre-compute its convex hull and the corresponding polygon area. At run time, this pre-computed data allows us to efficiently visualize only the fraction of clusters that lie inside the current view and have easily recognizable sizes with respect to the current zoom level. Each cluster is displayed as a single polygon representing its convex hull, instead of every photo location it contains. Users can quickly move from cluster to cluster by simply selecting any cluster of interest; the system automatically pans and zooms the view until the currently selected cluster fits precisely into the view at a moderate size. Our user study demonstrates that these visualization and interaction techniques significantly improve the ability to navigate large collections of geo-referenced photos. A sketch of the per-cluster convex-hull precomputation appears below.
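
The per-cluster precomputation described above can be illustrated with standard geometry tools. The sketch below (assuming SciPy and NumPy are available) computes each cluster's convex hull area from photo coordinates; the cluster assignments and the visibility threshold are placeholders, not the authors' implementation.

```python
# Minimal sketch: per-cluster convex hull area for geo-referenced photos.
# Cluster construction and the visibility rule are simplified placeholders.
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(points: np.ndarray) -> float:
    """Area of the convex hull of 2D points (ConvexHull.volume is area in 2D)."""
    return ConvexHull(points).volume

# Placeholder: photo locations (x, y) already grouped into clusters
clusters = {
    "A": np.array([[0.0, 0.0], [1.0, 0.1], [0.5, 0.9], [0.2, 0.4]]),
    "B": np.array([[5.0, 5.0], [6.0, 5.2], [5.5, 6.1], [5.1, 5.8]]),
}

view_area = 20.0  # area of the current map view, placeholder
for name, pts in clusters.items():
    area = hull_area(pts)
    # Draw a cluster only if its hull is a recognizable fraction of the view.
    visible = 0.001 * view_area < area < 0.5 * view_area
    print(name, round(area, 3), "visible" if visible else "hidden")
```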

Prediction Model of Real Estate ROI with the LSTM Model based on AI and Bigdata

  • Lee, Jeong-hyun;Kim, Hoo-bin;Shim, Gyo-eon
    • International journal of advanced smart convergence
    • /
    • v.11 no.1
    • /
    • pp.19-27
    • /
    • 2022
  • Across the world, 'housing' comprises a significant portion of wealth and assets. For this reason, fluctuations in real estate prices are highly sensitive issues for individual households. In Korea, housing prices have steadily increased over the years, and many Koreans therefore view the real estate market as an effective channel for their investments. However, purchasing a real estate property as an investment carries several risks when prices begin to fluctuate. The purpose of this study is to design a real estate price return-rate prediction model to help mitigate the risks involved in real estate investment and promote reasonable real estate purchases. Various approaches are explored to develop a model capable of predicting real estate prices based on an understanding of the immovable nature of real estate. This study employs the LSTM method, which is based on artificial intelligence and deep learning, to predict real estate prices and to validate the model; a minimal LSTM sketch follows the abstract. LSTM networks are based on recurrent neural networks (RNN) but add a cell state, which acts as a kind of conveyor belt, alongside the hidden state, and they compute cell states and hidden states recursively. Data on the actual trading prices of apartments in autonomous districts between January 2006 and December 2019 are collected from the Actual Trading Price Disclosure System of the Ministry of Land, Infrastructure and Transport (MOLIT). Additionally, basic data on apartments and commercial buildings are collected from the Public Data Portal and the Seoul Metropolitan Government's data portal. The collected actual trading price data are scaled to monthly average trading amounts, and each entry is pre-processed by address to produce 168 data entries. An LSTM model for return-rate prediction is prepared on a time-series dataset with a training period of April 2015~August 2017 (29 months), a validation period of September 2017~September 2018 (13 months), and a test period of December 2018~December 2019 (13 months). The model achieved a prediction similarity of almost 76%, demonstrating the reliability of the LSTM-based model for return-rate prediction.
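
As a reference for the model class described above, here is a minimal Keras LSTM sketch for one-step return-rate prediction on a monthly series. The window length, layer sizes, training settings, and synthetic data are placeholders and do not reproduce the paper's configuration or dataset.

```python
# Minimal sketch of an LSTM return-rate predictor (TensorFlow/Keras assumed available).
# Hyperparameters and the synthetic data are placeholders, not the paper's setup.
import numpy as np
import tensorflow as tf

WINDOW = 12  # use the previous 12 monthly returns to predict the next one

def make_windows(series: np.ndarray, window: int):
    """Slice a 1D series into (samples, timesteps, 1) inputs and next-step targets."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y

# Placeholder monthly return series (168 points, mirroring the paper's entry count only by analogy)
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.02, size=168).astype("float32")
X, y = make_windows(returns, WINDOW)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),   # cell state and hidden state handled internally
    tf.keras.layers.Dense(1),   # predicted next-month return
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=16, verbose=0)

print(model.predict(X[:1], verbose=0))  # prediction for the first window
```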

A Review on Improvements of Climate Change Vulnerability Analysis Methods : Focusing on Sea Level Rise Disasters (도시 기후변화 재해취약성분석 방법의 개선방안 검토 : 해수면상승 재해를 중심으로)

  • Kim, Ji-Sook;Kim, Ho-Yong;Lee, Sung-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.1
    • /
    • pp.50-60
    • /
    • 2014
  • The purpose of this study is to identify the characteristics of, and possible improvements to, climate change vulnerability analysis methods for building cities that are safe from disasters. To this end, an empirical analysis of sea level rise disasters was performed for Haeundae-gu in Busan. Census output areas and administrative Dongs were set as the analysis units, their disaster vulnerability was analyzed, and improvements were reviewed by comparing the analysis processes and results. According to the results, the Modifiable Areal Unit Problem (MAUP) occurs, in which different aggregation units give different results; a simple illustration of MAUP appears below. Improvements were derived for each stage of the analysis process: in the spatial-unit setting stage that forms the basis of the analysis, adjustment of the analysis unit, adjustment of the score computation method, and a clearer analysis method for each disaster type are needed; in the analysis execution stage, weighting of variables, diversification of variables, and exclusion of subjective selection of the analysis method are needed. An accurate overall disaster vulnerability analysis is expected to form the basis for improving the efficiency of urban resilience in response to future climate change.
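
The MAUP effect noted above can be demonstrated with a toy example: the same point-level vulnerability scores, aggregated to coarser and finer grids, produce different unit-level averages. The grid sizes and scores below are synthetic and purely illustrative of the concept, not the study's data.

```python
# Toy illustration of the Modifiable Areal Unit Problem (MAUP):
# identical point data aggregated to different unit sizes yields different averages.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 4, 200)            # point locations on a 4 x 4 study area
y = rng.uniform(0, 4, 200)
score = x + rng.normal(0, 0.5, 200)   # synthetic vulnerability score with an east-west trend

def unit_means(cell_size: float) -> np.ndarray:
    """Mean score per grid cell for a given aggregation unit size."""
    cols = (x // cell_size).astype(int)
    rows = (y // cell_size).astype(int)
    n = int(4 / cell_size)
    means = np.full((n, n), np.nan)
    for r in range(n):
        for c in range(n):
            mask = (rows == r) & (cols == c)
            if mask.any():
                means[r, c] = score[mask].mean()
    return means

print("2x2 units:\n", np.round(unit_means(2.0), 2))
print("4x4 units:\n", np.round(unit_means(1.0), 2))
```

Because the unit-level averages, and any vulnerability classes derived from them, differ between the two grids, comparing the aggregated maps makes the MAUP visible.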

Image Fusion Watermarks Using Multiresolution Wavelet Transform (다해상도 웨이블릿 변환을 이용한 영상 융합 워터마킹 기법)

  • Kim Dong-Hyun;Ahn Chi-Hyun;Jun Kye-Suk;Lee Dae-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.6
    • /
    • pp.83-92
    • /
    • 2005
  • This paper presents a watermarking approach in which the 1-level discrete wavelet transform (DWT) coefficients of a 64×64 binary logo image are inserted as watermarks into the LL band and other specific frequency bands of the host image, using the multi-resolution analysis (MRA) wavelet transform, for copyright protection of image data. The DWT coefficients of the binary logo are inserted into blocks of the LL band and of specific bands of the host image, on which a 3-level DWT has been performed, in the same orientation. To prevent quality deterioration of the host image, we identify significant coefficients (SCs) in each block of the frequency bands and embed the watermark through them. Because the host image may be distorted to different degrees in different frequency bands, we set thresholds on the SCs for each band so that the watermark is fully embedded in every frequency band of the host image. To keep the watermark invisible, a human visual system (HVS) model is applied to the watermark. Experiments confirm the suitability of the embedding method: the watermark is detected rapidly, and because small watermarks are embedded using HVS and SCs, the results demonstrate the superiority of the proposed method in invisibility and robustness. A simplified sketch of LL-band embedding follows.
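
To make the embedding idea concrete, the sketch below (assuming PyWavelets and NumPy are available) adds scaled 1-level DWT coefficients of a small logo into the LL band of a 3-level host decomposition. It omits the paper's block-wise significant-coefficient selection, per-band thresholds, and HVS weighting; the embedding strength alpha and the images are placeholders.

```python
# Simplified sketch of LL-band wavelet watermark embedding (PyWavelets assumed).
# Omits the paper's significant-coefficient selection and HVS weighting.
import numpy as np
import pywt

alpha = 0.05  # embedding strength, placeholder

host = np.random.rand(512, 512)                       # placeholder host image
logo = (np.random.rand(64, 64) > 0.5).astype(float)   # placeholder 64x64 binary logo

# 3-level DWT of the host; coeffs[0] is the LL band (64x64 for a 512x512 host with 'haar')
coeffs = pywt.wavedec2(host, "haar", level=3)
LL3 = coeffs[0]

# 1-level DWT of the logo; embed its LL sub-band into a corner of the host LL band
logo_LL, _ = pywt.dwt2(logo, "haar")   # 32x32 approximation coefficients
LL3_marked = LL3.copy()
LL3_marked[:32, :32] += alpha * logo_LL

# Reconstruct the watermarked host image
coeffs_marked = [LL3_marked] + list(coeffs[1:])
watermarked = pywt.waverec2(coeffs_marked, "haar")
print(watermarked.shape)  # (512, 512)
```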

A Study on the Improvement Repeatability and Accuracy of the Analysis Method for SF6 of Trace Level (극미량 수준의 SF6 측정법에 따른 재현성 및 정확도 향상에 관한 연구)

  • Yoo, Heejung;Choe, Hongwoo;Lee, Sepyo;Kim, Jongho;Han, Sangok;Ryoo, Sangboom
    • Journal of the Korean Society of Urban Environment
    • /
    • v.18 no.4
    • /
    • pp.523-530
    • /
    • 2018
  • The Kyoto Protocol, adopted in 1997, obliged developed countries to reduce $CO_2$, $CH_4$, $N_2O$, HFCs, PFCs, and $SF_6$ during the first commitment period. $SF_6$ has drawn much attention since the Kyoto Protocol because, once released, it stays in the atmosphere for more than 3,200 years and, over a 100-year horizon, has a global warming potential 22,800 times that of $CO_2$ at the same concentration. This study introduces 12 methods for measuring trace-level $SF_6$. To measure trace-level atmospheric $SF_6$ correctly, the measurement method was varied, and the best results were obtained when the back-flush method was applied to a pre-concentration system using low-temperature concentration and high-temperature desorption with a Carboxen-1000 adsorption trap.