
Development and Validation of the Analytical Method for Oxytetracycline in Agricultural Products using QuEChERS and LC-MS/MS (QuEChERS법 및 LC-MS/MS를 이용한 농산물 중 Oxytetracycline의 잔류시험법 개발 및 검증)

  • Cho, Sung Min;Do, Jung-Ah;Lee, Han Sol;Park, Ji-Su;Shin, Hye-Sun;Jang, Dong Eun;Cho, Myong-Shik;Jung, ong-hyun;Lee, Kangbong
    • Journal of Food Hygiene and Safety
    • /
    • v.34 no.3
    • /
    • pp.227-234
    • /
    • 2019
  • An analytical method was developed for the determination of oxytetracycline in agricultural products using the QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method and liquid chromatography-tandem mass spectrometry (LC-MS/MS). Samples were extracted with methanol, the extracts were adjusted to pH 4 with formic acid, and sodium chloride was added to remove water. Dispersive solid-phase extraction (d-SPE) cleanup was carried out using MgSO4 (anhydrous magnesium sulfate), PSA (primary secondary amine), C18 (octadecyl) and GCB (graphitized carbon black). The analytes were quantified and confirmed by LC-MS/MS using electrospray ionization (ESI) in positive-ion multiple reaction monitoring (MRM) mode. Matrix-matched calibration curves were constructed at six levels (0.001-0.25 μg/mL), and the coefficient of determination (r²) was above 0.99. Recoveries at three concentrations (LOQ, 10×LOQ, and 50×LOQ; n=5) ranged from 80.0 to 108.2%, with relative standard deviations (RSDs) of less than 11.4%. For inter-laboratory validation, the average recovery was in the range of 83.5-103.2% and the coefficient of variation (CV) was below 14.1%. All results satisfied the criteria of the Codex guidelines (CAC/GL 40-1993, 2003) and the Food Safety Evaluation Department guidelines (2016). The proposed method is accurate, effective and sensitive for oxytetracycline determination in agricultural commodities, and this study could be useful for the safety management of oxytetracycline residues in agricultural products.
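The validation arithmetic summarized above (a matrix-matched calibration curve, then recovery and RSD for fortified replicates) can be sketched as follows. This is a minimal illustration with invented peak areas and measured concentrations, not the paper's data:

```python
# Least-squares fit of a matrix-matched calibration curve, plus recovery (%)
# and RSD (%) for replicate fortified samples. All numbers are hypothetical.

conc = [0.001, 0.005, 0.01, 0.05, 0.1, 0.25]   # six levels, ug/mL
area = [120, 610, 1190, 6050, 12100, 30200]    # hypothetical peak areas

n = len(conc)
mx, my = sum(conc) / n, sum(area) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, area)) \
        / sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, area))
ss_tot = sum((y - my) ** 2 for y in area)
r2 = 1 - ss_res / ss_tot   # acceptance criterion in the abstract: r2 above 0.99

def recovery_and_rsd(measured, spiked):
    """Mean recovery (%) and RSD (%) for replicate fortified samples."""
    recs = [m / spiked * 100 for m in measured]
    mean = sum(recs) / len(recs)
    var = sum((r - mean) ** 2 for r in recs) / (len(recs) - 1)
    return mean, (var ** 0.5) / mean * 100

# five replicates spiked at 0.01 ug/mL (hypothetical measured values)
mean_rec, rsd = recovery_and_rsd([0.0095, 0.0102, 0.0098, 0.0101, 0.0097], 0.01)
```

Under the Codex-style criteria cited above, such a run would pass if the recovery falls in the accepted range and the RSD stays below the stated ceiling.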

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on chart shapes rather than complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. Pattern analysis, however, is difficult and has been computerized far less than users need. In recent years, many studies have examined stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting performance has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but this can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate question. Those studies find a meaningful pattern, locate a point that matches it, and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, it can diverge considerably from reality. Whereas existing research tries to find patterns with price prediction power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave pattern published by Merrill (1980) is simple in that it can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance results from the actual market have been published. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern recognition accuracy.
In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects a realistic situation because performance is measured assuming both the buy and the sell were executed. We tested three ways to calculate turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then identifies the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices to its left and right is taken as a peak, and a central low price lower than the n low prices to its left and right is taken as a valley. The swing wave method outperformed the other two in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was far too large to search exhaustively for high-success-rate patterns, genetic algorithms (GA) were the most suitable solution. We also ran the simulation using Walk-forward Analysis (WFA), which separates the test section from the application section, allowing us to respond appropriately to market changes. We optimized at the level of the stock portfolio, since optimizing variables for each individual stock risks over-optimization.
Therefore, we set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but that the highest volatility is not necessarily best.
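The swing-wave turning-point rule described above lends itself to a short sketch. This is an illustrative implementation of that rule only (the function name and sample prices are invented), not the authors' trading system:

```python
# Swing-wave rule: a bar is a peak if its high exceeds the highs of the n bars
# on each side, and a valley if its low is below the lows of the n bars on
# each side. Bars within n of either edge cannot be classified.

def swing_points(highs, lows, n):
    """Return (peak_indices, valley_indices) under the swing-wave rule."""
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        side_highs = highs[i - n:i] + highs[i + 1:i + n + 1]
        if highs[i] > max(side_highs):
            peaks.append(i)
        side_lows = lows[i - n:i] + lows[i + 1:i + n + 1]
        if lows[i] < min(side_lows):
            valleys.append(i)
    return peaks, valleys

highs = [10, 11, 13, 12, 11, 10, 12, 14, 13, 12]
lows  = [ 9, 10, 12, 11, 10,  9, 11, 13, 12, 11]
peaks, valleys = swing_points(highs, lows, n=2)
# peaks at indices 2 and 7 (highs 13 and 14); valley at index 5 (low 9)
```

Chaining consecutive peaks and valleys from this output yields the five turning points from which an M&W pattern is classified.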

Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.109-125
    • /
    • 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product-related features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important descriptive feature. It is therefore necessary to handle synonyms of color terms in order to return accurate results for users' color-related queries. Previous studies have suggested dictionary-based approaches to processing color synonyms, but such approaches cannot handle color terms that are not registered in the dictionary. To overcome this limitation, this research proposes a model which extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed that includes color names and the R, G, B values of each color, drawn from the Korean standard digital color palette program and the Wikipedia color list, for basic color search. The dictionary was made more robust by adding 138 color names transliterated into Korean from English color names, with their corresponding RGB values, so the final color dictionary includes a total of 671 color names and corresponding RGB values. The proposed method starts from the specific color a user searched for and checks whether that color is present in the built-in color dictionary. If the color exists in the dictionary, its RGB values in the dictionary are used as the reference values of the retrieved color. If the searched color does not exist in the dictionary, the top five Google image search results for the color are crawled and average RGB values are extracted from a certain middle area of each image.
To extract RGB values from the images, a variety of approaches was attempted, since simply averaging the RGB values of the center area of an image has limits. Clustering the RGB values in a certain area of the image and taking the average of the densest cluster as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all colors in the previously constructed dictionary are compared, and a color list is created from those within ±50 of each of the reference R, G, and B values. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, up to five colors with the highest similarity become the final outcome. To evaluate the usefulness of the proposed method, we performed an experiment in which 300 color names with corresponding RGB values were collected via questionnaires and used to compare the RGB values obtained from four different methods, including the proposed one. The average CIE-Lab Euclidean distance of our method was about 13.85, relatively low compared with 3088 for the case using the synonym dictionary only and 30.38 for the case using the dictionary with the Korean synonym website WordNet. The variant of the proposed method without clustering showed an average Euclidean distance of 13.88, implying that the DBSCAN clustering in the proposed method can reduce the Euclidean distance. This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names, thereby removing the limits of the conventional dictionary-based approach.
This research can contribute to improving the intelligence of e-commerce search systems, especially their color search feature.
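The matching step described above (filter dictionary entries within ±50 of each reference R, G, B value, then rank survivors by Euclidean distance) can be sketched as follows. The dictionary entries here are a few invented examples, not the paper's 671-entry dictionary:

```python
# Candidate filtering by a ±50 box in RGB space, then ranking by Euclidean
# distance to the reference color. Dictionary values are hypothetical.

color_dict = {
    "crimson":   (220, 20, 60),
    "firebrick": (178, 34, 34),
    "tomato":    (255, 99, 71),
    "navy":      (0, 0, 128),
}

def similar_colors(ref, dictionary, window=50, top_k=5):
    """Return up to top_k (name, distance) pairs inside the ±window box."""
    candidates = []
    for name, rgb in dictionary.items():
        if all(abs(c - rc) <= window for c, rc in zip(rgb, ref)):
            dist = sum((c - rc) ** 2 for c, rc in zip(rgb, ref)) ** 0.5
            candidates.append((name, dist))
    return sorted(candidates, key=lambda t: t[1])[:top_k]

matches = similar_colors((200, 30, 50), color_dict)
# crimson (distance ~24.5) ranks ahead of firebrick; tomato and navy fall
# outside the ±50 box and are filtered out before the distance ranking
```

Note the paper measures its final accuracy in CIE-Lab rather than raw RGB; the box filter above operates on RGB as the abstract describes.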

The Evaluation of Non-Coplanar Volumetric Modulated Arc Therapy for Brain stereotactic radiosurgery (뇌 정위적 방사선수술 시 Non-Coplanar Volumetric Modulated Arc Therapy의 유용성 평가)

  • Lee, Doo Sang;Kang, Hyo Seok;Choi, Byoung Joon;Park, Sang Jun;Jung, Da Ee;Lee, Geon Ho;Ahn, Min Woo;Jeon, Myeong Soo
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.9-16
    • /
    • 2018
  • Purpose: Brain stereotactic radiosurgery can non-invasively treat diseases that carry high complication rates under surgical operation. Because it uses radiation, however, it may be accompanied by radiation-induced side effects, as in fractionated radiation therapy. The effects of Coplanar Volumetric Modulated Arc Therapy (C-VMAT) and Non-Coplanar Volumetric Modulated Arc Therapy (NC-VMAT) on surrounding normal tissue have been analyzed for fractionated radiation therapy of sites such as the head and neck in order to reduce such side effects, but they have not been analyzed for brain stereotactic radiosurgery. In this study, we evaluated the usefulness of NC-VMAT by comparing and analyzing C-VMAT and NC-VMAT in patients who underwent brain stereotactic radiosurgery. Methods and materials: Thirteen brain stereotactic radiosurgery treatment plans were created with both C-VMAT and NC-VMAT. The planning target volume ranged from a minimum of 0.78 cc to a maximum of 12.26 cc, and prescription doses were between 15 and 24 Gy. The treatment machine was a TrueBeam STx (Varian Medical Systems, USA), and the energy used in the treatment plans was 6 MV Flattening Filter Free (6FFF) X-rays. The C-VMAT plans used a half 2-arc or full 2-arc arrangement; the NC-VMAT plans used 3 to 7 arcs of 40 to 190 degrees, with couch angles planned at 3 to 7 positions. Results: The mean maximum dose was 105.1±1.37% in C-VMAT and 105.8±1.71% in NC-VMAT. The conformity index of C-VMAT was 1.08±0.08 and its homogeneity index 1.03±0.01; the conformity index of NC-VMAT was 1.17±0.1 and its homogeneity index 1.04±0.01. V2, V8, V12, V18 and V24 of the brain were 176±149.36 cc, 31.50±25.03 cc, 16.53±12.63 cc, 8.60±6.87 cc and 4.03±3.43 cc in C-VMAT, and 135.55±115.93 cc, 24.34±17.68 cc, 14.74±10.97 cc, 8.55±6.79 cc and 4.23±3.48 cc in NC-VMAT, respectively.
Conclusions: The maximum dose, conformity index, and homogeneity index showed no significant difference between C-VMAT and NC-VMAT. V2 to V18 of the brain differed by 0.5% to 48%, and V19 to V24 by 0.4% to 4.8%. Comparing the mean values of V12, the dose-volume at which radionecrosis begins to develop, NC-VMAT was about 12.2% lower than C-VMAT. These results suggest that using NC-VMAT can reduce the V2 to V18 volumes of the brain and thereby reduce radionecrosis.
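The two plan-quality indices compared above can be computed as below. The abstract does not state which formulas were used, so this sketch assumes the common definitions (RTOG conformity index as prescription-isodose volume over target volume, and homogeneity index as maximum dose over prescription dose); the numeric inputs are hypothetical:

```python
# Assumed definitions (not stated in the abstract):
#   CI = V_RI / TV   -- volume covered by the prescription isodose / target volume
#   HI = D_max / D_p -- maximum dose / prescription dose

def conformity_index(prescription_isodose_volume_cc, target_volume_cc):
    return prescription_isodose_volume_cc / target_volume_cc

def homogeneity_index(max_dose_gy, prescription_dose_gy):
    return max_dose_gy / prescription_dose_gy

# hypothetical single-target plan: 18 Gy prescription, 12.26 cc target
ci = conformity_index(13.2, 12.26)   # about 1.08; the ideal value is 1.0
hi = homogeneity_index(18.9, 18.0)   # 1.05, i.e. a maximum dose of 105% of prescription
```

Values of both indices close to 1.0 indicate a tight, uniform dose distribution, which is why the near-identical CI and HI figures above support the conclusion that the two techniques are dosimetrically comparable inside the target.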


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have made their internally developed AI technology public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we built a case study framework comprising technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
Organizing the case study results, we identified five success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework at the usage stage, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise should be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three steps in the usage stage, companies increase the number of deep learning research developers, their ability to use the framework, and the available GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements developers' expertise by sharing information from the external deep learning open source community with the in-house community and by organizing developer retraining and seminars.
To implement these five success factors, a step-by-step enterprise procedure for adopting a deep learning framework was proposed: defining the project problem, confirming that deep learning is the right methodology, confirming that the deep learning framework is the right tool, using the framework in the enterprise, and spreading the framework within the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the last two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team matter, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five factors must be realized for a successful adoption. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Supplementary Woodblocks of the Tripitaka Koreana at Haeinsa Temple: Focus on Supplementary Woodblocks of the Maha Prajnaparamita Sutra (해인사 고려대장경 보각판(補刻板) 연구 -『대반야바라밀다경』 보각판을 중심으로-)

  • Shin, Eunje;Park, Hyein
    • MISULJARYO - National Museum of Korea Art Journal
    • /
    • v.98
    • /
    • pp.104-129
    • /
    • 2020
  • Designated as a national treasure of Korea and inscribed on the UNESCO World Heritage List, the Tripitaka Koreana at Haeinsa Temple is the world's oldest and most comprehensive extant version of the Tripitaka in Hanja script (i.e., Chinese characters). The set consists of 81,352 carved woodblocks, some of which have two or more copies, known as "duplicate woodblocks." These duplicates are supplementary woodblocks (bogakpan) that were carved some time after the original production, likely to replace blocks that had been eroded or damaged by repeated printings. According to the most recent survey, the number of supplementary woodblocks is 118, or approximately 0.14% of the total set, which attests to the outstanding preservation of the original woodblocks. Research on the supplementary woodblocks can reveal important details about the preservation and management of the Tripitaka Koreana woodblocks. Most of the supplementary woodblocks were carved during the Joseon period (1392-1910) or the Japanese colonial period (1910-1945). Although the details of the woodblocks from the Japanese colonial period have been recorded and organized to a certain extent, no such efforts have been made with regard to the woodblocks from the Joseon period. This paper analyzes the characteristics and production dates of the supplementary woodblocks of the Tripitaka Koreana. The sutra with the most supplementary woodblocks is the Maha Prajnaparamita Sutra (Perfection of Transcendental Wisdom), often known as the Heart Sutra. In fact, 76 of the total 118 supplementary woodblocks (64.4%) are for this sutra. Hence, analyses of printed versions of the Maha Prajnaparamita Sutra should illuminate trends in the carving of supplementary woodblocks for the Tripitaka Koreana, including the representative characteristics of different periods.
According to analysis of the 76 supplementary woodblocks of the Maha Prajnaparamita Sutra, 23 were carved during the Japanese colonial period: 12 in 1915 and 11 in 1937. The remaining 53 were carved during the Joseon period at three separate times. First, 14 of the woodblocks bear the inscription "carved in the mujin year by Haeji" ("戊辰年更刻海志"). Here, the "mujin year" is estimated to correspond to 1448, or the thirtieth year of the reign of King Sejong. On many of these 14 woodblocks, the name of the person who did the carving is engraved outside the border. One of these names is Seonggyeong, an artisan who is known to have been active in 1446, thus supporting the conclusion that the mujin year corresponds to 1448. The vertical length of these woodblocks (inside the border) is 21 cm, which is about 1 cm shorter than the original woodblocks. Some of these blocks were carved in the Zhao Mengfu script. Distinguishing features include the appearance of faint lines on some plates, and the rough finish of the bottoms. The second group of supplementary woodblocks was carved shortly after 1865, when the monks Namho Yeonggi and Haemyeong Jangung had two copies of the Tripitaka Koreana printed. At the time, some of the pages could not be printed because the original woodblocks were damaged. This is confirmed by the missing pages of the extant copy that is now preserved at Woljeongsa Temple. As a result, the supplementary woodblocks are estimated to have been produced immediately after the printing. Evidently, however, not all of the damaged woodblocks could be replaced at this time, as only six woodblocks (comprising eight pages) were carved. On the 1865 woodblocks, lines can be seen between the columns, no red paint was applied, and the prayers of patrons were also carved into the plates. The third carving of supplementary woodblocks occurred just before 1899, when the imperial court of the Korean Empire sponsored a new printing of the Tripitaka Koreana. 
Government officials who were dispatched to supervise the printing likely inspected the existing blocks and ordered supplementary woodblocks to be carved to replace those that were damaged. A total of 33 supplementary woodblocks (comprising 56 pages) were carved at this time, accounting for the largest number of supplementary woodblocks for the Maha Prajnaparamita Sutra. On the 1899 supplementary woodblocks, red paint was applied to each plate and one line was left blank at both ends.

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades from 2G to 5G, focusing mainly on speed to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver these services, reduced latency and high reliability, on top of high-speed data, are critical for real-time operation. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, for intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, the reduction of delay and the reliability of real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straightness, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with a typical centralized structure have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs thus need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough to keep it below 1 ms, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that neighbor-vehicle support information reaches the car without errors. Furthermore, we assumed 5G small cells of 50-250 m in radius and maximum vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes the delay.
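The simulation geometry stated above (cell radius 50-250 m, vehicle speed 30-200 km/h) implies a cell dwell time that bounds how often the SDN must refresh neighbor-vehicle information. A back-of-the-envelope sketch, using only the assumption ranges quoted in the abstract rather than the paper's simulation results:

```python
# Worst-case dwell time: the vehicle crosses the cell along its diameter.

def dwell_time_s(cell_radius_m, speed_kmh):
    """Seconds spent inside a circular cell when crossing along the diameter."""
    speed_ms = speed_kmh * 1000 / 3600
    return 2 * cell_radius_m / speed_ms

# smallest cell, fastest vehicle: 50 m radius at 200 km/h
t_min = dwell_time_s(50, 200)    # 1.8 s in the cell
# largest cell, slowest vehicle: 250 m radius at 30 km/h
t_max = dwell_time_s(250, 30)    # 60 s in the cell
```

At the fast end, the SDN's information change cycle plus processing time must fit comfortably inside a 1.8-second window, which is why those two terms (rather than the sub-millisecond RTD) dominate the delay budget discussed above.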

Improvement and Validation of an Analytical Method for Quercetin-3-O-gentiobioside and Isoquercitrin in Abelmoschus esculentus L. Moench (오크라 분말의 Quercetin-3-𝑜-Gentiobioside 및 Isoquercitrin의 분석법 개선 및 검증)

  • Han, Xionggao;Choi, Sun-Il;Men, Xiao;Lee, Se-jeong;Jin, Heegu;Oh, Hyun-Ji;Cho, Sehaeng;Lee, Boo-Yong;Lee, Ok-Hwan
    • Journal of Food Hygiene and Safety
    • /
    • v.37 no.2
    • /
    • pp.39-45
    • /
    • 2022
  • This study aimed to improve and validate an analytical method for determining quercetin-3-O-gentiobioside and isoquercitrin in Abelmoschus esculentus L. Moench for the standardization of ingredients in the development of functional health products. The method was validated for specificity, linearity, accuracy, precision, detection limit, and quantification limit according to the ICH (International Conference on Harmonisation) guidelines to verify its reliability and validity. In the HPLC method, the peak retention times of the marker compounds in the standard solution matched those in the A. esculentus L. Moench powder sample, as did their spectra, confirming specificity. The calibration curves of quercetin-3-O-gentiobioside and isoquercitrin were linear with correlation coefficients near one (0.9999 and 0.9999), indicating high suitability for the analysis. A. esculentus L. Moench powder samples of known concentration were spiked with low, medium, and high concentrations of the standard substances, and precision and accuracy were calculated. Intra-day precision for quercetin-3-O-gentiobioside and isoquercitrin was 0.50-1.48% and 0.77-2.87%, and inter-day precision was 0.07-3.37% and 0.58-1.37%, respectively, showing excellent precision at levels below 5%. For accuracy, the intra-day accuracy of quercetin-3-O-gentiobioside and isoquercitrin was 104.87-109.64% and the inter-day accuracy was 106.85-109.06%, reflecting a high level of accuracy. The detection limits of quercetin-3-O-gentiobioside and isoquercitrin were 0.24 μg/mL and 0.16 μg/mL, respectively, and the quantitation limits were 0.71 μg/mL and 0.49 μg/mL, confirming valid detection even at low concentrations.
These results demonstrate that the established analytical method performs well in terms of specificity, linearity, precision, accuracy, detection limit, and quantitation limit. In addition, analysis of A. esculentus L. Moench powder samples using the validated method found 1.49±0.01 mg/g dry weight of quercetin-3-O-gentiobioside and 1.39±0.01 mg/g dry weight of isoquercitrin. This study verified that the simultaneous analysis of quercetin-3-O-gentiobioside and isoquercitrin, the marker compounds of A. esculentus L. Moench, is a scientifically reliable and suitable analytical method.
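The detection and quantitation limits reported above are conventionally derived from the ICH formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of the response and S the slope of the calibration curve. A minimal sketch with hypothetical numbers (not the paper's data):

```python
# ICH-style LOD/LOQ from the response standard deviation and calibration slope.

def lod_loq(sigma, slope):
    """Return (LOD, LOQ) in the concentration units of the calibration curve."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

sigma = 1.1      # hypothetical residual SD of the peak response
slope = 15.0     # hypothetical calibration slope (response per ug/mL)
lod, loq = lod_loq(sigma, slope)
# at these assumed values, LOD is about 0.24 ug/mL and LOQ about 0.73 ug/mL
```

Note the fixed ratio LOQ/LOD ≈ 3 built into these formulas, which roughly matches the ratios of the reported limits (0.71/0.24 and 0.49/0.16).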

Development and Validation of an Analytical Method for Fungicide Fluoxastrobin Determination in Agricultural Products (농산물 중 살균제 Fluoxastrobin의 시험법 개발 및 유효성 검증)

  • So Eun, Lee;Su Jung, Lee;Sun Young, Gu;Chae Young, Park;Hye-Sun, Shin;Sung Eun, Kang;Jung Mi, Lee;Yun Mi, Chung;Gui Hyun, Jang;Guiim, Moon
    • Journal of Food Hygiene and Safety
    • /
    • v.37 no.6
    • /
    • pp.373-384
    • /
    • 2022
  • Fluoxastrobin, a fungicide developed from extracts of Strobilurus species mushrooms, can be used as an effective pesticide to control fungal diseases. In this study, we optimized the extraction and purification of fluoxastrobin according to its physical and chemical properties using the QuEChERS method and developed an LC-MS/MS-based analytical method. For extraction, we used acetonitrile as the extraction solvent, along with MgSO4 and PSA. The limit of quantitation of fluoxastrobin was 0.01 mg/kg. Five representative agricultural products were fortified with fluoxastrobin at 0.01, 0.1, and 0.5 mg/kg. The coefficients of determination (R²) for fluoxastrobin and the fluoxastrobin Z isomer were > 0.998. The average recovery rates (n=5) of fluoxastrobin and the fluoxastrobin Z isomer were 75.5-100.3% and 75.0-103.9%, respectively, with relative standard deviations (RSDs) of < 5.5% and < 4.3%. We also performed an interlaboratory validation at the Gwangju Regional Food and Drug Administration and compared the recovery rates and RSDs obtained for fluoxastrobin and its Z isomer at the external lab with our results to validate the analytical method. In the external lab, the average recovery rates of fluoxastrobin and the fluoxastrobin Z isomer at each concentration were 79.5-100.5% and 78.8-104.7%, with RSDs of < 18.1% and < 10.2%, respectively. In all treatment groups, the results satisfied the criteria described by the Codex Alimentarius Commission and the 'Standard procedure for preparing test methods for food, etc.', supporting the safe use of fluoxastrobin as a pesticide.

A Study on the Characteristics and Management Plan of Old Big Trees in the Sacred Natural Sites of Handan City, China (중국 한단시 자연성지 내 노거수의 특성과 관리방안)

  • Xi, Su-Ting;Shin, Hyun-Sil
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.41 no.2
    • /
    • pp.35-45
    • /
    • 2023
  • First, the spatial distribution characteristics of old big trees were analyzed using ArcGIS by combining basic information, such as the species and ages of old big trees in Handan City, compiled by the local landscaping bureau. The types of species, distribution by tree age, ownership status, growth status, and diversity status were comprehensively analyzed. Statistically, Styphnolobium, Acacia, Gleditsia, and Albizia of the Fabaceae accounted for the majority, of which Sophora japonica accounted for the highest proportion. Sophora japonica is widely and intensively distributed across every prefecture and district of Handan City. By age and distribution, the old big trees over 1,000 years old were mainly Sophora japonica, Zelkova serrata, Juniperus chinensis, Morus australis Koidz., Dalbergia hupeana Hance, Ceratonia siliqua L., Pistacia chinensis, and Platycladus orientalis. Second, the various types of old big tree status were investigated; the protection management system, protection management process, and protection management benefits were studied; and the protection of old big trees was found to be closely related to their growth environment. Currently, the main driving force behind the protection of old big trees is their worship. By investing the old big trees with sacredness and sublimating the natural character that nature gave them into a guiding consciousness for social activities, nature's "beauty" and personality's "goodness" are well combined. The state of protection of an old big tree is closely related to the degree of interaction with the surrounding environment and the participation of various cultures and subjects.
In the process of continuously interacting with the surrounding environment during the long-term growth of old big trees, natural sanctuaries appear to have formed around them through the voluntary establishment of a "natural-cultural-scape" system involving bottom-up and top-down cross-regional, multicultural, and multi-subject participation. Third, China has focused on protecting and restoring old big trees, but the protection management system remains poor owing to a lack of comprehensive consideration of the historical and cultural values, plant diversity significance, and social values of old big trees in the management process. Three groups of indicators, namely the regional characteristics of the space, its property and protection characteristics, and its value characteristics, can be applied in evaluating the natural characteristics of old big trees, which are highly valuable in terms of traditional consciousness management, resource protection practice, belief system construction, and the realization of life community values. A systematic management system is needed if they are to be protected and developed over the long term. Fourth, as the perception of protected areas is not yet mature in China, the "natural sanctuary" should be treated as an important research topic in the process of establishing a nature reserve system. The form of natural sanctuary management, which centers on bottom-up community participation, is a strong supplement to the current top-down type of nature reserve management in China. On this basis, the protection of old big trees should be incorporated into the nature reserve system in the form of a nature reserve called a natural monument. In addition, residents of the areas around nature reserves should be among the main agents of biodiversity conservation.
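The distribution tallies described in the first finding (counts by species, counts by age class) can be sketched in a few lines. The records below are hypothetical, illustrative values only, not data from the study:

```python
from collections import Counter

# Hypothetical survey records: (species, age in years) — illustrative only
trees = [
    ("Sophora japonica", 1200),
    ("Sophora japonica", 850),
    ("Zelkova serrata", 1050),
    ("Platycladus orientalis", 300),
    ("Sophora japonica", 400),
]

# Tally how many surveyed trees belong to each species
species_counts = Counter(species for species, _ in trees)

def age_class(age):
    """Bucket a tree's age into coarse classes for a distribution tally."""
    if age >= 1000:
        return ">=1000"
    return "500-999" if age >= 500 else "<500"

# Tally trees per age class
age_counts = Counter(age_class(age) for _, age in trees)

print(species_counts.most_common(1))  # the dominant species
print(dict(age_counts))
```

In practice these tallies would be joined with coordinates and rendered in ArcGIS, as the study describes; the sketch covers only the tabulation step.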