
A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing volume of content is becoming ever more important. In this flood of information, attempts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are also focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is generated constantly and the fresher the information is, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as the financial sector, where the flow of information is vast and new information continuously emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora for different fields with the same algorithm, and extracting good-quality triples is difficult. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can actually be applied. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study to confirm the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we measure prediction power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average. This result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only a named entity recognition tool and fed to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with related stocks.
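The per-stock scoring step described in this abstract can be sketched with the standard neural tensor network score function. The dimensions, parameters, and random initialization below are illustrative assumptions, not the paper's actual configuration; only the overall shape (one-hot entity vectors scored by a bilinear-plus-linear tensor layer) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 100, 4  # d: top-100 one-hot entity vector size (as in the paper); k: tensor slices (assumed)

# Hypothetical parameters of one per-stock score function
W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor slices
V = rng.normal(scale=0.1, size=(k, 2 * d))  # linear term over concatenated entities
b = np.zeros(k)
u = rng.normal(scale=0.1, size=k)

def ntn_score(e1, e2):
    """Standard NTN score: u^T tanh(e1^T W e2 + V [e1; e2] + b)."""
    bilinear = np.einsum('i,kij,j->k', e1, W, e2)  # one bilinear form per slice
    linear = V @ np.concatenate([e1, e2])
    return u @ np.tanh(bilinear + linear + b)

# One-hot entity vectors, as in the paper's encoding step
e1 = np.zeros(d); e1[3] = 1.0
e2 = np.zeros(d); e2[17] = 1.0
print(ntn_score(e1, e2))
```

At prediction time one would evaluate a new entity against every stock's score function and take the argmax, as the abstract describes.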

A Study on the Tree Surgery Problem and Protection Measures in Monumental Old Trees (천연기념물 노거수 외과수술 문제점 및 보존 관리방안에 관한 연구)

  • Jung, Jong Soo
    • Korean Journal of Heritage: History & Science, v.42 no.1, pp.122-142, 2009
  • This study reviewed domestic and international theories for the maintenance and health enhancement of old and big trees, carried out an anatomical survey of operated parts against the current status of domestic tree surgery along with a perception survey of an expert group, and drew the following conclusions while suggesting reform plans. First, analyzing the correlation of the 67 subject trees with their ages, growth status, and surroundings revealed that they were closely related to positional characteristics and damage size, but little related to the filler materials used. Second, affected parts under 0.09 m² were most frequent at sheared boughs, and the hollow size by position was biggest at 'root + stem', starting from behind the main root and stem. Correlation analysis gave the same result for the group with low correlation. Third, the problem was serious when fillers (especially urethane) were charged into big hollows or exposed roots behind the root and stem part, or when such areas were surface-processed; the benefit of charging the hollow part was found to be small. Fourth, the surface processing of currently used fillers (artificial bark) is mainly 'epoxy + woven fabric + cork', but it is not flexible, so it frequently cracks and the surface splits at the joint with the tree-textured part. Fifth, the external status of the operated part correlated very highly with closeness, surface condition, formation of adhesive tissue, and the internal survey results. Sixth, the most influential cause of flushing from wrong management of an old and big tree was banking, and wrong pruning was the source of damage to the ground part; when a small bough is cut by a standard method, it can easily recover through formation of adhesive tissue.
Seventh, the key parameter affecting how promptly business related to an old and big tree is handled is the need for conscious reform by managers and the related business. Eighth, an institutional reform plan can include arranging the law and organization for old and big tree management and preservation. This study, which prepared a reform plan through a status survey of designated old and big trees, has the limitation of deriving a reform plan from a status survey based on individual research, and the weakness of lacking statistical grounds for its suggestions. These can be complemented by subsequent studies.

A Study on Plant Symbolism Expressed in Korean Sokwha (Folk Painting) (한국 속화(俗畵)(민화(民畵))에 표현된 식물의 상징성에 관한 연구)

  • Gil, Geum-Sun;Kim, Jae-Sik
    • Journal of the Korean Institute of Traditional Landscape Architecture, v.29 no.2, pp.81-89, 2011
  • The results of tracing the symbolism of plants introduced in Sokhwa (folk painting) are as follows. 1. The term Sokhwa (俗畵) denotes not only a type of painting with strong local customs but also carries a symbolic meaning; it is found in the "Donggukisanggukjip" of Lee, Gyu-Bo (1168~1241) of the Goryo era, and used variously in the "Sok Dongmunseon" of the early Chosun era, the "Sasukjaejip" of Gang, Hee-mang (1424~1483), the "Ilseongrok" (1786) of the late Chosun era, the "Jajeo (自著)" of Yoo, Han-joon (1732~1811), and the "Ojuyeonmunjangjeonsango (五洲衍文長箋散稿)" of Lee, Gyu-gyung (1788~?). In particular, according to the Jebyungjoksokhwa allegation 〈題屛簇俗畵辯證說〉 in the Seohwa of the Insa edition of Ojuyeonmunjangjeonsango, there is a record that "people called them Sokhwa." 2. Over time, Korean Sokhwa passed through the prehistoric age, which primitively reflected the natural perspective of agricultural culture; the Three Kingdoms period, which expressed the philosophy of eternal spirits and reflected a view of the universe in colored pictures; the Goryo era, which religiously expressed abstract shapes and supernatural patterns in space symbolically; and the Chosun era, which established the traditional Korean identity of natural perspective, aesthetic values, and symbolism in complex integration with popular culture. 3. The materials analyzed in 1,009 pieces of Korean Sokhwa comprised 35 species of plants, 37 species of animals, 6 types of natural objects, and 5 other types, for a total of 83 types. 4.
The shape aesthetics, according to the aesthetic analysis of the plants in Sokhwa, reflect the primitive world view of Yin/Yang and the Five Elements in the peony paintings, and dynamic refinement and biological harmony in the maehwado. The composition aesthetics show complex multi-perspective composition with strong noteworthiness in the bookshelf paintings, a strong contrast of colors with reverse perspective in the battlefield paintings, and the symmetric beauty of simple orderly patterns in natural and artificial objects with straight and oblique lines in the leisurely reading paintings. In terms of color aesthetics, the five directional colors (east, west, south, north, and center) or the five basic colors (red, blue, yellow, white, and black) are often used in ritual or religious manners, or symbolically substitute for relative relationships with natural laws. 5. The introduction methods of Korean Sokhwa exceed simple imitation of natural shapes and have been sublimated into a symbolism related to nature, based on colloquial artistic characteristics questioning the essence of the universe. Therefore, the symbolism of the plants and animals in Korean Sokhwa is a symbolic recognition system, not a scientific one, with free and unique expression shaped by a complex interaction of religious, philosophical, ecological, and ideological aspects, as an identity of the group culture of Koreans in which past and future coexist in the present. This is why Korean Sokhwa, or folk painting, can be called a cultural identity and can also be interpreted as a natural and folk scenic factor that has naturally integrated into our cultural lifestyle. However, the Sokhwa that had been closely related to our lifestyle drastically lost its meaning and emotional resonance through the transitions of time.
As apartment living has become predominant and the confusion of identity has deepened in our historical situation, the aesthetic and symbolic values of Sokhwa folk paintings deserve to be transmitted as symbolic assets that protect our spiritual affluence and establish our identity.

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.27-65, 2020
  • Many information and communication technology companies make their internally developed AI technology public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, they can strengthen their relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, there is a lack of studies that help develop or use deep learning open source software in industry. This study thus attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, the ability to use the deep learning framework, and the supply of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework within the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework. Once these are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework within the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five important factors come into play for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.173-198, 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid growth of online business, companies carry out campaigns of various types at a scale that cannot be compared to the past. However, customers tend to perceive campaigns as spam as fatigue from duplicate exposure increases. From a corporate standpoint, the effectiveness of campaigns themselves is also decreasing as investment costs rise, leading to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system has the ultimate purpose of increasing the success rate of various campaigns by collecting and analyzing customer-related data and using it for campaigns. In particular, recent attempts have been made to predict campaign response using machine learning. Selecting appropriate features is very important given the many features of campaign data. If all input data are used when classifying a large amount of data, learning time grows as the classification classes expand, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary process for analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), SFFS (Sequential Floating Forward Selection), and the like are widely used as traditional feature selection techniques. However, when there are many features, they suffer from poor classification-prediction performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in the process of searching for the feature subsets that underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first, features with a negative effect are removed, and then the sequential method is applied to increase search efficiency, yielding an improved algorithm capable of generalized prediction. We confirmed that the proposed model showed better search and prediction performance than the traditional greedy algorithm. Compared with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, the improved feature selection algorithm was found to help in analyzing and interpreting prediction results by providing the importance of the derived features. These include features previously known to be statistically important, such as age, customer rating, and sales.
Unlike what campaign planners previously used to select campaign targets, features such as the combined product name, the average 3-month data consumption rate, and the last 3 months' wireless data usage were unexpectedly selected as important features for campaign response. It was also confirmed that base attributes can be very important features depending on the type of campaign. Through this, it is possible to analyze and understand the important characteristics of each campaign type.
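The traditional greedy baseline that this abstract contrasts against can be sketched as a sequential forward selection wrapper. This is a minimal sketch of plain SFS, not the paper's improved algorithm; the scoring function (in-sample R² of a least-squares fit), the synthetic data, and the stopping rule are all illustrative assumptions.

```python
import numpy as np

def sequential_forward_selection(X, y, score_fn, max_features=None):
    """Greedy SFS: repeatedly add the single feature that most improves score_fn."""
    n_features = X.shape[1]
    max_features = max_features or n_features
    selected, best_score = [], -np.inf
    remaining = set(range(n_features))
    while remaining and len(selected) < max_features:
        # Score every candidate subset formed by adding one more feature
        candidate_scores = {j: score_fn(X[:, selected + [j]], y) for j in remaining}
        j_best = max(candidate_scores, key=candidate_scores.get)
        if candidate_scores[j_best] <= best_score:
            break  # no candidate improves the score; stop early
        selected.append(j_best)
        remaining.remove(j_best)
        best_score = candidate_scores[j_best]
    return selected, best_score

# Illustrative data: y depends only on features 0 and 2
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2 * X[:, 0] - 3 * X[:, 2] + 0.1 * rng.normal(size=200)

def r2(Xs, y):
    """In-sample R-squared of an ordinary least-squares fit (no intercept)."""
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return 1 - resid.var() / y.var()

feats, score = sequential_forward_selection(X, y, r2, max_features=3)
print(feats, round(score, 3))
```

SFFS extends this by also trying to remove previously selected features after each addition ("floating"), which is the variant the study starts from and improves.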

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems, v.27 no.1, pp.191-207, 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups from 2G to 5G to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To provide those services, on top of high data speed, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) applications such as traffic control, reduced delay and reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control plane signaling from data plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs therefore need to be partitioned at a certain scale into a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the SDN's data processing time are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough, with less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; that is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assume 5G small cells of 50~250 m in radius and a maximum vehicle speed of 30~200 km/h in order to examine the network architecture that minimizes the delay.
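The interaction between cell size and vehicle speed that drives the information change cycle can be illustrated with a simple dwell-time calculation: how long a vehicle stays inside one small cell before a handover. The straight-line diameter crossing is an illustrative simplification; only the parameter ranges (50~250 m radius, 30~200 km/h) come from the abstract.

```python
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time (seconds) a vehicle spends crossing a cell diameter at constant speed.

    Assumes a straight-line path through the cell center; real paths are shorter,
    so this is an upper bound on the dwell time per cell.
    """
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return 2 * cell_radius_m / speed_ms

# Sweep the simulation ranges stated in the abstract
for radius in (50, 250):
    for speed in (30, 200):
        print(f"radius={radius} m, speed={speed} km/h "
              f"-> dwell ≈ {dwell_time_s(radius, speed):.1f} s")
```

Even in the worst case here (50 m cell at 200 km/h, under 2 s per cell), the dwell time dwarfs the sub-millisecond RTD, which is consistent with the abstract's point that the information change cycle and SDN processing time, not the RTD, dominate the delay budget.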

Evaluation of Ovary Dose of Childbearing age Woman with Breast cancer in Radiation therapy (가임기 여성의 방사선 치료 시 난소 선량 평가)

  • Park, Sung Jun;Lee, Yeong Cheol;Kim, Seon Myeong;Kim, Young Bum
    • The Journal of Korean Society for Radiation Therapy, v.33, pp.145-153, 2021
  • Purpose: The purpose of this study is to evaluate the ovarian dose during radiation therapy for breast cancer in women of childbearing age. The ovarian dose is evaluated by comparing the dose calculated in the treatment planning system for each treatment technique with the dose measured using a thermoluminescent dosimeter (TLD). The clinical usefulness of a lead (Pb) apron is investigated through dose analysis with and without its use. Materials and Methods: A Rando humanoid phantom was used for measurement, and wedge filter radiation therapy, 3D conformal radiation therapy, and intensity modulated radiation therapy were used as treatment techniques. A treatment plan was established so that 95% of the prescribed dose could be delivered to the right breast of the 3D image of the Rando humanoid phantom obtained using the CT simulator. TLDs were inserted into the surface and depth of the virtual ovary of the Rando humanoid phantom, and the phantom was irradiated. The measurement locations were the treatment center; the point moved 2 cm from the center toward the opposite breast; points 5 cm, 10 cm, 12.5 cm, 15 cm, 17.5 cm, and 20 cm downward from the boundary of the right breast toward the treatment center; and the surface and depth of the right ovary. Measurements were made at a total of 9 central points. For the dose comparison of treatment planning systems, two wedge filter techniques, three-dimensional conformal radiotherapy, and intensity-modulated radiation therapy were planned and compared, and dose measurements with and without a lead apron were compared and analyzed in intensity-modulated radiation therapy. Each measured value was calculated by averaging three TLD readings per point and converting them using the TLD calibration value, which gave the point dose mean value.
To compare the treatment plan values with the actual measured values, the absolute dose values were measured and compared at each point (%Diff). Results: At Point A, the treatment center, a maximum of 201.7 cGy was obtained in the treatment planning system and a maximum of 200.6 cGy with TLD. All treatment planning systems calculated 0 cGy from Point G, the point 17.5 cm downward from the breast boundary. With TLD, a maximum of 2.6 cGy was obtained at Point G and a maximum of 0.9 cGy at Point J, the ovarian dose point, and the absolute dose difference was 0.3%~1.3%. The difference in dose with and without a lead apron ranged from a maximum of 2.1 cGy to a minimum of 0.1 cGy, and the %Diff value was 0.1%~1.1%. Conclusion: In the treatment planning system, the dose differences among the three treatment plans, 0.85% to 2.45%, were not significant. In the ovary, the difference between the Rando humanoid phantom's treatment planning system value and the actual measured dose was within 0.9%, with the measured dose slightly higher. The treatment planning system did not accurately reflect the effect of scattered radiation, and the measured values are thought to include the scattered radiation dose and the dose delivered by CBCT taken with the TLDs inserted. In dosimetry with and without a lead apron, shielding was more effective the closer the point was to the treatment field when an apron was used. Although pregnancy or artificial insemination during radiotherapy is not clinically appropriate, the dose delivered to the ovaries during treatment is not expected to significantly affect the reproductive function of women of childbearing age after radiotherapy. However, since women of childbearing age feel constant anxiety, presenting the data from this study is thought to promote psychological stability.
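A small sketch of the point-dose comparison in this abstract. The abstract does not spell out the %Diff formula; here it is assumed to be the absolute planned-vs-measured difference normalized by the prescribed dose (taken as roughly 200 cGy, consistent with the reported 2.6 cGy at Point G corresponding to the maximum 1.3%). Both the formula and the 200 cGy normalization are inferences, not stated by the paper.

```python
def pct_diff(planned_cgy: float, measured_cgy: float, prescribed_cgy: float = 200.0) -> float:
    """Absolute planned-vs-measured difference as a percentage of the prescribed dose.

    The normalization by ~200 cGy is an assumption inferred from the reported
    values, not an explicit definition from the abstract.
    """
    return abs(planned_cgy - measured_cgy) / prescribed_cgy * 100.0

# Point G: planning system 0 cGy vs TLD 2.6 cGy -> 1.3%, matching the reported maximum
print(round(pct_diff(0.0, 2.6), 2))

# Treatment center (Point A): 201.7 cGy planned vs 200.6 cGy measured
print(round(pct_diff(201.7, 200.6), 2))
```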

Study on the Characteristics of Cultivation Period, Adaptive Genetic Resources, and Quantity for Cultivation of Rice in the Desert Environment of United Arab Emirates (United Arab Emirates 사막환경에서 벼 재배를 위한 재배기간, 유전자원 및 수량 특성 연구)

  • Jeong, Jae-Hyeok;Hwang, Woon-Ha;Lee, Hyeon-Seok;Yang, Seo-Yeong;Choi, Myoung-Goo;Kim, Jun-Hwan;Kim, Jae-Hyeon;Jung, Kang-Ho;Lee, Su-Hwan;Oh, Yang-Yeol;Lee, Kwang-Seung;Suh, Jung-Pil;Jung, Ki-Yuol;Lee, Jae-Su;Choi, In-Chan;Yu, Seung-hwa;Choi, Soon-Kun;Lee, Seul-Bi;Lee, Eun-Jin;Lee, Choung-Keun;Lee, Chung-Kuen
    • Korean Journal of Agricultural and Forest Meteorology, v.24 no.3, pp.133-144, 2022
  • This study was conducted to investigate the cultivation period, adaptive genetic resources, growth and development patterns, and water consumption for rice cultivation in the desert environment of the United Arab Emirates (UAE). Research on rice cultivation in desert environments is expected to contribute to resolving food shortages caused by climate change and water scarcity. The optimal cultivation period for rice in the UAE was found to be from late November to late April of the following year, during which low temperatures occur at the vegetative growth stage. Asemi and FL478 were selected as candidate cultivars for the temperature and day-length conditions of the desert areas as a result of pre-testing genetic resources under reclaimed soil and artificial meteorological conditions. In the UAE desert environment, FL478 died before harvest due to etiolation and poor growth in the early stage, whereas Asemi overcame early etiolation and could be harvested. The vegetative growth phase of Asemi ran from early December to early March of the following year, while its reproductive growth and ripening phases ran from early March to late March and from late March to late April, respectively. The milled rice yield of Asemi was 763 kg/10a in the UAE, about 41.8% higher than in Korea, likely due to the abundant solar radiation during the reproductive growth and grain-filling periods. On the other hand, water consumption during the cultivation period in the UAE was 2,619 ton/10a, about three times higher than in Korea. These results suggest that irrigation technology and cultivation methods that minimize water consumption would be needed to make rice cultivation economically viable in the UAE.
In addition, the selection of genetic resources suited to the UAE desert environment, such as those showing minimal etiolation in the early stages of growth, would merit further study, which would promote stable rice cultivation in arid conditions.
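The abstract reports only the UAE figures and the comparison ratios, not the Korean baselines. A quick back-of-the-envelope sketch (assuming the stated "41.8% higher" and "about three times higher" ratios are exact) recovers the implied Korean values and the resulting water productivity of each site:

```python
# Back-of-the-envelope check of the reported UAE trial figures.
# Korean baselines are *derived* from the stated ratios, not given
# in the abstract, so they are estimates only.

uae_yield = 763        # milled rice, kg/10a (reported)
yield_increase = 0.418 # "about 41.8% higher than that in Korea" (reported)

uae_water = 2619       # ton/10a over the cultivation period (reported)
water_ratio = 3        # "about three times higher than that in Korea" (reported)

korea_yield = uae_yield / (1 + yield_increase)  # implied Korean yield, kg/10a
korea_water = uae_water / water_ratio           # implied Korean water use, ton/10a

# Water productivity: kg of milled rice per ton of water consumed
uae_wp = uae_yield / uae_water
korea_wp = korea_yield / korea_water

print(f"Implied Korean yield:     {korea_yield:.0f} kg/10a")
print(f"Implied Korean water use: {korea_water:.0f} ton/10a")
print(f"Water productivity  UAE: {uae_wp:.2f}  Korea: {korea_wp:.2f} kg/ton")
```

Under these assumptions the UAE site produces roughly half as much rice per ton of water as the implied Korean baseline, which is consistent with the abstract's call for irrigation technology to reduce water consumption.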

The History of the Development of Meteorological Related Organizations with the 60th Anniversary of the Korean Meteorological Society - Universities, Korea Meteorological Administration, ROK Air Force Weather Group, and Korea Meteorological Industry Association - (60주년 (사)한국기상학회와 함께한 유관기관의 발전사 - 대학, 기상청, 공군기상단, 한국기상산업협회 -)

  • Jae-Cheol Nam;Myoung-Seok Suh;Eun-Jeong Lee;Jae-Don Hwang;Jun-Young Kwak;Seong-Hyen Ryu;Seung Jun Oh
    • Atmosphere
    • /
    • v.33 no.2
    • /
    • pp.275-295
    • /
    • 2023
  • In Korea, there are four types of institutions related to atmospheric science: university departments of atmospheric science, the Korea Meteorological Administration (KMA), the ROK Air Force Weather Group, and the Korea Meteorological Industry Association. These four institutions have developed while maintaining a close cooperative relationship with the Korean Meteorological Society (KMS) for the past 60 years. By 2022, the seven universities with atmospheric science-related departments had conferred 6,986 bachelor's, 1,595 master's, and 505 doctoral degrees on experts in meteorology and climate. The KMA carries out national meteorological tasks to protect people's lives and property and to foster the meteorological industry. The ROK Air Force Weather Group is in charge of military meteorological work and is building artificial intelligence and space weather support systems through cooperation with universities, the KMA, and the KMS. Although the Korea Meteorological Industry Association has a short history, its membership, sales, and number of employees are steadily increasing. The KMS has greatly contributed to raising the national meteorological service to the level of advanced countries by supporting the development of the universities, the KMA, the ROK Air Force Weather Group, and the Korea Meteorological Industry Association.

Legal Issues on the Collection and Utilization of Infectious Disease Data in the Infectious Disease Crisis (감염병 위기 상황에서 감염병 데이터의 수집 및 활용에 관한 법적 쟁점 -미국 감염병 데이터 수집 및 활용 절차를 참조 사례로 하여-)

  • Kim, Jae Sun
    • The Korean Society of Law and Medicine
    • /
    • v.23 no.4
    • /
    • pp.29-74
    • /
    • 2022
  • With the rapid and unexpected spread of COVID-19 in 2020, a social disaster under the Disaster Management Act capable of damaging people's "life, body, and property," information collected through the inspection and reporting of infectious disease pathogens (Article 11), epidemiological investigations (Article 18), and epidemiological investigations for vaccination (Article 29) served as an important basis for decision-making in the infectious disease crisis, such as promoting vaccination and assessing the current extent of damage. In addition, medical policy decisions using infectious disease data contribute to quarantine policy decisions, information provision, drug development, and research technology development, and interest in the legal scope and limitations of using infectious disease data has increased worldwide. The use of infectious disease data can be classified by purpose, namely blocking the spread of infectious diseases and their prevention, management, and treatment, and such use becomes even broader in an infectious disease crisis. In particular, as the serious stage under the Disaster Management Act continues, the processing of personally identifiable and sensitive information becomes an important issue. Whether information on "medical records, vaccination drugs, vaccination status, underlying diseases, health rankings, long-term care recognition grades, pregnancy, etc." falls within the "prevention, management, and treatment of infectious diseases" requires interpretation, because the concept of medical practice is difficult to define clearly. The types of actions are judged based on "legislative purposes, academic principles, expertise, and social norms," but the balancing of legal interests should rest on the need for data use in quarantine policy and on urgent judgment in a public health crisis.
Specifically, the speed and extent of transmission in a crisis, whether the purpose can be achieved without processing sensitive information, whether such processing unfairly infringes the interests of third parties or data subjects, and the effectiveness of quarantine policies introduced through such processing can serve as major evaluation factors. On the other hand, the collection, provision, and use of infectious disease data for research purposes proceed through pseudonymization under the Personal Information Protection Act, consent under the Bioethics Act, and deliberation by the Institutional Bioethics Committee and the data provision deliberation committee. Use for research purposes is therefore recognized as long as procedural validity is secured through pseudonymization, review by the data review committee, consent of the data subject, and institutional bioethics review. However, the burden on research managers should be reduced by clarifying the pseudonymization and anonymization procedures; the introduction and consent procedures of the comprehensive consent system and the opt-out system should be clearly prepared; and procedures for handling the re-identification risks that may arise from technological development and for securing the data should be clearly defined.