• Title/Summary/Keyword: Set Value


The Study of Radiation Exposed dose According to 131I Radiation Isotope Therapy (131I 방사성 동위원소 치료에 따른 피폭 선량 연구)

  • Chang, Boseok;Yu, Seung-Man
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.4
    • /
    • pp.653-659
    • /
    • 2019
  • The purpose of this study is to measure the air dose rate around patients discharged after high-dose ¹³¹I treatment and to predict the resulting radiation exposure of members of the public. The dosimetric evaluation was performed by distance and angle, using three copper rings, in 30 patients treated with high-dose iodine therapy of over 200 mCi. Two observers took measurements with a GM survey meter at eight azimuth angles and at three distances (50, 100, and 150 cm) for precise dose rate measurement. Based on these data, we set up three predictive simulations to calculate the exposure dose. The highest dose rate was observed at a measuring angle of 0° at a height of 1 m. The average dose rate at each distance was taken as the mean over the azimuth angles. The maximum external dose rates were 214 ± 16.5, 59 ± 9.1, and 38 ± 5.8 µSv/h at 50, 100, and 150 cm, respectively. If a patient travels on public transportation for 5 hours after high-dose iodine treatment, an unrelated person in an adjacent seat at 50 cm is exposed to 1.14 mSv. A person who cares for 4 days at a distance of 1 m from a patient wearing a urine bag receives a maximum dose of 6.5 mSv, and a guardian at a distance of 1.5 m for 7 days receives a maximum of 1.08 mSv. When our dose prediction model is applied to members of the general public around iodine therapy patients, the annual public dose limit is exceeded within a short time. This study can help in formulating reasonable guidelines for protecting the general public after the discharge of patients administered high-dose iodine.
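The exposure figures above are, at their simplest, products of a measured dose rate and an exposure time. The following minimal sketch shows only that arithmetic; it is not the authors' predictive model, which is not specified in the abstract and clearly includes additional factors (a constant-rate product gives 1.07 mSv for the 5-hour scenario rather than the reported 1.14 mSv).

```python
# Illustrative only: cumulative external dose under a constant measured
# dose rate, dose [mSv] = rate [uSv/h] * time [h] / 1000. This is not the
# authors' predictive model, which yields slightly different figures.

def cumulative_dose_mSv(rate_uSv_per_h, hours):
    """Integrate a constant dose rate over an exposure time, in mSv."""
    return rate_uSv_per_h * hours / 1000.0

# Maximum measured rates from the study, in uSv/h, keyed by distance (cm).
max_rates = {50: 214, 100: 59, 150: 38}

# 5 hours in an adjacent seat at 50 cm, assuming the rate stays constant.
print(cumulative_dose_mSv(max_rates[50], 5))  # 1.07
```
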

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text-mining studies have focused on the second step. However, with the recognition that the text-structuring process substantially influences the quality of the results, various embedding methods have been actively studied that preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be used directly with a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. Mapping arbitrary objects into a fixed-dimensional space while maintaining their algebraic properties, in order to structure text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as demand for document embedding grows rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into a single vector, is the most widely used. However, traditional document embedding methods represented by doc2Vec generate a vector for each document using all of the words included in the document.
This means the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, so a complex document covering multiple subjects is difficult to represent accurately with the traditional approach. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through other analysis methods, but since keyword extraction is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to each document's keywords are extracted to form a keyword-vector set for that document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects in each vector. With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
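Steps (3)-(5) of the pipeline can be sketched compactly. The snippet below assumes keyword vectors have already been produced by a word-embedding step (word2vec or similar) and uses a small hand-rolled k-means purely for illustration; it is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

# Sketch of steps (3)-(5): given pre-computed keyword vectors, cluster
# them to find a document's subjects, then emit one document vector per
# cluster (the mean of its member keyword vectors). The word-embedding
# step itself is assumed to have been done already.

def multi_vector_embedding(keyword_vectors, n_clusters=2, n_iter=20, seed=0):
    X = np.asarray(keyword_vectors, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # assign each keyword vector to its nearest cluster center
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its members
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    # one vector per identified subject (cluster)
    return [X[labels == k].mean(axis=0) for k in range(n_clusters)]

# toy example: two clearly separated "subjects" in 2-D keyword space
kw = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 4.9]]
vectors = multi_vector_embedding(kw, n_clusters=2)
print(len(vectors))  # 2 document vectors, one per subject
```

A real pipeline would choose the number of clusters per document (e.g. by silhouette score) rather than fixing it, since the number of subjects varies between documents.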

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Development of technologies in artificial intelligence has been accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved greater technological advances than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been combined with statistical artificial intelligence such as machine learning. Increasingly, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various AI applications, such as the question-answering systems behind smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. In recent years, much knowledge-based AI research and technology has relied on DBpedia, one of the largest knowledge bases, which aims to extract structured content from Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's key facts.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we present a knowledge extraction model that follows the DBpedia ontology schema, trained on Wikipedia infoboxes. Our model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infobox structures are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and about 2,500 relations. Furthermore, we comparatively evaluated CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through this process, structured knowledge can be obtained by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the expert effort required to construct instances that conform to the ontology schema.
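The last step described above, turning a BIO-tagged sentence into a triple, can be sketched as follows. The helper name, tag set, relation name, and example sentence are illustrative, not from the paper; the paper's taggers (CRF, Bi-LSTM-CRF) would produce the tag sequence that is hard-coded here.

```python
# Sketch: collect B-/I- tagged token spans, then assemble an RDF-style
# (subject, predicate, object) triple from the recognized values.
# Tags and the relation name "capitalOf" are illustrative assumptions.

def bio_to_spans(tokens, tags):
    """Collect token spans labeled B-X/I-X into (label, text) pairs."""
    spans, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((label, " ".join(current)))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(tok)
        else:  # "O" tag or inconsistent continuation closes the span
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea"]
tags   = ["B-CITY", "O", "O", "O", "O", "B-COUNTRY", "I-COUNTRY"]
spans = bio_to_spans(tokens, tags)
# build a triple from the recognized values
triple = (spans[0][1], "capitalOf", spans[1][1])
print(triple)  # ('Seoul', 'capitalOf', 'South Korea')
```
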

A Study on the Online Newspaper Archive : Focusing on Domestic and International Case Studies (온라인 신문 아카이브 연구 국내외 구축 사례를 중심으로)

  • Song, Zoo Hyung
    • The Korean Journal of Archival Studies
    • /
    • no.48
    • /
    • pp.93-139
    • /
    • 2016
  • Aside from serving as a body that monitors and criticizes the government through reviews and comments on public issues, newspapers also form and spread public opinion. Newspapers contain pictorial records, and local newspapers in particular are an important means of documenting locality. Furthermore, newspaper advertising and editing styles can be viewed as representations of their times. Because of this archival value, newspapers are considered a top collection priority when a documentation strategy is established. A newspaper archive that handles preservation and management therefore carries significance in many ways: journalists use it to write articles, scholars use it for academic purposes, and newspaper-in-education (NIE) programs are a practical application of such an archive. In the digital age, the newspaper archive holds an important position because it sits at the core of media asset management (MAM), which integrates and manages media assets, and an online archive is expected to play a new role in newspaper production and publishing-company management. The Korea Integrated News Database System (KINDS), an integrated article database, began its service in 1991, while Naver operates an online newspaper archive called "News Library." Initially, KINDS received an enthusiastic response, but its utilization has kept decreasing because major newspapers such as the Chosun Ilbo and JoongAng Ilbo are omitted and because of numerous user-interface problems. Despite this, the system still has several advantages: it is freely accessible because it is publicly funded, and local papers are easy to reach. The national library also consistently digitizes old newspapers.
In addition, individual newspaper companies have started similar services, but these are not yet substantial enough to be called archives. In the United States, "Chronicling America," led by the Library of Congress with funding from the National Endowment for the Humanities, is digitizing historic newspapers, and state universities and historical associations fund public libraries to digitize local papers. In the United Kingdom, the British Library is building an online newspaper archive called "The British Newspaper Archive"; unlike the US service, it charges a usage fee. The Joint Information Systems Committee has also invested in The British Newspaper Archive, and its construction is ongoing. ProQuest Archiver and Gale NewsVault are representative commercial platforms, noted for their efficiency and for standardizing newspaper archiving. It is now time to change the way we understand newspaper archives, and substantial investment is required to improve online newspaper archives at home and abroad.

Current Status and Future Development Direction of University Archives' Information Services : Based on the Interview with the Archives' Staff (대학기록관 기록정보서비스의 현황과 발전 방안 실무자 면담을 중심으로)

  • Lee, Hye Kyoung;Rieh, Hae-Young
    • The Korean Journal of Archival Studies
    • /
    • no.40
    • /
    • pp.131-180
    • /
    • 2014
  • Various theoretical studies have been conducted on activating university archives, but the services currently provided in the field have received little attention. This study aims to investigate the usage and users of domestic university archives, examine the types of archival information services provided, understand the characteristics and limitations of those services, and suggest directions for development. Three research objectives were set: first, to identify the users of university archives, their reasons for use, and the kinds of archival materials they use; second, to identify the services and programs university archives provide to users; and third, to identify the difficulties university archives face in delivering information services, the plans they are considering, and the best possible directions for improving the services. To capture the current status of the services, the authors decided to interview the staff of university archives. The range of services offered by university archives was defined first, and key research questions were then composed. To collect valid data, the authors carried out face-to-face, e-mail, and phone interviews with the staff of 12 university archives, along with an investigation of their websites. The collected data were categorized by interview topic for analysis. The analysis yielded the demographic information of the participants, the characteristics of the archives' users and their requests, the types and activities of the services offered, the limitations of the archival information services, the archives' future plans, and possible directions for development. Based on these findings, the study proposes implications and suggestions for archival information services in university archives in three domains, as follows.
First, university archives should build close relationships with internal administrative units, student groups, and faculty members for effective collection and better use of archives. Second, university archives need to acquire administrative records by transfer as well as manuscripts and archives by active collection; in particular, they should try to acquire materials unique to their own universities. Third, the archives should develop and provide various services that raise awareness of university archives and draw in more potential users. Finally, to address problems such as the limited understanding of the archives' value and the shortage of archival materials, it is suggested that archivists actively collect materials and proactively seek out and provide valuable information from the archives wherever it is needed.

Germination Characteristics of Medicinal Crop Adenophora triphylla var. japonica Hara as Affected by Seed Disinfection and Light Quality (종자 소독처리와 광질에 따른 약용작물 잔대 종자의 발아특성)

  • Lee, Hye Ri;Kim, Hyeon Min;Jeong, Hyeon Woo;Oh, Myung Min;Hwang, Seung Jae
    • Journal of Bio-Environment Control
    • /
    • v.28 no.4
    • /
    • pp.404-410
    • /
    • 2019
  • This study was performed to investigate the seed morphological characteristics and dormancy type of Adenophora triphylla var. japonica Hara, a high-value medicinal crop, and to select the disinfectant and light quality that improve its germination rate. Seed disinfection was carried out using distilled water (control), 4% NaClO, 4% H₂O₂, and benomyl at 500 mg·L⁻¹. The light-quality treatments were darkness (control I), fluorescent lamp (control II), and LEDs [red, blue, green, and combined RB LEDs (red:blue = 8:2, 6:4, 4:6, 2:8)] with a 12/12 h (light/dark) photoperiod and a light intensity of 150 ± 10 µmol·m⁻²·s⁻¹ photosynthetic photon flux density. Although the seed has an underdeveloped embryo, with an embryo (E):seed (S) ratio of 0.4, it germinates within 30 days, and seed moisture saturation was reached within 6 hours after immersion. Seed disinfection significantly inhibited the incidence of mold, and the final germination rate was highest, at 87%, with benomyl disinfection. Among the light treatments, the final germination rate was highest, at 92%, under red light, and the mean daily germination was lowest under R2B8. Therefore, the seed shows almost no dormancy, and benomyl disinfection and red light were effective in improving the germination rate, results that should be of high value for the cultivation of this medicinal crop.
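For reference, the two germination indices reported above can be computed as follows. The abstract gives no formulas, so the definitions used here (final germination percentage, and mean daily germination as final percentage divided by test duration) are standard assumptions, not taken from the paper.

```python
# Standard germination indices (assumed definitions; the abstract gives
# no formulas): final germination rate (%) and mean daily germination,
# i.e. final percentage divided by the test duration in days.

def germination_rate(germinated, total):
    """Final germination percentage."""
    return 100.0 * germinated / total

def mean_daily_germination(final_percent, days):
    """Mean daily germination (MDG) over the whole test period."""
    return final_percent / days

# e.g. 87 of 100 seeds germinating within the 30-day test window,
# matching the 87% reported for the benomyl treatment.
fgr = germination_rate(87, 100)
print(fgr)  # 87.0
print(mean_daily_germination(fgr, 30))  # 2.9
```
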

Changes in microbial phase by period after hepa filter replacement in King oyster(Pleurotus eryngii) mushroom cultivation (큰느타리 재배사에서 헤파필터 교체 이후 기간에 따른 미생물상 변화)

  • Park, Hye-Sung;Min, Gyong-Jin;Lee, Eun-Ji;Lee, Chan-Jung
    • Journal of Mushroom
    • /
    • v.18 no.4
    • /
    • pp.398-402
    • /
    • 2020
  • This study was conducted to establish a proper replacement cycle for high-efficiency particulate air (HEPA) filters by observing the microbial populations in the air of a Pleurotus eryngii cultivation house before and after HEPA filter replacement. The density of bacteria and fungi in the air at each cultivation stage was measured with an air sampler before the filter was replaced. Airborne microorganisms were densest in the medium preparation room, with 169.7 CFU/㎥ of bacteria and 570 CFU/㎥ of fungi, followed by the removal of spent spawn, with 126.3 CFU/㎥ of bacteria and 560 CFU/㎥ of fungi. After the replacement of the HEPA filter, bacterial density was lowest in the incubation room and fungal density was lowest in the cooling room. The microbial populations isolated in each period comprised seven genera and seven species before the replacement, including Cladosporium sp.; six genera and six species one month after replacement, including Penicillium sp.; five genera and seven species after three months, including Mucor plumbeus; and five genera with 12, 10, and 10 species at 4, 5, and 6 months after replacement, respectively, including Penicillium brevicompactum. Over the months after replacement, the species diversified and their number increased. The density of airborne microorganisms dropped drastically after the filter replacement, reached its lowest value after 2 months, and then increased gradually, reaching a level similar to or higher than before the replacement.
Therefore, it was concluded that replacing the HEPA filter every 6 months is effective for reducing contamination.
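The CFU/㎥ densities reported above come from dividing an air sampler's colony count by the volume of air drawn. A minimal sketch of that conversion follows; the flow rate and sampling time are hypothetical, since the paper does not state its sampler settings.

```python
# Converting an air sampler's colony count into an airborne density in
# CFU per cubic metre. Sampler settings here are assumed, not from the
# paper.

def cfu_per_m3(colonies, flow_l_per_min, minutes):
    """Density = colonies / sampled air volume, with litres -> cubic metres."""
    volume_m3 = flow_l_per_min * minutes / 1000.0
    return colonies / volume_m3

# e.g. 85 colonies from a 20-minute draw at 25 L/min (0.5 m3 of air)
print(cfu_per_m3(85, 25, 20))  # 170.0
```
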

Development of Digital Transceiver Unit for 5G Optical Repeater (5G 광중계기 구동을 위한 디지털 송수신 유닛 설계)

  • Min, Kyoung-Ok;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.1
    • /
    • pp.156-167
    • /
    • 2021
  • In this paper, we propose a digital transceiver unit for in-building 5G optical repeaters, which extend the coverage of 5G mobile communication network services and provide a stable wireless connection inside buildings. The proposed unit is composed of four blocks: a signal processing unit, an RF transceiver unit, an optical input/output unit, and a clock generation unit. The signal processing unit plays a central role: it handles the basic operation of the CPRI interface, combines the four antenna channel signals, and responds to external control commands. It also transmits and receives high-quality IQ data through the JESD204B interface, and its CFR and DPD blocks operate to protect the power amplifier. The RF transceiver AD-converts the RF signal received from the antenna and passes it to the signal processing unit through the JESD204B interface, and DA-converts the digital signal received from the signal processing unit over JESD204B and transmits the resulting RF signal to the antenna. The optical input/output unit converts electrical signals into optical signals for transmission and optical signals into electrical signals for reception. The clock generator suppresses jitter in the synchronous clock supplied from the CPRI interface of the optical input/output unit and supplies a stable synchronous clock to the signal processing unit and the RF transceiver; before a CPRI connection is established, a local clock is supplied so the unit operates in a connection-ready state. The XCZU9CG-2FFVC900I from Xilinx's MPSoC series was used to evaluate the proposed digital transceiver unit, and Vivado 2018.3 was used as the design tool.
The proposed unit converts the 5G RF input signal to digital through the ADC and transmits it to the test JIG over CPRI, and outputs the downlink data signal received from the JIG over CPRI through the DAC; its performance was evaluated in this configuration. The experimental results showed that flatness, return loss, channel power, ACLR, EVM, frequency error, and the other metrics met the target values.
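Among the metrics listed, EVM has a simple closed form: the RMS magnitude of the error vectors between measured and ideal IQ symbols, relative to the RMS reference magnitude. The sketch below computes it on toy QPSK data; the actual evaluation used instrument measurements, so the symbols and the 2% distortion here are purely illustrative.

```python
import numpy as np

# Illustrative RMS EVM computation from measured vs. ideal IQ symbols.
# Toy data only; not the instrument-based measurement used in the paper.

def evm_percent(measured, reference):
    """RMS EVM (%) = RMS error vector / RMS reference vector * 100."""
    measured = np.asarray(measured, dtype=complex)
    reference = np.asarray(reference, dtype=complex)
    err = np.sqrt(np.mean(np.abs(measured - reference) ** 2))
    ref = np.sqrt(np.mean(np.abs(reference) ** 2))
    return 100.0 * err / ref

# ideal unit-power QPSK constellation and a copy with a 2% gain error
ref = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
rx = ref * 1.02
print(round(evm_percent(rx, ref), 2))  # 2.0
```
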

Analytical method study for cephalexin with high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) applicable for residue studies in the whiteleg shrimp Litopenaeus vannamei (흰다리새우(Litopenaeus vannamei)에서 cephalexin의 잔류농도 연구를 위한 LC-MS/MS 분석법 연구)

  • Yang, Chan Yeong;Bae, Jun Sung;Lee, Chae Won;Jeong, Eun Ha;Lee, Ji-Hoon;Bak, Su-Jin;Choi, Sang-Hoon;Park, Kwan Ha
    • Journal of fish pathology
    • /
    • v.34 no.1
    • /
    • pp.71-80
    • /
    • 2021
  • Cephalexin, a semi-synthetic cephalosporin antibiotic, has long been used in fish aquaculture in various countries under legal authorization. The drug is thus potentially available for use in aquatic species besides fish, such as the crustacean whiteleg shrimp. This study aimed to develop a sensitive analytical method for laboratory residue studies to be used in determining withdrawal periods. Through repeated trials starting from existing methods developed for other food-animal tissues, a sensitive high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was achieved. At a concentration of 0.1 mg/kg, the recovery rate was 81.79% and the C.V. was 8.2%, which meet the recovery and C.V. criteria recommended by the Codex guideline. After satisfactory validation of the analytical procedure, its applicability to shrimp tissue was confirmed in experimentally cephalexin-treated whiteleg shrimp. Most muscle samples fell below the limit of quantification (0.05 mg/kg) after day 3, and most hepatopancreas samples fell below it after day 14. In particular, the 0.05 mg/kg limit of quantification of the developed method provides sufficient sensitivity relative to the current legal maximum residue limit of 0.2 mg/kg set for fish.
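The two validation statistics quoted above are straightforward to compute from replicate measurements of a spiked sample. The replicate values below are invented for illustration; they are not the study's raw data.

```python
import statistics

# Method-validation statistics cited against the Codex guideline:
# recovery (%) of a spiked concentration and the coefficient of
# variation (C.V., %) of replicate measurements. Toy replicates only.

def recovery_percent(measured_mean, spiked_conc):
    """Mean measured concentration as a percentage of the spiked level."""
    return 100.0 * measured_mean / spiked_conc

def cv_percent(values):
    """Sample standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# hypothetical replicate results for a 0.1 mg/kg spike
replicates = [0.080, 0.084, 0.082, 0.081]
print(recovery_percent(statistics.mean(replicates), 0.1))
print(cv_percent(replicates))
```
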

A Study on the Impacters of the Disabled Worker's Subjective Career Success in the Competitive Labour Market: Application of the Multi-Level Analysis of the Individual and Organizational Properties (경쟁고용 장애인근로자의 주관적 경력성공에 대한 영향요인 분석: 개인 및 조직특성에 대한 다층분석의 적용)

  • Kwon, Jae-yong;Lee, Dong-Young;Jeon, Byong-Ryol
    • Korea Social Policy Review (한국사회정책)
    • /
    • v.24 no.1
    • /
    • pp.33-66
    • /
    • 2017
  • Based on the premise that a systematic career process is a core element of successful achievement for workers in the general labor market, at both the individual and the organizational level, this study conducted an empirical analysis of the factors influencing the subjective career success of disabled workers in competitive employment, at the individual and organizational (company) levels, to provide practical implications for career management toward a successful vocational life. To that end, a structured questionnaire was administered to 126 disabled workers at 48 companies in Seoul, Gyeonggi, Chungcheong, and Gangwon, collecting data on individual and organizational characteristics. The influential factors were then analyzed with a multilevel analysis technique that takes organizational effects into consideration. The results show that organizational characteristics explained 32.1% of the total variance in subjective career success, which confirms the practical importance of the organizational variables and the legitimacy of applying a multilevel model. The significant influential factors included the degree of disability, desire for growth, self-initiated career attitude, and value-oriented career attitude at the individual level, and the provision of disability-related accommodations, career support, personnel support, and interpersonal support at the organizational level. The organizational-level factors also significantly moderated the influences of the individual-level variables on subjective career success. These findings call for plans to increase subjective career success by activating individual factors on the basis of organizational effects.
The study accordingly proposed and discussed integrated individual-company practice strategies, including setting up an accommodation support system that reflects disability characteristics, applying an employee support program, establishing a career development support system, and providing assistance in building a human network.
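The reported 32.1% corresponds to the intraclass correlation (ICC) of a two-level (worker-within-company) model: the share of total variance attributable to the organization level. A minimal sketch of that quantity follows; the variance components shown are illustrative numbers, not the study's estimates.

```python
# ICC of a two-level random-intercept model: between-organization
# variance (tau00) over total variance (tau00 + sigma^2). The inputs
# below are made-up components, not estimates from the study.

def icc(between_org_var, within_org_var):
    """ICC = tau00 / (tau00 + sigma^2)."""
    return between_org_var / (between_org_var + within_org_var)

# components that would reproduce an organization-level share of 0.321
print(icc(0.321, 0.679))  # ~0.321
```

In practice the two variance components would come from fitting a mixed-effects model to the survey data; the ICC then justifies (or not) the multilevel specification, as the study argues.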