• Title/Summary/Keyword: 마이크로 (micro)

Search Result 12,019, Processing Time 0.046 seconds

Structural Changes in Rental Housing Markets and a Mismatch between Quartile Income and Rent (월세 임차시장의 구조적 변화에 따른 분위별 소득과 임대료 간의 부정합 분석)

  • JungHo Park;Taegyun Yim
    • Land and Housing Review
    • /
    • v.14 no.4
    • /
    • pp.17-37
    • /
    • 2023
  • The rental housing market in South Korea, specifically monthly rent with deposit, has been expanding over the last three decades (from 8.2% in 1990 to 21.0% in 2020), partly replacing the traditional Jeonse market. The distribution of rent has changed due to public rental subsidies and the emergence of luxury rental housing, while the distribution of rental household income has been polarized by the emergence of rich renters. This study measures the structural changes in the rental market by developing a new indicator of income-rent mismatch. Using seven series of the Korea Housing Survey, this study analyzed the changes in rent (reflecting the conversion rate) and income levels of rental households in 2006 (the base year) and 10-15 years later (the analysis year), at the national level and at the spatial unit of 16 metropolitan cities and provinces (excluding Sejong), dividing households into quartiles. The results reveal that rental housing was undersupplied for middle- and high-income renters, as the highest quartile (25%→18%) and third quartile (25%→20%) groups shrank, while the supply of public rental housing expanded for the second quartile (25%→28%) and lowest quartile (25%→35%) groups. On the demand side, the highest income quartile shrank (25%→21%), while the lowest income quartile grew (25%→31%). Comparing the 16 metropolitan cities and provinces, there were significant regional differences in the direction and intensity of changes in rent and renter household income. In particular, the rental market in Seoul was characterized by supply polarization, which led to an imbalance in the income distribution of rental households. The structural changes in the apartment rental market also differed from those in the non-apartment rental market. The findings of this study can serve as a basis for future regional rental housing market policy: they can support securing affordable rental housing stock for each income quartile in the monthly rent market and developing housing stability measures that balance income and rent distributions in each region.
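The quartile-drift comparison described in the abstract can be sketched in a few lines. This is a minimal illustration of the idea of an income-rent mismatch indicator, not the study's actual method; the cutoffs, values, and the simple absolute-drift measure are all hypothetical.

```python
# Sketch: measuring how quartile shares drift away from a base year.
# All data values and the mismatch measure are hypothetical, not taken
# from the Korea Housing Survey.

def quartile_shares(values, cutoffs):
    """Share of observations falling in each band defined by base-year
    quartile cutoffs (so shares can drift away from 25% over time)."""
    counts = [0, 0, 0, 0]
    for v in values:
        if v <= cutoffs[0]:
            counts[0] += 1
        elif v <= cutoffs[1]:
            counts[1] += 1
        elif v <= cutoffs[2]:
            counts[2] += 1
        else:
            counts[3] += 1
    n = len(values)
    return [c / n for c in counts]

# Hypothetical base-year rent cutoffs (lower quartile, median, upper quartile)
cutoffs = (30, 50, 70)

base_year = [20, 25, 35, 45, 55, 65, 75, 80]       # 25% in each band
analysis_year = [18, 22, 28, 29, 35, 45, 55, 75]   # mass shifts downward

base_shares = quartile_shares(base_year, cutoffs)
later_shares = quartile_shares(analysis_year, cutoffs)

# A simple mismatch indicator: total absolute drift from base-year shares
mismatch = sum(abs(b - l) for b, l in zip(base_shares, later_shares))
print(base_shares, later_shares, mismatch)
```

Comparing the rent-side drift with the same computation on household incomes would then expose a supply-demand mismatch by quartile, in the spirit of the study's indicator.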

Metagenomic Analysis of Jang Using Next-generation Sequencing: A Comparative Microbial Study of Korean Traditional Fermented Soybean Foods (차세대 염기서열 분석을 활용한 장류의 메타지놈 분석 : 한국 전통 콩 발효식품에 대한 미생물 비교 연구)

  • Ranhee Lee;Gwangsu Ha;Ho Jin Jeong;Do-Youn Jeong;Hee-Jong Yang
    • Journal of Life Science
    • /
    • v.34 no.4
    • /
    • pp.254-263
    • /
    • 2024
  • Korean jang is a food made from fermented soybeans; typical products include gochujang (GO), doenjang (DO), cheonggukjang (CH), and ganjang (GA). In this study, 16S rRNA metagenome analysis was performed on a total of 200 samples of GO, DO, CH, and GA using next-generation sequencing to analyze the microbial communities of fermented soybean foods and compare taxonomic (biomarker) differences. Alpha diversity analysis showed that the Chao1 species richness index tended to be significantly higher than in the DO and GA groups (p<0.001). Microbial distribution analysis of the GO, DO, CH, and GA products showed that, at the order level, Bacillales was most abundant in the GO, DO, and CH groups, while Lactobacillales was most abundant in the GA group. Linear discriminant analysis effect size (LEfSe) analysis was used to identify biomarkers at the family and species levels: Leuconostocaceae, Thermoactinomycetaceae, Bacillaceae, and Enterococcaceae appeared as biomarkers at the family level, and Bacillus subtilis, Kroppenstedtia sanguinis, Bacillus licheniformis, and Tetragenococcus halophilus at the species level. Permutational multivariate analysis of variance (PERMANOVA) showed a significant difference in the microbial community structures of the GO, DO, CH, and GA groups (p=0.001), with the GA group's community structure differing the most. This study clarified the correlation between the characteristics of Korean fermented foods and microbial community distribution, enhancing knowledge of the microorganisms participating in the fermentation process. These results could be leveraged to improve the quality of fermented soybean foods.
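The species-richness comparison above relies on the Chao1 estimator. A minimal sketch of the bias-corrected form is below; the OTU abundance vector is illustrative, not data from the study.

```python
# Sketch: the Chao1 species-richness estimator used in alpha diversity
# analysis. Bias-corrected form: S_obs + F1*(F1-1) / (2*(F2+1)),
# where F1 = number of singletons and F2 = number of doubletons.

def chao1(otu_counts):
    s_obs = sum(1 for c in otu_counts if c > 0)   # observed species
    f1 = sum(1 for c in otu_counts if c == 1)     # seen exactly once
    f2 = sum(1 for c in otu_counts if c == 2)     # seen exactly twice
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# A community with many singletons yields a higher richness estimate,
# because rare species suggest more species remain unobserved.
counts = [1, 1, 1, 2, 5, 8, 10]   # hypothetical OTU abundances
print(chao1(counts))
```

Intuitively, the more rare (singleton) OTUs a sample contains, the more unseen species Chao1 infers, which is why a sample-rich group like CH can show a higher richness index than DO or GA.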

The association between COVID-19 and changes in food consumption in Korea: analyzing the microdata of household income and expenditure from Statistics Korea 2019-2022 (코로나19와 한국 식품 소비 변화의 관계: 2019-2022년 통계청 소비자 가계동향조사를 활용하여)

  • Haram Eom;Kyounghee Kim;Seonghwan Cho;Junghoon Moon
    • Journal of Nutrition and Health
    • /
    • v.57 no.1
    • /
    • pp.153-169
    • /
    • 2024
  • Purpose: The main goal of this study was to identify the impact of coronavirus disease 2019 (COVID-19) on grocery purchases (i.e., fresh and processed foods in the grain, vegetable, fruit, seafood, and meat categories) in Korea. To understand the specific impact of COVID-19, the study period was divided into three segments: PRE-COVID-19, INTER-COVID-19, and POST-COVID-19. Methods: We used the microdata of household income and expenditure from Statistics Korea (KOSTAT), representing households across the country. The data comprised monthly grocery expenditure data from January 2019 to September 2022. We compared the PRE-COVID-19 period to INTER-COVID-19, and then INTER-COVID-19 to POST-COVID-19, using multiple regression analysis. The covariates were the gender and age of the head of the household, the household's monthly income, the number of family members, the price index, and the month (as a dummy variable). Results: Expenditures on all grocery categories except fresh fruit increased from PRE-COVID-19 to INTER-COVID-19. From INTER-COVID-19 to POST-COVID-19, spending in almost all grocery categories declined, with processed meat being the only exception. Most purchases of protein sources increased during INTER-COVID-19 compared to PRE-COVID-19, while ham/sausage/bacon (meat protein), fish cakes and canned seafood (seafood protein), and soy milk (plant-based protein) did not decrease during POST-COVID-19 compared to INTER-COVID-19. Conclusion: These results show an overall increase in at-home grocery expenditure during COVID-19 due to an increase in eating at home, followed by a decrease in this expenditure in the POST-COVID-19 period. Notably, spending on protein and highly processed convenience food categories did not decline during the POST-COVID-19 period, reflecting consumer preferences after the pandemic.
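The covariate setup described in the Methods (continuous covariates plus month dummies) can be sketched as an ordinary least squares fit. This is a schematic with synthetic data and only one continuous covariate; it is not the study's model or its data.

```python
import numpy as np

# Sketch: OLS with month dummy variables, mirroring the kind of covariate
# setup described above. All data here is synthetic.
rng = np.random.default_rng(0)
n = 120
income = rng.uniform(200, 600, n)   # household monthly income (synthetic)
month = rng.integers(1, 13, n)      # calendar month 1..12

# Design matrix: intercept, income, 11 month dummies (January = baseline)
dummies = np.zeros((n, 11))
for i, m in enumerate(month):
    if m > 1:
        dummies[i, m - 2] = 1.0
X = np.column_stack([np.ones(n), income, dummies])

# Synthetic grocery spending: depends on income plus noise
y = 50 + 0.3 * income + rng.normal(0, 5, n)

# Least-squares estimate of the coefficients
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[1])   # income coefficient, close to the true 0.3
```

Comparing such coefficients across the PRE-, INTER-, and POST-COVID-19 subsamples is one way to read off period-to-period changes in category spending while holding household characteristics and seasonality fixed.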

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from the unstructured text data that constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized for practical applications in various industries. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization for training and applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods based on sampling or requiring multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. The product group clustering step could also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect they will further improve the performance of the basic model conceptually proposed in this study.
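The estimation step after Word2Vec training (group product names by cosine similarity to a seed term, then sum their sales) can be sketched as follows. The embedding vectors, product names, and sales figures below are hypothetical stand-ins for a trained Word2Vec model and the actual microdata.

```python
import numpy as np

# Sketch of the bottom-up estimation step: products whose name embedding
# is similar to a seed (e.g. KSIC index) term form one product group, and
# their sales are summed. Vectors and sales are hypothetical placeholders.
vectors = {
    "wireless mouse": np.array([0.9, 0.1, 0.0]),
    "optical mouse":  np.array([0.8, 0.2, 0.0]),
    "mouse pad":      np.array([0.6, 0.5, 0.1]),
    "office chair":   np.array([0.0, 0.1, 0.9]),
}
sales = {"wireless mouse": 120, "optical mouse": 80,
         "mouse pad": 40, "office chair": 300}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def market_size(seed_vec, threshold=0.95):
    """Sum sales over products similar to the seed term; the threshold
    controls the granularity of the product group, as in the abstract."""
    group = [name for name, v in vectors.items()
             if cosine(seed_vec, v) >= threshold]
    return group, sum(sales[name] for name in group)

group, size = market_size(vectors["wireless mouse"])
print(group, size)   # the two mouse products are grouped; size = 200
```

Lowering the threshold would pull "mouse pad" into the group, which is exactly the mechanism the abstract describes for adjusting the level of the market category.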

Records Management and Archives in Korea : Its Development and Prospects (한국 기록관리행정의 변천과 전망)

  • Nam, Hyo-Chai
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.1 no.1
    • /
    • pp.19-35
    • /
    • 2001
  • After almost a century of discontinuity in the archival tradition of the Chosun dynasty, Korea entered a new age of records and archival management by legislating and executing a basic law (the Records and Archives Management of Public Agencies Act of 1999). The Annals of the Chosun dynasty recorded the major historical facts of five hundred years of national affairs. The Annals are a major accomplishment in human history and rare in the world, made possible because they were composed of collected, selected, and compiled records of primary sources written by generations of historians. As important public records need to be preserved in their original forms in modern archives, Korea had to develop and establish a modern archival system to appraise and select important national records for archival preservation. However, the colonization of Korea deprived us of the opportunity to do so, and our fine archival tradition was not carried on. A centralized archival system began to develop with the establishment of GARS under the Ministry of Government Administration in 1969. GARS built a modern repository in Pusan in 1984, succeeding the tradition of the History Archives of the Chosun dynasty. In 1998, GARS moved its headquarters to the Taejon Government Complex and acquired state-of-the-art audio-visual archives preservation facilities. From 1996, GARS introduced an automated archival management system to remedy the manual registration and management system, complementing preservation microfilming. Digitization of the holdings was the key project to provide digital images of archives to users; to do this, GARS purchased new computer/server systems and developed application software. In parallel, GARS drastically renovated its manpower composition toward a high level of professionalization by recruiting more archivists with historical and library science backgrounds, as well as conservators and computer system operators.
The new archival law has been in effect since January 1, 2000, and made the following changes in the field of records and archival administration in Korea. First, the law regulates the records and archives of all public agencies, including the Legislature, the Judiciary, the Administration, the constitutional institutions, the Army, Navy, and Air Force, and the National Intelligence Service; a nation-wide unified records and archives management system became available. Second, public archives and records centers are to be established according to the level of the agency: a central archives at the national level, special archives for the National Assembly and the Judiciary, local government archives for metropolitan cities and provinces, and records centers or special records centers for administrative agencies. A records manager will be responsible for the records management of each administrative division. Third, records in public agencies are registered in a computer system as they are produced; therefore, the records are traceable and can be searched or retrieved easily through the internet or a computer network. Fourth, qualified records managers and archivists who are professionally trained in records management and archival science will be assigned mandatorily to guarantee the professional management of records and archives. Fifth, the illegal treatment of public records and archives constitutes a punishable crime. In the future, public records and archival management will develop along with the Korean government's 'Electronic Government Project.' The following changes are in prospect. First, public agencies will digitize paper records, audio-visual records, and publications as well as electronic documents, promoting administrative efficiency and productivity. Second, the National Assembly has already established its Special Archives; the Judiciary and the National Intelligence Service will follow.
More archives will be established at the city and provincial levels. Third, the more our society develops into a knowledge-based information society, the more records management will become one of the important functions of the national government. As more universities, academic associations, and civil societies participate in promoting archival awareness and establishing archival science, and as more people realize the importance of records and archives management, up to the level of a national public campaign, records and archival management in Korea will develop in ways significantly distinguishable from present practice.

The Content of Minerals and Vitamins in Commercial Beverages and Liquid Teas (유통음료 및 액상차 중의 비타민과 미네랄 함량)

  • Shin, Young;Kim, Sung-Dan;Kim, Bog-Soon;Yun, Eun-Sun;Chang, Min-Su;Jung, Sun-Ok;Lee, Yong-Cheol;Kim, Jung-Hun;Chae, Young-Zoo
    • Journal of Food Hygiene and Safety
    • /
    • v.26 no.4
    • /
    • pp.322-329
    • /
    • 2011
  • This study was conducted to analyze the mineral and vitamin contents of 437 specimens of mineral- and vitamin-fortified commercial beverages and liquid teas, to compare the measured values with the values declared on the food labels, and to investigate the ratio of measured to labeled values. The calcium and sodium content of samples after microwave digestion was analyzed with an ICP-OES (Inductively Coupled Plasma Optical Emission Spectrometer), and vitamins were determined by HPLC (High Performance Liquid Chromatography). The measured values of calcium ranged over 80.3~142.6% of the labeled values in 21 samples of calcium-fortified commercial beverages and liquid teas. For sodium, measured values were 33.9~48.5% of the labeled values in 21 sports beverages. The measured values of vitamin C, vitamin B2, and niacin ranged over 99.7~2003.6%, 81.1~336.7%, and 90.7~393.2% of the labeled values in 57, 12, and 11 samples of vitamin-fortified commercial beverages and liquid teas, respectively. To support accurate nutrition labeling, programs and initiatives are needed to improve food manufacturers' understanding of, and provide guidance on, food labeling and nutrition.
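The compliance figures above are measured-to-labeled ratios. A one-line sketch of that computation, with illustrative values rather than data from the 437 specimens:

```python
# Sketch: the measured-to-labeled ratio used throughout the abstract.
# Example values are illustrative, not from the analyzed specimens.

def percent_of_label(measured, labeled):
    """Measured content expressed as a percentage of the labeled value."""
    return 100.0 * measured / labeled

# e.g. a beverage labeled 100 mg calcium but measured at 142.6 mg
print(percent_of_label(142.6, 100.0))
```

A ratio well above 100% indicates over-fortification relative to the label, while one well below (as for sodium in the sports beverages) indicates the label overstates the content.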

Research on The Utility of Acquisition of Oblique Views of Bilateral Orbit During the Dacryoscintigraphy (눈물길 조영검사 시 양측 안 와 사위 상 획득의 유용성에 대한 연구)

  • Park, Jwa-Woo;Lee, Bum-Hee;Park, Seung-Hwan;Park, Su-Young;Jung, Chan-Wook;Ryu, Hyung-Gi;Kim, Ho-Shin
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.76-81
    • /
    • 2014
  • Purpose: During dacryoscintigraphy, oblique views in addition to the standard anterior image may express lachrymal duct deformities, the passage into the nasal cavity, and events such as epiphora occurring during the test more precisely. We also considered the need for a method to distinguish passage into the naso-lachrymal duct from epiphora flowing onto the skin. We therefore examined the validity of obtaining oblique views of both orbits in addition to anterior views. Materials and Methods: The subjects were 78 patients with epiphora due to blockage of the lachrymal duct, examined from January 2013 to August 2013; the average age was 56.96±13.36 years. Using a micropipette, we dropped 1-2 drops (10 μL each) of 99mTcO4- at 3.7 MBq (0.1 mCi) into the inferior conjunctival fold, then performed a dynamic acquisition for 20 minutes at 20 frames per minute. If passage from both eyes to the nasal cavity was confirmed immediately after the dynamic acquisition, oblique views were obtained immediately; if passage was not seen on either side, oblique views of the orbit were obtained after checking the frontal image at 40 minutes. The instrument used was a pin-hole collimator with a gamma camera (Siemens Orbiter, Hoffman Estates, IL, USA). Results: Among the 78 patients who underwent dacryoscintigraphy, 35 were confirmed to have passage into the nasal cavity on the anterior view. Of these 35 patients, 15 showed passage into the nasal cavity in both eyes, and the oblique views revealed clearer passage patterns in 11 of them (8 in both eyes, 2 in the left eye, and 1 in the right eye). Twenty patients had passage in the left or right eye, and among these, 10 showed clearer passage compared to the anterior view. Thirteen patients had possible passage, and 30 patients showed no movement of the tracer.
In sum, 21 of the 35 patients (60%) showed a clearer pattern of passage on the additional oblique views compared to the anterior view. On a 5-point scale rating the utility of the oblique views, respondents indicated that showing portions not well seen on the anterior view helps diagnose passage, delayed passage, and blockage of the naso-lachrymal duct. Also, when distinguishing passage into the naso-lachrymal duct from flow onto the skin in cases of epiphora, the oblique views offered a higher chance of correct classification (anterior: 4.14±0.3, oblique: 4.55±0.4). Conclusion: If oblique views of both orbits are obtained in addition to the anterior view during dacryoscintigraphy, diagnostic reading should improve, because areas not observable on the anterior view become visible, making it possible to confirm whether the tracer has drained past the naso-lachrymal duct and to see the flow of epiphora on the skin.


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming more important as information generation continues to grow. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis because it constantly generates new information, and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike prior work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports (about 55% of the total) are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show markedly lower performance than average; this may be due to interference with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities, or combinations of them, that are necessary to search related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; most notably, the especially poor performance for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
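A neural tensor network score function of the kind described (one per stock, scoring an entity vector) can be sketched with random placeholder weights. This is a simplified, single-argument variant of the Socher-style NTN, not the trained model from the study; the dimensions and all parameters are assumptions for illustration.

```python
import numpy as np

# Sketch: an NTN-style score function s(e) = u^T tanh(e^T W e + V e + b),
# one such function per stock. Weights are random placeholders, not the
# model trained in the study.
rng = np.random.default_rng(42)
d, k = 100, 4                      # entity vector dim, tensor slices

W = rng.normal(0, 0.1, (k, d, d))  # bilinear tensor term
V = rng.normal(0, 0.1, (k, d))     # linear term
b = rng.normal(0, 0.1, k)          # bias
u = rng.normal(0, 0.1, k)          # output weights

def score(entity_vec):
    """Score one entity against this stock's function."""
    bilinear = np.array([entity_vec @ W[i] @ entity_vec for i in range(k)])
    return float(u @ np.tanh(bilinear + V @ entity_vec + b))

# One-hot entity vector (e.g. the 3rd of a stock's top-100 entities)
e = np.zeros(d)
e[2] = 1.0
print(score(e))
```

At inference, a new entity would be scored by every stock's function, and the stock whose function yields the highest score is predicted as the related item, matching the procedure the abstract describes.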

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have made their internally developed AI technologies public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, there is a lack of studies that help develop or use deep learning open source software in industry. This study thus attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and revealed that seven of the eight TOE factors, as well as several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework: first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and their GPU resource support. In the proliferation stage of the deep learning framework: fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example, by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the next two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.