• Title/Summary/Keyword: problem generation

A Study on the Trend and Utilization of Stone Waste (석재폐기물 현황 및 활용 연구)

  • Chea, Kwang-Seok;Lee, Young Geun;Koo, Namin;Yang, Hee Moon
    • Korean Journal of Mineralogy and Petrology / v.35 no.3 / pp.333-344 / 2022
  • The quarrying and utilization of natural building stones such as granite and marble are rapidly emerging in developing countries. A huge amount of waste is generated during the processing, cutting and sizing of these stones to make them usable. These wastes are disposed of in the open environment, and their toxic nature negatively affects the environment and human health. The growth trend in the world stone industry was confirmed in the output for 2019, which increased by more than one percent and reached a new peak of some 155 million tons, excluding quarry discards. Per-capita stone use rose to 268 square meters per thousand persons (m2/1,000 inh), from 266 the previous year and 177 in 2001. However, we have to take into consideration that the world's gross quarrying production was about 316 million tons (100%) in 2019; about 53% of that amount, however, is regarded as quarrying waste. With regard to the stone processing stage, world production reached 91.15 million tons (29%), which means that 63.35 million tons of stone-processing scraps were produced. Therefore, we can say that, on a global level, if the quantity of material extracted in the quarry is 100%, the total percentage of waste is about 71%. This raises a substantial problem from the environmental, economic and social points of view. There are essentially three ways of dealing with inorganic waste, namely reuse, recycling, or disposal in landfills. Reuse and recycling are the preferred waste management methods that consider environmental sustainability and the opportunity to generate important economic returns. Although there are many possible applications for stone waste, they can be summarized into three main general applications, namely fillers for binders, ceramic formulations, and environmental applications. The use of residual sludge for substrate production seems to be highly promising: the substrate can be used for quarry rehabilitation and in the rehabilitation of industrial sites. This new product (artificial soil) could be included in the list of materials to use, in addition to topsoil, for civil works, railway embankments, and roundabouts, and stone sludge waste could be used for the neutralization of acidic soil to increase yield. For stone waste it is also possible to find several examples of studies on the recovery of mineral residues, including the extraction of metallic elements and mineral components, the production of construction raw materials, power generation, building materials, and gas and water treatment.
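As a quick check of the waste arithmetic quoted above, the short sketch below reproduces the shares from the abstract's own tonnage figures; the rounding follows the abstract.

```python
# Minimal arithmetic check of the stone-waste shares quoted above (values in million tons),
# using the abstract's own 2019 figures.
gross_quarrying = 316.0          # world gross quarrying production (taken as 100%)
finished_products = 91.15        # world stone production after processing

finished_share = finished_products / gross_quarrying      # ~0.29 (29%)
total_waste_share = 1.0 - finished_share                   # ~0.71 (71%)
quarry_waste = 0.53 * gross_quarrying                      # ~167.5 Mt of quarry waste
processing_scrap = 155.0 - finished_products               # ~63.9 Mt, close to the quoted 63.35 Mt

print(f"finished {finished_share:.0%}, total waste {total_waste_share:.0%}, "
      f"quarry waste {quarry_waste:.1f} Mt, processing scrap {processing_scrap:.1f} Mt")
```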

Analysis and Satisfaction Survey of Summer Camp Trends of the Education Ministry of Korean Church in the Era of COVID-19 : From 2020 to 2022 (코로나 19시대의 한국교회 교육부 여름 사역 동향 분석 및 만족도 조사 : 2020년부터 2022년까지)

  • Kim, Jaewoo
    • Journal of Christian Education in Korea / v.71 / pp.277-303 / 2022
  • The COVID-19 Pandemic, which began in 2020, has led to many changes in the Korean church. It created a situation in which not only the time and form of worship, but also the definition, direction, and philosophy of ministry had to be re-established. In the early days of the COVID-19 Pandemic, the Korean church recognized this as a crisis, but gradually came to regard it as an opportunity and tried to produce positive results. Education departments have also undergone many changes, especially in their summer ministry, which is expected to have changed more dramatically in form, location and method than any other church event or service. However, no accurate data on this had been collected. Accordingly, Mirae with Dreams (CEO: Pastor Kim Eun-ho), a corporation established by Oryun Church for next-generation ministry, has conducted a survey on the summer ministry of Korean churches registered as its members every year since 2020, when the COVID-19 Pandemic began. A similar survey was conducted in 2022 following 2021, and 260 churches responded; the results are as follows. In 2022, the summer ministry of the education departments of the Korean church returned to the form used before the COVID-19 Pandemic. Unlike 2021, when many camps were held online, more than 81 percent said they had conducted summer camps offline, and 31 percent also conducted or attended outdoor camps. In terms of the importance of roles, when online ministry was the main format the roles of parents and teachers were viewed as equally important, whereas in this summer's survey 90 percent of respondents said that the role of the teachers in charge of each department was important. Summer events were mainly summer Bible schools and retreats, but 25% of all respondents said they conducted missionary work and evangelism at home and abroad. Compared to 2021, participation in summer camps increased in all departments, including infant and kindergarten, elementary, and middle school, and especially in the infant and middle school departments. While preparing for the summer camp, most respondents said that the focus was on content and topics, with more emphasis on children's accessibility than in 2021. Synthesizing the reasons given by respondents who could not conduct a summer camp, about 40% said they could not do so due to a lack of volunteers. This is more than the roughly 30% who pointed to COVID-19 as the cause, and it can be seen as an urgent problem to be solved at the level of the Korean church and its denominations. In addition, this paper also discusses detailed changes in each question, referring to the changes in summer camps from 2020 to 2022.

A Study on Image-Based Mobile Robot Driving on Ship Deck (선박 갑판에서 이미지 기반 이동로봇 주행에 관한 연구)

  • Seon-Deok Kim;Kyung-Min Park;Seung-Yeol Wang
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.7 / pp.1216-1221 / 2022
  • Ships tend to be larger to increase the efficiency of cargo transportation. Larger ships lead to increased travel time for ship workers, increased work intensity, and reduced work efficiency. Problems such as increased work intensity are reducing the influx of young people into this labor market, along with the younger generation's avoidance of high-intensity labor. In addition, the rapid aging of the population and the decrease in the young labor force aggravate the labor shortage problem in the maritime industry. To overcome this, the maritime industry has recently introduced technologies such as an intelligent production design platform and a smart production operation management system, and a smart autonomous logistics system is one of these technologies. The smart autonomous logistics system is a technology that delivers various goods using intelligent mobile robots, and it enables the robot to drive itself by using sensors such as lidar and cameras. Therefore, in this paper, it was checked whether the mobile robot could autonomously drive to a stop sign by detecting the passageway of the ship deck. Autonomous driving was performed by detecting the ship-deck passageway through the camera mounted on the mobile robot, based on data learned through Nvidia's end-to-end learning. The mobile robot was stopped by detecting the stop sign using SSD MobileNetV2. The experiment, in which the mobile robot autonomously drives about 70 m to the stop sign without deviating from the ship-deck passageway, was repeated five times. As a result of the experiment, it was confirmed that the mobile robot drove without deviating from the passageway. If a smart autonomous logistics system to which this result is applied is used in the marine industry, it is expected to improve safety, reduce the required labor force, and increase work efficiency.
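The stop-sign check described above can be illustrated with a minimal detection sketch. The paper uses SSD MobileNetV2; the snippet below substitutes torchvision's pretrained SSDLite/MobileNetV3 detector as a stand-in, and the COCO class id, score threshold, and robot-stop call are illustrative assumptions rather than the authors' setup.

```python
# Hedged sketch: detecting a stop sign with an SSD-style detector, analogous to the
# paper's SSD MobileNetV2 stop-sign check. torchvision's SSDLite/MobileNetV3 model is
# used here as a stand-in; threshold and usage are illustrative assumptions.
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large
from torchvision.transforms.functional import to_tensor
from PIL import Image

COCO_STOP_SIGN_ID = 13  # "stop sign" class id in the COCO label map

model = ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.eval()

def stop_sign_detected(frame: Image.Image, score_threshold: float = 0.5) -> bool:
    """Return True if a stop sign is detected in the camera frame."""
    with torch.no_grad():
        predictions = model([to_tensor(frame)])[0]
    for label, score in zip(predictions["labels"], predictions["scores"]):
        if label.item() == COCO_STOP_SIGN_ID and score.item() >= score_threshold:
            return True
    return False

# Example usage: stop the robot when the sign is seen (robot API is hypothetical).
# frame = Image.open("deck_camera_frame.jpg")
# if stop_sign_detected(frame):
#     robot.stop()
```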

A Study on the Future Prospect for Establishing the True Donghak Phase of Daesoon Thought (대순사상의 참동학 위상정립을 위한 미래관 연구)

  • Kim, Yong-hwan
    • Journal of the Daesoon Academy of Sciences / v.29 / pp.1-36 / 2017
  • The purpose of this article is to investigate the future prospects for establishing the True Donghak phase of Daesoon Thought. The True Donghak refers to 'the future prospect of having a true life, true thinking, and true living,' in which enjoying the world in a state of good fortune became a true reality after the death of Suwun, according to faith in Gucheon Sangje. The correlation between "Attending to the Lord of Heaven" in Donghak and "The Reordering Works of Heaven and Earth" in Daesoon shows the prospect of achieving the Daesoonist transformation into energy to gain true life and re-creation. The correlation between "Nourishing the Lord of Heaven" in Donghak and "Attending to Study and Attending to Law" in Daesoon shows the transformation of Daesoon-reason into true thinking and renewing. The correlation between "Humanity is Divine" in Donghak and "The Salvation of Humanity is the Will of Heaven" in Daesoon shows the transformation into the practice of Daesoon for true living and renewing. This investigation utilizes the literature review and the generation theory of life-philosophy to examine revelations regarding the conversation between Spirit and Mind. This is the future prospect for establishing the True Donghak phase of Daesoon Thought, and it consists of a threefold connection among life, thinking, and living. The "public-centered spirituality of Daesoon Truth," which connects and mediates among people, appears in three aspects. Firstly, it is thought to be the vision of true life through the 'renewal of active, energetic power' bestowed by Gucheon Sangje. Secondly, it is thought to be the vision of true thinking through the 'renewal via freedom from delusion.' Thirdly, it is thought to be the vision of true living through the 'renewal of true mind.' To bring about the creation of true Donghak, Gucheon Sangje incarnated on the Korean peninsula instead of Suwun, and the salvation of the world now centers on Korea with regard to the threefold-connection future prospect. Gucheon Sangje's revelation addresses and solves the postscript problem of Chosun and further establishes a Utopia. Suwun established Donghak but failed later on due to his lankiness. At last the true Donghak has been opened for the future by Gucheon Sangje and Jeongsan's fifty years of religious accomplishments. In the long run, it has been developed further by Woodang's Daesoon Jinrihoe.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of the SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimate of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
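The gravity-model distribution and SELINK link-adjustment step described in the abstract above can be made concrete with a minimal sketch. Everything here is illustrative: the three zones, the exponential friction-factor form, the cost matrix, the "selected link" and its ground count are assumptions, not the study's data.

```python
# Minimal sketch of a doubly-constrained gravity model with a friction factor curve
# and a SELINK-style link adjustment factor (ground count / assigned volume).
import numpy as np

def gravity_model(productions, attractions, cost, beta=0.1, iters=50):
    """T_ij = a_i * P_i * b_j * A_j * f(c_ij), balanced so rows sum to P and columns to A."""
    friction = np.exp(-beta * cost)          # simple exponential friction factor curve
    a = np.ones_like(productions, dtype=float)
    b = np.ones_like(attractions, dtype=float)
    for _ in range(iters):                   # iterative balancing of row/column factors
        a = 1.0 / (friction * (b * attractions)[None, :]).sum(axis=1)
        b = 1.0 / (friction * (a * productions)[:, None]).sum(axis=0)
    return (a * productions)[:, None] * (b * attractions)[None, :] * friction

# Three illustrative zones
productions = np.array([100.0, 200.0, 150.0])
attractions = np.array([180.0, 120.0, 150.0])
cost = np.array([[1.0, 4.0, 6.0],
                 [4.0, 1.0, 3.0],
                 [6.0, 3.0, 1.0]])           # zone-to-zone travel cost (e.g. distance)

trips = gravity_model(productions, attractions, cost)

# SELINK-style adjustment: factor = ground count / volume assigned to a "selected link",
# then applied to the productions/attractions of the zones whose trips use that link.
assigned_volume = trips[0, 1] + trips[0, 2]  # trips assumed to traverse the selected link
ground_count = 90.0                          # hypothetical observed count on that link
link_adjustment_factor = ground_count / assigned_volume
productions[0] *= link_adjustment_factor
```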

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.1-23 / 2018
  • Since the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, is growing explosively. As E-commerce grows, customers can easily get what they want to buy while comparing various products, because more products are registered at online shopping malls. However, a problem has arisen with the growth of E-commerce. As too many products have been registered, it has become difficult for customers to find what they really need in the flood of products. When customers search for desired products with a generalized keyword, too many products come up as results. On the contrary, few products are found when customers type in product details, because concrete product attributes are rarely registered. In this situation, automatically recognizing text in images with a machine can be a solution. Because the bulk of product details is written in catalogs in image format, most product information cannot be found with text inputs in the current text-based search system. This means that if the information in images can be converted to text format, customers can search for products by product details, which makes shopping more convenient. There are various existing OCR (Optical Character Recognition) programs that can recognize text in images, but they are hard to apply to catalogs because they have problems recognizing text in certain circumstances, such as when the text is not big enough or the fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning algorithms, which have been the state of the art in image recognition since the 2010s. The Single Shot MultiBox Detector (SSD), a model credited for its object-detection performance, can be used with its structure redesigned to take into account the differences between text and objects. However, there is an issue that the SSD model needs a lot of labeled training data, because deep learning algorithms of this kind must be trained by supervised learning. To collect data, we could try manually labelling the location and classification information of text in catalogs, but if data are collected manually, many problems come up. Some keywords would be missed because humans make mistakes while labelling training data, and the work becomes too time-consuming considering the scale of data needed, or too costly if many workers are hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve the data issue, this research developed a program that creates training data automatically. This program can generate catalog-like images that contain various keywords and pictures and, at the same time, save the location information of the keywords. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves. The SSD model recorded a recognition rate of 81.99% with 20,000 training examples created by the program. Moreover, this research tested the efficiency of the SSD model under different data conditions to analyze which features of the data influence the performance of recognizing text in images.
As a result, it was found that the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spacing among keywords, and the differences in background images are related to the performance of the SSD model. This test can lead to performance improvements of the SSD model, or of other deep-learning-based text recognizers, through high-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in E-commerce. Suppliers can spend less time registering keywords for products, and customers can search for products using the product details written in the catalog.
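A minimal sketch of the kind of automatic training-data generator described above: render keywords onto a blank, catalog-like canvas and save their bounding boxes as labels. The keyword pool, font path, image size, and JSON label format are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of automatic training-data generation for a text detector:
# draw keywords on a background image and record bounding-box labels.
import json
import random
from PIL import Image, ImageDraw, ImageFont

KEYWORDS = ["cotton", "waterproof", "800g", "USB-C"]   # hypothetical keyword pool
FONT_PATH = "NanumGothic.ttf"                          # any available TTF font

def make_sample(path_img: str, path_label: str, size=(640, 640), n_words=3):
    image = Image.new("RGB", size, color=(255, 255, 255))   # blank catalog-like canvas
    draw = ImageDraw.Draw(image)
    labels = []
    for _ in range(n_words):
        word = random.choice(KEYWORDS)
        font = ImageFont.truetype(FONT_PATH, random.randint(18, 48))
        x = random.randint(0, size[0] - 200)
        y = random.randint(0, size[1] - 60)
        draw.text((x, y), word, fill=(0, 0, 0), font=font)
        x0, y0, x1, y1 = draw.textbbox((x, y), word, font=font)  # bounding box of the text
        labels.append({"keyword": word, "bbox": [x0, y0, x1, y1]})
    image.save(path_img)
    with open(path_label, "w", encoding="utf-8") as f:
        json.dump(labels, f, ensure_ascii=False)

# Usage: generate one (image, label) pair for SSD-style training.
# make_sample("sample_000.png", "sample_000.json")
```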

Visual Media Education in Visual Arts Education (미술교육에 있어서 시각적 미디어를 통한 조형교육에 관한 연구)

  • Park Ji-Sook
    • Journal of Science of Art and Design / v.7 / pp.64-104 / 2005
  • Visual media transmit images and information reproduced in large quantities, such as photography, film, television, video, advertisement, or computer images. 'Visual culture' refers to the cultural phenomena of visual images conveyed via visual media, which include not only the categories of traditional arts such as painting, sculpture, print, or design, but also the performance arts, including fashion shows or carnival parades, and the mass and electronic media such as photography, film, television, video, advertisement, cartoon, animation, or computer images. In the world of visual media, the 'image' functions as an essential medium of communication. Therefore, people call the culture of today an 'era of image culture,' which has shifted from an alphabet-centered era to an image-centered one. The image, via visual media, has become a dominant means of communication in large parts of human life, so we can designate the 'image' as a typical aspect of today's visual culture. The image, as an essential medium of communication, plays an important role in contemporary society. Digital images are produced in two ways. One is the conversion of an analogue image, such as an actual picture, photograph, or film, into a digital one through digitization by a digital camera or scanner acting as an 'analogue/digital converter.' The other is processing with computer drawing or the modeling of objects, which is appropriate for the production of pictorial and surreal images. Digital images can be divided into the 'pixel' form and the 'vector' form. A vector is a line linking a starting point to an end point, which organizes information; the computer stores each line's reference location and the locations of the lines relative to one another. The digital image shows far more 'perfectness' than any other visual medium. The digital image has been evolving in diverse directions, such as the production of geometrical or organic image composites, interactive art, multimedia art, or web art, in which the computer is applied as an extended tool of painting. Some even interpret the digitized copy, with its endless reproduction of the original, as an extension of the print. Visual art is no longer a simple activity of representation by a painter or sculptor, but is now intimately associated with the application of media. There are some problems with images conveyed via visual media. First, the image via media does not reflect reality as it is, but reflects an artificially manipulated world, that is, a virtual reality. Second, the introduction of digital effects and the development of image-processing technology have increased the spectacle of destructive and violent scenes. Third, children tend to recognize the interactive images of computer games and virtual reality as reality, or truth. Education needs not only to point out the ill effects of mass media and prevent the younger generation from being damaged by them, but also to offer the knowledge and know-how to cope actively with social and cultural circumstances. Visual media education is one of these essential methods for contemporary and future human beings amid the overflow of image information. The fostering of 'visual literacy' can be considered the very purpose of visual media education. This is a way to lead individuals, as far as possible, to become discerning, active consumers and producers of visual media in their lives.
The elements of 'visual literacy' can be divided into a faculty of recognition related to visual media, a faculty of critical reception, a faculty of appropriate application, a faculty of active work, and a faculty of creative modeling, which are promoted at the same time by the education of visual literacy. In conclusion, the education of 'visual literacy' guides students to comprehend and discriminate visual image media carefully, receive them critically, apply them properly, and produce them creatively and voluntarily. Moreover, it leads to artistic activity by means of new media. This education can be approached and enhanced through connection and integration with real life. Visual arts, and education in them, play an important role in the digital era, which depends on visual communication via image information. Visual media today function as an essential element both in daily life and in the arts. Students can soundly understand the visual phenomena of today by means of visual media, and apply them as an expression tool of everyday culture as well. A new recognition and valuation of visual image and media education is required to cultivate the capability to deal actively and uprightly with the changes in the history of civilization. 1) Visual media education helps to cultivate a sensibility for images, which reacts to and deals with the circumstances. 2) It helps students to comprehend contemporary arts and culture via new media. 3) It supplies students with a chance to experience visual modeling by means of new media. 4) It provides educational opportunities for images with temporality and spatiality, and therefore the number of discerning persons increases. 5) The modeling activity via new media leads students to be continuously interested in the study and production of plastic arts. 6) It raises the ability of visual communication needed to deal with the image-information society. 7) Education in digital images is also significant for cultivating talented people for the future image-information society. To respond to changing and developing social and cultural circumstances, and to the forms and recognition of students' reception of them, visual arts education must open up the field of study of a new visual culture. In addition, a more systematic and active program for visual media education needs to be developed. Educational content should be extended to the media of visual images, that is, photography, film, television, video, computer graphics, animation, music videos, computer games, and multimedia. Each medium must be approached separately, because each maintains its own modes and peculiarities according to the form in which its message is conveyed. Concrete and systematic teaching methods and the quality of education must be researched and developed, centering on the development of a course of study. Teachers' foundational teaching capability should be cultivated for visual media education. In this case, attention must be paid to the fact that the technological level of the media is secondary, because school education does not intend to train expert, skillful producers, but to lay stress on the essential aesthetic experience with visual media in its social and cultural context, with respect to the consumer as a person of culture.

Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.177-189 / 2013
  • In smart education, the role of the digital textbook is very important as a face-to-face medium for learners. The standardization of digital textbooks will promote the industrialization of digital textbooks for content providers and distributors as well as for learners and instructors. In this study, ways to standardize digital textbooks are sought with the following three objectives. (1) Digital textbooks should undertake the role of media for blended learning that supports both online and offline classes, should operate on a common EPUB viewer without a special dedicated viewer, and should utilize the existing frameworks for e-learning content and learning management. The reason to consider EPUB as the standard for digital textbooks is that digital textbooks then do not need to specify another standard for the form of books, and can take advantage of the industrial base of EPUB-standard content and distribution structures. (2) Digital textbooks should provide a low-cost open-market service using currently available standard open software. (3) To provide appropriate learning-feedback information to students, digital textbooks should provide a foundation that accumulates and manages all learning-activity information according to a standard infrastructure for educational big data processing. In this study, the digital textbook in a smart education environment is referred to as the open digital textbook. The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smartphones, and PCs; (2) a digital textbook platform to display and run digital content on the terminals; (3) a learning-content repository, which exists in the cloud and maintains accredited learning content; (4) an app store for providing and distributing secondary learning content and learning tools from content-developing companies; and (5) an LMS as a learning support/management tool that on-site class teachers use for creating classroom instruction materials. In addition, locating all of the hardware and software that implement the smart education service within the cloud takes advantage of cloud computing for efficient management and reduced expense. The open digital textbook of smart education can be seen as providing an e-book-style interface to the LMS for learners. In the open digital textbook, the representation of text, images, audio, video, equations, and so on is a basic function, but painting, writing, problem solving, and the like are beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication is also required when using the open digital textbook. To represent student demographics, portfolio information, and class information, the standards used in e-learning are desirable. To process learner-tracking information about the activities of the learner for the LMS (Learning Management System), the open digital textbook must have a recording function and a function for communicating with the LMS. DRM is a function for protecting copyrights; currently, the DRM of an e-book is controlled by the corresponding book viewer. If the open digital textbook accommodates the DRM standards used by the various e-book viewers, the implementation of redundant features can be avoided. Security/privacy functions are required to protect information about study or instruction from third parties. UDL (Universal Design for Learning) is a learning-support function for those with disabilities who have difficulty in learning courses.
The open digital textbook, which is based on the e-book standard EPUB 3.0, must (1) record learning-activity log information and (2) communicate with the server to support learning activities. While the recording and communication functions, which are not determined by current standards, can be implemented in JavaScript and utilized in current EPUB 3.0 viewers, a strategy of proposing such recording and communication functions as a next-generation e-book standard, or as a special standard (EPUB 3.0 for education), is needed. Future research will implement an open-source program based on the proposed open digital textbook standard and present new educational services, including big data analysis.
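To make the recording/communication requirement concrete, here is a minimal server-side sketch that accumulates learning-activity records posted by a viewer. The paper envisions the viewer-side functions in JavaScript inside EPUB 3.0; the route, payload fields, and Flask-based server below are illustrative assumptions only.

```python
# Hedged sketch of the server side of the learning-activity recording/communication
# described above: a minimal endpoint that accumulates activity records sent from an
# EPUB 3.0 viewer. Payload shape and storage are hypothetical, not a standard.
from flask import Flask, request, jsonify

app = Flask(__name__)
activity_log = []   # in-memory store; a real LMS would persist to a database

@app.route("/api/learning-activity", methods=["POST"])
def record_activity():
    record = request.get_json(force=True)
    # Illustrative record shape: {"learner_id": ..., "textbook_id": ..., "page": ...,
    #                             "event": "answered", "timestamp": "..."}
    activity_log.append(record)
    return jsonify({"status": "ok", "stored": len(activity_log)})

if __name__ == "__main__":
    app.run(port=8080)
```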

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because the water then exceeds the drinking water standard in terms of color, taste, turbidity and dissolved iron concentration, and it often results in scaling problems within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form within a few hours after pumping-out. In this paper we examine the process of the brown precipitates' formation using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of the generation of turbidity in groundwater. The results of this study are used to suggest not only the proper pumping technique to minimize the formation of precipitates but also the optimal design of water treatment methods to improve the water quality. The bed-rock groundwater in the Pajoo area belongs to the Ca-$HCO_3$ type that evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous, Fe-bearing oxides or hydroxides. By the use of multi-step filtration with pore sizes of 6, 4, 1, 0.45 and 0.2 $\mu\textrm{m}$, the precipitates mostly fall in the colloidal size range (1 to 0.45 $\mu\textrm{m}$) but are concentrated (about 81%) in the range of 1 to 6 $\mu\textrm{m}$ in terms of mass (weight) distribution. Large amounts of dissolved iron possibly originated from the dissolution of clinochlore in cataclasite, which contains high amounts of Fe (up to 3 wt.%). The calculation of the saturation index (using the computer code PHREEQC), as well as the examination of pH-Eh stability relations, also indicates that the final precipitates are Fe-oxy-hydroxides formed by the change of water chemistry (mainly oxidation) due to the exposure to oxygen during the pumping-out of Fe(II)-bearing, reduced groundwater. After pumping-out, the groundwater shows progressive decreases of pH, DO and alkalinity with elapsed time, whereas turbidity increases and then decreases with time. The decrease of dissolved Fe concentration as a function of elapsed time after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction due to the influx of free oxygen during the pumping and storage of groundwater results in the formation of brown precipitates, which depends on time, $P_{O_2}$ and pH. In order to obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks with sufficient volume for a sufficient time. Particle size distribution data also suggest that step-wise filtration would be cost-effective. To minimize scaling within wells, continued (if possible) pumping within the optimum pumping rate is recommended, because this technique is most effective for minimizing the mixing between deep Fe(II)-rich water and shallow $O_2$-rich water. The simultaneous pumping of shallow $O_2$-rich water in different wells is also recommended.
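The reported regression can be evaluated directly. The sketch below computes Fe(II) = 10.1 exp(-0.0009t) over elapsed time and the time needed to fall below an assumed threshold; the time unit and the 0.3 threshold are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch evaluating the reported regression Fe(II) = 10.1 * exp(-0.0009 t)
# for dissolved iron after pumping-out. Time unit and threshold are assumptions.
import math

def fe2_concentration(t):
    """Dissolved Fe(II) predicted by the regression at elapsed time t."""
    return 10.1 * math.exp(-0.0009 * t)

def time_to_reach(threshold):
    """Elapsed time at which the regression first falls below `threshold`."""
    # Solve 10.1 * exp(-0.0009 t) = threshold  ->  t = ln(10.1 / threshold) / 0.0009
    return math.log(10.1 / threshold) / 0.0009

for t in (0, 500, 1000, 2000):
    print(f"t = {t:5d}: Fe(II) = {fe2_concentration(t):.2f}")

print(f"time to fall below 0.3 (assumed threshold): {time_to_reach(0.3):.0f}")
```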

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • In addition to the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it only focused on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, that is, total collapse in a single moment. The key variables used in corporate default prediction vary over time. Comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, changes in the importance of predictive variables were also found for Zmijewski's (1984) and Ohlson's (1980) models. However, the studies carried out in the past use static models; most of them do not consider the changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm reflecting dynamic change. Based on the global financial crisis, which had a significant impact on Korea, this study is conducted using 10 years of annual corporate data from 2000 to 2009. Data are divided into training data, validation data, and test data of 7, 2, and 1 years respectively. In order to construct a consistent bankruptcy model through the flow of time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). The parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and shows excellent predictive power. After that, each bankruptcy prediction model is restructured by integrating the training data and validation data again (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over nine years. The usefulness of the corporate default prediction model based on the deep learning time series algorithm is thereby demonstrated. In addition, by adding Lasso regression analysis to the existing variable-selection methods (multiple discriminant analysis, logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. In the case of corporate data, there are the limitations of nonlinear variables, multi-collinearity among variables, and lack of data. While the logit model is nonlinear, the Lasso regression model solves the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, complements the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally towards intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is also more effective in predictive power. Through the Fourth Industrial Revolution, the current government and overseas governments are working hard to integrate such systems into the everyday life of their nations and societies, yet the field of deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults. It is therefore hoped that it will serve as comparative analysis material for non-specialists who begin studies combining financial data and deep learning time series algorithms.
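As a rough illustration of the deep learning time series approach described above, here is a minimal LSTM classifier that maps a firm's sequence of annual financial ratios to a default probability. The sequence length, number of ratios, layer sizes, and random stand-in data are illustrative assumptions, not the authors' model specification.

```python
# Hedged sketch of an LSTM-based corporate default classifier: a sequence of annual
# financial ratios per firm is mapped to a default probability. All shapes and the
# toy data are illustrative assumptions.
import numpy as np
import tensorflow as tf

N_YEARS, N_RATIOS = 7, 10                 # e.g. 7 annual observations of 10 financial ratios

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(N_YEARS, N_RATIOS)),  # summarizes the time series
    tf.keras.layers.Dense(1, activation="sigmoid"),             # default probability
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Toy stand-in data: (firms, years, ratios) and binary default labels.
X_train = np.random.rand(256, N_YEARS, N_RATIOS).astype("float32")
y_train = np.random.randint(0, 2, size=(256, 1))

model.fit(X_train, y_train, epochs=3, batch_size=32, verbose=0)
p_default = model.predict(X_train[:5])    # predicted default probabilities for five firms
```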