• Title/Summary/Keyword: Image Information (이미지정보)


A Study of the Representation in the Elementary Mathematical Problem-Solving Process (초등 수학 문제해결 과정에 사용되는 표현 방법에 대한 연구)

  • Kim, Yu-Jung;Paik, Seok-Yoon
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.9 no.2
    • /
    • pp.85-110
    • /
    • 2005
  • The purpose of this study is to examine the characteristics of the visual representations used in the problem-solving process, to identify the types of representation that students use to solve problems successfully, and to systematize visual representation methods based on the conditions the students present in the problems. To achieve this goal, the following questions were raised: (1) What characteristics do the representations that elementary school students use in the process of solving a math problem possess? (2) What types of representation did students use in order to successfully solve elementary math problems? 240 fourth graders attending J Elementary School in Seoul participated in this study. A qualitative methodology was used for data analysis; the analysis described the representation methods the students used in the problem-solving process and then presented the representations that could successfully solve five different problems. The results of the study are as follows. First, the students were not familiar with using various representation methods in the problem-solving process. Students tended to solve a problem using equations rather than drawing a diagram when they could not find a word that hinted at drawing a diagram. The method students used to restate a problem was mostly rewriting it, and they could not utilize a table that was essential to solving the problem, so various errors were found. Students did not simplify a complicated problem in order to find the pattern needed to solve it. Second, the image and strategy formed as the problem was first read greatly affected problem solving. The first image formed on reading the problem led students to draw different diagrams and choose different strategies. Most students did not pass through a trial-and-error step and kept the strategy they chose first, which shows the importance of this first image.
Third, the students who successfully solved the problems did not depend solely on equations but put the information into a decoded form. They did not write difficult equations that they could not solve, but converted them into simplified equations that they knew how to solve; on fraction problems, they drew a diagram to solve the problem without calculation. Fourth, the students who successfully solved the problems drew clear diagrams that could be understood intuitively. By representing the problem visually, unnecessary information was omitted, simple images were drawn using symbols or lines, and numeric explanations were added to clarify the relationships between pieces of information. In addition, they restricted the use of complicated motion lines and dividing lines, and proper nouns in the word problems were not changed into abbreviations or symbols, so the problems were restated clearly. Adding supplementary information was a useful resource in solving the problems.


Suggestions for Settlement Stable Employment Culture of Dental Hygienist (치과위생사의 안정적인 고용문화 정착을 위한 제언)

  • Yoon, Mi-Sook
    • Journal of dental hygiene science
    • /
    • v.17 no.6
    • /
    • pp.463-471
    • /
    • 2017
  • The purpose of this study was to examine the causes of career interruption among dental hygienists, the institutional measures required for their long service, and ways of creating a stable employment culture for them, in order to determine how to resolve the labor shortage, create stable jobs, and step up the reemployment of idle manpower. By analyzing related literature, research materials, and information such as forums on establishing appropriate jobs for female dental workers, the following suggestions are made for the establishment of a stable employment culture for dental hygienists. First, a system should be set up to prevent career interruption among dental hygienists. The work environment should be improved to prevent career breaks, and wages, working hours, and working styles should be efficiently structured to maintain the tenure of employees. Second, a plan should be devised to make use of idle manpower, and a variety of necessary programs should be developed. With respect to regular working hours, a working-time conversion system should be used, which reduces working hours to the amount one wants to work while receiving a national subsidy. Third, a path should be created so that dental hygienists who left for other occupations due to marriage, childbirth, childcare, schooling, or personal reasons can return to dental practice immediately when they want. Fourth, the government should take institutional measures and offer practical support and benefits for women, considering their social characteristics, to guarantee a balance between work and childcare.

A Study on a Quantified Structure Simulation Technique for Product Design Based on Augmented Reality (제품 디자인을 위한 증강현실 기반 정량구조 시뮬레이션 기법에 대한 연구)

  • Lee, Woo-Hun
    • Archives of design research
    • /
    • v.18 no.3 s.61
    • /
    • pp.85-94
    • /
    • 2005
  • Most product designers use a 3D CAD system as an indispensable design tool nowadays, and many new products are developed through a concurrent engineering process. However, it is very difficult for novice designers to get a sense of reality from modeling objects shown on a computer screen. This intangibility problem comes from the lack of haptic interaction and contextual information about the real space, because designers tend to do 3D modeling work only in the virtual space of a 3D CAD system. To address this problem, this research investigates the possibility of interactive quantified structure simulation for product design using augmented reality (AR), which can register a 3D CAD modeling object on the real space. We built a quantified structure simulation system based on AR and conducted a series of experiments to measure how accurately humans perceive and adjust the size of virtual objects under varied experimental conditions in the AR environment. The participants adjusted a virtual cube to a reference real cube within 1.3% relative error (5.3% relative standard deviation). The results gave strong evidence that participants can perceive the size of a virtual object very accurately. Furthermore, we found that it is easier to perceive the size of a virtual object when plenty of real reference objects are present than when there are few, and when using an LCD panel rather than an HMD. As a case study exploring the potential application of the system, we applied it to identify preference characteristics for the appearance design of a home-service robot. There was significant variance in participants' preferred characteristics of the robot's appearance, which is thought to come from the lack of typicality of the robot image; several characteristic groups were therefore segmented by cluster analysis.
On the other hand, it was an interesting finding that participants had significantly different preference characteristics between robots with arms and armless robots, and that there was a very strong correlation between the height of the robot and its arm length, as in the human body.
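Accuracy figures of the kind reported above (mean relative error and relative standard deviation of size adjustments) can be computed from raw trial data in a few lines. The sketch below uses invented trial values and a hypothetical 100 mm reference cube, not the study's data:

```python
from statistics import mean, stdev

def relative_error_stats(adjusted_sizes, reference_size):
    """Return the mean relative error (%) and relative standard deviation (%)
    of adjusted sizes against a reference size, as in a size-adjustment task."""
    rel_errors = [(s - reference_size) / reference_size * 100 for s in adjusted_sizes]
    return mean(rel_errors), stdev(rel_errors)

# Hypothetical trials: participants adjust a virtual cube toward a 100 mm real cube.
trials = [101.2, 98.7, 100.5, 102.1, 99.4]
mean_err, rel_sd = relative_error_stats(trials, 100.0)
```

A mean near zero with a small spread would indicate accurate, consistent size perception, which is how the reported 1.3%/5.3% figures should be read.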


Perception and Appraisal of Urban Park Users Using Text Mining of Google Maps Review - Cases of Seoul Forest, Boramae Park, Olympic Park - (구글맵리뷰 텍스트마이닝을 활용한 공원 이용자의 인식 및 평가 - 서울숲, 보라매공원, 올림픽공원을 대상으로 -)

  • Lee, Ju-Kyung;Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.4
    • /
    • pp.15-29
    • /
    • 2021
  • This study aims to grasp the perceptions and appraisals of urban park users through text analysis, using Google review data provided by Google Maps. Google Maps Review is an online review platform that provides information evaluating locations through social media and offers an understanding of locations from the perspective of general reviewers and regional guides registered as members of Google Maps. The study examined whether Google Maps Reviews are useful for extracting meaningful information about user perceptions and appraisals for park management plans. Three urban parks in Seoul, South Korea were chosen: Seoul Forest, Boramae Park, and Olympic Park. Review data for each of the three parks were collected via web crawling using Python. Through text analysis, the keywords and network structure characteristics for each park were analyzed. Park ratings were analyzed along with the text, and the analysis compared the reviews of residents and foreign tourists. The keywords common to the review comments for the three parks were "walking", "bicycle", "rest", and "picnic" for activities; "family", "child", and "dogs" for accompanying types; and "playground" and "walking trail" for park facilities. Looking at the characteristics of each park, Seoul Forest showed many nature-based outdoor activities, while the lack of parking space and congestion on weekends negatively impacted users. Boramae Park has the character of a city park, with various facilities supporting numerous activities, but reviewers often cited the park's complexity and negative aspects concerning dog-walking groups. At Olympic Park, large-scale complex facilities and cultural events were frequently mentioned, emphasizing its entertainment function. Google Maps Review can serve as useful data to identify users' overall experiences and general feelings about parks.
Compared to data from other social media sites, Google Maps Review data provides ratings and insight into the factors underlying user satisfaction and dissatisfaction.
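The keyword-extraction step described above can be sketched as simple term counting over review texts. The reviews and stopword list below are invented English examples; real Google Maps reviews in Korean would first need a Korean tokenizer:

```python
from collections import Counter
import re

# Hypothetical review snippets standing in for crawled Google Maps reviews.
reviews = [
    "Great walking trail and a nice picnic spot for the family",
    "Came by bicycle; the playground kept the child busy",
    "Good place for a walking break, but parking is hard on weekends",
]

# Minimal stopword list for this toy example.
STOPWORDS = {"a", "and", "the", "for", "is", "by", "but", "on", "kept"}

def top_keywords(texts, n=5):
    """Tokenize review texts, drop stopwords, and return the n most common terms."""
    tokens = []
    for text in texts:
        tokens += [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    return Counter(tokens).most_common(n)
```

Co-occurrence counts of such keywords within the same review are what feed the network-structure analysis mentioned in the abstract.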

The Effect of the Surfactant on the Migration and Distribution of Immiscible Fluids in Pore Network (계면활성제가 공극 구조 내 비혼성 유체의 거동과 분포에 미치는 영향)

  • Park, Gyuryeong;Kim, Seon-Ok;Wang, Sookyun
    • Economic and Environmental Geology
    • /
    • v.54 no.1
    • /
    • pp.105-115
    • /
    • 2021
  • Geological CO2 sequestration in underground formations such as deep saline aquifers and depleted hydrocarbon reservoirs is one of the most promising options for reducing atmospheric CO2 emissions. The process involves injecting supercritical CO2 (scCO2) into porous media saturated with pore water, initiating CO2 flooding with immiscible displacement. The migration and distribution of CO2, and consequently the displacement efficiency, are governed by the interaction of the fluids; in particular, the viscous and capillary forces are controlled by geological formation conditions and injection conditions. This study aimed to estimate the effects of a surfactant on the interfacial tension between the immiscible fluids, scCO2 and pore water, under high-pressure and high-temperature conditions by using a pair of proxy fluids under standard conditions and the pendant drop method. It also aimed to observe the migration and distribution patterns of the immiscible fluids and estimate the effects of surfactant concentration on the displacement efficiency of scCO2. Micromodel experiments were conducted using n-hexane and deionized water as proxy fluids for scCO2 and pore water. In order to quantitatively analyze the immiscible displacement caused by n-hexane injection in the pore network, images of the migration and distribution patterns of the two fluids were acquired through an imaging system. The experimental results revealed that the addition of surfactant sharply reduces the interfacial tension between hexane and deionized water at low concentrations and approaches a constant value as the concentration increases. It was also found that, by directly affecting the flow path of the flooding fluid at the pore scale in the porous medium, the surfactant had the same effect on the displacement efficiency of n-hexane at the equilibrium state.
These experimental observations provide important fundamental information on the immiscible displacement of fluids in porous media and suggest the potential to improve the displacement efficiency of scCO2 by using surfactants.
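The balance between the viscous and capillary forces discussed above is commonly summarized by the capillary number, Ca = μv/σ: lowering the interfacial tension σ with a surfactant raises Ca and shifts displacement toward the viscous-dominated regime. A sketch with hypothetical values (not the study's measurements):

```python
def capillary_number(viscosity_pa_s, velocity_m_s, ift_n_m):
    """Ca = (mu * v) / sigma: the ratio of viscous to capillary forces."""
    return viscosity_pa_s * velocity_m_s / ift_n_m

# Hypothetical values: water-like viscosity, slow pore velocity, and an
# interfacial tension before vs. after surfactant addition.
mu, v = 1.0e-3, 1.0e-5                         # Pa*s, m/s
ca_no_surf = capillary_number(mu, v, 0.050)    # sigma = 50 mN/m, no surfactant
ca_surf = capillary_number(mu, v, 0.005)       # sigma reduced tenfold by surfactant
```

A tenfold drop in interfacial tension gives a tenfold increase in Ca, which is the mechanism by which a surfactant can improve displacement efficiency.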

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.55 no.5
    • /
    • pp.551-561
    • /
    • 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and they can be applied to investigating buried cultural properties and determining their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the image feature extraction analyses is to identify the circular features from building remains and the linear features from ancient roads and fences. Feature extraction was implemented by applying the Canny edge detection and Hough transform algorithms: we applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied the connected-component labeling algorithm and object-based image analysis using Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled, although multiple labels are often assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a training-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With a Random Forest classifier, we find that the polygons on the LSMS image segmentation layer can be successfully classified into polygons of the buried relics and those of the background.
Thus, we propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results when planning excavation processes.

An Analysis of School Life Sensibility of Students at Korea National College of Agriculture and Fisheries Using Unstructured Data Mining(1) (비정형 데이터 마이닝을 활용한 한국농수산대학 재학생의 학교생활 감성 분석(1))

  • Joo, J.S.;Lee, S.Y.;Kim, J.S.;Song, C.Y.;Shin, Y.K.;Park, N.B.
    • Journal of Practical Agriculture & Fisheries Research
    • /
    • v.21 no.1
    • /
    • pp.99-114
    • /
    • 2019
  • In this study, we examined the preferences of students at Korea National College of Agriculture and Fisheries (KNCAF) regarding eight college life factors. The unstructured data were analyzed using opinion mining and text mining techniques, and the text mining results were visualized as word clouds. The college life factors comprised eight topics closely related to students: 'my present', 'my 10 years later', 'friendship', 'college festival', 'student restaurant', 'college dormitory', 'KNCAF', and 'long-term field practice'. For the texts submitted by the students, we built dictionaries of positive and negative words and evaluated preference by classifying the sentiment of each text as positive or negative. As a result, KNCAF students showed more than 85% positive sentiment on the topics of 'student restaurant' and 'friendship', but their positive feelings about 'long-term field practice' and 'college dormitory' showed the lowest satisfaction, not exceeding 60%. The remaining topics showed satisfaction of 69.3~74.2%. Regarding gender differences, the positive sentiment of male students was higher on the topics of 'my present', 'my 10 years later', 'friendship', 'college dormitory', and 'long-term field practice', while that of female students was higher for 'college festival', 'student restaurant', and 'KNCAF'. In addition, using text mining, the main positive and negative words were extracted, and word clouds were created to visualize the results.
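The dictionary-based sentiment classification described above can be sketched as follows. The word lists and responses are invented English stand-ins for the study's Korean dictionaries and student texts:

```python
# Hypothetical English stand-ins for the study's Korean sentiment dictionaries.
POSITIVE = {"good", "delicious", "fun", "comfortable", "happy"}
NEGATIVE = {"bad", "tired", "boring", "noisy", "lonely"}

def positive_ratio(texts):
    """Classify each text as positive or negative by dictionary word counts
    and return the share of positive texts (ties count as positive)."""
    def is_positive(text):
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return pos >= neg
    return sum(is_positive(t) for t in texts) / len(texts)

# Invented examples standing in for student responses on one topic.
responses = [
    "the food is delicious and good",
    "always fun and comfortable",
    "too noisy and the wait makes me tired",
    "good menu but a little noisy",
]
```

Computing this ratio per topic and per gender is what yields figures like the 85% positive sentiment reported for 'student restaurant' and 'friendship'.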

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning methods. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, this method uses only open-source software throughout the entire process. The results of this study confirm that, in the quantitative evaluation, the deviation in numerical measurements between the actual artifact and the 3D model was minimal. In addition, the quantitative quality analyses from the open-source software and the commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, which is believed to result from the higher computational speed of its improved algorithms. In the qualitative evaluation, some differences in mesh and texture quality occurred. The 3D models generated by open-source software exhibited the following problems: noise on the mesh surface, rough mesh surfaces, and difficulty in confirming the production marks and surface patterns of the relics. Nevertheless, some of the open-source software generated quality comparable to that of the commercial software in both the quantitative and qualitative evaluations.
The open-source software for editing 3D models was able not only to post-process, match, and merge the 3D models, but also to adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics. The final completed drawing was traced in a CAD program that is also open-source software. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the types of open-source software have diversified and their performance has significantly improved. Given the high accessibility of such digital technology, the acquisition of 3D model data in archaeology will serve as basic data for the preservation and active study of cultural heritage.

Analysis of Waterbody Changes in Small and Medium-Sized Reservoirs Using Optical Satellite Imagery Based on Google Earth Engine (Google Earth Engine 기반 광학 위성영상을 이용한 중소규모 저수지 수체 변화 분석)

  • Younghyun Cho;Joonwoo Noh
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.363-375
    • /
    • 2024
  • Waterbody change detection using satellite images has recently been carried out in various regions of South Korea, utilizing multiple types of sensors. This study utilizes optical satellite images from Landsat and Sentinel-2, based on Google Earth Engine (GEE), to analyze long-term changes in the surface water area of four monitored small and medium-sized water supply dams and agricultural reservoirs in South Korea. The analysis covers 19 years for the water supply dams and 27 years for the agricultural reservoirs. By employing image analysis methods such as the normalized difference water index (NDWI), Canny edge detection, and Otsu's thresholding for waterbody detection, the study reliably extracted water surface areas, allowing clear annual changes in waterbodies to be observed. When the time series of surface water areas derived from the satellite images was compared to the measured water levels, a high correlation coefficient above 0.8 was found for the water supply dams. The agricultural reservoirs, however, showed a lower correlation, between 0.5 and 0.7, attributed to the characteristics of agricultural reservoir management and the inadequacy of the comparative data rather than to the satellite image analysis itself. The analysis also revealed several inconsistencies in the results for smaller reservoirs, indicating the need for further studies on these reservoirs. The changes in surface water area calculated using GEE provide valuable spatial information on waterbody changes across the entire watershed, which cannot be identified by measuring water levels alone, highlighting the usefulness of efficiently processing extensive long-term satellite imagery. Based on these findings, future research could apply this method to a larger number of dam reservoirs of varying sizes, shapes, and monitoring statuses, potentially yielding additional insights into different reservoir groups.
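The NDWI-plus-Otsu step of the workflow above can be illustrated without GEE. The sketch below implements the NDWI formula and a minimal Otsu threshold in NumPy; in the actual workflow these operations run on GEE image collections rather than local arrays:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR).
    Water pixels tend toward positive values, land toward negative."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir + 1e-12)

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance
    of the histogram, separating water from non-water NDWI values."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # cumulative class-0 probability
    m = np.cumsum(p * centers)        # cumulative mean
    mt = m[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]
```

Counting the pixels whose NDWI exceeds the Otsu threshold, multiplied by the pixel area, gives the surface water area time series that is compared against gauged water levels.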

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible paths for a single move exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains, and deep learning in particular drew attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to achieve good performance with existing machine learning techniques. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in this paper are the telemarketing response data of a bank in Portugal. They contain input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account or not.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network model. However, since not all network design alternatives can be tested given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in each hidden layer, the number of output channels (filters), and the conditions for applying the dropout technique. The F1 score was used to evaluate model performance, to show how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values from a specific value and recognizes their features, but in business data the distance between fields usually does not matter because each field is independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that it learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first layer in order to reduce the influence of each field's position. For the dropout technique, we set neurons to drop out with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique.
In this study, we obtained several findings as the experiment proceeded. First, models using dropout make slightly more conservative predictions than those without it, and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well on a binary classification problem to which it has rarely been applied, as well as in the fields where its effectiveness has been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve binary classification problems in business.
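The F1 score used as the evaluation metric above is the harmonic mean of precision and recall for the class of interest, which is why it is preferred over accuracy on imbalanced data such as telemarketing responses. A minimal implementation, with invented labels for illustration:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class.
    Unlike accuracy, it rewards correctly finding the rare class of interest."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented labels: a model that always predicts 0 would score 70% accuracy
# here, yet its F1 for the positive class is 0.
y_true = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Here `f1_score(y_true, y_pred)` balances the one false positive against the one false negative, which is the behavior the study relies on when ranking the CNN, MLP, and LSTM variants.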