• Title/Summary/Keyword: Processing step

1,689 search results

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has recently attracted considerable attention. The convolutional neural network (CNN) is the deep learning technique behind the winning entries of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and AlphaGo. A CNN divides the input image into small sections, recognizes partial features, and combines them to recognize the image as a whole. Deep learning technologies are expected to bring many changes to our lives, but so far their applications have been largely limited to image recognition and natural language processing. The use of deep learning for business problems is still at an early research stage. If its performance is proven, it can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, and bankruptcy prediction. It is therefore a meaningful experiment to assess the feasibility of solving business problems with deep learning, based on the case of online shopping companies, which hold big data, can identify customer behavior relatively easily, and offer high utilization value. In online shopping companies especially, the competitive environment is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important. In this study, we propose a 'CNN model of heterogeneous information integration' using CNN as a way to improve the prediction of customer behavior in online shopping enterprises.
The proposed model learns with a convolutional neural network on a multi-layer perceptron structure by combining structured and unstructured information. To optimize its performance, we evaluate three architectural components, namely heterogeneous information integration, unstructured-information vector conversion, and multi-layer perceptron design, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC (voice of customer) data from a specific online shopping company in Korea. The data cover 47,947 customers who registered at least one VOC in January 2011 (one month). We use these customers' profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month. The experiment proceeds in two stages. In the first stage, we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters. In the second stage, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to business problems as well as to image recognition and natural language processing.
The experiments confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data. It is also significant that this empirical study, based on the actual data of an e-commerce company, can extract very meaningful information for customer behavior prediction from VOC data written in free text directly by customers. Finally, through various experiments, the proposed model provides useful guidance for future research on parameter selection and performance.
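A minimal sketch of the heterogeneous-information pipeline this abstract describes, assuming a toy bag-of-words vectorizer and a single logistic unit as a stand-in for the paper's CNN; the vocabulary, field names, and weights below are illustrative, not the paper's actual model or data:

```python
import math

# Hypothetical sketch: fuse structured customer fields with a bag-of-words
# vector of unstructured VOC text into one input vector, then score it with
# a logistic unit (stand-in for the paper's CNN).

VOCAB = ["refund", "delivery", "late", "thanks", "broken"]

def text_to_vector(voc_text):
    """Unstructured-information vector conversion: simple bag-of-words."""
    tokens = voc_text.lower().split()
    return [float(tokens.count(w)) for w in VOCAB]

def integrate(structured, voc_text):
    """Heterogeneous information integration: concatenate both views."""
    return structured + text_to_vector(voc_text)

def logistic_score(x, weights, bias):
    z = sum(wi * xi for wi, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy example: two structured fields (purchase count, refund count).
features = integrate([3.0, 1.0], "refund was late late")
weights = [0.2, -0.5, -0.8, 0.0, -0.6, 0.1, -0.3]  # one weight per feature
p_repurchase = logistic_score(features, weights, bias=0.1)
label = "re-purchaser" if p_repurchase >= 0.5 else "not re-purchaser"
```

In the paper, the fused vector would instead feed a trained CNN over a multi-layer perceptron structure, and one such binary classifier would exist per target variable (re-purchaser, churner, and so on).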

Continuous Process for the Etching, Rinsing and Drying of MEMS Using Supercritical Carbon Dioxide (초임계 이산화탄소를 이용한 미세전자기계시스템의 식각, 세정, 건조 연속 공정)

  • Min, Seon Ki;Han, Gap Su;You, Seong-sik
    • Korean Chemical Engineering Research
    • /
    • v.53 no.5
    • /
    • pp.557-564
    • /
    • 2015
  • The previous etching, rinsing, and drying process for MEMS (microelectromechanical system) wafers using SC-$CO_2$ (supercritical $CO_2$) consists of two steps. First, MEMS wafers are etched with an organic solvent in etching equipment separate from the high-pressure dryer, and then moved to the high-pressure dryer to be rinsed and dried with SC-$CO_2$. We found that this two-step process could etch and dry MEMS wafers, but its reproducibility could not be confirmed over several experiments. We attribute this to stiction of the structures caused by evaporation of the etching solvent while the MEMS wafer is moved to the high-pressure dryer after being etched outside it. To alleviate the stiction problem, we designed a continuous process that etches, rinses, and dries MEMS wafers with SC-$CO_2$ without moving them. We also investigated how the state of the carbon dioxide (gas, liquid, supercritical fluid) relates to the stiction problem. Using gaseous carbon dioxide (3 MPa, $25^{\circ}C$) as the etching solvent, we obtained well-treated MEMS wafers without stiction and confirmed the reproducibility of the experimental results; the quantity of rinsing solvent used was also reduced compared with the previous technology. Using liquid carbon dioxide (3 MPa, $5^{\circ}C$), we could not obtain well-treated MEMS wafers without stiction, owing to phase separation between the liquid carbon dioxide and the etching co-solvent (acetone). Using SC-$CO_2$ (7.5 MPa, $40^{\circ}C$), the results were as good as those with gaseous $CO_2$, and the processing time was shorter.

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.39-70
    • /
    • 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to study the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and accurately figure out customer preferences. In existing data-based survey methods, the sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the words frequently used in the collected text documents. To study the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. The existing approach also automatically finds important sentences (or phrases) with positive or negative meaning toward the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want more useful information on other aspects such as 'car quality', 'car performance', and 'car service'. Such information will enable customers to make good choices when they attempt to purchase brand-new vehicles.
In addition, automobile makers will be able to figure out the preference and the positive/negative points of new models on the market, and in the near future the weak points of those models can be improved based on the sentiment analysis. To this end, the existing approach computes the sentiment score of each sentence (or phrase) and selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings that limit its use in real applications: (1) The main aspects of a product (e.g., the design, quality, performance, and service of a Hyundai Sonata) are not considered. With aspect-free sentiment analysis, the summary note reported to customers and car makers contains only the positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores over the entire corpus. This is not enough; the main aspects of the target product need to be considered in the sentiment analysis. (2) Since the same word has different meanings in different domains, a sentiment lexicon appropriate to each domain must be constructed, and an efficient way to construct it is needed because lexicon construction is labor intensive and time consuming. To address these problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly.
Furthermore, by reinforcing topic semantics, we can improve the accuracy of product reputation mining well beyond that of the existing approach. In the experiments, we collected large sets of review documents on domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; produced top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.
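Steps (3) and (4) of the proposed digest can be sketched as follows, assuming the topic-model and lexicon stages have already assigned each sentence an aspect and a sentiment score; the aspect names, sentences, and scores below are made up for illustration:

```python
# Illustrative sketch: per-aspect positive/negative ratios plus a top-k
# digest of the highest-scoring sentences, as described in the abstract.

def reputation_digest(scored, k=1):
    """scored: list of (aspect, sentence, score); score > 0 means positive."""
    digest = {}
    for aspect in {a for a, _, _ in scored}:
        items = [(s, sc) for a, s, sc in scored if a == aspect]
        pos = [it for it in items if it[1] > 0]
        neg = [it for it in items if it[1] <= 0]
        digest[aspect] = {
            "pos_ratio": len(pos) / len(items),
            "top_pos": sorted(pos, key=lambda it: -it[1])[:k],
            "top_neg": sorted(neg, key=lambda it: it[1])[:k],
        }
    return digest

# Toy review sentences with hypothetical aspect labels and sentiment scores.
reviews = [
    ("design", "sleek body lines", 0.9),
    ("design", "dated dashboard", -0.4),
    ("quality", "rattling noise at 80 km/h", -0.7),
    ("quality", "solid build overall", 0.6),
    ("quality", "paint chips easily", -0.5),
]
d = reputation_digest(reviews)
```

The paper's contribution lies upstream of this sketch: deriving the aspects and the lexicon from hidden topics rather than from hand labeling.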

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.17-23
    • /
    • 2020
  • Biometric information, which measures human characteristics, has attracted great attention as a highly reliable security technology since there is no fear of theft or loss. Among biometric traits, fingerprints are mainly used in fields such as identity verification and identification. When a fingerprint image is difficult to authenticate because of a wound, wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to it. By implementing artificial intelligence software that distinguishes fingerprint images with cuts and wrinkles, it becomes easy to check whether cuts or wrinkles are present, and the fingerprint image can be improved by selecting an appropriate algorithm. In this study, we built a database of 17,080 fingerprints in total by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open data set, and the fingerprints of 98 Korean students. Criteria were established to determine whether the images in the database contain injuries or wrinkles, and the data were validated by experts. The training and test data sets consisted of the Cambodian and Sokoto data at a ratio of 8:2, and the data of the 98 Korean students were used as the validation set. Using this data set, we implemented five CNN-based architectures, namely a classic CNN, AlexNet, VGG-16, ResNet50, and YOLO v3, and investigated which model performed best. Among the five architectures, ResNet50 showed the best performance at 81.51%.
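The data-set layout described above can be sketched as a reproducible 8:2 split; the file names, counts, and seed below are placeholders, not the paper's actual data:

```python
import random

# Hypothetical sketch: pool the Cambodian and Sokoto images, split 8:2 into
# training and test sets, and keep the Korean images as a separate
# validation set, as the abstract describes.

def split_dataset(pool, ratio=0.8, seed=42):
    items = list(pool)
    random.Random(seed).shuffle(items)  # reproducible shuffle
    cut = int(len(items) * ratio)
    return items[:cut], items[cut:]

# Placeholder file names; the real database holds 17,080 fingerprints.
pool = [f"cambodia_{i:04d}.png" for i in range(800)] \
     + [f"sokoto_{i:03d}.png" for i in range(200)]
validation = [f"korea_{i:02d}.png" for i in range(98)]

train, test = split_dataset(pool)
```

Fixing the shuffle seed keeps the split identical across runs, which matters when comparing five architectures on the same partition.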

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.1-23
    • /
    • 2014
  • Market forecasting aims to estimate the sales volume of a product or service sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting helps determine the timing of new product introduction and product design, and supports production plans and marketing strategies that enable more efficient decision-making. Accurate market forecasting also enables governments to organize the national budget efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods from past time series data; categorize products showing similar growth patterns; understand the markets in the industry; and forecast the future outlook of such products. It suggests a useful and meaningful process (or methodology) for identifying market growth patterns with a quantitative growth model and a data mining algorithm. The methodology is as follows. In the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as sales volume and domestic consumption of a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other relevant government organizations. For data that cannot be analyzed because of a lack of past records or altered code names, pre-processing is performed. In the second stage, an optimal model for market forecasting is selected; the choice can vary with the characteristics of each categorized industry. As this study focuses on the ICT industry, where new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected. A hybrid model that combines different models can also be considered.
The hybrid model considered in this study estimates the size of the market potential through the Logistic and Gompertz models, and the resulting figures are then used in the Bass model. The third stage is to evaluate which model explains the data most accurately. To do this, the parameters are estimated from the collected past time series data, the models' predicted values are generated, and the root-mean-squared error (RMSE) is calculated. The model with the lowest average RMSE over every product type is considered the best model. In the fourth stage, a market growth pattern map is constructed with the self-organizing map algorithm, based on the parameter values estimated by the best model. The self-organizing map is trained with the market pattern parameters of all products or services as input data, and the products or services are organized onto an $N{\times}N$ map. The number of clusters increases from 2 to M, depending on the characteristics of the nodes on the map. The clusters are divided into zones, and the clustering that provides the most meaningful explanation is selected. Based on the final selection of clusters, the boundaries between the nodes are set and, ultimately, the market growth pattern map is completed. The last step is to determine the final characteristics of the clusters as well as their market growth curves. The average of the market growth pattern parameters within a cluster is taken as its representative figure; a growth curve is drawn for each cluster using this figure, and its characteristics are analyzed. Taking into consideration the product types in each cluster, their characteristics can also be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
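The third stage above, comparing candidate growth curves by RMSE, can be sketched with the standard textbook forms of the three models; the parameters and the toy series below are illustrative, whereas the paper estimates parameters from real time series before this step:

```python
import math

# Sketch of model selection by RMSE over the Logistic, Gompertz, and Bass
# growth curves (standard cumulative-adoption forms).

def logistic(t, m, a, b):
    """Logistic growth approaching market potential m."""
    return m / (1.0 + a * math.exp(-b * t))

def gompertz(t, m, a, b):
    return m * math.exp(-a * math.exp(-b * t))

def bass(t, m, p, q):
    """Bass diffusion: innovation coefficient p, imitation coefficient q."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def rmse(model, params, series):
    errs = [(model(t, *params) - y) ** 2 for t, y in series]
    return math.sqrt(sum(errs) / len(errs))

# Toy observed series (t, cumulative sales) generated from a logistic curve,
# so the selection procedure should pick "logistic".
series = [(t, logistic(t, 100.0, 20.0, 0.8)) for t in range(1, 11)]

candidates = {
    "logistic": (logistic, (100.0, 20.0, 0.8)),
    "gompertz": (gompertz, (100.0, 3.0, 0.5)),
    "bass": (bass, (100.0, 0.03, 0.4)),
}
best = min(candidates, key=lambda k: rmse(*candidates[k], series))
```

The fitted parameters of the winning model per product then become the input vectors for the self-organizing map in the fourth stage.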

Comparison of Sea Level Data from TOPEX/POSEIDON Altimeter and in-situ Tide Gauges in the East Asian Marginal Seas (동아시아 주변해역에서의 TOPEX/POSEIDON 고도 자료와 현장 해수면 자료의 비교)

  • Youn, Yong-Hoon;Kim, Ki-Hyun;Park, Young-Hyang;Oh, Im-Sang
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.5 no.4
    • /
    • pp.267-275
    • /
    • 2000
  • In an effort to assess the reliability of the satellite altimeter system, we conducted a comparative analysis of sea level data collected by the TOPEX/POSEIDON (T/P) altimeter and by the 10 tide gauge (TG) stations along the satellite's ground track. The analysis used data sets collected from the marginal seas surrounding the Korean Peninsula over T/P cycles 2 to 230, corresponding to October 1992 to December 1998. Because of strong tidal activity in the study area, treatment of tidal errors is a critical step in data processing; hence, in computing dynamic heights from the T/P data, we adopted the procedures of Park and Gamberoni (1995) to reduce the associated errors. In the treated T/P data, the alias periods of the M$_2$, S$_2$, and K$_1$ constituents were found at 62.1, 58.7, and 173 days. The compatibility of the T/P and TG data sets was examined at various filtering periods. The results indicate that the low-frequency signal of the T/P data can be interpreted more safely with longer filtering periods (up to the maximum selected value of 200 days). When the RMS errors for the 200-day low-pass filter period were compared across all 10 tidal stations, the values spanned the range of 2.8 to 6.7 cm. Correlation analysis at this filtering period also showed strong agreement between the T/P and TG data sets over all stations investigated (e.g., P values consistently less than 0.0001). According to our analysis, we conclude that surface sea level can be analyzed safely from satellite altimeter data with reasonably long filtering periods such as 200 days.
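The filtering comparison described above can be sketched with a simple moving-average low-pass filter and an RMS difference; the synthetic daily series below stand in for the T/P and tide-gauge records, and the paper's actual filter design may differ:

```python
import math

# Sketch: low-pass filter two sea-level series, then compare their RMS
# difference before and after filtering. A 200-day filter should suppress
# the short-period "tidal" disagreement while keeping the annual signal.

def low_pass(series, window):
    """Centered moving average; the window shrinks near the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        seg = series[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def rms_diff(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Synthetic daily records: a shared annual signal plus opposite-sign
# 14-day noise, so the series disagree only at short periods.
days = range(600)
signal = [10.0 * math.sin(2 * math.pi * t / 365.0) for t in days]
tp = [s + 3.0 * math.sin(2 * math.pi * t / 14.0) for s, t in zip(signal, days)]
tg = [s - 3.0 * math.sin(2 * math.pi * t / 14.0) for s, t in zip(signal, days)]

raw_rms = rms_diff(tp, tg)
filt_rms = rms_diff(low_pass(tp, 200), low_pass(tg, 200))
```

By linearity, filtering the two series and differencing is equivalent to filtering their difference, so the residual RMS directly measures how much short-period disagreement the filter removes.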

A study of conception of pyo(標).bon(本).joong(中) in the part of woongihak(運氣學) in negeong(內徑) (내경(內徑) 운기편(運氣篇)의 표(標).본(本).중(中) 개념에 대한 연구(硏究))

  • Baik, You Sang;Park, Chan-Guk
    • Journal of Korean Medical classics
    • /
    • v.11 no.2
    • /
    • pp.114-134
    • /
    • 1998
  • The conception of pyo(標), bon(本), and joong(中) in the woongihak(運氣學) part of the negeong(內徑) is one of the important notions that decides the relation between the six gi(六氣) and samyum-samyang(三陰三陽), or among samyum and samyang themselves; it expresses the relation of pyo-ri(表裏). From ancient times this conception has been used to explain the theory of the meridians(經絡) and organs(五臟六腑), and in another important field of oriental medicine, the Sanghannon(傷寒論), it became the basis for explaining pathological principles in the system of the six kyung(六徑). The subject of this study is first limited to the relevant part of the $\ll$Somun(素問)$\gg$, in order to find the accurate and original meanings of pyo(標), bon(本), and joong(中); the meanings are then studied by expanding them with the basic conceptions of woongihak(運氣學) and the astronomy included in the negeong(內徑). The results of this study are summarized as follows. 1. The contents of the 68th chapter of the negeong(內徑) concerning pyo(標) and joong(中) correspond to chogi(初氣) and joonggi(中氣) of the same chapter, after consideration of astronomical knowledge, and they are active during a period of about 30 days, half of one step(一步) of kaekgi(客氣). 2. Bon(本) is the aspect of the six gi(六氣) revealed from the internal principle of things, that is to say ohhaeng(五行), and comes mainly under the kaekgi(客氣) of woongihak(運氣學). Pyo(標), with the meaning of 'sign', is the specific property of the six gi(六氣) revealed to our sight, which we can perceive through the changes of nature. Joong(中) is the other property hidden inside the six gi(六氣), a portion of the original nature(本性) like the bon(本). 3. The relation of pyo(標) and bon(本) is like that between the principle hidden inside all things(理) and its expression into the real world(氣), and is also similar to that of yumyang(陰陽) and ohhaeng(五行).
Therefore bon(本), though it means one of the six gi(六氣), has the property of ohhaeng(五行), and pyo(標) is what is revealed, with the appearance of samyum-samyang(三陰三陽). 4. Pyo(標) and joong(中) are also the two sides of yum(陰) and yang(陽) revealed in the changes of yumyang-ohhaeng(陰陽五行) in nature; for example, if the one is yang(陽), the other is yum(陰). In the process by which the change of all things is revealed, the property of pyo(標) first appears strongly and then that of joong(中) appears comparatively weakly. But, in spite of the inhibitive relation of yumyang(陰陽), pyo(標) and joong(中) promote each other. 5. In the cases of soyang(少陽) and taeyum(太陰), change happens according to the bon(本), the property of ohhaeng(五行), because the effect of moisture(濕) and fire(火), which makes hyung(形) and gi(氣), is very strong in the universe. In the cases of taeyang(太陽) and soyum(少陰), change happens according to both the bon(本) and the pyo(標), because they have the polarity of water and fire(火水) and at the same time are not separated from each other. In the cases of yangmeong(陽明) and gualyum(厥陰), the change appears only according to the joong(中), and not strongly, because the phase of yangmeong(陽明) and gualyum(厥陰) is a lull proceeding to the next phase.

Regulation of $LH{\beta}$ subunit mRNA by Ovarian Steroid in Ovariectomized Rats (난소제거된 흰쥐에서 난소호르몬에 의한 $LH{\beta}$ subunit의 유전자 발현조절)

  • Kim, Chang-Mee;Park, Deok-Bae;Ryu, Kyung-Za
    • The Korean Journal of Pharmacology
    • /
    • v.29 no.2
    • /
    • pp.225-235
    • /
    • 1993
  • Pituitary LH release is known to be regulated by hypothalamic gonadotropin-releasing hormone (GnRH) and by the gonadal steroid hormones; in addition, neurotransmitters and neuropeptides are actively involved in the control of LH secretion. Alterations in LH release might reflect changes in the biosynthesis and/or post-translational processing of LH. However, little is known about the mechanism by which biosynthesis of the LH subunits is regulated, especially at the level of transcription. To investigate whether ovarian steroid hormones regulate LH subunit gene expression, the steady-state ${\alpha}$ and $LH{\beta}$ mRNA levels were determined in the anterior pituitaries of ovariectomized rats. Serum LH concentrations and pituitary LH concentrations increased markedly with time after ovariectomy. The ${\alpha}$ and $LH{\beta}$ subunit mRNA levels after ovariectomy increased in parallel with serum LH concentrations and pituitary LH contents, the rise in $LH{\beta}$ subunit mRNA being more prominent than the rise in ${\alpha}$ subunit mRNA. The ${\alpha}$ and $LH{\beta}$ subunit mRNA levels in ovariectomized rats were negatively regulated by continuous treatment with ovarian steroid hormones for $1{\sim}4$ days, and $LH{\beta}$ subunit mRNA appeared more sensitive to the negative feedback of estradiol than to that of progesterone. Treatment with the estrogen antagonist LY117018 or the progesterone antagonist RU486 significantly restored the LH subunit mRNA levels, as well as the LH release, that had been suppressed by estradiol or progesterone treatment. These results suggest that ovarian steroids negatively regulate LH synthesis at the pretranslational level by modulating the steady-state levels of the ${\alpha}$ and $LH{\beta}$ subunit mRNAs, with the $LH{\beta}$ subunit mRNA more sensitive to the negative feedback action of estradiol than to that of progesterone.

Quantitative Indices of Small Heart According to Reconstruction Method of Myocardial Perfusion SPECT Using the 201Tl (201Tl을 이용한 심근관류 SPECT에서 재구성 방법에 따른 작은 용적 심장의 정량 지표 변화)

  • Kim, Sung Hwan;Ryu, Jae Kwang;Yoon, Soon Sang;Kim, Eun Hye
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.1
    • /
    • pp.18-24
    • /
    • 2013
  • Purpose: Myocardial perfusion SPECT using $^{201}Tl$ is an important method for assessing left ventricular viability and for quantitative evaluation of cardiac function, and various reconstruction methods are now used to improve image quality. In small hearts, however, care is needed because the partial volume effect (PVE) may introduce errors in the quantitative indices at the reconstruction step. In this study, we compared the quantitative indices of the left ventricle obtained with different reconstruction methods of myocardial perfusion SPECT against echocardiography and assessed the degree of difference between them. Materials and Methods: Based on an echocardiographic ESV of 30 mL, we divided 278 patients (98 male, 188 female; mean age $65.5{\pm}11.1$) who visited Asan Medical Center from February to September 2012 into two categories: below the criterion (small heart) and normal or large heart. Each case was reconstructed with both FBP and OSEM; EDV, ESV, and LVEF were calculated, and statistical analysis was performed with repeated-measures ANOVA against the indices measured by echocardiography. Results: In both men and women, there was no significant difference in EDV between FBP and OSEM (p=0.053, p=0.098), but there were significant differences from echocardiography (p<0.001). For ESV, significant differences among FBP, OSEM, and echocardiography occurred especially in women with small hearts. For LVEF, there was no difference among FBP, OSEM, and echocardiography in men and women with normal-sized hearts (p=0.375, p=0.969), but women with small hearts showed significant differences (p<0.001).
Conclusion: Among the reconstruction methods of nuclear cardiology imaging, no difference in the quantitative indices of the left ventricle occurred in patients with normal-sized hearts, but for small hearts with ESV under 30 mL, especially in women, there were significant differences among FBP, OSEM, and echocardiography. We found that the overestimation of LVEF caused by the PVE can on average be reduced by applying OSEM to all types of gamma camera used in the analysis.

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas; moreover, automating this complicated process is a challenging task. In this paper, we propose a new concept of true orthoimage generation using DL (deep learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (generative adversarial network) is a DL model used for various tasks in image processing and computer vision: the generator tries to produce results similar to real images, while the discriminator judges images as fake or real, until the results are satisfactory. This mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on the FID (Fréchet Inception Distance) measure. However, when the quality of the input data is close to the target image, better results could be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement is necessary.
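The FID measure mentioned above compares the feature statistics of real and generated images as FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C1 C2)^(1/2)). A toy version simplified to diagonal covariances, so it runs without linear-algebra libraries, can be sketched as follows; real FID uses full covariances of Inception-network features, and the feature vectors below are made up:

```python
import math

# Illustrative diagonal-covariance simplification of the Fréchet Inception
# Distance: the cross term Tr(2*(C1*C2)^(1/2)) reduces to an elementwise
# geometric mean of the per-dimension variances.

def mean_and_var(features):
    """Per-dimension mean and variance of a list of feature vectors."""
    n, d = len(features), len(features[0])
    mu = [sum(f[j] for f in features) / n for j in range(d)]
    var = [sum((f[j] - mu[j]) ** 2 for f in features) / n for j in range(d)]
    return mu, var

def fid_diagonal(feats_a, feats_b):
    mu1, v1 = mean_and_var(feats_a)
    mu2, v2 = mean_and_var(feats_b)
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(a + b - 2.0 * math.sqrt(a * b) for a, b in zip(v1, v2))
    return mean_term + cov_term

# Toy "feature" sets standing in for Inception activations.
real = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
fake = [[0.5, 1.5], [2.5, 3.5], [4.5, 5.5]]

score = fid_diagonal(real, fake)
```

Lower scores mean the generated distribution is closer to the real one; identical feature sets score (approximately) zero.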