• Title/Summary/Keyword: Complex structure


Degree of Self-Understanding Through "Self-Guided Interpretation" in Yeoncheon, Hantan River UNESCO Geopark: Focusing on Readability and Curriculum Relevance (한탄강 세계지질공원 연천 지역의 자기-안내식 해설 매체를 통한 스스로 이해 가능 정도: 이독성과 교육과정 관련성을 중심으로)

  • Min Ji Kim;Chan-Jong Kim;Eun-Jeong Yu
    • Journal of the Korean earth science society / v.44 no.6 / pp.655-674 / 2023
  • This study examined whether the self-guided interpretive media in the Yeoncheon area of the Hantangang River UNESCO Geopark are intelligible to visitors. Accordingly, two on-site investigations were conducted in the geopark in September and November 2022. The Yeoncheon area, known for its diverse geological features and formation ages, was selected for analysis. We analyzed the readability levels, graphic characteristics, and science-curriculum alignment of the interpretive media specific to geological sites, among a total of 36 self-guided interpretive media in the Yeoncheon area. Results indicated that information boards, primarily offering guidance on geological attractions, were the most prevalent type of interpretive medium in the Yeoncheon area. The quantity of text in the explanatory media surpassed that of a 12th-grade science textbook. The average vocabulary grade was similar to that of 11th- and 12th-grade science textbooks, with somewhat reduced readability due to a high occurrence of complex sentences. The predominant graphic type was the illustrative photograph, which aids comprehension of geological formation processes through multi-structure graphics. Regarding the scientific terms used in the interpretive media, 86.3% fell within the "Solid Earth" section of the 2015 revised curriculum, with the majority at the 4th-grade level. Terms from the 11th-grade optional curriculum comprised the second-largest portion, and 13.7% of all science terms were from outside the curriculum. Notably, the complexity of the scientific terminology varied across geological attractions. Specifically, the terminology level on the homepage tended to be higher than that on the information boards. Through these findings, specific factors impeding visitor comprehension of geological attractions in the Yeoncheon area were identified for each interpretive medium. We suggest further research to improve self-guided interpretive media, fostering geological-resource education for general visitors and supporting advances in geology education.
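The readability analysis described above rests on sentence-level measures such as vocabulary grade and the share of complex sentences. A minimal sketch of such a profile, using a hypothetical English interpretive text and crude proxies (words per sentence, comma-bearing clauses) rather than the study's Korean readability formula:

```python
import re

def readability_profile(text):
    """Crude readability proxy: average words per sentence and the
    share of sentences containing a comma-delimited clause."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    avg_len = sum(words_per_sentence) / len(sentences)
    complex_share = sum(1 for s in sentences if "," in s) / len(sentences)
    return avg_len, complex_share

# Invented sample text standing in for an information-board panel.
avg_len, complex_share = readability_profile(
    "Basalt columns form as lava cools. "
    "Because cooling contracts the rock, hexagonal joints develop, "
    "which later erosion exposes along the river cliffs."
)
```

Longer average sentences and a higher complex-sentence share both push such a profile toward the reduced readability the study reports for the geopark media.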

Proposal for the Hourglass-based Public Adoption-Linked National R&D Project Performance Evaluation Framework (Hourglass 기반 공공도입연계형 국가연구개발사업 성과평가 프레임워크 제안: 빅데이터 기반 인공지능 도시계획 기술개발 사업 사례를 바탕으로)

  • SeungHa Lee;Daehwan Kim;Kwang Sik Jeong;Keon Chul Park
    • Journal of Internet Computing and Services / v.24 no.6 / pp.31-39 / 2023
  • The purpose of this study is to propose a scientific performance-evaluation framework for measuring and managing the overall outcomes of complex projects linked to public demand-based commercialization, such as information-system projects and public procurement, within integrated national R&D projects. For integrated national R&D projects in which multiple research institutes contribute to a single final product, and in which the results are demonstrated and commercialized on a demand basis, the existing evaluation system, which assesses performance based on the short-term outputs of the detailed tasks comprising the R&D project, has limitations in evaluating the mid- and long-term effects and practicality of the integrated research products. Moreover, as the paradigm of national R&D projects shifts toward mission-oriented approaches that emphasize efficiency, performance evaluation needs to focus on the effectiveness and practicality of the results. In this study, we propose a performance-evaluation framework from a structural perspective, using the Hourglass model, to evaluate the completeness of each national R&D project in practical terms, such as effectiveness, beyond simple short-term output. In particular, it presents an integrated performance-evaluation framework that links top-down and bottom-up approaches along the Tool-System-Service-Effect chain according to the structure of R&D projects. By applying the proposed detailed evaluation indicators and performance-evaluation frame to actual national R&D projects, the validity of the indicators and the effectiveness of the proposed frame were verified. These results are expected to provide academic, policy, and industrial implications for the performance-evaluation systems of national R&D projects that emphasize efficiency.

Analysis of the Case of Separation of Mixtures Presented in the 2015 Revised Elementary School Science 4th Grade Authorized Textbook and Comparison of the Concept of Separation of Mixtures between Teachers and Students (2015 개정 초등학교 과학과 4학년 검정 교과서에 제시된 혼합물의 분리 사례 분석 및 교사와 학생의 혼합물 개념 비교)

  • Chae, Heein;Noh, Sukgoo
    • Journal of Korean Elementary Science Education / v.43 no.1 / pp.122-135 / 2024
  • The purpose of this study was to analyze the examples presented in the "Separation of Mixtures" section of the 2015 revised authorized science textbook introduced in elementary schools in 2022 and to examine how teachers and students understand the concept. To that end, 96 keywords were extracted through three cleansing processes covering the elements of the mixtures presented in the textbook. To analyze teachers' perceptions, 32 elementary school teachers in Gyeonggi-do responded to a survey, and a survey of 92 fourth graders who had learned mixture separation with an authorized textbook in 2022 was used for the analysis. As a result, solids showed the highest ratio, with 54 of the 96 separations (56.3%), and the largest number of cases were presented according to the characteristics of the students' developmental stage, followed by living things, liquids, other objects and substances, and gases. Through network analysis, the structure of and interrelationships between the 96 extracted keywords were systematized, and the connections between the keywords that form parts of the mixtures were analyzed. The teachers partially recognized the separation of the complex mixtures presented in the textbook, but most of the students did not. The analysis of teachers' and students' perceptions of the seven separation categories presented in the survey showed that responses were not based on a clear conceptual understanding of mixture separation; rather, respondents tended to answer differently according to the characteristics of each individual category. We therefore conclude that clearer examples of mixture separation are needed so that students can better understand this somewhat abstract concept.
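The keyword network analysis mentioned above can be sketched as a simple co-occurrence count: keywords appearing in the same textbook example are linked, and well-connected keywords emerge as central. The keyword lists below are invented stand-ins, not the study's 96 extracted keywords:

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists, one per textbook separation example.
examples = [
    ["sand", "water", "sieve"],
    ["sand", "gravel", "sieve"],
    ["iron", "sand", "magnet"],
    ["salt", "water", "evaporation"],
]

# Edge weight = number of examples in which two keywords co-occur.
edges = Counter()
for kws in examples:
    for a, b in combinations(sorted(set(kws)), 2):
        edges[(a, b)] += 1

# Degree = number of distinct neighbours; the max-degree node is central.
degree = Counter()
for (a, b), _ in edges.items():
    degree[a] += 1
    degree[b] += 1

central = degree.most_common(1)[0][0]
```

In this toy data, "sand" links to the most other keywords and so would sit at the centre of the network, analogous to how the study's network analysis exposes the structure among its extracted keywords.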

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.151-164 / 2024
  • In water treatment plants supplying potable water, the management of chlorine concentration in water treatment processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of sedimentation basins in water treatment processes. The AI-based model, which learns from past water quality observation data to predict future water quality, offers a simpler and more efficient approach compared to complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at Plant, using multiple regression models and AI-based models like Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model included the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, etc. as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from observable data at the water treatment plant, which are influential on the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that, for Plant, the model based on Random Forest had the lowest error compared to multiple regression models, neural network models, model trees, and other Random Forest models. 
The optimal predicted residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in previous treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
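A hedged sketch of the Random Forest setup described above, using scikit-learn and synthetic stand-in data; the variable ranges and the decay relationship are illustrative assumptions, not the plant's observations:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for four of the observed independent variables:
# upstream residual chlorine, turbidity, pH, water temperature.
n = 500
X = rng.uniform([0.5, 1.0, 6.5, 5.0], [1.5, 10.0, 8.0, 25.0], size=(n, 4))
# Assumed relationship: downstream residual chlorine starts from the
# upstream value and decays with turbidity and temperature, plus noise.
y = X[:, 0] - 0.02 * X[:, 1] - 0.005 * X[:, 3] + rng.normal(0, 0.02, n)

# Train on the first 400 samples, evaluate on the held-out 100.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
rmse = float(np.sqrt(np.mean((pred - y[400:]) ** 2)))
```

The study's actual model also takes electrical conductivity, raw-water inflow, alkalinity, and NH3 as inputs; the sketch only shows the input-output shape of the regression problem.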

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung;Kiwon Kwon;Kyoungwon Park;Byoungchul Song
    • Journal of Internet Computing and Services / v.25 no.4 / pp.121-130 / 2024
  • As information and communication technology (ICT) improves exponentially, the types and amount of available data also increase. Although data analysis, including statistics, is essential for utilizing this large amount of data, there are inevitable limits to processing varied and complex data in conventional ways. Meanwhile, there have been many attempts to apply machine learning (ML) in various fields, driven by enhanced computational performance and growing demand for autonomous systems. In particular, processing the data for model input and designing the model to solve the objective function are critical to achieving good model performance. Data-processing methods suited to each data type and property have been presented in many studies, and ML performance varies greatly depending on the method. Nevertheless, deciding which data-processing method to use is difficult, since the types and characteristics of data have become more diverse. Specifically, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data-processing scheme for ML-aided data analysis using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input-variable filtering based on statistical analysis and normalization according to data properties. In addition, we analyze the data structure using visualization. Lastly, we design an ML model and train it by applying the proposed multi-variate data processing. We then analyze the trained model's passenger-survival prediction performance. We expect that the proposed multi-variate data processing and visualization can be extended to various environments for ML-based analysis.
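The kind of pre-processing the abstract describes (imputation, normalization, categorical encoding) might look like the following pandas sketch on a miniature, invented stand-in for the Titanic table:

```python
import pandas as pd

# Miniature stand-in for the Kaggle Titanic table (assumed rows).
df = pd.DataFrame({
    "Age": [22.0, 38.0, None, 35.0, 28.0],
    "Fare": [7.25, 71.28, 7.92, 53.10, 8.05],
    "Sex": ["male", "female", "female", "female", "male"],
    "Survived": [0, 1, 1, 1, 0],
})

# 1) Impute missing numeric values with the median (a common choice).
df["Age"] = df["Age"].fillna(df["Age"].median())
# 2) Min-max normalize numeric columns so their scales are comparable.
for col in ["Age", "Fare"]:
    lo, hi = df[col].min(), df[col].max()
    df[col] = (df[col] - lo) / (hi - lo)
# 3) Encode the categorical variable as 0/1 for the ML model input.
df["Sex"] = (df["Sex"] == "female").astype(int)
```

The processed frame can then feed a classifier for survival prediction; the paper additionally filters input variables via statistical analysis and inspects the structure through visualization before training.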

An Exploratory Study on the Components of Visual Merchandising of Internet Shopping Mall (인터넷쇼핑몰의 VMD 구성요인에 대한 탐색적 연구)

  • Kim, Kwang-Seok;Shin, Jong-Kuk;Koo, Dong-Mo
    • Journal of Global Scholars of Marketing Science / v.18 no.2 / pp.19-45 / 2008
  • This study empirically examines the primary dimensions of the visual merchandising (VMD) of internet shopping malls, namely store design, merchandise, and merchandising cues, that make a virtual store attractive to shoppers. The authors reviewed the literature on the major components of VMD from the perspective of the AIDA model, which has mainly been applied to offline store settings. The major purposes of the study are as follows: first, to derive the variables related to the components of visual merchandising from the existing literature, establish hypotheses, and test them empirically; second, to examine the relationships between the components of VMD and the attitude toward the VMD, with more emphasis on identifying the component structure of the VMD. VMD needs to be examined from the perspective that an online shopping mall is a virtual self-service or clerkless store, which reduces the number of employees and helps shoppers search, evaluate, and purchase by themselves, and it should be explored in terms of customers' in-store persuasion processes. This study reviewed the literature on store design, merchandise, and merchandising cues, which correspond to the store, product, and promotion, respectively. VMD is a total communication tool, and the AIDA model can explain in-store consumer behavior in online shopping. Store design triggers consumer attention to the online mall; merchandise generates product-related interest; and merchandising cues, such as recommendations and links, induce the desire to purchase. These three steps can be seen as the processes leading to purchase actions.
The theoretical rationale for the relationship between VMD and AIDA can be found in Tyagi (2005), who holds that the three steps of consumer-oriented merchandising are the store, the product assortment, and placement; in Omar (1999), who identifies three types of interior display: architectural design display, commodity display, and point-of-sale (POS) display; and in Davies and Ward (2005), who relate the retail store interior image to atmosphere, merchandise, and in-store promotion. Lee et al. (2000) suggested as web merchandising components merchandising cues, a shopping metaphor serving as a search-assistance tool, store design, layout (web design), and product assortment. Store design, which includes differentiation, simplicity, and navigation, is supposed to be related to attention to the virtual store. Second, the merchandise dimensions, comprising product assortment, visual information, and product reputation, have to do with interest in the product offerings. Finally, the merchandising cues, which refer to the merchandiser's (MD's) recommendation of products and the hyperlinks to relevant goods, are concerned with inducing the desire to purchase. A questionnaire survey was carried out to collect data from consumers who shop at internet shopping malls frequently. To select the subject malls, mall-ranking data announced by a mall-rating agency were used to identify the five most popular and five least popular malls. The subjects were instructed to answer the questions after navigating the designated mall for five minutes. Questionnaires were distributed to 300 consumers, and 166 samples were used in the final analysis. The empirical testing focused on identifying and confirming the dimensionality of VMD and its subdimensions using structural equation modeling. The confirmatory factor analysis for the endogenous and exogenous variables was carried out in four parts.
Second-order factor analysis was done for store design, merchandise, and merchandising cues, and first-order confirmatory factor analysis for the attitude toward the VMD. The model test results show that the chi-square value of the structural equation is 144.39 (d.f. 49), significant at the 0.01 level, which means the proposed model was rejected by the chi-square test. However, the ratio of the chi-square value to the degrees of freedom was 2.94, smaller than the acceptable level of 3.0. RMR is 0.087, which is slightly above the generally acceptable level of 0.08. GFI and AGFI turned out to be 0.90 and 0.84, respectively; both NFI and NNFI are 0.94, and CFI is 0.95. The major test results are as follows. First, the second-order factor analysis and structural equation modeling confirm that differentiation, simplicity, and ease of identifying the current status of the transaction are subdimensions of store design and significant predictors of the dependent variable. This result implies that when designing an online shopping mall, it is necessary to differentiate it visually from other malls to improve the communication effectiveness of the store design. That is, a differentiated store design raises the contrast stimulus to the sensory organs, promoting memory of the store and a favorable attitude toward the store's VMD. The finding that navigation, meaning the ease of identifying the current status of shopping, affects the attitude toward VMD can be interpreted to mean that navigating via hyperlinks, a characteristic of internet shopping, is a complex cognitive process, and shoppers are likely to lack a sense of the overall structure of the store. Consequently, shoppers are likely to get lost while shopping, not knowing where to go.
An orientation tool enhances the accessibility of information and raises perceptual power over the store environment (Titus & Everett, 1995). Second, the primary dimension of merchandise and its subdimensions were each confirmed to be unidimensional, with construct validity and nomological validity, in that the VMD dimensions correlate positively with the dependent variable. The subdimensions of product assortment, brand fame, and information provision proved to have positive effects on the attitude toward the VMD. This can be interpreted to mean that the more plentiful the product and brand assortment of the mall, the more likely shoppers are to favor it. Brand fame and information provision also affect the VMD attitude, which means that the more famous the brand, the more shoppers trust and feel familiar with the mall, and that plentifully and visually presented information helps shoppers form a favorable attitude toward the store's VMD. Third, the merchandising cues of product recommendation and hyperlinks turned out to affect the VMD attitude. This can be interpreted to mean that recommended products reduce the uncertainty of the purchase decision, and hyperlinks to relevant products help shoppers save the cognitive effort of information search and gathering, which leads to a favorable attitude toward the VMD. This study tried to shed new light on the VMD of online stores by reviewing the variables identified as relevant to offline VMD in the existing literature and by linking the VMD components from the perspective of the AIDA model.
The effect sizes of the VMD dimensions on the attitude were, in descending order, merchandise, store design, and merchandising cues. It is said that the internet offers unlimited display space; however, the virtual store is not unlimited, since consumers have only a limited capacity to process external information and internal memory. In particular, shoppers are likely to face difficulties in decision making on account of too many alternatives and information overload. Therefore, the internet shopping mall manager should take into consideration the consumer's cost of information search and establish optimal product placements and search routes. An efficient store composition becomes possible by reducing the psychological burdens and cognitive effort exerted on information search and alternative evaluation. The store image is largely determined by the product categories and brands the mall deals in. The results of this study support the proposition that merchandise matters more to the VMD attitude than the other components, so the manager is required to take a strategic approach to VMD. Internet users are becoming more accustomed to and knowledgeable about the internet and more likely to accept it as a shopping channel the longer they use it to shop. The web merchandiser should be aware that product introduction using moving pictures and bulletin boards is becoming more important in order to present interactive product information visually and communicate with customers more actively, thereby enriching the quantity and quality of product information.
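The fit statistics reported above can be checked directly; the snippet below reproduces the chi-square-to-degrees-of-freedom rule of thumb the authors apply after the chi-square test rejects the model:

```python
# Fit statistics reported in the study.
chi_square, df = 144.39, 49
ratio = chi_square / df

# Conventional rule of thumb applied in the paper: a ratio below 3.0
# indicates acceptable fit even when the raw chi-square test rejects.
acceptable = ratio < 3.0
```

The same reasoning is why the authors accept the model despite the significant chi-square: with large-ish samples the chi-square test is strict, so ratio-based and incremental indices (GFI, AGFI, NFI, NNFI, CFI) carry the judgment.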


Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy / v.13 no.1 / pp.151-166 / 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies which have dealt with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities. However, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interactions between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests upon the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer, and, therefore, is unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation.
The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints except the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm. If the monopolist has high costs, however, the rival will definitely enter the market, because it can make positive profits. In this situation, if the incumbent firm unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's costs by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output level up to what would have been produced by a low-cost firm in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market. Producing the high-cost duopoly output is self-revealing and thus to be avoided. Therefore, the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer, thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game, and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy. This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. However, the problem is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices.
This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction of limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
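The limit-pricing intuition above can be illustrated with the textbook linear-demand Cournot and monopoly formulas; the parameter values are illustrative assumptions, not taken from the paper:

```python
# Illustrative quantities under linear inverse demand P = a - Q
# (assumed parameterisation for a low-cost vs. high-cost incumbent).
a = 10.0                    # demand intercept
c_low, c_high = 2.0, 4.0    # marginal costs

def monopoly_output(c):
    # q maximizing (a - q - c) * q
    return (a - c) / 2

def cournot_output(c_own, c_rival):
    # Best-response equilibrium quantity in Cournot duopoly
    return (a - 2 * c_own + c_rival) / 3

q_high = monopoly_output(c_high)   # high-cost monopoly output
q_low = monopoly_output(c_low)     # low-cost monopoly output
```

Since q_low exceeds q_high, a high-cost incumbent that mimics the low-cost output overproduces relative to its own optimum, which is exactly the costly signal-jamming the model calls limit pricing; the same mimicry in the duopoly stage depresses price and inflicts losses on the rival, i.e. predation.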


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found weaknesses in their methods, which are often technically complicated and insufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as open APIs, search tools, DB-to-DB interfaces, or content purchasing. The second phase is pre-processing, which generates useful material for meaningful analysis.
If we do not remove garbage data, results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase where the cleansed social media content set is to be analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorite, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trends analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of analysis results. The major focus and purpose of this phase are to explain results of analysis and help users to comprehend its meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with 66.5% of market share; the firm has kept No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of contents including blogs, forum contents and news articles. After collecting social media content data, we generated instant noodle business specific language resources for data manipulation and analysis using natural language processing. In addition, we tried to classify contents in more detail categories such as marketing features, environment, reputation, etc. 
In these phases, we used freeware such as the TM, KoNLP, ggplot2, and plyr packages from the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples using open-library software packages from the R project. Business actors can detect at a glance areas that are weak, strong, positive, negative, quiet, or loud. The heat map explains the movement of sentiment or volume in a category-by-time matrix, showing density of color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers in quickly understanding the "big picture" business situation with a hierarchical structure, since a tree map can present buzz volume and sentiment in a visualized result for a given period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
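The "analyzing" phase described above centers on lexicon-based sentiment polarity. A minimal sketch with an invented English lexicon and posts (the study built Korean-language resources in R; this is only the shape of the technique):

```python
# Hypothetical sentiment lexicon for noodle-related posts.
POSITIVE = {"delicious", "tasty", "love", "great"}
NEGATIVE = {"bland", "salty", "disappointed"}

def sentiment(post):
    """Label a post by counting lexicon hits: positive minus negative."""
    words = post.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Invented stand-ins for collected and cleansed social media posts.
posts = [
    "I love this tasty ramen",
    "too salty and bland for me",
    "arrived on time",
]
labels = [sentiment(p) for p in posts]
```

Aggregating such labels by category and time period yields exactly the volume-and-sentiment matrices that the heat maps and valence tree maps visualize.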

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • A large amount of data is now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data, such as audio, text, and image data, and can be analyzed by deep-learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep-learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNNs). A CNN is made up of neurons that learn parameters, such as weights, as inputs pass through to the outputs. A CNN has a layer structure well suited to image classification, comprising convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained using online product images, which are taken under controlled conditions, such as images of the apparel itself or of professional models wearing it. Such images may not be effective for training a classification model intended to classify street-fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. As transfer learning in CNNs is composed of pre-training and fine-tuning stages, we divide the training into two steps.
First, we pre-train our architecture on a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. Since no previously published runway dataset was available, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands: Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We propose training the model with images that capture all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow-Slim, we reduced the time spent training the classification model to six minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street-fashion image.
Specifically, a runway query image can support a mobile application service during fashion week to facilitate brand search; a street-style query image can be classified during fashion editorial work to label the brand or style; and a website query image can be processed by an e-commerce service that provides item information or recommends similar items.

Geology of Athabasca Oil Sands in Canada (캐나다 아사바스카 오일샌드 지질특성)

  • Kwon, Yi-Kwon
    • The Korean Journal of Petroleum Geology
    • /
    • v.14 no.1
    • /
    • pp.1-11
    • /
    • 2008
  • As conventional oil and gas reservoirs become depleted, interest in oil sands has increased rapidly over the last decade. Oil sands are a mixture of bitumen, water, and host sediments of sand and clay. Most oil sand is unconsolidated sand held together by bitumen. Bitumen has an in-situ viscosity of >10,000 centipoise (cP) at reservoir conditions and an API gravity of 8-14°. The largest oil sands deposits are in Alberta and Saskatchewan, Canada. The reserves are estimated at 1.7 trillion barrels of initial oil-in-place and 173 billion barrels of remaining established reserves. Alberta has a number of oil sands deposits, grouped into three oil sands development areas (Athabasca, Cold Lake, and Peace River), with the largest current bitumen production from Athabasca. The principal oil sands deposits consist of the McMurray Formation and Wabiskaw Member in the Athabasca area, the Gething and Bluesky formations in the Peace River area, and the relatively thin, multi-reservoir deposits of the McMurray, Clearwater, and Grand Rapids formations in the Cold Lake area. The reservoir sediments were deposited in a foreland basin (the Western Canada Sedimentary Basin) formed by the collision between the Pacific and North American plates and the subsequent thrusting movements in the Mesozoic. The deposits are underlain by basement rocks of Paleozoic carbonates with highly variable topography. The oil sands deposits were formed during the Early Cretaceous transgression that occurred along the Cretaceous Interior Seaway in North America. The oil-sands-hosting McMurray and Wabiskaw deposits in the Athabasca area consist of lower fluvial and upper estuarine-offshore sediments, reflecting the broad, overall transgression. The deposits are characterized by facies heterogeneity between channelized reservoir sands and non-reservoir muds.
The main reservoir bodies of the McMurray Formation are fluvial and estuarine channel/point-bar complexes interbedded with fine-grained deposits formed in floodplain, tidal-flat, and estuarine-bay settings. The Wabiskaw deposits (the basal member of the Clearwater Formation) commonly comprise sheet-shaped offshore muds and sands, but occasionally cut deeply into the McMurray deposits, forming channelized reservoir sand bodies of oil sands. In Canada, the bitumen of oil sands deposits is produced by surface mining or by in-situ thermal recovery processes. Bitumen sands recovered by surface mining are converted into synthetic crude oil through extraction and upgrading processes. Bitumen produced by in-situ thermal recovery, on the other hand, is transported to the refinery after only a bitumen blending process. In-situ thermal recovery technology is represented by Steam-Assisted Gravity Drainage (SAGD) and Cyclic Steam Stimulation (CSS). These technologies are based on injecting steam into bitumen sand reservoirs to raise the in-situ reservoir temperature and thereby increase bitumen mobility. In oil sands reservoirs, the efficiency of steam propagation is controlled mainly by reservoir geology. Accordingly, an understanding of the geological factors and characteristics of oil sands reservoir deposits is a prerequisite for well-designed development planning and effective bitumen production. As significant geological factors and characteristics of oil sands reservoir deposits, this study suggests (1) net pay and connectivity of bitumen sands, (2) bitumen content and saturation, (3) geologic structure, (4) distribution of mud baffles and plugs, (5) thickness and lateral continuity of mud interbeds, (6) distribution of water-saturated sands, (7) distribution of gas-saturated sands, (8) direction of lateral accretion of point bars, (9) distribution of diagenetic layers and nodules, and (10) texture and fabric changes within the reservoir sand body.
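As a small worked example for the bitumen density figure quoted in this abstract: API gravity converts to specific gravity via the standard petroleum-industry definition SG = 141.5 / (API + 131.5), so the 8-14° API range corresponds to bitumen that is roughly as dense as, or slightly denser than, water. The short Python sketch below makes the conversion explicit (the function name is ours, not from the paper).

```python
# Convert the quoted 8-14 degree API gravity of bitumen to specific
# gravity using the standard definition SG = 141.5 / (API + 131.5).
def api_to_sg(api_gravity: float) -> float:
    """Specific gravity (relative to water at 60 degF) from API gravity."""
    return 141.5 / (api_gravity + 131.5)

for api in (8.0, 14.0):
    print(f"API {api:4.1f}  ->  SG {api_to_sg(api):.3f}")
# API 8 gives SG ~1.014 and API 14 gives SG ~0.973: near or above the
# density of water, consistent with bitumen's immobility in situ.
```

Note that 10° API is exactly the density of water by this definition, which is why bitumen (below ~10° API at the heavy end of the range) can sink rather than float in water.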
