• Title/Summary/Keyword: Distribution Information


Consumer Responses to Retailer's Location-based Mobile Shopping Service : Focusing on PAD Emotional State Model and Information Relevance (유통업체의 위치기반 모바일 쇼핑서비스 제공에 대한 소비자 반응 : PAD 감정모델과 정보의 상황관련성을 중심으로)

  • Lee, Hyun-Hwa;Moon, Hee-Kang
    • Journal of Distribution Research
    • /
    • v.17 no.2
    • /
    • pp.63-92
    • /
    • 2012
  • This study investigated consumer intention to use a location-based mobile shopping service (LBMSS) that integrates cognitive and affective responses. Information relevancy was integrated into the pleasure-arousal-dominance (PAD) emotional state model as the conceptual framework of the present study. The results of an online survey of 335 mobile phone users in the U.S. indicated positive effects of arousal and information relevancy on pleasure. In addition, there was a significant relationship between pleasure and intention to use an LBMSS. However, the relationship between dominance and pleasure was not statistically significant. The results of the present study provide insight to retailers and marketers as to what factors they need to consider when implementing location-based mobile shopping services to improve their business performance. Extended Abstract : Location-aware technology has expanded the marketer's reach by reducing the space and time between a consumer's receipt of advertising and purchase, offering real-time information and coupons to consumers in purchasing situations (Dickenger and Kleijnen, 2008; Malhotra and Malhotra, 2009). LBMSS increases the relevancy of SMS marketing by linking advertisements to a user's location (Bamba and Barnes, 2007; Malhotra and Malhotra, 2009). This study investigated consumer intention to use an LBMSS that integrates cognitive and affective responses. The purpose of the study was to examine the relationships among information relevancy and affective variables and their effects on intention to use an LBMSS. Thus, information relevancy was integrated into the pleasure-arousal-dominance (PAD) model, generating the following hypotheses. Hypothesis 1. There will be a positive influence of arousal concerning LBMSS on pleasure in regard to LBMSS. Hypothesis 2. There will be a positive influence of dominance in LBMSS on pleasure in regard to LBMSS. Hypothesis 3. 
There will be a positive influence of information relevancy on pleasure in regard to LBMSS. Hypothesis 4. There will be a positive influence of pleasure about LBMSS on intention to use LBMSS. E-mail invitations were sent to a randomly selected sample of three thousand consumers, all mobile phone owners older than 18, acquired from an independent marketing research company. An online survey technique was employed, following Dillman's (2000) online survey method and follow-ups. A total of 335 valid responses were used for the data analysis. Before answering any questions, respondents were asked to read a document describing LBMSS, which included definitions and examples of LBMSS provided by various service providers. They were then exposed to a scenario describing the participant as taking a Saturday shopping trip to a mall and receiving a short message from the mall containing new product information and coupons for same-day use at participating stores. They then completed a questionnaire. To assess arousal, dominance, and pleasure, we adapted and modified scales from previous studies to the context of location-based mobile shopping services, with five items each drawn from Mehrabian and Russell (1974). The 15 items were measured on a seven-point bipolar scale. To measure information relevancy, four items were borrowed from Mason et al. (1995). Intention to use LBMSS was captured using two items developed by Blackwell and Miniard (1995) and one item developed by the authors. Data analyses were conducted using SPSS 19.0 and LISREL 8.72. A total of 335 usable responses were obtained after deleting incomplete responses, a response rate of 11.20%. A little over half of the respondents were male (53.9%) and approximately 60% were married (57.4%). 
The mean age of the sample was 29.44 years, with a range from 19 to 60 years. In terms of ethnicity, respondents were European American (54.5%), Hispanic American (5.3%), African American (3.6%), and Asian American (2.9%). The respondents were highly educated: 62.5% of the participants reported holding a college degree or its equivalent, and 14.5% had a graduate degree. The sample represented all income categories: less than $24,999 (10.8%), $25,000-$49,999 (28.34%), $50,000-$74,999 (13.8%), and $75,000 or more (10.23%). The respondents indicated that they were employed in many occupations, and responses came from 42 states in the U.S. To identify the dimensions of the research constructs, exploratory factor analysis (EFA) using a varimax rotation was conducted. As indicated in Table 1, the five dimensions suggested by the EFA (arousal, dominance, relevancy, pleasure, and intention to use) explained 82.29% of the total variance, with factor loadings ranging from .74 to .89. As a next step, CFA was conducted to validate the dimensions identified by the exploratory factor analysis and to further refine the scale. Table 1 exhibits the results of the measurement model analysis: a chi-square of 202.13 with 89 degrees of freedom (p = .002), GFI of .93, AGFI of .89, CFI of .99, and NFI of .98, indicating a good fit of the model to the data (Bagozzi and Yi, 1998; Hair et al., 1998). As Table 1 shows, reliability was estimated with Cronbach's alpha and composite reliability (CR) for all multi-item scales. All values met the criteria for satisfactory reliability of multi-item measures (alpha > .91, CR > .80). In addition, we tested the convergent validity of the measure using average variance extracted (AVE), following the recommendations of Fornell and Larcker (1981). 
The AVE values for the model constructs ranged from .74 to .85, higher than the threshold suggested by Fornell and Larcker (1981). To examine the discriminant validity of the measure, we again followed the recommendations of Fornell and Larcker (1981). The shared variances between constructs were smaller than the AVEs of the research constructs, confirming the discriminant validity of the measure. The causal model testing was conducted using LISREL 8.72 with a maximum-likelihood estimation method. Table 2 shows the results of the hypothesis testing. The results revealed good overall fit for the proposed model: chi-square was 342.00 (df = 92, p = .000), NFI was .97, NNFI was .97, GFI was .89, AGFI was .83, and RMSEA was .08. All paths in the proposed model received significant statistical support except H2. The paths from arousal to pleasure (H1: β = .70; t = 11.44), from information relevancy to intention to use (H3: β = .12; t = 2.36), from information relevancy to pleasure (H4: β = .15; t = 2.86), and from pleasure to intention to use (H5: β = .54; t = 9.05) were significant. However, the path from dominance to pleasure was not supported. This study investigated consumer intention to use a location-based mobile shopping service (LBMSS) that integrates cognitive and affective responses. Information relevancy was integrated into the pleasure-arousal-dominance (PAD) emotional state model as a conceptual framework. The results of the present study support previous studies indicating that emotional responses as well as cognitive responses have a strong impact on the acceptance of new technology. The findings suggest potential marketing strategies to mobile service developers and retailers who are considering the implementation of LBMSS. It would be rewarding to develop location-based mobile services that integrate information relevancy and elicit positive emotional responses.


Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags. Active tags carry a power source that allows them to operate on their own, while passive tags are small and low-cost, making them more suitable for the distribution industry. A reader processes the information received from tags. An RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied to a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the introduction of RFID systems, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm to significantly alleviate the collision problem caused by simultaneous responses of multiple tags. Anti-collision schemes for RFID systems fall into three categories: probabilistic, deterministic, and hybrid. ALOHA-based protocols are probabilistic, and tree-based protocols are deterministic. In ALOHA-based protocols, time is divided into multiple slots; tags randomly select slots and transmit their IDs in them. Being probabilistic, however, ALOHA-based protocols cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In tree-based protocols, a reader sends a query and tags respond to it with their own IDs. When a reader sends a query and two or more tags respond, a collision occurs; the reader then makes and sends a new query. Frequent collisions degrade the identification performance, so identifying tags quickly requires reducing collisions efficiently. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID. 
Tags in the same company or from the same manufacturer have similar tag IDs sharing a prefix, so unnecessary collisions occur while identifying multiple tags using the Query Tree protocol. This increases the number of query-responses and the idle time, which significantly increases the identification time. To mitigate this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, in the Collision Tree protocol and the Query Tree protocol, only one bit is identified per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, collision information of tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
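The basic Query Tree procedure that the abstract builds on can be sketched as a small simulation. This is a minimal illustration (binary string IDs, function names are our own, not from the paper), but it shows why prefix-similar IDs inflate the number of query-response rounds:

```python
from collections import deque

def query_tree_identify(tag_ids):
    """Identify all tags via the basic Query Tree protocol.

    The reader broadcasts a prefix; every tag whose ID starts with that
    prefix responds. Two or more responses are a collision, which splits
    the prefix into '0' and '1' branches; exactly one response identifies
    a tag. Returns the identified IDs and the number of query rounds.
    """
    identified, rounds = [], 0
    queue = deque([""])  # start from the empty prefix
    while queue:
        prefix = queue.popleft()
        rounds += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:       # successful identification
            identified.append(responders[0])
        elif len(responders) > 1:      # collision: extend the prefix
            queue.append(prefix + "0")
            queue.append(prefix + "1")
        # zero responders: an idle round, nothing to do
    return identified, rounds

tags = ["0010", "0011", "1010"]
ids, n = query_tree_identify(tags)  # 3 tags identified in 9 rounds
```

Here the two tags sharing the prefix "001" force extra collision and idle rounds, which is exactly the overhead the proposed m-bit, collision-information-based scheme aims to cut.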

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has grown into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is applied to marketing and to solving social problems by analyzing openly released or directly collected data. In Korea, various companies and individuals are taking up big data analysis, but analysis is often blocked at the initial stage by limited data disclosure and collection difficulties. System improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly as services for opening public data, such as the Korean Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because little data is shared. Moreover, big traffic problems can occur because the entire data set must be downloaded and examined just to grasp the attributes and basic information of the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper to solve the problem of sharing big data: it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by providing information that conveys the properties and characteristics of a data set when a user searches for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when the original data is disclosed can be avoided, enabling big data sharing between the data provider and the data user. 
Second, it is necessary to quickly generate appropriate preprocessing results according to the level of disclosure or the network status of the raw data, and to provide the results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors the network traffic in real time; when preprocessing the data requested by a user, it preprocesses the data to a size transferable on the current network before transmitting it, so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis. This method is expected to produce a lower traffic volume than the conventional approach of sharing only raw data across many systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate its sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, with a Server Agent and a Client Agent deployed on the server and client sides respectively. The Server Agent is the agent required by the data provider: it performs pre-analysis of big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data; it also performs fast and efficient big data preprocessing through distributed processing and continuously monitors network traffic. The Client Agent is the agent placed on the data user side: it can search big data through the Data Descriptor produced by pre-analysis and quickly find the desired data, which can then be requested from the server and downloaded according to the level of disclosure. The Server Agent and the Client Agent are separated so that data published by a provider can be used by a user. 
In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data. By publishing the newly processed data through the Server Agent, the data user changes roles and becomes a data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis on the sample data. In this way, raw data is processed and the processed big data is utilized by users, forming a natural sharing environment. The roles of data provider and data user are not distinguished, yielding an ideal sharing service in which everyone can be both a provider and a user. The client-server model thus solves the problem of sharing big data, provides a free sharing environment for secure big data disclosure, and makes big data easy to find.
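The pre-analysis step that the Server Agent performs can be sketched as follows. The Data Descriptor's exact schema is not specified in the abstract beyond its three parts (Sample Data, Summary Data, Raw Data information), so the fields below are illustrative assumptions:

```python
import random
import statistics

def build_data_descriptor(rows, column, sample_size=5):
    """Pre-analyze a data set and build a Data Descriptor.

    Mirrors the Server Agent's pre-analysis idea: instead of exposing
    raw data, publish a small random sample plus summary statistics
    that let a data user judge the data's properties before requesting
    a download. `rows` is a list of dict records; `column` names one
    numeric field to summarize.
    """
    values = [r[column] for r in rows]
    return {
        "row_count": len(rows),  # raw-data information
        "sample_data": random.sample(rows, min(sample_size, len(rows))),
        "summary_data": {
            "min": min(values),
            "max": max(values),
            "mean": statistics.mean(values),
        },
    }
```

A Client Agent would search over such descriptors (row counts, value ranges) to decide whether the underlying raw data is worth requesting at a given disclosure level; in the paper's design the heavy pre-analysis itself runs on Spark rather than in a single process like this sketch.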

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.77-97
    • /
    • 2019
  • The large amount of data that emerges from the hyperconnected environment of the Fourth Industrial Revolution is a major factor distinguishing it from the existing production environment. This environment has the two-sided feature of producing data while using it, and the data thus produced generates further value. Because of the massive scale of this data, future information systems need to process more data in terms of quantity than existing information systems; in terms of quality, they also require the ability to extract meaning from such large amounts of data. In a small-scale information system, a person can understand the system accurately and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly difficult. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be addressed by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As existing systems accumulate increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system forms a large semantic network of data through connections with other systems, offers a wide range of usable databases, and has the advantage of searching more precisely and quickly through the relationships between predefined concepts. 
In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, as the complicated logistics information system with its large amount of data had become difficult to use. It takes pre-specified information from the existing logistics system and displays it as web pages, but little can be checked beyond the few items specified in advance, extending it with additional functions is time-consuming, and it is organized by category without a search function. It therefore shares the disadvantage of the existing system: it can be used effectively only by those who already know the system well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, useful functions such as performance-based logistics support contract management and a component dictionary were identified and included in the ontology. To confirm that the constructed ontology can be used for decision support, meaningful analysis functions were implemented, such as calculating aircraft utilization rates and querying performance-based logistics contracts. 
In particular, in contrast to past ontology studies that built static ontology databases, this study constructs time-series data whose values change over time, such as the state of each aircraft by date, as an ontology, and confirms through the constructed ontology that utilization rates can be calculated on various criteria. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through the ontology's reasoning and functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, the failure rate and reliability of each component can be calculated, including from the MTBF data of selected items based on actual part consumption records, from which mission reliability and system reliability are computed. To confirm the usability of the constructed ontology-based logistics situation management system, we show through the Technology Acceptance Model (TAM), a representative model for measuring the acceptability of a technology, that the proposed system is more useful and convenient than the existing system.
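The reliability calculation mentioned at the end follows standard reliability engineering when a constant failure rate is assumed. The paper does not give its formulas, so the sketch below is the conventional exponential-life model (λ = 1/MTBF, R(t) = e^(−λt), series-system reliability as a product), not the authors' exact computation:

```python
import math

def failure_rate(mtbf_hours):
    """Constant failure rate lambda from MTBF (exponential life model)."""
    return 1.0 / mtbf_hours

def component_reliability(mtbf_hours, mission_hours):
    """R(t) = exp(-lambda * t): probability the component survives
    a mission of the given duration."""
    return math.exp(-failure_rate(mtbf_hours) * mission_hours)

def series_system_reliability(mtbfs, mission_hours):
    """A series system survives only if every component survives,
    so system reliability is the product of component reliabilities."""
    r = 1.0
    for mtbf in mtbfs:
        r *= component_reliability(mtbf, mission_hours)
    return r
```

In an ontology-based system, the MTBF inputs would come from querying the part-consumption records stored as ontology instances, with the arithmetic itself done by rule or built-in functions.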

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decision making. Customers believe that reviews written by others who have already experienced the product offer more reliable information than that provided by sellers. However, with so many products and reviews, the advantage of e-commerce can be overwhelmed by rising search costs: reading every review to find the pros and cons of a product is exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and online stores have devised various ways to surface useful customer reviews to assist potential buyers. Different methods have been developed to classify and recommend useful reviews, primarily using feedback from customers about the helpfulness of reviews. Most shopping websites provide customer reviews along with the average rating of a product, the number of customers who participated in rating it, and the rating distribution. Most information on the helpfulness of product reviews is collected through a voting system: Amazon.com asks customers whether a review of a product is helpful, and places the most helpful favorable review and the most helpful critical review at the top of the list of product reviews. Some companies also predict the usefulness of a review from attributes including its length, author(s), and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: we extract all words from the reviews and build a matrix of the number of occurrences of each term in each review. 
Since there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete sparse terms, since they have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting terms that are useless for review classification. We propose a neutrality index for selecting the words to be deleted. Many words appear similarly in both classes (useful and not useful), and these words have little or even a negative effect on classification performance. We therefore define such words as neutral terms and delete those that appear with similar frequency in both classes. After deleting sparse words, we selected further words to delete on the basis of neutrality. We tested our approach on Amazon.com review data from five product categories: Cellphones & Accessories; Movies & TV; Automotive; CDs & Vinyl; and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of useful votes among total votes as the threshold separating useful from not-useful reviews. We randomly selected 1,500 useful and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared the classification performance in terms of precision, recall, and F-measure. Though the performance varies with product category and data set, deleting terms by both sparsity and neutrality showed the best F-measure for both classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. 
Thus, care is needed in selecting term-deletion methods and classification algorithms for a given data set.
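The neutral-term idea above can be sketched in a few lines. The paper's exact neutrality index is not reproduced here; this sketch uses one plausible definition, the ratio of the smaller to the larger relative frequency across the two classes, so a term appearing equally in both classes scores 1.0 (fully neutral) and a class-exclusive term scores 0.0:

```python
from collections import Counter

def neutrality_scores(useful_docs, not_useful_docs):
    """Score each term by how evenly it appears across the two classes.

    Each argument is a list of whitespace-tokenizable review texts.
    Returns {term: score} with score in [0, 1]; higher means more
    neutral (less discriminative between useful and not-useful).
    """
    u_counts = Counter(w for d in useful_docs for w in d.split())
    n_counts = Counter(w for d in not_useful_docs for w in d.split())
    u_total = sum(u_counts.values())
    n_total = sum(n_counts.values())
    scores = {}
    for term in set(u_counts) | set(n_counts):
        pu = u_counts[term] / u_total   # relative frequency, useful class
        pn = n_counts[term] / n_total   # relative frequency, not-useful class
        scores[term] = min(pu, pn) / max(pu, pn)
    return scores

def drop_neutral_terms(scores, threshold):
    """Terms scoring at or above the threshold are removed before
    building the term-document matrix."""
    return {t for t, s in scores.items() if s >= threshold}
```

In the study's pipeline this filtering would run after sparse-term deletion, shrinking the term-document matrix before Information Gain or SVM classification.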

The Sensitivity Analysis according to Observed Frequency of Daily Composite Insolation based on COMS (관측 빈도에 따른 COMS 기반의 일 평균 일사량 산출의 민감도 분석)

  • Kim, Honghee;Lee, Kyeong-Sang;Seo, Minji;Choi, Sungwon;Sung, Noh-Hun;Lee, Darae;Jin, Donghyun;Kwon, Chaeyoung;Huh, Morang;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.6
    • /
    • pp.733-739
    • /
    • 2016
  • Insolation is a major indicator variable that can serve as an energy source in the Earth system, and monitoring it by remote sensing is important for evaluating the potential of solar energy. In this study, we performed a sensitivity analysis of the effect of observation frequency on daily composite insolation over the Korean peninsula. We estimated insolation (INS) from the channel data and Cloud Mask of the Communication, Ocean and Meteorological Satellite (COMS) at temporal resolutions of 1 and 3 hours. We performed hemispherical integration at each spatial resolution to represent the whole sky, and then produced daily composites of insolation. We compared the accuracy of the estimated COMS insolation against pyranometer data from 37 sites. As a result, the daily composite INS showed little sensitivity to satellite observation frequency: the accuracy of the insolation calculated at 1-hour intervals was 28.6401 W/m^2 and at 3-hour intervals 30.4960 W/m^2. However, there was a large difference in the spatial distribution of the two INS data sets depending on the observed frequency of clouds. We therefore performed a sensitivity analysis with respect to the observed frequency of clouds and the difference between the two INS data sets; a sensitivity of up to 19.4392 W/m^2 was observed.

Improvement of protein identification performance by reinterpreting the precursor ion mass tolerance of mass spectrum (질량스펙트럼의 펩타이드 분자량 오차범위 재해석에 의한 단백질 동정의 성능 향상)

  • Gwon, Gyeong-Hun;Kim, Jin-Yeong;Park, Geon-Uk;Lee, Jeong-Hwa;Baek, Yung-Gi;Yu, Jong-Sin
    • Bioinformatics and Biosystems
    • /
    • v.1 no.2
    • /
    • pp.109-114
    • /
    • 2006
  • In proteomics research, proteins are digested into peptides by an enzyme, and in the mass spectrometer these peptides break into fragment ions to generate tandem mass spectra. The tandem mass spectral data obtained from the mass spectrometer consist of the molecular weights of the precursor ion and the fragment ions. The precursor ion mass of a tandem mass spectrum is the first value used to sort the candidate peptides in the database search: we look for the peptide sequences whose molecular weight matches the precursor ion mass of the spectrum, and then choose the one sequence that best matches the fragment ion information. The precursor ion mass of the tandem mass spectrum is compared with those of the digested peptides of the protein database within a mass tolerance that users assign according to the accuracy of the mass spectrometer. In this study, we used the reversed-sequence database method to analyze the molecular weight distribution of the precursor ions of tandem mass spectra obtained on an FT LTQ mass spectrometer from a human plasma sample. By reinterpreting the precursor ion mass distribution, we could compute the experimental accuracy, and we suggest a method to improve protein identification performance.
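The precursor-mass filtering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the ppm-based tolerance form and the example masses are conventional assumptions:

```python
def within_tolerance(observed_mass, candidate_mass, tol_ppm=10.0):
    """Check whether a candidate peptide's mass matches the observed
    precursor ion mass within a user-specified ppm tolerance."""
    diff_ppm = abs(observed_mass - candidate_mass) / candidate_mass * 1e6
    return diff_ppm <= tol_ppm

def candidate_peptides(observed_mass, database, tol_ppm=10.0):
    """Filter a list of (sequence, mass) pairs by precursor mass,
    the first step of the database search described in the abstract.
    The surviving candidates are then scored against the fragment ions."""
    return [seq for seq, mass in database
            if within_tolerance(observed_mass, mass, tol_ppm)]
```

Reinterpreting the empirical distribution of observed-minus-theoretical precursor masses, as the paper proposes, amounts to choosing `tol_ppm` from the instrument's measured accuracy rather than from a nominal specification, which narrows the candidate list and reduces false identifications.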


Recommendation and current status in exposure assessment using monitoring data in ship building industry - focused on the similar exposure group(SEG) (조선업의 작업환경측정결과를 이용한 노출평가의 문제점과 해결방향 - 유사노출군을 중심으로 -)

  • Roh, Youngman;Yim, Hyeon Woo;Kim, Suk Il;Park, Hyo Man;Jung, Jae Yeol;Park, Sook Kyung;Kim, Hyun-Wook;Chung, Chee Kyung;Lee, Won Chul;Kim, Jung Man;Kim, Soo Keun;Koh, Sang Baek;Karl, Sieber;Kim, Euna;Choi, Jung Keun
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.11 no.2
    • /
    • pp.126-134
    • /
    • 2001
  • Statistical approaches were considered for analyzing data from the limited number of samples collected by an industrial hygienist in the shipbuilding industry (SBI) to check compliance with occupational standards. Compliance sampling has usually been guided by judgment selection rather than true randomness, producing compliance samples that approximate a censored sample from the upper tail of the exposure distribution. After reviewing the whole production line in the SBI, similar exposure groups (SEGs), including the welding and painting processes, were established to assess representative values for each group. For convenience of statistical analysis, a code was assigned to each SEG. Descriptive statistics and probability plotting were used to obtain the representative values for each SEG. In the first step, 558 SEGs were established across 5 shipbuilding companies. The 38 SEGs that showed uncertainty were divided among the 5 companies and their representative values were assessed again. The data of the 44 SEGs per company that followed neither a normal nor a lognormal distribution were analyzed individually. Recommendations were also made to resolve the uncertainty in each group.
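The descriptive statistics used to characterize each SEG can be sketched briefly. Occupational exposure data are conventionally treated as lognormal, so the representative values are the geometric mean (GM) and geometric standard deviation (GSD); this is a generic sketch of that convention, not the paper's specific code:

```python
import math
import statistics

def lognormal_summary(samples):
    """Geometric mean (GM) and geometric standard deviation (GSD)
    of a set of positive exposure measurements, the standard
    descriptive statistics for lognormally distributed exposure data.
    GM = exp(mean of log-values); GSD = exp(stdev of log-values).
    """
    logs = [math.log(x) for x in samples]
    gm = math.exp(statistics.mean(logs))
    gsd = math.exp(statistics.stdev(logs))  # sample (n-1) standard deviation
    return gm, gsd
```

The probability plotting mentioned in the abstract serves as the graphical check of whether the log-values are in fact normally distributed; SEGs failing both the normal and lognormal checks would need the individual treatment the authors describe.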


The Significance of the Distribution Patterns of Certain Elements in the Stream Sediments of the St. Austell Granite Mass, Cornwall (영국(英國)콘웰주(州)의 성(聖)오우스텔 화강암괴(花崗岩塊)에 대(對)한 지구화학적(地球化學的) 연구(硏究))

  • Lee, Jae Yeong;Olinze, Simon Kaine
    • Economic and Environmental Geology
    • /
    • v.2 no.4
    • /
    • pp.23-71
    • /
    • 1969
  • Sediment samples were taken at about half-mile intervals from all the major rivers draining the St. Austell granite mass. The minus-80-mesh (B.S.S.) fraction of each sample was analysed, using semiquantitative methods, for sodium, potassium, lithium, phosphorus, nickel, chromium, tin, tungsten, arsenic, copper, zinc and lead. The work was carried out with a view to gaining further information on the geographical distribution of the different granite facies that might exist, and to investigating the geochemical dispersion of these elements in relation to mineralisation in the area. The results confirm Exley's suggestion that the mass consists of two major granite intrusions, the earlier undifferentiated one joined on the west by a later differentiated intrusive. During the work, grid deviation maps proved particularly useful in obtaining data on the nature of the granite, but frequency diagrams were not especially helpful. All the known lode areas were associated with stream sediments containing anomalously high concentrations of lode metals, and it is concluded that these high concentrations are due primarily to lode material transferred to the streams as tailings lost during milling operations.


Distribution of Weeds on Upland Crop Field in Northern Gyeonggi-do (경기북부 밭 잡초 분포)

  • Oh, Young-Ju;Lee, Wook-Jae;Hong, Sun-Hee;Lee, Yong-Ho;Na, Chae-Sun;Lee, In-Yong;Kim, Chang-Seok
    • Weed & Turfgrass Science
    • /
    • v.3 no.4
    • /
    • pp.276-283
    • /
    • 2014
  • This study was conducted to investigate the distribution pattern of weeds in upland crop fields in northern Gyeonggi-do. The weeds comprised 201 taxa in 42 families and 129 genera: 178 species, 1 subspecies, 21 varieties and 1 form. One hundred and thirty-one species (65.1%) were annual plants, and the remaining 70 species (34.9%) were perennials. Compositae was the dominant family (21%), followed by Gramineae (12%), Polygonaceae (7%) and Brassicaceae (5%). Among the weeds appearing in the fields of northern Gyeonggi-do, 62 species in 18 families were classified as invasive weeds. The most dominant weed species in the fields was Portulaca oleracea, followed by Echinochloa crus-galli and Amaranthus lividus. Detrended correspondence analysis of the occurrence pattern of weeds by crop revealed that the weed species in adlay fields differed from those in other crop fields. This information could be useful for establishing weed control methods in northern Gyeonggi-do.