• Title/Summary/Keyword: PAGES


Environmental effects from Natural Waters Contaminated with Acid Mine Drainage in the Abandoned Backun Mine Area (백운 폐광산의 방치된 폐석으로 인한 주변 수계의 환경적 영향)

  • 전서령;정재일;김대현
    • Economic and Environmental Geology
    • /
    • v.35 no.4
    • /
    • pp.325-337
    • /
    • 2002
  • We examined the contamination of stream water and stream sediments by heavy metal elements with respect to distance from the abandoned Backun Au-Ag-Cu mine. High contents of heavy metals (Pb, Zn, Cu, Cd, Mn, and Fe) and aluminum in the waters connected with the mine and its associated deposits (dumps, tailings) reduce water quality. In the mining area, Ca and SO$_4$ are the predominant cation and anion. The mine water is of Ca-SO$_4$ type and is enriched in heavy metals derived from the weathering of sulfide minerals. This mine drainage is weakly acidic or neutral (pH 6.5-7.1) because of the neutralizing effect of other alkali and alkaline earth elements. The effluent from the mine adit is also weakly acidic or neutral and contains elevated concentrations of most elements owing to reactions with ore and gangue minerals in the deposit. Ion concentrations in the Backun mine water are highest in the adit drainage and decrease steeply downstream. This buffering process can reasonably be considered a partial natural control of the pollution, since the ion concentrations become lower and the pH becomes neutral. To evaluate the mobility and bioavailability of metals, sequential extraction was applied to the stream sediments, partitioning metals into five operationally defined fractions: exchangeable, bound to carbonates, bound to Fe-Mn oxides, bound to organic matter, and residual. The residual fraction was the most abundant pool for Cu (21-92%), Zn (28-89%), and Pb (23-94%). Most sediments contain lower concentrations of Cd (2.7-52.8 mg/kg) than of the other elements, but Cd is dominated by the non-stable fractions (68-97%). Upstream sediments are contaminated with Pb, whereas downstream sediments are enriched in Zn, indicating the high mobility of Zn and Cd.

Research Direction for Functional Foods Safety (건강기능식품 안전관리 연구방향)

  • Jung, Ki-Hwa
    • Journal of Food Hygiene and Safety
    • /
    • v.25 no.4
    • /
    • pp.410-417
    • /
    • 2010
  • Various functional foods, marketed for their health and functional effects, are distributed in the market. These products, which come in the form of foods, tablets, and capsules, are likely to be mistaken for drugs. In addition, non-experts may sell them as foods or use them for therapy. Efforts to create health food regulations and to build a regulatory system for improving the current status of functional foods have been made, but these have not yet been communicated to consumers. As a result, problems of circulating functional foods for therapeutic purposes or adding illegal medicinal substances to such products have persisted, and the situation has been aggravated by internet media. The causes of this problem can be categorized into (1) the product itself and (2) its use, but in either case one underlying cause is a lack of communication with consumers. Potential problems that can be caused by functional foods include illegal substances, hazardous substances, allergic reactions, considerations when administered to patients, drug interactions, ingredients with purity or concentrations too low to be detected, products requiring metabolic activation, health risks from over- or under-dosing of vitamins and minerals, and products containing alkaloids (Journal of Health Science, 56, Supplement (2010)). The reason why side effects related to functional foods have been increasing is that under-qualified functional food companies exaggerate functionality for marketing purposes. KFDA has been informing consumers through its web pages to address the above-mentioned issues, but there is still room for improvement in promoting proper use of functional foods and avoiding drug interactions. Specifically, institutionalizing the collection of information on approved products and their side effects, settling reevaluation systems, and standardizing preclinical and clinical tests are becoming urgent. Also, to provide crucial information, unified database systems that seamlessly aggregate heterogeneous data from different domains, with user interfaces enabling effective one-stop search, are essential.

A Comparative Analysis of News Frame on U. S. Beef Imports and Candlelight Vigils (미국산 수입쇠고기와 촛불시위 보도에 나타난 뉴스 프레임 비교 연구)

  • Im, Yang-June
    • Korean journal of communication and information
    • /
    • v.46
    • /
    • pp.108-147
    • /
    • 2009
  • This study explores the news frames on U.S. beef imports and candlelight vigils covered by two national dailies, the ChosunIlbo and the Hankyoreh Shinmun, and one local daily, the KwangwonIlbo. The news frames, extracted based on the models of Iyengar (1987), Semetko & Valkenburg (2000), and other researchers, are attribution of responsibility, economic consequences, protest against the authorities, national health, and governmental public relations. The results show that the news reports consist of straight news (75.9%), feature stories (11.7%), and editorials (6.3%). More specifically, there is a comparatively high ratio of editorials (11.0%) for the ChosunIlbo, feature stories (20.9%) for the Hankyoreh, and straight news (89.7%) for the KwangwonIlbo. In terms of the news frames stressed by the three dailies, the ChosunIlbo focuses on national health (17.8%) and attribution of responsibility (10.6%). The Hankyoreh tends to stress protest against the authorities (31.3%) and attribution of responsibility (38.4%), while the KwangwonIlbo focuses on protest against the authorities (38.4%) and economic consequences (17.9%). Finally, regarding the main characteristics of the dailies, the governmental public relations frame is found only in the ChosunIlbo, at a comparatively high ratio, and the Hankyoreh also has a high ratio of feature stories on U.S. beef imports. Even though the KwangwonIlbo has a high ratio of the economic consequence frame, in its opinion pages, such as editorials and columns, the local newspaper has not spoken up about the potential economic crisis of the local Kwangwon province beef industry, caused mainly by U.S. beef imports.


The Recognition and Utilization of Middle School Technology·Home Economics Teacher's Guidebook (중학교 "기술.가정" 교과 교사용 지도서에 대한 가정 교사의 인식 및 활용)

  • Kang, Eun-Yeong;Shin, Hye-Won
    • Journal of Korean Home Economics Education Association
    • /
    • v.19 no.2
    • /
    • pp.1-12
    • /
    • 2007
  • This study analyzed the recognition and utilization of the teacher's guidebook for the middle school technology-home economics subject in the 7th Educational Curriculum. The data were collected via e-mail from teachers teaching home economics in middle schools; the e-mail addresses were acquired from middle school web pages registered on the Educational Board. The 355 responses were analyzed using the SPSS program. The results were as follows. First, teachers strongly recognized the necessity of the teacher's guidebook; however, because the actual guidebook was not sufficiently helpful, overall satisfaction was relatively low. Teachers who utilized the guidebook had a more positive recognition of it than teachers who did not, and teachers who majored in technology education considered the guidebook more helpful than teachers who majored in home economics education. Second, teachers referenced the guidebook mostly for field practice guidance. Third, teachers who did not utilize the guidebook used other reference materials, such as Internet web sites and audiovisual materials, mainly because their contents were ample and easy to access. Fourth, the following improvements were suggested: providing learning content that can be used practically in class, offering various examples of teaching-learning methods, specifying methods of planning and criteria for performance assessment, supplementing textbook contents adequately, and improving the outward layout of the guidebook.


Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.159-172
    • /
    • 2010
  • The recommender system is one of the possible solutions to assist customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, we do not know much about how and why CF works. Furthermore, the relative performance of CF algorithms is known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important. In this study, we propose an efficient approach to predict the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations. An ANN model is developed through an analysis of network topology, including network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes that are included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the social group even indirectly connected. We use these social network measures as input variables of the ANN model. As the output variable, we use the recommendation accuracy measured by the F1-measure. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, were used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, test, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of our experiments. The input variable measurement process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has 92.61% estimated accuracy and an RMSE of 0.0049. Thus, our prediction model helps decide whether CF is useful for a given application with certain data characteristics.
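To make the network-measure inputs concrete, here is a minimal sketch (in Python with networkx and scikit-learn, rather than the NetMiner/UCINET/Clementine tools named in the abstract) of how a few of the topological measures could be computed and fed into a small neural network that predicts recommendation accuracy. The graphs and F1 targets below are placeholders, not the department-store data.

```python
# Sketch only: derive a subset of the social-network measures named in the abstract and
# fit a small neural network mapping them to recommendation accuracy (F1).
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def network_features(G: nx.Graph) -> list[float]:
    """Topological measures used as ANN inputs (subset of those in the paper)."""
    n = G.number_of_nodes()
    density = nx.density(G)                              # links / maximum possible links
    clustering = nx.average_clustering(G)                # localized pockets of dense connectivity
    inclusiveness = (n - nx.number_of_isolates(G)) / n   # share of non-isolated nodes
    degrees = [d for _, d in G.degree()]
    # Freeman-style degree centralization: concentration of links on a few nodes
    centralization = sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2)) if n > 2 else 0.0
    return [density, clustering, inclusiveness, centralization]

# X: one feature row per customer-network sample; y: measured F1 of the CF recommender.
# Both are placeholders here; in the study, 396 such samples were gathered.
X = np.array([network_features(nx.gnp_random_graph(50, p)) for p in np.linspace(0.05, 0.5, 40)])
y = np.random.rand(40)  # placeholder target (recommendation F1-measure)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(X[:3]))  # predicted F1 for the first three networks
```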

A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.123-136
    • /
    • 2014
  • Recently, online shopping has developed further as the use of the Internet and a variety of smart mobile devices becomes more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, competition among online retailers is increasingly fierce, and as a result many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they enter a specific keyword on an Internet portal site. The price of each keyword is generally estimated from the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency, because many keywords may appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behavior; in other words, only search keywords that lead from the search results page to shopping-related pages are extracted from the entire set of search keywords. A comparison is then made between the rankings of the extracted keywords and the rankings of the entire set of search keywords. Two types of data are used in our experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. The experimental dataset was obtained from a web site ranking service and from the biggest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected and the search keywords used on those sites are extracted; search keywords can be extracted easily by simple parsing. The extracted keywords are ranked according to their frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site. As a result, a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all search keywords with those of the shopping-related keywords. To this end, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 keywords as the set of true shopping keywords. We measured the precision, recall, and F-score of the entire keyword set and of the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were higher than those of the entire keyword set. This study proposes a scheme that can obtain shopping-related keywords in a relatively simple manner: they can be extracted simply by examining transactions whose next visit is a shopping mall. The resulting shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can easily be applied to the construction of keyword sets for other special areas as well as for shopping.
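As a rough illustration of the extraction and evaluation steps described above (assumed log schema and hypothetical shopping-site list, not the authors' actual dataset), a keyword is kept only when the page visited right after the search results page belongs to a shopping mall, and the filtered list is scored with precision, recall, and F-score:

```python
# Sketch: filter search keywords by the destination of the next visit, then evaluate.
from collections import Counter

SHOPPING_DOMAINS = {"shopmall.example.com", "market.example.net"}  # hypothetical site list

def shopping_keywords(clickstream):
    """clickstream: list of (keyword, next_visited_domain) pairs parsed from browsing history."""
    counts = Counter()
    for keyword, next_domain in clickstream:
        if next_domain in SHOPPING_DOMAINS:          # the search led directly to a shopping page
            counts[keyword] += 1
    return counts

def precision_recall_f1(predicted, truth):
    predicted, truth = set(predicted), set(truth)
    tp = len(predicted & truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

logs = [("sneakers", "shopmall.example.com"), ("weather", "news.example.org"),
        ("laptop", "market.example.net")]
extracted = shopping_keywords(logs)
print(precision_recall_f1(extracted, {"sneakers", "laptop", "handbag"}))
```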

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor a massive volume of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, the algorithm must be very efficient in speed and memory use. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, so that identical subtrees from the old and new versions can be matched. During this process, X-tree Diff uses the Rule of Delaying Ambiguous Matchings: it performs exact matching only where a node in the old version has a one-to-one correspondence with a node in the new version, delaying all the others. This drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2 and obtains more matchings downwards from the roots in Step 3. In Step 4, nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as the average case. This result is even better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We experimented with X-tree Diff on real data, about 11,000 home pages from about 20 web sites, instead of synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is being used in a commercial hacking detection system, the WIDS (Web-Document Intrusion Detection System), which finds changes occurring in registered websites and reports suspicious changes to users.
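The following sketch is one reading of the tMD idea from the abstract, not the published implementation: each node stores a 128-bit hash of its label, text, and children's hashes, so identical subtrees in the old and new trees can be matched cheaply, while ambiguous candidates (several old subtrees sharing the same hash) are delayed rather than matched.

```python
# Sketch: subtree hashing (tMD-like field) and matching of identical subtrees.
import hashlib
from dataclasses import dataclass, field

@dataclass
class XNode:
    label: str
    text: str = ""
    children: list["XNode"] = field(default_factory=list)
    tMD: str = ""  # 128-bit hash of the subtree rooted here

def compute_tMD(node: XNode) -> str:
    child_digests = "".join(compute_tMD(c) for c in node.children)
    node.tMD = hashlib.md5((node.label + node.text + child_digests).encode()).hexdigest()
    return node.tMD

def match_identical_subtrees(old_root: XNode, new_root: XNode):
    """Pair up subtrees whose tMD values coincide unambiguously (first matching phase)."""
    def index(node, table):
        table.setdefault(node.tMD, []).append(node)
        for c in node.children:
            index(c, table)
    old_index: dict[str, list[XNode]] = {}
    index(old_root, old_index)
    matches = []
    def visit(node):
        candidates = old_index.get(node.tMD, [])
        if len(candidates) == 1:      # unambiguous: exact match; ambiguous ones are delayed
            matches.append((candidates[0], node))
        for c in node.children:
            visit(c)
    visit(new_root)
    return matches

old = XNode("html", children=[XNode("body", children=[XNode("p", "hello")])])
new = XNode("html", children=[XNode("body", children=[XNode("p", "hello"), XNode("p", "new")])])
compute_tMD(old); compute_tMD(new)
print(len(match_identical_subtrees(old, new)))  # number of unambiguously matched subtrees
```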

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.83-98
    • /
    • 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, retrieving the desired information takes a lot of time because the current internet presents too much information that is not required; consequently, this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots to interact with users: when users click an object on the interactive video, they can instantly see additional information related to the video. The following are the three basic steps for making an interactive video with an interactive video authoring tool: (1) create an augmented object; (2) set the object's area and the time it is displayed on the video; (3) set an interactive action linked to pages or hyperlinks. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time in step (2). If users use wireWAX, they can save considerable time setting the object's location and display time because wireWAX uses a vision-based annotation method, but they have to wait while objects are detected and tracked. It is therefore necessary to reduce the time spent in step (2) by combining the benefits of manual and vision-based annotation effectively. This paper proposes a novel annotation method that allows an annotator to annotate easily based on face areas. The proposed method consists of two steps: a pre-processing step and an annotation step. Pre-processing is necessary because the system detects shots for users who want to find the contents of a video easily. The pre-processing step is as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) build shot clusters using shot similarities and align them as shot sequences; and 3) detect and track faces in all shots of a shot sequence and save them, together with each shot, into the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then selects a keyframe of a shot in that sequence; 2) the annotator annotates objects at positions relative to the actor's face on the selected keyframe, and the same objects are then annotated automatically until the end of the shot sequence that has a detected face area; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after object annotation, such as wrongly aligned shots, wrongly detected faces, and inaccurate locations. Furthermore, users can use an interpolation method to interpolate the positions of objects deleted by the feedback. After feedback, the user can save the annotated object data into the interactive object metadata. Finally, this paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method, which uses the presented models. The experiment analyzes object annotation time and reports a user evaluation. First, the average object annotation time shows that the proposed tool is two times faster than existing authoring tools for object annotation; occasionally, the annotation time of the proposed tool was longer than that of existing tools because wrong shots were detected in pre-processing. The usefulness and convenience of the system were measured through a user evaluation aimed at users with experience in interactive video authoring systems. Nineteen recruited experts evaluated 11 questions taken from the CSUQ (Computer System Usability Questionnaire), which was designed by IBM for evaluating systems. The user evaluation showed that the proposed tool is about 10% more useful for authoring interactive video than the other interactive video authoring systems.
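For the color-histogram-based shot boundary detection used in pre-processing step 1, a minimal OpenCV sketch might look like the following; the threshold and histogram bins are illustrative assumptions, not the paper's settings.

```python
# Sketch: flag frames whose HSV color histogram differs sharply from the previous frame.
import cv2

def detect_shot_boundaries(video_path: str, threshold: float = 0.5):
    """Return frame indices where the color histogram changes sharply (start of a new shot)."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Correlation near 1 means similar frames; a drop signals a shot boundary.
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

# Example: print(detect_shot_boundaries("episode01.mp4"))  # hypothetical video file
```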

Development of Convertor supporting Multi-languages for Mobile Network (무선전용 다중 언어의 번역을 지원하는 변환기의 구현)

  • Choe, Ji-Won;Kim, Gi-Cheon
    • The KIPS Transactions:PartC
    • /
    • v.9C no.2
    • /
    • pp.293-296
    • /
    • 2002
  • UP Link is one of the commercial products that converts HTML to HDML in order to show internet WWW contents in mobile environments. When the UP browser accesses HTML pages, the agent in UP Link controls the converter to change HTML into HDML. I-Mode, which was developed by NTT DoCoMo of Japan, has accumulated many contents through long and stable commercial service. Mobile Explorer, developed by the Stinger project, also has many additional functions. In this paper, we designed and implemented a WAP converter that can accept C-HTML contents and mHTML contents. The C-HTML format used by I-Mode is a subset of HTML, and the mHTML format used by ME is similar to C-HTML, so content providers can develop C-HTML contents more easily than WAP contents and the other formats. Since C-HTML, mHTML, and WML are all used in mobile environments, the limited transmission capacity of one page is also similar. We first build a tag match table between the source and target markup languages and then apply the conversion algorithm to it. If no matched element is found, we arrange tags that are supported only by WML so that the content is displayed in the best shape. As a result, we can convert over 90% of contents.
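As a toy illustration of the tag match table idea (the mappings and fallback below are hypothetical, not the paper's actual table), elements with a WML counterpart are renamed, while unmatched elements fall back to a WML tag that still displays the content:

```python
# Sketch: tag-table-driven markup conversion with a fallback for unmatched elements.
C_HTML_TO_WML = {          # source tag -> target WML tag (hypothetical mappings)
    "html": "wml",
    "body": "card",
    "br":   "br",
    "a":    "a",
}
FALLBACK_TAG = "p"         # best-effort display for tags WML does not support

def convert_tag(tag: str, attrs: str = "") -> str:
    target = C_HTML_TO_WML.get(tag, FALLBACK_TAG)
    return f"<{target}{(' ' + attrs) if attrs else ''}>"

print(convert_tag("body"))       # <card>
print(convert_tag("marquee"))    # <p>  (no WML match; the fallback keeps the text visible)
```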

The Effects of LBS Information Filtering on Users' Perceived Uncertainty and Information Search Behavior (위치기반 서비스를 통한 정보 필터링이 사용자의 불확실성과 정보탐색 행동에 미치는 영향)

  • Zhai, Xiaolin;Im, Il
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.493-513
    • /
    • 2014
  • With the development of related technologies, Location-Based Services (LBS) are growing fast and being used in many ways. Past LBS studies have focused on the adoption of LBS, because LBS users have privacy concerns about revealing their location information. Meanwhile, the number of LBS users and the revenues from LBS are growing rapidly because users can obtain benefits by revealing their location information. Little research has been done on how LBS affects consumers' information search behavior in product purchases. The purpose of this paper is to examine the effect of LBS information filtering on buyers' uncertainty and their information search behavior. When consumers purchase a product, they try to reduce uncertainty by searching for information. Generally, there are two types of uncertainty: knowledge uncertainty and choice uncertainty. Knowledge uncertainty refers to the lack of information on what alternatives are available in the market and/or their important attributes; consumers with knowledge uncertainty therefore have difficulty identifying which alternatives in the market can fulfill their needs. Choice uncertainty refers to the lack of information about consumers' own preferences and about which alternative fits their needs; consumers with choice uncertainty therefore have difficulty selecting the best product among the available alternatives. According to the economics of information theory, consumers narrow the scope of information search when knowledge uncertainty is high, because their information search cost is then high. If people do not know the available alternatives and their attributes, acquiring information about them takes time and cognitive effort, so they reduce search breadth. For people with high knowledge uncertainty, information about products and their attributes is new and of high value, so they search more in depth because they have an incentive to acquire more information. When people have high choice uncertainty, they tend to search for information about more alternatives, because increased search breadth improves their chances of finding a better alternative. On the other hand, since human cognitive capacity is limited, increased search breadth (more alternatives) reduces the depth of information search for each alternative: consumers with high choice uncertainty spend less time and effort on each alternative because considering more alternatives increases their utility. LBS provides users with the capability to screen alternatives based on distance, which reduces information search costs; therefore, LBS is expected to help users consider more alternatives even when they have high knowledge uncertainty. LBS also provides distance information, which helps users choose alternatives appropriate for them, so users are expected to perceive lower choice uncertainty when they use LBS. To test the hypotheses, we selected 80 students and assigned them to one of two experiment groups. One group was asked to use LBS to search for surrounding restaurants, and the other group was asked to search for nearby restaurants without LBS. The experimental tasks and measurement items were validated in a pilot experiment; the final measurement items are shown in Appendix A. Each subject was asked to read one of the two scenarios, with or without LBS, and to use a smartphone application to pick a restaurant. All behaviors on the smartphone were recorded using a recording application. Search breadth was measured by the number of restaurants clicked by each subject. Search depth was measured by two metrics: the average number of sub-level pages each subject visited and the average time spent on each restaurant. The hypotheses were tested using SPSS and PLS. The results show that knowledge uncertainty reduces search breadth (H1a). However, there was no significant correlation between knowledge uncertainty and search depth (H1b). Choice uncertainty significantly reduces search depth (H2b), but no significant relationship was found between choice uncertainty and search breadth (H2a). LBS information filtering significantly reduces buyers' choice uncertainty (H4) and weakens the negative relationship between knowledge uncertainty and search breadth (H3). This research provides some important implications for service providers, who should use different strategies based on their service properties. Service providers that are not well known to consumers (high knowledge uncertainty) should encourage their customers to use LBS, because LBS increases buyers' consideration sets when knowledge uncertainty is high; lesser-known services thus have a chance to be included in consumers' consideration sets with LBS. On the other hand, LBS information filtering decreases choice uncertainty, and nearby service providers are more likely to be selected than they would be without LBS. Hence, service providers should analyze the strengths of geographically proximate competitors and try to reduce the gap so that they have a chance to be included in the consideration set.
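For concreteness, the two dependent measures could be computed from the recorded click logs roughly as follows (assumed log format, not the study's actual instrument): search breadth is the number of distinct restaurants clicked, and search depth is the average number of sub-pages and the average time per restaurant.

```python
# Sketch: compute search breadth and the two search-depth metrics from a click log.
from collections import defaultdict

# Each record: (restaurant_id, sub_page, seconds_spent) parsed from the screen-recording log.
log = [("r1", "menu", 30), ("r1", "reviews", 45), ("r2", "menu", 20), ("r3", "menu", 15)]

pages = defaultdict(int)
seconds = defaultdict(int)
for restaurant, _sub_page, secs in log:
    pages[restaurant] += 1
    seconds[restaurant] += secs

search_breadth = len(pages)                                        # restaurants clicked
avg_pages_per_restaurant = sum(pages.values()) / search_breadth    # depth metric 1
avg_time_per_restaurant = sum(seconds.values()) / search_breadth   # depth metric 2

print(search_breadth, avg_pages_per_restaurant, avg_time_per_restaurant)  # 3, ~1.33, ~36.67
```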