• Title/Summary/Keyword: Web-based Multimedia (웹 기반 멀티미디어)

Development of Virtual Ambient Weather Measurement System for the Smart Greenhouse (스마트온실을 위한 가상 외부기상측정시스템 개발)

  • Han, Sae-Ron; Lee, Jae-Su; Hong, Young-Ki; Kim, Gook-Hwan; Kim, Sung-Ki; Kim, Sang-Cheol
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.5 no.5 / pp.471-479 / 2015
  • This study was conducted to make use of Korea Meteorological Administration (KMA) Automatic Weather Station (AWS) data in operating a smart greenhouse. A web-based KMA AWS data receiving system was developed using Java and APM_SETUP 8 on the Windows 7 platform. The system was composed of a server and a client. The server program was developed as a Java application that receives weather data from the KMA every 30 minutes and sends it to the smart greenhouse. The client program was developed as a Java applet that receives the KMA AWS data from the server every 30 minutes, so that the smart greenhouse can treat the KMA AWS data as its ambient weather information. The system was evaluated by comparison with local weather data measured by Ezfarm Inc. For ambient air temperature there was some difference between the virtual and measured data, but the mean absolute deviation was less than 2.24℃. Therefore, the virtual weather data produced by the developed system was considered usable as ambient weather information for the smart greenhouse.
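
The abstract describes the data path (fetch from KMA every 30 minutes, relay to the greenhouse) but not the implementation, so the sketch below shows one plausible shape for the server-side polling loop in Java. The endpoint URL, class name and plain-text payload are assumptions for illustration only; the paper's actual KMA AWS interface, APM_SETUP configuration and applet-based client are not reproduced here.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of a server-side poller: fetch KMA AWS data every 30 minutes
// and cache the latest snapshot for greenhouse clients to pull.
public class AwsWeatherPoller {

    // Hypothetical endpoint; the real KMA AWS URL and station id differ.
    private static final String AWS_URL =
            "https://apihub.example/kma/aws?station=XXX";

    private final HttpClient http = HttpClient.newHttpClient();
    private final AtomicReference<String> latest = new AtomicReference<>("");

    public void start() {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // Poll once at startup, then every 30 minutes, matching the paper's interval.
        scheduler.scheduleAtFixedRate(this::fetchOnce, 0, 30, TimeUnit.MINUTES);
    }

    private void fetchOnce() {
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(AWS_URL)).GET().build();
            HttpResponse<String> response =
                    http.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                latest.set(response.body());   // e.g. temperature, wind, rainfall record
            }
        } catch (Exception e) {
            // On failure, keep the previous snapshot so the greenhouse controller
            // is never left without ambient data.
            System.err.println("AWS fetch failed: " + e.getMessage());
        }
    }

    /** Called by the client-facing side to serve the cached weather record. */
    public String latestRecord() {
        return latest.get();
    }

    public static void main(String[] args) {
        new AwsWeatherPoller().start();
    }
}
```

The cached-snapshot design mirrors the relay role described in the abstract: the greenhouse client only pulls the most recent record from the server rather than querying the KMA directly.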

Malaysian Muslim's Awareness, Attitude and Purchasing Behavior of Ginseng and Red Ginseng Products (말레이시아 무슬림의 인삼·홍삼제품 인식과 태도 및 구매행동)

  • Park, Soojin
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.12 / pp.37-50 / 2017
  • This study was performed to understand Malaysian Muslims' awareness, attitudes and purchasing behavior regarding ginseng (G) and red ginseng (RG) products. An online survey of 200 Muslims residing in Malaysia was conducted, covering awareness, eating experience, preferences, recognition of the efficacy of G and RG products, purchase behaviors and satisfaction. The results show that 50% and 40% of the participants were aware of G and RG products, respectively. In particular, awareness among female or married consumers was relatively high. Health promotion was the major reason for consuming G and RG products in this group of participants, yet the most frequently consumed G products were ginseng coffee, candies and chocolates, especially among consumers in their 40s and 50s or married consumers. Participants were also aware of the efficacy claims of these products with regard to improvement of fatigue, immunity and hypertension. While Malaysian Muslim consumers were satisfied with the health claims, convenience of purchase, and taste and aroma, they were dissatisfied with packaging specifications and price. Participants intend to recommend G and RG products to relatives (82.6%) and are willing to buy them in the future (83.5%). In conclusion, there is clear interest in and demand for Halal-certified G and RG products among Malaysian Muslims, and strategic product development and marketing are needed to enhance awareness of G and RG products in the future.

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho; Oh, Kyeong-Jin; Sean, Vi-Sal; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • Previously, most users simply wanted to watch video content without any specific requirement or purpose. Today, however, viewers increasingly want to know and discover more about the things that appear in a video while watching it. The demand for finding multimedia or browsing information about objects of interest is therefore spreading with the growing use of video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of objects in video content is unavoidable, and many researchers have actively studied methods for annotating the objects that appear in video. In keyword-based annotation, related information about an object appearing in the video is added directly, and annotation data including all of that information must be managed individually; users have to enter all the related information themselves. Consequently, when a user browses for information related to an object, only the limited resources that exist in the annotation data can be found, and placing annotations on objects imposes a heavy workload on users. To reduce this workload and minimize the effort involved in annotation, existing object-based approaches attempt automatic annotation using computer vision techniques such as object detection, recognition and tracking. With such techniques, however, the wide variety of objects appearing in video content must all be detected and recognized, and fully automated annotation still faces considerable difficulties. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in video content and link them to semantic data using Linked Data. Annotation data managed by the annotation server is represented with an ontology so that the information can easily be shared and extended. Because the annotation data does not include all the relevant information about an object, objects appearing in the video are simply connected to existing resources in Linked Data in order to obtain that information. In other words, annotation data containing only a URI and metadata such as position, time and size is stored on the annotation server; when a user needs further information about the object, it is retrieved from Linked Data through the corresponding URI. The second module enables viewers to browse information of interest about an object using the annotation data collaboratively generated by many users while watching the video. Through simple user interaction, a query is automatically generated, the related information is retrieved from Linked Data, and the additional information about the object is presented to the user. In the future Semantic Web environment, the proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV or smartphone.
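
The paper's annotation server and ontology are not detailed in the abstract, so the following is a minimal Java sketch of the described idea: an annotation stores only a Linked Data URI plus position, time and size, and the browsing module automatically generates a query to fetch the remaining facts from Linked Data. The DBpedia endpoint, the record fields and the chosen properties (rdfs:label, dbo:abstract) are illustrative assumptions, not the authors' actual schema or API.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sketch: the annotation keeps only the object's Linked Data URI plus
// spatio-temporal metadata; descriptive facts are fetched on demand.
public class VideoAnnotationDemo {

    /** Lightweight annotation: URI + position, time and size (illustrative fields). */
    record Annotation(String objectUri, double startSec, double endSec,
                      int x, int y, int width, int height) { }

    // Public DBpedia SPARQL endpoint, used here purely for illustration.
    private static final String SPARQL_ENDPOINT = "https://dbpedia.org/sparql";

    /** Automatically builds a query asking for the label and abstract of the annotated object. */
    static String buildQuery(String objectUri) {
        return """
               PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
               PREFIX dbo:  <http://dbpedia.org/ontology/>
               SELECT ?label ?abstract WHERE {
                 <%s> rdfs:label ?label ;
                      dbo:abstract ?abstract .
                 FILTER (lang(?label) = "en" && lang(?abstract) = "en")
               } LIMIT 1
               """.formatted(objectUri);
    }

    /** Sends the generated query to the Linked Data endpoint and returns the raw JSON results. */
    static String browse(Annotation a) throws Exception {
        String url = SPARQL_ENDPOINT + "?query="
                + URLEncoder.encode(buildQuery(a.objectUri()), StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/sparql-results+json")
                .GET().build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }

    public static void main(String[] args) throws Exception {
        // A viewer clicks an object shown from 12.5 s to 18.0 s; the stored
        // annotation links it to a Linked Data resource chosen collaboratively.
        Annotation a = new Annotation("http://dbpedia.org/resource/Eiffel_Tower",
                                      12.5, 18.0, 320, 140, 80, 200);
        System.out.println(browse(a));
    }
}
```

Keeping only the URI and spatio-temporal metadata in the annotation record reflects the abstract's design choice: the annotation server stays lightweight, while all descriptive knowledge remains in Linked Data and is retrieved through the URI at browsing time.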