• Title/Summary/Keyword: 속성데이터 (attribute data)

Search Results: 1,598

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decisions. Customers believe that reviews written by others who have already experienced a product offer more reliable information than that provided by sellers. However, because there are so many products and reviews, the advantages of e-commerce can be overwhelmed by rising search costs, and reading all of the reviews to find out the pros and cons of a certain product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and online stores have devised ways to surface useful customer reviews. Different methods have been developed to classify and recommend useful reviews, primarily using feedback provided by customers about the helpfulness of reviews. Most shopping websites provide customer reviews along with the average preference for a product, the number of customers who participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and it places the most helpful favorable review and the most helpful critical review at the top of the list of product reviews. Some companies also predict the usefulness of a review based on attributes such as length, author, and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: we extract all words from the reviews and build a matrix recording the number of occurrences of each term in each review. Because there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete sparse terms, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting terms that are useless for review classification. We propose a neutrality index for selecting words to be deleted. Many words appear in both classes, useful and not useful, and these words have little or even a negative effect on classification performance. We define such words as neutral terms and delete those that appear with similar frequency in both classes. After deleting sparse words, we select additional words to delete according to their neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, and a 60% ratio of useful votes to total votes was the threshold for classifying reviews as useful or not useful. We randomly selected 1,500 useful reviews and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared classification performance in terms of precision, recall, and F-measure. Although performance varies across product categories and data sets, deleting terms by both sparsity and neutrality showed the best F-measure for both classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Thus, term deletion methods and classification algorithms should be selected carefully based on the data set.
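The following is a minimal sketch of the pipeline described in the abstract above: build a term-document matrix, drop sparse terms, then drop "neutral" terms that appear with similar relative frequency in useful and not-useful reviews before training an SVM. The neutrality formula, threshold, and toy data are illustrative assumptions, since the abstract does not give the exact index.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

reviews = ["great battery and nice screen", "screen broke after a week",
           "the screen is great", "screen is a waste of money"]
labels = np.array([1, 0, 1, 0])  # 1 = useful, 0 = not useful

# Term-document matrix; on a real corpus min_df would drop sparse terms.
vec = CountVectorizer(min_df=1)
X = vec.fit_transform(reviews).toarray()

# Neutrality: high when a term's relative frequency is similar in both classes.
freq_pos = X[labels == 1].sum(axis=0) / X[labels == 1].sum()
freq_neg = X[labels == 0].sum(axis=0) / X[labels == 0].sum()
neutrality = 1.0 - np.abs(freq_pos - freq_neg) / (freq_pos + freq_neg + 1e-9)

keep = neutrality < 0.8                      # delete the most neutral terms
clf = LinearSVC().fit(X[:, keep], labels)    # classify with the reduced matrix
deleted = [t for t, k in zip(vec.get_feature_names_out(), keep) if not k]
print(deleted)  # e.g., terms like "screen" that occur similarly in both classes
```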

A Study on Property Change of Auto Body Color Design (자동차 바디컬러 디자인의 속성 변화에 관한 연구)

  • Cho, Kyung-Sil;Lee, Myung-Ki
    • Archives of design research
    • /
    • v.19 no.1 s.63
    • /
    • pp.253-262
    • /
    • 2006
  • Research on color developed through the 20th century, changing from a tool for pursuing curiosity or beauty into a tool for creating effects, and it has also raised consumer desire. People have been interested in color as a dynamic means of expression since the color TV appeared, and the meaning of color has recently diversified as color has become important to the emotional aspects of design. Auto colors have developed along with such changes of the times: black led the color trend during the first half of the 20th century, from 1900 to 1950, a transitional period of economic growth and world war. Since then, automobile production has increased apace with rapid economic growth throughout the world, and the automobile became the most expensive item among the goods that people use. Accordingly, increasing production induced facility investment in mass production, and a leveling of technology was achieved. Because auto manufacturing processes are very complicated, auto makers gradually recognized that software changes, such as to colors or materials, were an easier way to improve brand identity than hardware changes, such as the mechanical or design components of the body. Color planning and development systems were segmented in various aspects. Within this segmentation, pigment technology and painting methods are important elements that influence body colors and have a higher technical correlation with color than in other industries. In other words, advanced mixtures of pigments are creating new body colors that have not existed previously; this diversifies painting structures and methods and so maximizes the transparency and depth of body colors. Thus, body colors that are closely related to technical factors will increase in the future, and research on color preferences by region has been systematized to cope with global competition arising from the expansion and change of auto export regions.

Features of Korean Webtoons through the Statistical Analysis (웹툰 통계 분석을 통한 한국 웹툰의 특징)

  • Yoon, Ki-Heon;Jung, Kiu-Ha;Choi, In-Soo;Choi, Hae-Sol
    • Cartoon and Animation Studies
    • /
    • s.38
    • /
    • pp.177-194
    • /
    • 2015
  • This study, conducted over two months by a research team at Pusan National University at the request of the Korea Manhwa Contents Agency in Dec. 2013, presents a statistical analysis of the 'Korean Webtoon DB and its Flow Report', which resulted from a complete survey of Korean webtoons published with payment in official media from the early 2000s to 2013. Webtoon, meaning cartoons published on the web, has become a typical form of Korean cartoons and has developed into a main industry since the 2000s, when traditional published cartoons declined and social environments changed; today, it represents cultural content in Korea. This study collected the webtoons officially published in media with payment among Korean webtoons published from the early 2000s to Jan. Based on the collected data, it analyzed the general characteristics of webtoons, including cartoonists, the number of cartoons, the distribution across media, genre, and publication cycle. According to the data analysis and statistics, a great deal of Korean webtoons are still published on major portal websites, but their platforms are diversifying and publication cycles tend to be shortening. In terms of genre, traditionally popular genres such as drama, comic, fantasy, and action remain popular, while history, sports, and food are on the rise along with social trends. Regarding webtoon applications, events such as relay webtoons and brand webtoons, and a new type of webtoon featuring PPL commercialism, have appeared. Such phenomena can realize the common profits of cartoonists, media, and commissioning bodies, and are varied attempts to test the possibilities of webtoons. In addition, what deserves attention in the expansion of webtoons is the increase in webtoons for adults. The study subjects are webtoons published with payment, excluding free webtoons. However, this study failed to collect webtoons published on online websites that have already closed, along with lost information on cartoonists and their lost webtoons, and a complete survey of all webtoons, including free ones, is still necessary. Despite these limitations, this study is meaningful in that it categorized and analyzed Korean webtoons according to official media, webtoons, cartoonists, and genres, and provided fundamental material for understanding the current conditions of webtoons. It is expected that this study will contribute to activating more research on webtoons and producing supplementary data for the Korean cartoon industry and academia.

A Study on Classifications and Trends with Convergence Form Characteristics of Architecture in Tall Buildings (초고층빌딩의 융합적 건축형태 분류와 경향에 관한 연구)

  • Park, Sang Jun
    • Korea Science and Art Forum
    • /
    • v.37 no.5
    • /
    • pp.119-133
    • /
    • 2019
  • This study starts from the observation that, as skyscrapers become increasingly taller, more constructors have decided that height alone is not a sufficient differentiator. As a result, atypical architecture is emerging as a new competitive factor; it can also symbolize the economic competitiveness of a country, city, or business through its form. Before the introduction of digital media, there was a discrepancy between the structure and form of a building, and correcting this discrepancy required a separate structural medium. Since the late 1980s, however, digitally based atypical form development began to be used experimentally, and until the 2000s it was applied mostly to super-tall skyscrapers for offices or to industrial chimneys and communication towers. Since the 2000s, many global brand hotels and commercial and residential buildings have been built as super-tall skyscrapers, which shows a recent trend in architecture moving beyond traditional limits. Complex atypical structures are formed, and the formative characteristics of diagonal lines and curved surfaces, which characterize atypical architecture, are created digitally. Therefore, it is necessary to identify a new relationship between structure and form. Based on data from the Council on Tall Buildings and Urban Habitat (CTBUH), buildings of 100 stories and taller were classified into typical, diagonal, curved, and segment types in order to define the formative shapes of super-tall skyscrapers and provide a basis for the design process related to the initial formation of the concept. The purpose of this study was to identify the correlation between different forms for building atypical architectural shapes that are complex and diverse. The study results are as follows: first, complex function follows convergence form characteristics; second, folds appear inside the architecture with repetition; third, curved styles include pure twist, helix twist, and spiral twist. The findings of this study can be used as basic data for classifying and predicting trends in future super-tall skyscrapers.

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, and when we deal with such data we have difficulty considering both the characteristics of multidimensional data and the characteristics of time series data. When dealing with multidimensional data, correlation between variables should be considered; existing methods such as probability-based, linear, and distance-based approaches degrade due to the limitation known as the curse of dimensionality. In addition, time series data is preprocessed by applying the sliding window technique and time series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. The regression analysis method learns a regression formula based on parametric statistics, compares the predicted value with the actual value, and thereby detects abnormality. Anomaly detection using regression analysis has the disadvantage that performance drops when the model is not solid or when the data contains noise or outliers, and it carries the restriction that training data free of noise and outliers should be used. An autoencoder using artificial neural networks is trained to output data as similar as possible to its input. It has many advantages compared to existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or a linearity assumption, and it can learn without labeled training data. However, it is limited in identifying local outliers in multidimensional data for anomaly detection, and the dimensionality of the data is greatly increased due to the characteristics of time series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the limitation in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and image; the different modalities share the bottleneck of the autoencoder and learn their correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. In general, conditional inputs are mainly category variables, but in this study time was used as the condition to learn periodicity. The CMAE model proposed in this paper was verified by comparing it with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance of the autoencoders for 41 variables was confirmed for the proposed model and the comparison models. Restoration performance differs by variable; restoration works well for the Memory, Disk, and Network modalities in all three autoencoder models because their loss values are small. The Process modality did not show a significant difference across the three models, and the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed model and the comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance followed the order CMAE, MAE, UAE. In particular, recall was 0.9828 for CMAE, confirming that it detects almost all of the anomalies. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has additional advantages beyond the performance improvement: techniques such as time series decomposition and sliding windows require unnecessary procedures to manage, and the resulting dimensional increase can slow down inference, whereas the proposed model has characteristics that make it easy to apply to practical tasks in terms of inference speed and model management.
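Below is a minimal sketch of a conditional multimodal autoencoder in the spirit of the CMAE described above, assuming PyTorch. The modality names, the per-modality variable split summing to 41, the layer sizes, and the sin/cos time-of-day condition are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, cond_dim=2, bottleneck=8):
        super().__init__()
        # One small encoder per modality (cpu, memory, disk, network, process).
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, bottleneck))
            for name, dim in modal_dims.items()
        })
        shared_in = bottleneck * len(modal_dims) + cond_dim
        # Shared bottleneck forces the modalities to learn a joint representation.
        self.shared = nn.Sequential(nn.Linear(shared_in, bottleneck), nn.ReLU())
        # One decoder per modality, also conditioned on the time encoding.
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(bottleneck + cond_dim, 16), nn.ReLU(), nn.Linear(16, dim))
            for name, dim in modal_dims.items()
        })

    def forward(self, inputs, cond):
        # inputs: dict of modality tensors; cond: e.g., sin/cos encoding of time of day.
        codes = [self.encoders[name](x) for name, x in inputs.items()]
        z = self.shared(torch.cat(codes + [cond], dim=1))
        return {name: dec(torch.cat([z, cond], dim=1)) for name, dec in self.decoders.items()}

# Usage: reconstruction error summed over modalities serves as the anomaly score.
modal_dims = {"cpu": 8, "memory": 9, "disk": 8, "network": 8, "process": 8}  # 41 variables
model = CMAE(modal_dims)
batch = {name: torch.randn(32, dim) for name, dim in modal_dims.items()}
cond = torch.randn(32, 2)  # placeholder for sin/cos time-of-day features
recon = model(batch, cond)
loss = sum(nn.functional.mse_loss(recon[k], batch[k]) for k in modal_dims)
```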

Stock-Index Invest Model Using News Big Data Opinion Mining (뉴스와 주가 : 빅데이터 감성분석을 통한 지능형 투자의사결정모형)

  • Kim, Yoo-Sin;Kim, Nam-Gyu;Jeong, Seung-Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.143-156
    • /
    • 2012
  • People readily believe that news and the stock index are closely related. They think that securing news before anyone else can help them forecast stock prices and enjoy great profit, or capture an investment opportunity. However, it is no easy feat to determine to what extent the two are related, to come up with investment decisions based on news, or to find out whether such investment information is valid. If the significance of news and its impact on the stock market are analyzed, it becomes possible to extract information that can assist investment decisions. The reality, however, is that the world is inundated with a massive wave of news in real time, and news is not patterned text. This study suggests a stock-index investment model based on 'news big data' opinion mining that systematically collects, categorizes, and analyzes news and creates investment information. To verify the validity of the model, the relationship between the results of news opinion mining and the stock index was empirically analyzed using statistics. The steps of the mining process that converts news into information for investment decision making are as follows. First, news supplied in real time by a news provider is indexed: not only the contents of the news but also information such as the media outlet, time, and news type are collected, classified, and reworked into variables from which investment decisions can be inferred. The next step is to derive words whose polarity can be judged by separating the text of the news into morphemes, and to tag the positive/negative polarity of each word by comparing it with a sentiment dictionary. Third, the positive/negative polarity of each news item is judged using the indexed classification information and a scoring rule, and final investment decision information is derived according to daily scoring criteria. For this study, the KOSPI index and its fluctuation range were collected for the 63 days the stock market was open during the three months from July to September 2011 on the Korea Exchange, and news data was collected by parsing 766 articles from economic news media company M carried in the stock information > news > main news section of the portal site Naver.com. Over the three months the stock index rose on 33 days and fell on 30 days, and the news contents comprised 197 articles published before the opening of the stock market, 385 articles during the session, and 184 articles after the close. Mining the collected news contents and comparing the results with stock prices showed that the positive/negative opinion of news contents had a significant relationship with the stock price, and changes in the stock index could be better explained when news opinion was derived as a positive/negative ratio rather than as a simplified positive-or-negative judgment. To check whether news had an effect on, or at least preceded, fluctuations of the stock price, changes in the stock price were compared only with news published before the opening of the stock market, and this relationship was verified to be statistically significant as well. In addition, because news contains various types of information, such as social, economic, and overseas news, corporate earnings, industry conditions, market outlook, and current market conditions, it was expected that the influence on the stock market or the significance of the relationship would differ according to the type of news. Each type of news was therefore compared with fluctuations of the stock price, and the results showed that market conditions, outlook, and overseas news were the most useful for explaining the fluctuations. On the contrary, news about individual companies was not statistically significant, although its opinion mining values showed a tendency opposite to the stock price; the reason is thought to be the appearance of promotional and planned news intended to keep stock prices from falling. Finally, multiple regression analysis and logistic regression analysis were carried out to derive an investment decision function based on the relationship between the positive/negative opinion of news and the stock price. The regression equation using variables for market conditions, outlook, and overseas news before the opening of the stock market was statistically significant, and the classification accuracy of the logistic regression was 70.0% for rises of the stock price, 78.8% for falls, and 74.6% on average. This study first analyzed the relationship between news and stock prices by analyzing and quantifying the sentiment of unstructured news contents using opinion mining, one of the big data analysis techniques, and furthermore proposed and verified a smart investment decision-making model that can systematically carry out opinion mining and derive and support investment information. This shows that news can be used as a variable to predict the stock index for investment, and the model is expected to serve as a real investment support system if it is implemented and verified in the future.
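The scoring and decision steps described above can be illustrated with a small sketch: a dictionary-based polarity tagger aggregates pre-market articles into a daily positive ratio, which feeds a logistic regression that predicts whether the index rises. The sentiment dictionary, whitespace tokenizer (standing in for morpheme analysis), and toy data are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

SENTIMENT_DICT = {"surge": 1, "record": 1, "growth": 1, "loss": -1, "slump": -1, "risk": -1}

def score_article(text):
    # Tag each token against the sentiment dictionary and count polarities.
    tokens = text.lower().split()  # stand-in for morpheme analysis
    pos = sum(1 for t in tokens if SENTIMENT_DICT.get(t) == 1)
    neg = sum(1 for t in tokens if SENTIMENT_DICT.get(t) == -1)
    return pos, neg

def daily_feature(articles):
    # Aggregate pre-market articles into a positive ratio for the day.
    pos = neg = 0
    for a in articles:
        p, n = score_article(a)
        pos, neg = pos + p, neg + n
    return pos / (pos + neg) if (pos + neg) else 0.5

# X: one positive-ratio feature per trading day; y: 1 if the index rose that day.
days = [["growth surge in exports", "record profit"], ["loss and slump deepen", "risk rises"]]
X = np.array([[daily_feature(d)] for d in days])
y = np.array([1, 0])
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.7]]))  # probability of a rise given a 70% positive ratio
```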

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.139-156
    • /
    • 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data without distinct regional features. To address the difficulty of classifying emotion images, many researchers propose CNN-based architectures suited to emotion images each year. Studies on the relationship between color and human emotion have also been conducted, deriving results showing that different emotions are induced by different colors. Among studies using deep learning, there have been studies that apply color information to image sentiment classification: using the image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion. Both methods improve accuracy by modifying the result value based on statistics over the colors of the picture. At test time, the two-color combination most prevalent in each test image is found, and the result values are corrected according to the distribution of that combination over the training data; the methods weight the result value obtained after the model classifies an image's emotion using expressions based on the log function and the exponential function. Emotion6, classified into six emotions, and Artphoto, classified into eight categories, were used as the image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to the CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying result values based on color when creating a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using Scikit-learn's clustering, the seven colors primarily distributed in an image are identified; the RGB coordinates of these colors are then compared with the RGB coordinates of the 16 colors above, that is, each is converted to the closest color. If three or more colors are combined, too many color combinations occur and the distribution becomes scattered, so the combination has less influence on the result value. Therefore, to solve this problem, two-color combinations were found and used to weight the model. Before training, the most prevalent color combinations were found for all training data images, and the distribution of color combinations for each class was stored in a Python dictionary to be used during testing. During testing, the two-color combination most prevalent in each test image is found; we then check how that color combination is distributed in the training data and correct the result accordingly. We devised several equations to weight the result value from the model based on the extracted colors as described above. The data set was randomly divided 80:20, and the model was verified using 20% of the data as a test set. The remaining 80% of the data was split into five folds to perform 5-fold cross-validation, and the model was trained five times using different validation sets. Finally, performance was checked using the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, the experiment was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the information extracted using color properties was used together with the CNN than when only the CNN architecture was used.
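The color-extraction stage described above can be sketched as follows: cluster an image's pixels with scikit-learn's KMeans, snap each cluster centre to the nearest of the 16 reference colors, and return the two most prevalent distinct color names as the image's combination. The reference RGB values and the cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

REFERENCE_COLORS = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
    "magenta": (255, 0, 255), "brown": (139, 69, 19), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0), "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def top_color_combination(image_rgb, n_clusters=7):
    # image_rgb: (H, W, 3) uint8 array. Cluster the pixels, order clusters by
    # size, snap each centre to the nearest reference color, and return the
    # two most prevalent distinct color names.
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)
    order = np.argsort(-np.bincount(km.labels_, minlength=n_clusters))
    names = list(REFERENCE_COLORS)
    ref = np.array(list(REFERENCE_COLORS.values()), dtype=float)
    snapped = [names[int(np.argmin(((ref - km.cluster_centers_[i]) ** 2).sum(axis=1)))]
               for i in order]
    distinct = list(dict.fromkeys(snapped))  # keep order, drop duplicates
    return tuple(distinct[:2])

# The class-wise frequency of each two-color combination over the training set
# is what then weights the CNN's output in the second stage.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(top_color_combination(image))
```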

Development of Sauces Made from Gochujang Using the Quality Function Deployment Method: Focused on U.S. and Chinese Markets (품질기능전개(Quality Function Deployment) 방법을 적용한 고추장 소스 콘셉트 개발: 미국과 중국 시장을 중심으로)

  • Lee, Seul Ki;Kim, A Young;Hong, Sang Pil;Lee, Seung Je;Lee, Min A
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.44 no.9
    • /
    • pp.1388-1398
    • /
    • 2015
  • Quality Function Deployment (QFD) is a complete and comprehensive method for translating customer requirements into product characteristics. This study utilized QFD to develop sauces made from Gochujang and to determine how to fulfill international customers' requirements. A customer survey and an expert opinion survey were conducted from May 13 to August 22, 2014, targeting 220 consumers and 20 experts in the U.S. and China; a total of 208 usable responses (190 consumers and 18 experts) were finally selected. The top three customer requirements for Gochujang sauces were identified as fresh flavor (4.40), making better flavor (3.99), and cooking availability (3.90). Thirty-three engineering characteristics were developed. The calculation of the relative importance of the engineering characteristics identified 'cooking availability', 'free sample and food testing', 'unique concept', and 'development of brand' as the highest. The relative importance of the engineering characteristics, their correlations, and technical difficulties were ranked, and these results could contribute to the development of Korean sauces based on customer needs and engineering characteristics.
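As an illustration of the QFD relative-importance calculation mentioned above, the sketch below weights each requirement-characteristic relationship score by the customer importance rating and normalizes the column sums. The requirements, characteristics, and relationship scores are illustrative, not the study's data.

```python
import numpy as np

requirements = ["fresh flavor", "better flavor", "cooking availability"]
importance = np.array([4.40, 3.99, 3.90])  # customer importance ratings

characteristics = ["cooking availability", "free sample/food testing", "unique concept"]
# Relationship matrix (rows: requirements, columns: characteristics) on a 1/3/9 scale.
relationship = np.array([
    [3, 9, 1],
    [9, 3, 3],
    [9, 1, 3],
])

raw = importance @ relationship       # absolute importance per characteristic
relative = raw / raw.sum() * 100      # relative importance (%)
for name, score in sorted(zip(characteristics, relative), key=lambda x: -x[1]):
    print(f"{name}: {score:.1f}%")
```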

A Spatio-Temporal Clustering Technique for the Moving Object Path Search (이동 객체 경로 탐색을 위한 시공간 클러스터링 기법)

  • Lee, Ki-Young;Kang, Hong-Koo;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.7 no.3 s.15
    • /
    • pp.67-81
    • /
    • 2005
  • Recently, with the development of Geographic Information Systems, interest in and research on new application services such as Location Based Services and Telematics, which provide emergency services, neighbor information search, and route search, have been increasing. A user's search in a spatio-temporal database used for Location Based Services or Telematics usually fixes the current time on the time axis and queries the spatial and aspatial attributes; thus, if the query range on the time axis is extensive, it is difficult to handle the search operation efficiently. To solve this problem, the snapshot, a method for summarizing the location data of moving objects, was introduced. However, if the range of data to be stored is wide, more storage space is required, and snapshots are created even for regions that are rarely searched, so the snapshot method generally wastes storage space and memory. Therefore, this paper suggests the Hash-based Spatio-Temporal Clustering Algorithm (H-STCA), which extends the two-dimensional spatial hash algorithm previously used for spatial clustering into a three-dimensional spatio-temporal hash algorithm, to overcome the disadvantages of the snapshot method. This paper also suggests a knowledge extraction algorithm that extracts knowledge for the path search of moving objects from past location data based on H-STCA. Moreover, in the performance evaluation on a huge amount of moving object data, the snapshot clustering method using H-STCA demonstrated higher performance than the spatio-temporal index methods and the original snapshot method in search time, storage structure construction time, and optimal path search time. In particular, the more the number of moving objects increased, the more the performance of the snapshot clustering method using H-STCA improved compared with the existing spatio-temporal index methods and the original snapshot method.
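A minimal sketch of the three-dimensional spatio-temporal hashing idea behind H-STCA is shown below: positions of moving objects are bucketed by (x, y, t) grid cells so that records close in space and time fall into the same cluster. The cell sizes and the dictionary-based bucket store are illustrative assumptions, not the paper's exact structures.

```python
from collections import defaultdict

class SpatioTemporalHash:
    def __init__(self, cell_size=100.0, time_slice=60.0):
        self.cell_size = cell_size      # spatial cell edge length (e.g., metres)
        self.time_slice = time_slice    # temporal slice length (e.g., seconds)
        self.buckets = defaultdict(list)

    def _key(self, x, y, t):
        # Map a position and timestamp to a 3D grid cell.
        return (int(x // self.cell_size), int(y // self.cell_size), int(t // self.time_slice))

    def insert(self, obj_id, x, y, t):
        self.buckets[self._key(x, y, t)].append((obj_id, x, y, t))

    def query_cell(self, x, y, t):
        # All recorded positions that fall in the same spatio-temporal cell.
        return self.buckets[self._key(x, y, t)]

# Usage: cluster past trajectories, then look up which objects passed through
# a cell around a given location and time to support path search.
index = SpatioTemporalHash()
index.insert("bus_7", 120.5, 430.2, 3600)
index.insert("bus_9", 130.1, 410.8, 3620)
print(index.query_cell(125.0, 420.0, 3615))
```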

A Study on the Product Design Process in I-Business Environment Focusing on Development of the Internet-based Design Process - (e-비지니스환경에서의 제품디자인 프로세스에 관한 기초연구-인터넷기반의 디자인 프로세스 개발을 중심으로-)

  • 이수봉;이돈희
    • Archives of design research
    • /
    • v.16 no.1
    • /
    • pp.181-198
    • /
    • 2003
  • The purpose of this study is to develop an on-line design tool for effectively coping with the e-Business environment, that is, to convert the product design process into a cyber model for traditional manufacturers attempting new product development under such an environment. It was finally developed as a model named 'Design Vortal Site' (e-BDVS), based on the structure and style of an Internet web site. The results of the study can be described as follows: (1) e-Business is based on the Internet, and all processes in the context of e-Business require models whose structure and method of use are on-line. (2) When a traditional manufacturing business is converted into an e-Business, it is better to first consider a hybrid model that combines the resources and advantages of both the traditional and digital businesses. (3) A product design process appropriate for the e-Business environment has to have a structure and style that ensure use of the process as an Internet web site, active participation by product developers, and interactive communication between design participants and designers. (4) e-BDVS makes it possible to use designers around the world like in-house designers, overcoming the lack of creativity, ideas, and human resources that traditional business organizations face; however, operating e-BDVS requires time and budget investments to secure the related elements and conditions. (5) Cyber designers under e-BDVS can easily perform all design projects in cyberspace, but they have some limits in playing the role of designers, and they may have difficulty getting rewards if the projects they complete are not finally accepted. (6) e-BDVS ensures the rapid use of a wide range of design information and data, the reception of a variety of solutions and ideas, and effective design development, none of which are possible through traditional processes; however, this process may not be suitable for use as a routine process or tool. (7) e-BDVS makes it possible for outsourcing or partner businesses to overcome restrictions of time and space and to improve productivity and effectiveness, but they may have to continue off-line work that cannot be handled on-line.
