• Title/Summary/Keyword: Internet-based applications (인터넷기반 응용프로그램)


Design and Implementation of a Physical Network Separation System using Virtual Desktop Service based on I/O Virtualization (입출력 가상화 기반 가상 데스크탑 서비스를 이용한 물리적 네트워크 망분리 시스템 설계 및 구현)

  • Kim, Sunwook; Kim, Seongwoon; Kim, Hakyoung; Chung, Seongkwon; Lee, Sookyoung
    • KIISE Transactions on Computing Practices / v.21 no.7 / pp.506-511 / 2015
  • I/O virtualization (IOV) is a technology that allows one or more virtual desktops to share a single physical device. In general, a virtual desktop uses virtual I/O devices that the virtualization software provides through software emulation. Virtual desktops that rely on software-emulated I/O devices suffer from declining service quality and performance, and they cannot support high-end applications such as 3D CAD and games. In this paper, we propose a physical network separation system that uses a virtual desktop service based on hardware direct assignment to overcome these problems. The proposed system uses server virtualization on a single physical desktop computer to give the user independent desktops for accessing the intranet and the Internet. In addition, the system supports network separation without the performance degradation caused by packet inspection in logical network separation and without the additional desktop installations required for physical network separation.
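As a rough, hedged illustration of hardware direct assignment only, the Python sketch below writes a libvirt `<hostdev>` PCI-passthrough definition that would dedicate a physical NIC to one of the two guest desktops. The use of libvirt, the PCI address, the file name, and the domain name are assumptions for illustration, not the stack described in the paper.

```python
# Hedged sketch: generate a libvirt <hostdev> definition that directly assigns a
# physical NIC (by PCI address) to a guest VM, the general mechanism behind
# "HW direct assignment". The PCI address and names below are placeholders.
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
""".strip()

def write_hostdev_xml(path: str = "intranet-nic.xml") -> None:
    """Write the device XML; it could then be applied with, for example,
    `virsh attach-device intranet-desktop intranet-nic.xml --config`."""
    with open(path, "w") as f:
        f.write(HOSTDEV_XML)

write_hostdev_xml()
```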

MOBIGSS: A Group Decision Support System in the Mobile Internet (MOBIGSS: 모바일 인터넷에서의 그룹의사결정지원시스템)

  • Cho Yoon-Ho; Choi Sang-Hyun; Kim Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.12 no.2 / pp.125-144 / 2006
  • Mobile applications have developed rapidly in recent years. However, nearly all of them are messaging, financial, or location services based on simple interactions with mobile users, owing to the limited screen size, narrow network bandwidth, and low computing power of mobile devices; running an algorithm that supports a group decision process on such devices is impractical. In this paper, we introduce a mobile-oriented, simple interactive procedure for supporting group decision making. The interactive procedure is developed for multiple objective linear programming problems to help a group select a compromise solution in the mobile Internet environment. Our procedure lessens the burden on group decision makers, which is one of the necessary conditions of the mobile environment: only partial weak-order preferences over variables and objectives are needed to search for the best compromise solution. The methodology avoids any assumption about the shape or existence of the decision makers' utility functions. For an experimental study of the procedure, we developed MOBIGSS, a group decision support system for the mobile Internet environment, and applied it to an investor asset allocation problem.

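MOBIGSS itself elicits partial weak-order preferences interactively, which is not reproduced here. As a minimal sketch of the underlying idea of computing a compromise solution for a multiple objective linear program, the following Python snippet scalarizes two hypothetical asset-allocation objectives with fixed weights and solves the result with SciPy; the data and weights are illustrative assumptions.

```python
# Minimal sketch (not the MOBIGSS procedure itself): compute one candidate
# compromise solution of a multiple objective linear program by weighted-sum
# scalarization. The toy asset-allocation data below is hypothetical.
import numpy as np
from scipy.optimize import linprog

# Two objectives over three assets: maximize expected return, minimize a risk proxy.
returns = np.array([0.08, 0.12, 0.05])   # hypothetical expected returns
risks   = np.array([0.10, 0.25, 0.02])   # hypothetical risk coefficients

# Weights would come from the decision makers' (partial) preferences;
# here they are simply fixed for illustration.
w_return, w_risk = 0.7, 0.3

# linprog minimizes, so negate the return objective.
c = -w_return * returns + w_risk * risks

# Constraints: allocations sum to 1, each between 0 and 1.
A_eq = np.ones((1, 3))
b_eq = [1.0]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3, method="highs")
print("candidate compromise allocation:", res.x)
```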

Improving TCP Performance by Limiting Congestion Window in Fixed Bandwidth Networks (고정대역 네트워크에서 혼잡윈도우 제한에 의한 TCP 성능개선)

  • Park, Tae-Joon; Lee, Jae-Yong; Kim, Byung-Chul
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.12 / pp.149-158 / 2005
  • This paper proposes a congestion avoidance algorithm that provides stable throughput and transmission rate, regardless of buffer size, by limiting the TCP congestion window in fixed-bandwidth networks. Additive Increase, Multiplicative Decrease (AIMD) is the most commonly used congestion control algorithm, but in fixed-bandwidth networks the AIMD-based method causes unnecessary packet losses and retransmissions because the congestion window keeps growing to probe for available bandwidth. In addition, the resulting saw-tooth variation of TCP throughput is unsuitable for applications that require low bandwidth variation. We present an algorithm in which the congestion window is limited under appropriate circumstances to avoid congestion losses while still addressing fairness. The maximum congestion window is determined from delay information so that queueing at the bottleneck node is avoided, which stabilizes the throughput and transmission rate of the connection without a separate buffer and window control process. Simulations were performed to verify compatibility, steady-state throughput, steady-state packet loss count, and the variance of the congestion window. The proposed algorithm applies only to the sender, so it is easy to deploy without changes to network routers or user programs, and it can be used to enhance the performance of high-speed access networks, which are typical fixed-bandwidth networks.
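The abstract states that the maximum congestion window is derived from delay information so that the bottleneck queue stays empty. A natural reading is a cap near the bandwidth-delay product; the Python sketch below implements that generic idea and is an assumption, not the paper's exact rule.

```python
# Rough sketch (assumed rule, not the paper's): cap the AIMD congestion window near
# the bandwidth-delay product so the sender stops probing once the fixed-bandwidth
# pipe is full and the bottleneck queue stays empty.
def cwnd_cap(link_bandwidth_bps: float, base_rtt_s: float, mss_bytes: int = 1460) -> int:
    """Maximum congestion window, in segments, for a fixed-bandwidth path."""
    bdp_bytes = link_bandwidth_bps / 8.0 * base_rtt_s
    return max(1, int(bdp_bytes // mss_bytes))

def aimd_step(cwnd: float, cap: int, loss: bool) -> float:
    """One RTT of AIMD, with the additive increase clamped at the cap."""
    if loss:
        return max(1.0, cwnd / 2.0)      # multiplicative decrease
    return min(float(cap), cwnd + 1.0)   # additive increase, limited

# Example: 100 Mbit/s access link with a 20 ms base RTT.
cap = cwnd_cap(100e6, 0.020)
print("cwnd cap (segments):", cap)
```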

A study on the Job Analysis and Curriculum Development of Technical Information Searcher with DACUM (기술정보검색사의 직무분석 및 교육과정 개발에 관한 연구)

  • Noh, Dong-Jo
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.15 no.1 / pp.177-191 / 2004
  • With the shift from an industrial society to a knowledge-based society, the prompt acquisition, organization, and analysis of technical information at industrial organizations is becoming more important than before. Education for professionals who acquire and manage technical information should be carried out systematically and connected with in-service training. The purpose of this study is to develop a curriculum for information professionals based on an analysis of the tasks of technical information searchers using the DACUM method. The results of this study are as follows. First, the professional technical information searcher's work is divided into 6 task categories and 40 sub-tasks. Second, the selection of information sources is the most important task for education. Finally, the major educational areas should include database planning and development, practice with office automation applications, practice with PC communications, analysis of trend information, classification and practical use of the Internet, interview practice, information architecture, information retrieval, understanding and practice of information sources, patent management, and home page planning and development.


Metadata extraction using AI and advanced metadata research for web services (AI를 활용한 메타데이터 추출 및 웹서비스용 메타데이터 고도화 연구)

  • Sung Hwan Park
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.499-503 / 2024
  • Broadcast programs are provided not only through the broadcaster's own channels but also to various media such as Internet replay, OTT, and IPTV services. In this case, it is very important to supply search keywords that represent the characteristics of the content well. Broadcasters mainly rely on manually entering key keywords during production and archiving. This approach does not yield enough core metadata, and it also shows limitations when content is recommended and reused in other media services. This study supports securing a large amount of metadata by using closed-caption data pre-archived through the DTV closed-captioning server developed at EBS. First, core metadata was automatically extracted by applying Google's natural language AI technology. Next, as the core research content, we propose a method of finding core metadata that reflects priorities and content characteristics. To obtain differentiated metadata weights, importance was classified by applying TF-IDF, and the experiment produced useful weight data. When combined with future string-similarity studies, the string metadata obtained in this study can serve as the basis for securing sophisticated content-recommendation metadata for content services provided to other media.
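As an illustrative sketch of the TF-IDF weighting step only (the caption corpus, language, and preprocessing used at EBS are not shown here, and the sample texts below are invented), candidate keywords from closed-caption text can be ranked with scikit-learn as follows.

```python
# Illustrative sketch: rank candidate metadata keywords from closed-caption text
# by TF-IDF weight, so terms that characterize one program stand out.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical closed-caption transcripts, one string per program.
captions = [
    "documentary on deep sea exploration and marine biology",
    "cooking show featuring traditional stews and side dishes",
    "science lecture on marine ecosystems and climate change",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(captions)
terms = vectorizer.get_feature_names_out()

# Top-weighted terms for the first program become candidate metadata keywords.
row = tfidf[0].toarray().ravel()
top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:5]
print(top)
```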

Prospective for Successful IT in Agriculture (일본 농업분야 정보기술활용 성공사례와 전망)

  • Seishi Ninomiya; Byong-Lyol Lee
    • Korean Journal of Agricultural and Forest Meteorology / v.6 no.2 / pp.107-117 / 2004
  • IT doubtlessly contributes much to agriculture and rural development. Its roles can be summarized as follows: 1. to activate rural areas and provide a more comfortable and safe rural life with services equivalent to those in urban areas, by facilitating distance education, tele-medicine, remote public services, remote entertainment, etc.; 2. to initiate new agricultural and rural businesses such as e-commerce, real-estate services for satellite offices, rural tourism, and virtual corporations of small-scale farms; 3. to support policy making and evaluation for optimal farm production, disaster management, effective agro-environmental resource management, etc., by providing tools such as GIS; 4. to improve farm management and farming technologies through efficient farm management, risk management, and effective information and knowledge transfer, realizing competitive and sustainable farming with safe products; 5. to provide systems and tools that secure food traceability and reliability, an issue that has emerged for farm products since serious contaminations such as BSE and avian influenza were detected; and 6. to play a key role in the industrialization of farming and farm business enterprises by combining the above roles.

Design and Implementation of Medical Information System using QR Code (QR 코드를 이용한 의료정보 시스템 설계 및 구현)

  • Lee, Sung-Gwon; Jeong, Chang-Won; Joo, Su-Chong
    • Journal of Internet Computing and Services / v.16 no.2 / pp.109-115 / 2015
  • New medical device technologies for bio-signal and medical information are being developed in various forms and are increasing in number. Information-gathering techniques and the growing range of bio-signal devices are being used as main sources of medical-service information in everyday life. The use of various bio-signals is therefore increasing, but security is often not taken into account. Furthermore, the medical image information and bio-signals of a patient are generated by separate devices in the medical field, so they are difficult to manage and integrate. To solve this problem, in this paper we use QR codes to integrate the medical image information, including the doctor's findings, with the bio-signal information. The implementation environment for medical imaging and bio-signal acquisition consists of a bio-signal measurement device, a smart device, and a PC. To extract the region of interest (ROI) from the bio-signal and to receive the image information transferred from the medical equipment or the bio-signal measurement device, a QR server module built on the .NET Framework runs on Windows Server 2008. The main function of the QR server module is to parse the DICOM files generated by the medical imaging device and to extract the identified ROI information, which is stored and managed in a database. In addition, EMR and OCS patient health information and the extracted ROI information needed as basic information and in emergency situations are managed through QR codes. QR codes, ROI data, and bio-signal information files are stored and managed with a patient identification (PID) used by the bio-signal device, depending on the size of the received bio-signal data. If the received information exceeds the maximum size that can be converted into a QR code, the QR code instead carries a URL through which the bio-signal information can be accessed on the server. Because the information is provided as QR codes through the .NET Framework, the client can look up the relevant information on a PC or an Android-based smart device. Finally, the existing medical imaging information, the bio-signal information, and the patient's health information are integrated by the application service to provide a medical information service suitable for the medical field.
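A minimal sketch of the size-based rule described in the abstract: encode small payloads directly in the QR code and fall back to a server URL otherwise. It uses the third-party `qrcode` package; the size threshold, JSON payload, and URL format are illustrative assumptions, not the system's actual settings.

```python
# Hedged sketch of the size-based QR rule. Requires: pip install qrcode[pil]
import qrcode

MAX_PAYLOAD_BYTES = 1000  # assumed threshold; real QR capacity depends on version/ECC

def make_medical_qr(pid: str, payload: bytes, out_path: str) -> None:
    """Encode the payload directly if it is small enough, else encode a lookup URL."""
    if len(payload) <= MAX_PAYLOAD_BYTES:
        data = payload.decode("utf-8", errors="replace")
    else:
        # Hypothetical endpoint where the server module exposes the stored record.
        data = f"https://example-hospital.local/biosignal/{pid}"
    img = qrcode.make(data)
    img.save(out_path)

make_medical_qr("PID-0001", b'{"roi": [120, 80, 64, 64], "finding": "normal"}', "record.png")
```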

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon; Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to handle unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data about products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. To improve accuracy, it has been studied in various directions, from simple rule-based approaches to dictionary-based approaches using predefined labels; in fact, sentiment analysis is one of the most active research topics in natural language processing and is widely studied in text mining. Online reviews are openly available and easy to collect, and they directly affect businesses: in marketing, real-world information from customers is gathered on websites rather than through surveys, and whether a website's posts are positive or negative is reflected in sales. However, many reviews on a website are poorly written and difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, whereas recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. Accuracy nevertheless remains a problem, because sentiment scores change with the subject, the paragraph, the direction of the sentiment lexicon, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of polarity analysis on the IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as naive Bayes (NB), support vector machines (SVM), XGBoost, random forests (RF), and gradient boosting were adopted as comparative models. Second, deep learning can extract complex, discriminative features from data; representative algorithms are convolutional neural networks (CNN), recurrent neural networks (RNN), and long short-term memory (LSTM). A CNN can process a sentence in vector form in a way similar to bag-of-words but does not consider the sequential nature of the data; an RNN handles order well because it takes the temporal information into account, but it suffers from long-term dependency problems, which LSTM was designed to solve. For comparison, CNN and LSTM were chosen as the simple deep learning models, and the classical machine learning algorithms, CNN, LSTM, and the integrated model were all analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to understand how and why these models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining the two algorithms are as follows. A CNN can automatically extract features for classification by applying convolution layers with massively parallel processing, whereas an LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can handle the long-term dependencies that a CNN cannot capture. Furthermore, when an LSTM is attached after the CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be learned simultaneously. The combined CNN-LSTM model achieved 90.33% accuracy; it trains more slowly than the CNN alone but faster than the LSTM alone, and it was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM compensates for the weaknesses of each individual model, and the end-to-end structure with the LSTM improves layer-by-layer learning. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
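A minimal Keras sketch of the CNN-followed-by-LSTM arrangement described above, trained on the IMDB review data set; the layer sizes, sequence length, and training schedule are illustrative assumptions, not the configuration reported in the paper.

```python
# Hedged sketch: CNN feature extraction followed by an LSTM over the pooled sequence,
# ending in a binary positive/negative sentiment output.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20000, 200

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=MAX_LEN)

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),         # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram-style features
    layers.MaxPooling1D(4),                    # shorten the sequence
    layers.LSTM(64),                           # temporal features over the pooled sequence
    layers.Dense(1, activation="sigmoid"),     # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_data=(x_test, y_test))
```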