• Title/Summary/Keyword: Computer and information application


A Nonunique Composite Foreign Key-Based Approach to Fact Table Modeling and MDX Query Composing (비유일 외래키 조합 복합키 기반의 사실테이블 모델링과 MDX 쿼리문 작성법)

  • Yu, Han-Ju; Lee, Duck-Sung; Choi, In-Soo
    • KSCI Review, v.14 no.2, pp.185-197, 2006
  • A star schema consists of a central fact table surrounded by one or more dimension tables. Each row in the fact table contains a multi-part primary key (or composite foreign key) along with one or more columns containing various facts about the data stored in the row. Each component of the composite foreign key is related to a dimension table. The combination of keys in the fact table creates a composite foreign key that is unique to the fact table record. In real-world applications, however, particularly financial ones, the composite foreign key is rarely unique to the fact table record. In order to make the composite foreign key a determinant in such applications, some precalculation might be performed in the SQL relational database and cached in the OLAP database. However, this approach has many drawbacks and, in some cases, can give users wrong results. In this paper, an approach to fact table modeling and related MDX query composing is proposed that can be used in real-world applications without any precalculation and gives users correct results.
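
As a rough illustration of the modeling issue described in this abstract, the sketch below builds a toy fact table whose composite foreign key (date, account) is not unique, as is common in financial data, and composes a simple MDX-style query string. The table contents, dimension names, cube name, and measure are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical illustration: a fact table whose composite foreign key
# (date_key, account_key) does NOT uniquely identify a row, plus a simple
# MDX query string built over a cube assumed to expose matching
# Date/Account dimensions and an Amount measure.
fact_rows = [
    {"date_key": "2006-01-31", "account_key": "A100", "amount": 250.0},
    {"date_key": "2006-01-31", "account_key": "A100", "amount": -75.0},  # same key pair
    {"date_key": "2006-01-31", "account_key": "B200", "amount": 120.0},
]

def compose_mdx(cube: str, year: str) -> str:
    """Compose an MDX query summing the Amount measure by account for one year."""
    return (
        f"SELECT {{[Measures].[Amount]}} ON COLUMNS, "
        f"[Account].[Account].Members ON ROWS "
        f"FROM [{cube}] WHERE ([Date].[Year].&[{year}])"
    )

print(compose_mdx("FinanceCube", "2006"))
```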


A Study on the Evaluation of Materials for Aircraft Turbofan Engine Using Data Base. (항공기용 터어보팬 엔진의 재료선정용 DATA BASE를 이용한 재료평가에 관한 연구)

  • Kim, Gwang-Bae; Bu, Jun-Hong; Kim, Hak-Bong; Im, Gyeong-Ho; Yu, Sang-Sin
    • Korean Journal of Materials Research, v.1 no.3, pp.156-167, 1991
  • The purpose of this study is to develop a database for material selection for turbofan engines, which are preferred these days in many applications due to their high performance and economical operation. Hundreds of superalloys have been developed to date, each having special properties. Since it is a very difficult task for a design engineer to select materials with adequate properties for specific engine components, a good database is strongly desired to manage information on various kinds of materials. However, no basic research has been reported in this area so far in our country. The operating conditions, such as temperature, pressure, and spool rpm, are assumed to be provided by other mechanical studies. Creep rupture strength, corrosion resistance, yield strength, thermal expansion, melting point, etc., are considered as typical properties in this study to search for a group of candidate materials. Formability and manufacturing or purchase cost can also be important variables to consider. As a result of this study, a user-friendly computer program has been developed for the input of new material information, interactive material selection, and output of selection results. Finally, a discussion is presented from the viewpoint of materials engineering, and a method to evaluate the performance of the selected materials is also suggested.
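
A minimal sketch of the kind of property-based candidate screening this abstract describes; the property names, threshold values, and alloy records below are illustrative assumptions, not data from the study.

```python
# Hypothetical material records and selection thresholds, illustrating
# property-based screening of candidate superalloys for an engine component.
materials = [
    {"name": "Alloy-A", "creep_rupture_mpa": 310, "yield_mpa": 900, "melting_c": 1350},
    {"name": "Alloy-B", "creep_rupture_mpa": 250, "yield_mpa": 820, "melting_c": 1290},
    {"name": "Alloy-C", "creep_rupture_mpa": 340, "yield_mpa": 780, "melting_c": 1400},
]

requirements = {"creep_rupture_mpa": 300, "yield_mpa": 800, "melting_c": 1300}

def candidates(materials, requirements):
    """Return materials meeting or exceeding every required property value."""
    return [m for m in materials
            if all(m[prop] >= limit for prop, limit in requirements.items())]

print([m["name"] for m in candidates(materials, requirements)])  # ['Alloy-A']
```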


APC: An Adaptive Page Prefetching Control Scheme in Virtual Memory System (APC: 가상 메모리 시스템에서 적응적 페이지 선반입 제어 기법)

  • Ahn, Woo-Hyun; Yang, Jong-Cheol; Oh, Jae-Won
    • Journal of KIISE: Computer Systems and Theory, v.37 no.3, pp.172-183, 2010
  • Virtual memory systems (VM) reduce disk I/Os caused by page faults using page prefetching, which reads pages together with the desired page at a page fault in a single disk I/O. Operating systems including 4.4BSD attempt to prefetch as many pages as possible at a page fault regardless of the page access patterns of applications. However, this approach increases the disk access time taken to service a page fault when a large portion of the prefetched pages is never referenced. More seriously, it can cause memory pollution, a problem in which prefetched pages that are never accessed evict other pages that will be referenced soon. To solve these problems, we propose an adaptive page prefetching control scheme (APC), which periodically monitors the access patterns of prefetched pages on a per-process basis. Such a pattern is represented as the ratio of referenced pages among prefetched ones before they are evicted from memory. APC then uses this ratio to adjust the number of pages that 4.4BSD VM prefetches at a page fault. Thus APC allows 4.4BSD VM to prefetch a proper number of pages and better reduce disk I/Os, even when the page access patterns of an application vary at runtime. Experiments with our technique implemented in FreeBSD 6.2 show that APC improves the execution times of the SOR, SMM, and FFT benchmarks over 4.4BSD VM by up to 57%.
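
A rough, user-level sketch of the feedback idea this abstract describes: monitor the referenced ratio of prefetched pages per period, then scale the next prefetch count. The period handling, scaling rule, and bounds are illustrative assumptions, not the kernel implementation.

```python
# Illustrative feedback controller: scale the per-fault prefetch count by the
# fraction of prefetched pages that were actually referenced last period.
MAX_PREFETCH = 32   # assumed upper bound on pages prefetched per fault
MIN_PREFETCH = 1

def next_prefetch_count(current: int, prefetched: int, referenced: int) -> int:
    """Adjust the prefetch count from the observed referenced/prefetched ratio."""
    if prefetched == 0:
        return current                      # no evidence this period; keep setting
    ratio = referenced / prefetched         # access pattern of prefetched pages
    proposed = round(MAX_PREFETCH * ratio)  # prefetch aggressively only if useful
    return max(MIN_PREFETCH, min(MAX_PREFETCH, proposed))

# Usage: a period where only 4 of 32 prefetched pages were referenced
print(next_prefetch_count(current=32, prefetched=32, referenced=4))  # -> 4
```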

Hierarchical Message Forwarding Scheme for Efficient Data Distribution in P2P Messaging System (P2P 출판-구독 메시징 시스템에서 효율적인 정보 전파를 위한 계층적 메시지 전송 기법)

  • Jung, Jin Sun; Oh, Sangyoon
    • KIPS Transactions on Computer and Communication Systems, v.8 no.9, pp.209-216, 2019
  • The publish-subscribe communication model is popular for various types of distributed applications because of its loosely coupled connections. Among the various architectural styles for publish-subscribe systems, the peer-to-peer architecture has been used in mission-critical application domains since it provides high scalability and real-time behavior. On the other hand, to utilize the bandwidth of a given network, message filtering is frequently used to reduce the number of messages in the system. Even though P2P provides superior scalability, it is hard to apply filtering to its messaging system, because the filtering that is usually done on the broker server in a conventional pub/sub architecture must be done on the peer side in a P2P architecture. In this paper, we propose a hierarchical subscription management structure as well as a message forwarding scheme for efficient data dissemination. Our proposed scheme reduces the number of received messages by filtering out unwanted messages, and it offloads the message dissemination work to other subscribers to enhance messaging throughput.
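
A minimal sketch of peer-side filtering combined with hierarchical forwarding, in the spirit of the scheme this abstract outlines; the topic-based subscription model, the tree layout, and the class names are assumptions for illustration, not the paper's protocol.

```python
# Illustrative peer: keeps child peers grouped by subscription topic, filters
# out unwanted messages, and forwards matching ones down the hierarchy so the
# publisher does not have to send to every subscriber directly.
from collections import defaultdict

class Peer:
    def __init__(self, name, topics):
        self.name = name
        self.topics = set(topics)               # this peer's own subscriptions
        self.children = defaultdict(list)       # topic -> child peers to forward to

    def add_child(self, child):
        for topic in child.topics:
            self.children[topic].append(child)

    def deliver(self, topic, payload):
        if topic in self.topics:                # peer-side filtering
            print(f"{self.name} received {topic}: {payload}")
        for child in self.children.get(topic, []):
            child.deliver(topic, payload)       # offload dissemination downward

root = Peer("root", ["price"])
leaf = Peer("leaf", ["price"])
other = Peer("other", ["news"])
root.add_child(leaf)
root.add_child(other)
root.deliver("price", 101.5)   # reaches root and leaf; never sent to 'other'
```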

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon; Choi, Heung Sik; Kim, Sun Woong
    • Journal of Intelligence and Information Systems, v.22 no.4, pp.177-192, 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. For example, one representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is mainly used for classification and regression analysis, which fits our study well. The core principle of SVM is to find a reasonable hyperplane that distinguishes different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belong based on the hyperplane obtained from the given data set. Thus, the larger the amount of meaningful data, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide the Robo-Advisor service, a compound of Robot and Advisor, which can perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage portfolios automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, one of the machine learning methods, and apply it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index, which is based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices. The direction of VKOSPI and of option prices shows a positive relation regardless of the option type (call and put options with various strike prices): if volatility increases, all call and put option premiums increase because the probability of the option being exercised increases. Investors can observe, in real time, how much an option's price rises as volatility rises through vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified through real option data that an accurate forecast of VKOSPI can yield a large profit in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying it to actual option trading.
In this study, we predicted daily VKOSPI changes with the SVM model and then took an intraday option strangle position, which generates profit as option prices decrease, only when VKOSPI was expected to decline during the daytime. We analyzed the results and tested whether the approach is applicable to real option trading based on the SVM's predictions. The results showed that the prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half that of the benchmark (100 entries); a small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
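
A minimal sketch of the kind of direction classification this abstract describes, using scikit-learn's SVC on placeholder features; the feature choice, kernel, and synthetic data are illustrative assumptions, not the study's actual inputs or settings.

```python
# Illustrative SVM direction classifier: features are placeholder daily inputs,
# labels are 1 if VKOSPI is deemed to decline (candidate day for the strangle), else 0.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # e.g., recent VKOSPI changes, returns, volume
y = (X[:, 0] + 0.5 * rng.normal(size=500) < 0).astype(int)  # synthetic "declines" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_tr), y_tr)

accuracy = clf.score(scaler.transform(X_te), y_te)
print(f"direction accuracy: {accuracy:.2%}")  # trade only when a decline is predicted
```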

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung; Yang, Jin Yong
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.35-48, 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, because of the different nature of their capital structure and debt-to-equity ratio, it is more difficult to forecast construction companies' bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, places a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial statements have been studied for many years in various ways, but these models target companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with this unique capital structure, the criteria used to judge the financial risk of companies in general cannot be applied effectively to construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt with a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed from a company's financial information and then judged as belonging to either the bankruptcy-risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are artificial neural networks, adaptive boosting (AdaBoost), and the support vector machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning for bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups (large, medium, and small) based on each company's capital and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
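
A minimal sketch of the per-size-group AdaBoost evaluation described above, using scikit-learn on placeholder financial-ratio features; the feature dimensions, group split, and synthetic labels are illustrative assumptions, not the study's data.

```python
# Illustrative per-group AdaBoost evaluation on synthetic "financial ratio" data;
# each group stands in for large / medium / small construction companies.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def make_group(n):
    X = rng.normal(size=(n, 5))                                   # e.g., leverage, liquidity ratios
    y = (X[:, 0] - X[:, 1] + rng.normal(size=n) > 0).astype(int)  # synthetic bankrupt flag
    return X, y

for group in ("large", "medium", "small"):
    X, y = make_group(300)
    model = AdaBoostClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)                   # predictive ability per group
    print(f"{group}: mean accuracy {scores.mean():.3f}")
```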

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul; Lee, Hong Woo
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.107-118, 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. There are many requirements for realizing these services: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity together with an increased data rate is very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate for cases such as sporting events or emergencies. The second scenario supports e-Health, car reliability, etc.; the third scenario relates to VR games with delay sensitivity and real-time requirements. Recently, these groups have been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN is being standardized by the ONF and basically refers to a structure that separates the signals of the control plane from the packets of the data plane. One of the best examples requiring low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in the event of an emergency have to be transported in a very short time; this is a typical example requiring high delay sensitivity. 5G has to support high reliability and delay-sensitivity requirements for V2X in the field of traffic control, and for these reasons V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) represents all types of communication methods applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: first, communication between a vehicle and infrastructure (vehicle-to-infrastructure; V2I); second, communication between a vehicle and another vehicle (vehicle-to-vehicle; V2V); and third, communication between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). More types will be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant. However, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because a centralized architecture must communicate with many nodes and provide the processing power for them. Therefore, for emergency V2X communications, delay-related control functions require a tree-like supporting structure. In such a scenario, the architecture of the network that processes the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical, fully centralized SDN structure, research on the optimal size of an SDN domain for processing the information is needed. This study examined the SDN architecture considering the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation on the speed of the car, the cell radius, and the cell tier to derive the range of cells for information transfer in the SDN network.
In the simulation, because 5G provides a sufficiently high data rate, the information about neighboring vehicles provided to the car was assumed to be error-free. Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum speed of the vehicle was set to 30-200 km/h in order to examine the network architecture that minimizes the delay.
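
As a rough back-of-the-envelope illustration of why cell radius and vehicle speed matter here, the sketch below computes how long a vehicle stays inside a small cell for the parameter ranges mentioned in the abstract; the straight-line crossing along the cell diameter is our simplifying assumption, not the paper's simulation model.

```python
# Illustrative dwell-time estimate: a vehicle crossing a small cell along its
# diameter stays inside for 2*radius / speed, which bounds the time available
# to hand over and deliver delay-critical V2X messages.
def dwell_time_s(radius_m: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6            # km/h -> m/s
    return 2 * radius_m / speed_ms

for radius in (50, 100):                  # 5G small-cell radii from the abstract
    for speed in (30, 200):               # vehicle speeds from the abstract
        print(f"radius {radius} m, speed {speed} km/h: "
              f"{dwell_time_s(radius, speed):.2f} s in cell")
```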

An Application-Specific and Adaptive Power Management Technique for Portable Systems (휴대장치를 위한 응용프로그램 특성에 따른 적응형 전력관리 기법)

  • Egger, Bernhard; Lee, Jae-Jin; Shin, Heon-Shik
    • Journal of KIISE: Computer Systems and Theory, v.34 no.8, pp.367-376, 2007
  • In this paper, we introduce an application-specific and adaptive power management technique for portable systems that support dynamic voltage scaling (DVS). We exploit both the idle time of multitasking systems running soft real-time tasks and memory- or CPU-bound code regions. Detailed power and execution-time profiles guide an adaptive power manager (APM) that is linked to the operating system. A post-pass optimizer marks candidate regions for DVS by inserting calls to the APM. At runtime, the APM monitors the CPU's performance counters to dynamically determine the affinity of each marked region. For each region, the APM computes the optimal voltage and frequency setting in terms of energy consumption and switches the CPU to that setting during the execution of the region. Idle time is exploited by monitoring system idle time and switching to the most energy-economical setting without prolonging execution. We show that our method is most effective for periodic workloads such as video or audio decoding. We have implemented our method in a multitasking operating system (Microsoft Windows CE) running on an Intel XScale processor and achieved up to 9% total system power savings over the standard power management policy that puts the CPU in a low-power mode during idle periods.
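
A minimal sketch of the per-region setting selection this abstract outlines: from a table of assumed voltage/frequency pairs, pick the setting with the lowest estimated energy whose predicted run time still fits the region's time budget. The settings, the simple CPU/memory timing split, and the dynamic-energy estimate are illustrative assumptions, not the paper's APM.

```python
# Illustrative per-region DVS selection: given assumed (frequency, voltage)
# settings and a split of the region into CPU-bound cycles and memory-bound
# time, estimate run time and dynamic energy of the CPU work and pick the
# lowest-energy setting that still meets the region's time budget.
SETTINGS = [(200e6, 1.0), (400e6, 1.1), (600e6, 1.3)]  # hypothetical (Hz, V) pairs

def pick_setting(cpu_cycles, mem_time_s, budget_s):
    best = None
    for freq, volt in SETTINGS:
        run_time = cpu_cycles / freq + mem_time_s        # memory time barely scales with f
        energy = volt ** 2 * freq * (cpu_cycles / freq)  # dynamic energy of the CPU-bound part
        if run_time <= budget_s and (best is None or energy < best[2]):
            best = (freq, volt, energy, run_time)
    return best

# A memory-bound region tolerates a low frequency within the same time budget.
print(pick_setting(cpu_cycles=2e7, mem_time_s=0.08, budget_s=0.2))
```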

Increase of Tc-99m RBC SPECT Sensitivity for Small Liver Hemangioma using Ordered Subset Expectation Maximization Technique (Tc-99m RBC SPECT에서 Ordered Subset Expectation Maximization 기법을 이용한 작은 간 혈관종 진단 예민도의 향상)

  • Jeon, Tae-Joo; Bong, Jung-Kyun; Kim, Hee-Joung; Kim, Myung-Jin; Lee, Jong-Doo
    • The Korean Journal of Nuclear Medicine, v.36 no.6, pp.344-356, 2002
  • Purpose: RBC blood pool SPECT has been used to diagnose focal liver lesions such as hemangioma owing to its high specificity. However, low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced to obtain tomographic images for clinical application. We compared this modified iterative reconstruction method, OSEM, with conventional filtered back projection (FBP) in imaging of liver hemangioma. Materials and Methods: Sixty-four projections were acquired with a dual-head gamma camera in 28 lesions of 24 patients with cavernous hemangioma of the liver, and these raw data were transferred to a Linux-based personal computer. After converting the header files to Interfile format, OSEM was performed under various combinations of subsets (1, 2, 4, 8, 16, and 32) and iteration numbers (1, 2, 4, 8, and 16) to find the best setting for liver imaging; the best condition in our investigation was 4 iterations and 16 subsets. All images were then reconstructed with both FBP and OSEM, and three experts reviewed the images without any clinical information. Results: According to the blind review of 28 lesions, OSEM images showed at least the same or better image quality than those of FBP in nearly all cases. Although there was no significant difference in the detection of large lesions of more than 3 cm, 5 lesions 1.5 to 3 cm in diameter were detected by OSEM only. However, both techniques failed to depict 4 small lesions of less than 1.5 cm. Conclusion: OSEM showed better contrast and definition in the depiction of liver hemangioma as well as higher sensitivity in the detection of small lesions. Furthermore, this reconstruction method does not require a high-performance computer system or a long reconstruction time; therefore, OSEM is considered a good method that can be applied to RBC blood pool SPECT for the diagnosis of liver hemangioma.
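
A toy sketch of the OSEM update, to make the subsets/iterations terminology above concrete; the tiny 1D system matrix, noiseless counts, and dimensions are purely illustrative and have nothing to do with the clinical reconstruction pipeline.

```python
# Toy 1D OSEM: iterate the EM update x <- x * A_s^T(y_s / (A_s x)) / A_s^T(1)
# over ordered subsets of projection rows; purely illustrative dimensions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(16, 8))        # system matrix: 16 projections, 8 voxels
x_true = rng.uniform(1.0, 5.0, size=8)
y = A @ x_true                                  # noiseless projection data

def osem(A, y, n_subsets=4, n_iters=4):
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iters):
        for rows in subsets:                    # one sub-iteration per ordered subset
            As, ys = A[rows], y[rows]
            x *= As.T @ (ys / (As @ x)) / As.T.dot(np.ones(len(rows)))
    return x

print(np.round(osem(A, y), 2), np.round(x_true, 2))
```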

A Study on Necessity and Demands of Teachers and Students for Housing Contents in Technology.Home Economics Curriculum of the Middle School (중학교 기술.가정 교과의 주생활 영역 교과내용에 대한 교사와 학생의 필요성 및 요구도 -울산광역시를 중심으로-)

  • Choi, Hye-Mi; Kim, Sun-Joong
    • Journal of Korean Home Economics Education Association, v.19 no.4, pp.75-89, 2007
  • This study aims to suggest a new direction for education by comparing the necessity perception and demands of teachers and students regarding the housing contents of the Technology-Home Economics curriculum of middle school. To achieve this aim, I surveyed middle school teachers in charge of Technology-Home Economics and male and female first-grade high school students in Ulsan, gathering data by e-mail, by mail, and through visits. The SPSS 12 statistical package was used for frequency, percentage, mean, standard deviation, and t-test analyses of the data. The results are as follows. First, in the part on the use of living space, teachers perceived a necessity for the use and placement of furniture and the arrangement of objects, while students perceived a necessity for the use and placement of furniture and the kind and choice of furniture. In the indoor environment and equipment part, both teachers and students perceived a necessity for controlling ventilation, temperature, and humidity. In the part on housing maintenance and repair, teachers perceived a necessity for maintenance management, whereas students perceived a high necessity for housing equipment and repair. Second, in the housing-related general part, teachers demanded housing for the elderly and disabled and information about future housing, while students demanded an environmentally friendly living environment and housing for the elderly and disabled. In the interior design part, teachers demanded the expression of interior spaces through computers and the kinds and characteristics of housing materials, while students demanded ways to reuse old furniture and the kinds and characteristics of housing materials. In the part on housing preparation and occupation, teachers demanded the kinds of housing-related occupations, while students demanded information on housing taxes and the process of house purchase and related matters. Third, there were some differences in necessity perception and degree of demand between teachers and students: teachers showed higher necessity perception and demand in all parts except the demand for housing equipment, maintenance, and an environmentally friendly living environment.
