Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)
Journal of Intelligence and Information Systems, v.24 no.2, pp.171-193, 2018
The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in LOD to be reflected in search results without omissions. LOD provides detailed descriptions of entities to the public in RDF triple form. An RDF triple is composed of a subject, a predicate, and an object, and presents a detailed description of an entity. Links in the LOD cloud, named identity links, are realized by asserting entities of different RDF triples to be identical. Currently, an identity link is provided by explicitly creating a link triple in which two entities are asserted to be identical.
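As an illustration only: such an explicit link triple is commonly expressed with the owl:sameAs predicate. The following minimal Python sketch (using the rdflib library; the two entity URIs are illustrative placeholders) creates one such identity link:

```python
# A minimal sketch of creating an explicit identity link triple with rdflib.
# The two entity URIs below are illustrative placeholders.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
entity_a = URIRef("http://dbpedia.org/resource/Seoul")
entity_b = URIRef("http://example.org/places/seoul")

# Assert that the two URIs denote the same real-world entity.
g.add((entity_a, OWL.sameAs, entity_b))

print(g.serialize(format="turtle"))
```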
The job classification systems of major job sites differ from site to site and also differ from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system is needed that SW companies, SW job seekers, and job sites can all understand. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF based on job-offer information from major job sites and the NCS (National Competency Standards). To this end, an association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of major job sites to the SQF.

First, major job sites are selected to obtain information on the job classification systems used in the SW market. We then identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships among the data, we keep only the job postings published on multiple job sites at the same time and delete the rest. Next, we map the job classification systems between job sites using the rules derived from the association analysis. We complete the mapping between these market systems, discuss the result with experts, further map it to the SQF, and finally propose a new job classification system.

As a result, more than 30,000 job listings were collected in XML format using the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering down to about 900 job postings simultaneously published on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method. Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security-related jobs, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system-operation jobs, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology, consists of three secondary and six tertiary classifications. Notably, the new job classification system has a relatively flexible depth of classification, unlike existing systems: WORKNET divides jobs into three levels; JOBKOREA divides jobs into two levels and further subdivides them by keyword; and saramin likewise divides jobs into two levels with keyword-form subdivisions. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs.
In the new classification system, some jobs stop at the second classification level, while others are subdivided down to the fourth level. This reflects the idea that not all jobs can be broken down to the same depth. We also combined the rules derived from the collected market data and the association analysis with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping between occupations based on data, through association analysis, rather than on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate recruitment drives, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries given its success in the SW field.
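As a hedged illustration of the rule-derivation step, the following self-contained Python sketch mines pairwise association rules between job-category labels that co-occur in the same cross-posted listing. The transactions, category labels, and thresholds are invented toy values, not the study's data:

```python
# Toy sketch of deriving association rules between job-category labels that
# are attached to the same posting across sites (Apriori, depth 2).
# Transactions and thresholds are illustrative placeholders.
from itertools import combinations
from collections import Counter

transactions = [  # each set: category labels given to one cross-posted job ad
    {"web_programming", "web_planning"},
    {"web_programming", "web_design"},
    {"network", "security"},
    {"network", "security"},
    {"web_programming", "web_planning"},
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.2, 0.6
n = len(transactions)

# Support counts for single items and item pairs.
item_counts = Counter(item for t in transactions for item in t)
pair_counts = Counter(pair for t in transactions
                      for pair in combinations(sorted(t), 2))

for (a, b), count in pair_counts.items():
    if count / n < MIN_SUPPORT:
        continue  # prune infrequent itemsets, as Apriori does
    for lhs, rhs in ((a, b), (b, a)):
        confidence = count / item_counts[lhs]
        if confidence >= MIN_CONFIDENCE:
            print(f"{lhs} -> {rhs}  support={count/n:.2f} "
                  f"confidence={confidence:.2f}")
```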
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks executed within a mode. In this paper, we address a HW/SW partitioning problem: the HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimal total system cost for the allocation/mapping of processing resources to functional modules in tasks, together with a schedule that satisfies the timing constraints. The key to solving the problem is the degree to which the potential parallelism among module executions is exploited. However, due to the inherently large search space of this parallelism, and to keep schedulability analysis easy, prior HW/SW partitioning methods have not fully exploited the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of the modules. Specifically, based on a precise measure of the parallel execution and schedulability of modules, we develop a stepwise refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, compared with the conventional method.
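To make the three coupled subproblems concrete, the following toy Python sketch brute-forces a tiny instance: every HW/SW allocation is enumerated, the hardware cost is summed, and an allocation is kept only if a simple sequential schedule meets the deadline. All module data are invented, and the paper's actual technique is a stepwise refinement exploiting parallelism, not exhaustive search:

```python
# Toy illustration of HW/SW partitioning with a timing constraint.
# Each module runs faster but costs more in HW; we search all 2^n
# allocations for the cheapest one whose sequential schedule meets
# the deadline. All numbers are invented for illustration.
from itertools import product

# module: (sw_time, hw_time, hw_cost); SW cost assumed covered by the CPU
modules = {"m1": (8, 2, 50), "m2": (6, 3, 30), "m3": (9, 4, 40)}
DEADLINE = 15

best_cost, best_alloc = None, None
for alloc in product(("SW", "HW"), repeat=len(modules)):
    names = list(modules)
    finish = sum(modules[m][0] if a == "SW" else modules[m][1]
                 for m, a in zip(names, alloc))
    cost = sum(modules[m][2] for m, a in zip(names, alloc) if a == "HW")
    if finish <= DEADLINE and (best_cost is None or cost < best_cost):
        best_cost, best_alloc = cost, dict(zip(names, alloc))

print(best_alloc, best_cost)  # cheapest allocation meeting the deadline
```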
Energy-aware server clusters aim to reduce power consumption as much as possible while keeping QoS (Quality of Service) at the level of energy-unaware server clusters. They adjust the power mode of each server at fixed or variable time intervals so that only the minimum number of servers needed to handle the current user requests are ON. Previous studies on energy-aware server clusters have tried to reduce power consumption further or to keep QoS, but they do not consider energy efficiency well. In this paper, we propose energy-efficient cluster management based on autonomous learning for energy-aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance with respect to power consumption. Our method repeats the following procedure to adjust the power modes of servers. First, according to the current load and traffic pattern, it classifies the current workload pattern type in a predetermined way. Second, it searches a learning table to check whether learning has been performed for the classified workload pattern type in the past. If so, it uses the already-stored parameters; otherwise, it performs learning for the classified workload pattern type to find the best parameters in terms of energy efficiency and stores the optimized parameters. Third, it adjusts the server power modes with these parameters. We implemented the proposed method and performed experiments on a cluster of 16 servers using three different kinds of load patterns. Experimental results show that the proposed method is better than existing methods in terms of energy efficiency: the number of good responses per unit of power consumed by the proposed method is 99.8%, 107.5%, and 141.8% of that of the existing static method, and 102.0%, 107.0%, and 106.8% of that of the existing prediction method, for the banking, real, and virtual load patterns, respectively.
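The per-interval control loop described above might be sketched as follows; the pattern classifier, candidate parameters, and efficiency estimate are hypothetical stand-ins for the learned quantities, not the paper's implementation:

```python
# Sketch of the per-interval loop: classify the workload pattern, reuse
# learned parameters if available, otherwise learn and store them, then
# set the number of powered-on servers. Classifier, parameter grid, and
# efficiency estimate are hypothetical placeholders.
learning_table = {}  # workload pattern type -> best parameters found

def classify_pattern(load_history):
    # Placeholder classifier: bucket by average load (a predetermined rule).
    avg = sum(load_history) / len(load_history)
    return "low" if avg < 0.3 else "high" if avg > 0.7 else "medium"

def estimate_efficiency(pattern, margin):
    # Stand-in for measured good responses per unit of power consumed.
    return 1.0 - abs(margin - {"low": 0.1, "medium": 0.2, "high": 0.3}[pattern])

def learn_parameters(pattern):
    # Placeholder learning: pick the capacity margin maximizing the
    # mock energy-efficiency estimate for this pattern type.
    candidates = [0.1, 0.2, 0.3]
    return max(candidates, key=lambda m: estimate_efficiency(pattern, m))

def adjust_power_modes(load_history, capacity_per_server=0.1):
    pattern = classify_pattern(load_history)
    if pattern not in learning_table:          # learn once per pattern type
        learning_table[pattern] = learn_parameters(pattern)
    margin = learning_table[pattern]
    servers_on = int((load_history[-1] + margin) / capacity_per_server) + 1
    return min(servers_on, 16)                 # cluster of 16 servers

print(adjust_power_modes([0.5, 0.6, 0.55]))
```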
The imprecise real-time system model provides flexibility in scheduling time-critical tasks. Most scheduling problems that satisfy both the 0/1 constraint and timing constraints while minimizing the total error are NP-complete when the optional tasks have arbitrary processing times. Liu suggested a reasonable strategy for scheduling tasks with the 0/1 constraint on uniprocessors to minimize the total error, and Song et al. suggested a corresponding strategy on multiprocessors. These algorithms, however, are all off-line algorithms. For online scheduling, the NORA algorithm can find a schedule with the minimum total error for an imprecise online task system; in NORA, the EDF strategy is adopted for scheduling optional tasks. On the other hand, for a task system with the 0/1 constraint, EDF scheduling may not be optimal in the sense of minimizing the total error. Furthermore, when the optional tasks are scheduled in ascending order of their required processing times, the NORA algorithm, which adopts the EDF strategy, may not produce the minimum total error. Therefore, in this paper, an online algorithm is proposed to minimize the total error for an imprecise task system with the 0/1 constraint. To compare the performance of the proposed algorithm and the NORA algorithm, a series of experiments was performed. The comparison shows that the proposed algorithm produces a total error similar to NORA's when the optional tasks are scheduled in random order of their required processing times, but produces less total error than NORA, especially when the optional tasks are scheduled in ascending order of their required processing times.
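A hedged sketch of the setting discussed above: optional parts are considered in ascending order of processing time, and each is accepted in full only if an EDF feasibility test still passes (the 0/1 constraint), with the total error being the sum of the rejected optional processing times. The task data are illustrative, and this is not the paper's exact algorithm:

```python
# Toy sketch of 0/1-constraint scheduling on one processor. Optional
# parts are considered in ascending order of processing time; each is
# accepted in full only if EDF still meets every deadline, otherwise
# its processing time is added to the total error.

def edf_feasible(jobs):
    # jobs: list of (processing_time, deadline), all released at time 0.
    t = 0
    for p, d in sorted(jobs, key=lambda j: j[1]):  # earliest deadline first
        t += p
        if t > d:
            return False
    return True

# (mandatory_time, optional_time, deadline) -- illustrative values
tasks = [(2, 4, 10), (1, 1, 5), (3, 2, 12)]

accepted = [(m, d) for m, o, d in tasks]       # mandatory parts are required
assert edf_feasible(accepted)

total_error = 0
for m, o, d in sorted(tasks, key=lambda x: x[1]):  # ascending optional time
    if edf_feasible(accepted + [(o, d)]):
        accepted.append((o, d))                    # accept the whole part (0/1)
    else:
        total_error += o                           # reject the whole part

print("total error:", total_error)
```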
The purpose of this study was to evaluate the differences in the galvanic corrosion behaviour of titanium in contact with gold alloy, silver-palladium alloy, and nickel-chromium alloy, using immersion and electrochemical methods, and to assess the effects of galvanic couples between titanium and the dental alloys on their usefulness as materials for superstructures. The immersion method was performed by measuring the amount of metal elements released, using inductively coupled plasma emission spectroscopy (ICPES). Fifteen titanium plates, five gold alloy plates, five silver-palladium plates, five nickel-chromium plates, and twenty acrylic resin plates were fabricated, along with sixty titanium plugs and thirty plugs each of gold alloy, silver-palladium, and nickel-chromium. Each plug of gold alloy, silver-palladium, and nickel-chromium was then inserted into a titanium or acrylic resin plate, and titanium plugs were inserted into acrylic resin plates. The combined galvanic-couple specimens were immersed in 70 ml of artificial saliva solution, and specimens of the four alloy types (titanium, gold, silver-palladium, and nickel-chromium) were also immersed separately in 70 ml of artificial saliva solution. The amount of metal elements released was observed over 21 weeks at seven-week intervals. The electrochemical method was performed using a computer-controlled potentiostat (Autostat 251, Sycopel Scientific Ltd., U.K.). Wax patterns (diameter 11.0 mm, thickness 1.5 mm) of the four dental casting alloys were cast by the centrifugal method and embedded in self-curing acrylic resin to be about
The role of pension plans in the macroeconomy has been a subject of much interest for some years. It has come to be recognized that pension plans may alter basic macroeconomic behavior patterns; the net effects on both savings and labor supply are thus matters for speculation. The aim of the present paper is to provide quantitative results that may help attach orders of magnitude to some of the possible effects. We are not concerned with providing empirical evidence on actual behavior, but rather with deriving the macroeconomic implications of alternative possibilities. The pension plan interacts with the economy and the population in a number of ways. Demographic variables may affect both the economic burden of a national pension plan and the ability of the economy to sustain that burden. The tax-transfer process associated with the pension plan may have implications for national patterns of saving and consumption. The existence of a pension plan may also have implications for the size of the labor force, inasmuch as labor force participation rates may be affected. Changes in technology and the associated changes in average productivity levels bear directly on the size of the national income, and hence on the pension contribution base. The vehicle for the analysis is a hypothetical but broadly realistic simulation model of an economic-demographic system into which a national pension plan is inserted. All income, expenditure, and related aggregates are in real terms. The economy is basically neoclassical: full employment is assumed, output is generated by a Cobb-Douglas production process, and factors receive their marginal products. The model was designed for use in computer simulation experiments. The simulation results suggest a number of general conclusions, which may be summarized as follows:
- The introduction of a national pension plan (funded system) tends to increase the rate of economic growth until cost exceeds revenue.
- A scheme with full wage indexing is more expensive than one in which pensions are merely price indexed.
- The rate of technical progress is not a critical element in determining the economic burden of the pension scheme.
- Raising the rate of benefits affects its economic burden, and raising the age of eligibility may decrease the burden substantially.
- The level of fertility is an element in determining the long-run burden: a sustained low fertility rate increases the proportion of the aged in the total population and increases the burden of the pension plan, while high fertility has the inverse effect.
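For concreteness, the production side described above is consistent with the standard Cobb-Douglas specification with marginal-product factor pricing; the capital share $\alpha$ and its value are not stated in the abstract:

```latex
% Standard Cobb-Douglas production with marginal-product factor pricing;
% \alpha is the capital share (value not given in the abstract).
Y_t = A_t K_t^{\alpha} L_t^{1-\alpha}, \qquad
r_t = \frac{\partial Y_t}{\partial K_t} = \alpha \frac{Y_t}{K_t}, \qquad
w_t = \frac{\partial Y_t}{\partial L_t} = (1-\alpha)\frac{Y_t}{L_t}
```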
Purpose: The quantization noise in magnetic resonance imaging (MRI) systems is analyzed. The signal-to-quantization-noise ratio (SQNR) in the reconstructed image is derived from the level of quantization of the signal in the spatial frequency domain. Based on the derived formula, the SQNRs for various main magnetic fields with different receiver systems are evaluated. The evaluation shows that quantization noise can be a major noise source determining the overall system signal-to-noise ratio (SNR) in high-field MRI systems. A few methods to reduce the quantization noise are suggested.
Materials and methods: In Fourier imaging methods, the spin density distribution is encoded by phase- and frequency-encoding gradients in such a way that it becomes a distribution in the spatial frequency domain. Thus the quantization noise in the spatial frequency domain can be expressed in terms of the SQNR in the reconstructed image. The validity of the derived formula is confirmed by experiments and computer simulation.
Results: Using the derived formula, the SQNRs for various main magnetic fields with various receiver systems are evaluated. Since the quantization noise is proportional to the signal amplitude, yet cannot be reduced by simple signal averaging, it can be a serious problem in high-field imaging. In many receiver systems employing analog-to-digital converters (ADCs) of 16 bits/sample, the quantization noise can be a major noise source limiting the overall system SNR, especially in high-field imaging.
Conclusion: The field strength of MRI systems keeps increasing for functional imaging and spectroscopy. In a high-field MRI system, the signal amplitude becomes larger, with a stronger susceptibility effect and wider spectral separation. Since the quantization noise is proportional to the signal amplitude, if the conversion bits of the ADCs in the receiver system are not sufficient, the increase in signal amplitude may not be fully utilized for SNR enhancement because of the increase in quantization noise. Evaluation of the SQNR for various systems using the formula shows that the quantization noise can be a major noise source limiting the overall system SNR, especially in three-dimensional imaging at high field. Oversampling and off-center sampling are alternative solutions that reduce the quantization noise without replacing the receiver system.
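For reference, the standard uniform-quantization relations underlying the argument that quantization noise scales with signal amplitude are as follows; here $b$ is the ADC word length, and the paper's own image-domain SQNR formula is not reproduced in the abstract:

```latex
% Uniform quantization with step \Delta, ADC word length b, and ADC
% range matched to the peak signal amplitude A_{\max}:
\sigma_q^2 = \frac{\Delta^2}{12}, \qquad
\Delta = \frac{2A_{\max}}{2^{b}}, \qquad
\mathrm{SQNR} = 10\log_{10}\frac{P_{\mathrm{signal}}}{\sigma_q^2}
\approx 6.02\,b + C \ \text{(dB)}
```

Since $\Delta$ grows with the peak signal amplitude, the quantization noise grows with the signal, which is why averaging does not remove it; each additional conversion bit halves $\Delta$ and adds roughly 6 dB of SQNR, and oversampling helps by spreading the quantization noise over a wider bandwidth.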