• Title/Summary/Keyword: Model making

Adaptive Lock Escalation in Database Management Systems (데이타베이스 관리 시스템에서의 적응형 로크 상승)

  • Chang, Ji-Woong;Lee, Young-Koo;Whang, Kyu-Young;Yang, Jae-Heon
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.742-757
    • /
    • 2001
  • Since database management systems (DBMSs) have limited lock resources, transactions requesting locks beyond the limit must be aborted. In the worst case, if such transactions are aborted repeatedly, the DBMS can become paralyzed, i.e., transactions execute but cannot commit. Lock escalation is considered a solution to this problem. However, existing lock escalation methods do not provide a complete solution. In this paper, we propose a new lock escalation method, adaptive lock escalation, that solves most of the problems. First, we propose a general model for lock escalation and present the concept of the unescalatable lock, which is the major cause of transaction aborts. Second, we propose the notions of semi lock escalation, lock blocking, and selective relief as mechanisms to control the number of unescalatable locks. We then propose the adaptive lock escalation method using these notions. Adaptive lock escalation reduces needless aborts and guarantees that the DBMS is not paralyzed under excessive lock requests. It also allows graceful degradation of performance under those circumstances. Third, through extensive simulation, we show that adaptive lock escalation outperforms existing lock escalation methods. The results show that, compared to the existing methods, adaptive lock escalation reduces the number of aborts and the average response time and increases the throughput to a great extent. In particular, the number of concurrent transactions can be increased 16- to 256-fold. The contribution of this paper is significant in that it formally analyses the role of lock escalation in lock resource management and identifies the detailed underlying mechanisms. Existing lock escalation methods rely on users or system administrators to handle the problems of excessive lock requests. In contrast, adaptive lock escalation relieves users of this responsibility by providing graceful degradation and preventing system paralysis through automatic control of unescalatable locks. Thus, adaptive lock escalation can contribute to developing the self-tuning DBMSs that draw much attention these days. (A minimal sketch of the basic lock-escalation idea appears after this entry.)

  • PDF
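
The abstract above turns on one mechanism: converting many fine-grained locks held by a transaction into a single coarser lock when lock-table resources run short. The sketch below shows only that basic (non-adaptive) escalation idea, with an illustrative per-transaction threshold; the class and names are hypothetical and do not reproduce the paper's adaptive method (semi lock escalation, lock blocking, selective relief).

```python
# Illustrative sketch of basic lock escalation (not the paper's adaptive method).
# A transaction's row locks on one table are replaced by a single table lock
# once they exceed ESCALATION_THRESHOLD, freeing lock-table entries.

from collections import defaultdict

ESCALATION_THRESHOLD = 1000  # illustrative limit on row locks per (txn, table)

class LockManager:
    def __init__(self):
        self.row_locks = defaultdict(set)     # (txn_id, table) -> locked row ids
        self.table_locks = defaultdict(set)   # txn_id -> tables locked as a whole

    def lock_row(self, txn_id, table, row_id):
        if table in self.table_locks[txn_id]:
            return  # already covered by a table-level lock
        rows = self.row_locks[(txn_id, table)]
        rows.add(row_id)
        if len(rows) > ESCALATION_THRESHOLD:
            self.escalate(txn_id, table)

    def escalate(self, txn_id, table):
        # Replace many row locks with one table lock.
        self.row_locks.pop((txn_id, table), None)
        self.table_locks[txn_id].add(table)

    def release_all(self, txn_id):
        self.table_locks.pop(txn_id, None)
        for key in [k for k in self.row_locks if k[0] == txn_id]:
            del self.row_locks[key]
```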

Design and Implementation of Quality Broker Architecture to Web Service Selection based on Autonomic Feedback (자율적 피드백 기반 웹 서비스 선정을 위한 품질 브로커 아키텍처의 설계 및 구현)

  • Seo, Young-Jun;Song, Young-Jae
    • The KIPS Transactions:PartD
    • /
    • v.15D no.2
    • /
    • pp.223-234
    • /
    • 2008
  • Recently, web services have provided an efficient environment for integrating systems inside and outside the enterprise, and the number of corporations adopting them is increasing. As web services develop and new business models appear, the domestic enterprise and e-business environments are changing accordingly. As web services providing similar functions proliferate, methods for finding the service that best meets the user's demands become increasingly important. When choosing among similar web services, a service consumer generally needs quality information about each of them. The problem, however, is that the advertised QoS information of a web service is not always trustworthy. A service provider may publish inaccurate QoS information to attract more customers, or the published QoS information may be out of date. Allowing current customers to rate the QoS they receive from a web service, and making these ratings public, can provide new customers with valuable information on how to rank services. This paper suggests an agent-based quality broker architecture that helps the service consumer find the service providing the optimum quality the consumer needs. Because the architecture selects a web service for the consumer dynamically, it can also handle changes in the consumer's quality requirements; that is, the consumer can search for the service that satisfies the optimal quality criteria through a UDDI browser connected to the quality broker server. User intervention in determining the quality criteria values of each service is kept to a minimum. In existing selection architectures, objective evaluation was difficult because service selection depended on the consumer's subjective judgement. The proposed architecture secures objectivity by having an agent monitor binding information at the consumer's location; in other words, it compensates for QoS information that the provider does not supply by sharing QoS information fed back from consumer-side agents.
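
A rough sketch of the selection step described above: candidate services are ranked by advertised QoS values adjusted by a feedback-derived trust factor collected from consumer-side monitoring agents. The attribute names, weights, and trust values are hypothetical, not part of the proposed architecture's specification.

```python
# Hypothetical sketch: rank candidate web services by combining advertised QoS
# with feedback ratings monitored on the consumer side (higher score = better).

def score(service, weights, feedback):
    """service: advertised QoS values; feedback: per-service trust factor in 0..1."""
    # Higher availability is better, lower response time is better.
    base = (weights["availability"] * service["availability"]
            - weights["response_time"] * service["response_time"])
    return base * feedback.get(service["name"], 0.5)  # default neutral trust

services = [
    {"name": "svc-a", "availability": 0.99, "response_time": 0.20},
    {"name": "svc-b", "availability": 0.95, "response_time": 0.05},
]
weights = {"availability": 0.6, "response_time": 0.4}
feedback = {"svc-a": 0.9, "svc-b": 0.6}  # aggregated consumer-agent ratings

ranked = sorted(services, key=lambda s: score(s, weights, feedback), reverse=True)
print([s["name"] for s in ranked])
```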

A Study on the Ecosystem Services Value Assessment According to City Development: In Case of the Busan Eco-Delta City Development (도시개발에 따른 생태계서비스 가치 평가 연구: 부산 에코델타시티 사업을 대상으로)

  • Choi, Jiyoung;Lee, Youngsoo;Lee, Sangdon
    • Journal of Environmental Impact Assessment
    • /
    • v.28 no.5
    • /
    • pp.427-439
    • /
    • 2019
  • The natural environment and ecology components of environmental impact assessment (EIA) are very much lacking in quantitative evaluation. This study therefore attempted a quantitative assessment of ecosystem services at the site of the Eco-Delta City project in Busan. As part of climate change adaptation, the study evaluated and compared the value of carbon fixation and habitat quality using the InVEST model before and after development, under three alternative land-use plans. Carbon fixation was 216,674.48 Mg of C in 2000 and 203,474.25 Mg of C in 2015, a reduction of about 6.1%; by 2030 the value drops to 120,490.84 Mg of C, about 40% lower than in 2015. Alternative 3 of the land-use plan was best in terms of carbon fixation, with 6,811.31 Mg of C. Habitat quality also changed, from 0.57 (2000) to 0.35 (2015) and 0.21 (2030), degrading continuously as development proceeds. Alternative 3 was again the highest, at 0.21 (Alternative 1: 0.20, Alternative 2: 0.18). In conclusion, this study illustrates that a quantitative method for assessing land-use change in the EIA process can help stakeholders and developers make decisions by identifying the scenario with the lowest carbon impact. It can also support better treatment of land-use plans, greenhouse gases, and natural environmental assets in EIA. The study could be used in environmental policy by providing numerical data on ecosystems and predictions of change. Supplemented with more detailed analysis and better access to basic data, this method should be widely applicable to ecosystem evaluation.
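
The percentage changes in carbon fixation quoted above can be checked directly from the reported values; a short verification using only the numbers stated in the abstract:

```python
# Reproduce the percentage changes in carbon fixation reported above (Mg of C).
c_2000, c_2015, c_2030 = 216_674.48, 203_474.25, 120_490.84

drop_2000_2015 = (c_2000 - c_2015) / c_2000 * 100   # ~6.1 %
drop_2015_2030 = (c_2015 - c_2030) / c_2015 * 100   # ~40.8 %, i.e. "about 40% lower"
print(f"{drop_2000_2015:.1f}%  {drop_2015_2030:.1f}%")
```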

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and design flexibility have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there is a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation may be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. For that reason, we attempt to find a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process spends much time Fourier transforming the parameters encoded on the hologram into the value to be evaluated. Depending on the speed of the computer, this process can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initialize the GA's procedure is proposed. Consequently, the initial population contains fewer random holograms and is compensated by approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to obtain approximately desired holograms that are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and a neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by grant No. mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation. (A simplified sketch of the hybrid GA idea appears after this entry.)

  • PDF
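
A simplified, self-contained sketch of the hybrid idea described above: the GA's initial population is seeded with "approximately desired" holograms (placeholders standing in for the ANN output) rather than purely random ones, and fitness is computed from an FFT of the binary phase pattern. The population size and crossover/mutation probabilities follow the abstract; the fitness definition, target image, and iteration count are illustrative assumptions, not the authors' implementation.

```python
# Simplified sketch of the hybrid GA for binary phase holograms.
# The initial population mixes "approximately desired" holograms (standing in
# for the ANN output described above) with random ones; fitness is the fraction
# of reconstructed energy falling inside a target region, a crude proxy for
# diffraction efficiency. All details are illustrative, not the paper's code.

import numpy as np

N = 32                      # hologram is N x N binary phase (0 or pi)
POP, GENS = 30, 200         # population size from the abstract; fewer iterations here
P_CROSS, P_MUT = 0.75, 0.001

rng = np.random.default_rng(0)
target = np.zeros((N, N), bool)
target[12:20, 12:20] = True            # illustrative target region ("desired image")

def fitness(h):
    field = np.exp(1j * np.pi * h)             # binary phase 0 / pi
    recon = np.abs(np.fft.fft2(field)) ** 2
    return recon[target].sum() / recon.sum()   # energy fraction in the target

def make_population(seeds):
    pop = [s.copy() for s in seeds]                      # ANN-style seeds
    while len(pop) < POP:
        pop.append(rng.integers(0, 2, (N, N)))           # fill with random holograms
    return pop

def evolve(pop):
    for _ in range(GENS):
        scores = np.array([fitness(h) for h in pop])
        order = np.argsort(scores)[::-1]
        pop = [pop[i] for i in order[:POP // 2]]         # truncation selection
        children = []
        while len(pop) + len(children) < POP:
            a, b = rng.choice(len(pop), 2, replace=False)
            child = pop[a].copy()
            if rng.random() < P_CROSS:                   # one-point row crossover
                cut = rng.integers(1, N)
                child[cut:] = pop[b][cut:]
            mut = rng.random((N, N)) < P_MUT             # bit-flip mutation
            child[mut] ^= 1
            children.append(child)
        pop += children
    return max(pop, key=fitness)

seeds = [rng.integers(0, 2, (N, N)) for _ in range(5)]   # placeholders for ANN output
best = evolve(make_population(seeds))
print(f"best fitness: {fitness(best):.3f}")
```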

Estimation of Nondestructive Rice Leaf Nitrogen Content Using Ground Optical Sensors (지상광학센서를 이용한 비파괴 벼 엽 질소함량 추정)

  • Kim, Yi-Hyun;Hong, Suk-Young
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.40 no.6
    • /
    • pp.435-441
    • /
    • 2007
  • Ground-based optical sensing over the crop canopy provides information on the mass of the plant body that reflects the light, as well as on crop nitrogen content, which is closely related to the greenness of plant leaves. This method has the merit of being non-destructive and real-time, and thus can conveniently support decisions on nitrogen fertilizer application for crops standing in fields. In the present study, relationships among leaf nitrogen content of the rice canopy, crop growth status, and normalized difference vegetation index (NDVI) values were investigated. The green normalized difference vegetation index, $\mathrm{gNDVI}=(\rho_{0.80\,\mu m}-\rho_{0.55\,\mu m})/(\rho_{0.80\,\mu m}+\rho_{0.55\,\mu m})$, and $\mathrm{NDVI}=(\rho_{0.80\,\mu m}-\rho_{0.68\,\mu m})/(\rho_{0.80\,\mu m}+\rho_{0.68\,\mu m})$ were measured using two different active sensors (Greenseeker, NTech Inc., USA). The study was conducted during the 2005-06 rice growing seasons at the experimental plots of the National Institute of Agricultural Science and Technology located at Suwon, Korea. The experiments were carried out in a randomized complete block design with four levels of nitrogen fertilizer (0, 70, 100, 130 kg N/ha) and the same amounts of phosphorus and potassium fertilizer. gNDVI and NDVI increased as growth advanced and reached maximum values around early August; thereafter, values decreased as the crop matured. gNDVI values and leaf nitrogen content were highly correlated in early July of both 2005 and 2006. On the basis of this finding, we attempted to estimate leaf N content using the gNDVI data obtained in 2005 and 2006. The determination coefficients of the linear model based on gNDVI in 2005 and 2006 were 0.88 and 0.94, respectively. The measured and estimated leaf N contents using gNDVI values showed good agreement ($R^2=0.86^{***}$). Results from this study show that gNDVI values have a significant positive correlation with leaf N content and can be used to estimate leaf N before the panicle formation stage. gNDVI appears to be a very effective parameter for estimating the leaf N content of the rice canopy.
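
A brief sketch of the vegetation indices defined above and of the kind of linear calibration of leaf N against gNDVI that the abstract describes; the reflectance readings and measured leaf N values below are made up for illustration.

```python
# gNDVI and NDVI from canopy reflectance, plus a simple linear calibration of
# leaf N content against gNDVI, as described above. All numbers are illustrative.

import numpy as np

def gndvi(r_800, r_550):
    return (r_800 - r_550) / (r_800 + r_550)

def ndvi(r_800, r_680):
    return (r_800 - r_680) / (r_800 + r_680)

# Hypothetical sensor readings (gNDVI) and destructively measured leaf N (%)
g = np.array([0.55, 0.60, 0.66, 0.70, 0.74])
leaf_n = np.array([2.1, 2.4, 2.8, 3.1, 3.3])

slope, intercept = np.polyfit(g, leaf_n, 1)        # leaf N = a * gNDVI + b
pred = slope * g + intercept
r2 = 1 - np.sum((leaf_n - pred) ** 2) / np.sum((leaf_n - leaf_n.mean()) ** 2)
print(f"leaf N = {slope:.2f} * gNDVI + {intercept:.2f}, R2 = {r2:.2f}")
```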

A Study on the 18th-Century Joseon Dynasty Sculptor Choi Cheon-Yak (조선 18세기 조각가 최천약(崔天若) 연구)

  • Kim, Min Kyu
    • Korean Journal of Heritage: History & Science
    • /
    • v.46 no.1
    • /
    • pp.124-139
    • /
    • 2013
  • Choi Cheon-yak (c. 1684~1755) was an artist who left a variety of works, including sculptures for royal tombs and architecture. He was not only a skilled caster but also an able jade-carving artisan of the Joseon Dynasty. Starting with the royal seals of King Suk-jong, he made more than about 40 royal seals by 1755 and was well known as a skilled jade craftsman. Another of his great abilities was to carve his subjects into idealized figures. By virtue of these abilities, he could take part in constructing royal tombs and sculpting the stone statues of military officials erected at aristocrats' tombs. With these accumulated skills, when he was in charge of designing the folding-screen stones for King In-jo's Jang Neung, he could even replace the twelve-animal motif and clouds with peonies and lotuses. Among his various abilities, his skill in carving sculpture stands comparison with that of any of his contemporaries. His sculptural skill was at its zenith in 1752: the stone statues of military officials at Uisoseson's tomb count as his showpiece, depicting a model in his late teens in a realistic, portrait-like manner befitting the royal family's dignity. In the same year, stone statues of military officials made by Choi Cheon-yak were erected in front of Jo Hyeon-myeong's tomb (1690~1752). This masterpiece referenced the armor of the statues at King Gong-min's tomb and newly added a helmet and tortoise-shell patterns. These tortoise-shell patterns were passed down to Park Moon-su's tomb in 1756 and to Queen Jeong-sung's Hong Neung by his colleagues Kim Ha-jeong, Byeon Yi-jin, and others. He was one of the greatest sculptors of the 18th century, and people in Joseon praised him highly for his imaginative work from amorphous material. In particular, the stone statues of military officials at Jo Hyeon-myeong's tomb show proof of his supreme artistry.

"As the Scientific Witness Is a Court Witness and Is Not a Party Witness" ("과학의 승리"는 어떻게 선언될 수 있는가? 친자 확인을 위한 혈액형 검사가 법원으로 들어갔던 과정)

  • Kim, Hyomin
    • Journal of Science and Technology Studies
    • /
    • v.19 no.1
    • /
    • pp.1-51
    • /
    • 2019
  • The understanding of law and science as two fundamentally different systems, in which fact stands against justice and rapid progress against prudent process, is far too simple to be valid. Nonetheless, such an account is commonly employed to explain the tension between law and science, or justice and truth. Previous STS research raises fundamental doubts about the off-the-shelf concept of "scientific truth" that can be introduced to the court for legal judgment. Delimiting the qualifications of the expert, the value of expert knowledge, or the criteria of scientific expertise has always involved social negotiation. What values affect the boundary-making of the thing called "modern science" that is supposedly useful in solving legal conflicts? How do the value of law and the meaning of justice change as the boundaries of modern science take shape? What is the significance of "science" when it is emphasized, particularly in relation to the legal provisions on paternity, and how does this perception of science affect the unfolding of legal disputes? In order to explore answers to these questions, we follow a process in which a kind of "knowledge-deficient model" of the court (that is, the view that law lags behind science and thus under-employs its useful functions) can be closely examined. We attend to a series of discussions and subsequent changes that occurred in US courts between the 1930s and 1970s, when blood type tests began to be used to determine parental relations. In conclusion, we argue that it was neither nature nor truth in itself that was excavated by the forensic scientists and legal practitioners who regarded blood type tests as a truth machine. Rather, it was their careful practices and crafty narratives that made the roadmaps of modern science, technology, and society on which complex tensions between modern states, families, and courts were seen to be "resolved".

Prioritization of Species Selection Criteria for Urban Fine Dust Reduction Planting (도시 미세먼지 저감 식재를 위한 수종 선정 기준의 우선순위 도출)

  • Cho, Dong-Gil
    • Korean Journal of Environment and Ecology
    • /
    • v.33 no.4
    • /
    • pp.472-480
    • /
    • 2019
  • Selection of plant material for planting to reduce fine dust should comprehensively consider visual characteristics, such as the shape and texture of the leaves and the form of the bark, which affect the adsorption capacity of the plant. However, previous studies on reducing fine dust with plants have focused on the absorption function rather than the adsorption function, and on foliage plants grown indoors rather than outdoor plants. In particular, the criteria for selecting fine-dust-reducing species are not specific, so research on selection criteria for plant materials for fine dust reduction in urban areas is needed. The purpose of this study is to identify the priorities of eight indicators that affect fine dust reduction by using a fuzzy multi-criteria decision-making (MCDM) model and to establish tree selection criteria for urban planting to reduce fine dust. For this purpose, we conducted a questionnaire survey of respondents who majored in fine-dust-related academic fields or had experience researching fine dust. The survey showed that leaf area and tree type received the highest scores among the factors affecting fine dust reduction, followed by leaf surface roughness, tree height, growth rate, leaf complexity, leaf edge shape, and bark features, in that order. When selecting species with coarse leaf surfaces, it is better to choose trees with woolly, glossy, or waxy layers on the leaves. When considering leaf shape, it is better to select compound (two- or three-leaflet) or palm-shaped leaves over single leaves, and serrated leaves over smooth-edged leaves, to increase the surface area available for adsorbing airborne fine dust. When considering bark characteristics, it is better to select trees that have cork layers or show, or are likely to show, bark loosening or cracks than trees with lenticels or patterned bark. This study is significant in that it presents priorities among the selection criteria for plant material based on the visual characteristics that affect the adsorption of fine dust, for use in planting plans to reduce fine dust in urban areas. The results can be used as basic data for selecting trees in urban plantation planning.
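
A minimal sketch of how criterion priorities like those reported above could be applied as a weighted score when comparing candidate species. The paper derives its priorities from a fuzzy MCDM survey; the numeric weights below (ordered to match the reported ranking) and the species ratings are hypothetical.

```python
# Hypothetical weighted scoring of candidate tree species for fine-dust planting.
# Weights follow the priority order reported above (leaf area highest, bark lowest),
# but their numeric values and the 0..1 species ratings are made up.

weights = {
    "leaf_area": 0.20, "tree_type": 0.18, "leaf_surface_roughness": 0.15,
    "tree_height": 0.13, "growth_rate": 0.11, "leaf_complexity": 0.09,
    "leaf_edge_shape": 0.08, "bark_feature": 0.06,
}

species = {
    "species_A": {"leaf_area": 0.9, "tree_type": 0.8, "leaf_surface_roughness": 0.7,
                  "tree_height": 0.6, "growth_rate": 0.5, "leaf_complexity": 0.8,
                  "leaf_edge_shape": 0.7, "bark_feature": 0.6},
    "species_B": {"leaf_area": 0.6, "tree_type": 0.7, "leaf_surface_roughness": 0.9,
                  "tree_height": 0.8, "growth_rate": 0.7, "leaf_complexity": 0.5,
                  "leaf_edge_shape": 0.6, "bark_feature": 0.8},
}

def score(attrs):
    return sum(weights[c] * attrs[c] for c in weights)

for name, attrs in sorted(species.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(attrs):.3f}")
```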

Mediating Effect of Ease of Use and Customer Satisfaction in the Relationship between Mobile Shopping Mall Service Quality and the Repurchase Intention of University Student Consumers (모바일쇼핑몰 서비스품질과 대학생 고객의 재구매의도 관계에서 사용용이성과 고객만족도의 매개효과)

  • Kim, Sun-A;Park, Ji-Eun;Park, Song-Choon
    • Management & Information Systems Review
    • /
    • v.38 no.1
    • /
    • pp.201-223
    • /
    • 2019
  • The purpose of this study is to verify empirically the causal relationships among the service quality, ease of use, customer satisfaction, and repurchase intention of a mobile shopping mall, and to investigate the mediating effect of ease of use and customer satisfaction between service quality and repurchase intention. For this purpose, 323 university students in the Jeonnam area were surveyed, and a structural equation model was derived based on previous research. The service quality of the mobile shopping mall had a significant effect on ease of use, purchase satisfaction, and repurchase intention. However, among the service quality factors, the servicescape (e.g., the mobile interface and site design) had a positive effect on purchase satisfaction but no effect on repurchase intention. In other words, service quality raises repurchase intention when the service the customer wants is provided faithfully, rather than through the external appearance of the site alone. The implications suggested by this study are as follows. First, since the service quality of a mobile shopping mall significantly affects repurchase intention, it is necessary to improve the customer service system so that customers' inquiries and inconveniences during mobile shopping, as well as returns and refunds of defective products, are handled quickly and conveniently. In addition to the factors used in the analysis, benefits based on customer grade by number of purchases, such as events, coupons, and reward points, and active content-marketing strategies providing more varied pleasures and values of shopping are necessary. Second, satisfaction with the mobile shopping mall has a positive effect on repurchase intention, so site visits and repurchases continue as customer satisfaction with the shopping mall increases. Therefore, the shopping mall site requires differentiated content and careful planning and execution of service and marketing so that customers feel greater satisfaction. This study is significant in that it systematically analyzed how the service quality of a mobile shopping mall affects ease of use, purchase satisfaction, and repurchase intention, verified these relations, organized them in a theoretical structure, and widened the understanding of the factors affecting repurchase intention.
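
A rough sketch of the mediation logic implied by the model above (service quality → satisfaction → repurchase intention), using ordinary least squares on simulated survey scores rather than the paper's structural equation model; all coefficients and data below are made up.

```python
# Illustrative mediation check with OLS (not the paper's SEM): does customer
# satisfaction carry part of the effect of service quality on repurchase intention?
import numpy as np

rng = np.random.default_rng(1)
n = 323                                          # sample size from the abstract
quality = rng.normal(size=n)
satisfaction = 0.6 * quality + rng.normal(scale=0.8, size=n)            # mediator
repurchase = 0.3 * quality + 0.5 * satisfaction + rng.normal(scale=0.8, size=n)

def ols(y, X):
    """Return intercept and slopes from least squares on the given predictors."""
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(repurchase, [quality])[1]                    # total effect c
direct = ols(repurchase, [quality, satisfaction])[1]     # direct effect c'
print(f"total={total:.2f}, direct={direct:.2f}, mediated={total - direct:.2f}")
```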

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.1-15
    • /
    • 2021
  • When it is difficult for us to make decisions, we ask for advice from friends or the people around us, and when we decide to buy products online, we read anonymous reviews before buying. With the advent of the data-driven era, the development of IT is producing large amounts of data from individuals and objects alike. Companies and individuals have accumulated, processed, and analyzed so much data that they can now make decisions or act directly on data for tasks that used to depend on experts. Nowadays, the recommender system plays a vital role in determining users' preferences for purchasing goods, and web services (Facebook, Amazon, Netflix, YouTube) use recommender systems to induce clicks. For example, YouTube's recommender system, used by one billion people worldwide every month, draws on the videos users have liked and watched. Recommender system research is closely linked to practical business, so many researchers are interested in building better solutions. Recommender systems use information obtained from their users to generate recommendations, because developing a recommender system requires information on the items a user is likely to prefer. Through recommender systems, we have come to trust patterns and rules derived from data rather than empirical intuition, and the growing capacity of data has driven machine learning toward deep learning. However, recommender systems are not a complete solution: the data must be sufficient rather than scarce, and detailed information about the individual is required. Recommender systems work correctly only when these conditions are met. When the interaction log is insufficient, the recommender system becomes a difficult problem for both consumers and sellers, because the seller needs to make recommendations to the consumer at a personal level, while the consumer needs to receive appropriate recommendations based on reliable data. In this paper, to improve the accuracy of "appropriate recommendations" for consumers, a recommender system combined with context-based deep learning is proposed. This research combines user-based data to create a hybrid recommender system. The hybrid approach developed is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used for the data set. Consumers buy products in online shopping malls and then write product reviews; rating reviews from buyers who have already purchased give users confidence before buying the product. However, recommendation systems mainly use scores or ratings, rather than reviews, to suggest items purchased by many users. In fact, consumer reviews contain product opinions and user sentiment that can be used in evaluation. By incorporating these elements, this paper aims to improve the recommendation system. The proposed algorithm is intended for situations in which individuals have difficulty selecting an item; consumer reviews and record patterns make it possible to rely on recommendations appropriately. The algorithm implements a recommendation system through collaborative filtering, and predictive accuracy is measured by root mean squared error (RMSE) and mean absolute error (MAE).
Netflix has strategically used its recommender system in its programs, holding annual competitions to reduce RMSE and making full use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches and deep learning for personalized recommendation has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning, and machine-learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes sentiment analysis with BERT, minimizing these disadvantages. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (gated recurrent units). As a result of the experiment, the recommender system based on BERT performed best.
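
A toy sketch of the combination described above: an item rating predicted by collaborative filtering is blended with a sentiment score obtained from a pre-trained BERT-family sentiment model via the Hugging Face pipeline applied to the review text, and accuracy is reported as RMSE and MAE. The pipeline call, the 1-5 score mapping, and the blending weight alpha are assumptions for illustration, not the paper's configuration.

```python
# Toy sketch of the hybrid idea: blend a collaborative-filtering rating prediction
# with a BERT-derived sentiment score from the review text, then report RMSE/MAE.
# The blending weight alpha and the score mapping are illustrative assumptions.

import numpy as np
from transformers import pipeline   # downloads a default sentiment model on first use

sentiment = pipeline("sentiment-analysis")

def sentiment_score(review):
    """Map the model's POSITIVE/NEGATIVE output to a rough 1..5-style score."""
    out = sentiment(review)[0]
    p = out["score"] if out["label"] == "POSITIVE" else 1.0 - out["score"]
    return 1.0 + 4.0 * p

def hybrid_predict(cf_rating, review, alpha=0.7):
    return alpha * cf_rating + (1 - alpha) * sentiment_score(review)

# Hypothetical held-out data: true ratings, CF predictions, and review texts.
true = np.array([5.0, 2.0, 4.0])
cf_pred = np.array([4.2, 2.8, 3.5])
reviews = ["Great quality, fast delivery.", "Broke after one use.", "Pretty good overall."]

pred = np.array([hybrid_predict(c, r) for c, r in zip(cf_pred, reviews)])
rmse = np.sqrt(np.mean((true - pred) ** 2))
mae = np.mean(np.abs(true - pred))
print(f"RMSE={rmse:.3f}, MAE={mae:.3f}")
```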