• Title/Abstract/Keyword: Computer application

Search results: 7,919

Performance Analysis of Slave-Side Arbitration Schemes for the Multi-Layer AHB BusMatrix (ML-AHB 버스 매트릭스를 위한 슬레이브 중심 중재 방식의 성능 분석)

  • Hwang, Soo-Yun;Park, Hyeong-Jun;Jhang, Kyoung-Son
    • Journal of KIISE: Computer Systems and Theory / v.34 no.5_6 / pp.257-266 / 2007
  • In an on-chip bus, the arbitration scheme is one of the critical factors that determine overall system performance. The arbitration scheme used in a traditional shared bus is master-side arbitration, based on request and grant signals between multiple masters and a single arbiter. With master-side arbitration, only one master and one slave can transfer data at a time, so the throughput of the total bus system and the utilization of resources are reduced. In slave-side arbitration, by contrast, there is an arbiter at each slave port, and a master simply starts a transaction and waits for the slave response before proceeding to the next transfer. Thus, the unit of arbitration can be a transaction or a single transfer. Moreover, the throughput of the total bus system and the utilization of resources increase, since multiple masters can perform transfers with independent slaves simultaneously. In this paper, we implement and analyze arbitration schemes for the Multi-Layer AHB BusMatrix based on slave-side arbitration. We implement slave-side arbitration schemes based on fixed priority, round robin, and dynamic priority, and carry out performance simulations to compare and analyze the performance of each scheme according to the characteristics of the masters and slaves. From the simulations, we observed that when there are few masters on the critical path of a bus system, the dynamic-priority scheme shows the maximum performance, while in other cases the round-robin scheme performs best. In addition, an arbitration scheme with transaction-based multiplexing shows higher performance than the same scheme with single-transfer-based switching in applications with frequent accesses to long-latency devices or memories such as SDRAM. The improvements from transaction-based multiplexing are 26%, 42%, and 51% when the latency of SDRAM is 1, 2, and 3 clock cycles, respectively.
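Round robin is straightforward to express in software. Below is a minimal, hypothetical Python sketch of a per-slave-port round-robin arbiter of the kind the paper evaluates; the actual design is RTL hardware, and the class and method names here are our own illustrative assumptions, not the paper's implementation.

```python
class RoundRobinArbiter:
    """Minimal sketch of a per-slave-port round-robin arbiter.

    In slave-side arbitration, each slave port of the bus matrix owns
    one arbiter, so several masters can win different slaves in the
    same cycle. Names are illustrative, not the paper's RTL.
    """

    def __init__(self, num_masters: int):
        self.num_masters = num_masters
        self.last_granted = num_masters - 1  # so master 0 has priority first

    def grant(self, requests: list[bool]) -> int | None:
        """Return the index of the granted master, or None if no requests.

        The search starts just after the previously granted master,
        giving every requester a bounded waiting time (fairness).
        """
        for offset in range(1, self.num_masters + 1):
            candidate = (self.last_granted + offset) % self.num_masters
            if requests[candidate]:
                self.last_granted = candidate
                return candidate
        return None


# Example: three masters, masters 1 and 2 requesting the same slave port.
arbiter = RoundRobinArbiter(num_masters=3)
print(arbiter.grant([False, True, True]))  # -> 1
print(arbiter.grant([False, True, True]))  # -> 2 (priority rotated)
```

Because each slave port owns its own arbiter, several masters can be granted different slaves in the same cycle, which is the throughput advantage of slave-side arbitration described above.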

Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory (조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로)

  • Jeong, Seung-Ryul;Bae, Uk-Ho
    • Asia Pacific Journal of Information Systems / v.22 no.2 / pp.21-38 / 2012
  • Until recently, successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognizes the need to identify disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages," that is, software programs developed by independent software vendors for sale to the organizations that use them, they are designed to meet the general needs of numerous organizations rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the potential implementation risks commonly associated with custom-developed software. However, it is also true that the nature of the package itself can be a risk factor, as the features and functions of the ERP system may not completely comply with a particular organization's informational requirements. In this study, based on the organizational memory mismatch perspective, derived from organizational memory theory and cognitive dissonance theory, we define the nature of these disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is one of the primary determinants of the successful implementation of ERP systems. Furthermore, we suggest that customization efforts, as a coping strategy for mismatches, can play a significant role in increasing the possibility of success. To examine this contention, we employed a survey-based field study of ERP project team members, resulting in a total of 77 responses. The results show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems has a significantly negative impact on the implementation success of ERP systems. This finding confirms our hypothesis that the more mismatch there is, the more difficult successful ERP implementation becomes, and thus more attention should be drawn to mismatch as a major source of failure in ERP implementation. This study also found that the effects of customization, as a coping strategy for mismatch, are significant. In other words, utilizing an appropriate customization method can lead to the successful implementation of ERP systems. This is somewhat interesting because it runs counter to the argument of some of the literature, and of ERP vendors, that minimized customization (or even the lack thereof) is required for successful ERP implementation. In many ERP projects, there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." However, this study asserts that we cannot expect successful implementation if we do not attempt to customize ERP systems when mismatches exist. For a more detailed analysis, we identified three types of mismatches: Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (a situation in which ERP systems cannot support the existing information needs that are currently fulfilled) were found to have a direct influence on the implementation of ERP systems. Neither Non-Procedure nor Hybrid mismatches were found to have a significant impact in the ERP context.
These findings provide meaningful insights, since they can serve as the basis for discussing how the ERP implementation process should be defined and what activities should be included in it. They show that ERP developers may not want to include organizational (or business process) changes in the implementation process, suggesting that doing so could lead to failed implementation. In fact, this suggestion eventually proved true when we found that the application of process customization led to a higher possibility of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch we need to focus on during the implementation process, implying that organizational changes must be made before, rather than during, the implementation process. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly again, this finding is not in line with the thinking of vendors in the ERP industry. The vendors' recommendation is to apply as many best practices as possible, thereby minimizing both customization and the use of bolt-on development methods. They particularly advise against changing the source code and instead recommend employing, when necessary, the method of programming additional software code using the vendor's computer language. As previously stated, however, our study found active customization, especially bolt-on development methods, to have positive effects on ERP, and found source code changes in particular to have the most significant effects. Moreover, our study found programming additional software to be ineffective, suggesting that ERP developers and vendors differ considerably in their viewpoints and strategies toward ERP customization. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success. Considering the significance of mismatches, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.

Specifying the Characteristics of Tangible User Interface: centered on the Science Museum Installation (실물형 인터렉션 디자인 특성 분석: 과학관 체험 전시물을 대상으로)

  • Cho, Myung Eun;Oh, Myung Won;Kim, Mi Jeong
    • Science of Emotion and Sensibility / v.15 no.4 / pp.553-564 / 2012
  • Tangible user interfaces have been developed in the field of Human-Computer Interaction over the last few decades; recently, their application domains have extended into product design and interactive art. Tangible user interfaces combine digital information with physical objects or environments, and thus provide tangible and intuitive interaction as input and output devices, often combined with Augmented Reality. This research developed a design guideline for tangible user interfaces based on key properties defined in five representative studies: Tangible Interaction, Intuitiveness and Convenience, Expressive Representation, Context-aware and Spatial Interaction, and Social Interaction. Using the guideline, which emphasizes user interaction, this research evaluated installations in a science museum in terms of the tangible-user-interface characteristics they apply. The 15 selected installations are intended to educate visitors about science by emphasizing manipulation and direct experience of their interfaces. According to their input devices, the installations are categorized into four types. TUI properties in Type 3 installations, which use body motion for interaction, showed the highest scores, with the items for context-aware and spatial interaction rated particularly highly. Context-aware and spatial interaction have recently been emphasized as extended properties of tangible user interfaces. The predominant type of installation in the science museum is equipped with buttons and joysticks for physical manipulation; thus, multimodal interfaces utilizing visual, aural, tactile, and other senses need to be developed to provide more innovative interaction. Further, more installations need to be reconfigurable to support embodied interaction between users and the interactive space. The proposed design guideline can specify the characteristics of tangible user interfaces, so this research can serve as a basis for the development and application of installations involving more TUI properties in the future.

Development and Application of an After-school Program for an Astronomy Observation Club in a Highschool: Standardized Coefficient Decision Program in Consideration of the Observation Site's Environment (고등학교 천체 관측 동아리를 위한 방과 후 학교 프로그램 개발 및 적용: 관측지 주변 환경을 고려한 표준화 계수 결정 프로그램)

  • Kim, Seung-Hwan;Lee, Hyo-Nyong;Lee, Hyun-Dong;Jeong, Jae-Hwa
    • Journal of the Korean Earth Science Society / v.29 no.6 / pp.495-505 / 2008
  • The main purposes of this study are: (1) to develop an astronomy observation program based on a standardized coefficient decision program; and (2) to apply the developed program to after-school or club activities. As a first step, we analyzed astronomy-related activities in the authorized textbooks currently adopted in high schools. Based on this analysis, we developed an astronomy observation program built around the standardized coefficient decision program, and the program was applied to students' astronomical observations as part of their club activities. Specifically, the program used a 102 mm refracting telescope and a digital camera. We took into account the observing environment of the urban areas in which many schools are located and developed a computer program for the observation activities. The results of this study are as follows. First, current astronomy education in schools is based on the textbooks; specifically, it consists mostly of analyzing materials and running simulated experiments. Second, most schools that participated in this study were located in urban areas, where observation is more difficult than in rural areas. Third, we investigated an effective method for making astronomical observations in urban areas with existing equipment. In addition, the standardized coefficient decision program was developed to standardize the magnitudes of stars from the observed values. Finally, based on the students' observations, we found no difference between stellar magnitudes standardized at urban sites and at rural sites. Current astronomy education in schools lacks practical experimental activities, and many schools do not have good observing sites because they are located in urban areas. However, using this program makes it possible to collect meaningful data after a series of standardized corrections. In conclusion, this program not only helps schools run active, field-based astronomy observation activities, but also encourages students to become more interested in astronomical observation through such field-based activities.
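The abstract does not spell out the standardization itself, but a common approach, and presumably the spirit of the coefficient program, is to fit a zero point and a site-dependent extinction coefficient from reference stars of known magnitude and then apply them to targets. A minimal Python sketch under that assumption (all function names and numbers below are hypothetical illustrations, not the paper's program):

```python
import numpy as np

def fit_standardization(inst_mags, catalog_mags, airmasses):
    """Fit m_catalog = m_inst + zp - k * X by least squares.

    zp is the zero point and k the site's extinction coefficient
    (X = airmass). A hypothetical stand-in for the 'standardized
    coefficient decision' step, using reference stars.
    """
    # Unknowns [zp, k] in (m_catalog - m_inst) = zp - k * X.
    A = np.column_stack([np.ones_like(airmasses), -np.asarray(airmasses)])
    b = np.asarray(catalog_mags) - np.asarray(inst_mags)
    (zp, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    return zp, k

def standardize(inst_mag, airmass, zp, k):
    """Convert an instrumental magnitude to the standard system."""
    return inst_mag + zp - k * airmass

# Reference stars: instrumental magnitude, catalog magnitude, airmass.
zp, k = fit_standardization([-7.2, -6.8, -5.9], [4.1, 4.6, 5.4], [1.1, 1.3, 1.6])
print(standardize(-6.5, 1.2, zp, k))  # a target star's standardized magnitude
```

Once the coefficients are fixed for a given site and night, the same correction applies to every target frame, which is what would let urban and rural measurements of the same star agree after standardization.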

Usefulness of Data Mining in Criminal Investigation (데이터 마이닝의 범죄수사 적용 가능성)

  • Kim, Joon-Woo;Sohn, Joong-Kweon;Lee, Sang-Han
    • Journal of Forensic and Investigative Science / v.1 no.2 / pp.5-19 / 2006
  • Data mining is an information extraction activity that discovers hidden facts contained in databases. Using a combination of machine learning, statistical analysis, modeling techniques, and database technology, data mining finds patterns and subtle relationships in data and infers rules that allow the prediction of future results. Typical applications include market segmentation, customer profiling, fraud detection, evaluation of retail promotions, and credit risk analysis. Law enforcement agencies deal with massive amounts of data when investigating crimes, and the volume is growing as data processing becomes increasingly computerized. We now face the new challenge of discovering knowledge in these data. Data mining can be applied in criminal investigation to find offenders by analyzing complex, relational data structures and free text such as criminal records or statements. This study aimed to evaluate the possible applications, and the limitations, of data mining in practical criminal investigation. Clustering of criminal cases to identify crime patterns is feasible for habitual crimes such as fraud and burglary. Neural network modeling, one of the tools of data mining, can be applied to matching a suspect's photograph or handwriting against those of convicts, or to criminal profiling. A case study of insurance fraud showed that data mining is useful against organized crimes such as gang activity, terrorism, and money laundering. However, the products of data mining in criminal investigation should be evaluated with caution, because data mining offers clues rather than conclusions. Legal regulation is needed to control abuse by law enforcement agencies and to protect personal privacy and human rights.
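To make the clustering idea concrete, here is a minimal, hypothetical sketch of grouping burglary cases by simple modus-operandi features with k-means; the feature set and data are invented for illustration and are not from the study:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: group cases by similarity of their feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each case to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # Move each center to the mean of its assigned cases.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical burglary features: [hour of day / 24, forced entry?, value tier].
cases = np.array([
    [0.10, 1, 2], [0.12, 1, 2], [0.90, 0, 1],
    [0.88, 0, 1], [0.50, 1, 3], [0.52, 1, 3],
], dtype=float)
print(kmeans(cases, k=3))  # cases with similar patterns share a label
```

Cases sharing a cluster label would be candidates for a common offender or series, which an investigator would then verify; as the abstract cautions, the clusters are clues, not conclusions.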

A Study on the Health Insurance Management System; With Emphasis on the Management Operating Cost (의료보험 관리체계에 대한 연구 - 관리비용을 중심으로 -)

  • Nam, Kwang-Seong
    • Korean Journal of Health Education and Promotion / v.6 no.2 / pp.23-39 / 1989
  • There has been considerable discussion and debate surrounding the management model for the health insurance system and the associated management operating cost, and it is well known that dissenting opinions have persisted around this issue. The management operating cost varies according to the scale of the management organization and the characteristics of the insurance carrier's membership. Therefore, it is necessary to examine and compare the management operating costs of simulated management models developed to cover those eligible for the health insurance scheme in this country. Since the management operating cost can vary across management models, four alternative models were established based on a critical evaluation of existing theories as well as on survey results and simulation attempts. The first is the Unique Insurance Carrier Model (I), designed to cover the whole population nationwide with no classification of insurance qualifications or finances by source of contribution. The second is the Management Model of Large-scale District Insurance Carriers (II), in which the country would be divided into 21 large districts, each with its own carrier covering the people of that district, again with no classification of qualifications or finances, as in Model I. The third is the Management Model of Insurance Carriers Divided by Area and Classified by Occupation in Large Scale (III), which would serve the self-employed in the 21 districts of Model II and serve employees and their dependents through separate large-scale carriers of a similar district scale, so that insurance qualifications and finances would be classified by carrier. The last is the Management Model of Multiple Insurance Carriers (IV), based on the Si/Gun/Gu areas, in which each area covers its own self-employed people, with more than 150 additional carriers covering employees and their dependents. The manpower necessary to serve the entire population under each of the four models was calculated through simulation trials; the Management Model of Large-scale District Insurance Carriers requires the most manpower among the four alternatives. The unit management operating costs per insured individual and per covered person were leveled into several intervals based on the characteristics of the insurance recipients. The interval levels derived from regression analysis reveal that the larger the carrier, in numbers of insured and covered persons, the more the unit management operating cost significantly decreases; moreover, a quadratic functional form shows a significant U-shape. The management operating costs derived from the simulated calculations, based on the average salary and related cost per staff member of the Health Insurance Societies for Occupational Labourers and the Korean Medical Insurance Corporation for Official Servants and Private School Teachers in fiscal year 1987, show that the Model of Multiple Insurance Carriers has the highest management operating cost, while the least expensive is the Unique Insurance Carrier Model, followed by the Model of Insurance Carriers Divided by Area and Classified by Occupation in Large Scale and the Large-scale District Insurance Carrier Model, in that order. Therefore, it is feasible to select the Unique Insurance Carrier Model among the four alternatives from the viewpoint of the management operating cost, and in the sense of flexibility in promoting the productivity of manpower in the human services field. However, the choice of a management model for the health insurance system and its application should be examined further using operations research analysis in areas such as administrative efficiency and factors related to computer cost.
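The U-shaped cost curve mentioned above comes from fitting a quadratic in carrier scale. A minimal sketch of that kind of fit follows; the numbers are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical data: carrier scale (thousands of covered persons) and
# unit management operating cost per covered person (arbitrary units).
scale = np.array([10, 20, 50, 100, 200, 400, 800], dtype=float)
unit_cost = np.array([9.0, 7.1, 5.0, 3.9, 3.4, 3.6, 4.8])

# Fit unit_cost = a * scale^2 + b * scale + c; a > 0 indicates a U-shape.
a, b, c = np.polyfit(scale, unit_cost, deg=2)
optimum = -b / (2 * a)  # the scale that minimizes the unit cost
print(f"a = {a:.2e} (U-shape if positive); cost-minimizing scale ~ {optimum:.0f}k")
```

A positive quadratic coefficient means unit costs fall with scale up to some point and rise beyond it, which is the economies-of-scale argument behind preferring fewer, larger carriers.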

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.39-60 / 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 changed who creates content: in the earlier web, content creators were service providers, whereas in the recent web they are service users. Users share experiences with other users and improve content quality, which has increased the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network for constructing and expressing social relations among people who share interests and activities. Today's social network services are not merely confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated from social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. First, the representational power of objects in the social network is insufficient. Second, the diverse connections among users cannot be fully expressed. Third, it is difficult to capture dynamic change in the social network caused by changes in user interests. Lastly, there is a lack of methods capable of integrating and processing data efficiently in heterogeneous distributed computing environments. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. Solving the second and third problems, however, requires a novel technique that reflects dynamic changes in user interests and relations. In this paper, we propose a method to overcome these problems of existing social network extraction by applying FOAF and RSS to an OLAP system in order to dynamically update and manage FOAF. We exploit data interoperability, an important characteristic of FOAF, and use RSS to reflect changes over time and in user interests. RSS, a web content syndication format, provides a standard vocabulary for distributing site and content updates in RDF/XML form. We collect users' personal information and relations via FOAF, collect user content via RSS, and insert the collected data into a database organized as a star schema. The proposed system generates an OLAP cube from the data in the database, and the Dynamic FOAF Management Algorithm processes the generated cube. The algorithm consists of two functions: find_id_interest(), which extracts user interests during the input period, and find_relation(), which extracts users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests of the users. To justify the suggested idea, we present the implemented results together with their analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com. The implemented results show that users' foaf:interest entries increased by an average of 19 percent over four weeks, and, in proportion to this change, users' foaf:knows entries grew by an average of 9 percent over the same period. Because FOAF and RSS, our basic data, are widely supported in Web 2.0 and social network services, the method has a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language or platform. Using the method suggested in this paper, better services can be provided that cope with rapid changes in user interests through the automatic updating of FOAF.
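A minimal, hypothetical Python sketch of the two functions named above, operating on a toy stand-in for the OLAP cube (the paper's implementation is in C# over MS-SQL; the fact-row layout here is our own assumption):

```python
from collections import Counter

# Toy stand-in for the OLAP cube: (user, interest, week) fact rows,
# as would be aggregated from FOAF profiles and RSS posts.
facts = [
    ("alice", "python", 1), ("alice", "python", 2), ("alice", "opera", 1),
    ("bob",   "python", 2), ("bob",   "hiking", 2),
    ("carol", "opera",  1), ("carol", "python", 2),
]

def find_id_interest(facts, user, weeks):
    """Extract a user's interests observed during the input period."""
    counts = Counter(i for u, i, w in facts if u == user and w in weeks)
    return set(counts)

def find_relation(facts, user, weeks):
    """Find other users whose interests in the period overlap the user's;
    these become candidate foaf:knows links when FOAF is rebuilt."""
    mine = find_id_interest(facts, user, weeks)
    others = {u for u, _, w in facts if u != user and w in weeks}
    return {u for u in others if find_id_interest(facts, u, weeks) & mine}

print(find_id_interest(facts, "alice", {1, 2}))  # {'python', 'opera'}
print(find_relation(facts, "alice", {1, 2}))     # {'bob', 'carol'}
```

Re-running the two functions over a sliding time window is what lets the reconstructed FOAF track interest drift, which is the dynamic-management point of the paper.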

Pyrite Content using Quantitative X-Ray Diffraction Analysis and Its Application to Prediction of Acid Rock Drainage (정량 X-선회절분석을 이용한 황철석 함량 결정과 산성 암반 배수 발생 평가에의 응용)

  • Chon, Chul-Min;Kim, Jae-Gon;Lee, Gyoo-Ho
    • Journal of the Mineralogical Society of Korea / v.19 no.2 s.48 / pp.71-80 / 2006
  • We examined the mineralogical composition of pyrite-bearing rocks by quantitative powder X-ray diffraction analysis, using the matrix-flushing method and ROCKJOCK (a full-pattern-fitting computer program). The neutralization potential (NP) and acid generating potential (AP) were calculated on the basis of the mineralogical compositions. The mineralogical AP was compared with the conventional AP calculated from the bulk sulfur concentration to assess its applicability to the prediction of acid rock drainage (ARD). The pyrite contents calculated by the matrix-flushing method showed a high positive correlation ($r^2$ = 0.95) with those from ROCKJOCK, and were on average 1.45 times larger. The pyrite contents and mineralogical AP obtained by the matrix-flushing method correlated better ($r^2$ = 0.98) with the total sulfur concentrations in all samples except the KB sample. The mineralogical NPs of the YJ sample were 23.0 and 34.0 (kg $CaCO_3$ equivalent per tonne) by the matrix-flushing method and ROCKJOCK, respectively. The APs calculated by the matrix-flushing method and the ROCKJOCK program were 47% and 72%, respectively, of those obtained by the conventional ABA test. We therefore suggest that quantitative analysis using XRD data can be applied to the prediction of ARD. For a more reliable calculation of the mineralogical NP and AP, other sulfide and carbonate minerals, such as pyrrhotite, dolomite, ankerite, siderite, and rhodochrosite, which can affect the mineralogical NP and AP, should be considered.
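For context, conventional acid-base accounting converts sulfur content to AP as AP = 31.25 × S(wt%), in kg $CaCO_3$ equivalent per tonne, and a mineralogical AP can be derived the same way from the XRD pyrite fraction. A minimal sketch under that standard convention (sample values invented, not the study's data):

```python
# Pyrite (FeS2) is ~53.4 wt% sulfur: 2 * 32.07 / (55.85 + 2 * 32.07).
S_FRACTION_IN_PYRITE = 2 * 32.07 / (55.85 + 2 * 32.07)

def ap_from_sulfur(sulfur_wt_pct: float) -> float:
    """Conventional AP in kg CaCO3 equivalent per tonne: 31.25 x S(wt%)."""
    return 31.25 * sulfur_wt_pct

def ap_from_pyrite(pyrite_wt_pct: float) -> float:
    """Mineralogical AP: convert the quantitative-XRD pyrite content to an
    equivalent sulfur content, then apply the same 31.25 factor."""
    return 31.25 * pyrite_wt_pct * S_FRACTION_IN_PYRITE

# Invented sample: 2.0 wt% pyrite by quantitative XRD, 1.5 wt% bulk S.
print(f"mineralogical AP: {ap_from_pyrite(2.0):.1f} kg CaCO3/t")
print(f"conventional  AP: {ap_from_sulfur(1.5):.1f} kg CaCO3/t")
```

Comparing the two numbers for the same sample is essentially the comparison the study makes: the mineralogical AP counts only the sulfur bound in acid-generating sulfides, whereas the bulk-sulfur AP also counts sulfur from non-acid-generating phases.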

The Evaluation of Reconstructed Images in 3D OSEM According to Iteration and Subset Number (3D OSEM 재구성 법에서 반복연산(Iteration) 횟수와 부분집합(Subset) 개수 변경에 따른 영상의 질 평가)

  • Kim, Dong-Seok;Kim, Seong-Hwan;Shim, Dong-Oh;Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.1 / pp.17-24 / 2011
  • Purpose: In the nuclear medicine field, high-speed iterative reconstruction algorithms such as OSEM are now widely used as an alternative to the filtered back projection method, thanks to the rapid development and adoption of digital computers. However, there is no clear rule for choosing the optimal reconstruction parameters. In this study, we analyzed how image quality changes with the number of iterations and the number of subsets in a 3D OSEM reconstruction that applies 3D beam modeling, using a Jaszczak phantom experiment and brain SPECT patient data. Materials and Methods: We analyzed data from five patients who underwent brain SPECT between August and September 2010 in the nuclear medicine department of ASAN Medical Center. Phantom images were acquired from a Jaszczak phantom filled with water and 99mTc (500 MBq) on a Siemens Symbia T2 dual-head gamma camera. In reconstructing both the patient and phantom data, we varied the number of iterations over 1, 4, 8, 12, 24, and 30 and the number of subsets over 2, 4, 8, 16, and 32. For each reconstructed image, the coefficient of variation (as an estimate of image noise), the image contrast, and the FWHM were calculated and compared. Results: In both the patient and phantom data, image contrast and spatial resolution tended to improve linearly with increasing numbers of iterations and subsets, but the coefficient of variation did not improve with either parameter. In the comparison by scan time (10, 20, and 30 seconds per projection), image contrast and FWHM likewise improved linearly with the numbers of iterations and subsets, whereas the coefficient of variation again showed no improvement. Conclusion: This experiment confirms that, in 3D OSEM reconstruction with 3D beam modeling, image contrast improves linearly with the numbers of iterations and subsets, as in the existing 1D and 2D OSEM reconstruction methods. However, this is a simple phantom experiment combined with results from a limited number of patients, and various other variables may exist; generalizing from these results would therefore be premature, and 3D OSEM reconstruction should be evaluated further in subsequent experiments.
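For reference, the iteration and subset counts studied here parameterize the standard OSEM update (Hudson and Larkin's formulation), where $f_j$ is voxel $j$, $p_i$ is projection bin $i$, $a_{ij}$ is the system matrix, and $S_b$ is the $b$-th subset of projections:

$$ f_j^{(n,\,b+1)} \;=\; \frac{f_j^{(n,\,b)}}{\sum_{i \in S_b} a_{ij}} \sum_{i \in S_b} a_{ij}\, \frac{p_i}{\sum_{k} a_{ik}\, f_k^{(n,\,b)}} $$

One pass through all $B$ subsets constitutes one iteration, so the effective number of updates grows with the product of iterations and subsets, which is consistent with the observation above that contrast and resolution keep improving as either parameter increases.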

A Study on Protection Performance of Radiation Protective Aprons classified by Manufacturers and Lead Equivalent using Over Tube Type Fluoroscopy (Over Tube Type의 투시촬영장치를 이용한 제조사별, 납당량별 엑스선방어 앞치마의 Protection 성능 평가에 관한 연구)

  • Song, Jong-Nam;Seol, Gwang-Wook;Hong, Seong-Il;Choi, Jeong-Gu
    • Journal of the Korean Society of Radiology / v.5 no.3 / pp.135-141 / 2011
  • If the protective performance of an apron is poor, the radiation exposure of patients, their guardians, and radiation workers can increase. We therefore evaluated the protective performance of radiation protective aprons of 0.25 mmPb lead equivalent and above, by manufacturer and by lead equivalent, and suggest directions for clinical application. For new aprons from four manufacturers (H, X, I, and J) and three lead equivalents (0.50, 0.35, and 0.25 mmPb), the transmitted dose rate, shielding rate, and uniformity were measured under fluoroscopy and general radiography using a fluoroscopy system, a digital radiography system, and an X-ray multifunction meter. In the shielding rate measurements, among the six 0.50 mmPb aprons the I company apron was best (fluoroscopy: 97.96%) and the J company apron was worst (fluoroscopy: 96.25%). Among the three 0.35 mmPb aprons, the I company apron was best (fluoroscopy: 96.79%) and the H company apron was worst (fluoroscopy: 95.81%). Of the two 0.25 mmPb aprons, the X company apron (fluoroscopy: 90.908%) performed better than the H company apron (fluoroscopy: 88.82%). In the uniformity measurements, among the six 0.50 mmPb aprons the X company (fluoroscopy: 0.13) and I company (fluoroscopy: 0.19) aprons were best and the J company apron (fluoroscopy: 0.45) was worst. 0.35mmPb. Protective performance clearly differs by manufacturer and by lead equivalent. Therefore, aprons should be tested and evaluated in a variety of defined ways so that the radiation exposure of patients, guardians, and radiation workers is minimized, and aprons with good protective performance should be purchased on the basis of such experiments and evaluations.
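The shielding rate reported above is computed from the incident and transmitted doses; a minimal sketch of that calculation follows (the dose values are invented for illustration):

```python
def shielding_rate(incident_dose: float, transmitted_dose: float) -> float:
    """Shielding rate in percent: the fraction of the incident dose that
    the apron stops, (1 - transmitted/incident) * 100."""
    return (1.0 - transmitted_dose / incident_dose) * 100.0

# Invented example: 2.50 mGy incident, 0.051 mGy behind a 0.50 mmPb apron.
print(f"{shielding_rate(2.50, 0.051):.2f}%")  # ~97.96%
```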