• Title/Summary/Keyword: data memory


The Adoption and Diffusion of Semantic Web Technology Innovation: Qualitative Research Approach (시맨틱 웹 기술혁신의 채택과 확산: 질적연구접근법)

  • Joo, Jae-Hun
    • Asia Pacific Journal of Information Systems, v.19 no.1, pp.33-62, 2009
  • Internet computing is a disruptive IT innovation. The Semantic Web can be considered an IT innovation because Semantic Web technology has the potential to reduce information overload and enable semantic integration, using capabilities such as semantics and machine-processability. How should organizations adopt the Semantic Web? What factors affect the adoption and diffusion of Semantic Web innovation? Most studies on the adoption and diffusion of innovation use empirical analysis as a quantitative research methodology in the post-implementation stage. There is criticism that the positivist approach, with its demand for theoretical rigor, can sacrifice relevance to practice. Rapid advances in technology require studies relevant to practice. In particular, a quantitative approach to the factors affecting adoption of the Semantic Web is realistically impossible because the Semantic Web is in its infancy. However, at this early stage of introduction of the Semantic Web, it is necessary to offer practitioners and researchers a model and some guidelines for the adoption and diffusion of this technology innovation. Thus, the purpose of this study is to present a model of adoption and diffusion of the Semantic Web and to offer propositions as guidelines for successful adoption through a qualitative research method including multiple case studies and in-depth interviews. The researcher conducted 15 face-to-face interviews and 2 interviews by telephone and e-mail to collect data until the categories were saturated. Nine interviews, including the 2 telephone interviews, were with nine user organizations adopting the technology innovation, and the others were with three supply organizations. Semi-structured interviews were used to collect data. The interviews were recorded on digital voice recorder memory and subsequently transcribed verbatim; 196 pages of transcripts were obtained from about 12 hours of interviews. Triangulation of evidence was achieved by examining each organization's website and various documents, such as brochures and white papers. The researcher read the transcripts several times and underlined core words, phrases, or sentences. Data analysis then followed the procedure of open coding, in which the researcher forms initial categories of information about the phenomenon being studied by segmenting the information. QSR NVivo version 8.0 was used to categorize sentences containing similar concepts. The 47 categories derived from the interview data were grouped into 21 categories, from which six factors were named. Five factors affecting adoption of the Semantic Web were identified. The first factor is demand pull, including requirements for improving the search and integration services of existing systems and for creating new services. Second, environmental conduciveness, reference models, uncertainty, technology maturity, potential business value, government sponsorship programs, promising prospects for technology demand, complexity, and trialability affect the adoption of the Semantic Web from the perspective of technology push. Third, absorptive capacity plays an important role in adoption. Fourth, supplier's competence includes communication with and training for users, and the absorptive capacity of the supply organization. Fifth, over-expectation, which results in a gap between users' expectation levels and perceived benefits, has a negative impact on the adoption of the Semantic Web. Finally, a factor comprising critical mass of ontology, budget, and
visible effects is identified as a determinant affecting routinization and infusion. The researcher suggested a model of adoption and diffusion of the Semantic Web, representing the relationships between the six factors and adoption/diffusion as dependent variables. Six propositions are derived from the adoption/diffusion model to offer guidelines to practitioners and a research model for further studies. Proposition 1: Demand pull has an influence on the adoption of the Semantic Web. Proposition 1-1: The stronger the requirements for improving existing services, the more successfully the Semantic Web is adopted. Proposition 1-2: The stronger the requirements for new services, the more successfully the Semantic Web is adopted. Proposition 2: Technology push has an influence on the adoption of the Semantic Web. Proposition 2-1: From the perspective of user organizations, technology push forces such as environmental conduciveness, reference models, potential business value, and government sponsorship programs have a positive impact on the adoption of the Semantic Web, while uncertainty and lower technology maturity have a negative impact on its adoption. Proposition 2-2: From the perspective of suppliers, technology push forces such as environmental conduciveness, reference models, potential business value, government sponsorship programs, and promising prospects for technology demand have a positive impact on the adoption of the Semantic Web, while uncertainty, lower technology maturity, complexity, and lower trialability have a negative impact on its adoption. Proposition 3: Absorptive capacities such as organizational formal support systems, officers' or managers' competency in analyzing technology characteristics, their passion or willingness, and top management support are positively associated with successful adoption of the Semantic Web innovation from the perspective of user organizations. Proposition 4: Supplier's competence has a positive impact on the absorptive capacities of user organizations and on technology push forces. Proposition 5: The greater the expectation gap between users and suppliers, the later the Semantic Web is adopted. Proposition 6: Post-adoption activities such as budget allocation, reaching critical mass, and sharing ontology to offer sustainable services are positively associated with successful routinization and infusion of the Semantic Web innovation from the perspective of user organizations.

A Comparative Study of Korean Home Economic Curriculum and American Practical Problem Focused Family & Consumer Sciences Curricula (우리나라 가정과 교육과정과 미국의 실천적 문제 중심 교육과정과의 비교고찰)

  • Kim, Hyun-Sook;Yoo, Tae-Myung
    • Journal of Korean Home Economics Education Association, v.19 no.4, pp.91-117, 2007
  • This study compared the contents and practical problems addressed, the teaching-learning processes, and the evaluation methods of the Korean Home Economics curriculum and of Oregon's and Ohio's Practical Problem Focused Family & Consumer Sciences curricula. The results are as follows. First, the contents of the Korean curriculum are organized by the major sub-concepts of the Home Economics academic discipline, whereas the curricula of both Oregon and Ohio are organized by practical problems. Oregon uses practical problems that integrate multiple subjects, while Ohio uses ones suited to the contents of each module by integrating lower-level or more detailed concerns and interests (related interests). Since it differentiates interests from modules and uses them based on the basic concepts of Family and Consumer Sciences, Ohio's approach could be easier for Korean teachers and students to adopt. Second, the teaching-learning process in the Korean home economics classroom is mostly teacher-centered, which hinders students from developing higher-order thinking skills; it is recommended to use student-centered learning activities. The teaching-learning processes of Oregon and Ohio develop problem-solving ability by having students clearly analyze the practical problems proposed, solve the problems by themselves through group discussions and various activities, and apply what they learn to other problems. Third, the Korean evaluation system relies heavily on summative evaluation such as written tests; it is highly recommended to use various performance assessment tools. Since Oregon and Ohio both use practical problems, they evaluate students mainly on their activities rather than on written tests. The tools for evaluation include project documents, reports of learning activities, self-evaluation, evaluation of discussion activities, peer evaluation of each student's performance within a group, assessment of the module, and written tests as well.


Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.159-172, 2010
  • The recommender system is one of the possible solutions for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, we do not know much about how CF actually works. Furthermore, the relative performances of CF algorithms are known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predicting the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations. An ANN model is developed through an analysis of network topology, including network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how much denser the social network is than it barely needs to be to keep the social group even indirectly connected. We use these social network measures as the input variables of the ANN model. As the output variable, we use the recommendation accuracy measured by the F1-measure. In order to evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, test, and validation, respectively. Five-fold cross validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns.
We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has an estimated accuracy of 92.61% and an RMSE of 0.0049. Thus, our prediction model helps decide whether CF is useful for a given application with certain data characteristics.
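
  As a rough illustration of the pipeline described above, the following minimal Python sketch computes several of the named network measures (density, inclusiveness, clustering coefficient, and degree centralization; Krackhardt's efficiency is omitted for brevity) and feeds them to a small neural network that predicts F1. The edge lists, F1 values, and the use of networkx/scikit-learn are illustrative assumptions; the study itself used NetMiner, UCINET, and Clementine.

    # Sketch: predict CF recommendation accuracy (F1) from social network measures.
    import networkx as nx
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def network_measures(edges):
        """Compute the topology measures used as ANN inputs."""
        G = nx.Graph()
        G.add_edges_from(edges)
        n = G.number_of_nodes()
        density = nx.density(G)                # links / max possible links
        clustering = nx.average_clustering(G)  # localized pockets of connectivity
        # Proportion of non-isolated nodes (1.0 here, since only connected
        # nodes appear in an edge list).
        inclusiveness = sum(1 for _, d in G.degree() if d > 0) / n
        degrees = [d for _, d in G.degree()]
        max_deg = max(degrees)
        # Freeman degree centralization for an undirected graph
        centralization = (sum(max_deg - d for d in degrees) / ((n - 1) * (n - 2))
                          if n > 2 else 0.0)
        return [density, inclusiveness, clustering, centralization]

    # Hypothetical training samples: customer purchase-relationship networks and
    # the F1 each one yielded when CF was actually run on it.
    samples = [
        ([(1, 2), (2, 3), (3, 1), (3, 4)], 0.42),
        ([(1, 2), (3, 4), (5, 6), (1, 6)], 0.28),
        ([(1, 2), (1, 3), (1, 4), (1, 5)], 0.35),
    ]
    X = np.array([network_measures(e) for e, _ in samples])
    y = np.array([f1 for _, f1 in samples])

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X, y)

    new_net = [(1, 2), (2, 3), (3, 4), (4, 5)]
    print(model.predict([network_measures(new_net)]))  # predicted F1 for a new network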

Comparative Analysis and Performance Evaluation of New Low-Power, Low-Noise, High-Speed CMOS LVDS I/O Circuits (저 전력, 저 잡음, 고속 CMOS LVDS I/O 회로에 대한 비교 분석 및 성능 평가)

  • Byun, Young-Yong;Kim, Tae-Woong;Kim, Sam-Dong;Hwang, In-Seok
    • Journal of the Institute of Electronics Engineers of Korea SC, v.45 no.2, pp.26-36, 2008
  • Due to its differential signaling and low voltage swing, Low Voltage Differential Signaling (LVDS) has been widely used for high-speed data transmission with low power consumption. This paper proposes new LVDS I/O interface circuits for operation at more than 1.3 Gb/s. The LVDS receiver proposed in this paper utilizes a sense amplifier as the pre-amp instead of a conventional differential pre-amp. The proposed LVDS allows a transmission speed of more than 1.3 Gb/s with significantly reduced driver output voltage. Also, in order to further improve power consumption and noise performance, this paper introduces an inductance impedance matching technique which can eliminate the termination resistor. A new form of unfolded impedance matching method has been developed to accomplish impedance matching for LVDS receivers with a sense amplifier as well as with a differential amplifier. The proposed LVDS I/O circuits have been extensively simulated using HSPICE based on 0.35 μm TSMC CMOS technology. The simulation results show that power gain and transmission rate improve by approximately 12% and 18%, respectively.

Association between Subjective Distress Symptoms and Argon Welding among Shipyard Workers in Gyeongnam Province (경남소재 일개조선소 근로자의 건강이상소견과 아르곤 용접과의 관련성)

  • Choi, Woo-Ho;Jin, Seong-Mi;Kweon, Deok-Heon;Kim, Jang-Rak;Kang, Yune-Sik;Jeong, Baek-Geum;Park, Ki-Soo;Hwang, Young-Sil;Hong, Dae-Yong
    • Journal of Korean Society of Occupational and Environmental Hygiene, v.24 no.4, pp.547-555, 2014
  • Objective: This study was conducted to investigate the association between subjective distress symptoms and argon welding among workers at a shipyard in Gyeongnam Province. Method: 31 argon and 29 non-argon welding workers were selected as study subjects in order to measure personal concentrations of dust, welding fumes, and other hazardous materials such as ZnO, Pb, Cr, FeO, MnO, Cu, Ni, TiO2, MgO, NO, NO2, O3, O2, CO2, CO, and Ar. An interviewer-administered questionnaire survey was also performed on the same subjects. The items queried were as follows: age, height, weight, working duration, welding time, amount of welding rod used, drinking, smoking, and the rate of subjective distress symptoms including headache and other symptoms such as fever, vomiting and nausea, metal fume fever, dizziness, tingling sensations, difficulty in breathing, memory loss, sleep disorders, emotional disturbance, hearing loss, hand tremors, visual impairment, neural abnormality, allergic reaction, runny nose and stuffiness, rhinitis, and suffocation. Statistical analysis was performed using SPSS software, version 18. Data are expressed as the mean ± SD. A chi-square test and a normality test using the Shapiro-Wilk test were performed for the above variables. Logistic regression analysis was also conducted to identify the factors that affect the total score for subjective distress symptoms. Result: An association was shown between welding type (argon or non-argon welding) and the total score for subjective distress symptoms. Among the rates of complaints of subjective distress symptoms, those for vomiting and nausea, difficulty breathing, and allergic reactions were all significantly higher in the argon welding group. Only the concentrations of dust and welding fumes were shown to be normally distributed after natural log transformation. According to the logistic regression analysis, the associations of working duration and welding type (argon or non-argon) with the total score for subjective distress symptoms were statistically significant (p=0.041 and p=0.049, respectively). Conclusion: Our results suggest that argon welding could cause subjective distress symptoms in shipyard workers.
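
  For readers who want to reproduce this kind of analysis outside SPSS, the following is a minimal Python sketch of a comparable logistic regression, with entirely hypothetical data and column names (argon, working_duration_yr, high_symptom); it is an illustrative stand-in, not the study's actual model specification.

    # Sketch: logistic regression of a binarized symptom score on welding type
    # and working duration, mirroring the analysis described above.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey records: argon = 1 for argon welders, 0 otherwise;
    # high_symptom = 1 if the total subjective distress score is above the median.
    df = pd.DataFrame({
        "argon":               [1, 1, 1, 0, 0, 0, 1, 0, 1, 0],
        "working_duration_yr": [12, 8, 15, 5, 20, 3, 9, 11, 14, 6],
        "high_symptom":        [1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
    })

    model = smf.logit("high_symptom ~ argon + working_duration_yr", data=df).fit(disp=False)
    print(model.summary())  # coefficients and p-values for each predictor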

A Systematic Review of the Dual-Task Training for Stroke With Hemiplegia (뇌졸중 환자에게 적용한 이중과제 훈련이 미치는 효과에 대한 체계적 고찰)

  • Lee, Yei-Jin;Jung, Min-Ye
    • Therapeutic Science for Rehabilitation, v.5 no.1, pp.23-32, 2016
  • Objective: To investigate current international research identifying the effects of dual-task training in stroke patients with hemiplegia. Methods: We systematically examined journals published from 2007 to 2015 by searching PubMed. A total of 5 studies were selected for analysis. Results: The selected studies were all published in international journals and used two-group experimental designs. In addition, all of the papers had PEDro scores above 6. They used a gait task as the motor task while simultaneously employing cognitive tasks from various domains, ranging from sustained attention to working memory. The outcome measures included standardized assessment tools and operationally defined measures, and further included assessment tools designed specifically for dual-task training, making it possible to assess various aspects of the effect. Discussion: Dual-task training was found to have a positive effect on dual-task performance, as well as improving motor and cognitive function in patients with stroke. However, the studies conducted so far also have limitations. For application in occupational therapy, these results can be considered preliminary data that suggest points to supplement and can serve as evidence for effective treatment of stroke patients with hemiplegia.

Joint Demosaicking and Arbitrary-ratio Down Sampling Algorithm for Color Filter Array Image (컬러 필터 어레이 영상에 대한 공동의 컬러보간과 임의 배율 다운샘플링 알고리즘)

  • Lee, Min Seok;Kang, Moon Gi
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.4, pp.68-74, 2017
  • This paper presents a joint demosaicking and arbitrary-ratio down-sampling algorithm for color filter array (CFA) images. Color demosaicking is a necessary part of the image signal processing pipeline for many types of digital image recording systems that use a single sensor. Also, in devices such as smartphones, the high-resolution image obtained from the image sensor has to be down-sampled to be displayed on the screen. The conventional solution is "demosaicking first, down-sampling later." However, this scheme requires a significant amount of memory and computational cost, and artifacts can be introduced or details damaged during the demosaicking and down-sampling processes. In this paper, we propose a method in which demosaicking and down-sampling work simultaneously. We use inverse mapping of the Bayer CFA and then perform joint demosaicking and down-sampling with an arbitrary-ratio scheme based on decomposing the input data into high- and low-frequency components. Experimental results show that our proposed algorithm achieves better image quality and much lower computational cost than the conventional solution.
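
  The following minimal Python sketch illustrates only the general idea of merging the two steps: each Bayer color plane is interpolated directly onto the target grid, so no full-resolution RGB intermediate is ever built. It is a simplified stand-in and does not implement the paper's inverse-mapping and frequency-decomposition scheme.

    # Sketch: joint demosaicking + arbitrary-ratio down-sampling for an RGGB Bayer CFA.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def joint_demosaic_downsample(cfa, out_h, out_w):
        """cfa: 2-D RGGB mosaic (H x W). Returns an (out_h, out_w, 3) RGB image."""
        H, W = cfa.shape
        # Output pixel centers expressed in input coordinates (arbitrary ratio).
        ys = np.linspace(0, H - 1, out_h)
        xs = np.linspace(0, W - 1, out_w)
        grid_y, grid_x = np.meshgrid(ys, xs, indexing="ij")
        pts = np.stack([grid_y.ravel(), grid_x.ravel()], axis=1)

        def plane(y0, x0):
            """Interpolate the sub-plane sampled at rows y0::2, cols x0::2."""
            sub = cfa[y0::2, x0::2]
            interp = RegularGridInterpolator(
                (np.arange(y0, H, 2), np.arange(x0, W, 2)),
                sub, bounds_error=False, fill_value=None)  # extrapolate at borders
            return interp(pts).reshape(out_h, out_w)

        r = plane(0, 0)
        g = 0.5 * (plane(0, 1) + plane(1, 0))  # average the two green sub-grids
        b = plane(1, 1)
        return np.stack([r, g, b], axis=-1)

    # Usage with a hypothetical 8x8 mosaic down-sampled to 3x5 (arbitrary ratio):
    cfa = np.random.rand(8, 8)
    rgb_small = joint_demosaic_downsample(cfa, 3, 5)
    print(rgb_small.shape)  # (3, 5, 3)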

Activation of the M1 Muscarinic Acetylcholine Receptor Induces GluA2 Internalization in the Hippocampus (쥐 해마에서 M1 무스카린 아세틸콜린 수용체의 활성에 의한 GluA2 세포내이입 연구)

  • Ryu, Keun Oh;Seok, Heon
    • Journal of Life Science, v.25 no.10, pp.1103-1109, 2015
  • Cholinergic innervation of the hippocampus is known to be correlated with learning and memory. The cholinergic agonist carbachol (CCh) modulates synaptic plasticity and produces long-term synaptic depression (LTD) in the hippocampus. However, the exact mechanisms by which the cholinergic system modifies synaptic functions in the hippocampus have yet to be determined. This study describes an acetylcholine receptor-mediated LTD that requires internalization of alpha-amino-3-hydroxy-5-methylisoxazole-4-propionate (AMPA) receptors from the postsynaptic surface, and its intracellular mechanism in the hippocampus. In the present study, we showed that application of the cholinergic agonist CCh reduced the surface expression of GluA2 at synapses and that this reduction was prevented by the M1 muscarinic acetylcholine receptor antagonist pirenzepine in primary hippocampal neurons. The interaction between GluA2 and glutamate receptor-interacting protein 1 (GRIP1) was disrupted in rat hippocampal slices upon CCh stimulation. Under the same conditions, the binding of GluA2 to adaptin-α, a protein involved in clathrin-mediated endocytosis, was enhanced. The current data suggest that the LTD mediated by the acetylcholine receptor requires internalization of the GluA2 subunits of AMPA receptors and that this may be controlled by disruption of GRIP1 binding at the PDZ ligand domain of GluA2. Therefore, we hypothesize that one mechanism underlying the LTD mediated by the M1 mAChR is internalization of the GluA2 AMPAR subunits from the plasma membrane in the hippocampal cholinergic system.

A research on improving client based detection feature by using server log analysis in FPS games (FPS 게임 서버 로그 분석을 통한 클라이언트 단 치팅 탐지 기능 개선에 관한 연구)

  • Kim, Seon Min;Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology, v.25 no.6, pp.1465-1475, 2015
  • Cheating detection models in online games can be divided into two types. The first is the client-based model, which is designed to prevent malicious programs from running while the game is played. The other is the server-based model, which distinguishes benign users from cheaters through server log analysis. The client-based model provides various features to prevent cheating, for instance, anti-reversing and protection against memory manipulation. However, being deployed and operated on the client side is a major weakness, as cheaters can analyze and bypass the detection features. That is why the server-based model is an emerging way to detect cheating users in online games. However, with simple log data such as that of an FPS game, it can be hard to find valid differences between the two groups. In this paper, in order to compensate for the disadvantages of the two detection models above, we use the logs of the existing game security solution as well as the server logs to achieve higher performance and a higher detection ratio than the existing detection models in the market.
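
  A minimal Python sketch of the combined-log idea follows, with entirely hypothetical log schemas, feature names, and labels (the paper does not disclose its actual log fields or detection rules); it only illustrates joining server-side gameplay aggregates with client security-solution aggregates and fitting a simple classifier.

    # Sketch: join server gameplay logs with client security-solution logs and
    # train a classifier on the combined features (all columns hypothetical).
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-user aggregates from the FPS server logs.
    server = pd.DataFrame({
        "account_id":    [1, 2, 3, 4, 5, 6],
        "headshot_rate": [0.21, 0.74, 0.18, 0.69, 0.25, 0.31],
        "kills_per_min": [0.8, 3.1, 0.6, 2.8, 0.9, 1.1],
    })
    # Hypothetical per-user aggregates from the client security solution's logs.
    client = pd.DataFrame({
        "account_id":  [1, 2, 3, 4, 5, 6],
        "alert_count": [0, 5, 0, 3, 1, 0],  # e.g. memory-manipulation alerts
    })
    labels = pd.Series([0, 1, 0, 1, 0, 0], name="cheater")  # known outcomes

    features = server.merge(client, on="account_id").drop(columns="account_id")
    clf = LogisticRegression().fit(features, labels)
    print(clf.predict_proba(features)[:, 1])  # cheating probability per account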

Hardware-Based High Performance XML Parsing Technique Using an FPGA (FPGA를 이용한 하드웨어 기반 고성능 XML 파싱 기법)

  • Lee, Kyu-hee;Seo, Byeong-seok
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.12, pp.2469-2475, 2015
  • Structured XML has been widely used to provide various Web services. XML is also used for digital documents and digital signatures, and for the representation of multimedia files in e-mail systems. An XML document must first be parsed to access its elements, and parsing is the most compute-intensive task in the use of XML documents. Most previous work has focused on hardware-based XML parsers in order to improve parsing performance, while little work has studied the parsing technique itself. We present a high-performance parsing technique that can be used by all XML parsers and design a hardware-based XML parser using an FPGA. The proposed parsing technique uses element analyzers instead of a state machine and performs multibyte-based element matching. As a result, our parsing technique reduces the number of clock cycles per byte (CPB) and does not require any preprocessing, such as loading the XML data into memory. Compared to other parsers, our parser achieves a 1.33 to 1.82 times improvement in system performance. Therefore, the proposed parsing technique can process XML documents in real time and is suitable for application to all XML parsers.
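
  As a software analogy of the element-analyzer idea (matching whole element names at once rather than advancing a state machine one byte per cycle), the following minimal Python sketch scans a buffer for known start tags with multibyte comparisons; the element names and document are hypothetical, and the hardware design itself is not reproduced here.

    # Sketch: match known element names several bytes at a time instead of feeding
    # a per-byte state machine. A hardware element analyzer compares a whole word
    # per clock; here memoryview slices stand in for that wide comparison.
    xml = b"<order><item>book</item><price>12</price></order>"
    ELEMENTS = [b"order", b"item", b"price"]  # known element names (hypothetical)

    def find_elements_multibyte(buf, names):
        """Return (offset, name) for every start tag, matching whole names at once."""
        view, hits, i = memoryview(buf), [], 0
        while i < len(buf):
            if buf[i] == ord("<") and i + 1 < len(buf) and buf[i + 1] != ord("/"):
                for name in names:
                    end = i + 1 + len(name)
                    # One wide comparison per candidate name, not one byte per step.
                    if bytes(view[i + 1:end]) == name and buf[end:end + 1] in (b">", b" "):
                        hits.append((i, name.decode()))
                        break
            i += 1
        return hits

    print(find_elements_multibyte(xml, ELEMENTS))
    # [(0, 'order'), (7, 'item'), (24, 'price')] -- start-tag offsets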