• Title/Summary/Keyword: Test Network


The Role of Social Capital and Identity in Knowledge Contribution in Virtual Communities: An Empirical Investigation (가상 커뮤니티에서 사회적 자본과 정체성이 지식기여에 미치는 역할: 실증적 분석)

  • Shin, Ho Kyoung;Kim, Kyung Kyu;Lee, Un-Kon
    • Asia pacific journal of information systems / v.22 no.3 / pp.53-74 / 2012
  • A challenge in fostering virtual communities is the continuous supply of knowledge, namely members' willingness to contribute knowledge to their communities. Previous research argues that giving away knowledge eventually causes the possessors of that knowledge to lose their unique value to others, benefiting all except the contributor. Furthermore, communication within virtual communities involves a large number of participants with different social backgrounds and perspectives. Establishing the mutual understanding needed to comprehend conversations and foster knowledge contribution is therefore inevitably more difficult in virtual communities than in face-to-face communication within a small group. In spite of these arguments, evidence suggests that individuals in virtual communities do engage in social behaviors such as knowledge contribution. It is therefore important to understand why individuals provide their valuable knowledge to other community members without a guarantee of returns. In virtual communities, knowledge is inherently rooted in individual members' experiences and expertise. This personal nature of knowledge requires social interaction between virtual community members for knowledge transfer. This study employs social capital theory to account for interpersonal relationship factors, and identity theory for the individual and group factors, that may affect knowledge contribution. First, social capital is relational capital that is embedded in the relationships among the participants in a network and is available for use when needed. Social capital is a productive resource, facilitating individuals' actions toward goal attainment. Nahapiet and Ghoshal (1997) identify three dimensions of social capital and explain theoretically how these dimensions affect the exchange of knowledge; social capital is therefore relevant to knowledge contribution in virtual communities. Second, existing research has addressed the importance of identity in facilitating knowledge contribution in a virtual context. Identity in virtual communities has been described as playing a vital role in the establishment of personal reputations and in the recognition of others. For instance, reputation systems that rate participants in terms of the quality of their contributions provide knowledge seekers with a readily available inventory of experts. Despite the growing interest in identity, however, there is little empirical research on how identities in communities influence knowledge contribution. Therefore, the goal of this study is to better understand knowledge contribution by examining the roles of social capital and identity in virtual communities. Based on a theoretical framework of social capital and identity theory, we develop and test a theoretical model and evaluate our hypotheses. Specifically, drawing on social capital theory, we propose cohesiveness, reciprocity, and commitment as antecedents of knowledge contribution in virtual communities. We further posit that members with a strong identity (self-presentation and group identification) contribute more knowledge to virtual communities. We conducted a field study to validate our research model, collecting data from 192 members of virtual communities and using the PLS method to analyze the data. The tests of the measurement model confirm that our data set has appropriate discriminant and convergent validity. The results of testing the structural model show that cohesiveness, reciprocity, and self-presentation significantly influence knowledge contribution, while commitment and group identification do not. Our findings on cohesiveness and reciprocity are consistent with the previous literature. Contrary to our expectations, commitment did not significantly affect knowledge contribution in virtual communities. This result may be due to the fact that knowledge contribution was voluntary in the virtual communities in our sample. Another plausible explanation is self-selection bias among the survey respondents, who are more likely to contribute their knowledge to virtual communities. The relationship between self-presentation and knowledge contribution was found to be significant, supporting the results of prior literature. Group identification did not significantly affect knowledge contribution in this study, which is inconsistent with the wealth of research that identifies group identification as an important factor for knowledge sharing. This conflicting result calls for future research that examines the role of group identification in knowledge contribution in virtual communities. This study contributes to theory development in the area of knowledge management in general and virtual communities in particular. For practice, the results identify the circumstances under which individual factors would be effective for motivating knowledge contribution to virtual communities.
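
To make the measurement-model checks concrete, here is a minimal Python sketch of the two criteria the abstract invokes: convergent validity via Average Variance Extracted (AVE > 0.5) and discriminant validity via the Fornell-Larcker criterion. The loadings and construct correlation below are purely illustrative, not the study's data.

```python
import numpy as np

# Hypothetical standardized loadings for two constructs (e.g., cohesiveness
# and reciprocity), three indicators each; values are illustrative only.
loadings = {
    "cohesiveness": np.array([0.82, 0.79, 0.88]),
    "reciprocity":  np.array([0.75, 0.81, 0.84]),
}
construct_corr = 0.46  # invented latent correlation between the two constructs

def ave(l):
    """Average Variance Extracted: mean of squared standardized loadings."""
    return float(np.mean(l ** 2))

for name, l in loadings.items():
    print(f"{name}: AVE = {ave(l):.3f}  (convergent validity wants AVE > 0.5)")

# Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed its
# correlation with every other construct (discriminant validity).
ok = all(np.sqrt(ave(l)) > construct_corr for l in loadings.values())
print("Fornell-Larcker discriminant validity:", "passed" if ok else "failed")
```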


Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.1-18 / 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as images, sound, video, and text are distributed through Web media. Therefore, many attempts have been made in recent years to discover new value through analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative medium for users to express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is being widely studied not only in academia but also in industry, because it can be used to extract issues from text such as news articles and SNS (Social Network Service) posts and to analyze the trends of those issues. Conventionally, issue tracking identifies major issues sustained over a long period of time through topic modeling and analyzes the detailed distribution of the documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that are created, merged, divided, and deleted between sub-periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking. Note that detailed keywords are preferable to general keywords because the former can provide clues for actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each sub-period, generated an issue flow diagram based on the similarity of each issue between two consecutive periods, and analyzed the issue transition patterns among categories by using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them. The experiment section presents the following useful application scenarios for the issue flow diagram. First, we can identify an issue that appears actively during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can easily be discovered from the diagram, which implies that our methodology can be used to discover associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis: a pair of mutually similar categories induces two-way transitions, whereas one-way transitions can be recognized as an indicator that issues in a certain category tend to be influenced by issues in another category. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. In addition, not only the number of documents but also additional meta-information, such as read counts, posting times, and comments, should be analyzed. A rigorous performance evaluation and validation of the proposed methodology remain for future work.
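
As an illustration of the per-period analysis described above, the following hedged Python sketch fits a separate topic model to each period's documents (toy corpora here, not the 53,739 news articles) and links the topics of consecutive periods by cosine similarity, which is the basic mechanism behind an issue flow diagram. The topic count and similarity threshold are assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpora for two consecutive periods; real input would be news articles.
period_docs = [
    ["nuclear test missile launch", "missile launch response sanctions",
     "family reunion talks border", "border talks family reunion"],
    ["sanctions response nuclear test", "missile test launch warning",
     "summit talks border agreement", "reunion families border summit"],
]

vectorizer = CountVectorizer()
vectorizer.fit([d for docs in period_docs for d in docs])  # shared vocabulary

topic_term = []
for docs in period_docs:
    X = vectorizer.transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    # Normalize rows so each topic is a distribution over the shared vocabulary.
    topic_term.append(lda.components_ / lda.components_.sum(axis=1, keepdims=True))

# Link issues of period t to period t+1 by topic-topic cosine similarity;
# an edge above the threshold becomes an arrow in the issue flow diagram.
sim = cosine_similarity(topic_term[0], topic_term[1])
THRESHOLD = 0.5  # assumed cutoff for drawing an edge
for i, row in enumerate(sim):
    for j, s in enumerate(row):
        if s >= THRESHOLD:
            print(f"period1-topic{i} -> period2-topic{j} (similarity {s:.2f})")
```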

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.4 / pp.17-23 / 2020
  • Biometric information, which measures items related to human physical characteristics, has attracted great attention as a highly reliable security technology, since there is no fear of theft or loss. Among such biometric information, fingerprints are mainly used in fields such as identity verification and identification. If a fingerprint image has a problem that makes authentication difficult, such as a wound, a wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to the problem in order to resolve it. By implementing artificial intelligence software that distinguishes fingerprint images with cuts and wrinkles, it becomes easy to check whether cuts or wrinkles are present, and the fingerprint image can then be improved by selecting an appropriate algorithm. In this study, we built a fingerprint database totaling 17,080 images by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open data set, and the fingerprints of 98 Korean students. Criteria were established for determining whether the images in the database contain injuries or wrinkles, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data of the 98 Korean students were used as a validation set. Using the constructed data set, five CNN-based architectures, Classic CNN, AlexNet, VGG-16, ResNet50, and YOLO v3, were implemented, and a study was conducted to find the model that performs best at this discrimination task. Among the five architectures, ResNet50 showed the best performance, with an accuracy of 81.51%.
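
A minimal transfer-learning sketch of the kind of ResNet50 classifier the study compares, assuming three illustrative classes (normal, scar, wrinkle) and 224x224 RGB inputs; the paper does not specify these training details, so everything below is an assumption, not the authors' configuration.

```python
import tensorflow as tf

NUM_CLASSES = 3  # assumed: normal / scar / wrinkle

# ImageNet-pretrained backbone, frozen for an initial training phase.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)  # 8:2 train/test split
```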

Comparative Study on the Methodology of Motor Vehicle Emission Calculation by Using Real-Time Traffic Volume in the Kangnam-Gu (자동차 대기오염물질 산정 방법론 설정에 관한 비교 연구 (강남구의 실시간 교통량 자료를 이용하여))

  • 박성규;김신도;이영인
    • Journal of Korean Society of Transportation / v.19 no.4 / pp.35-47 / 2001
  • Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentrations of pollutants. A characteristic of most of these strategies is a requirement for accurate data on both the quantity and the spatial distribution of emissions to air, in the form of an atmospheric emission inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for each vehicle type. The majority of inventories are compiled using passive data from either surveys or transportation models and, by their very nature, tend to be out of date by the time they are compiled. Current trends are therefore toward integrating urban traffic control systems with assessments of the environmental effects of motor vehicles. In this study, a methodology for calculating motor vehicle emissions using real-time traffic data was examined and applied to estimate CO emissions in a test area in Seoul. The traffic data, which are required on a street-by-street basis, were obtained from the induction loops of the traffic control system. The speed-related mass of CO emitted from vehicle tailpipes was calculated from the traffic-system data, considering parameters such as traffic volume, vehicle composition, average velocity, and link length. The result was then compared with that of an emission calculation method based on VKT (Vehicle Kilometers Travelled) by vehicle category.
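
The core calculation can be sketched as follows: for each link, hourly emissions are traffic volume times a speed-dependent emission factor times link length. The emission-factor curve below is purely illustrative, not the study's measured factors.

```python
def co_emission_factor(speed_kmh: float) -> float:
    """Illustrative speed-dependent CO emission factor in g/km per vehicle."""
    return 60.0 / max(speed_kmh, 5.0) + 0.5  # emissions rise at low speeds

def link_co_emission(volume_veh_h: float, speed_kmh: float, length_km: float) -> float:
    """Hourly CO emissions (g/h) for one link from real-time loop-detector data."""
    return volume_veh_h * co_emission_factor(speed_kmh) * length_km

# Example: 1,200 veh/h at an average speed of 25 km/h on a 0.8 km link
print(f"{link_co_emission(1200, 25.0, 0.8):.0f} g CO per hour")
```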


The Effect of EDTA, Tetracycline-HCl, and Citric Acid on Diseased Root Surfaces; The SEM Study (EDTA, 염산 테트라싸이클린, 구연산 처치가 치근면에 미치는 영향)

  • Ahn, Seong-Hee;Chai, Jung-Kiu;Kim, Chong-Kwan;Cho, Kyoo-Sung
    • Journal of Periodontal and Implant Science / v.29 no.3 / pp.561-578 / 1999
  • The goal of periodontal therapy is periodontal regeneration through the removal of microorganisms and their toxic products from the periodontally diseased root surface. To achieve periodontal regeneration, root conditioning has been performed as an adjunct to root planing, using low-pH etchants such as citric acid and tetracycline-HCl, or EDTA solution, which is a neutral chelating agent. The purpose of the present study was to examine the effect of root conditioning with citric acid, tetracycline-HCl, and EDTA. A total of 35 root specimens ($6{\times}3{\times}2$ mm) were prepared from periodontally diseased teeth, scaled, and root planed. The specimens were treated by rubbing with normal saline for 1 minute, saturated citric acid (pH 1) for 3 minutes, 50 mg/ml tetracycline-HCl (pH 2) for 5 minutes, or 15% EDTA (pH 7) for 5 minutes. The specimens were examined under scanning electron microscopy at 1000 and 3000 magnification. On the microphotographs taken at 1000 magnification, the numbers of opened and patent dentinal tubules per unit area (10,640 ${\mu}m^2$) were counted, and the diameters of opened dentinal tubules per unit area were measured. The differences in number and diameter among the groups were statistically analyzed by the Kruskal-Wallis test. The results were as follows. 1. In the specimens treated with normal saline (control group), the root surface was finely cracked and covered by an irregular smear layer; neither exposed nor patent dentinal tubules could be seen. 2. In the specimens treated with saturated citric acid (experimental group 1), globular collagen fibers were exposed around the peritubular space, and many dentinal tubules were revealed. 3. In the specimens treated with tetracycline-HCl (experimental group 2), process-like collagen fibers were exposed around the peritubular space, and some dentinal tubules were revealed. 4. In the specimens treated with 15% EDTA (experimental group 3), the root surface was covered by a collagenous fibrillar network, and many dentinal tubules were revealed. 5. The numbers of opened and patent dentinal tubules were significantly greater in experimental groups 1 and 3 than in experimental group 2 (P<0.05), with no significant difference between groups 1 and 3; in the control group, they could not be counted because no dentinal tubules were visible. 6. The diameter of opened dentinal tubules was significantly smaller in experimental groups 1 and 3 than in experimental group 2 (P<0.05), with no significant difference between groups 1 and 3; in the control group, the diameter could not be measured because no dentinal tubules were visible. These results demonstrate that root conditioning with citric acid, tetracycline-HCl, or EDTA is more effective for periodontal healing than root planing alone, and that 15% EDTA solution can replace low-pH etching agents such as citric acid and tetracycline-HCl for root conditioning.
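
For readers who want to reproduce the style of analysis, this is a minimal scipy sketch of the Kruskal-Wallis test on made-up tubule counts; the paper reports only the test outcomes, not raw data, so the values below are illustrative.

```python
from scipy import stats

# Illustrative counts of opened dentinal tubules per unit area for the three
# treatment groups (invented values; not the study's measurements).
citric_acid  = [42, 38, 45, 40, 44]   # experimental group 1
tetracycline = [21, 25, 19, 23, 22]   # experimental group 2
edta         = [41, 39, 46, 43, 40]   # experimental group 3

h, p = stats.kruskal(citric_acid, tetracycline, edta)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")  # p < 0.05 -> groups differ
```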


A Study on Strategy for developing LBS Entertainment content based on local tourist information (지역 관광 정보를 활용한 LBS 엔터테인먼트 컨텐츠 개발 방안에 관한 연구)

  • Kim, Hyun-Jeong
    • Archives of design research / v.20 no.3 s.71 / pp.151-162 / 2007
  • How can new media devices and networks provide an effective response to the world's growing sector of culturally and historically minded travelers? This study emerged from the question of how mobile handsets can change the nature of cultural and historical tourism in ubiquitous city environments. As wireless networks and mobile IT have rapidly developed, it has become possible to deliver cultural and historical information on site through the mobile handset as a tour guidance system. The paper describes the development of a new type of mobile tourism platform for site-specific cultural and historical information. The central objective of the project was to organize a cultural and historical walking tour around the mobile handset and its unique advantages (i.e., portability, multimedia capacity, access to the wireless internet, and location-awareness potential) and then to integrate the tour with a historical story and role-playing game that would deepen the mobile user's interest in the sites being visited and enhance his or her overall experience of the area. The project was based on twelve locations that were culturally and historically significant to the Korean War era in Busan. After a mobile tour game prototype was developed for this route, it was evaluated at the 10th PIFF (Pusan International Film Festival). Based on the user test, new strategies for developing mobile "edutainment content" that delivers the cultural and historical context of a location are discussed. Combining edutainment with a cultural and historical mobile walking tour brings a new dimension to existing approaches in the tourism and mobile content industries.
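
The location-awareness the platform relies on can be sketched as a simple geofence: compute the great-circle distance from the handset's GPS fix to each tour site and trigger that site's story content when the user is within a radius. The site names, coordinates, and radius below are hypothetical placeholders, not the project's actual twelve locations.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

tour_sites = [  # (name, lat, lon): hypothetical Korean War era sites in Busan
    ("Provisional Capital Memorial Hall", 35.1041, 129.0199),
    ("40-Step Stairway", 35.1033, 129.0358),
]

def sites_in_range(lat, lon, radius_m=50.0):
    """Return the names of tour sites within radius_m of the current fix."""
    return [name for name, slat, slon in tour_sites
            if haversine_m(lat, lon, slat, slon) <= radius_m]

print(sites_in_range(35.1040, 129.0200))  # near the first site -> trigger content
```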


Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science / v.3 no.1 / pp.61-78 / 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of handwritten Hangul (Korean) syllabic characters. In his original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numeric characters of $19{\times}19$ size. Our version accepts $61{\times}61$ images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and 3 pairs of Us and Uc layers. The last Uc layer, the recognition layer, consists of 24 planes of $5{\times}5$ cells, which indicate the identity of the grapheme receiving attention at a given time and its relative position in the input layer, respectively. The network was trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; patterns that were not easily learned were trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variance, translation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for the image segmentation task within a syllabic character. In initial tests, our model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. These results show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both their structure and their graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as a neural chip appears to be an essential prerequisite for practical use of the model. Further work is required to enable the model to recognize Korean syllabic characters consisting of complex vowels and complex consonants, for which correct recognition of the neighboring area between two simple graphemes becomes more critical.
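
A rough modern analogue of the layer structure described above, sketched in Keras under the assumption that ReLU convolutions stand in for the S (feature-extracting) layers and average pooling for the C (tolerance) layers; the real Neocognitron is trained by self-organization and has a selective-attention backward path that this sketch omits. The filter sizes are chosen only so that a $61{\times}61$ input ends in 24 planes of $5{\times}5$ cells, matching the recognition layer in the abstract.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # US1/UC1 analogue: 61x61 input -> 56x56 conv maps -> 28x28 pooled maps
    tf.keras.layers.Conv2D(16, 6, activation="relu", input_shape=(61, 61, 1)),
    tf.keras.layers.AveragePooling2D(2),
    # US2/UC2 analogue: 28x28 -> 24x24 -> 12x12
    tf.keras.layers.Conv2D(32, 5, activation="relu"),
    tf.keras.layers.AveragePooling2D(2),
    # US3/UC3 analogue: 12x12 -> 10x10 -> 24 planes of 5x5 cells
    tf.keras.layers.Conv2D(24, 3, activation="relu"),
    tf.keras.layers.AveragePooling2D(2),
])
model.summary()  # final feature map: (5, 5, 24), one plane per grapheme
```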

Sensitivity of Aerosol Optical Parameters on the Atmospheric Radiative Heating Rate (에어로졸 광학변수가 대기복사가열률 산정에 미치는 민감도 분석)

  • Kim, Sang-Woo;Choi, In-Jin;Yoon, Soon-Chang;Kim, Yumi
    • Atmosphere / v.23 no.1 / pp.85-92 / 2013
  • We estimate the atmospheric radiative heating effect of aerosols based on AErosol RObotic NETwork (AERONET) and lidar observations together with radiative transfer calculations. The column radiation model (CRM) was modified to ingest the AERONET-measured variables (aerosol optical depth, single scattering albedo, and asymmetry parameter) and to calculate the optical parameters in its 19 bands from the data obtained at four wavelengths. The aerosol radiative forcing at the surface and at the top of the atmosphere, and the atmospheric absorption, on a pollution day (April 15, 2001) and dust days (April 17~18, 2001) were 3~4 times greater than those on clear-sky days (April 14 and 16, 2001). The atmospheric radiative heating rate (${\Delta}H$) and the heating rate by aerosols (${\Delta}H_{aerosol}$) were estimated to be about $3\;K\;day^{-1}$ and $1{\sim}3\;K\;day^{-1}$, respectively, for the pollution and dust aerosol layers. A sensitivity test showed that a 10% uncertainty in the single scattering albedo results in 30% uncertainty in the aerosol radiative forcing at the surface and at the top of the atmosphere and 60% uncertainty in the atmospheric forcing, which translates into about 35% uncertainty in ${\Delta}H$. This result suggests that atmospheric radiative heating is largely determined by the amount of light-absorbing aerosols.
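
The heating-rate estimate can be illustrated with the standard flux-divergence relation $dT/dt = (g/c_p)\,dF_{net}/dp$. The sketch below uses made-up flux and pressure values, not the AERONET/lidar retrievals, and happens to land in the $1{\sim}3\;K\;day^{-1}$ range reported above.

```python
G = 9.81     # gravitational acceleration, m s^-2
CP = 1004.0  # specific heat of dry air, J kg^-1 K^-1

def heating_rate_k_per_day(f_top_wm2, f_bottom_wm2, p_top_pa, p_bottom_pa):
    """Layer-mean radiative heating rate in K/day from net-flux convergence."""
    dF = f_top_wm2 - f_bottom_wm2   # net downward flux absorbed in the layer (W m^-2)
    dp = p_bottom_pa - p_top_pa     # layer thickness in pressure (Pa)
    return (G / CP) * (dF / dp) * 86400.0  # seconds per day

# e.g., 35 W m^-2 absorbed by a dust layer between 850 and 700 hPa (~2 K/day)
print(f"{heating_rate_k_per_day(520.0, 485.0, 70000.0, 85000.0):.2f} K/day")
```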

Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.253-266 / 2018
  • This paper proposes a sequence tagging methodology to improve the performance of the NER (Named Entity Recognition) used in QA systems. In order to retrieve the correct answers stored in a database, the user's query must be translated into a database language such as SQL (Structured Query Language) so that the computer can interpret it; this is the process of identifying the classes or data names from the database that appear in the query. The existing method, which simply looks up the words of the query in the database to recognize entities, cannot disambiguate homonyms or multi-word phrases because it does not consider the context of the user's query. When there are multiple search results, all of them are returned, so the query admits many interpretations and the time complexity of the computation becomes large. To overcome this, this study reflects the contextual meaning of the query by using a Bidirectional LSTM-CRF, and addresses the weakness of neural network models that cannot identify untrained words by using an ontology knowledge based feature. Experiments were conducted on an ontology knowledge base of the music domain, and the performance was evaluated. In order to evaluate the proposed Bidirectional LSTM-CRF accurately, we converted words included in the training queries into untrained words, to test whether words that were included in the database but unseen in training could still be correctly identified. As a result, the model could recognize entities in context and could recognize untrained words without re-training, and we confirmed that the overall entity recognition performance improved.
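
The core idea, concatenating an ontology-derived feature with the word embedding so that an untrained word still carries knowledge-base evidence, can be sketched in PyTorch as follows. The feature ids, dimensions, and tag counts are illustrative, and the CRF transition layer used in the paper is omitted (a library such as pytorch-crf could supply it).

```python
import torch
import torch.nn as nn

# A minimal sketch, assuming each token carries an ontology feature id
# (e.g., 0 = no match, 1 = matches an Artist instance, 2 = matches a Song)
# looked up from the knowledge base before tagging.
class OntologyBiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_onto_feats, n_tags,
                 word_dim=100, onto_dim=16, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.onto_emb = nn.Embedding(n_onto_feats, onto_dim)  # ontology feature
        self.lstm = nn.LSTM(word_dim + onto_dim, hidden,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)  # per-token tag scores

    def forward(self, words, onto_feats):
        # Concatenate lexical and ontology evidence for every token.
        x = torch.cat([self.word_emb(words), self.onto_emb(onto_feats)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)

model = OntologyBiLSTMTagger(vocab_size=5000, n_onto_feats=3, n_tags=7)
words = torch.randint(0, 5000, (2, 10))   # batch of 2 queries, 10 tokens each
onto = torch.randint(0, 3, (2, 10))       # ontology feature id per token
print(model(words, onto).shape)           # torch.Size([2, 10, 7])
```

Because an out-of-vocabulary token still receives its ontology feature id from the knowledge-base lookup, the tagger retains a signal for untrained words, which is the effect the experiment above tests.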

OD matrix estimation using link use proportion sample data as additional information (표본링크이용비를 추가정보로 이용한 OD 행렬 추정)

  • 백승걸;김현명;신동호
    • Journal of Korean Society of Transportation / v.20 no.4 / pp.83-93 / 2002
  • To improve estimation performance, research that uses additional information beyond traffic counts and a target OD matrix, at some additional survey cost, has been studied. The purpose of this paper is to improve the performance of OD estimation by reducing the feasible solution space with cost-efficient additional information beyond traffic counts and the target OD. To this end, we propose an OD estimation method that uses sample link use proportions as additional information. That is, we obtain the relationship between OD trips and link flows from sample link use proportions, which are highly reliable information gathered by roadside survey, rather than from a traffic assignment of the target OD. Accordingly, this paper proposes an OD estimation algorithm that enforces the link-flow conservation rule under a path-based non-equilibrium traffic assignment concept. Numerical results on a test network show that OD estimation performance can be improved even where the precision of the additional data is low, since the sample link use proportions directly represent the relationship between OD trips and link flows. The method also shows robust estimation performance when traffic counts or OD trips change, since it is not greatly affected by errors in the target OD or in the traffic counts. In addition, we argue that the precision level of each data source must be set in consideration of the precision of the other information, because a precision mismatch between information sources arises when additional information such as sample link use proportions is used, and that a method using traffic counts as basic information must obtain link flows at a sufficient level of coverage in order to raise the applicability of the additional information. Finally, we note that additional link information raises an optimal counting location problem; in terms of information precision, the optimal survey location problem for sample link use proportions may have a much greater impact on OD estimation performance than the optimal counting location problem for link flows.
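
The relationship the method exploits, observed link flows equal link-use proportions times OD trips ($v = Pq$), can be sketched with a toy non-negative least-squares estimation; the proportion matrix and trip values below are invented for illustration, not survey data.

```python
import numpy as np
from scipy.optimize import nnls

# Made-up 4-link x 3-OD-pair proportion matrix standing in for the
# roadside-survey sample link use proportions.
P = np.array([
    [1.0, 0.0, 0.3],   # link 1 carries all of OD1 and 30% of OD3
    [0.0, 1.0, 0.7],   # link 2 carries all of OD2 and 70% of OD3
    [1.0, 1.0, 0.0],   # link 3 carries OD1 and OD2
    [0.0, 0.0, 1.0],   # link 4 carries all of OD3
])
true_q = np.array([400.0, 250.0, 300.0])                        # true OD trips
v = P @ true_q + np.random.default_rng(0).normal(0, 10, size=4)  # noisy counts

q_hat, residual = nnls(P, v)  # non-negative least squares keeps trips >= 0
print("estimated OD trips:", np.round(q_hat, 1), " residual:", round(residual, 1))
```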