• Title/Summary/Keyword: inference


The Influence of Sexual Violence on the Relationship Between Internet Pornography Experience and Self-Control

  • Seo, Gang Hun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.3
    • /
    • pp.191-198
    • /
    • 2020
  • In this paper, we examine how Internet pornography experience and self-regulation affect sexually violent behavior among high school students nationwide who have been exposed to Internet pornography. An Internet panel survey was conducted using purposive quota sampling. Over about one month, from May to June 2018, 246 questionnaires were distributed; 210 were analyzed after excluding 36 respondents with no experience of pornographic material, and a further analysis was conducted on the 85 respondents with experience of sexually violent behavior. The SPSS WIN 20.0 program was used for analysis. The results are as follows. First, Internet pornography was found to have a negative effect on teenagers: because high Internet accessibility means adolescents can encounter pornographic material regardless of their own will, exposure raises the probability that sexual violence develops into actual behavior. Second, self-regulation was found to have no direct effect on sexually violent behavior; such behavior is shaped not only by the adolescent's will but also by the external environment. Third, self-regulation was shown to act as a moderator that mitigates the negative effects of Internet pornography. Based on these findings, implications and limitations were discussed.
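The moderating role of self-regulation reported above is the kind of effect typically tested with an interaction term in a regression. The following is an illustrative simulation with made-up coefficients and synthetic data, not the paper's actual SPSS analysis:

```python
# Toy moderation analysis: does self-control weaken the effect of exposure?
# All variables and coefficients here are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 210                              # analyzed sample size reported in the paper
exposure = rng.normal(size=n)        # hypothetical exposure score
self_control = rng.normal(size=n)    # hypothetical self-control score
# Simulate a buffering effect: higher self-control weakens the exposure effect.
harm = (0.5 * exposure - 0.3 * self_control
        - 0.2 * exposure * self_control
        + rng.normal(scale=0.5, size=n))

# OLS with an interaction term; a negative interaction coefficient
# indicates that self-control moderates (dampens) the exposure effect.
X = np.column_stack([np.ones(n), exposure, self_control,
                     exposure * self_control])
beta, *_ = np.linalg.lstsq(X, harm, rcond=None)
print(beta)   # [intercept, exposure, self_control, interaction]
```

In a real analysis the interaction coefficient's significance, not just its sign, would be assessed.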

GIS-based Data-driven Geological Data Integration using Fuzzy Logic: Theory and Application (퍼지 이론을 이용한 GIS기반 자료유도형 지질자료 통합의 이론과 응용)

  • Chang-Jo F. Chung
    • Economic and Environmental Geology
    • /
    • v.36 no.3
    • /
    • pp.243-255
    • /
    • 2003
  • Mathematical models for GIS-based spatial data integration have been developed for geological applications such as mineral potential mapping and landslide susceptibility analysis. Among the various models, the effectiveness of fuzzy logic based integration of multiple sets of geological data is investigated and discussed. Unlike the traditional target-driven fuzzy integration approach, we propose a data-driven approach derived from statistical relationships between the integration target and related spatial geological data. The proposed approach consists of four analytical steps: data representation, fuzzy combination, defuzzification, and validation. For data representation, fuzzy membership functions based on likelihood ratio functions are proposed. To integrate them, a fuzzy inference network is designed that can combine a variety of different fuzzy operators. Defuzzification is carried out to effectively visualize the relative possibility levels in the integrated results. Finally, a validation approach based on spatial partitioning of the integration targets is proposed to quantitatively compare the various fuzzy integration maps and obtain a meaningful interpretation with respect to future events. The effectiveness of the proposed schemes, along with some practical suggestions, is illustrated through a case study of landslide susceptibility analysis. The case study demonstrates that the proposed schemes can effectively identify areas susceptible to landslides, and that in the validation procedure the ${\gamma}$ operator shows better prediction power than the max and min operators.
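The fuzzy combination step above can be sketched numerically. The following toy example (assumed membership values, not the paper's GIS data) combines three membership maps with the min, max, and ${\gamma}$ operators; the ${\gamma}$ operator here is Zimmermann's compromise between the algebraic product and the algebraic sum:

```python
# Combining fuzzy membership layers with min, max, and the gamma operator.
import numpy as np

def gamma_op(memberships, gamma=0.9):
    """Zimmermann's gamma operator: interpolates between the fuzzy
    algebraic product (gamma=0) and the fuzzy algebraic sum (gamma=1)."""
    m = np.asarray(memberships, dtype=float)
    prod = np.prod(m, axis=0)                  # fuzzy algebraic product
    alg_sum = 1.0 - np.prod(1.0 - m, axis=0)   # fuzzy algebraic sum
    return prod ** (1.0 - gamma) * alg_sum ** gamma

# Three hypothetical membership maps (e.g. slope, geology, land-cover layers),
# each evaluated at three locations.
layers = np.array([[0.8, 0.2, 0.6],
                   [0.7, 0.1, 0.9],
                   [0.9, 0.3, 0.5]])

combined_min = layers.min(axis=0)    # pessimistic combination
combined_max = layers.max(axis=0)    # optimistic combination
combined_gamma = gamma_op(layers, gamma=0.9)
print(combined_min, combined_max, combined_gamma)
```

The gamma output always lies between the algebraic product and the algebraic sum, which is what makes it a tunable compromise operator.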

Performance of a Bayesian Design Compared to Some Optimal Designs for Linear Calibration (선형 캘리브레이션에서 베이지안 실험계획과 기존의 최적실험계획과의 효과비교)

  • 김성철
    • The Korean Journal of Applied Statistics
    • /
    • v.10 no.1
    • /
    • pp.69-84
    • /
    • 1997
  • We consider a linear calibration problem, $y_i = \alpha + \beta (x_i - x_0) + \epsilon_i$, $i = 1, 2, \cdots, n$; $y_f = \alpha + \beta (x_f - x_0) + \epsilon_f$, where we observe the $(x_i, y_i)$'s from controlled calibration experiments and later make inference about $x_f$ from a new observation $y_f$. The objective of the calibration design problem is to find the optimal design $x = (x_1, \cdots, x_n)$ that gives the best estimate of $x_f$. We compare Kim (1989)'s Bayesian design, which minimizes the expected value of the posterior variance of $x_f$, with some optimal designs from the literature. Kim's Bayesian optimal design, based on an analysis of the characteristics of the expected loss function and on numerical computations, requires that the mean of the design points equal the prior mean and that the sum of squares be as large as possible. The designs compared are (1) Buonaccorsi (1986)'s AV optimal design, which minimizes the average asymptotic variance of the classical estimator, (2) the D-optimal and A-optimal designs for the linear regression model, which optimize functions of $M(x) = \sum x_i x_i'$, and (3) Hunter and Lamboy (1981)'s reference design from their paper. To compare designs that are each optimal in some sense, we consider two criteria: first, the expected posterior variance, and second, a Monte Carlo simulation from which HPD intervals are obtained and their lengths compared. If the prior mean of $x_f$ is at the center of the finite design interval, then the Bayesian, AV optimal, D-optimal and A-optimal designs are identical: the equally weighted end-point design. If the prior mean is not at the center, however, they are not expected to be identical; in this case, we demonstrate that the nearly Bayesian-optimal design is slightly better than the approximate AV optimal design. We also investigate the effects of the prior variance of the parameters and present a solution for the case where the number of experiments is odd.
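The AV criterion mentioned above can be illustrated numerically. The following sketch (with made-up model constants, not Kim's Bayesian derivation) evaluates the standard asymptotic variance of the classical calibration estimator of $x_f$ for two candidate designs, showing why the equally weighted end-point design, with its maximal sum of squares, is favored:

```python
# Asymptotic variance of the classical calibration estimator of x_f,
# compared for an end-point design and a uniform design on [-1, 1].
# beta, sigma, x_f and n are hypothetical illustration values.
import numpy as np

beta, sigma, x_f, n = 2.0, 0.5, 0.8, 10

def asymptotic_var(x):
    """Var of the classical estimator: (sigma/beta)^2 * (1 + 1/n + (x_f - xbar)^2 / Sxx)."""
    Sxx = np.sum((x - x.mean()) ** 2)
    return (sigma / beta) ** 2 * (1 + 1 / n + (x_f - x.mean()) ** 2 / Sxx)

endpoint = np.repeat([-1.0, 1.0], n // 2)   # equally weighted end-point design
uniform = np.linspace(-1.0, 1.0, n)         # evenly spread design

print(asymptotic_var(endpoint), asymptotic_var(uniform))
```

The end-point design maximizes $S_{xx}$ on the interval and therefore attains the smaller asymptotic variance.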


A Report on the Inter-Gene Correlations in cDNA Microarray Data Sets (cDNA 마이크로어레이에서 유전자간 상관 관계에 대한 보고)

  • Kim, Byung-Soo;Jang, Jee-Sun;Kim, Sang-Cheol;Lim, Jo-Han
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.3
    • /
    • pp.617-626
    • /
    • 2009
  • A series of recent papers reported that the inter-gene correlations in Affymetrix microarray data sets were strong and long-ranged, and that the assumption of independence or weak dependence among gene expression signals, often employed without justification, is in conflict with actual data. Qiu et al. (2005) indicated that applying the nonparametric empirical Bayes method, in which test statistics are pooled across genes for statistical inference, results in a large variance in the number of differentially expressed genes, and attributed this effect to strong and long-ranged inter-gene correlations. Klebanov and Yakovlev (2007) demonstrated that the inter-gene correlations provide a rich source of information rather than being a nuisance in the statistical analysis, and by transforming the original gene expression sequence they developed a sequence of independent random variables, which they referred to as a ${\delta}$-sequence. Using two cDNA microarray data sets produced in Korea, we note in this report that the strong and long-ranged inter-gene correlations are also present in cDNA microarray data, and that a ${\delta}$-sequence of independent variables can likewise be derived from cDNA microarray data. This note suggests that inter-gene correlations be considered in future analyses of cDNA microarray data sets.
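The mechanism behind long-ranged inter-gene correlation, and why a differencing transform can remove it, can be illustrated with synthetic data. The pairwise-difference form below is one simple way to demonstrate the idea, not necessarily Klebanov and Yakovlev's exact construction:

```python
# A shared per-array effect induces strong correlation between all genes;
# differencing adjacent genes cancels it. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
genes, arrays = 200, 30
array_effect = rng.normal(scale=1.0, size=arrays)         # shared per-array shift
expr = rng.normal(size=(genes, arrays)) + array_effect    # gene x array matrix

def mean_abs_offdiag_corr(m):
    """Average absolute pairwise correlation between rows."""
    c = np.corrcoef(m)
    return np.abs(c[np.triu_indices_from(c, k=1)]).mean()

# Difference adjacent genes: the common array effect cancels out.
delta = (expr[0::2] - expr[1::2]) / np.sqrt(2)

print(mean_abs_offdiag_corr(expr), mean_abs_offdiag_corr(delta))
```

The differenced rows show far weaker pairwise correlation than the raw expression rows, mirroring the reported behavior of the ${\delta}$-sequence.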

Analysis on Students' Abilities of Proof in Middle School (중학교 학생의 증명 능력 분석)

  • 서동엽
    • Journal of Educational Research in Mathematics
    • /
    • v.9 no.1
    • /
    • pp.183-203
    • /
    • 1999
  • In this study, we analyzed the constituents of proof, examined why students have trouble learning proof, and proposed directions for improving the learning and teaching of proof. Through reviews of the related literature and analyses of textbooks, the constituents of proof at the middle-school level in Korea are divided into two major categories: 'constituents related to the construction of reasoning' and 'constituents related to the meaning of proof.' The former includes the inference rules (simplification, conjunction, modus ponens, and hypothetical syllogism), symbolization, distinguishing between definition and property, use of appropriate diagrams, application of basic principles, variety and completeness in checking, reading and using the basic components of geometric figures in proofs, translating symbols into prose, disproof using counterexamples, and proof of equations. The latter includes inference, implication, separation of assumption and conclusion, distinguishing implication from equivalence, the fact that a theorem has no exceptions, the necessity of proving obvious propositions, and the generality of proof. The results from three types of examination (analysis of the textbooks, interviews, and a written test) are summarized as follows. The hypothetical syllogism that builds the main structure of proofs is not taught explicitly in the middle grades, so students have more difficulty understanding other types of syllogism than the AAA type of categorical syllogism. Most students do not distinguish definition from property well, so they find it difficult to symbolize, to separate assumption from conclusion, or to use appropriate diagrams. The basic symbols and principles are taught in the first year of middle school, and students use them in proving theorems about one year later; this may explain why students do not recall the exact names of the principles and cannot apply the correct ones. Textbooks do not describe counterexamples clearly, yet they contain some problems that can be solved only by using counterexamples. Students believe that one counterexample is sufficient to disprove a false proposition, but in practice they prefer not to use one. Textbooks also contain problems that ask for a proof of an equation A=B. In proving such equations, however, students do not perceive that writing the equation A=B, the conclusion of the proof, in the first line and then transforming both sides is incorrect; moreover, they prefer this to deriving B from A. Most of the constituents related to the meaning of proof are mentioned only briefly or not at all in textbooks, so many students do not know them. In particular, students accept the results of experiments or measurements as proof and prefer them to the logical proofs presented in textbooks.
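The inference rules named above, modus ponens and hypothetical syllogism, can be checked mechanically by truth-table enumeration. This small illustration is not from the paper; it simply verifies that both rules are valid for every assignment of truth values:

```python
# Validity check of two inference rules by exhaustive truth-table enumeration.
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Modus ponens: from (p -> q) and p, infer q.
modus_ponens_valid = all(
    q for p, q in product([False, True], repeat=2)
    if implies(p, q) and p
)

# Hypothetical syllogism: from (p -> q) and (q -> r), infer (p -> r).
hyp_syllogism_valid = all(
    implies(p, r) for p, q, r in product([False, True], repeat=3)
    if implies(p, q) and implies(q, r)
)

print(modus_ponens_valid, hyp_syllogism_valid)  # True True
```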


Analysis on the Changes of Choices according to the Conditions in the Realistic Probability Problem of the Elementary Gifted Students (확률 판단 문제에서 초등 수학영재들의 선택에 미친 요인 분석과 교육적 시사점)

  • Lee, Seung Eun;Song, Sang Hun
    • School Mathematics
    • /
    • v.15 no.3
    • /
    • pp.603-617
    • /
    • 2013
  • The major purpose of this article is to examine the gap between mathematically gifted students' probability knowledge and their actual application of that knowledge, and to analyze the causes of that gap. To this end, 23 elementary mathematically gifted students at the highest level from region G were given problem situations involving probability and expectation, presented as a series in which the conditions change one at a time. The task is a gaming situation that has a mathematically most reasonable answer, but students' choices may differ depending on how much weight they give a particular condition. To collect data, the students' individual worksheets were gathered, all class sessions were recorded with a camcorder, and the researcher wrote class observation reports. The biggest reasons the students did not make decisions solely on the basis of their mathematical knowledge were the 'impracticality' of probability, in that real outcomes do not unfold exactly according to mathematical calculation and cannot be anticipated, and their psychological disposition to 'avoid loss' of the entry fee they had paid. To provide desirable probability education, we should not limit learners to mastering the probability knowledge in the textbook by solving problems with algorithmic knowledge, but should give them plenty of experience applying probabilistic inference to make their own choices in diverse, context-rich situations.
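The tension between mathematical expectation and loss aversion described above can be made concrete with a toy version of such a game. The numbers below are hypothetical, not the study's actual task:

```python
# Expected value of two hypothetical game choices with a paid entry fee.
from fractions import Fraction

entry_fee = 1000                       # hypothetical entry fee (in won)

# Choice A: win 6000 with probability 1/6, else nothing.
ev_a = Fraction(1, 6) * 6000 - entry_fee
# Choice B: win 1500 with probability 1/2, else nothing.
ev_b = Fraction(1, 2) * 1500 - entry_fee

print(ev_a, ev_b)   # 0 and -250: A is the better bet on average
```

A loss-averse student may still prefer choice B, since it "wins something back" half the time, even though its expectation is strictly worse; this is exactly the kind of gap between knowledge and choice the study examines.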


Design and Implementation of Sensibilities Lighting LED Controller using Modbus for a Ship (Modbus를 이용한 선박용 감성조명 LED 제어기의 설계 및 구현)

  • Jeong, Jeong-Soo;Lee, Sang-Bae
    • Journal of Navigation and Port Research
    • /
    • v.39 no.4
    • /
    • pp.299-305
    • /
    • 2015
  • Modbus is a serial communications protocol that has become a de facto standard and is now a commonly available means of connecting industrial electronic devices; any device that supports Modbus can therefore be connected for measurement and remote control on ships, in buildings, on trains, on airplanes, and so on. In this paper, we add the Modbus communication protocol to an existing sensibility lighting controller to enable monitoring and remote control based on external environmental factors, and we introduce a fuzzy inference system configured to control LED lighting from those factors. The controller accepts temperature, humidity, and illuminance values through sensors and maps them to LED output through a fuzzy control algorithm. Over RS485 serial communication, other Modbus devices can check the temperature, humidity, illuminance, and LED output status, and a remote user can change the RGB values to produce any desired color. We confirmed that the fabricated LED controller's output follows the temperature, humidity, and illuminance.
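The fuzzy mapping from sensor readings to LED output described above can be sketched in a few lines. The membership ranges and color rules below are invented for illustration, and no real Modbus I/O is performed:

```python
# Toy fuzzy mapping from (temperature, humidity, illuminance) to an RGB tuple.
# Membership ranges and the rule base are hypothetical examples.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def led_rgb(temp_c, humidity_pct, lux):
    warm = tri(temp_c, 20, 35, 50)        # hot room -> warmer (redder) light
    cool = tri(temp_c, -10, 5, 20)        # cold room -> cooler (bluer) light
    humid = tri(humidity_pct, 40, 80, 100)
    dim = tri(lux, 0, 0, 500)             # dark room -> brighter output
    r = round(255 * max(warm, 0.2))
    g = round(255 * max(humid * 0.5, 0.2))
    b = round(255 * max(cool, 0.2))
    brightness = 0.3 + 0.7 * dim          # defuzzified overall intensity
    return tuple(round(v * brightness) for v in (r, g, b))

print(led_rgb(30, 60, 100))
```

In the actual controller these RGB values would be written to, and read back from, Modbus holding registers over RS485.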

Performance Improvement of Spam Filtering Using User Actions (사용자 행동을 이용한 쓰레기편지 여과의 성능 개선)

  • Kim Jae-Hoon;Kim Kang-Min
    • The KIPS Transactions:PartB
    • /
    • v.13B no.2 s.105
    • /
    • pp.163-170
    • /
    • 2006
  • With rapidly developing Internet applications, e-mail has become one of the most popular methods for exchanging information. E-mail, however, has a serious problem: users can receive a lot of unwanted e-mails, so-called spam mails, which cause big problems both economically and socially. To block and filter out spam mails, many researchers and companies have carried out various studies on spam filtering. In general, e-mail users apply different criteria when deciding whether an e-mail is spam, and in e-mail client systems they take different actions depending on whether a message is spam or not. In this paper, we propose a mail filtering system that uses such user actions. The proposed system consists of two steps: an action inference step that infers user actions from an e-mail, and a mail classification step that decides whether the e-mail is spam. Both steps use incremental learning, with the IB2 algorithm of TiMBL. To evaluate the proposed system, we collected 12,000 mails from 12 persons. The accuracy is $81{\sim}93%$ depending on the person, and the proposed system outperforms a system that uses no information about user actions by about 14% on average.
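The IB2 incremental learning scheme named above stores an instance only when the current case base misclassifies it, which keeps memory small. The following toy sketch (invented mail features, not the paper's TiMBL setup) shows the idea with a 1-nearest-neighbor memory:

```python
# IB2-style incremental instance-based learner on a toy mail stream.
# Features and labels are illustrative, not the paper's data.

def dist(a, b):
    """Squared Euclidean distance between feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class IB2:
    def __init__(self):
        self.memory = []                # list of (features, label) exemplars

    def predict(self, x):
        if not self.memory:
            return None
        return min(self.memory, key=lambda m: dist(m[0], x))[1]

    def learn(self, x, label):
        if self.predict(x) != label:    # store only misclassified instances
            self.memory.append((x, label))

# Toy "mails": (num_links, has_unsubscribe_link) -> spam / ham
stream = [((5, 1), "spam"), ((0, 0), "ham"), ((6, 1), "spam"),
          ((1, 0), "ham"), ((7, 1), "spam")]
clf = IB2()
for x, y in stream:
    clf.learn(x, y)

print(len(clf.memory), clf.predict((8, 1)))
```

After the five training mails, only the two initially misclassified instances are stored, yet the learner still classifies a new spam-like mail correctly.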

Strawberry Pests and Diseases Detection Technique Optimized for Symptoms Using Deep Learning Algorithm (딥러닝을 이용한 병징에 최적화된 딸기 병충해 검출 기법)

  • Choi, Young-Woo;Kim, Na-eun;Paudel, Bhola;Kim, Hyeon-tae
    • Journal of Bio-Environment Control
    • /
    • v.31 no.3
    • /
    • pp.255-260
    • /
    • 2022
  • This study aimed to develop a service model that uses a deep learning algorithm for detecting diseases and pests in strawberries from image data. In addition, detection performance was further improved by proposing segmented image datasets specialized for disease and pest symptoms. The CNN-based YOLO deep learning model was selected to improve on the slow training and inference speed of existing R-CNN-based models. A general image dataset and the proposed segmented image dataset were prepared to train the pest and disease detection model. When the model was trained with the general dataset, the pest detection rate was 81.35% and the detection reliability was 73.35%; when trained with the segmented image dataset, the detection rate increased to 91.93% and the detection reliability to 83.41%. This study concludes that the performance of a deep learning model can be improved by using a segmented image dataset instead of a general one.
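The two metrics reported above can be computed from detection counts. The abstract does not define them formally, so the sketch below assumes the common reading of detection rate as recall and detection reliability as precision, with invented counts:

```python
# Detection rate (recall) and detection reliability (precision) from counts.
# The counts here are hypothetical, chosen only to illustrate the formulas.

def detection_metrics(true_positives, false_positives, false_negatives):
    recall = true_positives / (true_positives + false_negatives)     # detection rate
    precision = true_positives / (true_positives + false_positives)  # reliability
    return recall, precision

rate, reliability = detection_metrics(true_positives=91,
                                      false_positives=18,
                                      false_negatives=8)
print(f"{rate:.2%} {reliability:.2%}")
```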

Efficient Poisoning Attack Defense Techniques Based on Data Augmentation (데이터 증강 기반의 효율적인 포이즈닝 공격 방어 기법)

  • So-Eun Jeon;Ji-Won Ock;Min-Jeong Kim;Sa-Ra Hong;Sae-Rom Park;Il-Gu Lee
    • Convergence Security Journal
    • /
    • v.22 no.3
    • /
    • pp.25-32
    • /
    • 2022
  • Recently, deep learning-based technology has been introduced into image recognition and detection, invigorating the image processing industry. As deep learning technology develops, vulnerabilities of learning models to adversarial attacks continue to be reported, yet studies on countermeasures against poisoning attacks, which inject malicious data during training, remain insufficient. Conventional countermeasures against poisoning attacks have the limitation that the training data must be examined each time in a separate detection-and-removal step. In this paper, we therefore propose a technique that reduces the attack success rate by applying modifications to the training data and inference data, without a separate detection and removal process for the poisoned data. The One-shot Kill poison attack, a clean-label poisoning attack proposed in previous studies, was used as the attack model, and attack performance was evaluated for both a general attacker and an intelligent attacker, according to the attacker's strategy. The experimental results show that when the proposed defense mechanism is applied, the attack success rate can be reduced by up to 65% compared to the conventional method.
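The defense idea above, modifying training and inference data rather than filtering it, can be sketched with generic augmentations. The transforms below are illustrative stand-ins, not the paper's exact mechanism:

```python
# Generic augmentation applied to an image so that a pixel-precise poison
# perturbation is disturbed without any detect-and-remove pass.
import numpy as np

rng = np.random.default_rng(3)

def augment(img):
    """Random horizontal flip plus small Gaussian pixel noise."""
    out = img[:, ::-1].copy() if rng.random() < 0.5 else img.copy()
    out += rng.normal(scale=0.05, size=out.shape)
    return np.clip(out, 0.0, 1.0)

img = rng.random((8, 8))             # stand-in for a training image in [0, 1]
augmented = augment(img)
print(np.abs(augmented - img).mean())
```

Applying such random modifications to every training and inference image means a poison crafted for exact pixel values no longer lines up with what the model actually sees, which is the intuition behind the reported drop in attack success rate.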