• Title/Summary/Keyword: 계산 논리 (computational logic)


Wire Recognition on the Chip Photo based on Histogram (칩 사진 상의 와이어 인식 방법)

  • Jhang, Kyoungson
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.5 / pp.111-120 / 2016
  • Wire recognition is one of the important tasks in chip reverse engineering, since connectivity comes from wires. Recognized wires are used to recover a logical or functional representation of the corresponding circuit. Although manual recognition provides accurate results, it becomes impractical when the number of wires grows to hundreds of thousands. Wires on a chip usually have specific intensity or color characteristics, since they are made of specific materials. This paper proposes a two-stage wire recognition scheme: image binarization followed by a process that determines whether each region in the binary image is a wire. We employ existing techniques for both stages. Since the second stage requires the characteristics of wires, the user needs to select a typical wire region in the given image. The histogram of the selected region is used to calculate the histogram similarity between the typical wire region and the other regions. The first experiment selects the binarization scheme most appropriate for the second stage. The second experiment compares three proposed methods that employ histogram similarity in grayscale or HSV color, since no previously proposed wire recognition method is available for experimental comparison. The best method shows a true positive rate of more than 98% on 25 test examples.
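
The second stage above compares region histograms against a user-selected typical wire region. As a rough illustration of that kind of comparison (not the paper's exact binarization or similarity measure; the histogram-intersection measure and the threshold below are assumptions), a minimal NumPy sketch:

```python
# Minimal sketch of histogram-based region comparison (illustrative only; the
# paper's exact binarization and similarity measures are not reproduced here).
import numpy as np

def gray_histogram(region, bins=32):
    """Normalized grayscale histogram of a 2-D uint8 region."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def histogram_similarity(h1, h2):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    return float(np.minimum(h1, h2).sum())

def classify_regions(regions, typical_wire_region, threshold=0.7):
    """Label each candidate region as wire-like if its histogram is close
    to that of the user-selected typical wire region."""
    ref = gray_histogram(typical_wire_region)
    return [bool(histogram_similarity(gray_histogram(r), ref) >= threshold)
            for r in regions]

# Example with synthetic data: the bright region resembles the typical wire region.
rng = np.random.default_rng(0)
wire = rng.integers(180, 255, size=(20, 20), dtype=np.uint8)
dark = rng.integers(0, 60, size=(20, 20), dtype=np.uint8)
print(classify_regions([wire, dark], typical_wire_region=wire))  # [True, False]
```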

Geometrically and Topographically Consistent Map Conflation for Federal and Local Governments (Geometry 및 Topology측면에서 일관성을 유지한 방법을 이용한 연방과 지방정부의 공간데이터 융합)

  • Kang, Ho-Seok
    • Journal of the Korean Geographical Society / v.39 no.5 s.104 / pp.804-818 / 2004
  • As spatial data resources become more abundant, the potential for conflict among them increases. Such conflicts can exist between two or more spatial datasets covering the same area and categories. Therefore, it becomes increasingly important to be able to effectively relate these spatial data sources to one another and then create new spatial datasets with matching geometry and topology. One extensive spatial dataset is the US Census Bureau's TIGER file, which includes census tracts, block groups, and blocks. At present, however, census maps often carry information that conflicts with municipally maintained detailed spatial information. Therefore, in order to fully utilize census maps and their valuable demographic and economic information, the locational information of the census maps must be reconciled with the more accurate municipally maintained reference maps and imagery. This paper formulates a conceptual framework and two map models for conflation that make the source maps geometrically and topologically consistent with the reference maps. The first model is based on the cell model of a map, in which a map is a cell complex consisting of 0-cells, 1-cells, and 2-cells. The second map model is based on a different set of primitive objects that remain homeomorphic even after map generalization. A new hierarchy-based map conflation is also presented, which incorporates physical, logical, and mathematical boundaries and reduces complexity and computational load. Map conflation principles with iteration are formulated, and census maps are used as a conflation example. The principles consist of attribute embedding, finding meaningful nodes, cartographic 0-cell matching, cartographic 1-cell matching, and map transformation.
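
The cell model mentioned above treats a map as a complex of 0-cells (nodes), 1-cells (edges), and 2-cells (faces). A minimal sketch of how such a structure, and one naive conflation step on it, might be represented (class names, fields, and the nearest-node matching rule are illustrative assumptions, not the paper's models):

```python
# Minimal sketch of the cell-complex view of a map used in conflation
# (0-cells = point nodes, 1-cells = boundary edges, 2-cells = areal faces).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Cell0:                      # a node with a coordinate
    id: int
    xy: Tuple[float, float]

@dataclass
class Cell1:                      # an edge bounded by two 0-cells
    id: int
    start: int                    # id of the starting 0-cell
    end: int                      # id of the ending 0-cell
    shape: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class Cell2:                      # a face bounded by a cycle of 1-cells
    id: int
    boundary: List[int] = field(default_factory=list)  # ids of 1-cells

@dataclass
class CellComplexMap:
    nodes: List[Cell0]
    edges: List[Cell1]
    faces: List[Cell2]

def match_0cells(source: CellComplexMap, reference: CellComplexMap, tol: float):
    """Naive 0-cell matching: pair each source node with the nearest
    reference node within a distance tolerance (one conflation step)."""
    pairs = []
    for s in source.nodes:
        best = min(reference.nodes,
                   key=lambda r: (r.xy[0]-s.xy[0])**2 + (r.xy[1]-s.xy[1])**2)
        if ((best.xy[0]-s.xy[0])**2 + (best.xy[1]-s.xy[1])**2) <= tol**2:
            pairs.append((s.id, best.id))
    return pairs
```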

DOVE : A Distributed Object System for Virtual Computing Environment (DOVE : 가상 계산 환경을 위한 분산 객체 시스템)

  • Kim, Hyeong-Do;Woo, Young-Je;Ryu, So-Hyun;Jeong, Chang-Sung
    • Journal of KIISE: Computing Practices and Letters / v.6 no.2 / pp.120-134 / 2000
  • In this paper we present a Distributed Object oriented Virtual computing Environment, called DOVE, which consists of autonomous distributed objects interacting with one another via method invocations based on a distributed object model. To the user, DOVE logically appears as a single virtual computer over a set of heterogeneous hosts connected by a network, as if objects at remote sites resided in one virtual computer. By supporting efficient parallelism, heterogeneity, group communication, a single global name service, and fault tolerance, it provides a transparent and easy-to-use programming environment for parallel applications. Efficient parallelism is supported by diverse remote method invocations, multiple method invocation for object groups, a multi-threaded architecture, and synchronization schemes. Heterogeneity is achieved by automatic data marshalling and unmarshalling, and an easy-to-use, transparent programming environment is provided by the stub and skeleton objects generated by the DOVE IDL compiler, together with the object life-cycle control and naming service of the object manager. The autonomy of distributed objects, the multi-layered architecture, and the decentralized approaches to hierarchical naming service and object management make DOVE more extensible and scalable. Fault tolerance is provided by fault detection in objects using a timeout mechanism and by fault notification using asynchronous exception handling methods.
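
The stub/skeleton and marshalling machinery mentioned above can be illustrated with a toy in-process example (this is a generic sketch of the pattern, not DOVE's IDL-generated code; class names are hypothetical):

```python
# Illustrative sketch of the stub/skeleton idea behind remote method invocation:
# the stub marshals a call into a message, the skeleton unmarshals it and
# dispatches to the real object.
import pickle

class Skeleton:
    def __init__(self, servant):
        self.servant = servant

    def handle(self, message: bytes) -> bytes:
        method, args, kwargs = pickle.loads(message)        # unmarshal the call
        result = getattr(self.servant, method)(*args, **kwargs)
        return pickle.dumps(result)                         # marshal the result

class Stub:
    def __init__(self, skeleton: Skeleton):
        self.skeleton = skeleton                            # stands in for the network

    def __getattr__(self, method):
        def invoke(*args, **kwargs):
            message = pickle.dumps((method, args, kwargs))  # marshal the call
            return pickle.loads(self.skeleton.handle(message))
        return invoke

class Calculator:                                           # the "remote" object
    def add(self, a, b):
        return a + b

proxy = Stub(Skeleton(Calculator()))
print(proxy.add(2, 3))   # looks like a local call, but goes through marshalling
```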


Experimental Analysis of Korean and CPMP Textbooks: A Comparative Study (한국과 미국의 교과서 체제 비교분석)

  • Shin, Hyun-Sung;Han, Hye-Sook
    • Journal of the Korean School Mathematics Society / v.12 no.2 / pp.309-325 / 2009
  • The purpose of the study was to investigate the differences between Korean mathematics textbooks and CPMP textbooks in terms of conceptual networks, the structure of mathematical content, instructional design, and the teaching and learning environment, in order to explore the implications for mathematics education in Korea. According to the results, Korean textbooks emphasized mathematical structures and conceptual networks, whereas CPMP textbooks focused on making connections between mathematical concepts and corresponding real-life situations as well as on mathematical structures. Generalizing mathematical concepts at the symbolic level was a very important objective in Korean textbooks, but in the CPMP textbooks, investigating mathematical ideas and solving problems in diverse contexts, including real-life situations, were considered very important. Teachers using Korean textbooks preferred an explanatory teaching method with concrete manipulatives and student worksheets, whereas teachers using CPMP textbooks emphasized collaborative group activities for communicating mathematical ideas and encouraged students to use graphing calculators when exploring mathematical concepts and solving problems.


A Fashion Design Recommender Agent System using Collaborative Filtering and Sensibilities related to Textile Design Factors (텍스타일 기반의 협력적 필터링 기술과 디자인 요소에 따른 감성 분석을 이용한 패션 디자인 추천 에이전트 시스템)

  • 정경용;나영주;이정현
    • Journal of KIISE: Computing Practices and Letters / v.10 no.2 / pp.174-188 / 2004
  • In a consumer environment shaped not only by the quality and price of products but also by material abundance, investigating consumers' sensibilities and degrees of preference is the most crucial factor in a product sales strategy. From this perspective, products need to be designed and merchandised to match each consumer's sensibility and needs as well as their functional aspects. In this paper, we propose a Fashion Design Recommender Agent System (FDRAS-pro) for textile design that applies a collaborative filtering personalization technique as one method of material development centered on consumers' sensibilities and preferences. For the textile-based collaborative filtering system, a Representative-Attribute Neighborhood is adopted to determine the number of neighbors used for preference estimation, and Pearson's correlation coefficient is used to calculate similarity weights among users. We build a database founded on sensibility adjectives to develop textile designs, by extracting representative sensibility adjectives from users' sensibilities and preferences regarding textile designs. FDRAS-pro recommends textile designs to customers who have a similar propensity regarding textiles. To investigate sensibility and emotion according to the effect of design factors, the textile designs were analyzed in terms of nine design factors: motif source, motif-background ratio, motif variation, motif interpretation, motif arrangement, motif articulation, hue contrast, value contrast, and chroma contrast. Finally, we plan to conduct empirical applications to verify the adequacy and validity of our system.
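
A minimal sketch of user-based collaborative filtering with Pearson's correlation coefficient, the similarity weight named above (the toy rating matrix, the positive-similarity filter, and the neighborhood size are illustrative assumptions, not the FDRAS-pro implementation):

```python
# Minimal sketch of user-based collaborative filtering with Pearson correlation.
import numpy as np

def pearson(u, v):
    """Pearson correlation over items rated by both users (NaN = unrated)."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def predict(ratings, user, item, k=2):
    """Predict one user's preference for an item from the k most similar users."""
    sims = [(pearson(ratings[user], ratings[v]), v)
            for v in range(len(ratings))
            if v != user and not np.isnan(ratings[v, item])]
    top = [(s, v) for s, v in sorted(sims, reverse=True)[:k] if s > 0]
    num = sum(s * ratings[v, item] for s, v in top)
    den = sum(abs(s) for s, _ in top)
    return num / den if den else np.nan

nan = np.nan
ratings = np.array([[5, 4, nan, 1],      # rows: users, columns: textile designs
                    [4, 5, 3, 2],
                    [1, 2, 5, nan]])
print(predict(ratings, user=0, item=2))  # estimate user 0's rating of design 2
```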

A Parallel Processing Technique for Large Spatial Data (대용량 공간 데이터를 위한 병렬 처리 기법)

  • Park, Seunghyun;Oh, Byoung-Woo
    • Spatial Information Research / v.23 no.2 / pp.1-9 / 2015
  • A graphics processing unit (GPU) contains many arithmetic logic units (ALUs). Because these many ALUs can be exploited for parallel processing, a GPU provides efficient data processing. Spatial data require many geographic coordinates to represent their shapes on a map, and the coordinates are usually stored as geodetic longitude and latitude. To display a map in a 2-dimensional Cartesian coordinate system, the geodetic longitude and latitude must be converted to the Universal Transverse Mercator (UTM) coordinate system. Both the conversion to the other coordinate system and the rendering process that draws the converted coordinates on the screen involve complex floating-point computations. In this paper, we propose a parallel processing technique that performs the conversion and the rendering on the GPU to improve performance. Large spatial data are stored in files on disk. To process this large amount of spatial data efficiently, we propose a technique that merges the spatial data files into a single large file and accesses it as a memory-mapped file. We implement the proposed technique and run an experiment with 747,302,971 points of TIGER/Line spatial data. In the experiment, coordinate-system conversion with the GPU is 30.16 times faster than the CPU-only method, and rendering is 80.40 times faster.
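
A small sketch of the data-access pattern described above: coordinates kept in one large binary file, opened as a memory-mapped array, and converted in bulk. A spherical (Web) Mercator formula stands in for the paper's UTM conversion, and vectorized NumPy stands in for the GPU kernels; the file name and sample coordinates are assumptions:

```python
# Sketch: coordinates stored in one large file, accessed as a memory-mapped
# array, and projected in bulk (simplified spherical Mercator, not true UTM).
import numpy as np

R = 6378137.0  # Earth radius used by spherical Mercator (meters)

def lonlat_to_mercator(lon_deg, lat_deg):
    """Vectorized longitude/latitude (degrees) -> planar x, y (meters)."""
    lon = np.radians(lon_deg)
    lat = np.radians(np.clip(lat_deg, -85.0511, 85.0511))
    x = R * lon
    y = R * np.log(np.tan(np.pi / 4 + lat / 2))
    return x, y

# Write a sample coordinate file once, then reopen it as a memory-mapped array
# so the whole dataset need not be loaded into RAM at once.
coords = np.array([[127.0, 37.5], [126.9, 37.6]], dtype=np.float64)
coords.tofile("coords.bin")                      # hypothetical merged data file

mapped = np.memmap("coords.bin", dtype=np.float64, mode="r").reshape(-1, 2)
x, y = lonlat_to_mercator(mapped[:, 0], mapped[:, 1])
print(x[:2], y[:2])
```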

The High Cost of Fear (리포트 - 공포의 값비싼 대가)

  • Shellenberger, Michael
    • Nuclear industry / v.37 no.9 / pp.58-90 / 2017
  • 'The High Cost of Fear' is a report that analyzes the economic and environmental effects of Korea's nuclear phase-out policy, using the most recent peer-reviewed publicly available data and simple calculations. We predict that the phase-out policy will have the following effects: purchasing natural gas alone will cost at least 10 billion dollars per year, an amount equivalent to 343,000 jobs at Korea's average annual wage of 29,125 dollars; most of this cost will go to fuel imports, worsening Korea's trade balance; given Korea's scarce renewable energy resources, a substantial amount of additional fossil fuel will be burned; as LNG plants replace nuclear plants rather than coal plants, premature deaths from air pollution will increase; Korea's promising nuclear export industry will either collapse outright or suffer severe damage; and annual carbon emissions will rise by the amount emitted by 1.5 to 2.7 million American cars (based on the average annual mileage of a U.S. car), making it impossible for Korea to meet the emission reduction targets pledged under the Paris climate agreement. The report also analyzes the historical and social background of the currently planned phase-out policy and draws the following conclusions: heavily funded foreign environmental groups such as Greenpeace and Friends of the Earth are the source of anti-nuclear misinformation and oppose the very idea of cheap, abundant energy; the main causes of the Fukushima accident and its aftermath were the arrogance of the Japanese nuclear industry and an exaggerated collective fear of nuclear power; the logic of the anti-nuclear camp reflects distrust of industry and government and a lack of understanding of nuclear power and radiation; and the anti-nuclear camp uses the Fukushima accident to exaggerate the seriousness of the 2014 Korea Hydro & Nuclear Power (KHNP) parts-supply corruption scandal, even though that scandal demonstrated the independence of Korea's nuclear regulator and the 2016 Gyeongju earthquake was only about 1/350,000 the size of the 2011 Great East Japan earthquake that caused the tsunami and meltdowns at Fukushima. Finally, the report summarizes the lessons of anti-nuclear movements in Korea and other countries as follows: no country, not even energy-resource-poor countries such as France or Korea, is free from the 'war' on nuclear power, which is the cause of the worldwide decline of the nuclear industry; for cultural, institutional, and financial reasons, the nuclear industry, governments, and the IAEA cannot achieve the goal of protecting and expanding nuclear power in Korea and many other countries; saving the nuclear industry requires a new vision, new institutions, and new leadership; the original and transformative vision of nuclear power, atomic humanism, needs to be rediscovered; new organizations such as scientific research groups, universities, nonprofit corporations, and NGOs should be supported in order to defend nuclear power and communicate with the public; and fear must be overcome in the face of fear-mongering anti-nuclear forces, drawing lessons from other technologies that have overcome public fear.
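
The jobs-equivalent figure quoted above follows from a simple division of the stated gas cost by the stated average wage:

$$\frac{\$10 \times 10^{9}\ \text{per year}}{\$29{,}125\ \text{per job-year}} \approx 343{,}000\ \text{jobs per year}.$$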


The Cell Resequencing Buffer for the Cell Sequence Integrity Guarantee for the Cyclic Banyan Network (사이클릭 벤얀 망의 셀 순서 무결성 보장을 위한 셀 재배열 버퍼)

  • 박재현
    • Journal of the Institute of Electronics Engineers of Korea TC / v.41 no.9 / pp.73-80 / 2004
  • In this paper, we present a cell resequencing buffer that solves the cell sequence integrity problem of the Cyclic Banyan network, a high-performance fault-tolerant cell switch. By offering multiple paths between input ports and output ports through deflection self-routing, the Cyclic Banyan switch offers high reliability and also relieves congestion on the internal links of the switch. However, these multiple paths can have different lengths, so cells departing from the same source port and destined for the same output port may reach that output port in an order different from the order in which they arrived at the input port. The proposed cell resequencing buffer is a hardware sliding-window mechanism that solves this cell sequence integrity problem. To determine the size of the sliding window, which dominates the cost of the proposed device, we analyzed the distribution of cell delay through simulations under a traffic load with a nonuniform address distribution that reflects the properties of Internet traffic. Through these analyses, we found that a cell resequencing buffer that guarantees cell sequence integrity can be built with a small amount of ordinary memory and control logic. The cell resequencing buffer presented in this paper can also be used for other multipath switching networks.
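
A minimal sketch of a sliding-window resequencing buffer of the kind described above (the window size, the drop policy for out-of-window cells, and all names are illustrative assumptions; the paper derives the window size from its delay analysis):

```python
# Illustrative sliding-window resequencing buffer: cells carry sequence numbers
# and may arrive out of order over paths of different lengths; the buffer
# releases them strictly in order.
from typing import Dict, List

class ResequencingBuffer:
    def __init__(self, window_size: int):
        self.window_size = window_size
        self.next_seq = 0                 # next sequence number to release
        self.pending: Dict[int, object] = {}

    def receive(self, seq: int, cell: object) -> List[object]:
        """Store an arriving cell and return all cells now releasable in order."""
        if seq < self.next_seq or seq >= self.next_seq + self.window_size:
            return []                     # outside the window: drop (or flag error)
        self.pending[seq] = cell
        released = []
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released

buf = ResequencingBuffer(window_size=4)
print(buf.receive(1, "cell-1"))   # [] : cell 0 has not yet arrived
print(buf.receive(0, "cell-0"))   # ['cell-0', 'cell-1'] : released in order
```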

A Passport Recognition and Face Verification Using Enhanced Fuzzy ART Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim Kwang-Baek
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.17-31 / 2006
  • In this paper, passport recognition and face verification methods are proposed that can automatically recognize passport codes and detect forged passports, in order to improve the efficiency and systematic control of immigration management. Correcting the slant is very important for character recognition and face verification, since slanted passport images cause various unwanted effects in the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted string of characters is selected, and the angle is adjusted using the slant of the horizontal straight line that connects the centers of thickness of the left and right parts of the string. Passport codes are extracted using the Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The extracted code-string area is converted to binary form by a repeated binarization method, the code strings are restored by applying a CDM mask to the binary string area, and individual codes are extracted with the 8-neighborhood contour tracking algorithm. In the proposed RBF network, an enhanced fuzzy ART algorithm that dynamically controls the vigilance parameter is applied to the middle layer using a fuzzy logic connection operator. The face is verified by measuring the similarity between the feature vector of the facial image from the passport and the feature vector of the facial image from the database, both constructed with the PCA algorithm. In tests using forged passports and passports with slanted images, the proposed method proved effective in recognizing passport codes and verifying facial images.
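
A minimal sketch of the verification step described above: project face images with PCA and compare the resulting feature vectors (the Euclidean-distance measure, threshold, and toy data are assumptions; the paper's similarity measure and face database are not reproduced):

```python
# Sketch of PCA-based face verification: project images onto principal axes
# ("eigenfaces") and compare feature vectors by distance.
import numpy as np

def fit_pca(images, n_components):
    """images: (n_samples, n_pixels). Returns the mean and top principal axes."""
    mean = images.mean(axis=0)
    centered = images - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]          # each row is one principal axis

def project(image, mean, components):
    return components @ (image - mean)       # PCA feature vector

def verify(passport_face, db_face, mean, components, threshold=5.0):
    a = project(passport_face, mean, components)
    b = project(db_face, mean, components)
    return float(np.linalg.norm(a - b)) <= threshold

# Toy data: 10 random 8x8 "faces" to fit PCA, then verify one against itself.
rng = np.random.default_rng(1)
faces = rng.random((10, 64))
mean, comps = fit_pca(faces, n_components=5)
print(verify(faces[0], faces[0], mean, comps))  # True: identical images match
```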


Features of sample concepts in the probability and statistics chapters of Korean mathematics textbooks of grades 1-12 (초.중.고등학교 확률과 통계 단원에 나타난 표본개념에 대한 분석)

  • Lee, Young-Ha;Shin, Sou-Yeong
    • Journal of Educational Research in Mathematics / v.21 no.4 / pp.327-344 / 2011
  • This study is a first step toward improving high school students' capability for statistical inference, such as obtaining and interpreting the confidence interval on the population mean that is currently taught in high school. We suggest five underlying concepts, 'discerning contingency from inevitability', 'discerning induction from deduction', 'the likelihood principle', 'the variability of a statistic', and 'the statistical model', which are necessary to appreciate statistical inference as a reliable arguing tool despite its occasionally erroneous conclusions. We assume that these five concepts should develop gradually over the school years, and we analyzed Korean mathematics textbooks of grades 1-12 accordingly. The following was found. Regarding the right choice of solution methodology for a given problem, no elementary textbook and only a few high school textbooks describe the difference between contingent and inevitable circumstances. Formal definitions of population and sample are not introduced until the high school grades, so the development of critical thought on the reliability of inductive reasoning could not be observed. On the contrary, strong emphasis is placed on calculations with the sample data, without any inference about the population based on the sample. Instead of the representative properties of a random sample, more emphasis is placed on how to obtain a random sample. As a result, the fact that the random variability of the value of a statistic calculated from a sample is inherited from the randomness of the sample is neither noticed nor explained. No comparative descriptions of statistical inference versus mathematical (deductive) reasoning were found. Few explanations of the likelihood principle and its probabilistic applications, matched to students' cognitive development, were found. It was also hard to find explanations of the random variability of a statistic and of the existence of its sampling distribution. Explaining this is worthwhile because, even though obtaining the sampling distribution of a particular statistic such as the sample mean is very difficult, merely noticing its existence may drastically change one's understanding of statistical inference.
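
The 'variability of a statistic' discussed above can be made concrete with a short simulation showing that the sample mean itself has a distribution (the population, sample size, and repetition count below are arbitrary illustrations, not data from the study):

```python
# Small simulation: a statistic (the sample mean) varies from sample to sample,
# and that variability has its own distribution (the sampling distribution).
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=170.0, scale=8.0, size=100_000)  # e.g. heights (cm)

sample_means = [rng.choice(population, size=25, replace=False).mean()
                for _ in range(2_000)]

print(round(np.mean(sample_means), 2))   # close to the population mean (170)
print(round(np.std(sample_means), 2))    # close to 8 / sqrt(25) = 1.6
```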
