• Title/Summary/Keyword: time domain data

Search Result 1,310

Design of Personalization Service System in Mobile GIS (모바일 GIS에서의 개인화 서비스 시스템 설계)

  • Park, Key-Ho;Jung, Jae-Gon
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2008.10a
    • /
    • pp.106-112
    • /
    • 2008
  • Personalization is a user-oriented, dynamic method based on user preferences that gives users easy access to what they want to view or obtain. It has become more important in the mobile domain with the rapid growth of the wireless Internet and the mobile phone market following the success of the web-based market, and it can therefore also be applied to services delivering spatial analysis results. In this paper, spatial analysis using user profiles and a notification service are proposed as personalized spatial data services for mobile users. A service system for spatial analysis with user profiles is designed to demonstrate the feasibility of spatial analysis based on user preferences, and a notification service is designed to show that the generated output can be sent efficiently to users' mobile devices so that they are kept informed of the information they prefer. A prototype system is implemented and applied to real estate data, which offers many user-selectable conditions. Information services based on user preferences can be applied to spatial data using the proposed system, and the system is efficient when a cache module is used to shorten response time. Various user models for application domains and performance evaluation methods need to be developed in the future.
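
A minimal sketch of the idea, under illustrative assumptions (hypothetical profile fields, an in-memory cache; the paper does not disclose its implementation), of how a user-profile-driven spatial query with a response cache could shorten response time:

```python
# Hypothetical sketch: profile-based filtering of real-estate listings with an
# in-memory cache keyed by the profile. Names and fields are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class UserProfile:
    region: str          # preferred district
    max_price: int       # upper price bound
    min_area: float      # minimum floor area in m^2

_cache: dict[UserProfile, list[dict]] = {}

def run_spatial_analysis(listings: list[dict], profile: UserProfile) -> list[dict]:
    """Filter listings by the user's preferences (stand-in for the spatial analysis)."""
    return [item for item in listings
            if item["region"] == profile.region
            and item["price"] <= profile.max_price
            and item["area"] >= profile.min_area]

def personalized_query(listings: list[dict], profile: UserProfile) -> list[dict]:
    """Serve repeated requests for the same profile from the cache (shorter response time)."""
    if profile not in _cache:
        _cache[profile] = run_spatial_analysis(listings, profile)
    return _cache[profile]

listings = [{"region": "Jongno", "price": 45000, "area": 84.0},
            {"region": "Jongno", "price": 80000, "area": 59.0}]
profile = UserProfile(region="Jongno", max_price=50000, min_area=60.0)
# The second call with the same profile hits the cache; its result could then be
# pushed to the user's device by a notification service.
print(personalized_query(listings, profile))
print(personalized_query(listings, profile))
```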

  • PDF

Moving Image Compression with Splitting Sub-blocks for Frame Difference Based on 3D-DCT (3D-DCT 기반 프레임 차분의 부블록 분할 동영상 압축)

  • Choi, Jae-Yoon;Park, Dong-Chun;Kim, Tae-Hyo
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.37 no.1
    • /
    • pp.55-63
    • /
    • 2000
  • This paper investigates the sub-region compression effect of the three-dimensional DCT (3D-DCT) using the difference components (DC) between frames of an image sequence. The proposed algorithm obtains its compression effect by dividing the information into sub-bands after the 3D-DCT, where the data take the form of a cubic block (8×8×8) holding eight difference-component frames per unit. In the frequency domain, which transforms the eight difference frames into eight DCT frames containing both the spatial and temporal frequency components of the inter-frame data, each 8×8 frame component along the time axis is divided into 4×4 sub-blocks so that compressed data can be obtained effectively, because the image energy is concentrated in the low-frequency corner region of the cubic block. Using weights for the sub-blocks, the compression ratio is improved by adaptively treating the low-frequency sub-region. In simulation, we estimated the compression ratio and the reconstructed image quality (PSNR) for a simple image and for a complex image containing higher-frequency components. As a result, we obtained high compression performance of 30.36 dB (average for the complex image) and 34.75 dB (average for the simple image) in the compression range of 0.04-0.05 bpp.
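
A minimal sketch of the transform step, assuming SciPy is available; the block sizes follow the abstract (8×8×8 cubes, 4×4 low-frequency sub-blocks), but the weighting and quantization of the paper are not reproduced:

```python
# Illustrative sketch: 3D-DCT of an 8x8x8 cube of frame differences and
# retention of the low-frequency 4x4 corner sub-block of each plane.
import numpy as np
from scipy.fft import dctn, idctn

def encode_cube(frames: np.ndarray) -> np.ndarray:
    """frames: 9 consecutive grayscale 8x8 frames, shape (9, 8, 8)."""
    diffs = np.diff(frames.astype(np.float32), axis=0)  # 8 difference frames -> (8, 8, 8)
    coeffs = dctn(diffs, norm="ortho")                   # 3D-DCT of the cube
    kept = np.zeros_like(coeffs)
    kept[:, :4, :4] = coeffs[:, :4, :4]                  # keep the low-frequency corner
    return kept

def decode_cube(kept: np.ndarray, first_frame: np.ndarray) -> np.ndarray:
    """Rebuild the 9 frames from the kept coefficients and the first frame."""
    diffs = idctn(kept, norm="ortho")
    return np.concatenate([first_frame[None], first_frame + np.cumsum(diffs, axis=0)])

rng = np.random.default_rng(0)
frames = rng.random((9, 8, 8)) * 255
recon = decode_cube(encode_cube(frames), frames[0])
mse = np.mean((frames - recon) ** 2)
print(f"PSNR: {10 * np.log10(255.0 ** 2 / mse):.2f} dB")
```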

  • PDF

Effects of Differential Heating by Land-Use Types on Flow and Air Temperature in an Urban Area (토지 피복별 차등 가열이 도시 지역의 흐름과 기온에 미치는 영향)

  • Park, Soo-Jin;Choi, So-Hee;Kang, Jung-Eun;Kim, Dong-Ju;Moon, Da-Som;Choi, Wonsik;Kim, Jae-Jin;Lee, Young-Gon
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.6
    • /
    • pp.603-616
    • /
    • 2016
  • In this study, the effects of differential heating by land-use type on flow and air temperature at the Seoul Automated Synoptic Observing System (ASOS) station located at Songwol-dong, Jongno-gu, Seoul were analyzed. For this, a computational fluid dynamics (CFD) model was coupled to the Local Data Assimilation and Prediction System (LDAPS) to reflect the local meteorological characteristics at the boundaries of the CFD model domain. The time variation of temperatures on solid surfaces was calculated using observation data at El Oued, Algeria, whose latitude is similar to that of the target area. Considering land-use type and shadow, surface temperatures were prescribed in the coupled LDAPS-CFD model. LDAPS overestimated wind speeds and underestimated air temperatures compared to the observations. However, the coupled LDAPS-CFD model reproduced the observed wind speeds and air temperatures relatively well by taking into account the complicated flows and surface temperatures in the urban area. In the morning, when easterlies were dominant around the target area, both LDAPS and the coupled LDAPS-CFD model underestimated the observed temperatures at the Seoul ASOS, because Kyunghee Palace, located in the upwind region, is composed of green area and its surface temperature was relatively low. In the afternoon, when southeasterlies were dominant, LDAPS still underestimated the temperatures, whereas the coupled LDAPS-CFD model reproduced the observed temperatures at the Seoul ASOS well by considering building-surface heating.

Measurement System of Dynamic Liquid Motion using a Laser Doppler Vibrometer and Galvanometer Scanner (액체거동의 비접촉 다점측정을 위한 레이저진동계와 갈바노미터스캐너 계측시스템)

  • Kim, Junhee;Shin, Yoon-Soo;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.31 no.5
    • /
    • pp.227-234
    • /
    • 2018
  • Research on the measurement and control of dynamic liquid behavior such as sloshing has been actively undertaken in various engineering fields. In building structures, liquid vibration is measured in studies of tuned liquid dampers (TLDs), which attenuate the wind-induced motion of buildings. To overcome the limitations of existing wave-height sensors, a method of measuring liquid vibration in a TLD using a laser Doppler vibrometer (LDV) and a galvanometer scanner is proposed in this paper: the principle of measuring velocity and displacement is discussed, and a multi-point measurement system using a single LDV, based on the operating principle of the galvanometer scanner, is established. Four-point liquid vibration on the TLD is measured, and the time-domain data of each point are compared with conventional video-sensing data. It was confirmed that the waveform transitions between a traveling wave and a standing wave. In addition, the data with measurement delay are cross-correlated before performing singular value decomposition. The natural frequencies and mode shapes are compared with theoretical and video-sensing results.
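
A minimal sketch (with synthetic signals standing in for the LDV/galvanometer measurements; the scan timing and TLD geometry of the paper are not modeled) of extracting a natural frequency and mode shape from four-point time-domain data via FFT and singular value decomposition:

```python
# Illustrative sketch: 4-point wave-height time histories -> dominant frequency
# (averaged FFT spectrum) and mode shape (first left singular vector).
import numpy as np

fs = 100.0                              # sampling rate [Hz]
t = np.arange(0, 20, 1 / fs)            # 20 s record
x = np.array([0.1, 0.37, 0.63, 0.9])    # normalized measurement positions along the tank

# Synthetic first sloshing mode: half-sine shape oscillating at 0.55 Hz plus noise.
mode_shape = np.sin(np.pi * x)
signals = np.outer(mode_shape, np.sin(2 * np.pi * 0.55 * t))
signals += 0.05 * np.random.default_rng(1).standard_normal(signals.shape)

# Natural frequency from the averaged amplitude spectrum (skip the DC bin).
spectrum = np.abs(np.fft.rfft(signals, axis=1)).mean(axis=0)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"dominant frequency: {freqs[spectrum[1:].argmax() + 1]:.2f} Hz")

# Mode shape from the SVD of the mean-removed (points x time) data matrix.
U, s, Vt = np.linalg.svd(signals - signals.mean(axis=1, keepdims=True), full_matrices=False)
print("estimated mode shape:", np.round(np.abs(U[:, 0]) / np.abs(U[:, 0]).max(), 2))
```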

An Ethnographic Study about Taegyo Practice in Korea (태교 실천에 대한 일상생활 기술적 연구)

  • 김현옥
    • Journal of Korean Academy of Nursing
    • /
    • v.27 no.2
    • /
    • pp.411-422
    • /
    • 1997
  • The purpose of this study is twofold: (i) to investigate how much effort married couples make for the good health of both the pregnant woman and her unborn child, from the time of their marriage through the period of conception; and (ii) to comprehensively investigate the socio-cultural backgrounds that affect this prenatal effort. The results of this study provide a basis for a prenatal care program appropriate to Korean culture. The study was conducted using the ethnographic research method. The subjects were 53 people in all, consisting of 33 pregnant women and 20 husbands. In order to investigate socio-cultural factors that influence Taegyo, producers of Taegyo music were interviewed. In addition, the researcher surveyed the market for Taegyo music, participated in special courses of prenatal education, analyzed the content of books and periodicals dealing with Taegyo, and collected the concepts of Taegyo distributed by the mass media. The full-fledged study continued for eight months, from February to August 1996. The data were analyzed as soon as they were collected. Spradley's (1979, 1980) developmental sequence of domain analysis, taxonomic analysis, componential analysis, and theme analysis, in this order, was adopted as the procedure for analyzing the data. To ensure the rigor of the study, Sandelowski's (1986) four criteria, that is, credibility, fittingness, auditability, and confirmability, were applied to all stages of data collection, data analysis, interpretation of the results, and description of the results. The results are as follows. 1. The couples' Taegyo at the preconception stage was related to the physical, psychological, and spiritual conditions under which a healthy baby would be born. Specific methods they preferred were "the choice of one's spouse," "physical check-ups," "maintaining good physical health," "praying," and so on. 2. When the married couple had sex in order to conceive, their Taegyo was related to their physical, psychological, and environmental conditions. Specific methods they preferred were "having sex at a specific time," "having sex in a nice place," "purifying their minds while having sex," and so on. 3. The married couples' Taegyo during pregnancy was related to their physical, psychological, emotional, environmental, social, and spiritual conditions. Specific methods they preferred were "listening to music," "reading," "looking only at beautiful things," "avoiding looking at or listening to bad things," "eating food of good shape," "avoiding drugs," "taking Korean herbal medicine," "sexual abstinence," "avoiding dangerous places," "keeping emotional tranquility," "moderate exercise and rest," "leading a pure life," "praying," "being aware of their words and behavior," "keeping a good relationship as a couple," "interacting with the unborn child," "supporting Taegyo for pregnant women," and so on. 4. The married couples put Taegyo into practice on the basis of the following principles: the principle of respecting the unborn child, the principle of forming a good disposition, the principle of top-down parental love, the principle of synergy between the pregnant woman and her unborn child, the principle of expecting a good child, the principle of forming a good habit, and the principle of acquiring the parental role. 5. The practice of Taegyo is influenced by factors related to the married couple, the supporting system, and the mass media.
The couple-related factors include their knowledge of Taegyo, the degree of importance they assign to it, their characters, their spare time, their health, the age of the pregnant woman, their conception plan, their religion, their belief in the effects of Taegyo, and the birth of a previous baby, in this order. The supporting-system factor consists of the husband's support, the family's support, and the neighbors' support. The mass-media factors include the broadcasting media, books specializing in Taegyo, periodicals for pregnant women, booklets advertising powdered milk, Taegyo music from record manufacturing companies, and teaching materials for gifted children. Among these, the mass media in particular are taking advantage of Taegyo as a main source of economic profit and are leading the public's behavior pattern toward a prodigal one. Taegyo is a self-controlled practice directed toward the following: the physical and psychological health of the pregnant woman and her unborn child, the development of the unborn child's good character, the development of the unborn child's intelligence and talents, the expectation of the unborn child's good features, the shaping of a good habit, the expectation of the unborn child's bright future, the learning of a parental role, and the expectation of the birth of a son. Above all, it is part of a good cultural tradition that pursues a value higher than that pursued by conventional prenatal care. The principles of pregnancy care inherent in the practice of Taegyo will provide a guideline for the development of prenatal care.

  • PDF

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities by using only a single sensor's data, i.e., the smartphone accelerometer data. The approach that we take to deal with this ten-class problem is to use the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since some classes may be correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature-subset selection, a random forest has more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window covering the last 2 seconds, etc. For experiments comparing the performance of END with that of other methods, accelerometer data were collected every 0.1 second for 2 minutes for each activity from 5 volunteers. Of the 5,900 (= 5 × (60 × 2 − 2) / 0.1) data points collected for each activity (the data for the first 2 seconds are discarded because they do not have a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
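
A minimal sketch of the two ingredients described above, under illustrative assumptions (a single accelerometer-magnitude stream at 10 Hz, scikit-learn available): the 2-second window features and a single randomly drawn nested dichotomy with a random forest at each node; an ensemble would combine several such trees. This is not the paper's implementation.

```python
# Illustrative sketch: window features from accelerometer magnitudes plus one
# random nested-dichotomy tree whose binary classifiers are random forests.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(magnitude, window=20):
    """Per time point: current magnitude, max, min, std of the last `window` samples
    (20 samples = 2 s at the 0.1 s sampling interval mentioned in the abstract)."""
    rows = []
    for i in range(window, len(magnitude)):
        w = magnitude[i - window:i]
        rows.append([magnitude[i], w.max(), w.min(), w.std()])
    return np.asarray(rows)

class NestedDichotomy:
    """One randomly drawn nested dichotomy; an END would combine several of these."""
    def __init__(self, classes, rng):
        self.classes = list(classes)
        if len(self.classes) > 1:
            rng.shuffle(self.classes)
            split = int(rng.integers(1, len(self.classes)))
            self.left = NestedDichotomy(self.classes[:split], rng)
            self.right = NestedDichotomy(self.classes[split:], rng)
            self.clf = RandomForestClassifier(n_estimators=50, random_state=0)

    def fit(self, X, y):
        if len(self.classes) > 1:
            mask = np.isin(y, self.classes)
            X_sub, y_sub = X[mask], y[mask]
            side = np.isin(y_sub, self.left.classes).astype(int)  # 1 = left subset
            self.clf.fit(X_sub, side)
            self.left.fit(X_sub, y_sub)
            self.right.fit(X_sub, y_sub)
        return self

    def predict_proba(self, X):
        """List of {class: probability} dicts, multiplying probabilities down the tree."""
        if len(self.classes) == 1:
            return [{self.classes[0]: 1.0}] * len(X)
        p_left = self.clf.predict_proba(X)[:, list(self.clf.classes_).index(1)]
        result = []
        for pl, dl, dr in zip(p_left, self.left.predict_proba(X), self.right.predict_proba(X)):
            probs = {c: pl * p for c, p in dl.items()}
            probs.update({c: (1.0 - pl) * p for c, p in dr.items()})
            result.append(probs)
        return result

# Usage sketch (assuming every class appears in the training data):
#   nd = NestedDichotomy(np.unique(y_train), np.random.default_rng(0)).fit(X_train, y_train)
#   y_pred = [max(p, key=p.get) for p in nd.predict_proba(X_test)]
```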

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.95-110
    • /
    • 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in the increasing need to collect, store, search for, analyze, and visualize this data. This kind of data cannot be handled appropriately by using the traditional methodologies usually used for analyzing structured data because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data such as text files and log files through various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature of unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneer researchers and business practitioners. Opinion mining or sentiment analysis refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not have otherwise been solved by existing traditional approaches. One of the most representative attempts using the opinion mining technique may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published on various media is obviously a traditional example of unstructured text data. Every day, a large volume of new content is created, digitalized, and subsequently distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information. In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies including ours have utilized a sentiment dictionary to elicit sentiment polarity or sentiment value from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, sentences in a document, and the whole document. However, most traditional approaches have common limitations in that they do not consider the flexibility of sentiment polarity, that is, the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis. It can also be contradictory in nature. The flexibility of sentiment polarity motivated us to conduct this study. In this paper, we have stated that sentiment polarity should be assigned, not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement our idea, we presented an intelligent investment decision-support model based on opinion mining that performs the scraping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index.
In addition, we applied a domain-specific sentiment dictionary instead of a general purpose one to classify each piece of news as either positive or negative. For the purpose of performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model. For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
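
A minimal sketch, with an invented two-sided lexicon and invented headlines (the paper's actual domain-specific dictionary and news corpus are not reproduced here), of how dictionary-based polarity scores per day can be turned into a next-day direction signal:

```python
# Illustrative sketch: score each article with a domain-specific lexicon,
# aggregate by day, and emit an up/down signal for the next trading day.
domain_lexicon = {"rally": 1.0, "surge": 1.0, "record high": 1.5,
                  "plunge": -1.0, "slump": -1.0, "sell-off": -1.0}

def article_polarity(text: str) -> float:
    """Sum of lexicon weights for terms appearing in the article."""
    text = text.lower()
    return sum(weight for term, weight in domain_lexicon.items() if term in text)

def daily_signal(articles_by_day: dict[str, list[str]]) -> dict[str, str]:
    """Return an 'up'/'down' prediction for the day following each news day."""
    signals = {}
    for day, articles in articles_by_day.items():
        score = sum(article_polarity(a) for a in articles)
        signals[day] = "up" if score > 0 else "down"
    return signals

news = {"2011-07-01": ["KOSPI hits record high on foreign buying",
                       "Exporters rally despite weak won"],
        "2011-08-09": ["Markets plunge amid global sell-off"]}
print(daily_signal(news))   # {'2011-07-01': 'up', '2011-08-09': 'down'}
```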

The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.233-253
    • /
    • 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are largely driven by new information rather than by present and past prices. Since new information is unpredictable, the stock market will follow a random walk. Despite these theories, Schumaker [2010] asserted that people keep trying to predict the stock market by using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations, and wavelet transforms to model future prices. Examples of artificial intelligence approaches that deal with optimization and machine learning are genetic algorithms, support vector machines (SVM), and neural networks. Statistical approaches typically predict the future by using past stock market data. Recently, financial engineers have started to predict stock price movement patterns by using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs about certain things. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the commonly accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a domain of the data mining field that extracts public opinions expressed in SNS. There have been many studies on the issues of opinion mining from web sources such as product reviews, forum posts, and blogs. In relation to this literature, we try to understand the effects of firms' SNS exposures on stock prices in Korea. Similarly to Bollen et al. [2011], we empirically analyze the impact of SNS exposures on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions in Twitter and blogs by using natural language processing and analysis tools. It collects the sentences circulating in Twitter in real time, breaks these sentences down into word units, and then extracts keywords. In this study, we classify firms' exposures in SNS into two groups: positive and negative. To test the correlation and causation relationship between SNS exposures and stock price returns, we first collect 252 firms' stock prices and the KRX100 index in the Korea Stock Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gather the public attitudes (positive, negative) about these firms from Social Metrics over the same period of time. We conduct a regression analysis between stock prices and the number of SNS exposures. Having checked the correlation between the two variables, we perform a Granger causality test to see the direction of causation between the two variables. The result is that the number of total SNS exposures is positively related to stock market returns. The number of positive mentions also has a positive relationship with stock market returns. Conversely, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant. This means that the impact of positive mentions is statistically larger than the impact of negative mentions.
We also investigate whether the impacts are moderated by industry type and firm size. We find that the SNS exposure impacts are larger for IT firms than for non-IT firms, and larger for small firms than for large firms. The results of the Granger causality test show that changes in stock price returns are caused by SNS exposures, while causation in the other direction is not significant. Therefore, the relationship between SNS exposures and stock prices has a uni-directional causality. The more a firm is exposed in SNS, the more its stock price is likely to increase, while stock price changes may not cause more SNS mentions.
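
A minimal sketch of the two statistical steps described above (OLS regression of returns on mention counts, then a Granger causality test), using statsmodels on synthetic series; the variable names, lag choice, and data are illustrative only:

```python
# Illustrative sketch: regress daily stock returns on daily SNS mention counts,
# then test whether mentions Granger-cause returns. Data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 100
mentions = rng.poisson(lam=30, size=n).astype(float)
# Returns partly driven by the previous day's mention count (a synthetic effect).
returns = 0.002 * (np.roll(mentions, 1) - 30) + rng.normal(0, 0.01, n)
df = pd.DataFrame({"returns": returns[1:], "mentions": mentions[1:]})

# Contemporaneous relationship via OLS.
ols = sm.OLS(df["returns"], sm.add_constant(df["mentions"])).fit()
print(ols.params)

# Granger test: does the 2nd column (mentions) help predict the 1st (returns)?
grangercausalitytests(df[["returns", "mentions"]], maxlag=2)
```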

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.33-56
    • /
    • 2016
  • Many researchers have focused on developing bankruptcy prediction models using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM), to secure enhanced performance. Most bankruptcy prediction models in academic studies have used financial ratios as the main input variables. The bankruptcy of firms is associated with both the firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that exploiting only financial ratios has some drawbacks. Accounting information, such as financial ratios, is based on past data, and it is usually determined one year before bankruptcy. Thus, a time lag exists between the point of closing the financial statements and the point of credit evaluation. In addition, financial ratios do not contain environmental factors, such as the external economic situation. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Thus, qualitative information must be added to the conventional bankruptcy prediction model to supplement the accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various information sources, previous studies have made only limited use of qualitative information. Recently, however, big data analytics, such as text mining techniques, has been drawing much attention in academia and industry, with an increasing amount of unstructured text data available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling. Nevertheless, the use of qualitative information from the web for business prediction modeling is still in an early stage, restricted to limited applications such as stock prediction and movie revenue prediction. Thus, it is necessary to apply big data analytics techniques, such as text mining, to various business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form, due to the complexity of managing and processing unstructured text data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms using both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as the mechanism for processing qualitative information. A sentiment index is provided at the industry level, extracted from a large amount of text data to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles.
The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between the occurring term and the actual situation with respect to the economic condition of the industry rather than the inherent semantics of the term. The experimental results proved that incorporating qualitative information based on big data analytics into the traditional bankruptcy prediction model based on accounting information is effective for enhancing the predictive performance. The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy. In particular, a negative sentiment variable improved the accuracy of corporate bankruptcy prediction because the corporate bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field, in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
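
A minimal sketch of combining quantitative ratios with an industry-level news sentiment index in a single classifier, assuming scikit-learn is available; the feature names, values, and model choice are invented for illustration and do not reproduce the paper's lexicon construction or exact model:

```python
# Illustrative sketch: financial ratios plus a construction-industry sentiment
# index feeding a logistic-regression bankruptcy model. Values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: debt ratio, current ratio, operating margin, industry sentiment index.
X = np.array([[3.2, 0.6, -0.05, -0.8],
              [1.1, 1.8,  0.12,  0.4],
              [2.7, 0.9,  0.01, -0.5],
              [0.8, 2.1,  0.15,  0.6]])
y = np.array([1, 0, 1, 0])   # 1 = went bankrupt

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Score a new firm whose ratios are weak and whose industry sentiment is negative.
new_firm = np.array([[2.5, 0.7, 0.00, -0.6]])
print("bankruptcy probability:", model.predict_proba(new_firm)[0, 1])
```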

A Study about the Direction and Responsibility of the National Intelligence Agency to the Cyber Security Issues (사이버 안보에 대한 국가정보기구의 책무와 방향성에 대한 고찰)

  • Han, Hee-Won
    • Korean Security Journal
    • /
    • no.39
    • /
    • pp.319-353
    • /
    • 2014
  • Cyber-based technologies are now ubiquitous around the globe and are emerging as an "instrument of power" in societies, and they are becoming more available to a country's opponents, who may use them to attack, degrade, and disrupt communications and the flow of information. The globe-spanning range of cyberspace and the absence of national borders will challenge legal systems and complicate a nation's ability to deter threats and respond to contingencies. Through cyberspace, competing powers will target industry, academia, and government, as well as the military, in the air, land, maritime, and space domains of our nations. Enemies in cyberspace will include both states and non-states and will range from unsophisticated amateurs to highly trained professional hackers. In much the same way that airpower transformed the battlefield of World War II, cyberspace has fractured the physical barriers that shield a nation from attacks on its commerce and communication. Cyber threats to infrastructure and other assets are a growing concern to policymakers. In 2013, cyberwarfare was, for the first time, considered a larger threat than Al Qaeda or terrorism by many U.S. intelligence officials. The new United States military strategy makes explicit that a cyberattack is a casus belli just as a traditional act of war is. The Economist describes cyberspace as "the fifth domain of warfare" and writes that, alongside China, Russia, Israel, and North Korea, Iran boasts of having the world's second-largest cyber-army. Entities posing a significant threat to the cybersecurity of critical infrastructure assets include cyberterrorists, cyberspies, cyberthieves, cyberwarriors, and cyberhacktivists. These malefactors may access cyber-based technologies in order to deny service, steal or manipulate data, or use a device to launch an attack against itself or another piece of equipment. However, because the Internet offers near-total anonymity, it is difficult to discern the identity, the motives, and the location of an intruder. The scope and enormity of the threats are not limited to private industry but extend to the country's heavily networked critical infrastructure. There are many ongoing efforts in government and industry that focus on making computers, the Internet, and related technologies more secure. As part of the national intelligence institution's effort, cyber counter-intelligence comprises measures to identify, penetrate, or neutralize foreign operations that use cyber means as the primary tradecraft methodology, as well as foreign intelligence service collection efforts that use traditional methods to gauge cyber capabilities and intentions. However, one of the hardest issues in cyber counterintelligence is the problem of "attribution". Unlike conventional warfare, figuring out who is behind an attack can be very difficult, even though Defense Secretary Leon Panetta has claimed that the United States has the capability to trace attacks back to their sources and hold the attackers "accountable". Considering all these cyber security problems, this paper closely examines cyber security issues through the lessons of the U.S. experience. For that purpose, I review the emerging cyber security issues in light of the changing global security environment of the 21st century and their implications for reshaping the government system. This study mainly deals with and emphasizes cyber security issues as one of the growing national security threats.
This article also reviews what our intelligence and security agencies should do amid the transformation of cyberspace. At any rate, despite all the heated debates about the various legality and human rights issues arising from cyberspace and intelligence service activities, national security must be secured. Therefore, this paper suggests that one of the most important and immediate steps is to understand the legal ideology of national security and national intelligence.

  • PDF