• Title/Summary/Keyword: split data


Design of 4Kb Poly-Fuse OTP IP for 90nm Process (90nm 공정용 4Kb Poly-Fuse OTP IP 설계)

  • Hyelin Kang;Longhua Li;Dohoon Kim;Soonwoo Kwon;Bushra Mahnoor;Panbong Ha;Younghee Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.509-518 / 2023
  • In this paper, we designed a 4Kb poly-fuse OTP IP (Intellectual Property) required for analog circuit trimming and calibration. To reduce the BL resistance of the poly-fuse OTP cell, which consists of an NMOS select transistor and a poly-fuse link, the BL is routed on stacked metal 2 and metal 3. To further reduce BL routing resistance, the 4Kb cells are divided into two 2Kb sub-block cell arrays of 64 rows × 32 columns, split into top and bottom, with the BL drive circuit located between them. In addition, we propose a core circuit for an OTP cell that pairs one poly-fuse link with one select transistor. For the early stages of OTP IP development, we also propose a data sensing circuit that accounts for the case where the resistance of an unprogrammed poly-fuse can be as high as 5 kΩ, and that limits the current flowing through an unprogrammed poly-fuse link in read mode to 138 ㎂ or less. The poly-fuse OTP cell designed with the DB HiTek 90nm CMOS process measures 11.43 ㎛ × 2.88 ㎛ (= 32.9184 ㎛²), and the 4Kb poly-fuse OTP IP measures 432.442 ㎛ × 524.6 ㎛ (= 0.227 mm²).
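
A quick arithmetic sketch of the array organization and area figures quoted in this abstract. All values are taken from the abstract itself; the row/column split is only a sanity check, not layout data from the paper.

```python
# Sanity check of the 4Kb poly-fuse OTP array organization and area figures
# quoted in the abstract (values from the abstract, not the paper's layout data).

ROWS, COLS = 64, 32                      # one 2Kb sub-block cell array
sub_block_bits = ROWS * COLS             # 2048 bits = 2Kb
total_bits = 2 * sub_block_bits          # two sub-blocks (top and bottom) -> 4Kb
assert total_bits == 4096

cell_w_um, cell_h_um = 11.43, 2.88       # poly-fuse OTP cell size (um)
cell_area_um2 = cell_w_um * cell_h_um    # ~32.9184 um^2, as stated

ip_w_um, ip_h_um = 432.442, 524.6        # 4Kb OTP IP size (um)
ip_area_mm2 = ip_w_um * ip_h_um / 1e6    # ~0.227 mm^2, as stated

print(f"total bits      : {total_bits}")
print(f"cell area (um^2): {cell_area_um2:.4f}")
print(f"IP area (mm^2)  : {ip_area_mm2:.3f}")
```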

Algorithm for Maximum Degree Vertex Partition of Cutwidth Minimization Problem (절단 폭 최소화 문제의 최대차수 정점 분할 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.37-42 / 2024
  • This paper proposes a polynomial-time algorithm for the cutwidth minimization problem, which is classified as NP-complete because no polynomial-time algorithm for finding the optimal solution is yet known. To find the minimum cutwidth $CW_f(G) = \max_{v \in V} CW_f(v)$ for a given graph $G=(V,E)$ with $m=|V|$ and $n=|E|$, the proposed algorithm first divides the neighborhood $N_G[v_i]$ of the maximum-degree vertex $v_i$ into left and right parts and decides the vertical cut plane that minimizes the number of edges passing through $v_i$. It then splits the left and right parts of $N_G[v_i]$ into horizontal sections with a minimum number of crossing edges. Next, the vertices within each section are connected into a line graph, and the inter-section lines are connected into a single linear layout. Finally, an optimization step using a vertex-moving method is performed to obtain the minimum cutwidth. Although the proposed algorithm has $O(n^2)$ time complexity, it obtained the optimal solutions for all of the various experimental data.
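
The objective being minimized can be illustrated with a short sketch that evaluates the cutwidth of a given linear layout. This computes only the cost function $CW_f(G)$; it is not the authors' partition-and-move heuristic.

```python
# Cutwidth of a linear layout f of a graph G = (V, E):
# for each gap between consecutive positions, count the edges that cross it;
# CW_f(G) is the maximum such count. A layout-optimization heuristic (like the
# vertex-moving step described in the abstract) would try to minimize this value.

def cutwidth(layout, edges):
    """layout: list of vertices in left-to-right order; edges: iterable of (u, v)."""
    pos = {v: i for i, v in enumerate(layout)}
    cuts = [0] * (len(layout) - 1)          # cuts[i] = edges crossing gap i / i+1
    for u, v in edges:
        lo, hi = sorted((pos[u], pos[v]))
        for gap in range(lo, hi):           # edge (u, v) crosses every gap between them
            cuts[gap] += 1
    return max(cuts) if cuts else 0

# Small example: a 4-cycle. The layout (a, b, c, d) has cutwidth 2.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(cutwidth(["a", "b", "c", "d"], edges))   # -> 2
```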

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on increasing speed to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To deliver those services, reduced latency and high reliability are as critical for real-time services as high data rates. 5G therefore targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services based on Vehicle to X (V2X) communication, such as traffic control, low delay and high reliability for real-time services are as important as high data rates. 5G uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can carry high data rates thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, must control the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with a conventional centralized structure have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs therefore need to be separated at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such split-SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor because the link speed is sufficient and its delay is less than 1 ms, but the information change cycle and the SDN data processing time greatly affect the delay. In particular, in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in which delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that neighbor-vehicle support information reaches the car without errors. We further assumed 5G small cells with a cell radius of 50-250 m and a maximum vehicle speed of 30-200 km/h in order to examine the network architecture that minimizes the delay.
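
Under the stated simulation assumptions (cell radius 50-250 m, vehicle speed 30-200 km/h), a back-of-the-envelope sketch shows how long a vehicle stays inside a small cell, which bounds how often cell-level information must be refreshed. This is illustrative only, not the paper's simulation code.

```python
# Rough cell-dwell-time estimate for the ranges quoted in the abstract:
# 5G small cells of 50-250 m radius and vehicle speeds of 30-200 km/h.
# The dwell time bounds how often the serving cell (and the split SDN
# controller behind it) must refresh vehicle-related information.

def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Worst-case straight-line crossing through the cell diameter."""
    speed_ms = speed_kmh / 3.6
    return 2 * cell_radius_m / speed_ms

for radius in (50, 250):
    for speed in (30, 200):
        print(f"radius {radius:3d} m, speed {speed:3d} km/h "
              f"-> dwell ~{dwell_time_s(radius, speed):6.1f} s")
# e.g. a 50 m cell at 200 km/h is crossed in ~1.8 s, so per-cell state must be
# refreshed on a sub-second to few-second cycle to stay useful for V2X decisions.
```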

Private Income Transfers and Old-Age Income Security (사적소득이전과 노후소득보장)

  • Kim, Hisam
    • KDI Journal of Economic Policy / v.30 no.1 / pp.71-130 / 2008
  • Using data from the Korean Labor & Income Panel Study (KLIPS), this study investigates private income transfers in Korea, where adult children have borne most of the responsibility for supporting their elderly parents in the absence of a well-established social safety net for the elderly. According to the KLIPS data, three out of five households provided some type of support for their aged parents, and two out of five elderly households received financial support from their adult children on a regular basis. However, private income transfers in Korea are not enough to offset the fall in earned income of those who have retired and are approaching an age at which they need financial assistance from external sources. The monthly income of those aged 75 or older, even including their spouses' earnings, is below a meager 450,000 won, which indicates that the elderly in Korea are at high risk of poverty. To analyze the microeconomic factors affecting private income transfers to elderly parents, the following three samples extracted from the KLIPS data are used: a sample of respondents aged 50 or older with detailed information on their financial status; a five-year household panel sample in which unobserved, family-specific, time-invariant characteristics can be controlled for with a fixed-effects model; and a sample of younger split-off households in which characteristics of both the elderly household and their adult children's household can be controlled for simultaneously. The results of estimating private income transfer models on these samples can be summarized as follows. First, the dominant motive lies in the children-to-parent altruistic relationship; another is an exchange motive, in which transfers are paid to elderly parents who take care of their grandchildren. Second, the amount of private income transfers is negatively correlated with the income of the elderly parents and positively correlated with the income of the adult children, although its income elasticity is not that high. Third, the amount of private income transfers peaks when the elderly parents are around 75 years old and declines thereafter. Fourth, public assistance, such as the National Basic Livelihood Security benefit, appears to crowd out private transfers. Private transfers have fared better than public transfers in alleviating elderly poverty, but the role of public transfers has grown rapidly since the welfare expansion after the financial crisis in the late 1990s, so that by 2003 one in four elderly people depended on public transfers as their main income source. As of the same year, however, 12% of elderly households appeared eligible for the National Basic Livelihood benefit yet did not receive any public assistance. To eliminate elderly poverty, the government may need to improve the welfare delivery system as well as increase the welfare budget for the poor. In the face of persistent elderly poverty and increasing demand for public support for the elderly, which will add to government debt, welfare policy needs to target the neediest rather than expand universal benefits, which have a smaller redistributive effect at a heavier cost.
Identifying every disadvantaged elderly person in dire need of economic support and providing them with basic livelihood security is the most important and imminent responsibility we must assume in preparing for the growing aged population, and it should be accompanied by measures to utilize elderly workers who have the capability and a strong will to work.
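
As a rough illustration of the fixed-effects strategy mentioned above (controlling for unobserved, time-invariant family characteristics in a household panel), the sketch below demeans a hypothetical panel within each household before running OLS. The column names are placeholders, not actual KLIPS variables.

```python
# Minimal within (fixed-effects) estimator sketch for a household panel,
# assuming a pandas DataFrame with hypothetical columns:
#   'household', 'year', 'transfer', 'parent_income', 'child_income'.
# Demeaning within each household removes time-invariant, family-specific effects.
import pandas as pd
import statsmodels.api as sm

def within_estimator(panel: pd.DataFrame):
    y_col, x_cols = "transfer", ["parent_income", "child_income"]
    demeaned = panel.copy()
    for col in [y_col] + x_cols:
        demeaned[col] = panel[col] - panel.groupby("household")[col].transform("mean")
    X = sm.add_constant(demeaned[x_cols])
    return sm.OLS(demeaned[y_col], X).fit()

# Usage (with a suitably shaped DataFrame `df`):
# print(within_estimator(df).summary())
```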

Development and Testing of the Model of Health Promotion Behavior in Predicting Exercise Behavior

  • O'Donnell, Michael P.
    • Korean Journal of Health Education and Promotion / v.2 no.1 / pp.31-61 / 2000
  • Introduction. Although half of premature deaths are caused by unhealthy lifestyles such as tobacco smoking, a sedentary lifestyle, alcohol and drug abuse, and poor nutrition, there are no theoretical models that accurately explain these health promotion related behaviors. This study tests a new model of health behavior called the Model of Health Promotion Behavior. The model draws on elements and frameworks suggested by the Health Belief Model, Social Cognitive Theory, the Theory of Planned Action, and the Health Promotion Model. It is intended as a general model of behavior, but this first test of the model uses amount of exercise as the outcome behavior. Design. This study used a cross-sectional mail-out, mail-back survey design to determine the elements within the model that best explained intentions to exercise and those that best explained amount of exercise. A follow-up questionnaire was mailed to all respondents to the first questionnaire about 10 months after the initial survey. A pretest was conducted to refine the questionnaire, and a pilot study to test the protocols and the assumptions used to calculate the required sample size. Sample. The sample was drawn from 2,000 eligible participants at two blue collar (a utility company and part of a hospital) and two white collar (a bank and a pharmaceutical company) worksites located in Southeastern Michigan. Both white collar sites had employee fitness centers, and all four sites offered health promotion programs. In the first survey, 982 responses were received (49.1%) after two mailings to non-respondents and one additional mailing to secure answers to missing data, with 845 usable cases for analyzing current intentions and 918 usable cases for explaining the amount of current exercise. In the follow-up survey, questionnaires were mailed to the 982 employees who responded to the initial survey. After one follow-up mailing to non-respondents and one mailing to secure answers to missing data, 697 (71.0%) responses were received, with 627 (63.8%) usable cases to predict intentions and 673 (68.5%) usable cases to predict amount of exercise. Measures. The questionnaire in the initial survey had 15 scales and 134 items; these scales measured each of the variables in the model. Thirteen of the scales were drawn from the literature; all had Cronbach's alpha scores above .74, and all but three had scores above .80. The questionnaire in the second mailing had only 10 items and measured only outcome variables. Analysis. The analysis included calculation of scale scores, Cronbach's alpha, zero-order correlations, factor analysis, ordinary least squares analysis, hierarchical tests of interaction terms, path analysis, and comparisons of results based on a random split of the data and on splits by gender and employer site. The power of the regression analysis was .99 at the .01 significance level for the model as a whole. Results. Self Efficacy and Non-Health Benefits emerged as the most powerful predictors of Intentions to exercise, together explaining approximately 19% of the variance in future Intentions. Intentions, and the interactions of Intentions with Barriers, with Support of Friends, and with Self Efficacy, were the most consistent predictors of amount of future exercise, together explaining 38% of the variance. With the inclusion of Prior Exercise History, the model explained 52% of the variance in amount of exercise 10 months later.
There were very few differences in the variables that emerged as important predictors of intentions or exercise across the employer sites or between males and females. Discussion. This new model is viable for predicting intentions to exercise and amount of exercise, both in absolute terms and when compared to existing models.
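
The two-step analysis described above (Intentions plus its interactions with Barriers, Support of Friends, and Self Efficacy, then adding Prior Exercise History) can be sketched with a hierarchical OLS. The DataFrame and column names below are hypothetical placeholders for the study's scale scores, not the original data.

```python
# Sketch of the hierarchical OLS with interaction terms described in the abstract,
# using statsmodels' formula interface on a hypothetical DataFrame whose columns
# (exercise_t2, intentions, barriers, friend_support, self_efficacy,
# prior_exercise) are placeholders for the study's scale scores.
import pandas as pd
import statsmodels.formula.api as smf

def fit_hierarchical_models(df: pd.DataFrame):
    # Step 1: Intentions and its interactions with the moderators.
    step1 = smf.ols(
        "exercise_t2 ~ intentions + intentions:barriers"
        " + intentions:friend_support + intentions:self_efficacy",
        data=df,
    ).fit()
    # Step 2: add prior exercise history and compare explained variance.
    step2 = smf.ols(
        "exercise_t2 ~ intentions + intentions:barriers"
        " + intentions:friend_support + intentions:self_efficacy + prior_exercise",
        data=df,
    ).fit()
    return step1, step2

# Usage: m1, m2 = fit_hierarchical_models(df); print(m1.rsquared, m2.rsquared)
# The abstract reports roughly 38% vs 52% of variance explained at these two steps.
```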


Assessing the Sensitivity of Runoff Projections Under Precipitation and Temperature Variability Using IHACRES and GR4J Lumped Runoff-Rainfall Models (집중형 모형 IHACRES와 GR4J를 이용한 강수 및 기온 변동성에 대한 유출 해석 민감도 평가)

  • Woo, Dong Kook;Jo, Jihyeon;Kang, Boosik;Lee, Songhee;Lee, Garim;Noh, Seong Jin
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.1 / pp.43-54 / 2023
  • Due to climate change, droughts and floods have been occurring more frequently. Accurate projections of watershed discharge are imperative to effectively manage natural disasters caused by climate change. However, climate change and hydrological model uncertainty can lead to imprecise analysis. To address these issues, we used two lumped models, IHACRES and GR4J, to compare and analyze changes in discharge under climate stress scenarios. The Hapcheon and Seomjingang dam basins were the study sites, and the Nash-Sutcliffe efficiency (NSE) and the Kling-Gupta efficiency (KGE) were used for parameter optimization. Twenty years of discharge, precipitation, and temperature data (1995-2014) were used and divided into training and testing data sets with a 70/30 split. The accuracies of the modeled results were relatively high during the training and testing periods (NSE > 0.74, KGE > 0.75), indicating that both models could reproduce the observed discharges. To explore the impacts of climate change on the modeled discharges, we developed climate stress scenarios by changing precipitation from -50 % to +50 % in 1 % steps and temperature from 0 ℃ to 8 ℃ in 0.1 ℃ steps, based on two decades of weather data, which resulted in 8,181 climate stress scenarios. We analyzed the yearly maximum, abundant, and ordinary discharges projected by the two lumped models. We found that the trends in the maximum and abundant discharges modeled by IHACRES and GR4J became more pronounced as the changes in precipitation and temperature increased, while the opposite was true for the ordinary discharges. Our study demonstrates that quantitative evaluation of model uncertainty is important to reduce the impacts of climate change on water resources.
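
The two calibration metrics named above have standard definitions, and the 8,181 climate stress scenarios follow directly from the stated perturbation grid. The sketch below uses the textbook NSE and the 2009 form of KGE; the paper may use variants.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 form): 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Climate stress grid from the abstract: precipitation -50 % .. +50 % in 1 % steps
# (101 levels) and temperature +0 .. +8 C in 0.1 C steps (81 levels).
precip_deltas = np.arange(-50, 51, 1)                 # percent
temp_deltas = np.round(np.arange(0, 8.01, 0.1), 1)    # degrees C
print(len(precip_deltas) * len(temp_deltas))          # -> 8181 scenarios
```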

A Basic Study for the Retrieval of Surface Temperature from Single Channel Middle-infrared Images (단일 밴드 중적외선 영상으로부터 표면온도 추정을 위한 기초연구)

  • Park, Wook;Lee, Yoon-Kyung;Won, Joong-Sun;Lee, Seung-Geun;Kim, Jong-Min
    • Korean Journal of Remote Sensing / v.24 no.2 / pp.189-194 / 2008
  • The middle-infrared (MIR) spectral region, between 3.0 and 5.0 ㎛ in wavelength, is useful for observing high-temperature events such as volcanic activity and forest fires. However, atmospheric effects and daytime sun irradiance have not been well studied for this MIR spectral band. The objective of this basic study is to evaluate atmospheric effects and eventually to estimate surface temperature from a single-channel MIR image, although the typical approach utilizes the split-window method with two or more channels. Several parameters are involved in the correction, including various atmospheric data and the sun irradiance over the area of interest. To evaluate the effect of sun irradiance, MODIS MIR images acquired during the day and at night were compared. Atmospheric parameters were modeled with MODTRAN and applied to a radiative transfer model for estimating the sea surface temperature. The MODIS Sea Surface Temperature algorithm, based on multi-channel observation, was run for comparison with the single-channel radiative transfer results. The temperature difference between the two methods was 0.89 ± 0.54 ℃ for the day-time image and 1.25 ± 0.41 ℃ for the night-time image. It is also shown that the emissivity effect influences the estimated temperature more strongly than atmospheric effects do. Although the test results encourage the use of single-channel MIR observation, it must be noted that the results were obtained over a water body, not over a land surface. Because emissivity varies greatly over land, it is very difficult to retrieve land surface temperature from single-channel MIR data.
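
For a single MIR channel, the basic step of turning an at-sensor radiance into a temperature is an inversion of the Planck function. The sketch below shows that inversion for a monochromatic 4 ㎛ channel; it ignores the atmospheric correction (handled via MODTRAN in the study), surface emissivity, and the daytime solar term, so it is only the core piece of a full retrieval.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Spectral radiance B(lambda, T) in W m^-3 sr^-1 (per metre of wavelength)."""
    c1 = 2.0 * H * C ** 2
    c2 = H * C / K
    return c1 / (wavelength_m ** 5 * (math.exp(c2 / (wavelength_m * temp_k)) - 1.0))

def brightness_temperature(wavelength_m: float, radiance: float) -> float:
    """Invert the Planck function for a monochromatic channel."""
    c1 = 2.0 * H * C ** 2
    c2 = H * C / K
    return c2 / (wavelength_m * math.log(1.0 + c1 / (wavelength_m ** 5 * radiance)))

wl = 4.0e-6                              # 4 um, inside the 3-5 um MIR window
L = planck_radiance(wl, 290.0)           # radiance of a 290 K blackbody
print(brightness_temperature(wl, L))     # recovers ~290.0 K
```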

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by traders' expectations, studies have been conducted to predict stock price movements through analysis of various sources of text data. To predict stock price movements, research has examined not only the relationship between text data and fluctuations in stock prices, but also trading stocks based on news articles and social media responses. Studies predicting the movements of stock prices have applied classification algorithms to a term-document matrix constructed in the same way as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building a term-document matrix. Based on word frequency, words with too little frequency or importance are removed; words are also selected by measuring the degree to which they contribute to correctly classifying a document. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that influence the classification. In this study, we analyze the documents for each individual stock and select, as neutral words, the words that are irrelevant to all categories. We extract the words around the selected neutral words and use them to generate the term-document matrix. The neutral-word approach starts from the idea that stock movements are less related to the presence of the neutral words themselves, and that the words surrounding a neutral word are more likely to affect stock price movements. We then apply the generated term-document matrix to an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, then excluded from the selected words those that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news articles as training data and applied the remaining one month of articles to the model to predict the next day's stock price movements. We used SVM, Boosting, and Random Forest for building models and predicting the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the first 60 days were used as a training set and the remaining 20 days as a test set. The proposed word-based algorithm showed better classification performance than the word selection method based on sparsity. This study predicted stock price volatility by collecting and analyzing news articles about the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also other news items to determine the words to extract. In other words, it removes not only the words that appear in both rising and falling cases but also the words that commonly appear in news about other stocks. When the prediction accuracies were compared, the suggested method showed higher accuracy.
A limitation of this study is that stock price prediction was set up as a classification of rise and fall, and the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to show investment performance because stock price fluctuation and rate of return may differ. Therefore, further research using more stocks and predicting returns through trading simulation is needed.
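
A minimal sketch of the general pipeline described above: keep only the words around a set of neutral words, build a term-document matrix from them, and fit the three classifier families on a time-ordered split. The neutral-word list, window size, document format, and labels are placeholders, not the authors' actual selections or data.

```python
# Illustrative neutral-word-context pipeline (placeholder neutral words and data;
# the study selects neutral words per stock from four months of news articles).
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

def context_words(doc: str, neutral_words: set, window: int = 3) -> str:
    """Keep only the words appearing within `window` tokens of a neutral word."""
    tokens = doc.split()
    keep = set()
    for i, tok in enumerate(tokens):
        if tok in neutral_words:
            keep.update(range(max(0, i - window), min(len(tokens), i + window + 1)))
    return " ".join(tokens[i] for i in sorted(keep) if tokens[i] not in neutral_words)

def train_models(docs, labels, neutral_words, n_train):
    """docs: one news string per trading day; labels: next-day up (1) / down (0)."""
    reduced = [context_words(d, neutral_words) for d in docs]
    vec = CountVectorizer()
    X_train = vec.fit_transform(reduced[:n_train])   # e.g. first 60 of 80 days
    X_test = vec.transform(reduced[n_train:])        # remaining 20 days
    y_train, y_test = labels[:n_train], labels[n_train:]
    models = [SVC(), GradientBoostingClassifier(), RandomForestClassifier()]
    return {type(m).__name__: m.fit(X_train, y_train).score(X_test, y_test)
            for m in models}
```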

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services / v.14 no.6 / pp.1-8 / 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the most significant data mining techniques. Pattern mining is a method of discovering useful patterns from huge databases. Frequent pattern mining, one such technique, extracts patterns whose frequencies exceed a minimum support threshold; these patterns are called frequent patterns. Traditional frequent pattern mining is based on a single minimum support threshold for the whole database. This single-support model implicitly supposes that all of the items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, and thus an appropriate pattern mining technique that reflects these characteristics is required. In the frequent pattern mining framework, where the natures of items are not considered, the single minimum support threshold must be set very low to mine patterns containing rare items, which leads to too many patterns containing meaningless items; in contrast, if too high a threshold is used, patterns with rare items cannot be mined at all. This dilemma is called the rare item problem. To solve this problem, early studies proposed approximate approaches that split the data into several groups according to item frequencies or group related rare items together. However, these methods cannot find all of the frequent patterns, including rare frequent patterns, because they are based on approximate techniques. Hence, a pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has a corresponding minimum support threshold, called the MIS (Minimum Item Support), which is calculated from item frequencies in the database. The multiple minimum supports model finds all of the rare frequent patterns, without generating meaningless patterns or losing significant patterns, by applying the MIS. Meanwhile, candidate patterns are extracted during the mining process, and in the single minimum support model only the single threshold is compared with the frequencies of the candidate patterns; therefore, the characteristics of the items that make up a candidate pattern are not reflected, and the rare item problem occurs. To address this issue, the multiple minimum supports model uses the smallest MIS value among the items in a candidate pattern as the minimum support threshold for that candidate, thereby considering its characteristics. To efficiently mine frequent patterns, including rare frequent patterns, with this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to those of the single minimum support model, which order items in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one while demanding more memory for the MIS information; moreover, both compared algorithms show good scalability.
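
A small sketch of the MIS idea described above: assign each item a minimum item support derived from its frequency, and accept a candidate pattern when its support reaches the smallest MIS among its items. The beta/LS formula below follows the common MSApriori convention and is an assumption, since the abstract only says that the MIS is computed from item frequencies; the brute-force enumeration stands in for the tree-based algorithms the paper actually compares.

```python
from collections import Counter
from itertools import combinations

def item_mis(transactions, beta=0.5, least_support=0.01):
    """MIS(item) = max(beta * support(item), LS): one common way to derive per-item
    thresholds from item frequencies (an assumption here, not taken from the paper)."""
    n = len(transactions)
    freq = Counter(item for t in transactions for item in t)
    return {item: max(beta * count / n, least_support) for item, count in freq.items()}

def frequent_patterns(transactions, mis, max_size=2):
    """Brute-force check: a pattern is frequent if support >= min(MIS of its items)."""
    n = len(transactions)
    items = sorted(mis)
    result = {}
    for size in range(1, max_size + 1):
        for pattern in combinations(items, size):
            support = sum(1 for t in transactions if set(pattern) <= set(t)) / n
            if support >= min(mis[i] for i in pattern):
                result[pattern] = support
    return result

# Tiny example: 'caviar' is rare, but the pattern ('bread', 'caviar') is kept
# because the candidate's threshold is the smaller MIS of its two items.
transactions = [{"bread", "milk"}, {"bread", "caviar"}, {"milk"}, {"bread", "milk"}]
print(frequent_patterns(transactions, item_mis(transactions)))
```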

The effects of desensitizing agents, bonding resin and tooth brushing on dentin permeability, in vitro (지각과민 처치제 후 접착레진 처리가 상아질 투과도에 미치는 영향)

  • Hong, Seung-Woo;Park, No-Je;Park, Young-Bum;Lee, Keun-Woo
    • The Journal of Korean Academy of Prosthodontics / v.52 no.3 / pp.165-176 / 2014
  • Purpose: The effects of desensitizing agents often last only a short time, and one of the presumed reasons is wear of the desensitizing agent by tooth brushing. To reduce this wear and prolong the effect, dental bonding resin was applied and the changes in dentin permeability after tooth brushing were measured. Materials and methods: Extracted, caries-free teeth were chosen, and coronal dentin discs 1 mm thick were prepared. Using the split chamber device developed by Pashley, hydraulic conductance and scanning electron microscope (SEM) images were compared before and immediately after the application of desensitizing agent and bonding resin, and then after tooth brushing equivalent to 1 week, 2 weeks, and 6 weeks. Four commercially available desensitizing agents were used in this study: All-Bond 2, Seal & Protect, Gluma, and MS Coat. Dentin/Enamel Bonding resin (Bisco Inc.) was used as the bonding resin. Results: For all specimens, hydraulic conductance decreased after the application of the desensitizing agent and bonding resin. Compared with the specimens treated only with desensitizer, the specimens treated with All-Bond 2, Gluma, or MS Coat plus D/E bonding resin showed only a slight increase in hydraulic conductance after 1, 2, and 6 weeks of tooth brushing; for Seal & Protect, the same result was seen only after 6 weeks of tooth brushing. On SEM examination, the dentinal tubule diameter decreased after treatment with the desensitizing agents and bonding resin, and the specimens treated with All-Bond 2, Seal & Protect, Gluma, or MS Coat plus D/E bonding resin showed a significant decrease in dentinal tubule diameter after 6 weeks of tooth brushing. Conclusion: According to these results, applying bonding resin after the desensitizer is effective in reducing wear from tooth brushing and prolonging the effect. In this study, only 6 weeks of equivalent tooth brushing was performed, which is not enough to be regarded as long-term data, so further study is needed and a more reliable method for treating dentin hypersensitivity should be developed.