• Title/Summary/Keyword: Soft Real-Time


Failure Probability of Nonlinear SDOF System Subject to Scaled and Spectrum Matched Input Ground Motion Models (배율조정 및 스펙트럼 맞춤 입력지반운동 모델에 대한 비선형 단자유도 시스템의 파손확률)

  • Kim, Dong-Seok;Koh, Hyun-Moo;Choi, Chang-Yeol;Park, Won-Suk
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.12 no.1
    • /
    • pp.11-20
    • /
    • 2008
  • In the probabilistic seismic analysis of a nonlinear structural system, dynamic analysis is performed to obtain the distribution of the response estimate using input ground motion time histories that correspond to a given seismic hazard level. This study investigates how the distribution of the responses and the failure probability differ according to the input ground motion model. Two types of input ground motion models are considered: real earthquake records scaled to a specified intensity level, and artificial input ground motions fitted to a design response spectrum. Simulation results for a nonlinear SDOF system demonstrate that the spectrum-matched input ground motions produce a larger failure probability than the scaled input ground motions because of biased responses. This tendency is more pronounced at sites with soft soil conditions. The analysis shows that this difference in failure probability stems from the conservative estimation of the design response spectrum in the long-period range of the ground motion.
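
A minimal sketch of the kind of analysis the abstract describes, assuming an elastoplastic SDOF oscillator and a Monte Carlo estimate of failure probability over an ensemble of input motions; the integrator, parameters, and synthetic "records" below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def sdof_elastoplastic(ag, dt, omega=2*np.pi, zeta=0.05, fy=1.5):
    """Peak |displacement| of a unit-mass elastoplastic SDOF oscillator
    under ground acceleration ag [m/s^2], via semi-implicit Euler stepping
    (small dt assumed; a production code would use Newmark-beta)."""
    c, k = 2*zeta*omega, omega**2
    u = v = up = 0.0                       # displacement, velocity, plastic offset
    peak = 0.0
    for a in ag:
        fs = np.clip(k*(u - up), -fy, fy)  # elastic-perfectly-plastic force
        up = u - fs/k                      # return-mapping plastic update
        acc = -a - c*v - fs                # equation of motion (m = 1)
        v += acc*dt
        u += v*dt
        peak = max(peak, abs(u))
    return peak

def failure_probability(records, dt, u_limit):
    """Monte Carlo estimate: fraction of motions whose peak exceeds u_limit."""
    peaks = np.array([sdof_elastoplastic(ag, dt) for ag in records])
    return float(np.mean(peaks > u_limit))

# Synthetic white-noise 'motions' stand in for scaled or spectrum-matched records.
rng = np.random.default_rng(0)
records = rng.normal(0.0, 1.0, size=(100, 2000))   # 100 motions, dt = 0.01 s
print(failure_probability(records, dt=0.01, u_limit=0.05))
```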

A Case Study on Implementation of Mobile Information Security (모바일 정보보안을 위한 실시간 모바일 기기 제어 및 관리 시스템 설계.구현 사례연구)

  • Kang, Yong-Sik;Kwon, Sun-Dong;Lee, Kang-Hyun
    • Information Systems Review
    • /
    • v.15 no.2
    • /
    • pp.1-19
    • /
    • 2013
  • Smart working, sparked by the iPhone 3, opened a revolution in working at any time, regardless of location and environment. It also enables real-time information processing and analysis, rapid decision-making, and opportunities to increase business productivity and efficiency through timely responses. As a result, many companies are developing mobile information systems. But when company data is accessed from outside, problems such as security, hacking, and information leakage must be solved. Moreover, privately owned mobile devices such as smartphones cannot always be controlled to achieve company security policy. Company-owned smartphones have always had the security policy applied, but the same policy cannot simply be imposed on privately owned smartphones. This paper therefore focuses on achieving company security policy while preserving the individual's free use of the smartphone when mobile information systems are in use. When the smartphone is used for personal purposes, all of its functions operate normally; when it is used for company purposes such as mobile information systems, functions such as screen capture, Wi-Fi, and the camera are blocked to protect company data. In this study, we present the design and implementation of real-time control and management of mobile devices using an MDM (Mobile Device Management) solution. As a result, we can achieve company security policy while still allowing individual use of the smartphone, which is the optimal solution in the BYOD (Bring Your Own Device) era.
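
The control logic the abstract describes (block screen capture, Wi-Fi, and camera in company mode; restore everything in personal mode) reduces to a mode-driven policy table. The sketch below is purely illustrative; real MDM enforcement is platform-specific, and every name here (DevicePolicy, apply_mode, ...) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DevicePolicy:
    screen_capture: bool = True
    wifi: bool = True
    camera: bool = True

POLICIES = {
    "personal": DevicePolicy(True, True, True),      # all functions operate normally
    "company":  DevicePolicy(False, False, False),   # BYOD lock-down to protect data
}

def apply_mode(mode: str) -> DevicePolicy:
    """Return the feature policy an MDM agent would enforce for a mode."""
    return POLICIES[mode]

print(apply_mode("company"))   # DevicePolicy(screen_capture=False, wifi=False, camera=False)
```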


Real-Time Hierarchical Techniques for Rendering of Translucent Materials and Screen-Space Interpolation (반투명 재질의 렌더링과 화면 보간을 위한 실시간 계층화 알고리즘)

  • Ki, Hyun-Woo;Oh, Kyoung-Su
    • Journal of Korea Game Society
    • /
    • v.7 no.1
    • /
    • pp.31-42
    • /
    • 2007
  • In the natural world, most materials, such as skin, marble, and cloth, are translucent; their appearance is smooth and soft compared with metals or mirrors. In this paper, we propose a new GPU-based hierarchical rendering technique for translucent materials, based on the dipole diffusion approximation, that runs at interactive rates. Information about the incident light on the surfaces (position, normal, and irradiance) is stored into 2D textures by rendering from the primary light's view. Huge numbers of pixel photons are clustered into quad-tree image pyramids. For each pixel, we select clusters (sets of photons) and approximate the multiple subsurface scattering term with those clusters. We also introduce a novel hierarchical screen-space interpolation technique that exploits spatial coherence with early-z culling on the GPU. We build image pyramids of the screen using mipmaps and a pixel shader; each pixel of the pyramids stores the position, normal, and spatial similarity of its children pixels. If a pixel's similarity is high, we render that pixel and interpolate it across multiple pixels. Result images show that our method can interactively render deformable translucent objects by approximating hundreds of thousands of photons with only hundreds of clusters, without any preprocessing. We use an image-space approach for the entire process on the GPU, so our method is less dependent on scene complexity.
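
The multiple-scattering term the abstract refers to is the dipole diffusion approximation of Jensen et al. (2001); a small sketch of its diffuse reflectance profile R_d(r) follows. The material coefficients in the example are assumed values, not taken from this paper.

```python
import numpy as np

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance R_d(r) of the dipole diffusion approximation."""
    sigma_t_prime = sigma_a + sigma_s_prime
    alpha_prime = sigma_s_prime / sigma_t_prime
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)    # effective transport coefficient
    Fdr = -1.440/eta**2 + 0.710/eta + 0.668 + 0.0636*eta # diffuse Fresnel reflectance fit
    A = (1.0 + Fdr) / (1.0 - Fdr)                        # boundary mismatch term
    zr = 1.0 / sigma_t_prime                             # real source depth
    zv = zr * (1.0 + 4.0/3.0 * A)                        # virtual (mirrored) source depth
    dr = np.sqrt(r*r + zr*zr)
    dv = np.sqrt(r*r + zv*zv)
    def term(z, d):
        return z * (sigma_tr*d + 1.0) * np.exp(-sigma_tr*d) / d**3
    return alpha_prime / (4.0*np.pi) * (term(zr, dr) + term(zv, dv))

# Example with assumed marble-like coefficients in mm^-1:
print(dipole_Rd(np.array([0.5, 1.0, 2.0]), sigma_a=0.0021, sigma_s_prime=2.19))
```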


Analysis of cell survival genes in human gingival fibroblasts after sequential release of trichloroacetic acid and epidermal growth factor using the nano-controlled release system (나노방출제어시스템을 이용하여 trichloroacetic acid와 epidermal growth factor의 순차적 방출을 적용한 인간치은섬유아세포의 세포생존 관련 유전자 연구분석)

  • Cho, Joon Youn;Lee, Richard sungbok;Lee, Suk Won
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.36 no.3
    • /
    • pp.145-157
    • /
    • 2020
  • Purpose: This study aimed to determine the possible effects of trichloroacetic acid (TCA) and epidermal growth factor (EGF) on cell survival genes of the PI3K-AKT signaling pathway when applying a hydrophobically modified glycol chitosan (HGC)-based nano-controlled release system to human gingival fibroblasts for oral soft tissue regeneration. Materials and Methods: An HGC-based nano-controlled release system was produced, followed by the loading of TCA and EGF. The groups were a control (CON), a TCA-loaded nano-controlled release system (EXP1), and a nano-controlled release system individually loaded with TCA and EGF (EXP2). A total of 29 genes related to the PI3K-AKT signaling pathway were analyzed after 48 h of culture in human gingival fibroblasts. Real-time PCR, one-way ANOVA, and multiple regression analysis were performed. Results: Cell survival genes were significantly upregulated in EXP1 and EXP2. From the multiple regression analysis, ITGB1 was determined to be the most influential factor for AKT1 expression. Conclusion: The application of TCA and EGF through the HGC-based nano-controlled release system can upregulate the cell survival pathway.
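
As a rough illustration of the statistical step named in the abstract, the sketch below runs a one-way ANOVA across the three groups for one gene's relative expression; the numbers are invented placeholders, not the study's data.

```python
from scipy import stats

con  = [1.00, 0.95, 1.04]   # relative expression, control (CON)
exp1 = [1.62, 1.71, 1.55]   # TCA-loaded system (EXP1)
exp2 = [2.10, 2.25, 1.98]   # TCA- and EGF-loaded system (EXP2)

f, p = stats.f_oneway(con, exp1, exp2)   # one-way ANOVA across the groups
print(f"F = {f:.2f}, p = {p:.4f}")       # p < 0.05 -> significant upregulation
```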

Performance Analysis of MAP Algorithm by Robust Equalization Techniques in Nongaussian Noise Channel (비가우시안 잡음 채널에서 Robust 등화기법을 이용한 터보 부호의 MAP 알고리즘 성능분석)

  • 소성열
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.9A
    • /
    • pp.1290-1298
    • /
    • 2000
  • The Turbo Code decoder is an iterative decoding technique that extracts extrinsic information about the bit to be decoded by calculating both forward and backward metrics, and passes that information to the next decoding step. Turbo Codes show excellent BER performance, approaching the Shannon limit, when the interleaver is large and enough decoding iterations are run. However, the interleaver and the iterative decoding bring increased complexity and delay, making real-time processing difficult. In this paper, we analyze the MAP (maximum a posteriori) algorithm, one of the Turbo Code decoding methods, and the factors that determine its performance. The MAP algorithm performs iterative decoding by determining soft decision values from the transition probabilities between all adjacent bits and the received symbols. Therefore, to improve the performance of the MAP algorithm, the reliability of adjacent received symbols must be ensured. The MAP algorithm by itself cannot ensure this, so an additional algorithm is needed to reduce the number of decoding iterations. Consequently, we analyze the performance of the MAP algorithm and the Turbo Code in a non-Gaussian channel, applying a robust equalization technique in order to feed more reliable information about the received symbols into the MAP algorithm.
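
To make the forward/backward-metric idea concrete, here is a stripped-down forward-backward recursion that produces soft decision values for noisy BPSK symbols under a simple Markov prior. It illustrates the metric recursions at the heart of MAP/BCJR decoding; it is not the paper's Turbo decoder.

```python
import numpy as np

def map_soft_decisions(y, sigma=1.0, p_stay=0.9):
    """Per-bit posteriors P(b_k = 1 | y) via forward-backward recursion."""
    # Channel likelihoods for b in {0 -> -1, 1 -> +1} under Gaussian noise.
    lik = np.stack([np.exp(-(y + 1)**2 / (2*sigma**2)),
                    np.exp(-(y - 1)**2 / (2*sigma**2))], axis=1)
    T = np.array([[p_stay, 1-p_stay], [1-p_stay, p_stay]])  # Markov transitions
    n = len(y)
    alpha = np.zeros((n, 2)); beta = np.zeros((n, 2))
    alpha[0] = 0.5 * lik[0]; alpha[0] /= alpha[0].sum()
    for k in range(1, n):                       # forward metrics
        alpha[k] = lik[k] * (alpha[k-1] @ T)
        alpha[k] /= alpha[k].sum()              # normalize for numerical stability
    beta[-1] = 1.0
    for k in range(n-2, -1, -1):                # backward metrics
        beta[k] = T @ (lik[k+1] * beta[k+1])
        beta[k] /= beta[k].sum()
    post = alpha * beta
    return post[:, 1] / post.sum(axis=1)        # soft decision values

y = np.array([0.9, 1.1, -0.2, -1.3, -0.8])      # noisy received symbols
print(np.round(map_soft_decisions(y), 3))
```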


A TBM data-based ground prediction using deep neural network (심층 신경망을 이용한 TBM 데이터 기반의 굴착 지반 예측 연구)

  • Kim, Tae-Hwan;Kwak, No-Sang;Kim, Taek Kon;Jung, Sabum;Ko, Tae Young
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.23 no.1
    • /
    • pp.13-24
    • /
    • 2021
  • Tunnel boring machines (TBMs) are widely used for tunnel excavation in hard rock and soft ground. From the perspective of TBM-based tunneling, one of the main challenges is to drive the machine optimally according to varying geological conditions, which can save substantial cost by reducing the total operation time. Generally, drilling investigations are conducted to survey the geological ground before TBM tunneling. However, it is difficult to provide operators with precise ground information over the whole tunnel path, because such investigations acquire only sparse and irregular samples around the path. To overcome this issue, in this study we propose a geological type classification system using TBM operating data recorded at a 5 s sampling rate. We first categorized the various geological conditions (here limited to granite) into three geological types (rock, soil, and mixed). We then applied preprocessing methods including outlier rejection, normalization, and input feature extraction. We adopted a deep neural network (DNN) with 6 hidden layers to classify the geological types based on the TBM operating data, and evaluated the classification system using 10-fold cross-validation. The average classification accuracy was 75.4% (with a total of 388,639 data samples). The accuracy still needs improvement, but our experimental results show that a geology classification technique based on TBM operating data could be used in the real environment to complement sparse ground information.
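
The pipeline shape described above (normalization, a 6-hidden-layer DNN, 10-fold cross-validation) maps directly onto standard tooling; a hedged scikit-learn sketch follows, in which the layer widths, feature count, and data are placeholder assumptions, with X and y standing in for the preprocessed TBM operating data and the three geological labels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))      # placeholder: 12 TBM operating features
y = rng.integers(0, 3, size=1000)    # placeholder: rock / soil / mixed labels

clf = make_pipeline(
    StandardScaler(),                # normalization step
    MLPClassifier(hidden_layer_sizes=(64,)*6,   # 6 hidden layers (widths assumed)
                  max_iter=500, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=10)      # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```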

Appropriate Smart Factory : Demonstration of Applicability to Industrial Safety (적정 스마트공장: 산업안전 기술로의 적용 가능성 실증)

  • Kwon, Kui-Kam;Jeong, Woo-Kyun;Kim, Hyungjung;Quan, Ying-Jun;Kim, Younggyun;Lee, Hyunsu;Park, Suyoung;Park, Sae-Jin;Hong, SungJin;Yun, Won-Jae;Jung, Guyeop;Lee, Gyu Wha;Ahn, Sung-Hoon
    • Journal of Appropriate Technology
    • /
    • v.7 no.2
    • /
    • pp.196-205
    • /
    • 2021
  • As the importance of industrial safety increases, various industrial accident prevention technologies using smart factory technology are being studied. However, small and medium enterprises (SMEs), which account for the majority of industrial accidents, have difficulty preventing industrial accidents with these smart factory technologies because of practical problems. In this study, customized monitoring and warning systems for each type of industrial accident were developed and applied in the field. Through this, we demonstrated industrial accident prevention through appropriate smart factory technology usable by SMEs. A customized monitoring system using vision, current, temperature, and gas sensors was established for the four major disaster types: worker body access, short circuit and overcurrent, fire and burns due to high temperature, and emission of hazardous gas. In addition, a notification method suitable for each work environment was applied so that the monitored risk factors could be recognized quickly, and real-time data transmission and display enabled workers and managers to understand the disaster risk effectively. Through the application and demonstration of these appropriate smart factory technologies, we discuss how such industrial safety technologies can spread.
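
The monitoring rules described above can be sketched as a per-sensor threshold check; the field names and limits below are hypothetical placeholders, not the deployed system's values.

```python
# One threshold rule per disaster type: (comparison, limit).
THRESHOLDS = {
    "worker_distance_m": ("lt", 0.5),   # worker body too close to machinery
    "current_a":         ("gt", 30.0),  # short circuit / overcurrent
    "temperature_c":     ("gt", 80.0),  # fire and burn risk
    "gas_ppm":           ("gt", 50.0),  # hazardous gas emission
}

def check_reading(reading: dict) -> list:
    """Return the list of triggered alarms for one sensor reading."""
    alarms = []
    for key, (op, limit) in THRESHOLDS.items():
        v = reading.get(key)
        if v is None:
            continue
        if (op == "gt" and v > limit) or (op == "lt" and v < limit):
            alarms.append(f"ALERT {key}={v} (limit {limit})")
    return alarms

print(check_reading({"current_a": 42.0, "temperature_c": 25.0}))
```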

Market Structure Analysis of Automobile Market in U.S.A (미국자동차시장의 구조분석)

  • Choi, In-Hye;Lee, Seo-Goo;Yi, Seong-Keun
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.1
    • /
    • pp.141-156
    • /
    • 2008
  • Market structure analysis is a very useful tool for analyzing the competitive boundaries of a brand or company. However, most studies of market structure analysis concern nondurable goods such as candies and soft drinks, because of data availability. In the field of durable goods, limited data availability and the long repurchase cycle constrain such studies, and in the automobile market these constraints are even more pronounced. The purpose of this study is to analyze the structure of the automobile market based on ideas suggested by prior studies. Automobile buyers tend to buy in an upper tier at their next purchase, which makes it impossible to analyze the structure of the automobile market at the level of individual models; for that reason we analyzed the market structure at the brand (company) level. In this study, consideration-set data were used for the market structure analysis, for two reasons. First, because the repurchase cycle is so long, the brand-switching data used for market analysis of nondurable goods are not available. Second, as mentioned, automobile buyers tend to buy in an upper tier at their next purchase. We used survey data collected in the U.S. market in 2005 through a questionnaire; the sample size was 8,291. The number of brands analyzed in this study was 9 of the 37 being sold in the U.S. market, with a combined market share of around 50%. The brands considered were BMW, Chevrolet, Chrysler, Dodge, Ford, Honda, Mercedes, and Toyota. The ratio used for the analysis was derived from the frequencies of the consideration sets. Strictly, consideration frequency differs from the brand-switching concept; in this study, to compute the ratio, the frequency of the consideration set was used in place of a brand-switching frequency for convenience. The study can be divided into two steps: the first is to build hypothetical market structures, and the second is to choose the best structure among them, usually by logit analysis. We built three hypothetical market structures: type-cost, cost-type, and unstructured. We classified the automobiles into five types: sedan, SUV (Sport Utility Vehicle), pickup, minivan, and full-size van. As for purchasing cost, we classified it into two groups based on the median value, which was $28,800. To decide the best structure among them, a maximum likelihood test was used. The market structure analysis shows that the U.S. automobile market is hierarchically structured in the form 'automobile type - purchasing cost'; that is, automobile buyers consider function or usage first and purchasing cost next. This study has some limitations in its analysis level and variable selection. First, only the type of automobile and purchasing cost were considered as purchase attributes; considering other attributes is needed, and because of the attributes chosen, only three hypothetical structures could be analyzed. Second, due to the data, the analysis was carried out at the brand level, but a model-level analysis would be better because automobile buyers consider models, not brands, and a model-level study would require more cases; that is, to obtain greater practical meaning, a model-level analysis should be conducted to capture the actual competition occurring in the real market. Third, the variable selection for building the nested logit model was limited to the available data. In spite of these limitations, the importance of this study lies in its attempt at market structure analysis of a durable good.
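
The structure-selection step described above (build hypothetical structures, then pick the one with the maximum likelihood) can be illustrated schematically; in the sketch below, the observed counts and per-structure choice probabilities are invented placeholders, not the study's estimates.

```python
import numpy as np

observed = np.array([310, 870, 420, 380, 950, 990, 260, 1100, 820])  # brand choice counts

predicted = {   # hypothetical choice probabilities implied by each structure
    "type-cost":    np.array([.05, .14, .07, .06, .15, .16, .04, .19, .14]),
    "cost-type":    np.array([.06, .12, .08, .07, .14, .15, .05, .18, .15]),
    "unstructured": np.array([1/9]*9),
}

def log_likelihood(n, p):
    """Multinomial log-likelihood of counts n under probabilities p."""
    return float(np.sum(n * np.log(p)))

best = max(predicted, key=lambda s: log_likelihood(observed, predicted[s]))
for s, p in predicted.items():
    print(f"{s:13s} LL = {log_likelihood(observed, p):.1f}")
print("selected structure:", best)
```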


A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.121-145
    • /
    • 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem: the amount of computation required to find a solution increases exponentially with the problem size. For many years there have been many studies on university timetabling, driven by the need for automatic timetable generation for students' convenience and effective lessons, and for the effective allocation of subjects, lecturers, and classrooms. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, the course timetable for liberal arts is scheduled by the office of academic affairs, and the course timetable for major subjects is scheduled by each department of a university. We found several problems in the analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine and repetitive timetabling work manually. Second, many classes are concentrated into a few time slots in a timetable, which decreases the effectiveness of students' classes. Third, several major subjects may overlap required liberal-arts subjects in the same time slots, in which case students must choose only one of the overlapping subjects. Fourth, many subjects are taught by the same lecturers every year, and most lecturers prefer the same time slots they had the previous year, so it helps if departments can reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a two-phase university timetabling support system. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning. In the second phase, the department schedules the timetable through an interactive user interface under the timetabling criteria, based on a rule-based approach. This study provides illustrations from Hanshin University. We classified the timetabling criteria into intrinsic and extrinsic criteria. The three intrinsic criteria, related to lecturer, class, and classroom, are all hard constraints. The four extrinsic criteria are 'the number of lesson hours' per lecturer, 'prohibition of lecture allocation at specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' 'The number of lesson hours' per lecturer comprises three criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are also all hard constraints, except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we propose two indices: one measures the similarity between the subjects of the current semester and the subjects of previous timetables, and the other evaluates the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, the subject name and its lecturer, between the current semester and a previous semester. The distribution degree index, based on information entropy, indicates how subjects are distributed across the timetable. To show the viability of this study, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity of the most similar cases across all departments was estimated at 41.72%, which means a timetable template generated from the most similar case will be helpful. A sensitivity analysis shows that the distribution degree increases if we set 'the number of subjects in the same day-hour' to more than 90%.
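
As an illustration of the two proposed indices, the sketch below computes a (name, lecturer) similarity between a current semester and a previous case, and an entropy-based distribution degree over day-hour slots; the exact definitions in the paper may differ, so treat these formulas as assumptions.

```python
import math
from collections import Counter

def similarity(current, previous):
    """Fraction of current (subject name, lecturer) pairs found in a previous case."""
    prev = set(previous)
    return sum(1 for s in current if s in prev) / len(current)

def distribution_degree(slot_of_subject):
    """Normalized Shannon entropy of subject counts per day-hour slot (0..1)."""
    counts = Counter(slot_of_subject.values())
    total = sum(counts.values())
    h = -sum(c/total * math.log(c/total) for c in counts.values())
    return h / math.log(len(counts)) if len(counts) > 1 else 0.0

cur  = [("DB", "Kim"), ("AI", "Lee"), ("OS", "Park")]
prev = [("DB", "Kim"), ("AI", "Choi"), ("OS", "Park")]
print(similarity(cur, prev))                       # 2 of 3 pairs match -> 0.667
print(distribution_degree({"DB": "Mon-1", "AI": "Mon-1", "OS": "Tue-3"}))
```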

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on speed-ups to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with various services, such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high data speed, reduced latency and high reliability are critical for real-time services. Thus, 5G targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) applications such as traffic control, reduced delay and high reliability for real-time services are very important, in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can carry high data rates thanks to their straightness, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay. Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. Thus, SDNs need to be split at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. The structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even in the worst-case condition. In such SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor, because the link is fast enough to keep it below 1 ms, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information should be transmitted and processed very quickly; that is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, as the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50-250 m, and vehicle speeds of 30-200 km/h, in order to examine the network architecture that minimizes the delay.
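
A back-of-the-envelope sketch of the simulation setup: the dwell time of a vehicle in a small cell, over the cell radii and speeds stated above, bounds how often cell-level information must be refreshed. The delay components in the example are assumed values, not measured results.

```python
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time for a vehicle to cross a cell diameter at constant speed."""
    return 2 * cell_radius_m / (speed_kmh / 3.6)

RTD_MS = 1.0            # round-trip delay, below 1 ms per the abstract
SDN_PROC_MS = 5.0       # assumed SDN data processing time
INFO_CYCLE_MS = 100.0   # assumed information change cycle

for radius in (50, 250):            # cell radii from the simulation setup [m]
    for speed in (30, 200):         # vehicle speeds from the simulation setup [km/h]
        dwell = dwell_time_s(radius, speed)
        total_delay_ms = RTD_MS + SDN_PROC_MS + INFO_CYCLE_MS
        print(f"r={radius} m, v={speed} km/h: dwell {dwell:6.1f} s, "
              f"end-to-end info delay {total_delay_ms:.0f} ms")
```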