• Title/Summary/Keyword: External Project


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies release their internally developed AI technologies to the public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed from various angles, few studies help industry develop or use deep learning open source software. This study therefore attempts to derive a strategy for adopting a deep learning open source framework through case studies. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework comprising technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. From the case study results, we identified five success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers must be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the supply of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of developers, for example, by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming that the deep learning methodology is the right method, and confirming that the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-considerations are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five success factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • Park, Man-Bae
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (O-D) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the O-D survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting O-D Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC (link volume to ground count) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As with the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types: I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
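The gravity model distribution and the SELINK adjustment described in this abstract are both simple enough to sketch in code. Below is a minimal, hypothetical Python sketch of the two computations: a generic singly constrained gravity model with an assumed exponential friction factor curve, and a single-link SELINK adjustment. It illustrates the formulas only and is not the study's calibrated model.

```python
import numpy as np

def gravity_model(productions, attractions, impedance, friction):
    """Singly constrained gravity model: T_ij = P_i * A_j * F_ij / sum_k(A_k * F_ik)."""
    F = friction(impedance)               # friction factor matrix F_ij
    w = attractions[None, :] * F          # A_j * F_ij
    w /= w.sum(axis=1, keepdims=True)     # normalize each origin row
    return productions[:, None] * w       # zone-to-zone trip table

def selink_adjust(link_trips, ground_count, assigned_volume, productions, attractions):
    """One SELINK adjustment for a single selected link.

    link_trips: (origin, destination) pairs whose assigned path uses the link.
    The adjustment factor is the ratio of the observed ground count to the
    total assigned volume; it is applied to the productions of the origin
    zones and the attractions of the destination zones of those trips.
    """
    factor = ground_count / assigned_volume
    for o, d in link_trips:
        productions[o] *= factor
        attractions[d] *= factor
    return factor

# Hypothetical 3-zone example with an assumed exponential friction factor curve.
P = np.array([100.0, 200.0, 150.0])       # zonal productions
A = np.array([120.0, 180.0, 150.0])       # zonal attractions
t = np.array([[1.0, 4.0, 7.0],
              [4.0, 1.0, 3.0],
              [7.0, 3.0, 1.0]])           # interzonal impedance
trips = gravity_model(P, A, t, friction=lambda d: np.exp(-0.3 * d))
factor = selink_adjust([(0, 1), (2, 1)], ground_count=950.0,
                       assigned_volume=1000.0, productions=P, attractions=A)
print(trips.round(1), factor)
```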


The Effect of Synchronous CMC Technology by Task Network: A Perspective of Media Synchronicity Theory (개인의 업무 네트워크 특성에 따른 동시적 CMC의 영향 : 매체 동시성 이론 관점)

  • Kim, Min-Soo;Park, Chul-Woo;Yang, Hee-Dong
    • Asia Pacific Journal of Information Systems / v.18 no.3 / pp.21-43 / 2008
  • A task network formed of different individuals can be regarded as a social network. Therefore, the way its members communicate with people inside or outside the network has considerable influence on their outcomes. Moreover, the position a member occupies in the network changes the effect of the information systems that support communication with others. This paper studies how personal CMC (computer-mediated communication) tools affect the tasks that members perform across diverse task networks, focusing especially on the synchronicity of CMC. To this end, we adopted the perspective of Media Synchronicity Theory, which was proposed as a critique of Media Richness Theory. From this perspective, the objective is to find which characteristics of networks raise the value of IT that supports synchronicity. In social network research, there have been two traditional perspectives for explaining the effect of a network: the embeddedness perspective and the diversity perspective. They differ on which type of social network provides greater economic benefits, and in related studies they correspond to the bonding and bridging views, based on internal and external ties, respectively. Size, density, and centrality were measured as the characteristics of personal task networks. Size reflects the extent of relationships between members: it is the total number of other colleagues who work with a specific member on a given project. The larger the size of the task network, the more coworkers interact with each other through the job. Density is the ratio of the number of relationships actually formed to the total number of possible ones. In an ego-centered network, it is defined as the ratio of the number of relationships actually made to the total number of possible ones between members who are actually involved with each other. The higher the density, the larger the number of projects on which the members collaborate. Centrality indicates how close a member's position is to the exact center of the whole network. There are several methods to measure it; in this research, betweenness centrality was adopted. It is measured by the position a member occupies between others in the network, and it is determined by the shortest geodesics, which represent the shortest paths between members. Centrality also indicates the extent of a member's role as a broker among others. To verify the hypotheses, we interviewed and surveyed a group of employees of a nationwide financial organization in which a groupware system is used. They were questioned about two CMC applications: MSN, with a higher level of synchronicity, and email, with a lower one. As a result, the larger the size of one's task network, the smaller its density, and the higher one's centrality, the greater the effect of using the task network with CMC tools. Above all, this positive effect was found to be much greater when using CMC applications with higher-level synchronicity. Among the variety of situations in which the use of CMC gives more benefits, this research is considered one of the rare cases that treats the characteristics of the task network as moderators by focusing on ITs for operating one's own task network.
It is another contribution of this research to prove empirically that the value of an information system depends on the social, or comparative, characteristics of time: though the same amount of time is shared, the social characteristics of its users change its value. In addition, it is significant to show empirically that ITs with higher-level synchronicity have a positive effect on productivity. Many businesses worry about the negative effects of synchronous ITs because their employees are likely to use them for personal social activities; this research can help to dismiss that concern against CMC tools.
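For readers unfamiliar with the three measures, here is a minimal sketch of how size, ego-network density, and betweenness centrality can be computed with the networkx library on a hypothetical task network; the paper's actual survey data and instrument are not reproduced here.

```python
import networkx as nx

# Hypothetical task network: nodes are employees, edges are project ties.
G = nx.Graph([("ego", "a"), ("ego", "b"), ("ego", "c"), ("ego", "d"),
              ("a", "b"), ("c", "d")])

# Size: the number of colleagues directly tied to the focal member.
size = G.degree("ego")

# Density: actual ties divided by possible ties among the ego's contacts
# (the ego itself is removed, as is conventional for ego-network density).
alters = nx.ego_graph(G, "ego")
alters.remove_node("ego")
density = nx.density(alters)

# Betweenness centrality: the share of shortest paths (geodesics) between
# other pairs of members that pass through the focal member.
betweenness = nx.betweenness_centrality(G)["ego"]

print(size, density, betweenness)  # 4, 0.333..., and the broker score
```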

Progress of Composite Fabrication Technologies with the Use of Machinery

  • Choi, Byung-Keun;Kim, Yun-Hae;Ha, Jin-Cheol;Lee, Jin-Woo;Park, Jun-Mu;Park, Soo-Jeong;Moon, Kyung-Man;Chung, Won-Jee;Kim, Man-Soo
    • International Journal of Ocean System Engineering / v.2 no.3 / pp.185-194 / 2012
  • A macroscopic combination of two or more distinct materials is commonly referred to as a "composite material", designed to be mechanically and chemically superior in function and characteristics to its individual constituent materials. Composite materials are used not only in aerospace and military applications, but also heavily in boat/ship building and the general composite industries, where we see them increasingly often. Regardless of the various applications for composite materials, the industry is still limited and requires better fabrication technology and methodology in order to expand and grow. An example of this is that the majority of nearby fabrication facilities still use an antiquated wet lay-up process in which fabrication requires manual hand labor in a 3D environment, impeding the productivity of composite product design advancement. As an expert in the advanced composites field, I have developed fabrication skills using machinery based on my past composite experience. In autumn 2011, the Korean government confirmed funding for my project: the development of a composite sanding machine. I began development of this semi-robotic prototype in 2009. It has the potential to replace or augment the exhausting and difficult jobs performed by human hands, such as sanding, grinding, blasting, and polishing, most often in very awkward conditions; it will also boost productivity, improve surface quality, cut abrasive costs, eliminate vibration injuries, and protect workers from exposure to dust and airborne contamination. Ease of control and operation of the equipment in or outside of the sanding room is a key benefit to end-users. It will prove to be much more economical than conventional robotics and will minimize errors that commonly occur in factories. The key components and their technologies are a 360-degree rotational shoulder and a wrist controlled by a PLC controller with a joystick manual mode. Development of both key modules is complete and they are now operational. The Korean government fund boosted my development, and I expect to complete full-scale development no later than the 3rd quarter of 2012. Even with the advantages of composite materials, there is still the need to repair or maintain composite products with a higher level of technology. I have learned many composite repair skills on composite airframes, since many composite fabrication skills, including repair, require training even for non-aerospace applications. The wind energy market now requires much larger blades in order to generate more electrical energy for wind farms; a single blade is now commonly 50 meters or longer. When a wind blade is damaged by external forces, on-site repair is required on the column, even under strong wind and freezing temperature conditions. In order to obtain correct polymerization, the repair must be performed on the damaged area within a very limited time. The use of pre-impregnated glass fabric, a heating silicone pad, and a hot bonder providing precise heating control are surely required.

A Look at the Need for a Crafts program of Developmental Disabilities (발달장애의 수예공작 프로그램 필요성에 관한 고찰)

  • Kim, Nam-Soon;Kim, Dong-Hyun;Kim, Hee-Jung
    • The Journal of Korean Society of Community Based Occupational Therapy / v.1 no.1 / pp.79-89 / 2011
  • The number of disabled persons has increased due to industrial accidents and environmental pollution. In particular, developmental disability has a high prevalence rate, between 5% and 10% of all children. Children with a developmental disability can be treated with physical therapy, occupational therapy, psychological therapy, speech therapy, and art therapy. Visual perception, the function of recognizing the external environment through the visual organs, is related to most behaviors of everyday life. But because children with a disability may not develop visual perception sufficiently, they come to have difficulties in carrying out the tasks of daily life. For this reason, it is most important to understand the assessment and treatment of visual perception in pediatric occupational therapy. To improve visual perception, there are many kinds of methods, including sensory integration, training programs for visual perception, and arts and crafts programs. In particular, crafts, the representative activity of making something by hand, can be applied to anyone. As studies of the brain have advanced, it has been shown that handicraft activities can have a good effect on brain function and brain use. Fine motor exercises and more delicate, accurate motions essentially require the help of visual perception. So it can be expected that the repetitive use of hand function in arts and crafts improves brain function when an activity requiring fine motor exercise and more delicate, accurate motion is carried out. This also indicates that the arts and crafts program has a clear treatment value. Although intervention between visual perception development and visual perception disability forms a major part of the field of occupational therapy, there are still few studies. Therefore, this study attempted to review the necessity of applying an arts and crafts program to children with disabilities, as a preliminary study for the validity of the master's thesis.


A Resource-Based Perspective on Three IT Resources and Their Relationships in IT Outsourcing (IT 아웃소싱 성공에 영향을 미치는 3가지 IT 자원들과 그 관계: 자원기반 관점에서)

  • Kim, Chy Heon;Kim, Joon S.;Im, Kun Shin
    • Information Systems Review / v.14 no.3 / pp.53-74 / 2012
  • IT outsourcing (ITO) is the integration, by contract, of the IT resources of two firms: external vendor(s) and a client firm. According to the resource-based view (RBV), three different resources (the ITO vendor's resource, the client firm's resource, and the relationship resource between the two firms) may have an impact on ITO performance. However, there have been few previous studies considering all three IT resources simultaneously. There have also been few empirical studies in the ITO context that test Bharadwaj (2000)'s findings: 1) IT resources can be divided into tangible IT assets and intangible IT capability, and 2) only IT capability has an impact on IT performance. We therefore examined whether, in the ITO context, all three resources have a significant impact on ITO performance. Adopting the findings of previous IT studies, we also divided IT resource into IT asset and IT capability. To achieve this research objective, we analyzed 62 ITO cases of 45 companies listed among Korea's top 100 companies for the recent 3 years, and we analyzed the data with the Partial Least Squares (PLS) method. The results of this research lead to the following conclusions: First, only when partnership is high can ITO vendors' resources influence ITO performance. Second, only the client firm's IT capability, not its IT asset, is directly related to ITO success. Third, a firm's IT capability can increase the partnership. Therefore, we concluded that 1) RBV is a useful theory in the ITO context as well, 2) Bharadwaj (2000)'s suggestion is valid in the ITO context, and 3) the relationship resource is also important in ITO.
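The abstract names the Partial Least Squares method but not the software; as a rough illustration only, the sketch below runs a PLS regression on hypothetical construct scores using scikit-learn's PLSRegression. Note that PLS path modeling as used in IS research (latent constructs with measurement models) is richer than this stand-in.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical construct scores for 62 ITO cases: vendor resource,
# client IT capability, and partnership predicting ITO performance.
X = rng.normal(size=(62, 3))
y = 0.1 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(scale=0.5, size=62)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print(pls.coef_)        # estimated weights linking the three resources to performance
print(pls.score(X, y))  # R^2 of the fitted model
```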


Professionalism raising of the escort which leads an instance analysis (사례분석을 통한 경호 전문성 제고)

  • Yu, Hyung-Chang
    • Korean Security Journal / no.18 / pp.73-99 / 2009
  • Three assassination and threatening cases are introduced as analysis data in this thesis: the shootings of U.S. President Reagan (March 30, 1981) and of President Park Jeong Hee of South Korea (August 15, 1974), and the assassination of Prime Minister Rabin of Israel (November 4, 1995). On March 30, 1981, the criminal, Hinckley, fired live rounds just before President Reagan got into his car to return to the White House after completing an announcement at the Hilton Hotel in Washington. As a result, the President was shot in the chest, and a public information secretary and a guard were wounded. On August 15, 1974, at 10:23, while the 29th August 15 Independence Anniversary was being celebrated at the National Theater in Jangchung-dong, Seoul, the criminal Moon Sekwang fired live rounds. He failed to assassinate President Park Jeong Hee but shot First Lady Yuk Young Soo, who was wounded in the right side of the head and died. On Saturday, November 4, 1995, at 22:00, Prime Minister Rabin had finished a supporting event for the Middle East peace project and was getting into his car when he was killed by the criminal Amir's shooting. These incidents left very important lessons from the aspect of security analysis, and they have frequently been used as material for the education and training of security organizations. In Korea, the Presidential Security Service as well as national security departments have selected them as important models for subjects such as 'Security Analysis', 'Security Practice', and 'Security Methodology'. In the performance of security duty, security skill is the most important matter; moreover, it has a close relationship with politics, society, and culture. The purpose of this study is to analyze and reevaluate these cases, which have been treated as standard models from the aspect of security analysis, beyond their mere introduction. The attempted assassination of President Reagan was evaluated as a positive success example because of the rapid response of the adjacent guards, who evacuated Reagan, the protectee, within 10 seconds after the shots. Compared to the Kennedy assassination of 1963, the guards were evaluated as significantly more specialized. In this study, however, it was possible to find many problems, such as the carelessness of the guard in charge of the external area of the event place, an idle attitude toward a frequently used event place, confusion in wireless communication, the risk of wireless security disclosure, insufficient provision of compulsory record files, insufficient profiling of dangerous persons, and an unsecured hospital and first-aid room.


Beginning of the Meteorological Satellite: The First Meteorological Satellite TIROS (기상위성의 태동: 최초의 기상위성 TIROS)

  • Ahn, Myoung-Hwan
    • Atmosphere / v.22 no.4 / pp.489-497 / 2012
  • A recently released top secret document explicitly shows that the early development plan for an earth observation satellite in the USA had a hidden and more important purpose, a concept of 'free space', beyond its scientific purpose. At that time, this hidden and secret concept embedded within the early space development plan prevailed over other national policies of the USA government for the purpose of national security. Under these circumstances, it is quite reasonable to accept the possibility that the meteorological satellites, which play a key role in every area of meteorology and climatology, were also born for hidden purposes. Even so, it is quite amazing that the first meteorological satellite was launched in the USA despite the facts that the major users of meteorological satellites were not very enthusiastic about them and that the program was not started as a formal meteorological satellite project. This was only possible because of the external socio-political impact caused by the successful launch of the Russian Sputnik satellite and a few key policy developers who favored the meteorological satellite program. It is also interesting to note that the first Korean meteorological satellite program was initiated by a similar socio-political influence, occasioned by the launch of a North Korean satellite.

Sensitivity Analysis for CAS500-4 Atmospheric Correction Using Simulated Images and Suggestion of the Use of Geostationary Satellite-based Atmospheric Parameters (모의영상을 이용한 농림위성 대기보정의 주요 파라미터 민감도 분석 및 타위성 산출물 활용 가능성 제시)

  • Kang, Yoojin;Cho, Dongjin;Han, Daehyeon;Im, Jungho;Lim, Joongbin;Oh, Kum-hui;Kwon, Eonhye
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1029-1042 / 2021
  • As part of the next-generation Compact Advanced Satellite 500 (CAS500) project, CAS500-4 is scheduled to be launched in 2025, focusing on remote sensing of agriculture and forestry. To obtain quantitative information on vegetation from satellite images, it is necessary to acquire surface reflectance through atmospheric correction; thus, it is essential to develop an atmospheric correction method suitable for CAS500-4. Since the absorption and scattering characteristics of the atmosphere vary depending on the wavelength, the sensitivity of atmospheric correction parameters such as aerosol optical depth (AOD) and water vapor (WV) needs to be analyzed considering the wavelengths of CAS500-4. In addition, as CAS500-4 has only five channels (blue, green, red, red edge, and near-infrared), making it difficult to directly calculate the key parameters for atmospheric correction, external parameter data should be used. Therefore, this study performed a sensitivity analysis of the key parameters (AOD, WV, and O3) using simulated images based on Sentinel-2 satellite data, which has similar wavelength specifications to CAS500-4, and examined the possibility of using the products of GEO-KOMPSAT-2A (GK2A) as atmospheric parameters. The sensitivity analysis showed that AOD was the most important parameter, with greater sensitivity in the visible channels than in the near-infrared region. In particular, since an AOD change of 20% causes about a 100% error rate in the blue-channel surface reflectance over forests, a highly reliable AOD is needed to obtain accurate surface reflectance. The atmospherically corrected surface reflectance based on the GK2A AOD and WV was compared with the Sentinel-2 L2A reflectance data through the separability index (SI) of known land cover pixels. The two corrected surface reflectances had similar SI values overall, but the atmospherically corrected surface reflectance based on the GK2A AOD showed a higher SI than the Sentinel-2 L2A reflectance data in the short-wavelength channels. Thus, it is judged that the parameters provided by GK2A can be fully utilized for the atmospheric correction of CAS500-4. These research findings will provide a basis for the atmospheric correction of CAS500-4 in the future.
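The abstract does not spell out its exact separability index formula; a commonly used form divides the absolute difference of the class means by the sum of the class standard deviations. Below is a minimal sketch under that assumption, with hypothetical reflectance samples.

```python
import numpy as np

def separability_index(a, b):
    """Assumed SI form: |mean(a) - mean(b)| / (std(a) + std(b)).

    Higher values mean the two land cover classes are easier to separate
    in the given band; the paper's exact formulation may differ.
    """
    return abs(a.mean() - b.mean()) / (a.std() + b.std())

# Hypothetical blue-band surface reflectance for two known land cover classes.
forest = np.array([0.031, 0.028, 0.035, 0.030, 0.033])
cropland = np.array([0.052, 0.049, 0.055, 0.047, 0.051])
print(separability_index(forest, cropland))
```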

A Study on the Establishment Case of Technical Standard for Electronic Record Information Package (전자문서 정보패키지 구축 사례 연구 - '공인전자문서보관소 전자문서 정보패키지 기술규격 개발 연구'를 중심으로-)

  • Kim, Sung-Kyum
    • The Korean Journal of Archival Studies / no.16 / pp.97-146 / 2007
  • The days when people used paper to create and manage all kinds of documents in the course of their work are gone; today, electronic documents have replaced paper. Unlike paper documents, electronic ones contribute to maximum job efficiency through their convenience in production and storage. But they have some disadvantages as well: it is difficult to distinguish originals from copies, unlike paper documents; it is not easy to verify whether a document has been changed or damaged; they are prone to alteration and damage by external influences in the electronic environment; and electronic documents require enormous amounts of workforce and cost so that immediate measures can be taken as the S/W and H/W environment changes. Despite all those weaknesses, however, electronic documents account for an increasing share of the current job environment thanks to their convenience and production cost efficiency. Both the government and the private sector have made efforts to come up with plans that maximize their advantages and minimize their risks at the same time. One of these is the Authorized Retention Center described in this study. There are a couple of prerequisites for its smooth operation: it should guarantee the legal validity of electronic documents in the administrative aspect, and it must first secure the reliability and authenticity of electronic documents in the technological aspect. Responding to those needs, the Ministry of Commerce, Industry and Energy and the Korea Institute for Electronic Commerce, the two main bodies driving the Authorized Retention Center project, revised the Electronic Commerce Act and supplemented the provisions guaranteeing the legal validity of electronic documents in 2005, and in 2006 conducted research on ways to preserve electronic documents for the long term and secure their reliability, as demanded by the users of the center. In an attempt to fulfill those goals of the Authorized Retention Center, this study researched a technical standard for the center's electronic record information packages and applied the ISO 14721 information package model, the standard for the long-term preservation of digital data. It also suggested a process for producing and managing information packages so that SIP, AIP, and DIP metadata would cover the production, preservation, and user-access stages of electronic documents and could be implemented according to the center's policies. Based on this work, the study introduced the flow among the production and processing steps, application methods, and packages of the technical standard for electronic record information packages at the center, and suggested some issues that should be continuously researched in the field of records management based on the results.
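As a rough illustration of the ISO 14721 (OAIS) package types the abstract describes (SIP for production, AIP for preservation, DIP for user access), here is a minimal, hypothetical Python sketch; the field names are illustrative and are not the center's actual metadata schema.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class InformationPackage:
    content: bytes                                  # the electronic record itself
    metadata: dict = field(default_factory=dict)

@dataclass
class SIP(InformationPackage):                      # Submission: from the producer
    producer: str = ""

@dataclass
class AIP(InformationPackage):                      # Archival: preserved long-term
    fixity_sha256: str = ""                         # integrity/authenticity check
    provenance: list = field(default_factory=list)

@dataclass
class DIP(InformationPackage):                      # Dissemination: delivered to users
    access_rights: str = ""

def ingest(sip: SIP) -> AIP:
    """Sketch of ingest: wrap a SIP as an AIP, recording fixity and source."""
    return AIP(content=sip.content,
               metadata={**sip.metadata, "source": sip.producer},
               fixity_sha256=hashlib.sha256(sip.content).hexdigest(),
               provenance=["ingested from SIP"])

aip = ingest(SIP(content=b"<contract.xml ...>", producer="client-firm"))
print(aip.fixity_sha256[:12])
```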