This study investigates Marseille's Euroméditerranée project and draws policy implications for revitalizing domestic urban regeneration projects. First, we identify Euroméditerranée as a pivotal urban regeneration effort, executed by EPAEM, an organization fostering governance-driven project advancement through collaboration and investment from both central and local governments. This endeavor has significantly contributed to revitalizing Marseille and enriching the quality of life of its residents. Second, the project has the following notable features: a consolidated approach combining full redevelopment with rehabilitation; integrated regeneration covering both physical ("hardware") regeneration and economic, cultural, and environmental ("software") regeneration; and a government-led project structure. Finally, we suggest that policymakers consider the economic scale of urban regeneration projects, national-level government organizations, and efficient public-private partnerships.
The customer satisfaction of a WAP service depends heavily on its usability, owing to the limited display size of a mobile phone and the constraints on realizing the UI (user interface) for function keys, the browser, and the OS (operating system). Currently, many content providers develop and deliver varying services, so it is critical to control the quality level of the UI with consistent standards and in a consistent manner. This study proposes a usability index evaluation system to achieve consistent UI quality control across various WAP services. The system adopts both top-down and bottom-up approaches. The former derives UI design components and evaluation checklists for the WAP based on usability attributes and UI principles. The latter derives usability-related evaluation checklists from established UI design features and then groups them from the viewpoint of usability principles and attributes. This bidirectional approach has two outstanding advantages: it allows thorough examination of potential elements that can cause usability problems from the standpoint of usability attributes, and it derives specific evaluation elements from the perspective of UI design components relevant to the real service environment. The evaluation system forms a hierarchical structure by networking usability attributes, UI guidelines expressing the usability principles for each attribute, and a usability evaluation checklist for each UI component that enables concrete evaluation. In particular, each evaluation checklist has concrete content and a format that can readily be marked as O/X (pass/fail). The score is the ratio of the number of items receiving a positive answer to the total number of items, which enables quantitative evaluation of the usability of a mobile WAP service. The validity of the proposed evaluation system was demonstrated through comparative analysis against real usability problems identified in a user test.
A software tool was developed that provides guidelines on evaluation targets, criteria, and examples for each checklist, and automatically calculates the score. The tool was applied to evaluating and improving a real mobile WAP service.
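The O/X scoring rule described above (the ratio of positively answered checklist items to total items) can be sketched as follows; the checklist items shown are hypothetical illustrations, not the paper's actual evaluation elements.

```python
# Minimal sketch of the O/X checklist scoring described above.
# The checklist item names are hypothetical examples.
def usability_score(results):
    """results: dict mapping checklist item -> True (O) or False (X).
    Returns the ratio of positive answers as a percentage."""
    if not results:
        raise ValueError("empty checklist")
    positive = sum(1 for passed in results.values() if passed)
    return 100.0 * positive / len(results)

checklist = {
    "menu depth is three levels or fewer": True,
    "softkey labels are visible on screen": True,
    "back key returns to the previous page": False,
    "body text readable without horizontal scrolling": True,
}
print(usability_score(checklist))  # 3 of 4 items pass -> 75.0
```

Aggregating such per-component scores up the hierarchy of UI components, principles, and attributes would yield the quantitative usability index the abstract describes.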
Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.909-928 / 2020
From the early 1970s, the US government began to recognize that penetration testing could not assure the security quality of products. The results of penetration testing, such as identified vulnerabilities and faults, can vary with the capabilities of the team. In other words, no penetration team can assure that "no vulnerabilities were found" means "the product has no vulnerabilities." The US government therefore realized that, to improve the security quality of products, the development process itself must be managed systematically and strictly. Accordingly, from the 1980s the US government began to publish various standards on development methodology and an evaluation procurement system embedding the "security-by-design" concept. Security by design means reducing a product's complexity by considering security from the initial phases of the development lifecycle, such as requirements analysis and design, to ultimately achieve product trustworthiness. Since 2002, the concept has spread to the private sector under the name Secure SDLC through Microsoft and IBM, and it is currently used in fields as diverse as automotive and advanced weapon systems. The problem, however, is that the standards and guidelines related to Secure SDLC contain only abstract and declarative content, so they are not easy to implement in the field. Therefore, in this paper we present a new framework for specifying the level of Secure SDLC an enterprise desires. Our proposed CIA (functional Correctness, safety Integrity, security Assurance) level-based security-by-design framework combines an evidence-based security approach with the existing Secure SDLC. Using our methodology, we can first quantitatively show the gap in Secure SDLC process level between a company and its competitor.
Second, it is very useful when building a Secure SDLC in the field, because the detailed activities and documents needed to reach the desired level of Secure SDLC can easily be derived.
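The gap analysis mentioned above can be illustrated with a minimal sketch; the activity names and the numeric level scale are hypothetical stand-ins, since the actual CIA framework defines its own activities and levels.

```python
# Hedged sketch of quantifying a Secure SDLC process-level gap.
# Activities and level values are hypothetical illustrations only.
def level_gap(company, competitor):
    """Return per-activity gap (competitor level - company level)."""
    return {a: competitor[a] - company[a]
            for a in company if a in competitor}

company    = {"threat modeling": 1, "static analysis": 2, "pen testing": 3}
competitor = {"threat modeling": 3, "static analysis": 3, "pen testing": 3}
gaps = level_gap(company, competitor)
print(gaps)  # {'threat modeling': 2, 'static analysis': 1, 'pen testing': 0}
```

Positive gaps point to the activities where evidence and process maturity would need to be built up to reach the competitor's level.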
Information system failures vary widely, from simple software errors to unpreparedness in program testing and in the physical facilities meant to prevent damage; even a trivial fault can cause vast damage. Recently it has become difficult to solve this problem with simple external security systems alone, and enterprises now need comprehensive countermeasures and appropriate responses to risks across their information technology as a whole. Accordingly, this paper studies the characteristics required to model a comprehensive Risk Management System that controls the various information, systems, and data arising inside and outside the organization, considers the integrity, availability, and confidentiality of IT resources, and can respond to risks.
Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.5 / pp.50-60 / 2010
For the last half century, the personal computer and software industries have prospered thanks to the incessant evolution of computer systems. In the 21st century, the embedded system market has grown greatly as the market shifted toward mobile gadgets. While multimedia gadgets such as mobile phones, navigation systems, and PMPs are pouring into the market, most industrial control systems still rely on 8-bit microcontrollers and simple application software techniques. Unfortunately, the technological barrier, which requires additional investment and higher-quality manpower to overcome, and the business risks arising from the uncertainty of market growth and the competitiveness of the resulting products, have prevented companies in the industry from taking advantage of such advanced technologies. However, high-performance, low-power, and low-cost hardware and software platforms will enable their high-technology products to be developed and recognized by potential clients in the future. This paper presents such a platform for industrial embedded systems. The platform is based on the Telechips TCC8300 multimedia processor, which embeds a variety of parallel hardware for implementing multimedia functions, and it uses open-source Embedded Linux, TinyX, and GTK+ to implement the GUI while minimizing technology costs. To estimate the expected performance and power consumption, the performance improvement and the power consumption due to each enabled hardware sub-system, including the YUV2RGB frame converter, were measured. An analytic model was devised to check the feasibility of a new application and to trade off its performance and power consumption. The validity of the model was confirmed by implementing a real target system. The cost can be further mitigated by reusing hardware parts already in mass production, mostly for the cell-phone market.
In recent years, the popularization of mobile devices such as smartphones, PDAs, and MP3 players has rapidly increased the need for power management technology, an essential factor in mobile devices. Meanwhile, the hard disk offers large capacity and high speed at low cost, and today it can be made small enough to suit mobile devices; however, it consumes too much power to embed in them. Motivated by this, this paper proposes and evaluates methods for minimizing power consumption while playing multimedia data from disk media in real time. The strict limits on the power consumption of mobile devices strongly influence both hardware and software design. One difference between real-time multimedia streaming data and legacy text-based data is the requirement for continuity of data supply. This is why the disk drive must remain in the active state for the entire playback duration, which is a great burden from the power management point of view. A legacy power management function of a mobile disk drive degrades the quality of multimedia playback because of the excessive I/O requests issued while the disk is in the standby state. Therefore, in this paper, we analyze the power consumption profile of the disk drive in detail and develop an algorithm that plays multimedia data effectively using less power. The algorithm calculates the number of data blocks to be read and the durations of the active and standby states, and from these it produces an optimal schedule that ensures continual playback of the data blocks stored on the mobile disk drive. We implemented the algorithm in publicly available MPEG player software. This MPEG player saves up to 60% of power consumption compared with a disk drive kept active full-time, and 38% compared with a disk drive controlled by the native power management method.
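The scheduling idea described above, reading a burst of blocks into a buffer and spinning the disk down while the buffer drains, can be sketched as follows. All parameters (buffer size, bitrates, block size) are hypothetical values chosen for illustration, not figures from the paper.

```python
# Hedged sketch of burst-read scheduling for disk power management:
# fill a playback buffer in a short active burst, then let the disk
# stand by while the buffer sustains playback. Parameters are
# hypothetical illustrations.
def burst_schedule(buffer_bytes, playback_bps, disk_bps, block_bytes):
    """Return (blocks per burst, active seconds, standby seconds)."""
    blocks_per_burst = buffer_bytes // block_bytes
    active_s = buffer_bytes / disk_bps                   # time to fill buffer
    drain_s = buffer_bytes / playback_bps                # time buffer lasts
    standby_s = drain_s - active_s                       # disk may sleep
    return blocks_per_burst, active_s, standby_s

blocks, active, standby = burst_schedule(
    buffer_bytes=8 * 1024 * 1024,   # 8 MiB playback buffer
    playback_bps=256 * 1024,        # ~2 Mbit/s stream = 256 KiB/s
    disk_bps=16 * 1024 * 1024,      # 16 MiB/s sustained disk read
    block_bytes=64 * 1024,          # 64 KiB data blocks
)
print(blocks, active, standby)  # 128 blocks, 0.5 s active, 31.5 s standby
```

With these illustrative numbers the disk is active for only 0.5 s out of every 32 s of playback, which is the kind of duty-cycle reduction that makes the reported power savings plausible; a real scheduler would also account for spin-up time and spin-up energy.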
The aim of this study is to develop a new software tool for 3D dose verification using a $PRESAGE^{REU}$ gel dosimeter. The tool includes the following functions: importing 3D doses from treatment planning systems (TPS); importing 3D optical density (OD) data; converting ODs to doses; 3D registration between two volumetric data sets by translational and rotational transformations; and evaluation with the 3D gamma index. To obtain the correlation between ODs and doses, CT images of a cylindrical $PRESAGE^{REU}$ gel were acquired, and a volumetric modulated arc therapy (VMAT) plan was designed to deliver doses from 1 Gy to 6 Gy to six disk-shaped virtual targets along the z-axis. After the VMAT plan was delivered, 3D OD data were reconstructed from 512 projections acquired with a $Vista^{TM}$ optical CT scanner (Modus Medical Devices Inc., Canada) every 2 hours after irradiation. A curve for converting ODs to doses was derived by comparing the TPS dose profile to the OD profile along the z-axis, and the 3D OD data were converted to absorbed doses using this curve. Supra-linearity was observed between doses and ODs, and the ODs decayed about 60% per 24 hours depending on their magnitudes. Doses measured from the $PRESAGE^{REU}$ gel agreed well with the TPS doses in the central region, but large under-doses were observed in the peripheral region of the cylindrical geometry. The gamma passing rate for the 3D doses was 70.36% under gamma criteria of 3% dose difference and 3 mm distance to agreement. The low passing rate resulted from the mismatch of the refractive index between the PRESAGE gel and the oil bath in the optical CT scanner. In conclusion, the developed software was useful for 3D dose verification with PRESAGE gel dosimetry, but further improvement of the gel dosimetry system is required.
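The gamma evaluation used above combines a dose-difference criterion (3%) with a distance-to-agreement criterion (3 mm). A minimal 1D sketch of the criterion is shown below; the real tool operates on 3D volumetric data, and the dose profiles here are hypothetical.

```python
# Hedged 1D sketch of the gamma index (3% dose difference / 3 mm DTA).
# The actual software evaluates 3D volumes; profiles are hypothetical.
import math

def gamma_1d(ref, meas, spacing_mm, dd=0.03, dta_mm=3.0):
    """Return the percentage of reference points with gamma <= 1."""
    passing = 0
    for i, dr in enumerate(ref):
        best = math.inf
        for j, dm in enumerate(meas):
            dist = (i - j) * spacing_mm
            dose_diff = (dm - dr) / dr if dr else 0.0
            g = math.sqrt((dist / dta_mm) ** 2 + (dose_diff / dd) ** 2)
            best = min(best, g)
        passing += best <= 1.0
    return 100.0 * passing / len(ref)

ref = [1.0, 2.0, 4.0, 2.0, 1.0]
meas = [1.0, 2.1, 3.9, 2.0, 1.0]
print(gamma_1d(ref, meas, spacing_mm=1.0))  # every point passes -> 100.0
```

Note how the DTA term can rescue a point that fails the pure dose-difference test when a nearby measured point matches the reference dose, which is exactly why gamma analysis is preferred over point-by-point dose comparison for steep-gradient distributions.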
Today, IT organizations perform projects with a vision tied to marketing and financial profit. The objective of realizing that vision is to improve project-performing ability in terms of QCD, and organizations have made great efforts to achieve it through process improvement. Large companies such as IBM, Ford, and GE have achieved over 80% of their success through business process re-engineering using information technology, rather than through the improvement effects of computerization alone. To achieve the objective, it is important to collect, analyze, and manage data on performed projects; however, quantitative measurement is difficult because software is invisible and the effects and efficiency caused by process change cannot be identified visibly, so it is not easy to extract an improvement strategy. This paper measures and analyzes project performance, focusing on an organization's external effectiveness and internal efficiency (Quality, Delivery, Cycle time, and Waste). Based on the measured project performance scores, an OT (Opportunity Tree) model was designed to optimize project performance. The design process is as follows. First, metadata are derived from projects and analyzed through a quantitative GQM (Goal-Question-Metric) questionnaire. Then, the project performance model is designed with the data obtained from the questionnaire, and the organization's performance score for each area is calculated. The value is revised by integrating the measured scores for each area with the vision weights from all stakeholders (CEO, middle managers, developers, investors, and customers). Through this, routes for improvement are presented and an optimized improvement method is suggested. Existing methods of improving software processes have been highly effective in dividing processes, but somewhat unsatisfactory as a structural means of developing and systematically managing strategies by applying the processes to projects.
The proposed OT model provides a solution to this problem. It is useful for providing an optimal improvement method in line with an organization's goals, and it can reduce the risks that may occur in the course of process improvement when applied with the proposed methods. In addition, satisfaction with the improvement strategy can be increased by obtaining vision weights from all stakeholders through the qualitative questionnaire and reflecting them in the calculation. The OT is also useful for optimizing the expansion of market and financial performance by controlling the abilities of Quality, Delivery, Cycle time, and Waste.
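The revision step described above, integrating per-area scores with stakeholder vision weights, can be sketched as a weighted average; the area scores and weights below are hypothetical illustrations, not data from the paper.

```python
# Hedged sketch of vision-weighted performance scoring. Area scores
# and stakeholder-derived weights are hypothetical illustrations.
def weighted_performance(scores, weights):
    """scores: area -> measured score (0-100).
    weights: area -> aggregated stakeholder vision weight.
    Weights are normalized so they sum to 1 before combining."""
    total_w = sum(weights.values())
    return sum(scores[a] * weights[a] / total_w for a in scores)

scores = {"Quality": 70, "Delivery": 80, "Cycle time": 60, "Waste": 50}
weights = {"Quality": 4, "Delivery": 3, "Cycle time": 2, "Waste": 1}
print(weighted_performance(scores, weights))  # (280+240+120+50)/10 = 69.0
```

Areas whose weighted contribution falls furthest below the organization's target would then become the branches of the Opportunity Tree where improvement routes are explored first.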
In recent years, web applications have grown rapidly and become more and more complex. As web applications grow more complex, there is growing concern about their quality; yet very little attention has been paid to web application testing, and practical research efforts and tools are scarce. Thus, in this paper, we propose automated testing methods for web applications. The methods generate an analysis model by analyzing the HTML code and the source code; test targets are then identified and test cases are extracted from the analysis model. In addition, test drivers and test data are generated automatically and deployed on the web server to establish a testing environment. Through this process, the testing of web applications can be automated, and the automation makes our approach more effective than existing research efforts.
Kim, Joo Seob; Ahn, Woo Sang; Lee, Woo Suk; Park, Sung Ho; Choi, Wonsik; Shin, Seong Soo
The Journal of Korean Society for Radiation Therapy / v.26 no.2 / pp.305-312 / 2014
Purpose: The purpose of this study is to analyze the mechanical and leaf-speed accuracy of the dynamic multileaf collimator (DMLC) and determine an appropriate period for quality assurance (QA). Materials and Methods: QA of the DMLC equipped with Millennium 120 leaves was performed 92 times in total from January 2012 to June 2014. The accuracy of leaf position and the isocenter coincidence of the MLC were checked using graph paper and Gafchromic EBT film, respectively. The stability of leaf speed was verified using a test file requiring the leaves to reach maximum speed during gantry rotation. At the end of every leaf-speed QA, the dynalog files created by the MLC controller were analyzed using the dynalog file viewer software. These files contain the planned versus actual positions of all leaves and provide the error RMS (root mean square) of individual leaf deviations and an error histogram of all leaf deviations. In this study, the data obtained from the leaf-speed QA were used to screen for degradation of leaf-speed performance and to determine the need for motor replacement. Results: The leaf position accuracy and isocentric coincidence of the MLC were within the tolerance range recommended by the TG-142 report. A total of 56 motors were replaced over the whole QA period. Among the motors replaced as a result of QA, gradually increasing patterns of error RMS values were much more common than suddenly increasing patterns. The average error RMS values of the gradually and suddenly increasing patterns were 0.298 cm and 0.273 cm, respectively. Although the average error RMS values were within the 0.35 cm recommended by the vendor, motors were replaced according to the criterion of no counts with misplacement > 1 cm. On average, motors showing a gradually increasing error RMS pattern were replaced after 22 days; 28 motors were replaced independently of the leaf-speed QA.
Conclusion: This study performed periodic MLC QA to analyze the mechanical and leaf-speed accuracy of the DMLC. The leaf position accuracy and isocentric coincidence of the MLC were within the tolerance values recommended by the TG-142 report. Based on the results obtained from the leaf-speed QA, we conclude that the leaf-speed QA protocol for the DMLC should be performed at least bimonthly to screen leaf-speed performance. This periodic QA protocol can help ensure accurate IMRT delivery to patients by maintaining leaf-speed performance.
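The error RMS statistic screened in the QA above is simply the root mean square of the planned-versus-actual leaf position deviations recorded in the dynalog file. A minimal sketch, with hypothetical position values in cm:

```python
# Hedged sketch: per-leaf error RMS from planned vs. actual positions,
# as recorded in a dynalog-style log. Position values are hypothetical.
import math

def leaf_error_rms(planned, actual):
    """RMS of (actual - planned) leaf position deviations, in cm."""
    n = len(planned)
    return math.sqrt(sum((a - p) ** 2 for p, a in zip(planned, actual)) / n)

planned = [1.00, 1.50, 2.00, 2.50]   # planned leaf positions (cm)
actual  = [1.02, 1.48, 2.03, 2.49]   # actual recorded positions (cm)
rms = leaf_error_rms(planned, actual)
print(rms < 0.35)  # True: within the vendor-recommended 0.35 cm
```

Tracking this value per motor across successive QA sessions, as the study does, is what distinguishes a gradually increasing pattern (replace proactively) from a sudden jump.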