Use of Imaging and Biopsy in Prostate Cancer Diagnosis: A Survey From the Asian Prostate Imaging Working Group
Korean Journal of Radiology, v.24 no.11, pp.1102-1113, 2023
Objective: To elucidate the use of radiological studies, including nuclear medicine, and biopsy for the diagnosis and staging of prostate cancer (PCA) in clinical practice, and to understand the current status of PCA in Asian countries via an international survey. Materials and Methods: The Asian Prostate Imaging Working Group designed a survey questionnaire with four domains focused on prostate magnetic resonance imaging (MRI), other prostate imaging, prostate biopsy, and PCA backgrounds. The questionnaire was sent to 111 members of professional affiliations in Korea, Japan, Singapore, and Taiwan who were representatives of their working hospitals, and their responses were analyzed. Results: This survey had a response rate of 97.3% (108/111). The rates of using 3T scanners, antispasmodic agents, laxative drugs, and Prostate Imaging-Reporting and Data System (PI-RADS) reporting for prostate MRI were 21.6%-78.9%, 22.2%-84.2%, 2.3%-26.3%, and 59.5%-100%, respectively. Respondents reported using highest b-values of 800-2000 sec/mm² and fields of view of 9-30 cm. Prostate MRI examinations per month ranged from 1 to 600, and they were most commonly indicated for biopsy-naïve patients suspected of PCA in Japan and Singapore and for staging of proven PCA in Korea and Taiwan. The most commonly used radiotracers for prostate positron emission tomography were prostate-specific membrane antigen in Singapore and fluorodeoxyglucose in the three other countries. The most common timing for prostate MRI was before biopsy (29.9%). Prostate-targeted biopsies were performed in 63.8% of hospitals, usually by an MRI-ultrasound fusion approach. The most common presentation was localized PCA in all four countries, and it was usually treated with radical prostatectomy. Conclusion: This survey showed the diverse technical details and availability of imaging and biopsy in the evaluation of PCA. 
This suggests the need for an educational program for Asian radiologists to promote standardized evidence-based imaging approaches for the diagnosis and staging of PCA.
Globally, drug side effects rank among the leading causes of death. Responding effectively to such adverse drug reactions requires a shift toward an active, real-time monitoring system, along with standardization and quality improvement of the underlying data. Integrating data across individual institutions and leveraging large-scale data to improve the accuracy of drug side effect prediction is critical. However, sharing data between institutions raises privacy concerns and must reconcile differing data standards. To address this, our research adopts a federated learning approach: in compliance with privacy regulations, raw data is never shared directly; only the results of each institution's model training are exchanged. We employ the Common Data Model (CDM) to standardize heterogeneous data formats, ensuring accuracy and consistency of the data. Additionally, we propose a drug monitoring system that enhances security and scalability management through a cloud-based federated learning environment. This system enables effective monitoring and prediction of drug side effects while protecting the privacy of data held by participating hospitals. The goal is to reduce mortality due to drug side effects and cut medical costs, and we explore various technical approaches and methodologies to achieve this.
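The core idea above, training locally and sharing only model weights, can be sketched as federated averaging (FedAvg). This is a minimal illustration under stated assumptions: the function names, the toy logistic-regression model, and the two-hospital data are all hypothetical, not part of the proposed system.

```python
import math

def local_update(weights, data, lr=0.1):
    """One epoch of gradient descent on a hospital's private data
    for a toy logistic-regression side-effect predictor."""
    w = list(weights)
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x))
        pred = 1.0 / (1.0 + math.exp(-z))        # sigmoid
        for i, xi in enumerate(x):
            w[i] -= lr * (pred - y) * xi         # gradient step
    return w

def federated_average(updates, sizes):
    """Server-side FedAvg: average the local weights,
    weighted by each hospital's dataset size."""
    total = sum(sizes)
    return [sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i in range(len(updates[0]))]

# Two hospitals hold private (features, label) pairs the server never sees.
hospital_a = [([1.0, 0.0], 1), ([0.0, 1.0], 0)]
hospital_b = [([1.0, 1.0], 1), ([0.0, 0.0], 0)]
global_w = [0.0, 0.0]
for _ in range(10):                              # communication rounds
    local = [local_update(global_w, hospital_a),
             local_update(global_w, hospital_b)]
    global_w = federated_average(local, [len(hospital_a), len(hospital_b)])
```

Only `global_w` and the local weight vectors cross institutional boundaries; the (feature, label) records stay inside each hospital, which is the privacy property the abstract relies on.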
This study aims to deeply analyze the military strategies and tactics used in the battles between Israel and Hamas, to understand the military approaches, technical capabilities, and their impact on the outcomes of the conflict. To achieve this, methodologies such as literature review, data analysis, and case studies were utilized. The research findings confirm that Hamas employed asymmetric tactics, such as rocket attacks and surprise attacks through underground tunnels, to counter Israel's military superiority. On the other hand, Israel responded to Hamas's attacks with the Iron Dome interception system and intelligence-gathering capabilities, but faced difficulties due to Hamas's underground tunnel network. After six months of fighting, the casualties in the Gaza Strip exceeded 30,000, and more than 1.7 million people became refugees. Israel also suffered over 1,200 deaths. Militarily, neither side achieved a decisive victory, resulting in a war of attrition. This study suggests that the Israel-Hamas war exemplifies the complexity of modern asymmetric warfare. Furthermore, it recommends that political compromise between the two sides and active mediation efforts by the international community are necessary for the peaceful resolution of the Israel-Palestine conflict.
Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, which is the cornerstone of personalized, context-specific services. Yet the risk of personal information leakage is growing, because the data retrieved by sensors usually contains private information. Various technical characteristics of context-aware applications have especially troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy must be revised according to the characteristics of new technologies. However, previous studies of information privacy factors in context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing. Existing studies have focused on only a small subset of these characteristics; therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. 
Second, most studies have relied on user surveys to identify information privacy factors, despite users' limited knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge of context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. A survey that assumes the participants have sufficient experience with or understanding of the technologies in question may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., considering only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus needed. Hence, the purpose of this paper is to identify a generic set of factors underlying information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable expert opinion and to produce the rank-ordered list; it lends itself well to deriving a universal set of information privacy concern factors and their priorities. An international panel of researchers and practitioners with expertise in privacy and context-aware systems participated in our research. The Delphi rounds faithfully followed the procedure for Delphi studies proposed by Okoli and Pawlowski. 
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. In this process, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors most relevant to understanding information privacy concern; to aid them, some of the main factors found in the literature were presented. The second round of the questionnaire took the main factors provided in the first round and fleshed them out with relevant sub-factors drawn from the literature survey. Respondents were then asked to evaluate each sub-factor's suitability against the corresponding main factor in order to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were asked to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence: a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts identified context data collection and the high identifiability of individual data as the most important main factor and sub-factor, respectively. 
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ in the following ways from those proposed in other studies. First, unlike existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information, our study helped clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since context-aware services differ in their technical characteristics from other services, we selected the specific characteristics with the highest potential to increase users' privacy concerns. Second, by introducing IPOS as the factor division, this study considered privacy issues in service delivery and display that were largely overlooked in existing studies. Lastly, for each factor, it related the level of importance, based on professionals' opinions, to the extent of users' privacy concerns. A traditional questionnaire survey was not used because, at that time, users almost completely lacked understanding of and experience with this new technology. In the Delphi process, the professionals selected context data collection, tracking and recording, and the sensor network as the most important technological characteristics of context-aware personalized services with respect to users' privacy concerns. 
In creating context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies, following the development of context-aware technology, focus on which services and systems should be provided by utilizing context information. The results of this study show, however, that in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Following the sub-factor evaluation results, additional studies are needed on approaches to reducing users' privacy concerns about technological characteristics such as the high identifiability of individual data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output. The results show that delivery and display, which present services to users under the anywhere-anytime-any-device concept, are regarded as even more important in context-aware personalized services than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
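The concordance analysis used across the Delphi rounds above is commonly operationalized with Kendall's coefficient of concordance W. A minimal sketch follows; the panel data shown are hypothetical, not the study's actual responses.

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for a Delphi round.
    rankings: one rank list per expert (1 = most important).
    W ranges from 0 (no agreement) to 1 (perfect agreement)."""
    m = len(rankings)                      # number of experts
    n = len(rankings[0])                   # number of factors ranked
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking four factors identically -> W = 1.0
perfect = [[1, 2, 3, 4]] * 3
print(kendalls_w(perfect))  # 1.0
```

In practice W is recomputed after each round; rounds stop once W stabilizes at an acceptable level of consensus, which matches the study's focus on group consensus over individual insistence.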
1. Introduction Today, the Internet is recognized as an important channel for the transaction of products and services. According to data surveyed by the National Statistical Office, online transactions in 2007 amounted to 15.7656 trillion won for the year, a 17.1% (2.3060 trillion won) increase over the previous year; of this, B2C transactions increased by 12.0% (10.2258 trillion won). Because the entry barrier to Korea's online market is low, many retailers have easily entered it; as the market's scale has grown, so has the intensity of its competition. In particular, the Internet and IT innovation have transformed the existing market into a nearly perfectly competitive one (Srinivasan, Rolph & Kishore, 2002). In the early years of online business, a moderate price was thought to be the main reason for success; with tougher competition, firms have awakened to the importance of online service quality. If customers are not sure whether a Web site can provide what they want, or whether they can trust the products they have already bought there, they doubt its viability (Parasuraman, Zeithaml & Malhotra, 2005). Customers can directly reserve and issue air tickets, irrespective of place and time, at the Web sites of travel agencies or airlines, but empirical studies of these Web sites for reserving and issuing air tickets are insufficient. Therefore, this study pursues the following specific objectives. The first is to measure the service quality and service recovery of Web sites for reserving and issuing air tickets. The second is to examine whether this online service quality and online service recovery have an impact on overall service quality. The third is to examine the relationship between overall service quality and customer satisfaction, and between customer satisfaction and loyalty intention. 2. 
Theoretical Background 2.1 On-line Service Quality Barnes & Vidgen (2000; 2001a; 2001b; 2002) developed a tool for measuring Web site quality, called WebQual, through four iterations. WebQual 1.0, the first step, provided measurement items for information quality based on QFD and was validated with students of a UK business school. WebQual 2.0, the second step, addressed interaction quality and was evaluated by customers of an online bookshop. WebQual 3.0, the third step, consolidated WebQual 1.0 (information quality) and WebQual 2.0 (interaction quality); it comprises three quality dimensions, information quality, interaction quality, and site design, and was assessed and confirmed with auction sites (eBay, Amazon, QXL). Subsequently, through further empirical studies, the authors replaced site design with usability, judging usability to be a concept of how customers interact with and perceive Web sites, and one widely applicable to accessing them. Through this process, WebQual 4.0 was developed; it consists of three quality dimensions, information quality, interaction quality, and usability, with 22 items. However, because WebQual 4.0 focuses on the technical side, it is useful for evaluating a Web site's design but not for evaluating the pleasantness of the Web site experience. Parasuraman, Zeithaml & Malhotra (2002; 2005) developed measures of online service quality in 2002 and 2005. The 2002 study divided online service quality into five dimensions, but these were not well organized and needed to be thoroughly restudied. Parasuraman, Zeithaml & Malhotra (2005) therefore reworked the online service quality measure based on the 2002 study and developed E-S-QUAL. After constructing a preliminary measure of online service quality, they administered a questionnaire to customers who had purchased at amazon.com and walmart.com and reassessed the measure. 
The final E-S-QUAL consists of four dimensions and 22 items: efficiency, system availability, fulfillment, and privacy. Efficiency measures ease of access to and usability of the site; system availability measures the correct technical functioning of the site; fulfillment measures the promptness of product delivery and the sufficiency of stock; and privacy measures the degree to which customer data are protected. 2.2 Service Recovery Service industries try to minimize losses by coping with service failures promptly. Such responses of service providers to service failure are called service recovery (Kelly & Davis, 1994). Bitner (1990) studied, from the customer's viewpoint, how service providers' behavior at the service encounter shapes customers' satisfaction or dissatisfaction. According to this work, managing service failure successfully requires exact recognition of the service problem, an apology, a sufficient explanation of the failure, and some tangible compensation. Parasuraman, Zeithaml & Malhotra (2005) approached service recovery from the standpoint of measurement rather than management, moved the focus from the offline to the online market, and developed E-RecS-QUAL, a measurement tool for online service recovery. 2.3 Customer Satisfaction Definitions of customer satisfaction fall into two points of view. First, customer satisfaction has been approached as a consumption outcome. Howard & Sheth (1969) defined satisfaction as 'a cognitive state of feeling adequately or inadequately rewarded for one's sacrifice,' and Westbrook & Reilly (1983) defined customer satisfaction/dissatisfaction as 'a psychological reaction to shopping and purchasing behavior, the display conditions of the retail store, and the outcome of purchased goods and services, as well as the market as a whole.' Second, customer satisfaction has been approached as a process. 
Engel & Blackwell (1982) defined satisfaction as 'an evaluation that the chosen alternative is consistent with prior beliefs about it.' Tse & Wilton (1988) defined customer satisfaction as 'the consumer's response to the discrepancy between prior expectation and actual outcome.' That is, the process view holds that the comparison and evaluation of what consumers expect against what they actually obtain is the essential element of customer satisfaction. Unlike the outcome-oriented approach, the process-oriented approach has many advantages. Because it deals with the customer's whole consumption experience, it can examine the main process by measuring, one by one, each factor that plays an essential role at each step. This approach also enables us to examine the perceptual/psychological processes that form customer satisfaction. Because of these advantages, many studies now adopt the process-oriented approach (Yi, 1995). 2.4 Loyalty Intention Loyalty has been studied through behavioral approaches, attitudinal approaches, and composite approaches (Dekimpe et al., 1997). Early studies defined loyalty in behavioral terms: behavioral approaches regard customer loyalty as 'a tendency to purchase periodically, within a certain period of time, at a specific retail store.' But because behavioral approaches focus only on the outcome of customer behavior, some have pointed out the limitation that the customer's decision-making situation and process are neglected (Enis & Paul, 1970; Raj, 1982; Lee, 2002). Attitudinal approaches were therefore suggested. They consider that loyalty comprises cognitive, emotional, and volitional factors (Oliver, 1997), and define customer loyalty as 'favorable behavior toward a specific retail store.' However, while attitudinal approaches can explain how customer loyalty forms and changes, they cannot establish whether it translates into actual future purchasing. 
This is a shortcoming (Oh, 1995). 3. Research Design 3.1 Research Model Based on the objectives of this study, the research model derived is
Background: Mediastinal neurogenic tumors are generally benign lesions, and they are ideal candidates for resection via video-assisted thoracoscopic surgery (VATS). However, benign neurogenic tumors at the thoracic apex present technical problems for the surgeon because of the limited exposure of the neurovascular structures, and the optimal surgical access to these tumors is still a matter of debate. This study aims to clarify the feasibility and safety of the VATS approach for surgical resection of benign apical neurogenic tumors (ANT). Material and Method: From January 1996 to September 2008, 31 patients with benign ANT (15 males/16 females, mean age: 45 years, range: 8~73) were operated on by various surgical methods: 14 VATS, 10 lateral thoracotomies, 6 cervical or cervicothoracic incisions, and 1 median sternotomy. Three patients had associated von Recklinghausen's disease. The perioperative variables and complications were retrospectively reviewed according to the surgical approach, and the surgical results of VATS were compared with those of the more invasive surgeries. Result: In the VATS group, the histologic diagnosis was schwannoma in 9 cases, neurofibroma in 4 cases, and ganglioneuroma in 1 case, and the median tumor size was 4.3 cm (range: 1.2~7.0 cm). The operation time, amount of chest tube drainage, and postoperative stay in the VATS group were significantly less than those in the more invasive surgical group (p<0.05). No conversion thoracotomy was required. There were 2 cases of Horner's syndrome and 2 brachial plexus neuropathies in the VATS group; there were 1 case of Horner's syndrome, 1 brachial plexus neuropathy, 1 vocal cord palsy, and 2 non-neurologic complications in the invasive surgical group, and all of these complications developed postoperatively. The operative method (i.e., non-enucleation of the tumor) was an independent predictor of postoperative neuropathy in the VATS group (p=0.029). 
Conclusion: The VATS approach for treating benign ANT is a less invasive, safe and feasible method. Enucleation of the tumor during the VATS procedure may be an important technique to decrease the postoperative neurological complications.
In recent years, although a growing number of corporations have introduced the PMO (Project Management Office), they seem indifferent to measuring PMO performance. Beyond the inherent ambiguity of measuring the performance of information systems (IS) projects after introducing a PMO, other reasons include the lack of recognition of the PMO's concrete functions and the scarcity of empirical studies on the PMO's effect on project members and project performance. Accordingly, this study proposes a new research model in which project success factors (i.e., standardization, management advocacy, and staff expertise) positively affect PMO capability (i.e., knowledge management, resources management, and problem-solving competency), which in turn leads to project performance (i.e., task outcomes, psychological outcomes, and organizational outcomes). To empirically test the research model, data were surveyed from PMO and IS departments. To prove the validity of the proposed research model, PLS analysis was applied to 132 valid questionnaires: the reliability and validity of the measurement of the research variables were tested, and path analysis was conducted for hypothesis testing. The path analysis results can be summarized in seven broad findings. First, standardization, as a project success factor, has a positive association with the knowledge management, resources management, and problem-solving competency of PMO capabilities. This finding indicates that multiple- or single-project management should satisfy standardization in order to operate an effective PMO. Second, management advocacy, as a project success factor, has a positive association with knowledge management, resources management, and problem-solving competency. Management advocacy refers to the willingness of management to provide the required resources and authority for project success. 
Researchers agree on the importance of management advocacy for favorable PMO capability. Third, staff expertise, as a project success factor, has a positive association with knowledge management, resources management, and problem-solving competency. This finding indicates that forming a team of exceptional consultants or members with proficient knowledge is the key to elevating PMO capability. Past research suggests that experience, knowledge, and the resulting familiarity with the problem at hand can be important determinants of PMO capability; a project staffed with appropriate expertise enjoys a diversity of abilities and experiences. Fourth, the knowledge management competency of PMO capabilities has a positive impact on psychological outcomes but no direct effect on task outcomes or organizational outcomes. In South Korea, although PMO adoption began in 2000, PMOs were not widely introduced in corporations until 2005. Because the period of knowledge consolidation and management by PMOs has therefore been comparatively shorter than in other countries, knowledge management still lacks content and infrastructure, and thus had no significant impact on task or organizational outcomes. Fifth, the resources management competency of PMO capabilities has a positive association with task outcomes, psychological outcomes, and organizational outcomes. In addition, problem-solving competency has a positive association with task outcomes, psychological outcomes, and organizational outcomes. These findings stress that PMO capabilities have a positive impact on project performance. 
Sixth, according to the path analysis of the hypotheses suggested in this research, problem-solving competency is the PMO capability that is the key success factor for task, psychological, and organizational outcomes in the integrated performance model; the analysis thus reveals problem-solving competency as an important factor for the integrated performance model. This finding is in line with past IS research, which affirms that the work of IS projects is essentially a problem-solving endeavor. Seventh, in the path analysis of the hypotheses in this research, the path of the management advocacy
This paper describes the technical background of the Korean wildlife radiation dose assessment code, K-BIOTA, and summarizes its application. K-BIOTA applies a graded approach with three levels: screening assessments (Levels 1 & 2) and a detailed assessment based on site-specific data (Level 3). The screening-level assessment is a preliminary step to determine whether the detailed assessment is needed, and it calculates the dose rate for grouped organisms rather than for an individual biota. In the Level 1 assessment, the risk quotient (RQ) is calculated by comparing the actual media concentration with the environmental media concentration limit (EMCL) derived from a benchmark screening reference dose rate. If the RQ of the Level 1 assessment is less than 1, it can be concluded that the ecosystem maintains its integrity, and the assessment is terminated. If the RQ is greater than 1, the Level 2 assessment, which calculates the RQ using the average values of the concentration ratio (CR) and equilibrium distribution coefficient (Kd) for the grouped organisms, is carried out for a more realistic result; the Level 2 assessment is thus less conservative than Level 1. If the RQ of the Level 2 assessment is less than 1, it can likewise be concluded that the ecosystem maintains its integrity, and the assessment is terminated. If the RQ is greater than 1, the Level 3 detailed assessment is performed. In the Level 3 assessment, the radiation dose for the representative organism of a site is calculated using site-specific data for the occupancy factor, CR, and Kd. In addition, K-BIOTA optionally allows uncertainty analysis of the dose rate with respect to CR, Kd, and the environmental medium concentration among the input parameters in the Level 3 assessment. 
Four probability density functions can be applied: normal, lognormal, uniform, and exponential. The applicability of the code was tested through participation in the IAEA EMRAS II (Environmental Modelling for Radiation Safety) programme for the comparison of environmental models; the results demonstrated that K-BIOTA is very useful for assessing the radiation risk of wildlife living in various contaminated environments.
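The graded Level 1 → Level 2 → Level 3 screening logic described above can be sketched as a simple decision function. This is an illustrative sketch only: the function names and the EMCL values are hypothetical, not taken from the K-BIOTA code itself.

```python
def risk_quotient(media_concentration, emcl):
    """Screening risk quotient: measured environmental media
    concentration divided by the environmental media
    concentration limit (EMCL)."""
    return media_concentration / emcl

def graded_assessment(conc, emcl_level1, emcl_level2):
    """Return the level at which the graded assessment resolves.
    The Level 2 EMCL uses average CR/Kd values for grouped
    organisms, so it is less conservative than Level 1."""
    if risk_quotient(conc, emcl_level1) < 1:
        return "Level 1: ecosystem integrity maintained"
    if risk_quotient(conc, emcl_level2) < 1:
        return "Level 2: ecosystem integrity maintained"
    return "Level 3: site-specific detailed assessment required"

print(graded_assessment(0.5, 1.0, 2.0))   # passes at Level 1
print(graded_assessment(1.5, 1.0, 2.0))   # passes at Level 2
print(graded_assessment(5.0, 1.0, 2.0))   # escalates to Level 3
```

The point of the graded structure is that an RQ below 1 at any screening level terminates the assessment early, and only the remaining cases incur the cost of the site-specific Level 3 dose calculation.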
Traditionally, three-dimensional (3D) models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. Image-based rendering, on the other hand, has recently been suggested as a possible alternative VR platform for its photo-realism; however, due to its limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, as a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered virtual environments by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for the various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, switching between the image and the 3D model occurs at the distance at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of the viewing distance, if one exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
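The LOD-selection policy described above can be sketched as a small data structure: a node holding several representations of one object and choosing among them by viewing distance, with the 3D model always preferred during interaction. All class and field names here are invented for illustration; this is not the WorldToolKit API.

```python
from dataclasses import dataclass

@dataclass
class Representation:
    kind: str            # "model3d", "billboard", or "envmap"
    max_distance: float  # farthest distance at which this LOD is used

@dataclass
class MixedNode:
    """Scene-graph node holding multiple representations of one object."""
    name: str
    reps: list  # sorted nearest LOD first

    def select(self, distance: float, interacting: bool = False) -> str:
        # During interaction the 3D model is always preferred if it
        # exists, regardless of viewing distance (as the abstract states).
        if interacting:
            for r in self.reps:
                if r.kind == "model3d":
                    return r.kind
        # Otherwise pick the nearest-range representation that covers
        # the current viewing distance.
        for r in self.reps:
            if distance <= r.max_distance:
                return r.kind
        return self.reps[-1].kind  # fall back to the farthest LOD

tree = MixedNode("statue", [
    Representation("model3d", 10.0),
    Representation("billboard", 50.0),
    Representation("envmap", float("inf")),
])
print(tree.select(5.0))                      # close range -> model3d
print(tree.select(30.0))                     # intermediate -> billboard
print(tree.select(200.0))                    # far range -> envmap
print(tree.select(200.0, interacting=True))  # interaction -> model3d
```

In the paper's formulation the model/billboard threshold would not be a fixed constant but the distance at which the object's internal depth becomes perceptible to the user.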
Green infrastructure planning represents landscape planning measures to reduce particulate matter. This study aimed to derive factors that may be used in planning green infrastructure for particulate matter reduction using text mining techniques. A range of analyses were carried out by focusing on keywords such as 'particulate matter reduction plan' and 'green infrastructure planning elements'. The analyses included Term Frequency-Inverse Document Frequency (TF-IDF) analysis, centrality analysis, related word analysis, and topic modeling analysis. These analyses were carried out via text mining by collecting information on previous related research, policy reports, and laws. First, TF-IDF analysis results were used to classify major keywords relating to particulate matter and green infrastructure into three groups: (1) environmental issues (e.g., particulate matter, environment, carbon, and atmosphere), (2) target spaces (e.g., urban, park, and local green space), and (3) application methods (e.g., analysis, planning, evaluation, development, ecological aspect, policy management, technology, and resilience). Second, the centrality analysis results were found to be similar to those of TF-IDF; it was confirmed that the central connectors to the major keywords were 'Green New Deal' and 'Vacant land'. The results from the analysis of related words verified that planning green infrastructure for particulate matter reduction required planning forests and ventilation corridors. Additionally, moisture must be considered for microclimate control. It was also confirmed that utilizing vacant space, establishing mixed forests, introducing particulate matter reduction technology, and understanding the system may be important for the effective planning of green infrastructure. Topic analysis was used to classify the planning elements of green infrastructure based on ecological, technological, and social functions.
The planning elements of ecological function were classified into morphological (e.g., urban forest, green space, wall greening) and functional aspects (e.g., climate control, carbon storage and absorption, provision of habitats, and biodiversity for wildlife). The planning elements of technical function were classified into various themes, including the disaster prevention functions of green infrastructure, buffer effects, stormwater management, water purification, and energy reduction. The planning elements of the social function were classified into themes such as community function, improving the health of users, and scenery improvement. These results suggest that green infrastructure planning for particulate matter reduction requires approaches related to key concepts, such as resilience and sustainability. In particular, there is a need to apply green infrastructure planning elements in order to reduce exposure to particulate matter.
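The TF-IDF keyword-extraction step described above can be sketched in a few lines. The toy documents below are invented placeholders standing in for the collected research, policy reports, and laws; the scoring itself is the standard TF-IDF formula.

```python
import math
from collections import Counter

# Toy corpus: each string stands in for one collected document.
docs = [
    "particulate matter reduction urban forest plan",
    "green infrastructure urban park ventilation corridor",
    "particulate matter policy green new deal vacant land",
]

tokenized = [d.split() for d in docs]
# Document frequency: number of documents containing each term.
df = Counter(t for doc in tokenized for t in set(doc))
N = len(docs)

def tfidf(doc: list) -> dict:
    """TF-IDF scores for one tokenized document."""
    tf = Counter(doc)
    return {t: (tf[t] / len(doc)) * math.log(N / df[t]) for t in tf}

scores = tfidf(tokenized[0])
# Terms unique to one document score highest; terms shared across
# documents are discounted by the inverse-document-frequency factor.
top = sorted(scores, key=scores.get, reverse=True)
print(top[:3])
```

Centrality and topic-modeling analyses would then operate on the co-occurrence network and document-term matrix built from the same tokenized corpus.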
shows, Step 1 and Step 2 are significant, and at Step 3 the mediating variable has a significant effect on the dependent variable, as does the independent variable. To establish a partial mediation effect, the independent variable's estimate at Step 3 (standardized coefficient
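The three-step procedure referenced in this fragment matches the classical Baron and Kenny mediation test: Step 1 regresses Y on X (total effect c), Step 2 regresses M on X (path a), and Step 3 regresses Y on X and M together (direct effect c' and path b), with partial mediation indicated when c' is smaller than c but still nonzero. The sketch below uses synthetic data and illustrative helper functions, not the study's actual analysis, and omits significance testing.

```python
import random

random.seed(0)

def slope(x, y):
    """Simple-regression slope of y on x (centered closed form)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def slopes2(x, m, y):
    """Two-predictor OLS slopes of y on (x, m) via 2x2 normal equations."""
    mx, mm, my = sum(x) / len(x), sum(m) / len(m), sum(y) / len(y)
    xc = [v - mx for v in x]
    mc = [v - mm for v in m]
    yc = [v - my for v in y]
    sxx = sum(v * v for v in xc)
    smm = sum(v * v for v in mc)
    sxm = sum(p * q for p, q in zip(xc, mc))
    sxy = sum(p * q for p, q in zip(xc, yc))
    smy = sum(p * q for p, q in zip(mc, yc))
    det = sxx * smm - sxm * sxm
    bx = (smm * sxy - sxm * smy) / det
    bm = (sxx * smy - sxm * sxy) / det
    return bx, bm

# Synthetic partial mediation: X affects M, and Y depends on both X and M.
x = [random.gauss(0, 1) for _ in range(500)]
m = [0.6 * v + random.gauss(0, 1) for v in x]
y = [0.4 * v + 0.5 * w + random.gauss(0, 1) for v, w in zip(x, m)]

c = slope(x, y)           # Step 1: total effect of X on Y
a = slope(x, m)           # Step 2: effect of X on M
cp, b = slopes2(x, m, y)  # Step 3: direct effect c' and mediator effect b

print(round(c, 2), round(a, 2), round(cp, 2), round(b, 2))
```

Partial mediation shows up here as |c'| < |c| with b clearly nonzero; full mediation would drive c' toward zero.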
Is Video-assisted Thoracoscopic Resection for Treating Apical Neurogenic Tumors Always Safe?
An Exploratory Study on the Project Performance by PMO Capability
Characteristics of the Graded Wildlife Dose Assessment Code K-BIOTA and Its Application
An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism
Derivation of Green Infrastructure Planning Factors for Reducing Particulate Matter - Using Text Mining -