• Title/Summary/Keyword: Data transform


An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and labor-intensive step. Production seismic data processing requires a good velocity analysis tool as well as a high-performance computer; the tool must deliver fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with the mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. The velocity analysis must also be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must usually be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack.
Most parameter changes produce the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed. The index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted wave. However, it offers two improvements: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. The program references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. It runs under the X-Window/Motif environment, with its menu designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
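The mute-function transform between the T-X and NMOC domains rests on the hyperbolic NMO traveltime relation; a minimal sketch (function name and example values are ours for illustration, not taken from xva):

```python
import numpy as np

def nmo_time(t0, offset, velocity):
    """Hyperbolic NMO traveltime: map a zero-offset time t0 (s) to the
    time at which the same event arrives at a given offset (m)."""
    return np.sqrt(t0**2 + (offset / velocity)**2)

# A mute picked at t0 = 0.4 s in the NMO-corrected domain maps directly to
# T-X times at each offset -- no interpolation of trace samples is needed.
offsets = np.array([0.0, 500.0, 1000.0, 2000.0])
mute_tx = nmo_time(0.4, offsets, 1500.0)  # velocity in m/s
```

Because the mapping is an exact analytic function of offset, picking mute times in the NMOC domain and applying them in T-X avoids the interpolation error of a remove-NMO round trip.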

  • PDF

A study of Development of Transmission Systems for Terrestrial Single Channel Fixed 4K UHD & Mobile HD Convergence Broadcasting by Employing FEF (Future Extension Frame) Multiplexing Technique (FEF (Future Extension Frame) 다중화 기법을 이용한 지상파 단일 채널 고정 4K UHD & 이동 HD 융합방송 전송시스템 개발에 관한 연구)

  • Oh, JongGyu;Won, YongJu;Lee, JinSeop;Kim, JoonTae
    • Journal of Broadcast Engineering
    • /
    • v.20 no.2
    • /
    • pp.310-339
    • /
    • 2015
  • In this paper, the possibility of a terrestrial fixed 4K UHD (Ultra High Definition) and mobile HD (High Definition) convergence broadcasting service over a single channel, employing the FEF (Future Extension Frame) multiplexing technique of DVB (Digital Video Broadcasting)-T2 (Second Generation Terrestrial) systems, is examined, and the performance of such a service is investigated. FEF multiplexing technology can adjust the FFT (fast Fourier transform) and CP (cyclic prefix) size for each layer, whereas M-PLP (Multiple-Physical Layer Pipe) multiplexing in DVB-T2 systems cannot. The convergence broadcasting service scenario, which can provide fixed 4K UHD and mobile HD broadcasting through a single terrestrial channel, is described, and the transmission requirements of the SHVC (Scalable High Efficiency Video Coding) technique are predicted. A convergence broadcasting transmission system structure employing FEF and DVB-T2 transmission technologies is described. Optimized transmission parameters are derived for 4K UHD and HD convergence broadcasting, and the reception performance of these parameters under AWGN (additive white Gaussian noise), static Brazil-D, and time-varying TU (Typical Urban)-6 channels is examined through computer simulations to find the TOV (threshold of visibility). The results show that, for the 6 and 8 MHz bandwidths, reliable reception of both the fixed 4K UHD and mobile HD layer data can be achieved under static multipath and very fast fading multipath channels.
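The per-layer FFT and CP sizes that FEF multiplexing adjusts determine the OFDM symbol duration. A rough sketch of that arithmetic (the 7/64 µs elementary period is the DVB-T2 value for an 8 MHz channel; the helper name is ours):

```python
def ofdm_symbol_duration(fft_size, cp_fraction, elementary_period_s):
    """Total OFDM symbol duration = useful part (FFT window) + cyclic prefix."""
    t_useful = fft_size * elementary_period_s
    t_cp = cp_fraction * t_useful
    return t_useful + t_cp

# 32K FFT with a 1/128 guard interval in an 8 MHz DVB-T2 channel (T = 7/64 us):
# useful part 3584 us + guard 28 us = 3612 us total
t_sym = ofdm_symbol_duration(32768, 1 / 128, 7e-6 / 64)
```

A mobile HD layer would typically trade down to a smaller FFT (shorter symbols, more Doppler tolerance), which is exactly the per-layer flexibility FEF provides over M-PLP.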

Changes of the Prefrontal EEG(Electroencephalogram) Activities according to the Repetition of Audio-Visual Learning (시청각 학습의 반복 수행에 따른 전두부의 뇌파 활성도 변화)

  • Kim, Yong-Jin;Chang, Nam-Kee
    • Journal of The Korean Association For Science Education
    • /
    • v.21 no.3
    • /
    • pp.516-528
    • /
    • 2001
  • In educational research, the measurement of EEG (brain waves) can be a useful method for studying the functioning state of the brain during learning behaviour. This study investigated the changes of neuronal response across four repetitions of audio-visual learning. EEG data at the prefrontal sites $(Fp_{1},Fp_{2})$ were obtained from twenty subjects in the 8th grade and analysed quantitatively using an FFT (fast Fourier transform) program. The results were as follows: 1) In the first audio-visual learning, the activities of the $\beta_{2}(20-30Hz)$ and $\beta_{1}(14-19Hz)$ waves increased sharply, while the activities of the $\theta(4-7Hz)$ and $\alpha$ (8-13Hz) waves decreased compared with the baselines. 2) With repeated audio-visual learning, the activities of the $\beta_{2}$ and $\beta_{1}$ waves decreased gradually after the first repetition, with the $\beta_{2}$ wave changing more than the $\beta_{1}$ wave. 3) The activity of the $\alpha$ wave decreased gradually with repeated audio-visual learning, and the activity of the $\theta$ wave decreased sharply after the second repetition. 4) The $\beta$ and $\theta$ waves together showed high activities in the second audio-visual learning (first repetition), and learning achievement increased markedly after the second session. 5) The right prefrontal site $(Fp_{2})$ showed higher activation than the left $(Fp_{1})$ in the first audio-visual learning; however, there were no significant differences between the right and left prefrontal EEG activities in the repeated sessions. Based on these findings, we conclude that habituation of the neuronal response appears in repetitive audio-visual learning and that brain hemisphericity can be changed by learning experience. In addition, it is suggested that a single repetition of audio-visual learning is effective for improving learning achievement and activating brain function.
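The quantitative FFT analysis of band activities used above can be sketched as a relative band-power computation (the band edges follow the abstract; the function and synthetic signal are our illustration, not the authors' program):

```python
import numpy as np

def relative_band_power(x, fs, f_lo, f_hi):
    """Fraction of (non-DC) spectral power falling in [f_lo, f_hi] Hz."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= f_lo) & (freqs <= f_hi)
    return power[in_band].sum() / power[1:].sum()

# Synthetic 4 s signal dominated by a 10 Hz (alpha) rhythm
# with a weak 25 Hz (beta2) component.
fs = 256
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 25 * t)
alpha = relative_band_power(x, fs, 8, 13)
beta2 = relative_band_power(x, fs, 20, 30)
```

Comparing such band fractions between the first session and each repetition is what yields the increase/decrease patterns reported in results 1)-4).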

  • PDF

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • Frequency Scanning Interferometry (FSI), one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measuring methods, since its hardware remains fixed in operation and only the light frequency is scanned over a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images by changing the frequency of the light source; it then transforms the intensity data of the acquired images into frequency information and calculates the height profile of the target objects by frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and relatively long processing times due to the number of images acquired in the frequency scanning phase. 1) A Polarization-based Frequency Scanning Interferometry (PFSI) is proposed for robustness to optical noise. It consists of a tunable laser for the light source, a ${\lambda}/4$ plate in front of the reference mirror, a ${\lambda}/4$ plate in front of the target object, a polarizing beam splitter, a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a ${\lambda}/2$ plate between the PBS and the polarizer of the light source. Using the proposed system, the problem of low-contrast fringe images can be solved through polarization, and the light distribution of the object and reference beams can be controlled. 2) A signal processing acceleration method is proposed for PFSI based on a parallel processing architecture consisting of parallel processing hardware and software such as the Graphics Processing Unit (GPU) and Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the tact-time level of real-time processing.
Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
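The FFT step that converts a frequency-scanned fringe stack into height can be sketched per pixel as follows (a simplified model assuming a linear frequency sweep and a single reflecting surface; array shapes and names are ours):

```python
import numpy as np

def opd_from_stack(stack, freq_step, c=3.0e8):
    """stack: (n_scan, H, W) fringe intensities recorded while the laser
    frequency increases by freq_step (Hz) per image.  The fringe at each
    pixel oscillates at 2*d*freq_step/c cycles per image, so the FFT peak
    bin b gives the optical path difference d = b*c / (2*n*freq_step)."""
    n = stack.shape[0]
    spectrum = np.abs(np.fft.rfft(stack - stack.mean(axis=0), axis=0))
    peak_bin = spectrum[1:].argmax(axis=0) + 1   # skip the DC bin
    return peak_bin * c / (2.0 * n * freq_step)

# Synthetic single-pixel check: an OPD chosen to land exactly on FFT bin 10
# with a 1 GHz frequency step over 128 images.
n, df = 128, 1.0e9
d = 10 * 3.0e8 / (2 * n * df)
i = np.arange(n)
stack = (1 + np.cos(2 * np.pi * (2 * d * df / 3.0e8) * i)).reshape(n, 1, 1)
d_est = opd_from_stack(stack, df)[0, 0]
```

In the GPU/CUDA acceleration described above, this per-pixel FFT is the embarrassingly parallel workload: every pixel's column can be transformed independently.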

Kinetics and Mechanism of the Oxidation of Alcohols by C9H7NHCrO3Cl (C9H7NHCrO3Cl에 의한 알코올류의 산화반응에서 속도론과 메카니즘)

  • Park, Young-Cho;Kim, Young-Sik;Kim, Soo-Jong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.8
    • /
    • pp.378-384
    • /
    • 2018
  • $C_9H_7NHCrO_3Cl$ was synthesized by reacting $C_9H_7NH$ with chromium(VI) trioxide. The structure of the product was characterized by FT-IR (Fourier transform infrared) spectroscopy and elemental analysis. The oxidation of benzyl alcohol by $C_9H_7NHCrO_3Cl$ in various solvents showed that the reactivity increased with increasing dielectric constant (${\varepsilon}$) in the following order: DMF (N,N'-dimethylformamide) > acetone > chloroform > cyclohexane. The oxidation of alcohols by $C_9H_7NHCrO_3Cl$ was examined in DMF. As a result, $C_9H_7NHCrO_3Cl$ was found to be an efficient oxidizing agent that converts benzyl alcohol, allyl alcohol, primary alcohols, and secondary alcohols to the corresponding aldehydes or ketones (75%-95%). The selective oxidation of alcohols by $C_9H_7NHCrO_3Cl$ in DMF was also examined: $C_9H_7NHCrO_3Cl$ selectively oxidized benzyl, allyl, and primary alcohols in the presence of secondary ones. In DMF with an acidic catalyst, such as $H_2SO_4$, $C_9H_7NHCrO_3Cl$ oxidized benzyl alcohol (H) and its derivatives ($p-OCH_3$, $m-CH_3$, $m-OCH_3$, m-Cl, and $m-NO_2$). Electron-donating substituents accelerated the reaction, whereas electron-accepting groups retarded it. The Hammett reaction constant (${\rho}$) was -0.69 (308 K). The observed experimental data were used to rationalize hydride-ion transfer in the rate-determining step.
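A reaction constant like the reported ρ = -0.69 comes from a linear fit of the Hammett equation, log(k/k0) = ρσ. A toy fit for illustration: the σ values below are the standard Hammett substituent constants for the groups named in the abstract, but the rate data are synthetic, generated from the reported slope rather than taken from the paper.

```python
import numpy as np

# Standard Hammett substituent constants for H, m-CH3, m-OCH3, m-Cl, m-NO2
sigma = np.array([0.00, -0.07, 0.12, 0.37, 0.71])

# Synthetic relative rates generated from the reported rho = -0.69
# (electron donors, sigma < 0, accelerate; acceptors, sigma > 0, retard).
log_k_rel = -0.69 * sigma

rho, intercept = np.polyfit(sigma, log_k_rel, 1)
```

A negative ρ of modest magnitude is consistent with positive charge developing at the reaction center in the rate-determining step, supporting the hydride-transfer mechanism proposed above.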

Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.4
    • /
    • pp.35-41
    • /
    • 2001
  • This paper describes Linear Discriminant Analysis and common vector extraction for speech recognition. A voice signal contains psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word spoken by different speakers can sound very different. This property of speech signals makes it very difficult to extract common properties within the same speech class (word or phoneme). Linear-algebra methods like the KLT (Karhunen-Loeve Transformation) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction method suggested by M. Bilginer et al. That method extracts the optimized common vector from the speech signals used for training and achieves 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has some drawbacks: only a limited number of speech signals can be used for training, and the discriminant information among common vectors is not defined. This paper suggests an improved method that reduces the error rate by maximizing the discriminant information among common vectors, and also adds a novel method to normalize the size of the common vector. The results show improved performance of the algorithm and recognition accuracy 2% better than the conventional method.
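The common-vector idea can be sketched as follows: project any training sample of a class onto the orthogonal complement of the subspace spanned by within-class differences; the residual is the part shared by all samples and is the same whichever sample is chosen as reference (a minimal numpy sketch; the function name and data are ours):

```python
import numpy as np

def common_vector(samples):
    """samples: (m, d) feature vectors of one class with m < d.
    Remove from the first sample its component in the 'difference
    subspace' spanned by (x_i - x_0); the remainder is the part
    shared by all training samples of the class."""
    x0 = samples[0]
    diffs = (samples[1:] - x0).T            # (d, m-1) difference subspace
    q, _ = np.linalg.qr(diffs)              # orthonormal basis of that subspace
    return x0 - q @ (q.T @ x0)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 10))            # 3 training samples, 10-dim features
cv_a = common_vector(X)
cv_b = common_vector(X[[1, 0, 2]])          # reference sample swapped
# cv_a and cv_b coincide: the common vector is independent of the reference.
```

The paper's contribution sits on top of this: the plain common vector carries no between-class discriminant information, which the proposed LDA-based refinement adds.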

  • PDF

A Reappraisal of Rural Public Service Location: the Case of Postal Facilities (農村地域의 郵政施設 立地問題)

  • Huh, Woo-Kung
    • Journal of the Korean Geographical Society
    • /
    • v.31 no.1
    • /
    • pp.1-18
    • /
    • 1996
  • This study examines the spatial characteristics of post office patronage in rural areas, in the light of possible future relocations and closures of postal facilities. Most private services have flowed out of small rural central places due to the decrease in supporting population, and consequently only a few public services remain, including government-run post offices, at the Myon seats, the lowest level among rural central places in Korea. The small local population and its further decline undermine the rationale for maintaining such public services in depleted rural areas. To make matters worse, the government recently plans to transform the postal system into a quasi-private, corporate structure. One may fear that the profit-seeking nature of the new postal corporation will inevitably force the closure of many such small rural facilities. The study first analysed nation-wide censuses of post offices for the years 1986 and 1992. The postal services examined are the per capita number of postal stamps and revenue stamps sold, and the letters, parcels, telegrams, and monetary transactions handled at the post offices. It is found that, while the usage of postal services increased substantially throughout the nation during 1986-1992, the increase occurred largely at urban post offices rather than at those in Gun seats (i.e., rural counties), and that the gap in service levels between urban and rural post offices is ever widening. The study further examined the service differentials among post offices within rural counties and found that post offices adjacent to the county (Gun) seats and larger urban centers rendered fewer services than remote rural post offices, indicating that rural residents increasingly tend to patronize larger centers rather than the local Myon seats. At the second stage of the study, questionnaire surveys were conducted in Muju, Kimpo, and Hongsung Guns.
These three counties represent, respectively, the remote, suburban, and intermediate counties of Korea. The analyses of survey data reveal that the postal hinterlands of the county seats extend over much of the nearby Myons, the subdivisions of a Gun. It is also found that the extent of the postal hinterlands of the three counties and the magnitude of patronage differ considerably from each other depending upon topography, population density, and the proximity of the counties to metropolitan centers. The findings suggest a reappraisal of the current flat allocation scheme of public facilities to each rural subdivision throughout the nation. A detailed analysis of the travel behavior of the survey respondents shows that age is the most salient variable distinguishing the activity spaces of rural residents. The activity spaces of older respondents tend to be limited to their Myon, whereas those of younger respondents extend across the Myon boundary toward the central towns and even distant larger cities. The very existence of several activity spaces in rural areas calls for attention in future locational decisions on public facilities. The locational criteria employed by the Ministry of Communication of the Korean government to establish a post office are the size of the hinterland population and the distance from adjacent postal facilities. The present study suggests two additional criteria: the order in the rural central-place hierarchy and the proximity to upper-level centers of that hierarchy. The old and new criteria are complementary in that the former are employed to determine new office locations, whereas the latter are appropriate for decisions on facility relocation and closure.

  • PDF

Understanding the Relationship between Value Co-Creation Mechanism and Firm's Performance based on the Service-Dominant Logic (서비스지배논리하에서 가치공동창출 매커니즘과 기업성과간의 관계에 대한 연구)

  • Nam, Ki-Chan;Kim, Yong-Jin;Yim, Myung-Seong;Lee, Nam-Hee;Jo, Ah-Rha
    • Asia pacific journal of information systems
    • /
    • v.19 no.4
    • /
    • pp.177-200
    • /
    • 2009
  • In the advanced economy, the services industry has become a dominant sector. Evidently, the services sector has grown at a much faster rate than any other. For instance, in developed countries such as the U.S., the proportion of the services sector in GDP is greater than 75%. Even in developing countries, including India and China, the magnitude of the services sector in GDP is rapidly growing. The increasing dependence on services gives rise to new initiatives including service science and service-dominant logic. These new initiatives propose a new theoretical prism to promote a better understanding of the changing economic structure. From the new perspective, service is no longer regarded as a transaction or exchange, but rather as the co-creation of value through interaction among service users, providers, and other stakeholders including partners, external environments, and customer communities. The purpose of this study is the following. First, we review previous literature on service, service innovation, and service systems and integrate the studies based on service-dominant logic. Second, we categorize the ten propositions of service-dominant logic into conceptual propositions and those directly related to service provision; the conceptual propositions are left out when forming the research model. With the selected propositions, we define the research constructs for this study. Third, we develop measurement items for the new service concepts, including service provider network, customer network, value co-creation, and convergence of service with product. We then propose a research model to explain the relationships among the factors that affect the value creation mechanism. Finally, we empirically investigate the effects of these factors on firm performance.
Through the process of this research study, we want to show the value creation mechanism of service systems in which various participants in service provision interact with related parties in a joint effort to create values. To test the proposed hypotheses, we developed measurement items and distributed survey questionnaires to domestic companies. 500 survey questionnaires were distributed and 180 were returned among which 171 were usable. The results of the empirical test can be summarized as the following. First, service providers' network which is to help offer required services to customers is found to affect customer network, while it does not have a significant effect on value co-creation and product-service convergence. Second, customer network, on the other hand, appears to influence both value co-creation and product-service convergence. Third, value co-creation accomplished through the collaboration of service providers and customers is found to have a significant effect on both product-service convergence and firm performance. Finally, product-service convergence appears to affect firm performance. To interpret the results from the value creation mechanism perspective, service provider network well established to support customer network is found to have significant effect on customer network which in turn facilitates value co-creation in service provision and product-service convergence to lead to greater firm performance. The results have some enlightening implications for practitioners. If companies want to transform themselves into service-centered business enterprises, they have to consider the four factors suggested in this study: service provider network, customer network, value co-creation, and product-service convergence. That is, companies becoming a service-oriented organization need to understand what the four factors are and how the factors interact with one another in their business context. 
They then may want to devise a better tool to analyze the value creation mechanism and apply the four factors to their own environment. This research study contributes to the literature in following ways. First, this study is one of the very first empirical studies on the service dominant logic as it has categorized the fundamental propositions into conceptual and empirically testable ones and tested the proposed hypotheses against the data collected through the survey method. Most of the propositions are found to work as Vargo and Lusch have suggested. Second, by providing a testable set of relationships among the research variables, this study may provide policy makers and decision makers with some theoretical grounds for their decision making on what to do with service innovation and management. Finally, this study incorporates the concepts of value co-creation through the interaction between customers and service providers into the proposed research model and empirically tests the validity of the concepts. The results of this study will help establish a value creation mechanism in the service-based economy, which can be used to develop and implement new service provision.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and experimentally demonstrated. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and design flexibility have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation may be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to find a new approach combining the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is essentially a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time in Fourier transforming the encoded parameters on the hologram into the value to be solved. Depending on the speed of the computer, this process can last up to ten minutes.
It would be more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms were employed. By doing so, the initial population contains fewer trial holograms, which is equivalent to reducing the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed; the initial population then contains fewer random holograms, compensated by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure for synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms that are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step. Hence, the verified values in Ref. [2] of parameters such as the probabilities of crossover and mutation, the tournament size, and the crossover block size remain unchanged, aside from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also shown in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
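The GA loop described above (tournament selection, crossover, mutation, with the population partly seeded by ANN outputs) can be sketched generically. The fitness here is a toy one-max count standing in for the Fourier-based diffraction-efficiency cost, and all names and parameter values not stated in the abstract are illustrative:

```python
import random

def genetic_search(fitness, n_bits=64, pop_size=30, generations=200,
                   p_cross=0.75, p_mut=0.001, seed_pop=None):
    """Minimal GA sketch: size-2 tournament selection, one-point crossover,
    per-bit mutation.  seed_pop lets approximately-desired holograms from a
    trained network replace part of the random initial population, as in
    the hybrid scheme."""
    random.seed(1)  # deterministic for illustration
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    if seed_pop:
        pop[:len(seed_pop)] = [list(s) for s in seed_pop]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < p_cross:
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            nxt += [[1 - g if random.random() < p_mut else g for g in child]
                    for child in (p1, p2)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_search(sum)  # one-max: number of 1-bits as a stand-in fitness
```

Seeding `seed_pop` with near-optimal candidates shrinks the search the GA must do, which is the source of the hybrid method's reported speed-up: fewer generations of expensive Fourier-transform fitness evaluations.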

  • PDF

Manufacturing Techniques of a Backje Gilt-Bronze Cap from Bujang-ri Site in Seosan (서산 부장리 백제 금동관모의 제작기법 연구)

  • Chung, Kwang Yong;Lee, Su Hee;Kim, Gyongtaek
    • Korean Journal of Heritage: History & Science
    • /
    • v.39
    • /
    • pp.243-280
    • /
    • 2006
  • At the Bujang-ri Site, Seosan, South Chungcheong Province, around 220 archaeological features had been identified and investigated, including semi-subterranean houses and pits of the Bronze Age and semi-subterranean houses, pits, and burials of the Baekje period. In particular, mound burial No. 5, one of 13 Baekje mound burials, yielded a gilt-bronze cap along with other valuable artifacts and drew international scholarly attention. The gilt-bronze cap from mound burial No. 5 is significant archaeological data not only for the study of Baekje archaeology but also for the study of international affairs and exchange at that time. At the time of exposure, the gilt-bronze cap was already broken into a number of pieces and seriously damaged by corrosion, and hardening and urethane foam were necessary in the process of collecting its pieces. Ahead of the main conservation treatments of the cap, X-ray photographs and CT (computerized tomography) scans were taken in order to examine the interior structure of the cap and to decide on appropriate treatments. Among the five layers identified in the profile of the cap, a textile layer was set between the metal and a layer of paper-birch bark to avoid direct contact between the metal and the bark. Analyses were executed to examine the textile layer and a layer of fibroid material. According to microscopic analysis, while the textile layer consisted of plain fabric with one fold, the simplest of the three kinds of textile structure, the layer of fibroid material was mixed with two or three kinds of fibers. A comparative analysis against a standard sample using FT-IR (Fourier Transform Infrared Spectroscopy) showed that both the textiles and the fabrics were hemp. Analysis of the paper birch showed that the bark consisted of 15 folds. A metallographic microscope, SEM, and WDS were used for the analysis of the microscopic structures of the plated metal pieces.
While amalgam plating was the plating method employed, the thickness of the plated layer, a barometer of plating technique, ranged from $1.72{\mu}m$ to $8.67{\mu}m$. The purity of the gold (Au) used in plating was 98% on average, and less than 1% silver (Ag) was included.