• Title/Summary/Keyword: Information Processing Process


A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.6
    • /
    • pp.445-456
    • /
    • 2005
  • The land surface parameters should be determined with sufficient accuracy, because they play an important role in climate change near the ground. As the surface reflectance presents strong anisotropy, off-nadir viewing results in a strong dependency of observations on the Sun - target - sensor geometry, which contributes to the random noise produced by surface angular effects. The principal objective of the study is to provide a database of accurate surface reflectance, with the angular effects removed, from MODIS 250m reflective channel data over Korea. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250m resolution on a daily basis. The successive analytic processing steps were first performed on a per-pixel basis to remove cloudy pixels. Geometric distortion was corrected by nearest neighbor resampling using a 2nd-order polynomial obtained from the geolocation information of the MODIS data set. In order to correct the surface anisotropy effects, this paper applied the semiempirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm performs an inversion of the kernel-driven model against the angular components, such as viewing zenith angle, solar zenith angle, viewing azimuth angle, and solar azimuth angle, from the reflectance observed by the satellite. We first consider sets of observations over a 31-day period to fit the BRDF model. In the next step, nadir-view reflectance normalization is carried out through modification of the angular components, separated by the BRDF model for each spectral band and each pixel. Modeled reflectance values show good agreement with measured reflectance values, and their overall RMSE (Root Mean Square Error) was about 0.01 (maximum 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images for 2001 over Korea.
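As a rough illustration of the kernel-driven inversion summarized above, the sketch below fits the three kernel coefficients (isotropic, volumetric, geometric) to a multi-day stack of observations by least squares and then evaluates the model at nadir. This is a minimal sketch, not the authors' processing chain: the RossThick form is simplified, the geometric kernel is only a placeholder rather than a full LiSparse kernel, and all angles are assumed to be in radians.

```python
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """Simplified RossThick volumetric kernel (angles in radians)."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def geo_kernel_placeholder(theta_s, theta_v, phi):
    """Placeholder geometric kernel; a real LiSparse kernel is more involved."""
    return np.cos(theta_s) * np.cos(theta_v) * np.cos(phi) - 1.0

def invert_brdf(refl, theta_s, theta_v, phi):
    """Least-squares inversion of (f_iso, f_vol, f_geo) from a 31-day stack."""
    A = np.column_stack([np.ones_like(refl),
                         ross_thick(theta_s, theta_v, phi),
                         geo_kernel_placeholder(theta_s, theta_v, phi)])
    coeffs, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return coeffs

def nadir_normalized(coeffs, theta_s_ref=0.0):
    """Model reflectance normalized to nadir view at a reference solar zenith."""
    f_iso, f_vol, f_geo = coeffs
    return (f_iso
            + f_vol * ross_thick(theta_s_ref, 0.0, 0.0)
            + f_geo * geo_kernel_placeholder(theta_s_ref, 0.0, 0.0))
```

Normalizing a pixel then amounts to replacing its observed reflectance with nadir_normalized(coeffs) for a chosen reference solar zenith angle, per band and per pixel.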

An Analysis of the Moderating Effects of User Ability on the Acceptance of an Internet Shopping Mall (인터넷 쇼핑몰 수용에 있어 사용자 능력의 조절효과 분석)

  • Suh, Kun-Soo
    • Asia pacific journal of information systems
    • /
    • v.18 no.4
    • /
    • pp.27-55
    • /
    • 2008
  • Due to the increasing and intensifying competition in the Internet shopping market, it has been recognized as very important to develop an effective policy and strategy for acquiring loyal customers. For this reason, web site designers need to know whether a new Internet shopping mall (ISM) will be accepted. Researchers have been working on identifying factors that explain and predict user acceptance of an ISM. Some studies, however, revealed inconsistent findings on the antecedents of user acceptance of a website. Lack of consideration for individual differences in user ability is believed to be one of the key reasons for the mixed findings. The elaboration likelihood model (ELM) and several studies have suggested that individual differences in ability play a moderating role in the relationship between the antecedents and user acceptance. Despite the critical role of user ability, little research has examined it in the Internet shopping mall context. The purpose of this study is to develop a user acceptance model that considers the moderating role of user ability in the context of Internet shopping. This study was initiated to see the ability of the technology acceptance model (TAM) to explain the acceptance of a specific ISM. According to TAM, which is one of the most influential models for explaining user acceptance of IT, an intention to use IT is determined by usefulness and ease of use. Given that interaction between the user and the website takes place through the web interface, the decision to accept and continue using an ISM depends on these beliefs. However, TAM neglects the fact that many users will not stick to an ISM until they trust it, even if they think it useful and easy to use. The importance of trust for user acceptance of an ISM has been raised by the relational view, which emphasizes the trust-building process between the user and the ISM and regards the user's trust in the website as a major determinant of user acceptance. The proposed model extends and integrates TAM and the relational view of user acceptance of an ISM by incorporating usefulness, ease of use, and trust. User acceptance is defined as a user's intention to reuse a specific ISM, and user ability is introduced into the model as a moderating variable, defined as the degree of experience, knowledge, and skills regarding Internet shopping sites. The research model proposes that ease of use, usefulness, and trust in an ISM are key determinants of user acceptance. In addition, this paper hypothesizes that the effects of the antecedents (i.e., ease of use, usefulness, and trust) on user acceptance may differ among users. In particular, this paper proposes a moderating effect of a user's ability on the relationship between the antecedents and the user's intention to reuse. The research model with eleven hypotheses was derived and tested through a survey of 470 university students. For each research variable, this paper used measurement items recognized for reliability and widely used in previous research, slightly modified to fit the research context. The reliability and validity of the research variables were tested using Cronbach's alpha and internal consistency reliability (ICR) values, standard factor loadings of the confirmatory factor analysis, and average variance extracted (AVE) values. A LISREL method was used to test the suitability of the research model and its related hypotheses.
Key findings are summarized as follows. First, TAM's two constructs, ease of use and usefulness, directly affect user acceptance. In addition, ease of use indirectly influences user acceptance by affecting trust. This implies that users tend to trust a shopping site and visit it repeatedly when they perceive a specific ISM as easy to use. Accordingly, designing a shopping site that allows users to navigate heuristically and with minimal clicks to find information and products is important for improving the site's trust and acceptance. Usefulness, however, was not found to influence trust. Second, among the three belief constructs (ease of use, usefulness, and trust), trust was empirically supported as the most important determinant of user acceptance. This implies that users require trustworthiness from an Internet shopping site in order to become repeat visitors. Providing a sense of safety and eliminating the anxiety of online shoppers with regard to privacy, security, delivery, and product returns are critically important conditions for acquiring repeat visitors. Hence, in addition to usefulness and ease of use as in TAM, trust should be a fundamental determinant of user acceptance in the context of Internet shopping. Third, the user's ability in using an Internet shopping site played a moderating role. For users with low ability, ease of use was found to be a more important factor in deciding to reuse the shopping mall, whereas usefulness and trust had stronger effects for users with high ability. Applying ELM theory to these findings, we can suggest that experienced and knowledgeable ISM users tend to elaborate on usefulness aspects such as efficient and effective shopping performance and on trust factors such as the ability, benevolence, integrity, and predictability of a shopping site before they become repeat visitors, whereas novice users tend to rely on low-elaboration features such as perceived ease of use. The existence of moderating effects suggests that different individuals evaluate an ISM from different perspectives: expert users are more interested in the outcome of the visit (usefulness) and trustworthiness (trust), while novice visitors evaluate the ISM in a more superficial manner, focusing on the novelty of the site and on other instrumental beliefs (ease of use). This is consistent with the insights proposed by the Heuristic-Systematic model, according to which users act on the principle of minimum effort. Thus, a user first considers an ISM heuristically, focusing on aspects that are easy to process and evaluate (ease of use); when the user has sufficient experience and skills, the user changes to systematic processing and evaluates more complex aspects of the site (its usefulness and trustworthiness). This implies that an ISM has to provide a minimum level of ease of use to make it possible for a user to evaluate its usefulness and trustworthiness; ease of use is a necessary but not sufficient condition for the acceptance and use of an ISM. Overall, the empirical results generally support the proposed model and identify the moderating effect of user ability. More detailed interpretations and implications of the findings are discussed, along with the limitations of this study and directions for future research.
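The study tests moderation with multi-group SEM in LISREL; as a simplified stand-in for that idea, the sketch below probes moderation with interaction terms in an ordinary regression. The column names and file path are hypothetical, not the study's actual items or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data with assumed column names:
# ease, useful, trust, ability, reuse_intention
df = pd.read_csv("ism_survey.csv")

# Interaction terms approximate the moderating role of user ability
# on each antecedent of reuse intention.
model = smf.ols(
    "reuse_intention ~ ease + useful + trust"
    " + ability + ease:ability + useful:ability + trust:ability",
    data=df,
).fit()
print(model.summary())  # significant interaction terms indicate moderation
```

A significant ease:ability term with a negative sign, for example, would mirror the finding that ease of use matters more for low-ability users.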

Sensory Information Processing

  • Yoshimoto, Chiyoshi
    • Journal of Biomedical Engineering Research
    • /
    • v.6 no.2
    • /
    • pp.1-8
    • /
    • 1985
  • The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flow with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF 15-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70±1.32 mmHg/min) compared to CF dialyzers (4.32±0.55 mmHg/min) (p<0.05). However, there was no observable difference in UFR between the two dialyzers. Neither APD nor UFR showed any significant increase with an increasing number of reuses, for up to more than 20 reuses. A substantial number of failures observed in APD (larger than 20 mmHg/min) on the reused dialyzers (2 out of 40 CF and 5 out of 26 C-DAK) were attributed to possible damage to the fibers. The CF 15-11 HFDs which failed the APD test did not show changes in UFR compared to normal dialyzers, indicating that APD is a more sensitive test than UFR for evaluating the integrity of the fibers.

For quantitative measurement of reflected light from a clinical diagnostic strip, a prototype reflectance photometer was designed. The strip loader and cassette were made to obtain more accurate reflectance parameters. The strip was illuminated at 45° through an optical fiber, and the intensity of reflected light was determined at a right angle using a photodiode. The Kubelka-Munk coefficient and reflection optical density were determined at four different wavelengths (500, 550, 570 and 610 nm) for a blood glucose strip. For glucose concentrations higher than 300 mg/dl, a saturation state of absorbance was observed at 500, 550 and 570 nm. The correlation between glucose concentration and the parameters was best at 610 nm.

Radiation-induced fibrosarcoma tumors were grown on the flanks of C3H mice. The mice were divided into two groups. One group was injected with Photofrin II intravenously (2.5 mg/kg body weight); the other group received no Photofrin II. Mice from both groups were irradiated for approximately 15 minutes at 100, 300, or 500 mW/cm2 with argon (488 nm/514.5 nm), dye (628 nm) and gold vapor (pulsed 628 nm) laser light. The photosensitizer behaved as an added absorber. Under our experimental conditions, the presence of Photofrin II increased surface temperature by at least 40%, and the temperature rise due to 300 mW/cm2 irradiation exceeded values for hyperthermia.
Light and temperature distributions with depth were estimated by a computer model. The model demonstrated the influence of wavelength on the thermal process and proved to be a valuable tool for investigating the internal temperature rise.

We investigated the structural geometry of thirty-eight Korean femurs. The purpose of this study was to identify major geometrical differences between Korean femurs and others that we believe belong to Caucasians, so that we could gain insights into femoral component designs that fit Asians, including Koreans. We utilized computerized tomography (CT) images of femurs extracted from cadavers. The CT images were transformed into bitmap data using a film scanner and then analyzed using commercially available software called Image v.1.0 and a Macintosh IIci computer. The resulting data were compared with already published data. The major results show that the geometry of the Korean femurs is significantly different from that of Caucasians: (1) the anteversion angle and the canal flare index are greater by approximately 8° and 0.5, respectively, (2) the shape of the isthmus cross section is more round, and (3) the distance between the lesser trochanter and the proximal border of the isthmus is shorter by about 15 mm. The results suggest that the femoral component suitable for Asians should be different from the currently used components designed and manufactured mostly by European or American companies.

It is well known that the nonlinear propagation characteristics of a wave in tissue may give very useful information for medical diagnosis. In this paper, a new method to detect the nonlinear propagation characteristics of internal vibration in tissue under low frequency mechanical vibration, using bispectral analysis, is proposed. In the method, a low frequency vibration of f0 (= 100 Hz) is applied on the surface of the object, and the waveform of the internal vibration x(t) is measured from the Doppler frequency modulation of simultaneously transmitted probing ultrasonic waves. Then, the bispectra of the signal x(t) at the frequencies (f0, f0) and (f0, 2f0) are calculated to estimate the nonlinear propagation characteristics as their magnitude ratio, where, since the bispectrum is free from Gaussian additive noise, we can obtain the value with a high S/N. A basic experimental system was constructed using 3.0 MHz probing ultrasonic waves, and several experiments were carried out on phantoms. The results show the superiority of the proposed method over the conventional method using the power spectrum, and also its usefulness for tissue characterization.

This paper describes the implementation of computerized radial pulse diagnosis with the aid of a clinical expert. On this basis, we composed a radial pulse diagnosis system for Korean traditional medicine. The system is composed of a radial pulse wave detection system and a radial pulse diagnosis system. With the detection system, we detected the Inyoung and Cheongu radial pulse waves and processed them. We then obtained the characteristic parameters of the radial pulse wave and quantified them according to the method of Inyoung-Cheongu comparison radial pulse diagnosis. We defined the judgement standard of the radial pulse diagnosis system and confirmed the possibility of realizing automatic radial pulse diagnosis in Korean traditional medicine.
Microspheres are expected to be applied to biomedical areas such as solid-phase immunoassays, drug delivery systems, and immunomagnetic cell separation. To synthesize microspheres for biomedical application, a "two stage shot growth method" was developed. The uniformity ratio of the synthesized microspheres was always smaller than 1.05, and the surface charge density (or the number of ionizable functional groups) of the microspheres synthesized by the "two stage shot growth method" was 6~13 times higher than that of microspheres synthesized by conventional seeded batch copolymerization. As a preliminary step toward biomedical application, adsorption experiments of bovine albumin on the microspheres were carried out under various conditions. The maximum adsorbed amount was obtained in the neighborhood of pH 4.5. Since the isoelectric point of bovine albumin is pH 5.0, the experimental result shows a shift toward the acidic region. The adsorption isotherm was obtained; the plateau region was always reached at 2.0 g/L (bulk concentration of bovine albumin). The effect of the kind and amount of surface functional groups was also examined.

A medical image workstation was developed using multimedia techniques. The system, based on a PC-486DX, was designed to acquire medical images produced by medical imaging instruments and related audio information, that is, doctors' reporting results. Input information was processed and analyzed, and the results were presented in the form of graphs and animation. All the information in the system was hierarchically related, with the image at the apex. Processing and analysis algorithms were implemented so that diagnostic accuracy could be improved. The diagnostic information can be transferred for patient diagnosis through a LAN (local area network).

In the conventional infrared imaging system, complex infrared lens systems are usually used for directing collimated narrow infrared beams into a high speed 2-dimensional optic scanner. In this paper, a simple reflective infrared optic system with a 2-dimensional optic scanner is proposed for the realization of a medical infrared thermography system. It has been experimentally proven that the infrared thermography system composed of the proposed optic system has a temperature resolution of 0.1°C with a spatial resolution of 1 mrad, an image matrix size of 256 × 240, and an imaging time of 4 seconds.

In this paper, MIIS (Medical Image Information System) has been designed and implemented using the INGRES RDBMS, based on a client/server architecture. The implemented system allows users to register and retrieve patient information, medical images and diagnostic reports. It also provides the function to display this information on workstation windows simultaneously using the designed menu-driven graphical user interface. Medical image compression/decompression techniques are implemented and integrated into the medical image database system for efficient data storage and fast access through the network.

In this paper, a computerized BEAM was implemented for the spatial domain analysis of EEG. Transformation from temporal summation to two-dimensional mappings is performed by a four-nearest-point interpolation method. There are two methods of representing the BEAM.
One is a dot density method, which classifies the brain electrical potential into 9 levels by the dot density of gray levels, and the other is a colour method, which classifies it into 12 levels by red-green colours. With this BEAM, the instantaneous change and the average energy distribution of brain electrical activity over any arbitrary time interval can be observed and analyzed easily. In the frequency domain, the distribution of the energy spectrum of a specific band can easily distinguish normality from abnormality.

A laboratory information system (LIS) is a key tool for managing laboratory data in clinical pathology. Our department has developed an information system for routine hematology using a down-sized computer system. We used an IBM 486 compatible PC with 16 MB main memory, a 210 MB hard disk drive, 9 RS-232C ports and a 24-pin dot printer. The operating system and database management system were SCO UNIX and SCO foxbase, respectively. For program development, we used the Xbase language provided by SCO foxbase; the C language was used for interface purposes. To make the system user friendly, pull-down menus were used. The system is connected to our hospital information system via an application program interface (API), so information related to patients and request details is automatically transmitted to our computer. Our system interfaces with two complete blood count analyzers (Sysmex NE-8000 and Coulter STKS) for unidirectional data transmission from analyzer to computer. The authors suggest that this system based on a down-sized computer could provide a progressive approach to a total LIS based on a local area network, and that the implemented system could serve as a model for other hospitals' routine hematology LIS.

To develop an artificial bone substitute that is gradually degraded and replaced by regenerated natural bone, the authors designed a composite consisting of calcium phosphate and collagen. For use as the structural matrix of the composite, collagen was purified from human umbilical cord. The obtained collagen was treated with pepsin to remove telopeptides, and finally an immune-free atelocollagen was produced. The cross-linked atelocollagen was highly resistant to collagenase-induced collagenolysis and demonstrated improved tensile strength.

This paper is a study on the design of adaptive filters for QRS complex detection. We propose a simple adaptive algorithm to increase the noise cancellation capability in QRS complex detection with a two-stage adaptive filter. At the first stage, background noise is removed, and at the next stage, only the spectrum of the QRS complex components is passed. The two adaptive filters can keep track of changes in both the noise and the QRS complex. Each adaptive filter consists of a prediction error filter and an FIR filter; the impulse response of the FIR filter uses the coefficients of the prediction error filter. The detection rates for records 105 and 108 of the MIT/BIH database were 99.3% and 97.4%, respectively. (A sketch of such a prediction error stage is given after this entry.)

To develop an artificial bone substitute that is gradually degraded and replaced by regenerated natural bone, the authors designed and produced a composite consisting of calcium phosphate and collagen. Pepsin-treated type I atelocollagen of human umbilical cord origin was used as the structural matrix, in which sintered or non-sintered carbonate apatite was encapsulated to form an inorganic-organic composite.
By cross-linking the atelocollagen with UV irradiation, the resistance to both compressive and tensile loading was increased, and collagen degradation by collagenase-induced collagenolysis was also decreased.

We have developed a monoleaflet polymer valve as an inexpensive and viable alternative, especially for short-term use in a ventricular assist device or total artificial heart. The frame and leaflet of the polymer valve were made from polyurethane. To evaluate the hemodynamic performance of the polymer valve, a comparative in vitro study of flow dynamics past the polymer valve and a St. Jude Medical prosthetic valve under physiological pulsatile flow conditions was made. Comparisons between the valves were made on the transvalvular pressure drop, regurgitation volume and maximum valve opening area. The polymer valve showed a smaller regurgitation volume and transvalvular pressure drop compared to the mechanical valve at higher heart rates. The results showed that the functional characteristics of the polymer valve compared favorably with those of the mechanical valve at higher heart rates.

The explosive evaporative removal process of biological tissue by absorption of a CW laser has been simulated using gelatin and a multimode Nd:YAG laser. Because the point of maximum temperature of laser-irradiated gelatin lies below the surface due to surface cooling, evaporation at the boiling temperature occurs explosively from below the surface. The important parameters of this process are the conduction loss relative to laser power absorption (defined as the conduction-to-laser power parameter, Nk), the convection heat transfer at the surface relative to conduction loss (defined as Bi), the dimensionless extinction coefficient (defined as Br), and the dimensionless irradiation time (defined as Fo). The dependence of Fo on Nk and Bi has been observed by experiment, and the results have been compared with numerical results obtained by solving a 2-dimensional conduction equation. Fo and the explosion depth (from the surface to the point of maximum temperature) increase when Nk and Bi are increased. To find the minimum laser power for the explosive evaporative removal process, a steady-state analysis has also been made. The limit of Nk to induce evaporative removal, which is proportional to the inverse of the laser power, has been obtained.

N1 and N2 gross neural action potentials were measured from the round window of the guinea pig cochlea at the onset of acoustic stimuli. N1-N2 audiograms were made by regulating stimulus intensities in order to produce constant N1-N2 potentials as criteria for different input tone pip frequencies. The lowest threshold was measured with an input tone pip of 15 dB SPL in intensity and 12 kHz in frequency when the animal was in normal physiological condition. The procedure of the experimental measurements is explained in detail. This experimental approach is very useful for the investigation of cochlear function; both nonlinear and active functions of the cochlea can be monitored by N1-N2 audiograms.

In electrical impedance tomography (EIT), we use boundary current and voltage measurements to provide information about the cross-sectional distribution of electrical impedance or resistivity. One of the major problems in EIT has been the inaccessibility of internal voltage or current data for finding the internal impedance values.
We propose a new image reconstruction method using internal current density data measured by NMR. We obtained a two-dimensional current density distribution within a phantom by processing the real and imaginary MR images from a 4.7 T NMR machine. We implemented a resistivity image reconstruction algorithm using the finite element method and a sensitivity matrix, and we present computer simulation results of the image reconstruction algorithm and the future direction of the research.

A new digital image analysis technique for the discrimination of cancer cells is presented in this paper. The object images were thyroid gland cell images diagnosed as normal or abnormal (two types of abnormal: follicular neoplastic cells and papillary neoplastic cells). Using the proposed region segmentation algorithm, the cells were segmented into nuclei. Sixteen feature parameters were used to calculate the features of each nucleus. As a consequence of using the dominant feature parameter method proposed in this paper, a discrimination rate of 91.11% was obtained for thyroid gland cells.

An electrical stimulator was designed to induce locomotion in paraplegic patients with central nervous system injury. Optimal stimulus parameters, which can minimize muscle fatigue and achieve effective muscle contraction, were determined in slow and fast muscles in Sprague-Dawley rats. The stimulus patterns of our stimulator were designed to simulate the electromyographic activity monitored during locomotion of normal subjects. Muscle types of the lower extremity were classified according to their mechanical property of contraction: slow muscle (soleus m.) and fast muscle (medial gastrocnemius m., rectus femoris m., vastus lateralis m.). The optimal parameters of electrical stimulation for slow muscles were a 20 Hz, 0.2 ms square pulse; for fast muscles, a 40 Hz, 0.3 ms square pulse was optimal to produce repeated contractions. A higher stimulus intensity was required when synergistic muscles were stimulated simultaneously than when they were stimulated individually. Electrical stimulation for each muscle was designed to generate bipedal locomotion, so that individual muscles alternate contraction and relaxation to simulate the stance and swing phases. A portable 16-channel electrical stimulator built around a microprocessor was constructed and applied to paraplegic patients with lumbar cord injury. The electrical stimulator partially restored gait function in these patients.

Two-dimensional modelling of cochlear biomechanics is presented in this paper. The Laplace partial differential equation, which represents the fluid mechanics of the cochlea, has been transformed into a two-dimensional electrical transmission line. The procedure of this transformation is explained in detail, and a comparison between one- and two-dimensional models is also presented. This electrical modelling of the basilar membrane (BM) is clearly useful for the next step, the development of active elements, which are essential in producing the sharp tuning of the BM. This paper shows that the two-dimensional model is qualitatively better than the one-dimensional model in both the amplitude and phase responses of the BM displacement. The present model covers only the frequency response; however, because the model is electrical, the two-dimensional transmission line model can be extended to the time response without any difficulty.
A method has been proposed for the fully automatic detection of the left ventricular endocardial boundary in 2D short-axis echocardiograms using a geometric model. The procedure has three distinct stages. First, the initial center is estimated by an initial center estimation algorithm applied to a decimated image. Second, the center estimation algorithm is applied to the original image and the best-fit elliptic model is estimated. Third, the best-fit boundary is detected by a cost function based on the best-fit elliptic model. The proposed method shows effective results without manual intervention by a human operator.

An intelligent trajectory control method that controls the moving direction and average velocity of a prosthetic arm is proposed, based on pattern recognition and force estimation using EMG signals. We also propose a real-time trajectory planning method which generates continuous acceleration paths using 3-stage linear filters to minimize the impact on the human body induced by arm motions and to reduce muscle fatigue. We use a combination of an MLP and a fuzzy filter for pattern recognition to estimate the direction of a muscle, and Hogan's method for force estimation. EMG signals are acquired using an amputation simulator and 2-dimensional joystick motion. The simulation results of the proposed prosthetic arm control system using the EMG signals show that the arm effectively follows the desired trajectory according to the estimated force and direction of muscle movements.

A new neural network architecture for the recognition of patterns from images is proposed, partially based on the results of physiological studies. The proposed network is composed of multiple layers, and the nerve cells in each layer are connected by spatial filters which approximate receptive fields in optic nerve fields. In the proposed method, pattern recognition for complicated images is carried out using global features as well as local features such as lines and end-points. A new method of generating matched filters representing global features is proposed for this network.

An implementation scheme for a magnetic nerve stimulator using a switching mode power supply is proposed. By using a switching mode power supply rather than a conventional linear power supply for charging the high voltage capacitors, the weight and size of the magnetic nerve stimulator can be considerably reduced. The maximum output voltage of the developed magnetic nerve stimulator using the switching mode power supply is 3,000 volts and the switching time is about 100 msec. Experimental results of human nerve stimulation using the developed stimulator are presented.

In this paper, we describe the design methodology and specifications of the developed module-based bedside monitors for patient monitoring. The bedside monitor consists of a main unit and module cases with various parameter modules. The main unit includes a 12.1" TFT color LCD, a main CPU board, and peripherals such as a module controller, Ethernet LAN card, video card, rotate/push button controller, etc. The main unit can connect at most three module cases, each of which can accommodate up to 7 parameter modules. These include modules for electrocardiograph, respiration, invasive blood pressure, noninvasive blood pressure, temperature, and SpO2 with plethysmograph.

  • PDF
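For the two-stage adaptive QRS filter mentioned in the entry above, the first stage can be realized as an adaptive prediction error (whitening) filter. The sketch below is a generic normalized-LMS version with assumed order and step-size values; it is not the authors' filter, and the second FIR stage built from these coefficients is omitted.

```python
import numpy as np

def prediction_error_filter(x, order=8, mu=0.5, eps=1e-8):
    """Adaptive linear predictor; its error output whitens background noise."""
    x = np.asarray(x, dtype=float)
    w = np.zeros(order)
    e = np.zeros_like(x)
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]                  # most recent samples first
        e[n] = x[n] - w @ past                       # prediction error
        w += mu * e[n] * past / (eps + past @ past)  # normalized LMS update
    return e, w                                      # whitened signal, coefficients
```

The whitened output e would then feed the second, QRS-band FIR stage described in the abstract.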

Curvature stroke modeling for the recognition of on-line cursive korean characters (온라인 흘림체 한글 인식을 위한 곡률획 모델링 기법)

  • 전병환;김무영;김창수;박강령;김재희
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.11
    • /
    • pp.140-149
    • /
    • 1996
  • Cursive characters are written on an economical principle that reduces the motion of the pen within the limit of distinguishability between characters. That is, the pen is not lifted up to move to the next stroke, the pen is hardly moved at all, or two connected strokes change their shapes into a similar, simpler shape that is easier to write. For these reasons, strokes and Korean alphabets are not only easily changed but also difficult to split. In this paper, we propose a curvature stroke modeling method for splitting and matching using a structural primitive. A curvature stroke is defined as a substroke that does not change its curvature. Input strokes handwritten in a cursive style are split into a sequence of curvature strokes by segmenting at the points where the direction of rotation changes, where a sudden change of direction occurs, and where an excessive rotation occurs (a splitting sketch is given after this entry). Each reference for the Korean alphabets is handwritten in a printed style and saved as a sequence of curvature strokes generated by the splitting process, and a merging process is used to generate various sequences of curvature strokes for matching. It is also considered that imaginary strokes can be written or omitted. By using a curvature stroke as the unit of recognition, redundant splitting points in input characters are effectively reduced, and exact matching is possible by generating a reference curvature stroke consisting of parts of two adjacent Korean alphabets, even when the connecting points between Korean alphabets are not split. The results showed a recognition rate of 83.6% for the first candidate and a processing time of 0.99 sec./character (CPU clock: 66 MHz).

  • PDF
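A minimal sketch of the curvature-stroke splitting idea referenced in the entry above: walk along the pen trajectory, compute signed turning angles, and cut where the rotation direction flips, where the direction changes abruptly, or where the accumulated rotation becomes excessive. The thresholds are assumptions, not the paper's values.

```python
import numpy as np

def split_curvature_strokes(points, sharp_turn_deg=60.0, max_rotation_deg=150.0):
    """Split an on-line stroke (Nx2 array of pen points) into curvature strokes."""
    pts = np.asarray(points, dtype=float)
    vecs = np.diff(pts, axis=0)
    angles = np.degrees(np.arctan2(vecs[:, 1], vecs[:, 0]))
    turns = (np.diff(angles) + 180.0) % 360.0 - 180.0   # signed turning angles

    cuts, cumulative, prev_sign = [0], 0.0, 0
    for i, t in enumerate(turns, start=1):               # turn at point index i
        sign = int(np.sign(t)) if abs(t) > 1e-6 else prev_sign
        if (prev_sign and sign and sign != prev_sign) \
           or abs(t) > sharp_turn_deg \
           or abs(cumulative) + abs(t) > max_rotation_deg:
            cuts.append(i)                                # start a new curvature stroke
            cumulative = 0.0
        else:
            cumulative += t
        prev_sign = sign
    cuts.append(len(pts) - 1)
    return [pts[a:b + 1] for a, b in zip(cuts, cuts[1:]) if b > a]
```

Each returned segment then plays the role of one curvature stroke, the recognition unit used for matching against the reference sequences.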

Analysis of the 'Problem Solving and Invention' Units of Technology and Home Economics 1 Textbook (기술.가정 1 교과서 '문제해결과 발명' 단원 분석)

  • Jung, Jin Woo
    • 대한공업교육학회지
    • /
    • v.38 no.1
    • /
    • pp.49-67
    • /
    • 2013
  • The purpose of this study is to analyze the external systems and the 'problem solving and invention' units of the middle school Technology and Home Economics 1 textbooks of the revised 2011 national curriculum, in an effort to provide some information on the content system of invention education in technology class, as invention education was provided as part of a regular subject for the first time. The findings of the study were as follows. First, the 'Technology and Inventions' chapter of the Technology and Home Economics 1 textbooks occupied a 10-18% share, with the 'Problem Solving and Invention' subchapter taking up 6.7-29% of the textbooks. Second, for most textbooks, 'Technological Problem Solving', 'Idea Generation', 'Multi-dimensional Projection Method', 'Expansive Thought-Processing Methodology', 'Converging Thought Methodology' and 'Invention in Everyday Lives' were included as main contents based on the accomplishment criteria presented in the curriculum interpretation documents. Third, the detailed structures were generally made up as follows: Introduction (Broad Chapter Title, Subchapter Table of Contents, Introduction, Subchapter Title, Study Objectives, Open Thinking); Development (Unit Title, Thinking Ahead, Core Terms, Main Text, Study Helper, Activities, Research Exercises, Supplemental Readings, In-depth Study Topics, Technology in Everyday Lives, Reading Topics, Discussion Topics, and Career Helpers); and Summary (Subchapter Summary, Study Summary, Terms Summary, Writing Follow-up, Self Review, Broad Chapter Evaluation). Fourth, based on the analysis of figures included, photographs had the largest share, followed by figures, tables, and graphs. The photos were used to illustrate various inventions, invention methodologies, and exercise activities, while figures were included to depict the contents of the main text, and the tables to assist in the preparation of process diagrams or materials lists. Fifth, based on the analysis of content weights, greater weight was placed on 'Inventions and Thoughts' and 'Invention Experiment Activities', while the 'Understanding Inventions' and 'Invention and Patents' chapters did not involve much text. Sixth, based on the analysis of content presentation methods, most textbooks combined figures, tables, illustrations and texts to discuss the topics. Based on the above study results, we suggest the following: first, a consistent education curriculum should be developed on the topic of invention; and second, a more precise and systematic analysis of textbooks should be performed.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. Owing to this phenomenon, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, increasing interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually falling and data analysis technology is spreading. As a result, big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing; in particular, a lot of attention is focused on using text data. The emergence of new platforms and techniques using the web brings about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document set; thus, it is essential to analyze the entire document set at once to identify the topic of each document. This makes the analysis process time-consuming when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: the processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide and conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, because documents in each location can be analyzed without first combining all of the documents to be analyzed. However, despite many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology should be established; that is, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other studies dealing with topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First of all, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) that consists of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. In addition, through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire document set. We also propose a reasonable method for comparing the results of both approaches.
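A small sketch of the divide-and-conquer mapping described above, assuming scikit-learn LDA: topics are fitted on a reduced global sample and on each local set, and every local topic is mapped to the most similar global topic by cosine similarity of the topic-word distributions. The corpus file, split scheme, and topic counts are placeholders, not the paper's experimental setup.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def fit_lda_topics(docs, vectorizer, n_topics=10, seed=0):
    """Fit LDA and return row-normalized topic-word distributions."""
    X = vectorizer.transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed).fit(X)
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Placeholder corpus: one document per line in a hypothetical text file.
corpus = open("articles.txt", encoding="utf-8").read().splitlines()
local_sets = [corpus[i::4] for i in range(4)]                 # assumed 4-way split
rgs = [doc for local in local_sets for doc in local[::10]]    # delegate documents

vec = CountVectorizer(max_features=5000).fit(corpus)
global_topics = fit_lda_topics(rgs, vec)                      # RGS (global) topics

for local_docs in local_sets:
    local_topics = fit_lda_topics(local_docs, vec)
    # Map each local topic to its most similar RGS topic.
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print(mapping)
```

Agreement between a document's local topic assignment and the mapped global topic can then serve as the accuracy check the abstract describes.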

The Research on Online Game Hedonic Experience - Focusing on Moderate Effect of Perceived Complexity - (온라인 게임에서의 쾌락적 경험에 관한 연구 - 지각된 복잡성의 조절효과를 중심으로 -)

  • Lee, Jong-Ho;Jung, Yun-Hee
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.147-187
    • /
    • 2008
  • Online game researchers focus on flow and the factors influencing flow. Flow is conceptualized as an optimal experience state and is useful for explaining the game experience online. Many game studies have focused on customer loyalty and flow in playing online games; in showing the specific game experience, however, they do not examine the multidimensional experience process. Flow is not a construct that shows the absorbing process, but a construct that shows the absorbing result; hence, flow is not adequate for examining the multidimensional experience of games. Online gaming is a form of hedonic consumption. Hedonic consumption is a relatively new field of study in consumer research, and it explores the consumption experience from an experiential view (Hirschman and Holbrook 1982). Hedonic consumption explores the consumption experience not as an information processing event but from a phenomenological or experiential view, which is a primarily subjective state. It includes various playful leisure activities, sensory pleasures, daydreams, esthetic enjoyment, and emotional responses. In studying the online game experience, therefore, it is appropriate to take the experiential view of hedonic consumption. The objective of this paper was to make up for gaps in our understanding of the online game experience by developing a framework that provides better insight into the hedonic experience of online games. We developed this framework by integrating and extending existing research in marketing, online games and hedonic responses. We then discussed several expectations for this framework, and concluded by discussing the results of this study, providing general recommendations and directions for future research. In hedonic response research, Lacher's research (1994) and Jongho Lee and Yunhee Jung's research (2005; 2006) served as the fundamental starting point of our research. A common element in this extended research is the repeated identification of four hedonic responses: sensory response, imaginal response, emotional response, and analytic response. The validity of these four constructs is found in research on music (Lacher 1994) and movies (Jongho Lee and Yunhee Jung 2005; 2006). However, previous research on hedonic responses did not show that the constructs of hedonic response have cause-effect relations. Also, although hedonic responses can differ by stimulus properties, the effects of stimulus properties were not shown. To fill this gap, while largely building on Lacher's (1994) research and Jongho Lee and Yunhee Jung's (2005, 2006) research, we made several important adaptations with the primary goal of bringing the model into online games and compensating for the limitations of previous research. We maintained the same constructs proposed by Lacher et al. (1994): sensory response, imaginal response, emotional response, and analytical response. In this study, the sensory response is typified by physical movement (Yingling 1962), the imaginal response is typified by images, memories, or situations that the game evokes (Myers 1914), the emotional response represents the feelings one experiences when playing the game, such as pleasure, arousal, and dominance, and finally, the analytical response is the cognition seeking the game player engages in while playing (Myers 1912). However, this paper has several important differences. We attempted to suggest a multi-dimensional experience process in online games and cause-effect relations among hedonic responses. We also investigated the moderating effects of perceived complexity.
Previous studies about hedonic responses did not show the influence of stimulus properties. According to Berlyne's theory (1960, 1974) of aesthetic response, perceived complexity is an important construct because it affects pleasure. Pleasure in response to an object will increase with increased complexity, up to an optimal level; beyond that level, pleasure decreases as complexity increases further. Therefore, we expected perceived complexity to influence hedonic responses in the game experience. We discussed the rationale for these suggested changes and the assumptions of the resulting framework, and developed some expectations based on its application in the online game context. In the first stage of the methodology, questions were developed to measure the constructs. We constructed a survey measuring our theoretical constructs based on a combination of sources, including Yingling (1962), Hargreaves (1962), Lacher (1994), Jongho Lee and Yunhee Jung (2005, 2006), Mehrabian and Russell (1974), and Pucely et al. (1987). Based on comments received in the pretest, we made several revisions to arrive at our final survey. We investigated the proposed framework through a convenience sample, soliciting participation in a self-report survey from various respondents with different levels of knowledge. All respondents participated, to different degrees, in these habitually practiced activities and received no compensation for their participation. Questionnaires were distributed to graduates, and 381 completed questionnaires were used for analysis. The sample consisted of more men (n=225) than women (n=156). For measurement, the study used multi-item scales based on previous studies. We analyzed the data using structural equation modeling (LISREL VIII; Joreskog and Sorbom 1993). First, we used the entire sample (n=381) to refine the measures and test their convergent and discriminant validity. The evidence from both the factor analysis and the reliability analysis supports that the scales exhibit internal consistency and construct validity. Second, we tested the hypothesized structural model, divided the sample into two different complexity groups, and analyzed the hypothesized structural model for each group. The analysis suggests that hedonic responses play roles different from those hypothesized in our study. The results indicate that the hedonic responses (sensory response, imaginal response, emotional response, and analytical response) are positively related to respondents' level of game satisfaction, and game satisfaction is related to higher levels of game loyalty. Additionally, we found that perceived complexity is important to the online game experience: the importance of each hedonic response differs by perceived game complexity. Understanding the role of perceived complexity in hedonic responses enables a better understanding of the underlying mechanisms of the game experience. If a game has high complexity, the analytical response becomes the important response, so game producers or marketers have to consider more cognitive stimuli. Conversely, if a game has low complexity, the sensory response becomes the important one. Finally, we discussed several limitations of our study, suggested directions for future research, and concluded with a discussion of managerial implications. Our study provides managers with a basis for game strategies.

  • PDF

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA (XMLA를 사용한 OLAP과 데이타 마이닝 분석이 가능한 리포팅 툴의 구현)

  • Choe, Jee-Woong;Kim, Myung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.3
    • /
    • pp.154-166
    • /
    • 2009
  • Database query and reporting tools, OLAP tools and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating and analyzing data produced from business operation activities and provides enterprise users with access to the results. Traditional reporting tools have the advantage of creating sophisticated dynamic reports including SQL query result sets, which look like documents produced by word processors, and of publishing the reports to the Web, but their data sources are limited to RDBMSs. On the other hand, OLAP tools and data mining tools each provide powerful information analysis functions in their own way, but their built-in visualization components for analysis results are limited to tables or a few kinds of charts. Thus, this paper presents a system that integrates the three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools have only a query editor for generating SQL statements to retrieve data from an RDBMS. The reporting tool presented in this paper, however, can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests are added to the tool. Traditional systems produce all documents on the server side; this structure lets reporting tools avoid repeatedly generating documents when many clients access the same dynamic document. But because this system targets a small number of users who generate documents for data analysis, the tool generates documents on the client side. Therefore, the tool has a processing mechanism to deal with large amounts of data despite the limited memory capacity of the client-side report viewer. The reporting tool also has a data structure for integrating data from the three kinds of data sources into one document. Finally, most traditional front-end BI tools depend on the data source architecture of a specific vendor. To overcome this problem, the system uses XMLA, a web-service-based protocol, to access OLAP and data mining data sources from various vendors.
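To make the XMLA access path concrete, the sketch below posts a SOAP Execute request carrying an MDX statement to an OLAP server, which is roughly what such a reporting tool would do behind its query editor. The endpoint URL, catalog, cube, and measure names are hypothetical.

```python
import requests

XMLA_URL = "http://olap.example.com/xmla"   # hypothetical XMLA endpoint

SOAP_EXECUTE = """<?xml version="1.0" encoding="utf-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command>
        <Statement>
          SELECT [Measures].[Sales Amount] ON COLUMNS,
                 [Date].[Year].Members ON ROWS
          FROM [SalesCube]
        </Statement>
      </Command>
      <Properties>
        <PropertyList>
          <Catalog>SalesDB</Catalog>
          <Format>Multidimensional</Format>
        </PropertyList>
      </Properties>
    </Execute>
  </Body>
</Envelope>"""

# Send the Execute request; the MDDataSet XML in resp.text would then be
# parsed by the reporting tool and laid out in the client-side document.
resp = requests.post(XMLA_URL, data=SOAP_EXECUTE.encode("utf-8"),
                     headers={"Content-Type": "text/xml"})
print(resp.status_code)
```

Because the same SOAP shape works against any XMLA-compliant server, this is what frees the tool from a single vendor's data source architecture.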

Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.4
    • /
    • pp.144-159
    • /
    • 2014
  • Techniques for making a single panoramic image from multiple pictures are widely studied in many areas such as computer vision and computer graphics. A panoramic image can be applied in various fields like virtual reality and robot vision, which require wide-angle shots, as a useful way to overcome limitations such as the viewing angle, resolution, and internal information of an image taken from a single camera. It is particularly meaningful in that a panoramic image usually provides a stronger sense of immersion than a plain image. Although there are many ways to build a panoramic image, most of them extract feature points and matching points from each image. In addition, those methods use the RANSAC (RANdom SAmple Consensus) algorithm on the matching points and a homography matrix to transform the images. The SURF (Speeded Up Robust Features) algorithm, which is used in this paper to extract feature points, uses an image's grayscale information and local spatial information. SURF is widely used since it is very robust to changes in image size and view-point and, additionally, faster than the SIFT (Scale Invariant Feature Transform) algorithm. SURF has the shortcoming that errors made when extracting feature points decrease the RANSAC algorithm's speed and, as a result, may increase the CPU usage rate. Errors in detecting matching points can be a critical reason for degrading the panoramic image's accuracy and clarity. In this paper, in order to minimize the errors in extracting matching points, we used the RGB pixel values of the 3×3 region around each matching point's coordinates to perform an intermediate filtering process that removes wrong matching points. We also present analysis and evaluation results on the improved speed of panorama image production, the CPU usage rate, the reduction rate of extracted matching points, and accuracy.
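A condensed sketch of the pipeline discussed above, assuming an OpenCV contrib build that still exposes SURF: detect and match SURF features, pre-filter matches by comparing the mean RGB of the 3×3 neighbourhoods around the matched coordinates (a simple stand-in for the paper's intermediate filtering), then estimate the homography with RANSAC and warp. Image paths and the color-difference threshold are assumptions.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")    # hypothetical input images
img2 = cv2.imread("right.jpg")

# SURF lives in cv2.xfeatures2d (opencv-contrib); ORB or SIFT can stand in
# if it is unavailable in the installed build.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
k1, d1 = surf.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
k2, d2 = surf.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)

def patch_mean(img, pt):
    """Mean BGR value of the 3x3 neighbourhood around a keypoint coordinate."""
    x, y = int(round(pt[0])), int(round(pt[1]))
    return img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].reshape(-1, 3).mean(axis=0)

# Pre-filter: drop matches whose 3x3 colour neighbourhoods differ too much
# before handing the points to RANSAC (threshold of 40 is an assumption).
filtered = [m for m in matches
            if np.linalg.norm(patch_mean(img1, k1[m.queryIdx].pt)
                              - patch_mean(img2, k2[m.trainIdx].pt)) < 40]

src = np.float32([k1[m.queryIdx].pt for m in filtered]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in filtered]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the first image into the second image's frame and paste the second image.
pano = cv2.warpPerspective(img1, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
pano[0:img2.shape[0], 0:img2.shape[1]] = img2
```

Fewer wrong correspondences entering findHomography means fewer RANSAC iterations, which is the speed and CPU-usage effect the paper evaluates.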

An Interface Technique for Avatar-Object Behavior Control using Layered Behavior Script Representation (계층적 행위 스크립트 표현을 통한 아바타-객체 행위 제어를 위한 인터페이스 기법)

  • Choi Seung-Hyuk;Kim Jae-Kyung;Lim Soon-Bum;Choy Yoon-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.9
    • /
    • pp.751-775
    • /
    • 2006
  • In this paper, we suggest an avatar control technique using high-level behaviors. We separate behaviors into three levels according to their level of abstraction and define layered scripts. Layered scripts provide the user with control over avatar behaviors at the abstract level and with the reusability of scripts. As the 3D environment gets more complicated, the number of required avatar behaviors increases accordingly, and controlling the avatar-object behaviors becomes even more challenging. To solve this problem, we embed avatar behaviors into each environment object, which describes how the avatar can interact with the object. Even with a large number of environment objects, our system can manage avatar-object interactions in an object-oriented manner. Finally, we suggest an easy-to-use user interface technique that allows the user to control avatars through context menus. Using the avatar behavior information that is embedded into the object, the system can analyze the object state and filter the behaviors; as a result, the context menu shows only the behaviors that the avatar can currently perform. In this paper, we built a virtual presentation environment and applied our model to this system.
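A toy sketch of the object model described above: each environment object embeds the avatar behaviors it supports together with the object states in which they are allowed, and the context menu is produced by filtering those embedded behaviors against the current state. The class, state, and behavior names are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentObject:
    """Hypothetical object model: the object embeds the avatar behaviors it
    supports and the states in which each behavior is allowed."""
    name: str
    state: str = "idle"
    behaviors: dict = field(default_factory=dict)   # behavior -> allowed states

    def context_menu(self):
        # Filter the embedded behaviors by the current object state before
        # presenting them to the user, as the interface in the paper does.
        return [b for b, states in self.behaviors.items() if self.state in states]

projector = EnvironmentObject(
    name="projector",
    state="off",
    behaviors={"turn_on": {"off"}, "turn_off": {"on"}, "show_slide": {"on"}},
)
print(projector.context_menu())   # ['turn_on'] while the projector is off
```

Because each object carries its own behavior table, adding a new environment object never requires touching the avatar controller itself, which is the object-oriented scalability argument made in the abstract.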