• Title/Summary/Keyword: two-step approach


Rice Proteomics: A Functional Analysis of the Rice Genome and Applications (프로테옴 해석에 의한 벼 게놈 기능해석과 응용)

  • Woo, Sun-Hee;Kim, Hong-Sig;Song, Berm-Heun;Lee, Chul-Won;Park, Young-Mok;Jong, Seung-Keun;Cho, Yong-Gu
    • Journal of Plant Biotechnology
    • /
    • v.30 no.3
    • /
    • pp.281-291
    • /
    • 2003
  • In this review, we describe the catalogues of the rice proteome constructed in our program, and the functional characterization of some of these proteins is discussed. Mass spectrometry is the most prevalent technique for rapidly identifying large numbers of proteins in proteome analysis. However, the conventional Western blotting/sequencing technique has been used in many laboratories. As a first step toward efficiently constructing a protein data-file in proteome analysis of major cereals, we have analyzed the N-terminal sequences of 100 rice embryo proteins and 70 wheat spike proteins separated by two-dimensional electrophoresis. Edman degradation revealed the N-terminal peptide sequences of only 31 rice proteins and 47 wheat proteins, suggesting that the rest of the separated protein spots are N-terminally blocked. To efficiently determine the internal sequences of blocked proteins, we have developed a modified Cleveland peptide mapping method. Using this method, the internal sequences of all blocked rice proteins (i.e., 69 proteins) were determined. Among these 100 rice proteins, thirty were proteins for which a homologous sequence in the rice genome database could be identified. However, the rest of the proteins lacked homologous proteins. This appears to be consistent with the fact that about 45% of total rice cDNA have been deposited in the EMBL database. Also, the major proteins involved in the growth and development of rice can be identified using the proteome approach. Some of these proteins, including a calcium-binding protein that turned out to be calreticulin, a gibberellin-binding protein, which is ribulose-1,5-bisphosphate carboxylase/oxygenase active in rice, and a leginsulin-binding protein in soybean, have functions in the signal transduction pathway. Proteomics is well suited not only to determine interactions between pairs of proteins, but also to identify multisubunit complexes. Currently, a protein-protein interaction database for plant proteins (http://genome.c.kanazawa-u.ac.jp/Y2H) could be a very useful tool for the plant research community. Also, the information thus obtained from the plant proteome would be helpful in predicting the function of unknown proteins and would be useful in plant molecular breeding.

Establishment of a Simple and Effective Method for Isolating Male Germline Stem Cells (GSCs) from Testicular Cells of Neonatal and Adult Mice

  • Kim Kye-Seong;Lim Jung-Jin;Yang Yun-Hee;Kim Soo-Kyoung;Yoon Tae-Ki;Cha Kwang-Yul;Lee Dong-Ryul
    • Journal of Microbiology and Biotechnology
    • /
    • v.16 no.9
    • /
    • pp.1347-1354
    • /
    • 2006
  • The aims of this study were to establish a simple and effective method for isolating male germline stem cells (GSCs), and to test the possibility of using these cells as a new approach for male infertility treatment. Testes obtained from neonatal and adult mice were manually decapsulated. GSCs were collected from seminiferous tubules by a two-step enzyme digestion method and plated on gelatin-coated dishes. Over 5-7 days of culture, GSCs obtained from neonates and adults gave rise to large multicellular colonies that were subsequently grown for 10 passages. During in vitro proliferation, oct-4 and two immunological markers for GSCs (Integrin $\beta 1$, $\alpha 6$) were highly expressed in the cell colonies. During another culture period of 6 weeks to allow differentiation into later-stage germ cells, the expression of oct-4 mRNA decreased in GSCs and Sertoli cells encapsulated with calcium alginate, whereas the expression of c-kit and testis-specific histone protein 2B (TH2B) mRNA, as well as the localization of c-kit protein, increased. Expression of transition protein (TP-1) and localization of peanut agglutinin were not seen until 3 weeks after culturing, and appeared by 6 weeks of culture. The putative spermatids derived from GSCs supported embryonic development up to the blastocyst stage with normal chromosomal ploidy after chemical activation. Thus, GSCs isolated from neonatal and adult mouse testes could be maintained and proliferated under our simple culture conditions. These GSCs have the potential to differentiate into haploid germ cells during further long-term culture.

Application and perspectives of proteomics in crop science fields (작물학 분야 프로테오믹스의 응용과 전망)

  • Woo Sun-Hee
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2004.04a
    • /
    • pp.12-27
    • /
    • 2004
  • Thanks to spectacular advances in the techniques for identifying proteins separated by two-dimensional electrophoresis and in methods for large-scale analysis of proteome variations, proteomics is becoming an essential methodology in various fields of plant sciences. Plant proteomics would be most useful when combined with other functional genomics tools and approaches. A combination of microarray and proteomics analysis will indicate whether gene regulation is controlled at the level of transcription or at the level of translation and protein accumulation. In this review, we describe the catalogues of the rice proteome constructed in our program, and the functional characterization of some of these proteins is discussed. Mass spectrometry is the most prevalent technique for rapidly identifying large numbers of proteins in proteome analysis. However, the conventional Western blotting/sequencing technique is still used in many laboratories. As a first step toward efficiently constructing a protein data-file in proteome analysis of major cereals, we have analyzed the N-terminal sequences of 100 rice embryo proteins and 70 wheat spike proteins separated by two-dimensional electrophoresis. Edman degradation revealed the N-terminal peptide sequences of only 31 rice proteins and 47 wheat proteins, suggesting that the rest of the separated protein spots are N-terminally blocked. To efficiently determine the internal sequences of blocked proteins, we have developed a modified Cleveland peptide mapping method. Using this method, the internal sequences of all blocked rice proteins (i.e., 69 proteins) were determined. Among these 100 rice proteins, thirty were proteins for which a homologous sequence in the rice genome database could be identified. However, the rest of the proteins lacked homologous proteins. This appears to be consistent with the fact that about 30% of total rice cDNA have been deposited in the database. Also, the major proteins involved in the growth and development of rice can be identified using the proteome approach. Some of these proteins, including a calcium-binding protein that turned out to be calreticulin, a gibberellin-binding protein, which is ribulose-1,5-bisphosphate carboxylase/oxygenase active in rice, and a leginsulin-binding protein in soybean, have functions in the signal transduction pathway. Proteomics is well suited not only to determine interactions between pairs of proteins, but also to identify multisubunit complexes. Currently, a protein-protein interaction database for plant proteins (http://genome.c.kanazawa-u.ac.jp/Y2H) could be a very useful tool for the plant research community. Recently, we have separated proteins involved in grain filling and seed maturation in rice for analysis by ESI-Q-TOF/MS and MALDI-TOF/MS. This experiment shows that a number of 2-DE-separated rice proteins can be identified easily and rapidly by ESI-Q-TOF/MS and MALDI-TOF/MS. Therefore, the information thus obtained from the plant proteome would be helpful in predicting the function of unknown proteins and would be useful in plant molecular breeding. Also, information from our study could provide a venue for plant breeders and molecular biologists to design their research strategies precisely.


An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe;Oh, Hyesung;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.11
    • /
    • pp.681-688
    • /
    • 2015
  • The k-nearest neighbor (k-NN) graph construction is an important operation with many web-related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Despite its many elegant properties, the brute-force k-NN graph construction method has a computational complexity of $O(n^2)$, which is prohibitive for large-scale data sets. Thus, the (key, value)-based distributed framework MapReduce is gaining increasingly widespread use together with Locality Sensitive Hashing, which is efficient for high-dimensional and sparse data. Based on this two-stage strategy, we use the locality sensitive hashing technique to divide users into small subsets and then calculate the similarity between pairs in the small subsets using a brute-force method on MapReduce. Specifically, the candidate-group generation stage is important, since brute-force calculation is performed in the following step. However, existing methods do not prevent large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction by regrouping candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate.
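
As an illustration of the two-stage idea in the abstract above, here is a minimal single-machine sketch in Python: random-hyperplane LSH assigns points to candidate groups, oversized groups are split ("regrouped") with extra hash bits, and a brute-force k-NN search runs inside each group. This is not the authors' MapReduce implementation; the function names, parameters, and the simplified regrouping rule are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def knn_graph_lsh(X, k=5, n_bits=8, max_group=200, seed=0):
    """Approximate k-NN graph: LSH bucketing, then brute force inside each bucket."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Stage 1: random-hyperplane LSH signature for every point.
    planes = rng.normal(size=(n_bits, d))
    bits = (X @ planes.T) > 0
    keys = bits.astype(int) @ (1 << np.arange(n_bits))   # one integer key per point
    buckets = defaultdict(list)
    for i, key in enumerate(keys):
        buckets[int(key)].append(i)
    # Regrouping (simplified stand-in): split any oversized candidate group with
    # extra hash bits so that no brute-force block becomes too large.
    groups = []
    for members in buckets.values():
        while len(members) > max_group:
            extra = rng.normal(size=d)
            side = X[members] @ extra > 0
            left = [m for m, s in zip(members, side) if s]
            right = [m for m, s in zip(members, side) if not s]
            if not left or not right:     # degenerate split, keep group as-is
                break
            groups.append(left)
            members = right
        groups.append(members)
    # Stage 2: brute-force k-NN inside each group (cosine similarity).
    graph = {i: [] for i in range(n)}
    for members in groups:
        if len(members) < 2:
            continue                      # singleton groups get no neighbors here
        V = X[members] / np.linalg.norm(X[members], axis=1, keepdims=True)
        sims = V @ V.T
        np.fill_diagonal(sims, -np.inf)
        for row, i in enumerate(members):
            order = np.argsort(-sims[row])[:min(k, len(members) - 1)]
            graph[i] = [(members[j], float(sims[row, j])) for j in order]
    return graph

# Toy usage on random data.
X = np.random.default_rng(1).normal(size=(1000, 32))
print(knn_graph_lsh(X, k=5)[0])
```

In the MapReduce setting described by the paper, the bucketing would be a map phase keyed by the LSH signature and the per-group brute force a reduce phase; the sketch only shows the logic of each stage.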

Development and Application of Two-Dimensional Numerical Tank using Desingularized Indirect Boundary Integral Equation Method (비특이화 간접경계적분방정식방법을 이용한 2차원 수치수조 개발 및 적용)

  • Oh, Seunghoon;Cho, Seok-kyu;Jung, Dongho;Sung, Hong Gun
    • Journal of Ocean Engineering and Technology
    • /
    • v.32 no.6
    • /
    • pp.447-457
    • /
    • 2018
  • In this study, a two-dimensional fully nonlinear transient wave numerical tank was developed using a desingularized indirect boundary integral equation method. The desingularized indirect boundary integral equation method is simpler and faster than the conventional boundary element method because special treatment is not required to compute the boundary integral. Numerical simulations were carried out in the time domain using the fourth-order Runge-Kutta method. A mixed Eulerian-Lagrangian approach was adopted to reconstruct the free surface at each time step. A numerical damping zone was used to minimize the reflected wave in the downstream region. The interpolating method of a Gaussian radial basis function-type artificial neural network was used to calculate the gradient of the free surface elevation without element connectivity. The desingularized indirect boundary integral equation using an isolated point source and radial basis function has no need for information about element connectivity and is a meshless method that is numerically more flexible. In order to validate the accuracy of the numerical wave tank based on the desingularized indirect boundary integral equation method and the meshless technique, several numerical simulations were carried out. First, a comparison with numerical results according to the type of desingularized source was carried out and confirmed that continuous line sources can be replaced by simple isolated sources. In addition, a propagation simulation of a $2^{nd}$-order Stokes wave was carried out and compared with an analytical solution. Finally, simulations of propagating waves in shallow water and propagating waves over a submerged bar were also carried out and compared with published data.
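
The time-marching part of such a tank can be sketched as a mixed Eulerian-Lagrangian update of free-surface markers advanced with the classical fourth-order Runge-Kutta scheme. The boundary-integral solve is abstracted behind a placeholder function, `solve_dirichlet_to_velocity`, which in the paper would be the desingularized indirect BIE with isolated point sources; everything below is a minimal sketch under that assumption, not the authors' code.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def solve_dirichlet_to_velocity(x, z, phi):
    """Placeholder for the desingularized indirect BIE solve.

    Given free-surface node positions and the potential phi on them, it should
    return the fluid velocity (u, w) at the nodes.  Stubbed with zeros here so
    the time stepper is self-contained and runnable."""
    return np.zeros_like(x), np.zeros_like(z)

def free_surface_rates(state):
    """Fully nonlinear free-surface conditions in Lagrangian (MEL) form:
       Dx/Dt = u,  Dz/Dt = w,  Dphi/Dt = 0.5*(u^2 + w^2) - g*z."""
    x, z, phi = state
    u, w = solve_dirichlet_to_velocity(x, z, phi)
    return np.array([u, w, 0.5 * (u**2 + w**2) - G * z])

def rk4_step(state, dt):
    """Classical fourth-order Runge-Kutta step for the free-surface nodes."""
    k1 = free_surface_rates(state)
    k2 = free_surface_rates(state + 0.5 * dt * k1)
    k3 = free_surface_rates(state + 0.5 * dt * k2)
    k4 = free_surface_rates(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy usage: 50 markers on an initially flat free surface.
x = np.linspace(0.0, 10.0, 50)
state = np.array([x, np.zeros_like(x), np.zeros_like(x)])   # rows: x, z, phi
for _ in range(100):
    state = rk4_step(state, dt=0.01)
```

A damping zone would typically be added as an extra sink term in the rate equations near the downstream boundary, which is omitted here for brevity.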

Design Evaluation Model Based on Consumer Values: Three-step Approach from Product Attributes, Perceived Attributes, to Consumer Values (소비자 가치기반 디자인 평가 모형: 제품 속성, 인지 속성, 소비자 가치의 3단계 접근)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.57-76
    • /
    • 2017
  • Recently, consumer needs are diversifying as information technologies evolve rapidly. Many IT devices, such as smart phones and tablet PCs, are being launched following the trend of information technology. While IT devices focused on technical advances and improvements a few years ago, the situation has changed: there is little difference in functional aspects, so companies are trying to differentiate IT devices in terms of appearance design. Consumers also consider design a more important factor in the decision-making of smart phones. Smart phones have become fashion items, revealing consumers' own characteristics and personality. As the design and appearance of the smartphone become more important, it is necessary to examine the consumer values derived from the design and appearance of IT devices. Furthermore, it is crucial to clarify the mechanisms of consumers' design evaluation and to develop a design evaluation model based on that mechanism. Since the influence of design continues to grow, many and varied studies related to design have been carried out. These studies can be classified into three main streams. The first stream focuses on the role of design from the perspective of marketing and communication. The second consists of studies that seek effective and appealing designs from the perspective of industrial design. The last examines the consumer values created by a product design, that is, consumers' perceptions or feelings when they look at and feel the product. These numerous studies have dealt with consumer values to some extent, but they either do not include product attributes or do not cover the whole process and mechanism from product attributes to consumer values. In this study, we develop a holistic design evaluation model based on consumer values, using a three-step approach from product attributes and perceived attributes to consumer values. Product attributes are the real, physical characteristics each smart phone has; they consist of bezel, length, width, thickness, weight, and curvature. Perceived attributes are derived from consumers' perception of product attributes. We consider perceived size of device, perceived size of display, perceived thickness, perceived weight, perceived bezel (top-bottom / left-right side), perceived curvature of edge, perceived curvature of back side, gap of each part, perceived gloss, and perceived screen ratio. They are factorized into six clusters named 'Size,' 'Slimness,' 'No-Frame,' 'Roundness,' 'Screen Ratio,' and 'Looseness.' We conducted qualitative research to find out consumer values, which are categorized into two groups: look values and feel values. We identified the values 'Silhouette,' 'Neatness,' 'Attractiveness,' 'Polishing,' 'Innovativeness,' 'Professionalism,' 'Intellectualness,' 'Individuality,' and 'Distinctiveness' as look values. Also, we identified 'Stability,' 'Comfortableness,' 'Grip,' 'Solidity,' 'Non-fragility,' and 'Smoothness' as feel values. They are factorized into five key values: 'Sleek Value,' 'Professional Value,' 'Unique Value,' 'Comfortable Value,' and 'Solid Value.' Finally, we developed the holistic design evaluation model by analyzing each relationship from product attributes and perceived attributes to consumer values. This study has several theoretical and practical contributions. First, we found consumer values in terms of design evaluation and the implicit chain relationship from objective, physical characteristics to subjective, mental evaluation.
That is, the model explains the mechanism of design evaluation in consumers' minds. Second, we suggest a general design evaluation process from product attributes and perceived attributes to consumer values. It is a methodology adaptable not only to smart phones but also to other IT products. Practically, this model can support decision-making when companies initiate new product development, and it can help product designers focus their capacities with limited resources. Moreover, if this model is combined with machine learning that collects consumers' purchasing data, most-preferred values, sales data, and so on, it can evolve into an intelligent design decision support system.
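
A minimal sketch of the two statistical links in such a three-step model is shown below: perceived-attribute ratings are reduced to a few latent factors, and consumer-value scores are then regressed on the factor scores. The synthetic survey data, the number of factors, and the use of factor analysis plus linear regression are illustrative assumptions, not the authors' data or exact procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic survey: 300 respondents rate 10 perceived attributes (1-7 scale)
# and 5 consumer values (e.g., 'Sleek', 'Professional', ...) for a device.
n_respondents = 300
perceived = rng.integers(1, 8, size=(n_respondents, 10)).astype(float)
values = rng.integers(1, 8, size=(n_respondents, 5)).astype(float)

# Step 1: perceived attributes -> a small number of latent factors
# (the paper groups them into six clusters; six factors are used here).
fa = FactorAnalysis(n_components=6, random_state=0)
factor_scores = fa.fit_transform(perceived)

# Step 2: consumer values regressed on the factor scores, giving the
# strength of each factor-to-value link in the evaluation model.
link = LinearRegression().fit(factor_scores, values)
print("factor-to-value coefficients:\n", np.round(link.coef_, 2))
```

The remaining link, from physical product attributes to perceived attributes, would be fitted the same way on measured device dimensions and is omitted here.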

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and experimentally demonstrated. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and flexibility of design have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when we use the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons that the GA's operation may be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. For that reason, we attempt to find a new approach combining the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is merely a process of iteration, involving selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier transforming the parameters encoded on the hologram into the value to be evaluated. Depending on the speed of the computer, this process can last up to ten minutes. It will be more effective if, instead of merely generating random holograms in the initial process, a set of approximately desired holograms is employed. By doing so, the initial population will contain fewer random trial holograms, which is equivalent to reducing the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed. Consequently, the initial population contains fewer random holograms and is supplemented by approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] to acquire approximately desired holograms is carried out. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms, which are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operations of the hybrid algorithm and the GA are the same except for the modification of the initial step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, aside from the reduced population size.
The reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. We see that the simulation and experiment results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and the Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
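
A minimal sketch of the hybrid idea follows: seed part of the GA population with ANN-suggested holograms instead of purely random ones, then run tournament selection, crossover, and bit-flip mutation with an FFT-based diffraction-efficiency fitness. The hologram size, the fitness proxy, and the `ann_seeds` stand-in for the trained network's output are illustrative assumptions, not the parameters or code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                      # hologram is an N x N binary phase pattern (0 or pi)
TARGET = np.zeros((N, N))
TARGET[12:20, 12:20] = 1.0  # desired intensity window in the reconstruction plane

def fitness(holo):
    """Diffraction-efficiency proxy: fraction of reconstructed energy that
    falls inside the target window (reconstruction = FFT of the phase field)."""
    field = np.exp(1j * np.pi * holo)            # binary phase 0 / pi
    recon = np.abs(np.fft.fft2(field))**2
    return recon[TARGET > 0].sum() / recon.sum()

def hybrid_ga(ann_seeds, pop_size=30, generations=200, p_cross=0.75, p_mut=0.001):
    # Initial population: ANN-suggested holograms plus random fill-ins.
    pop = list(ann_seeds)[:pop_size]
    while len(pop) < pop_size:
        pop.append(rng.integers(0, 2, size=(N, N)))
    pop = np.array(pop)
    for _ in range(generations):
        scores = np.array([fitness(h) for h in pop])
        # Tournament selection of parents (tournament size 2).
        parents = []
        for _ in range(pop_size):
            i, j = rng.integers(0, pop_size, size=2)
            parents.append(pop[i] if scores[i] > scores[j] else pop[j])
        # Single-point row-block crossover, then bit-flip mutation.
        children = []
        for a, b in zip(parents[0::2], parents[1::2]):
            cut = rng.integers(1, N)
            c1 = np.vstack([a[:cut], b[cut:]])
            c2 = np.vstack([b[:cut], a[cut:]])
            if rng.random() > p_cross:           # no crossover: copy parents
                c1, c2 = a.copy(), b.copy()
            children += [c1, c2]
        pop = np.array(children)
        flip = rng.random(pop.shape) < p_mut
        pop = np.where(flip, 1 - pop, pop)
    best = max(pop, key=fitness)
    return best, fitness(best)

# Usage with random "seeds" standing in for the trained ANN's approximate holograms.
seeds = [rng.integers(0, 2, size=(N, N)) for _ in range(10)]
best, eff = hybrid_ga(seeds)
print("best efficiency proxy:", round(eff, 3))
```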


Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • In response to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining-related studies have focused on the application of the second step, such as document classification, clustering, and topic modeling. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve the quality of analysis results by preserving the meaning of words and documents in the process of representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first go through a structuring task that transforms the original document into a form the computer can understand before analysis. Mapping arbitrary objects to a space of a specific dimension while maintaining their algebraic properties, in order to structure text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents in various ways. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document using all the text included in the document. This has the limitation that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document onto a single corresponding vector. Therefore, it is difficult to accurately represent a complex document with multiple subjects as a single vector using the traditional approach. In this paper, we propose a new multi-vector document embedding method to overcome these limitations of the traditional document embedding methods. This study targets documents that explicitly separate body content and keywords. In the case of a document without keywords, this method can be applied after extracting keywords through various analysis methods. However, since this is not the core subject of the proposed method, we introduce the process of applying the proposed method to documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. After that, to overcome the limitation of the traditional document embedding method of being affected not only by core words but also by miscellaneous words, the vectors corresponding to the keywords of each document are extracted to make up a set of keyword vectors for each document.
Next, clustering is conducted on the set of keyword vectors for each document to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot properly map complex documents because of interference among the subjects within each vector. With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
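
A minimal sketch of steps (2)-(5) of such a pipeline, assuming pre-trained word vectors are available as a plain dictionary and each document's keywords are given; KMeans with a fixed number of clusters stands in for whatever clustering procedure the authors actually use.

```python
import numpy as np
from sklearn.cluster import KMeans

def multi_vector_embedding(keywords, word_vectors, n_subjects=2):
    """Return one vector per detected subject: cluster the keyword vectors
    and use each cluster centroid as one of the document's vectors."""
    vecs = np.array([word_vectors[w] for w in keywords if w in word_vectors])
    if len(vecs) == 0:
        return np.empty((0,))
    k = min(n_subjects, len(vecs))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vecs)
    return km.cluster_centers_            # shape (k, dim): one vector per subject

# Toy usage: 5-dimensional word vectors (in practice these come from a
# word-embedding step such as word2vec over the tokenized corpus).
rng = np.random.default_rng(0)
vocab = ["neural", "network", "training", "harbor", "shipping", "cargo"]
word_vectors = {w: rng.normal(size=5) for w in vocab}

doc_keywords = ["neural", "network", "harbor", "cargo"]   # a two-subject document
doc_vectors = multi_vector_embedding(doc_keywords, word_vectors, n_subjects=2)
print(doc_vectors.shape)   # (2, 5): one vector per subject instead of a single vector
```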

Interpreting Bounded Rationality in Business and Industrial Marketing Contexts: Executive Training Case Studies (阐述工商业背景下的有限合理性: 执行官培训案例研究)

  • Woodside, Arch G.;Lai, Wen-Hsiang;Kim, Kyung-Hoon;Jung, Deuk-Keyo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.3
    • /
    • pp.49-61
    • /
    • 2009
  • This article provides training exercises for executives in interpreting subroutine maps of executives' thinking when processing business and industrial marketing problems and opportunities. This study builds on premises that Schank proposes about learning and teaching, including (1) learning occurs by experiencing, and the best instruction offers learners opportunities to distill their knowledge and skills from interactive stories in the form of goal-based scenarios, team projects, and understanding stories from experts; and (2) telling does not lead to learning because learning requires action; training environments should therefore emphasize active engagement with stories, cases, and projects. Each training case study includes executive exposure to decision system analysis (DSA). The training case requires the executive to write a "Briefing Report" of a DSA map. Instructions to the executive trainee in writing the briefing report include coverage of (1) details of the essence of the DSA map and (2) a statement of warnings and opportunities that the executive map reader interprets within the DSA map. The maximum length for a briefing report is 500 words, an arbitrary rule that works well in executive training programs. Following this introduction, section two of the article briefly summarizes relevant literature on how humans think within contexts in response to problems and opportunities. Section three illustrates the creation and interpretation of DSA maps using a training exercise in pricing a chemical product for different OEM (original equipment manufacturer) customers. Section four presents a training exercise in pricing decisions by a petroleum manufacturing firm. Section five presents a training exercise in marketing strategies by an office furniture distributor along with buying strategies by business customers. Each of the three training exercises is based on research into the information processing and decision making of executives operating in marketing contexts. Section six concludes the article with suggestions for use of this training case and for developing additional training cases for honing executives' decision-making skills. Todd and Gigerenzer propose that humans use simple heuristics because they enable adaptive behavior by exploiting the structure of information in natural decision environments. "Simplicity is a virtue, rather than a curse". Bounded rationality theorists emphasize the centrality of Simon's proposition, "Human rational behavior is shaped by a scissors whose blades are the structure of the task environments and the computational capabilities of the actor". Gigerenzer's view is relevant to Simon's environmental blade and to the environmental structures in the three cases in this article: "The term environment, here, does not refer to a description of the total physical and biological environment, but only to that part important to an organism, given its needs and goals." The present article directs attention to research that combines reports on the structure of task environments with the use of adaptive toolbox heuristics of actors. The DSA mapping approach here concerns the match between strategy and an environment, that is, the development and understanding of ecological rationality theory. Aspiration adaptation theory is central to this approach. Aspiration adaptation theory models decision making as a multi-goal problem without aggregation of the goals into a complete preference order over all decision alternatives.
The three case studies in this article permit the learner to apply propositions in aspiration level rules in reaching a decision. Aspiration adaptation takes the form of a sequence of adjustment steps. An adjustment step shifts the current aspiration level to a neighboring point on an aspiration grid by a change in only one goal variable. An upward adjustment step is an increase and a downward adjustment step is a decrease of a goal variable. Creating and using aspiration adaptation levels is integral to bounded rationality theory. The present article increases understanding and expertise of both aspiration adaptation and bounded rationality theories by providing learner experiences and practice in using propositions in both theories. Practice in ranking CTSs and writing TOP gists from DSA maps serves to clarify and deepen Selten's view, "Clearly, aspiration adaptation must enter the picture as an integrated part of the search for a solution." The body of "direct research" by Mintzberg, Gladwin's ethnographic decision tree modeling, and Huff's work on mapping strategic thought are suggestions on where to look for research that considers both the structure of the environment and the computational capabilities of the actors making decisions in these environments. Such research on bounded rationality permits both further development of theory in how and why decisions are made in real life and the development of learning exercises in the use of heuristics occurring in natural environments. The exercises in the present article encourage learning skills and principles of using fast and frugal heuristics in contexts of their intended use. The exercises respond to Schank's wisdom, "In a deep sense, education isn't about knowledge or getting students to know what has happened. It is about getting them to feel what has happened. This is not easy to do. Education, as it is in schools today, is emotionless. This is a huge problem." The three cases and accompanying set of exercise questions adhere to Schank's view, "Processes are best taught by actually engaging in them, which can often mean, for mental processing, active discussion."
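
The aspiration-adaptation mechanism described above can be made concrete with a small sketch: aspiration levels sit on a grid, each downward adjustment step lowers exactly one goal variable by one grid point, and the search stops once some alternative satisfies every aspiration. The toy pricing numbers, the retreat order, and the tie-breaking rule are illustrative assumptions, not content from the article.

```python
# Goal variables for a toy pricing decision (illustrative numbers only).
alternatives = [
    {"price": 95, "margin": 0.18, "volume": 800},
    {"price": 90, "margin": 0.15, "volume": 1000},
    {"price": 99, "margin": 0.22, "volume": 600},
]

# Aspiration levels on a grid; each adjustment step changes exactly ONE goal variable.
aspiration = {"margin": 0.25, "volume": 1200}
grid_step = {"margin": 0.01, "volume": 100}
retreat_order = ["volume", "margin"]   # which aspiration is lowered first

def satisfies(alt, asp):
    """An alternative qualifies only if it meets every current aspiration level."""
    return all(alt[g] >= asp[g] for g in asp)

chosen = None
while chosen is None:
    feasible = [a for a in alternatives if satisfies(a, aspiration)]
    if feasible:
        chosen = max(feasible, key=lambda a: a["margin"])   # pick within aspirations
    else:
        # Downward adjustment step: lower one goal variable by one grid point.
        g = retreat_order[0]
        aspiration[g] -= grid_step[g]
        retreat_order.append(retreat_order.pop(0))          # rotate the retreat order

print("chosen alternative:", chosen, "final aspirations:", aspiration)
```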


A Study on the Neumann-Kelvin Problem of the Wave Resistance (조파저항에서의 Neumann-Kelvin 문제에 대한 연구)

  • 김인철
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.21 no.2
    • /
    • pp.131-136
    • /
    • 1985
  • The calculation of the resulting fluid motion is an important problem of ship hydrodynamics. For a partially immersed body, the condition of constant pressure at the free surface can be linearized. The resulting linear boundary-value problem for the velocity potential is the Neumann-Kelvin problem. The two-dimensional Neumann-Kelvin problem was studied for the half-immersed circular cylinder by Ursell. Maruo introduced a slender-body approach to simplify the Neumann-Kelvin problem in such a way that the integral equation which determines the singularity distribution over the hull surface can be solved by a marching procedure of step-by-step integration starting at the bow. In the present paper, for the two-dimensional Neumann-Kelvin problem, it is suggested that any solution of the problem must have singularities in the corners between the body surface and the free surface. There can be infinitely many solutions depending on the singularities in the corners.
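
For reference, a standard statement of the linearized two-dimensional Neumann-Kelvin problem for the perturbation potential $\phi$ of a body advancing at speed $U$ (a textbook formulation consistent with the abstract, not equations quoted from the paper):

```latex
\begin{aligned}
&\nabla^{2}\phi = 0 \quad \text{in the fluid domain},\\
&U^{2}\,\phi_{xx} + g\,\phi_{z} = 0 \quad \text{on } z = 0 \quad \text{(linearized Kelvin free-surface condition)},\\
&\frac{\partial \phi}{\partial n} = U\,n_{x} \quad \text{on the wetted body surface (Neumann condition)},\\
&\text{plus a radiation condition: waves appear only downstream of the body.}
\end{aligned}
```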
