• Title/Summary/Keyword: real experiments


Association of β-Catenin with Fat Accumulation in 3T3-L1 Adipocytes and Human Population (β-catenin 유전자의 3T3-L1 지방세포 및 인체에서의 지방축적 연관성 연구)

  • Bae, Sung-Min;Lee, Hae-Yong;Chae, Soo-Ahn;Oh, Dong-Jin;Park, Suk-Won;Yoon, Yoo-Sik
    • Journal of Life Science, v.21 no.9, pp.1301-1309, 2011
  • The major function of adipocytes is to store fat in the form of triglycerides. One of the signaling pathways known to affect adipogenesis, i.e. fat formation, is the WNT/β-catenin pathway, which inhibits the expression and activity of key regulators of adipogenesis. The purpose of this research is to identify genes in the WNT/β-catenin pathway that regulate adipogenesis by using small interfering (si) RNA, and to find the association of single nucleotide polymorphisms (SNPs) of the gene with serum triglyceride levels in the human population. To elucidate the effects of β-catenin siRNA on the key adipogenesis factors PPARγ and C/EBPα, we performed real-time PCR and western blotting experiments to analyze mRNA and protein levels. We found that siRNA-mediated knockdown of β-catenin upregulates these key adipogenesis factors, whereas knockdown of upstream regulators of the WNT/β-catenin pathway, such as DVL2 and LRP6, had no significant effect compared to β-catenin. These results indicate that β-catenin is a candidate gene for human fat accumulation. In general, serum triglyceride level is a good indicator of fat accumulation in humans. According to statistical analyses of the association between serum triglyceride level and SNPs of β-catenin, the -10,288 C>T SNP (rs7630377) in the promoter region was significantly associated with serum triglyceride levels (p<0.05) in 290 Korean subjects. On the other hand, serum cholesterol levels were not significantly associated with SNPs of the β-catenin gene. The results of this study show that β-catenin is associated with fat accumulation both in vitro and in the human population.
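
The abstract does not state which statistical test produced the p<0.05 association, so the following is only a minimal, hypothetical sketch of one common way to test a SNP-triglyceride association: a one-way ANOVA across genotype groups. The file name and column names are invented for illustration.

```python
# Hypothetical sketch: testing association between rs7630377 genotype and
# serum triglyceride levels with a one-way ANOVA. The paper's actual test
# is not specified in the abstract; data and column names are illustrative.
import pandas as pd
from scipy import stats

# df: one row per subject, a 'genotype' column (CC/CT/TT for the
# -10,288 C>T SNP) and a 'triglyceride' column (mg/dL).
df = pd.read_csv("subjects.csv")  # hypothetical input file

groups = [g["triglyceride"].values for _, g in df.groupby("genotype")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")  # association if p < 0.05
```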

A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.121-145, 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem in which the amount of computation required to find solutions increases exponentially with the problem size. For many years, there have been many studies on university timetabling, driven by the need for automatic timetable generation for students' convenience and effective lessons, and for the effective allocation of subjects, lecturers, and classrooms. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, a course timetable for liberal arts is scheduled by the office of academic affairs, while a course timetable for major subjects is scheduled by each department of a university. We found several problems from an analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine and repetitive timetabling work manually. Second, many classes are concentrated into a few time slots in a timetable, which decreases the effectiveness of students' classes. Third, several major subjects may overlap with required liberal arts subjects in the same time slots, in which case students can choose only one of the overlapping subjects. Fourth, many subjects are taught by the same lecturers every year, and most lecturers prefer the same time slots as in the previous year, so it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system with two phases. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning. In the second phase, the department schedules the timetable through an interactive user interface under the timetabling criteria, based on a rule-based approach. The study is illustrated with data from Hanshin University. We classified timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria comprise three criteria related to lecturer, class, and classroom, all of which are hard constraints. The extrinsic criteria comprise four criteria related to 'the number of lesson hours' by the lecturer, 'prohibition of lecture allocation to specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' 'The number of lesson hours' by the lecturer covers three sub-criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are also all hard constraints, except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we proposed two indices: one for measuring the similarity between subjects of the current semester and subjects of previous timetables, and one for evaluating the distribution degree of a scheduled timetable (see the sketch below). Similarity is measured by comparing two attributes, subject name and lecturer, between the current semester and a previous semester. The distribution degree, based on information entropy, indicates how evenly subjects are distributed across the timetable. To show this study's viability, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity of the most similar cases across all departments was estimated at 41.72%, which suggests that a timetable template generated from the most similar case is helpful. Through sensitivity analysis, the results show that the distribution degree increases if 'the number of subjects in the same day-hour' is set to more than 90%.
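
A minimal sketch of the two indices described in this abstract, under the assumption that similarity counts matching (subject, lecturer) pairs against a previous case and that the distribution degree is a normalized information entropy over time slots; the paper's exact formulas may differ.

```python
# Illustrative sketch of the similarity index and entropy-based
# distribution degree; definitions are assumptions, not the paper's.
import math

def similarity(current, previous):
    """Fraction of current (subject, lecturer) pairs found in a previous case."""
    prev = set(previous)
    matches = sum(1 for pair in current if pair in prev)
    return matches / len(current) if current else 0.0

def distribution_degree(slot_counts):
    """Normalized entropy of subject counts per time slot (1.0 = perfectly even)."""
    total = sum(slot_counts)
    probs = [c / total for c in slot_counts if c > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy / math.log2(len(slot_counts))

cur = [("DB", "Kim"), ("AI", "Lee"), ("OS", "Park")]
prev = [("DB", "Kim"), ("OS", "Park"), ("SE", "Choi")]
print(similarity(cur, prev))              # 0.67 -> a useful template
print(distribution_degree([3, 3, 2, 2]))  # ~0.99 -> well-spread timetable
```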

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.53-65, 2019
  • Object tracking is one of the important steps in building video-based surveillance systems and is considered an essential task alongside object detection and recognition. To perform object tracking, various machine learning methods (e.g., least squares, perceptron, and support vector machine) can be applied in different designs of tracking systems. Generative methods (e.g., principal component analysis) have generally been utilized for their simplicity and effectiveness; however, generative methods focus only on modeling the target object. Because of this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization is one of the successful approaches. Total error rate minimization can achieve a global minimum thanks to a quadratic approximation to a step function, while other methods (e.g., support vector machine) seek local minima using nonlinear functions (e.g., the hinge loss function). Due to this quadratic approximation, total error rate minimization has desirable properties for solving binary classification optimization problems. However, total error rate minimization was originally formulated in a batch-mode setting, which limits it to offline learning; with limited computing resources, offline learning cannot handle large-scale data sets. Compared to offline learning, online learning can update its solution without storing all training samples during the learning process. With the growth of large-scale data sets, online learning has become an essential property for various applications. Since object tracking must handle data samples in real time, online learning based total error rate minimization is needed to address object tracking problems efficiently. For this reason, an online learning based total error rate minimization method was previously developed, but it relied on an approximately reweighted technique. Although this approximation achieves good performance in biometric applications, it assumes that total error rate minimization is attained only asymptotically, as the number of training samples goes to infinity. In practice, the approximation can continuously accumulate learning errors as training samples arrive, so the approximated online solution can drift toward a wrong solution, which causes significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique to recursively update the solution of total error rate minimization in an online manner. In contrast to the approximately reweighted online total error rate minimization, an exactly reweighted online total error rate minimization is achieved. The proposed exact online learning method based on total error rate minimization is then applied to object tracking problems. In our object tracking system, particle filtering is adopted. In particle filtering, our observation model consists of both generative and discriminative methods, combining the advantages of generative and discriminative properties. In our experiments, the proposed object tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems, and a paired t-test is reported to evaluate the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks. Moreover, online learning methods that need an exact reweighting process can use our proposed reweighting technique. In addition to object tracking, the proposed online learning method can easily be applied to object detection and recognition. Therefore, our proposed methods can contribute to the online learning community as well as the object tracking, detection, and recognition communities.
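
The paper's exact reweighting scheme is not reproduced here; the following is only a generic sketch of a recursive online least-squares update (via the Sherman-Morrison identity), to illustrate the flavor of updating a binary classifier sample-by-sample without storing past data. All names are illustrative.

```python
# NOT the authors' method: a generic recursive least-squares update,
# shown only to illustrate exact sample-by-sample solution updates.
import numpy as np

class OnlineLeastSquares:
    def __init__(self, dim, reg=1.0):
        self.P = np.eye(dim) / reg   # inverse of regularized covariance
        self.w = np.zeros(dim)       # current solution

    def update(self, x, y):
        """Sherman-Morrison rank-1 update for one sample (y in {-1, +1})."""
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)      # gain vector
        self.w += k * (y - x @ self.w)
        self.P -= np.outer(k, Px)

    def score(self, x):
        return x @ self.w            # sign(score) gives the predicted class

model = OnlineLeastSquares(dim=4)
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=4)
    y = 1.0 if x[0] + x[1] > 0 else -1.0
    model.update(x, y)               # exact solution after every sample
print(model.score(np.array([1.0, 1.0, 0.0, 0.0])))  # positive -> class +1
```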

An Efficient Algorithm for Streaming Time-Series Matching that Supports Normalization Transform (정규화 변환을 지원하는 스트리밍 시계열 매칭 알고리즘)

  • Loh, Woong-Kee;Moon, Yang-Sae;Kim, Young-Kuk
    • Journal of KIISE:Databases, v.33 no.6, pp.600-619, 2006
  • With recent technical advances in sensors and mobile devices, processing of the data streams generated by these devices is becoming an important research issue. A data stream of real values obtained at continuous time points is called a streaming time-series. Due to unique features of streaming time-series that differ from those of traditional time-series, the similarity matching problem on streaming time-series must be solved in a new way. In this paper, we propose an efficient algorithm for the streaming time-series matching problem that supports normalization transform. While existing algorithms compare streaming time-series without any transform, the proposed algorithm compares them after they are normalization-transformed. The normalization transform is useful for finding time-series that have similar fluctuation trends even though they consist of distant element values. The major contributions of this paper are as follows. (1) Using a theorem presented in the context of subsequence matching that supports normalization transform [4], we propose a simple algorithm for solving the problem. (2) To improve search performance, we extend the simple algorithm to use k (≥ 1) indexes. (3) For a given k, to achieve optimal search performance of the extended algorithm, we present an approximation method for choosing the k window sizes used to construct the k indexes. (4) Based on the notion of continuity [8] on streaming time-series, we further extend our algorithm so that it can simultaneously obtain the search results for m (≥ 1) time points, from the present time point t0 to a near-future time point (t0 + m - 1), by retrieving the index only once. (5) Through a series of experiments, we compare the search performance of the algorithms proposed in this paper and show their performance trends according to the values of k and m. To the best of our knowledge, there has been no algorithm that solves the same problem presented in this paper, so we compare the search performance of our algorithms with that of the sequential scan algorithm. The experimental results show that our algorithms outperform the sequential scan algorithm by up to 13.2 times, and their performance improves further as k increases.
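
A minimal sketch of the normalization transform, assuming the standard z-normalization: two series with similar fluctuation trends but distant element values become directly comparable after the transform.

```python
# Minimal sketch of normalization-transformed matching (assumed z-norm).
import numpy as np

def z_normalize(seq):
    seq = np.asarray(seq, dtype=float)
    std = seq.std()
    return (seq - seq.mean()) / std if std > 0 else seq - seq.mean()

def normalized_distance(a, b):
    """Euclidean distance after z-normalizing both sequences."""
    return np.linalg.norm(z_normalize(a) - z_normalize(b))

a = [1, 2, 3, 2, 1]
b = [10, 20, 30, 20, 10]          # same shape, distant element values
print(normalized_distance(a, b))  # ~0: a match after normalization
```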

A New Item Recommendation Procedure Using Preference Boundary

  • Kim, Hyea-Kyeong;Jang, Moon-Kyoung;Kim, Jae-Kyeong;Cho, Yoon-Ho
    • Asia Pacific Journal of Information Systems, v.20 no.1, pp.81-99, 2010
  • Lately, in consumer markets the number of new items is increasing at an overwhelming rate, while consumers have limited access to information about those new products when trying to make a sensible, well-informed purchase. Therefore, item providers and customers need a system which recommends the right items to the right customers. Whenever new items are released, a recommender system specializing in new items can also help item providers locate and identify potential customers. Currently, new items are added to an existing system without being specially announced to consumers, making it difficult for consumers to identify and evaluate new products introduced into the markets. Most previous approaches for recommender systems rely on the usage history of customers; for new items such history is not available, so a purely content-based (CB) approach cannot recommend those new items to potential consumers. Although the collaborative filtering (CF) approach is not directly applicable to the new item problem, it is a good idea to use the basic principle of CF, which identifies similar customers, i.e., neighbors, and recommends items that similar customers have liked in the past. This research suggests a hybrid recommendation procedure, based on the preference boundary of a target customer in the feature space, for recommending new items only. The basic principle is that if a new item falls within the preference boundary of a target customer, it is evaluated as preferred by that customer. Customers' preferences and the characteristics of items, including new items, are represented in a feature space, and the scope or boundary of the target customer's preference is extended to those of the neighbors. The new item recommendation procedure consists of three steps. The first step is analyzing the profiles of items, which are represented as k-dimensional feature values. The second step is determining the representative point of the target customer's preference boundary, the centroid, based on a personal information set. To determine the centroid of the preference boundary of a target customer, three algorithms are developed in this research: one uses the centroid of the target customer only (TC), another uses the centroid of a (dummy) big target customer composed of the target customer and his/her neighbors (BC), and the third uses the centroids of the target customer and of his/her neighbors (NC). The third step is determining the range of the preference boundary, the radius. The suggested algorithm uses the average distance (AD) between the centroid and all purchased items. We test whether the CF-based approach to determining the centroid of the preference boundary improves recommendation quality. For this purpose, we develop two hybrid algorithms, BC and NC, which use neighbors when deciding the centroid of the preference boundary; to test their validity, we also developed the CB algorithm, TC, which uses target customers only. We measured the effectiveness scores of the suggested algorithms and compared them through a series of experiments with a set of real mobile image transaction data. We split the data into a training set covering 1 June 2004 to 31 July 2004 and a test set covering 1 August 2004 to 31 August 2004. The training set is used to build the preference boundary, and the test set is used to evaluate the performance of the suggested hybrid recommendation procedure. The main aim of this research is to compare the hybrid recommendation algorithms with the CB algorithm. To evaluate the performance of each algorithm, we compare the list of new items purchased in the test period with the list of items recommended by the suggested algorithms, employing the hit ratio as the evaluation metric. The hit ratio is defined as the ratio of the hit set size to the recommended set size, where the hit set size is the number of successful recommendations in our experiment. Experimental results show that the hit ratios of BC and NC are higher than that of TC, which means that using neighbors is more effective for recommending new items; that is, the hybrid algorithms using CF are more effective for recommending new items to consumers than the algorithm using CB only. The hit ratio of BC is smaller than that of NC because BC is defined as a dummy or virtual customer who purchased all the items of the target customer and the neighbors; the centroid of BC often shifts away from that of TC, so it tends to distort the characteristics of the target customer. The recommendation algorithm using NC shows the best hit ratio, because NC has sufficient information about the target customer and the neighbors without damaging the information about the target customer.
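
A hypothetical sketch of the preference-boundary idea with NC-style centroids and an AD-style radius. The feature values, customers, and the exact radius formula are illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch: NC centroids + average-distance (AD) radius.
import numpy as np

def centroid(items):
    return np.mean(items, axis=0)

def boundary(target_items, neighbor_item_sets):
    """Centroids of target and each neighbor (NC); radius = average distance (AD)."""
    centers = [centroid(target_items)] + [centroid(s) for s in neighbor_item_sets]
    all_items = np.vstack([target_items] + neighbor_item_sets)
    radius = np.mean([np.linalg.norm(i - centers[0]) for i in all_items])
    return centers, radius

def recommend(new_items, centers, radius):
    """A new item is preferred if it falls within radius of any centroid."""
    return [i for i in new_items
            if any(np.linalg.norm(i - c) <= radius for c in centers)]

target = np.array([[0.2, 0.3], [0.3, 0.2]])          # purchased items (2-D features)
neighbors = [np.array([[0.25, 0.35], [0.4, 0.3]])]   # one neighbor's items
centers, r = boundary(target, neighbors)
print(recommend(np.array([[0.3, 0.3], [0.9, 0.9]]), centers, r))  # first item only
```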

Prefetching based on the Type-Level Access Pattern in Object-Relational DBMSs (객체관계형 DBMS에서 타입수준 액세스 패턴을 이용한 선인출 전략)

  • Han, Wook-Shin;Moon, Yang-Sae;Whang, Kyu-Young
    • Journal of KIISE:Databases, v.28 no.4, pp.529-544, 2001
  • Prefetching is an effective method to minimize the number of roundtrips between the client and the server in database management systems. In this paper we propose the new notions of the type-level access pattern and type-level access locality, and develop an efficient prefetching policy based on these notions. A type-level access pattern is a sequence of attributes that are referenced in accessing the objects; type-level access locality is the phenomenon that regular and repetitive type-level access patterns exist. Existing prefetching methods are based on object-level or page-level access patterns, which consist of the object-ids or page-ids of the objects accessed. The drawback of these methods is that they work only when exactly the same objects or pages are accessed repeatedly. In contrast, even when the same objects are not accessed repeatedly, our technique effectively prefetches objects if the same attributes are referenced repeatedly, i.e., if there is type-level access locality. Many navigational applications in Object-Relational Database Management Systems (ORDBMSs) have type-level access locality, so our technique can be employed in ORDBMSs to effectively reduce the number of roundtrips and thereby significantly enhance performance. We have conducted extensive experiments in a prototype ORDBMS to show the effectiveness of our algorithm. Experimental results using the OO7 benchmark and a real GIS application show that our technique provides orders of magnitude improvement in roundtrips and several-fold improvement in overall performance over on-demand fetching and context-based prefetching, a state-of-the-art prefetching method. These results indicate that our approach significantly improves performance and is a practical method that can be implemented in commercial ORDBMSs.
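
A toy sketch of the type-level idea, assuming a client-side component that learns a per-type attribute sequence and prefetches the remaining attributes when a learned pattern recurs; all names are invented for illustration and a real ORDBMS implementation would sit in the client-side object cache layer.

```python
# Toy sketch: learn a per-type attribute access pattern, then prefetch
# the rest of the pattern when its prefix recurs on a different object.
from collections import defaultdict

class TypeLevelPrefetcher:
    def __init__(self):
        self.patterns = defaultdict(list)  # type name -> learned attribute sequence
        self.current = defaultdict(list)   # type name -> attributes seen so far

    def on_access(self, type_name, attribute):
        seen = self.current[type_name]
        seen.append(attribute)
        learned = self.patterns[type_name]
        # Accesses so far form a prefix of a learned pattern: prefetch the rest.
        if learned[:len(seen)] == seen and len(learned) > len(seen):
            return learned[len(seen):]     # attributes to fetch in one roundtrip
        return []

    def end_of_object(self, type_name):
        self.current[type_name], seq = [], self.current[type_name]
        self.patterns[type_name] = seq     # remember the latest pattern

p = TypeLevelPrefetcher()
for attr in ["name", "dept", "advisor"]:   # first Student object: learn pattern
    p.on_access("Student", attr)
p.end_of_object("Student")
print(p.on_access("Student", "name"))      # ['dept', 'advisor'] -> prefetch
```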


Intercomparison of Daegwallyeong Cloud Physics Observation System (CPOS) Products and the Visibility Calculation by the FSSP Size Distribution during 2006-2008 (대관령 구름물리관측시스템 산출물 평가 및 FSSP를 이용한 시정환산 시험연구)

  • Yang, Ha-Young;Jeong, Jin-Yim;Chang, Ki-Ho;Cha, Joo-Wan;Jung, Jae-Won;Kim, Yoo-Chul;Lee, Myoung-Joo;Bae, Jin-Young;Kang, Sun-Young;Kim, Kum-Lan;Choi, Young-Jean;Choi, Chee-Young
    • Korean Journal of Remote Sensing, v.26 no.2, pp.65-73, 2010
  • To observe and analyze the characteristics of cloud and precipitation properties, the Cloud Physics Observation System (CPOS) has been operated since December 2003 at Daegwallyeong (37.4°N, 128.4°E, 842 m) in the Taebaek Mountains. The major instruments of CPOS are as follows: Forward Scattering Spectrometer Probe (FSSP), Optical Particle Counter (OPC), Visibility Sensor (VS), PARSIVEL disdrometer, Microwave Radiometer (MWR), and Micro Rain Radar (MRR). The former four instruments (FSSP, OPC, visibility sensor, and PARSIVEL) are for observing and analyzing the characteristics of ground cloud (fog) and precipitation, and the others are for vertical cloud characteristics (http://weamod.metri.re.kr) in real time. To verify the CPOS products, comparisons between the instrumental products were conducted: the qualitative size distributions of the FSSP and OPC during hygroscopic seeding experiments, the precipitable water vapor of the MWR and radiosonde, and the rainfall rates of the PARSIVEL (or MRR) and rain gauge. Most comparisons show good agreement, with correlation coefficients above 0.7. These reliable CPOS products will be useful for cloud-related studies such as the cloud-aerosol indirect effect or cloud seeding. The visibility value is derived from the droplet size distribution of the FSSP. The derived FSSP visibility shows a constant overestimation by a factor of 1.7 to 1.9 compared with the values of two visibility sensors (SVS (Sentry Visibility Sensor) and PWD22 (Present Weather Detector 22)). We believe this bias comes from the limited droplet size range (2~47 μm) measured by the FSSP. Further studies are needed after introducing new instruments covering other size ranges.
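
A minimal sketch of deriving visibility from a droplet size distribution, assuming geometric-optics extinction (Q_ext ≈ 2) and the Koschmieder relation V = 3.912 / σ_ext; the paper's exact conversion and constants may differ.

```python
# Minimal sketch under stated assumptions: Q_ext ~ 2 and Koschmieder's law.
import numpy as np

def visibility_from_fssp(radii_um, number_conc_cm3):
    """radii_um: bin-center droplet radii (μm); number_conc_cm3: N per bin (cm^-3)."""
    r_m = np.asarray(radii_um) * 1e-6               # μm -> m
    n_m3 = np.asarray(number_conc_cm3) * 1e6        # cm^-3 -> m^-3
    sigma_ext = np.sum(2.0 * np.pi * r_m**2 * n_m3) # extinction coefficient (m^-1)
    return 3.912 / sigma_ext                        # Koschmieder visibility (m)

# A dense fog: many small droplets within the FSSP 2-47 μm range
radii = [3, 6, 10, 15]     # μm (illustrative bins)
conc = [120, 80, 30, 5]    # cm^-3 (illustrative counts)
print(f"{visibility_from_fssp(radii, conc):.0f} m")  # ~77 m, a dense fog
```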

Estimation of Nondestructive Rice Leaf Nitrogen Content Using Ground Optical Sensors (지상광학센서를 이용한 비파괴 벼 엽 질소함량 추정)

  • Kim, Yi-Hyun;Hong, Suk-Young
    • Korean Journal of Soil Science and Fertilizer, v.40 no.6, pp.435-441, 2007
  • Ground-based optical sensing over the crop canopy provides information on the mass of the plant body which reflects the light, as well as on crop nitrogen content, which is closely related to the greenness of plant leaves. This method has the merit of being non-destructive and real-time, and thus can conveniently support decision making on nitrogen fertilizer application for crops standing in fields. In the present study, the relationships among the leaf nitrogen content of the rice canopy, crop growth status, and Normalized Difference Vegetation Index (NDVI) values were investigated. The green normalized difference vegetation index, gNDVI = (ρ0.80μm - ρ0.55μm)/(ρ0.80μm + ρ0.55μm), and the red-based rNDVI = (ρ0.80μm - ρ0.68μm)/(ρ0.80μm + ρ0.68μm) were measured using two different active sensors (GreenSeeker, NTech Inc., USA). The study was conducted in 2005-06 during the rice growing season at the experimental plots of the National Institute of Agricultural Science and Technology located at Suwon, Korea. The experiments were carried out in a randomized complete block design with four levels of nitrogen fertilizer (0, 70, 100, and 130 kg N/ha) and the same amounts of phosphorus and potassium fertilizer. gNDVI and rNDVI increased as growth advanced, reached maximum values around early August, and then decreased as the crop matured. gNDVI values and leaf nitrogen content were highly correlated in early July of both 2005 and 2006. On the basis of this finding, we attempted to estimate leaf N content using the gNDVI data obtained in 2005 and 2006. The determination coefficients of the linear model based on gNDVI in 2005 and 2006 were 0.88 and 0.94, respectively. The measured and estimated leaf N contents using gNDVI values showed good agreement (R² = 0.86***). Results from this study show that gNDVI values have a significant positive correlation with leaf N content and can be used to estimate leaf N before the panicle formation stage; gNDVI appears to be a very effective parameter for estimating the leaf N content of the rice canopy.
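
A minimal sketch of the two vegetation indices defined above and a linear leaf-N model fitted to gNDVI. The reflectance and leaf-N values here are invented for illustration; the paper fits its model to field measurements.

```python
# Sketch of the index formulas and a gNDVI-based linear leaf-N model.
import numpy as np

def gndvi(nir, green):   # (ρ0.80 - ρ0.55) / (ρ0.80 + ρ0.55)
    return (nir - green) / (nir + green)

def rndvi(nir, red):     # (ρ0.80 - ρ0.68) / (ρ0.80 + ρ0.68)
    return (nir - red) / (nir + red)

g = np.array([gndvi(0.45, 0.12), gndvi(0.50, 0.10), gndvi(0.55, 0.08)])
leaf_n = np.array([2.1, 2.6, 3.0])            # % N, hypothetical field values
slope, intercept = np.polyfit(g, leaf_n, 1)   # linear model: leaf_n ~ gNDVI
print(f"leafN = {slope:.2f} * gNDVI + {intercept:.2f}")
```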

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.19-41, 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of analysis. Until recently, text mining studies have focused on applications of the second step. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve the quality of analysis results by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly fed to a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. Mapping arbitrary objects into a fixed-dimensional space while maintaining algebraic properties, for the purpose of structuring text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document using the whole corpus contained in the document, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document into a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations of traditional document embedding methods. This study targets documents that explicitly separate body content and keywords. For a document without keywords, the method can be applied after extracting keywords through various analysis methods; since keyword extraction is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that traditional document embedding is affected by miscellaneous words as well as core words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors for that document. Next, clustering is conducted on the set of keyword vectors of each document to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
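
A condensed sketch of steps (2)-(5), assuming gensim's Word2Vec and scikit-learn's KMeans as stand-ins for the paper's implementation; the corpus, keywords, and dimensions are toy placeholders.

```python
# Sketch: embed tokens, extract keyword vectors, cluster them, and use
# cluster centroids as the document's multiple vectors.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
import numpy as np

corpus = [["deep", "learning", "image", "recognition"],
          ["stock", "market", "price", "prediction"],
          ["deep", "network", "stock", "prediction"]]
model = Word2Vec(corpus, vector_size=16, min_count=1, seed=0)  # step (2)

doc_keywords = ["deep", "learning", "stock", "market"]  # two latent subjects
keyword_vecs = np.array([model.wv[k] for k in doc_keywords])  # step (3)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(keyword_vecs)  # (4)
multi_vectors = kmeans.cluster_centers_   # step (5): one vector per subject
print(multi_vectors.shape)                # (2, 16)
```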

An Experimental Study on the Hydration Heat of Concrete Using Phosphate based Inorganic Salt (인산계 무기염을 이용한 콘크리트의 수화 발열 특성에 관한 실험적 연구)

  • Jeong, Seok-Man;Kim, Se-Hwan;Yang, Wan-Hee;Kim, Young-Sun;Ki, Jun-Do;Lee, Gun-Cheol
    • Journal of the Korea Institute of Building Construction, v.20 no.6, pp.489-495, 2020
  • Whereas control of the hydration heat in mass concrete has become important as concrete structures grow larger, many conventional strategies show limitations in effectiveness and practicality. Therefore, in this study, as a means of controlling the heat of hydration of mass concrete, a method to reduce the heat of hydration by controlling the hardening of cement was examined. The reduction of hydration heat by the developed phosphate-based inorganic salt was verified in insulated boxes filled with binder paste or concrete mixture. That is, the effects of the phosphate inorganic salt on the hydration heat, flow or slump, and compressive strength were analyzed in binary and ternary blended cements, which are generally used for low-heat applications. As a result, the internal maximum temperature rise induced by the hydration heat was decreased by 9.5~10.6% and 10.1~11.7% for binder paste and concrete mixed with the phosphate inorganic salt, respectively. Besides, a delay of the time to the peak temperature was clearly observed, which is beneficial to the dissipation of internal hydration heat in real structures. The phosphate inorganic salt developed and verified by the series of experiments described above showed better performance than existing ones in terms of control of the hydration heat and other properties. It can be used for hydration heat control of mass concrete in the future.