• Title/Summary/Keyword: ADAM 10


Does an extensive diagnostic workup for upfront resectable pancreatic cancer result in a delay which affects survival? Results from an international multicentre study

  • Thomas B. Russell;Peter L. Labib;Jemimah Denson;Fabio Ausania;Elizabeth Pando;Keith J. Roberts;Ambareen Kausar;Vasileios K. Mavroeidis;Gabriele Marangoni;Sarah C. Thomasset;Adam E. Frampton;Pavlos Lykoudis;Manuel Maglione;Nassir Alhaboob;Hassaan Bari;Andrew M. Smith;Duncan Spalding;Parthi Srinivasan;Brian R. Davidson;Ricky H. Bhogal;Daniel Croagh;Ashray Rajagopalan;Ismael Dominguez;Rohan Thakkar;Dhanny Gomez;Michael A. Silva;Pierfrancesco Lapolla;Andrea Mingoli;Alberto Porcu;Teresa Perra;Nehal S. Shah;Zaed Z. R. Hamady;Bilal Al-Sarrieh;Alejandro Serrablo;Somaiah Aroori
    • Annals of Hepato-Biliary-Pancreatic Surgery / v.27 no.4 / pp.403-414 / 2023
  • Backgrounds/Aims: Pancreatoduodenectomy (PD) is recommended in fit patients with a carcinoma (PDAC) of the pancreatic head, and a delayed resection may affect survival. This study aimed to correlate the time from staging to PD with long-term survival, and to study the impact of preoperative investigations (if any) on the timing of surgery. Methods: Data were extracted from the Recurrence After Whipple's (RAW) study, a multicentre retrospective study of PD outcomes. Only PDAC patients who underwent an upfront resection were included; patients who received neoadjuvant chemo-/radiotherapy were excluded. Group A (PD within 28 days of the most recent preoperative computed tomography [CT]) was compared to group B (> 28 days). Results: A total of 595 patients were included. Compared to group A (median CT-PD time: 12.5 days, interquartile range: 6-21), group B (49 days, 39-64.5) had similar one-year survival (73% vs. 75%, p = 0.6), five-year survival (23% vs. 21%, p = 0.6), and median time-to-death (17 vs. 18 months, p = 0.8). Staging laparoscopy (43 vs. 29.5 days, p = 0.009) and preoperative biliary stenting (39 vs. 20 days, p < 0.001) were associated with a delay to PD, but magnetic resonance imaging (32 vs. 32 days, p = 0.5), positron emission tomography (40 vs. 31 days, p > 0.99) and endoscopic ultrasonography (28 vs. 32 days, p > 0.99) were not. Conclusions: Although a treatment delay may give rise to patient anxiety, our findings suggest it does not correlate with worse survival. A delay may be necessary to obtain further information and to minimize the number of PD patients diagnosed with early disease recurrence.

Do some patients receive unnecessary parenteral nutrition after pancreatoduodenectomy? Results from an international multicentre study

  • Thomas B. Russell;Peter L. Labib;Paula Murphy;Fabio Ausania;Elizabeth Pando;Keith J. Roberts;Ambareen Kausar;Vasileios K. Mavroeidis;Gabriele Marangoni;Sarah C. Thomasset;Adam E. Frampton;Pavlos Lykoudis;Manuel Maglione;Nassir Alhaboob;Hassaan Bari;Andrew M. Smith;Duncan Spalding;Parthi Srinivasan;Brian R. Davidson;Ricky H. Bhogal;Daniel Croagh;Ismael Dominguez;Rohan Thakkar;Dhanny Gomez;Michael A. Silva;Pierfrancesco Lapolla;Andrea Mingoli;Alberto Porcu;Nehal S. Shah;Zaed Z. R. Hamady;Bilal Al-Sarrieh;Alejandro Serrablo;Somaiah Aroori
    • Annals of Hepato-Biliary-Pancreatic Surgery / v.28 no.1 / pp.70-79 / 2024
  • Backgrounds/Aims: After pancreatoduodenectomy (PD), an early oral diet is recommended; however, the postoperative nutritional management of PD patients is known to be highly variable, with some centers still routinely providing parenteral nutrition (PN). Some patients who receive PN experience clinically significant complications, underscoring the need for its judicious use. Using a large cohort, this study aimed to determine the proportion of PD patients who received postoperative nutritional support (NS), describe the nature of this support, and investigate whether receiving PN correlated with adverse perioperative outcomes. Methods: Data were extracted from the Recurrence After Whipple's study, a retrospective multicenter study of PD outcomes. Results: In total, 1,323 patients (89%) had data available on their postoperative NS status. Of these, 45% received postoperative NS, which was "enteral only," "parenteral only," or "enteral and parenteral" in 44%, 35%, and 21% of cases, respectively. Body mass index < 18.5 kg/m2 (p = 0.03), absence of preoperative biliary stenting (p = 0.009), and serum albumin < 36 g/L (p = 0.009) all correlated with receiving postoperative NS. Among those who did not develop a serious postoperative complication, i.e., those who had a relatively uneventful recovery, 20% received PN. Conclusions: A considerable number of patients who had an uneventful recovery received PN. PN is not without risk and should be reserved for those who are unable to take an oral diet. PD patients should undergo pre- and postoperative assessment by nutrition professionals to ensure they are managed appropriately and to optimize perioperative outcomes.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning algorithms, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean text. We constructed language models using three or four LSTM layers. Each model was trained using stochastic gradient descent as well as more advanced optimization algorithms: Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras on the Theano backend. After preprocessing, the dataset included 74 unique characters, comprising vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and an output consisting of the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. All the optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved and were even worse under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for Korean language processing in the fields of natural language processing and speech recognition, which are the basis of artificial intelligence systems.
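The following is a minimal sketch of the kind of phoneme-level LSTM language model the abstract describes, assuming the tf.keras API (the original work used Keras on a Theano backend). The vocabulary size, sequence length, and layer counts come from the abstract; the LSTM width, one-hot encoding, and the choice of Adam here are illustrative assumptions, since the study compares several optimizers.

```python
# Sketch of a phoneme-level LSTM language model as described above.
# Assumed details (not in the abstract): tf.keras API, 512-unit layers, one-hot inputs.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 74  # unique characters after preprocessing (from the abstract)
SEQ_LEN = 20     # 20 consecutive characters predict the 21st (from the abstract)

def build_model(num_lstm_layers=3, units=512):
    """Stacked LSTM language model with 3 or 4 layers, as in the study."""
    model = models.Sequential()
    model.add(layers.Input(shape=(SEQ_LEN, VOCAB_SIZE)))
    for i in range(num_lstm_layers):
        # Intermediate LSTM layers return full sequences so they can be stacked.
        model.add(layers.LSTM(units, return_sequences=(i < num_lstm_layers - 1)))
    model.add(layers.Dense(VOCAB_SIZE, activation="softmax"))
    # The study compares SGD against Adagrad, RMSprop, Adadelta, Adam, Adamax, Nadam.
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

def make_pairs(encoded):
    """Build one-hot sliding-window pairs: SEQ_LEN characters -> next character."""
    n = len(encoded) - SEQ_LEN
    X = np.zeros((n, SEQ_LEN, VOCAB_SIZE), dtype=np.float32)
    y = np.zeros((n, VOCAB_SIZE), dtype=np.float32)
    for i in range(n):
        for t, c in enumerate(encoded[i:i + SEQ_LEN]):
            X[i, t, c] = 1.0
        y[i, encoded[i + SEQ_LEN]] = 1.0
    return X, y
```

Test-set perplexity, the metric the study compares across optimizers, can then be obtained as the exponential of the mean cross-entropy loss.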

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field sensor, and gyroscope, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status was defined as a subset of user interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, and z axis value of the sensor data, and sequence data were generated using the sliding window method. The sequence data then become the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consists of three convolutional layers and has no pooling layer, so as to preserve the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (Adam) optimization algorithm, and the mini-batch size was set to 128. Dropout was applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to transfer to evaluation data that follows a different distribution. We expect to obtain a model with robust recognition performance against changes in data not considered during model training.
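As an illustration, the following sketch assembles the CNN-LSTM architecture described above in tf.keras. The three convolutional layers without pooling, the two 128-cell LSTM layers, the dropout on the LSTM input, the normal(0, 0.1) weight initialization, the cross-entropy loss, and Adam with an initial learning rate of 0.001 decayed by 0.99 per epoch come from the abstract; the window length, channel count, filter count, kernel size, and dropout rate are assumptions.

```python
# Sketch of the CNN-LSTM accompanying-status classifier described above.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 128   # sliding-window length (assumed; not stated in the abstract)
CHANNELS = 9   # x/y/z axes of accelerometer, magnetic field, and gyroscope

init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.1)  # from the abstract

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    # Three convolutional layers with no pooling, preserving temporal information.
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
    layers.Dropout(0.5),  # dropout on the LSTM input (rate assumed)
    # Two LSTM layers with 128 cells each, learning long-term dependencies.
    layers.LSTM(128, return_sequences=True, kernel_initializer=init),
    layers.LSTM(128, kernel_initializer=init),
    layers.Dense(2, activation="softmax", kernel_initializer=init),  # binary softmax
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Multiply the learning rate by 0.99 each epoch, approximating the decay schedule.
decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.99)
# model.fit(X_train, y_train, batch_size=128, callbacks=[decay], ...)
```

Two such heads (or a shared trunk with two output layers) would be trained for the accompaniment and conversation labels, which the abstract reports as separate classification results.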

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationship between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures suited to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, finding that different colors induce different emotions. Among deep learning studies, some have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion. Both methods improve accuracy by modifying the result value based on statistics of the picture's colors. In the first, the two-color combinations most prevalent across all training data are found, then the two-color combination most prevalent in each test image is found, and the result values are corrected according to the distribution of that color combination. The second weights the result value obtained after the model classifies an image's emotion, using expressions based on the log and exponential functions. Emotion6, classified into six emotions, and Artphoto, classified into eight categories, were used as the image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors primarily distributed in each image were identified. The RGB coordinates of these colors were then compared with the RGB coordinates of the 16 colors above; that is, each was converted to the closest reference color. If combinations of three or more colors were selected, too many color combinations would occur, scattering the distribution so that each combination has little influence on the result value. Therefore, two-color combinations were found and used to weight the model. Before training, the most prevalent color combinations were found for all training data images, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During testing, the two-color combination most prevalent in each test image is found; we then check how that color combination was distributed in the training data and correct the result accordingly. We devised several equations to weight the model's result value based on the extracted colors, as described above. The dataset was randomly divided 80:20, and the model was verified using 20% of the data as a test set. The remaining 80% was split into five folds to perform 5-fold cross-validation, and the model was trained five times using different validation sets. Finally, performance was checked on the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, training was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN architecture than when the CNN architecture was used alone.
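For concreteness, here is a sketch in Python with scikit-learn of the color-extraction stage the abstract describes: cluster an image's pixels into seven colors, snap each cluster center to the nearest of the 16 reference colors, take the two most dominant names as the combination, and weight the classifier's output by that combination's per-class frequency in the training data. The reference RGB values and the log-based weighting formula below are assumptions, since the abstract states only that log- and exponential-based expressions were used.

```python
# Sketch of the K-means RGB color-extraction and result-correction stage.
import numpy as np
from sklearn.cluster import KMeans

# 16 reference colors from the study; these RGB values are assumed approximations.
REFERENCE = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
    "magenta": (255, 0, 255), "brown": (165, 42, 42), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0), "white": (255, 255, 255),
    "black": (0, 0, 0),
}
NAMES = list(REFERENCE)
REF_RGB = np.array([REFERENCE[n] for n in NAMES], dtype=float)

def dominant_color_pair(image_rgb):
    """Cluster pixels into 7 colors (as in the abstract), snap each cluster
    center to the nearest reference color, and return the two most dominant names."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=7)
    snapped = [NAMES[np.argmin(np.linalg.norm(REF_RGB - c, axis=1))]
               for c in km.cluster_centers_]
    totals = {}
    for name, count in zip(snapped, counts):
        totals[name] = totals.get(name, 0) + count
    ranked = sorted(totals, key=totals.get, reverse=True)
    return tuple(ranked[:2])

def corrected_scores(softmax_scores, pair, pair_class_freq):
    """Weight the CNN's softmax output by how often this color pair occurred in
    each training class (log-based weighting; the exact formula is assumed)."""
    freq = np.array([pair_class_freq.get((pair, c), 0) + 1
                     for c in range(len(softmax_scores))], dtype=float)
    weighted = softmax_scores * np.log1p(freq)
    return weighted / weighted.sum()
```

Here `pair_class_freq` stands in for the Python dictionary of per-class color-combination counts that the abstract says is built from the training data before testing.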