Research

Team 1 Project: Multi-Task Spatiotemporal Deep Learning Based Arctic Sea Ice Prediction

Team members:

  • REU Student: Jamal Bourne Jr., Department of Mathematics and Computer Science, McDaniel College, Westminster, MD
  • REU Student: Michael Hu, Department of Computer Science, Georgia Institute of Technology, Atlanta, GA
  • REU Student: Eliot Kim, Nelson Institute for Environmental Studies and Department of Statistics, University of Wisconsin-Madison, Madison, WI
  • REU Student: Peter Kruse, Department of Accounting, Business, and Economics, Juniata College, Huntingdon, PA
  • REU Student: Skylar Lama, Department of Atmospheric and Oceanic Science, University of Maryland, College Park, MD
  • RA Student: Sahara Ali, PhD student, Department of Information Systems, UMBC
  • Collaborator: Dr. Yiyi Huang, Research Scientist at NASA Langley Research Center and Adjunct Research Assistant Professor at UMBC
  • Research Mentor: Dr. Jianwu Wang, Associate Professor of Data Science, Department of Information Systems, UMBC

Abstract: Important natural resources in the Arctic depend heavily on sea ice, making it important to forecast Arctic sea ice changes. Arctic sea ice forecasting often involves two connected tasks: predicting sea ice concentration at each pixel and predicting overall sea ice extent. Instead of building a separate model for each task, in this report we study how multi-task learning techniques can leverage the connections between ice concentration and ice extent to improve accuracy on both prediction tasks. Because of the spatiotemporal nature of the data, we designed two novel multi-task learning models based on CNNs and ConvLSTMs, respectively. We also developed a custom loss function that trains the models to ignore land pixels when making predictions. Our experiments show that our multi-task models achieve better accuracies than single-task models that predict sea ice extent and concentration separately, and that our accuracies are better than or comparable to results in state-of-the-art studies.
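The land-masking idea in the custom loss function can be illustrated with a minimal sketch. This is not the team's actual implementation; the function name `masked_mse` and the boolean-mask layout are hypothetical, and the real models would use a framework loss (e.g. in a deep learning library) rather than plain NumPy. The key point is that error is averaged only over ocean pixels, so land pixels contribute nothing to training.

```python
import numpy as np

def masked_mse(pred, target, land_mask):
    """Mean squared error computed over ocean pixels only.

    land_mask: boolean array, True where a pixel is land
    (hypothetical layout; the actual mask format may differ).
    """
    ocean = ~land_mask                  # keep only non-land pixels
    diff = (pred - target)[ocean]       # errors at ocean pixels
    return float(np.mean(diff ** 2))    # average squared error over ocean

# Toy 2x2 concentration grid: the bottom-right land pixel is ignored,
# so its large error does not affect the loss.
pred = np.array([[0.5, 0.9], [0.1, 0.0]])
target = np.array([[0.5, 1.0], [0.3, 0.7]])
land = np.array([[False, False], [False, True]])
print(masked_mse(pred, target, land))
```

The same masking pattern carries over directly to a differentiable framework loss, since boolean indexing (or multiplying by a 0/1 mask and renormalizing) preserves gradients for the unmasked pixels.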

Deliverables:

Team 2 Project: Determining Optimal Configurations for Deep Fully Connected Neural Networks to Improve Image Reconstruction in Proton Radiotherapy

Team members:

  • REU Student: Alina M. Ali, Department of Mathematics and Statistics, University of Houston-Downtown, Houston, TX
  • REU Student: David C. Lashbrooke Jr., Departments of Mathematics and of Statistics, Purdue University, West Lafayette, IN
  • REU Student: Rodrigo Yepez-Lopez, Department of Computer Science, American University, Washington D.C.
  • REU Student: Aida York, Center for Data, Mathematics, and Computational Sciences, Goucher College, Baltimore, MD
  • RA Student: Carlos Barajas, PhD student, Department of Mathematics and Statistics, UMBC
  • Collaborator: Dr. Jerimy Polf, Associate Professor, Department of Radiation Oncology, University of Maryland School of Medicine
  • Research Mentor: Dr. Matthias K. Gobbert, Professor of Mathematics, Department of Mathematics and Statistics, UMBC

Abstract: Proton therapy is a unique form of radiotherapy that uses protons to treat cancer by irradiating cancerous tumors while avoiding unnecessary radiation exposure to surrounding healthy tissues. Real-time imaging of prompt gamma rays can be used as a tool to make this form of therapy more effective. Compton cameras are one proposed method for the real-time imaging of the prompt gamma rays emitted by the proton beam as it travels through a patient’s body. The non-zero time resolution of the Compton camera, during which all interactions are recorded as occurring simultaneously, causes the reconstructed images to be noisy and insufficiently detailed to evaluate the proton delivery for the patient. Deep learning is a promising method for removing and correcting the different problems in the Compton camera’s data. Previous papers have demonstrated the effectiveness of deep fully connected networks at correcting improperly ordered gamma interactions within the data. We perform a moderately large hyperparameter grid search to find a promising configuration that yields competitive performance while containing fewer neurons, making it compact. The configurations with many neurons, many layers, and a non-zero dropout rate have the best testing accuracy, and these networks still have significantly fewer total neurons than the current neural network implementation. Given considerably more training time, these compact networks could yield equal, if not superior, testing accuracy compared to larger networks. More improvements are still needed for clinical use, and we are currently experimenting with recurrent neural networks to test the viability of that type of architecture for this application.
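The structure of a hyperparameter grid search like the one described above can be sketched as follows. The search ranges here are illustrative placeholders, not the values used in the study, and the size proxy `total_neurons` is a hypothetical helper; the actual search would train and evaluate a network for each configuration.

```python
from itertools import product

# Illustrative search ranges (the study's actual grid may differ).
layer_counts = [2, 4, 8]
neurons_per_layer = [64, 256, 1024]
dropout_rates = [0.0, 0.25]

# Enumerate every combination of the three hyperparameters.
grid = [
    {"layers": l, "neurons": n, "dropout": d}
    for l, n, d in product(layer_counts, neurons_per_layer, dropout_rates)
]

def total_neurons(cfg):
    """Rough network-size proxy: hidden neurons across all layers."""
    return cfg["layers"] * cfg["neurons"]

# Rank configurations smallest-first so compact networks are tried early;
# in the real search each configuration is trained and scored on test accuracy.
grid.sort(key=total_neurons)
print(len(grid), grid[0])
```

Sorting by a size proxy makes the compactness trade-off explicit: among configurations with similar accuracy, the one with the fewest total neurons is preferred.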

Deliverables: