The Fourth International Workshop on Deep and Transfer Learning (DTL2026)

May 12-15, 2026. San Antonio, Texas, USA

Co-located with

9th International Conference on Modern Computing, Networking and Applications (MCNA2026)

Welcome to the Fourth International Workshop on Deep and Transfer Learning (DTL2026)

Call for Papers

Deep learning has fundamentally transformed computer science, serving as the backbone for breakthroughs in computer vision, natural language processing (NLP), speech recognition, and robotics. While the efficacy of deep and complex architectures—such as Transformers and Recurrent Convolutional Neural Networks—is undeniable, the field faces significant hurdles regarding computational cost, data dependency, and adaptability.

As we move toward 2026, the focus has shifted from training models from scratch to adapting robust pre-trained models. Transfer Learning and Multi-task Learning have emerged as critical methodologies to exploit available data, adapting previously learned knowledge to emerging domains with limited labeled datasets. Simultaneously, Deep Reinforcement Learning (DRL) continues to evolve, creating systems capable of autonomous decision-making and real-world adaptation through trial-and-error and reward optimization.

Despite rapid progress, many challenges remain unsolved, particularly regarding resource efficiency, domain adaptation in dynamic environments, and the integration of generative capabilities. The DTL 2026 Workshop aims to bring together researchers working at the intersection of Deep Learning, Reinforcement Learning, and Transfer Learning. We seek to bridge the gap between theory and practice by providing a platform for researchers and practitioners to discuss novel architectures, criticize current theories, and share results on adapting models to new tasks efficiently.

Topics of Interest

We invite the submission of original papers on all topics related to Deep Learning, Deep Reinforcement Learning, and Transfer/Multi-task Learning. Given the current landscape, we have a special interest in the following areas:

  1. Deep Learning Foundations & Applications
    • Deep learning for Generative AI and Large Language Models (LLMs)
    • Deep learning for Computer Vision and Multimodal Systems
    • Deep learning for Natural Language Processing (NLP) and Sentiment Analysis
    • Deep learning for Recommender Systems and Social Network Analysis
    • Optimization techniques for large-scale Deep Learning
    • Computational Biology and Healthcare applications
  2. Deep Reinforcement Learning (DRL)
    • Reinforcement Learning from Human Feedback (RLHF)
    • Stabilizing learning dynamics in Deep RL
    • Deep Transfer Learning for Robots and Control Systems
    • Reward shaping and inverse reinforcement learning
    • Deep RL for Game Playing and Simulation
    • Scaling up prior RL solutions for real-world application
    • Energy-efficient Deep Reinforcement Learning
  3. Transfer and Multi-task Learning
    • New perspectives and theories on Transfer and Multi-task Learning
    • Domain Adaptation and Domain Generalization
    • Parameter-Efficient Fine-Tuning (PEFT) for Foundation Models
    • Cross-modality Transfer (e.g., Image-to-Text, Text-to-Audio)
    • Transfer learning across different architectures (e.g., CNN to Transformer/RNN)
    • Few-shot, Zero-shot, and One-shot learning approaches
    • Transfer from weakly labeled, noisy, or synthetic data
  4. Emerging Challenges & Systems
    • Federated Learning and Privacy-Preserving AI
    • Resource-Efficient Deep Learning: Green AI, compression, and edge computing
    • Robustness and Safety: Dataset bias, concept drift, and adversarial robustness
    • Systems Management: Deep learning for network resource management
    • Benchmarks, Open-source packages, and reproducible research

Organization Committee

  • Jaime Lloret Mauri, Universidad Politécnica de Valencia, Spain
  • Zilong Ye, California State University, Los Angeles, USA
  • Thar Baker, Liverpool John Moores University, UK
  • Moayad Aloqaily, Carleton University, Canada
  • Mohammad Alsmirat, East Texas A&M University, USA

KEYNOTE SPEAKERS

Will be announced soon.

Author Submission Guidelines:

Submission Site:

https://easychair.org/conferences/?conf=mcna2026

Paper format

Submitted papers (.pdf format) must use the A4 IEEE Manuscript Templates for Conference Proceedings. Please remember to add Keywords to your submission.

Length

Submitted papers may be 6 to 8 pages. Up to two additional pages may be added for references. The reference pages must only contain references. Overlength papers will be rejected without review.

Originality

Papers submitted to DTL must be the original work of the authors and may not be simultaneously under review elsewhere. Publications that have been peer-reviewed and have appeared at other conferences or workshops may not be submitted to DTL. Authors should be aware that IEEE has a strict policy regarding plagiarism: https://www.ieee.org/publications/rights/plagiarism/plagiarism-faq.html. The authors' prior work must be cited appropriately.

Author list

Please ensure that you submit your papers with the full and final list of authors in the correct order. The author list registered for each submission is not allowed to be changed in any way after the paper submission deadline.

Proofreading:

Please proofread your submission carefully. It is essential that the language used in the paper is clear and correct so that it is easily understandable. (Either US English or UK English spelling conventions are acceptable.)

Publication:

All accepted papers in DTL2026 and the workshops co-located with it will be submitted to IEEE Xplore for possible publication.

Program

The program will be announced with the MCNA2026 program at https://mcna-conference.org/2026/program.php

Venue

For venue and accommodation information, please visit https://mcna-conference.org/2026/venue.php

Registration

For registration information, please visit https://mcna-conference.org/2026/registration.php

Camera Ready

For camera-ready submission information, please visit https://mcna-conference.org/2026/cameraready.php