Post-doctoral Researcher (M/F): Hardware-aware Neural Architecture Optimization on the Edge

Contract type: Fixed-term contract (CDD)

Required degree: PhD or equivalent

Position: Post-doctoral Researcher

Desired experience level: Recent graduate

About the research centre or functional department

The Inria centre at the University of Lille, created in 2008, employs 360 people, including 305 scientists spread across 15 research teams. Recognised for its strong involvement in the socio-economic development of the Hauts-de-France region, the Inria centre at the University of Lille maintains close ties with large companies and SMEs. By fostering synergies between researchers and industry, Inria contributes to the transfer of skills and expertise in digital technologies, and provides access to the best of European and international research for the benefit of innovation and companies, particularly in the region.

For more than 10 years, the Inria centre at the University of Lille has been located at the heart of Lille's university and scientific ecosystem, as well as at the heart of the French Tech, with a technology showroom on avenue de Bretagne in Lille, on the EuraTechnologies economic-excellence site dedicated to information and communication technologies (ICT).

Context and assets of the position

Assigned mission

Deep Neural Networks (DNNs) and hardware accelerators are the two driving forces behind recent progress in Edge AI. On the one hand, new neural architecture paradigms appear every month, striving for ever more accuracy and efficiency. On the other hand, the hardware market has shifted towards devices that combine flexibility and generality with lower energy demands, while preserving the user experience through lower latency.
When DNN models are deployed on resource-constrained systems (e.g., edge computing), it becomes essential to optimize them meticulously in order to strike the best balance between accuracy, execution latency, and energy efficiency. To address this difficulty, our objective in this project is to tackle Hardware-aware Neural Architecture Search (HW-aware NAS) as a new AutoML paradigm targeting edge systems. HW-aware NAS incorporates hardware efficiency as an additional optimization objective during the exploration of the neural architecture design space.
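As a minimal illustration of this multi-objective view (a sketch, assuming three hypothetical metrics per candidate architecture: predicted accuracy to maximize, latency and energy to minimize), the set of interesting trade-offs is the Pareto front of the evaluated candidates:

```python
from typing import List, Tuple

# One candidate architecture's metrics: (accuracy, latency_ms, energy_mj).
# Accuracy is maximized; latency and energy are minimized.
Candidate = Tuple[float, float, float]

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is at least as good as `b` on every objective
    and strictly better on at least one."""
    at_least_as_good = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return at_least_as_good and strictly_better

def pareto_front(pop: List[Candidate]) -> List[Candidate]:
    """Keep only the non-dominated accuracy/latency/energy trade-offs."""
    return [c for c in pop if not any(dominates(o, c) for o in pop)]

# Hypothetical measurements for four architectures:
pop = [(0.92, 40.0, 5.0), (0.90, 25.0, 3.0),
       (0.85, 30.0, 4.0), (0.95, 80.0, 9.0)]
front = pareto_front(pop)  # the third candidate is dominated by the second
```

In a full HW-aware NAS loop, these metric triples would come from accuracy surrogates and hardware cost models rather than being given directly.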


Main objectives of the project:


  • New multi-objective performance surrogates: In our previous work, we have widely used two types
    of accuracy estimation strategies: predictive models and weight-sharing supernetworks. Several
    other methods, including zero-shot estimation and learning-curve extrapolation, have recently
    emerged. However, these methods still face limitations in terms of multi-objectivity, scalability,
    and accuracy.

  • Search algorithms and large search spaces: It is crucial to develop efficient and scalable search
    algorithms that can effectively explore large heterogeneous search spaces within practical time
    constraints. This would allow the discovery of highly optimized architectures that match the
    hardware constraints of edge devices while still meeting performance requirements. Exploring
    these spaces with new approaches such as quantum-inspired search algorithms holds promise for
    tackling the challenge of searching large spaces more efficiently. Such algorithms have the
    potential to enhance the search process and expedite the discovery of optimal architectures for
    edge computing. Large Language Model (LLM) based search algorithms for HW-NAS are another
    promising approach that will be explored in the project.

  • Multi-task and multi-modality NN investigation: Multi-task deep learning models are crucial for
    reducing memory occupancy and execution time, especially on edge devices. We will investigate
    how sharing knowledge and architectural components across multiple related tasks can lead to
    more efficient and effective neural architectures. This approach exploits shared representations to
    enhance model performance and reduce resource requirements. In the same context, we will
    explore HW-NAS for Multimodal Neural Networks (MM-NNs), which can effectively process and
    integrate multiscale information from diverse data sources.
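To make the interplay between surrogates and search concrete, here is a toy sketch of surrogate-guided evolutionary search under a hardware budget. Everything is hypothetical: the 4-layer kernel/width search space, the closed-form "accuracy" and "latency" functions (stand-ins for a learned accuracy predictor and a device latency model), and the simple (1+1)-style loop (a real system would rather use a multi-objective algorithm such as NSGA-II with measured hardware costs):

```python
import random

random.seed(0)

# Hypothetical discrete search space: per-layer (kernel, width) choices.
KERNELS = [3, 5, 7]
WIDTHS = [16, 32, 64]
DEPTH = 4

def mutate(arch):
    """Resample one randomly chosen layer of the architecture."""
    child = list(arch)
    i = random.randrange(DEPTH)
    child[i] = (random.choice(KERNELS), random.choice(WIDTHS))
    return child

# Stand-ins for trained surrogates: monotone toy formulas, NOT real models.
def predicted_accuracy(arch):
    return sum(k * w for k, w in arch) / (7 * 64 * DEPTH)

def predicted_latency_ms(arch):
    return sum(0.01 * k * k * w for k, w in arch)

def evolutionary_search(budget=200, latency_limit_ms=15.0):
    """(1+1)-style evolutionary search: start from the smallest (feasible)
    architecture and accept mutations that improve predicted accuracy
    without exceeding the hardware latency budget."""
    best = [(3, 16)] * DEPTH  # smallest architecture, well under budget
    for _ in range(budget):
        cand = mutate(best)
        if predicted_latency_ms(cand) > latency_limit_ms:
            continue  # reject candidates violating the hardware constraint
        if predicted_accuracy(cand) > predicted_accuracy(best):
            best = cand
    return best

# Usage: returns a feasible architecture, never worse than the baseline.
best = evolutionary_search()
```

The same skeleton accommodates the directions above: the stand-in predictors can be replaced by multi-objective surrogates, and the mutation step by quantum-inspired or LLM-driven proposal mechanisms.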

Main activities


Skills

Required qualifications
• A doctoral degree in Computer Science, Computer Engineering, or Electrical Engineering.
• A good background in AI, edge computing, optimization, and GPU/FPGA/multi-core platforms.
• Solid experience in software development with deep learning frameworks such as PyTorch or TensorFlow.

Benefits

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 RTT days (full-time basis) + possibility of exceptional leave (e.g., sick children, moving house)
  • Possibility of teleworking and flexible working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports benefits (Association de gestion des œuvres sociales d'Inria)
  • Access to vocational training
  • Social security coverage