Services

A Tiered Approach to High-Performance AI

From foundational support to advanced optimization, MINERVA’s services scale to meet the evolving needs of the European AI community.

NOTE: Services are also open to EU-based private entities. When support services are provided to such entities, however, obligations related to state aid regulations may apply.

Support for porting AI Applications and workflows to HPC infrastructure (L1)

We aim to help AI user communities use HPC infrastructures.

We assist AI user communities in accessing HPC infrastructures by providing technical assistance in writing proposals for HPC resources at national and European levels. Once resources are secured, we offer support for porting applications, data, and workflows, adapting them to the specifics of each HPC infrastructure. This includes optimizing storage, preprocessing data, and adapting to the available software stacks. We help users work with debugging and profiling tools and become familiar with development and execution tools such as Jupyter Notebook and Kubernetes. We also provide technical documentation for each HPC infrastructure.

Support for the use and mastery of AI libraries on HPC architectures (L2)

We support the use and mastery of AI libraries on HPC architectures, going beyond basic porting.

This includes scaling up workloads on supercomputers for efficient training, tuning, and deployment of large-scale AI models. Our goal is to bridge the knowledge gap often experienced by the AI community, particularly concerning HPC resources.
We also assist with the adaptation of large multimodal models to downstream applications, experimental techniques for obtaining compact versions of foundation models, and handling the different data types involved in multimodality. Furthermore, we provide guidance on emerging challenges arising from regulations on ethical and responsible AI, and on how to address them concretely in specific cases.

Support for the pre-training of open large-scale and foundation models (L3)

We focus on providing expert guidance to AI practitioners in the customization of open foundation models.

Our support includes adapting and scaling pre-trained models for specific downstream tasks and datasets, as well as optimizing models to use HPC resources efficiently. We help integrate diverse data types for multimodality, and we provide guidance on implementing efficient model-compression techniques such as quantization and pruning.
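To illustrate the kind of compression techniques this support covers, the following is a minimal plain-Python sketch of magnitude-based pruning and uniform 8-bit quantization. The weight values are illustrative; in practice these steps are applied to framework tensors (e.g. in PyTorch), not Python lists.

```python
# Minimal sketch of two common model-compression steps:
# magnitude pruning (zero out the smallest weights) and
# uniform 8-bit quantization with a single scale factor.

def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    k = int(len(weights) * sparsity)
    keep = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[k:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

def quantize_uniform_int8(weights):
    """Map float weights to integer codes in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

w = [0.8, -0.05, 0.3, -0.9, 0.01, 0.45]      # hypothetical weights
pruned = prune_by_magnitude(w, sparsity=0.5)  # 3 of 6 weights set to 0
print(pruned)                                 # → [0.8, 0.0, 0.0, -0.9, 0.0, 0.45]
q, s = quantize_uniform_int8(pruned)          # int codes plus shared scale
```

Both steps trade a small, bounded accuracy loss (here, at most half the quantization scale per weight) for lower memory and compute cost.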
Our goal is to promote the reuse of high-quality models and datasets, which enhances competitiveness and fosters innovation within the European digital ecosystem. This includes support for model deployment and validation.

Support for the specialization of various open large-scale and foundation models

We guide users in implementing classic fine-tuning and specialization methods for large-scale and foundation models, specifically for scaled use on HPC infrastructure.

This includes:
Parameter-Efficient Fine-Tuning (PEFT) methods: optimizing LLMs/LMMs or large-scale models by training an adapter or a subset of the model’s parameters.
Multi-Model Systems: improving the generations of LLMs/LMMs or large-scale models with new methods (e.g. Retrieval-Augmented Generation, preference alignment with reinforcement learning from human feedback, Direct Preference Optimization).
Evaluation & Inference: evaluating pre-trained LLMs/LMMs or large-scale models efficiently with optimized evaluation frameworks and inference techniques (PagedAttention, quantization, etc.).
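The adapter idea behind PEFT can be sketched in a few lines of plain Python: a frozen weight matrix W is adapted through a low-rank product B @ A, so only a small fraction of parameters is trained. Dimensions and values below are illustrative, not taken from any real model.

```python
# LoRA-style low-rank adapter sketch: W stays frozen, only B and A
# (together r * (d_in + d_out) parameters) would be trained.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

d_out, d_in, r = 4, 6, 2                  # low rank r << min(d_out, d_in)
W = [[0.1] * d_in for _ in range(d_out)]  # frozen pretrained weights
B = [[0.0] * r for _ in range(d_out)]     # trainable, zero-initialised
A = [[0.01] * d_in for _ in range(r)]     # trainable

delta = matmul(B, A)                      # low-rank update, zero at the start
W_eff = add(W, delta)                     # effective weights used at inference

frozen = d_out * d_in
trainable = d_out * r + r * d_in
print(f"trainable fraction: {trainable / (frozen + trainable):.2f}")
```

Because B starts at zero, the adapted model initially behaves exactly like the pretrained one; for realistic dimensions (e.g. d_in = d_out = 4096, r = 8) the trainable fraction drops below one percent.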

Guidance and support on regulations for ethical and responsible AI

We assist users in addressing emerging challenges arising from regulations on ethical and responsible AI in their applications.

This includes assessing adherence to EU regulations and identifying and resolving issues related to ethical and responsible AI. Examples include:
NSFW/inappropriate content removal from training datasets
Training and data selection techniques to avoid unfair/unethical behavior
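As a toy illustration of the dataset-level filtering listed above, the sketch below drops training samples that match a blocklist. The blocklist terms and corpus are hypothetical, and production pipelines typically combine such rules with trained safety classifiers and human review rather than relying on keyword matching alone.

```python
# Toy rule-based training-data filter: removes samples containing
# blocklisted terms. Terms and samples here are placeholders.

BLOCKLIST = {"slur_example", "nsfw_example"}  # hypothetical terms

def is_clean(sample: str) -> bool:
    tokens = {t.lower().strip(".,!?") for t in sample.split()}
    return not (tokens & BLOCKLIST)

corpus = [
    "A helpful answer about gradient descent.",
    "Some text containing nsfw_example content.",
    "Notes on distributed training.",
]
cleaned = [s for s in corpus if is_clean(s)]
print(len(cleaned))  # → 2 samples kept after filtering
```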
We offer direct support to users on an on-demand basis, implemented via subcontracting with an external service provider. We also collaborate with projects funded under calls HORIZON-CL4-2021-HUMAN-01 and HORIZON-CL4-2023-HUMAN-01 on support and on gap analysis of the service portfolio and internal competences.

Specialised/Advanced trainings for the AI communities

We prepare and implement specialized training activities to provide users with the expertise to efficiently use supercomputers hosted by participating partners and other EuroHPC supercomputers.

These training activities are dedicated to both academic and industrial users, bridging the knowledge gap on the efficient use of HPC resources typically experienced by the AI community. Training involves guidance on HPC architectures and schedulers and promotes the adoption of common methodologies and software stacks, including libraries, tools, and distributed training pipelines.
We also cover available open-source models, such as open foundation models, and how to adapt them efficiently to user needs. Training includes ramp-up workshops that give entry-level users a fast start in HPC environments, as well as specialized, domain-specific courses focused on large-scale use cases. To expand the user base, training activities target ML/AI communities and the entities and projects that interface with such users, such as National Competence Centres (NCCs) and European Digital Innovation Hubs (EDIHs).

Request Services

Write to us or contact us on our social media channels.