Sources
We gather information from major conferences (IPDPS, MICRO, ISCA, HPCA, PACT) and leading companies to cover the latest AI advancements.
Our platform is designed to serve researchers, developers, and enthusiasts seeking in-depth knowledge about AI technologies.
Objective
FAiNDER started as an internal BSC repository to handle the overwhelming volume of AI publications. Over time it became an essential tool for research and collaboration, and now it's open to the entire community. Our platform offers a comprehensive, filterable table and interactive charts that make it easier to navigate, compare, and understand the AI landscape.


Methodology
We believe it's important to detail our methodology to ensure the validity and reliability of the data we present.
We gather information from major conferences (IPDPS, MICRO, ISCA, HPCA, PACT) and leading companies to cover the latest AI advancements.
We synthesize data into concise summaries, focusing on hardware requirements and AI model performance, providing relevant and actionable insights.
We validate our data by cross-referencing multiple sources to ensure accuracy and reliability.
Team
Our research team specializes in computer architecture and is dedicated to advancing the future of AI technology.

RNNs, DLRS & Transformers

DNNs, CNNs & GNNs

Developer & Support

Memory Systems Group Leader
Experts in hardware architecture and high-performance computing dedicated to advancing next-generation memory systems. Through our research, we aim to enhance memory system design to meet the rigorous demands of future HPC applications.
Frequently Asked Questions
We've compiled a list of commonly asked questions to provide you with quick and informative answers.
Who is FAiNDER for?
FAiNDER serves everyone from students doing their first AI literature review to advanced researchers comparing hardware platforms across dozens of models. Junior users get a clear starting point and curated references; advanced users get detailed breakdowns of software pipelines, data types, memory footprint, and hardware targets.
How do I find the AI models I'm interested in?
Use the filters in the AI Explorer page to narrow down models by hardware type, data type, phase (training or inference), sparsity, and more. You can combine multiple filters to quickly identify the exact models you need.
How do I find models that target a specific hardware platform, such as CPU or GPU?
In the AI Explorer, use the hardware filter and select CPU or GPU. You can also combine this with the phase filter to distinguish between training and inference hardware.
Where does the data come from?
FAiNDER synthesizes data from top-tier research papers across major conferences (IPDPS, MICRO, ISCA, HPCA, PACT) and leading companies. All entries are cross-referenced against multiple sources to ensure accuracy and include direct links to the original papers.
How often is the database updated?
The database is continuously updated as new publications are released and reviewed by the team. Subscribe to our newsletter to be notified when new models or datasets are added.
Can I cite FAiNDER in my work?
Yes. Use the 'Generate Reference' button on this page to get a ready-to-use citation in your preferred format.
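For illustration only, a citation produced this way might look like the BibTeX sketch below; every field here is a placeholder, and the 'Generate Reference' button is the authoritative source for the actual entry.

```bibtex
@misc{faider,
  title        = {FAiNDER},
  author       = {{Memory Systems Group, Barcelona Supercomputing Center}},
  howpublished = {Online database},
  year         = {YYYY},        % placeholder: use the year shown by the generator
  note         = {Accessed: DATE}  % placeholder: fill in your access date
}
```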