👋 Following Up on Our FastAI Tutorial
In our last post, "How to Train a Basic Vision Model Using FastAI," we walked you through a simple, fast pipeline for training an image classifier using FastAI. Many of you asked a great follow-up question:
"Where did the `resnet34` model come from — and how did you decide to use it?"
This post answers that — and uses that question as a springboard to explore a much bigger trend: the explosive growth of AI model repositories over the last 12 months.
🧠 Why This Matters: Picking the Right Model Is Easier Than Ever
Twelve months ago, you might have had to download models manually from obscure GitHub repos or retrain everything from scratch.
Today? You have access to millions of pretrained models — instantly usable in your pipeline — thanks to the rise of model hubs.
🚀 What Are Model Hubs?
Model hubs (or repositories) are platforms that host pretrained machine learning models for public or private reuse. They allow developers to:
- Browse models by task (e.g. image classification, text summarization)
- Load models directly into code with one line
- See metadata like architecture, training dataset, license, size, and performance
These hubs are reshaping how we build and deploy AI.
📈 How Model Repositories Have Grown in the Past 12 Months
1. Hugging Face
- 📊 Over 1.7 million models (as of mid-2025)
- 💼 Used by 50,000+ organizations
- 🌍 Hosts models for vision, NLP, audio, and multimodal tasks
- 🔧 Features include Spaces (interactive demos), the Transformers library, datasets, and inference APIs
Hugging Face has gone from an NLP library to the GitHub of AI — and it’s now the first stop for finding modern models.
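To make the "one line to load" idea concrete, here's a small sketch using the `huggingface_hub` client library (this assumes it's installed; `microsoft/resnet-34` is just one example repo id on the Hub):

```python
from huggingface_hub import hf_hub_url

# Every file in a Hub repo resolves to a stable URL. The client builds the
# URL locally, so this line needs no network access.
url = hf_hub_url(repo_id="microsoft/resnet-34", filename="config.json")
print(url)

# With network access, loading a full pretrained model is similarly one line,
# e.g. via the Transformers library:
#   from transformers import pipeline
#   classifier = pipeline("image-classification", model="microsoft/resnet-34")
```

That stable-URL convention is part of what makes the Hub feel like "GitHub for models": every artifact is addressable, versioned, and inspectable.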
2. TensorFlow Hub
- 🧠 Focuses on models trained with TensorFlow/Keras
- 🤝 Tight integration with Google tools
- 📚 Best for vision, image embedding, and TF-based pipelines
- ✅ Simple API for loading and fine-tuning modules
While not as large as Hugging Face, it’s still a reliable source for solid, production-ready models.
3. PyTorch Hub
- 🔥 Offers a PyTorch-native way to load pretrained models (e.g. `resnet34`)
- 🧪 Supports reproducible research via `torch.hub.load()`
- ⚙️ Great for users deep in the PyTorch ecosystem
Many FastAI models (including the one we used) pull from this ecosystem — which brings us back to…
🧪 So… Where Did resnet34 Come From?
In our FastAI tutorial, we built the learner with a single call to FastAI's `vision_learner`, passing `resnet34` as the architecture.
That `resnet34` model?
✔️ It's a pretrained convolutional neural network
✔️ Available through the `torchvision.models` library (and via PyTorch Hub)
✔️ Originally trained on ImageNet (the ILSVRC subset: roughly 1.3 million labeled images across 1,000 classes)
✔️ Popular for its balance of speed, size, and accuracy
By using FastAI’s vision_learner, we load that model directly, wrap it in training logic, and fine-tune it for our task — without having to manually download or retrain anything from scratch.
🤖 Why Use ResNet34 for a Demo?
- ✅ Lightweight — fast to train even without a top-tier GPU
- 🔁 Transfer-learning-ready — pretrained weights help bootstrap accuracy
- 🧠 Interpretable — a well-documented architecture
- 🧩 Widely supported — works out of the box with FastAI, PyTorch, and Hugging Face
It’s not the only choice — but it’s one of the most accessible and reliable backbones for image classification.
🧬 Final Thoughts: Model Hubs Are the New Standard
When we say “you don’t need to build everything from scratch,” model hubs are the reason why.
They’ve grown dramatically over the past year, making deep learning faster, cheaper, and more open:
- 💾 Less time training from scratch
- 📥 More time building value on top of strong foundations
- 🔍 Easier to inspect, compare, and version models
- 💬 More reproducibility and community validation
Just like package managers (npm, pip) revolutionized software dev, model hubs are transforming AI development.
💡 Next Up: Build Smarter With the Model Registry Mindset
In a future post, we’ll show how to create your own internal model registry — so your team can reuse, track, and deploy models at scale. This turns AI from “that thing one team does” into a repeatable, sharable internal asset.
—
🔁 If you haven’t seen the original FastAI demo yet, check it out here: How to Train a Basic Vision Model Using FastAI
Questions? Drop them in the comments or DM us.