Phase I (Initial use cases)
- DVC - Data Version Control
- Databases - Sales / Historical data
- External Marketing Campaign data
- Feature Engineering - Database / Feature Store
- Model Building - ML / DL algorithms
- Model Experiment Tracking - MLflow
- Drift / Monitoring - Evidently AI
- Model Deployment - API / Serverless function
- Actuals Tracking - load ground-truth outcomes back into the database
- Model / ETL Scheduling - jobs on AWS / GCP schedulers or custom cron jobs
- I prefer dockerizing the service and deploying it to GCP App Engine, a no-code / low-code approach for the first few models
- Custom Reporting for trends / patterns, making results more accessible / relatable to the business
- Reporting for past, present, and future, making predictions relatable
- Explainable AI to compare predictions vs. actuals and interpret cause and effect in a more non-ML, business-friendly way
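Evidently AI provides drift reports out of the box, but the underlying idea is worth seeing in plain code. Below is a minimal, illustrative sketch (not Evidently's API) of one common drift statistic, the Population Stability Index (PSI), comparing a reference feature sample against a production sample:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a current sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index by edge count
        # small epsilon avoids log(0) for empty bins
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical example: a training-time feature vs. a shifted production sample
train = [i / 100 for i in range(1000)]        # roughly uniform on [0, 10)
prod  = [2.0 + i / 100 for i in range(1000)]  # same shape, shifted right
```

In practice a tool like Evidently computes this (and richer statistics) per feature and renders the comparison as a report; the scheduled monitoring job above would alert when the score crosses the drift threshold.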
Phase II (Serving a large number of models, > 30)
- Kubernetes-based platform
- Once the platform is deployed, you can leverage the out-of-the-box image, notebook, pipeline, and monitoring options available
- Kubeflow Pipelines
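Kubeflow Pipelines expresses each stage (data loading, feature engineering, training, evaluation) as a containerized step in a DAG. To make the idea concrete without the kfp SDK, here is a toy sketch of the same dependency wiring in plain Python (all step names and functions are hypothetical stand-ins):

```python
# Toy stand-in for a pipeline DAG: each step is a function plus the names of
# the upstream steps it depends on. Kubeflow Pipelines does the same thing
# with containers; this sketch only shows the dependency resolution.
def run_pipeline(steps):
    """steps: {name: (fn, [dependency names])} -> {name: output}."""
    done = {}

    def run(name):
        if name not in done:
            fn, deps = steps[name]
            done[name] = fn(*(run(d) for d in deps))  # resolve deps first
        return done[name]

    for name in steps:
        run(name)
    return done

# Hypothetical stages for the sales-forecasting use case above
steps = {
    "load":     (lambda: [3, 1, 4, 1, 5], []),
    "features": (lambda rows: [x * 2 for x in rows], ["load"]),
    "train":    (lambda feats: {"coef": sum(feats) / len(feats)}, ["features"]),
    "evaluate": (lambda model, feats: model["coef"] / max(feats), ["train", "features"]),
}
outputs = run_pipeline(steps)
```

With Kubeflow, each entry in `steps` becomes a component image, and the platform handles scheduling, caching, and artifact passing between steps on the cluster.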
Kubernetes as a Service: GKE vs. AKS vs. EKS
Keep Exploring!!!