Seldon Core is a machine learning platform that helps your data science team deploy models into production.
It provides an open-source data science stack that runs within a Kubernetes cluster. You can use Seldon to deploy machine learning and deep learning models into production, either on-premises or in the cloud (e.g. GCP, AWS, Azure).
Seldon supports models built with TensorFlow, Keras, Vowpal Wabbit, XGBoost, Gensim and any other model-building tool — it even supports models built with commercial tools and services where the model is exportable.
It includes an API with two key endpoints:
- Predict - Build and deploy supervised machine learning models, created in any library or framework, at scale using containers and microservices.
- Recommend - A high-performance recommendation engine based on user activity and content, with various algorithms ready to run out of the box.
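As a rough illustration of using the Predict endpoint, the sketch below builds a JSON prediction request body. The payload shape, helper name and endpoint path are assumptions for illustration only, not the documented Seldon API.

```python
import json

# Hypothetical helper: payload structure and field names are assumptions,
# not the official Seldon request schema.
def build_predict_request(features):
    """Build a JSON prediction request body from an ordered feature dict."""
    payload = {"request": {"values": [list(features.values())]}}
    return json.dumps(payload)

body = build_predict_request({"age": 34, "income": 52000.0})
# The body would then be POSTed to the Seldon predict endpoint over REST
# (or sent via gRPC), with an OAuth 2.0 bearer token in the
# Authorization header.
```

The response would carry the model's prediction for the supplied feature row, so the same client code works regardless of which library the underlying model was built with.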
Other features include:
- Complex dynamic algorithm configuration and combination with no downtime: run A/B and Multivariate tests, cascade algorithms and create ensembles.
- Command Line Interface (CLI) for configuring and managing Seldon Core.
- Secure OAuth 2.0 REST and gRPC APIs to streamline integration with your data and application.
- Grafana dashboard for real-time analytics built with Kafka Streams, Fluentd and InfluxDB.
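Since the REST and gRPC APIs are secured with OAuth 2.0, a client first exchanges its credentials for an access token. The sketch below shows a standard OAuth 2.0 client-credentials token request; the key/secret values are placeholders, and the exact token endpoint and grant type Seldon expects may differ.

```python
import base64
import urllib.parse

# Hedged sketch of a standard OAuth 2.0 client-credentials token request.
# Credential values are placeholders, not real Seldon configuration.
def token_request(consumer_key, consumer_secret):
    """Return the headers and form body for an OAuth 2.0 token request."""
    creds = base64.b64encode(
        f"{consumer_key}:{consumer_secret}".encode()
    ).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urllib.parse.urlencode({"grant_type": "client_credentials"})
    return headers, body

headers, body = token_request("my-key", "my-secret")
# POST the headers and body to the platform's token endpoint; the JSON
# response contains an access_token, used as "Bearer <token>" on
# subsequent API calls.
```

The returned access token is then attached to every Predict or Recommend call, which is what lets the APIs be exposed securely to your applications.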
Seldon is used by some of the world's most innovative organisations — it's the perfect machine learning deployment platform for start-ups and can scale to meet the demands of large enterprises.
Community & Support
- Join the Seldon Users Group.
- Register for our newsletter to be the first to receive updates about our products and events.
- Visit our website, follow @seldon_io on Twitter and like our Facebook page.
- If you're in London, meet us at TensorFlow London - a community of over 1200 data scientists that we co-organise.
- We also offer commercial support plans and managed services.
Seldon is available under the Apache Licence, Version 2.0.