Join this workshop to learn how to connect text to image search using OpenAI's CLIP model and the OpenSearch® search engine.
Date and Time:
Registration will be open soon!
We'll use a multi-modal vision and language model named CLIP. What makes this model special is that it works with images and text interchangeably, producing embeddings for input that is either a text snippet or an image. We'll guide you step by step through building a system that finds relevant photos using Python, OpenSearch and an AI model.
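If you'd like a preview of the moving parts before the session, here is a minimal sketch (not the workshop's exact code) using the Hugging Face transformers implementation of CLIP and the opensearch-py client. The connection details, the "photos" index and its "path"/"embedding" fields are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from opensearchpy import OpenSearch

# Load CLIP once; the ViT-B/32 checkpoint produces 512-dimensional embeddings.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(path: str) -> list[float]:
    # The vision tower maps a photo into the shared embedding space.
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        return model.get_image_features(**inputs)[0].tolist()

def embed_text(query: str) -> list[float]:
    # The text tower maps a snippet into the *same* space, so a text
    # query can be compared directly against stored image vectors.
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        return model.get_text_features(**inputs)[0].tolist()

# Connection details are placeholders; point this at your own cluster.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# The index needs the k-NN plugin enabled and a knn_vector field.
client.indices.create(index="photos", body={
    "settings": {"index": {"knn": True}},
    "mappings": {"properties": {"embedding": {"type": "knn_vector", "dimension": 512}}},
})

# Index one photo: store its embedding alongside its path.
client.index(index="photos", body={"path": "cat.jpg", "embedding": embed_image("cat.jpg")})

# Search by text: a k-NN query returns the photos whose vectors sit closest.
hits = client.search(index="photos", body={
    "size": 3,
    "query": {"knn": {"embedding": {"vector": embed_text("a cat on a sofa"), "k": 3}}},
})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["path"], hit["_score"])
```

Because both towers share one embedding space, the same index also supports image-to-image search: embed a photo instead of a query string and run the same k-NN query.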
Related resource in our Developer Center: Image recognition with Python, OpenCV, OpenAI CLIP and pgvector
You’ll also need an OpenSearch® service. We will lead you through setting that up in the workshop if you don’t already have one.
Live and interactive sessions to upgrade your skills with expert guidance covering a range of open source technologies.
Explore all workshops

Developer Center
Vector embeddings are key to ML, and here we describe how to use OpenCV, OpenAI CLIP and pgvector to generate vectors and use them to perform image recognition on a corpus of photos.
Developer Center
Get an introduction to machine learning using Aiven services and the Hugging Face API to recommend movies based on Wikipedia synopses.
Developer Center
Leverage TensorFlow, PostgreSQL®, pgvector, and Next.js for vector search with this step-by-step video guide.