Twelve Labs

The Twelve Labs Embed API provides embeddings that represent videos, text, images, and audio in a unified vector space, enabling any-to-any search across different types of content.

Because it processes all modalities natively, it captures signals such as visual expressions, speech, and surrounding context, supporting applications like sentiment analysis, anomaly detection, and recommendation systems.

We’ll look at how to work with Twelve Labs embeddings in Qdrant via the Python and Node SDKs.

Installing the SDKs

$ pip install twelvelabs qdrant-client
$ npm install twelvelabs-js @qdrant/js-client-rest

Setting up the clients

Python:

from twelvelabs import TwelveLabs
from qdrant_client import QdrantClient

# Get your API keys from:
# https://playground.twelvelabs.io/dashboard/api-key
TL_API_KEY = "<YOUR_TWELVE_LABS_API_KEY>"

twelvelabs_client = TwelveLabs(api_key=TL_API_KEY)
qdrant_client = QdrantClient(url="http://localhost:6333")

JavaScript:

import { QdrantClient } from '@qdrant/js-client-rest';
import { TwelveLabs, EmbeddingsTask, SegmentEmbedding } from 'twelvelabs';

// Get your API keys from:
// https://playground.twelvelabs.io/dashboard/api-key
const TL_API_KEY = "<YOUR_TWELVE_LABS_API_KEY>"

const twelveLabsClient = new TwelveLabs({ apiKey: TL_API_KEY });
const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' });
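
If Qdrant is running locally (for example, started with docker run -p 6333:6333 qdrant/qdrant), you can confirm the client can reach it before continuing. A minimal Python check using only the client created above:

# Lists existing collections; raises an error if the Qdrant
# instance is unreachable.
print(qdrant_client.get_collections())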

The following example uses the "Marengo-retrieval-2.6" engine to embed a video. It generates 1024-dimensional embeddings that are designed to be compared with cosine similarity.

You can use the same engine to embed audio, text, and images into the same vector space, enabling cross-modal search.

Embedding videos

Python:

task = twelvelabs_client.embed.task.create(
    engine_name="Marengo-retrieval-2.6",
    video_url="https://sample-videos.com/video321/mp4/720/big_buck_bunny_720p_2mb.mp4"
)

task.wait_for_done(sleep_interval=3)

task_result = twelvelabs_client.embed.task.retrieve(task.id)

JavaScript:

const task = await twelveLabsClient.embed.task.create("Marengo-retrieval-2.6", {
    url: "https://sample-videos.com/video321/mp4/720/big_buck_bunny_720p_2mb.mp4"
})

await task.waitForDone(3)

const taskResult = await twelveLabsClient.embed.task.retrieve(task.id)
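
Video embedding runs asynchronously, and longer videos can take a while to process. Rather than waiting silently, you can have the SDK report progress while it polls. A minimal Python sketch, assuming wait_for_done accepts a callback as in the Twelve Labs examples:

# Called on each poll; prints the task's current status.
def on_task_update(task):
    print(f"Task {task.id}: status={task.status}")

task.wait_for_done(sleep_interval=3, callback=on_task_update)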

Converting the model outputs to Qdrant points

Python:

from qdrant_client.models import PointStruct

points = [
    PointStruct(
        id=idx,
        vector=v.embeddings_float,
        payload={
            "start_offset_sec": v.start_offset_sec,
            "end_offset_sec": v.end_offset_sec,
            "embedding_scope": v.embedding_scope,
        },
    )
    for idx, v in enumerate(task_result.video_embedding.segments)
]

JavaScript:

let points = taskResult.videoEmbedding.segments.map((data, i) => {
    return {
        id: i,
        vector: data.embeddingsFloat,
        payload: {
            startOffsetSec: data.startOffsetSec,
            endOffsetSec: data.endOffsetSec,
            embeddingScope: data.embeddingScope
        }
    }
})
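
Before upserting, it is worth confirming that every segment produced a vector of the expected size, since the collection created below is configured for 1024-dimensional vectors. A small Python sanity check over the points built above:

# Every Marengo-retrieval-2.6 vector should be 1024-dimensional;
# vectors of any other size would be rejected at upsert time.
assert all(len(point.vector) == 1024 for point in points)
print(f"Prepared {len(points)} points from video segments")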

Creating a collection to insert the vectors

Python:

from qdrant_client.models import VectorParams, Distance

collection_name = "twelve_labs_collection"

qdrant_client.create_collection(
    collection_name,
    vectors_config=VectorParams(
        size=1024,
        distance=Distance.COSINE,
    ),
)
qdrant_client.upsert(collection_name, points)

JavaScript:

const COLLECTION_NAME = "twelve_labs_collection";

await qdrantClient.createCollection(COLLECTION_NAME, {
    vectors: {
        size: 1024,
        distance: 'Cosine',
    }
});

await qdrantClient.upsert(COLLECTION_NAME, {
    wait: true,
    points
})
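
Note that create_collection raises an error if the collection already exists, so re-running the snippet will fail. A minimal Python guard using the client's collection_exists check (available in recent qdrant-client versions):

# Create the collection only on the first run; later runs skip
# straight to the upsert.
if not qdrant_client.collection_exists(collection_name):
    qdrant_client.create_collection(
        collection_name,
        vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
    )
qdrant_client.upsert(collection_name, points)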

Searching with text

Once the vectors are added, you can run semantic searches across modalities. Let's try a text query.

Python:

segment = twelvelabs_client.embed.create(
    engine_name="Marengo-retrieval-2.6",
    text="<YOUR_QUERY_TEXT>",
).text_embedding.segments[0]


qdrant_client.query_points(
    collection_name=collection_name,
    query=segment.embeddings_float,
)

JavaScript:

const segment = (await twelveLabsClient.embed.create({
    engineName: "Marengo-retrieval-2.6",
    text: "<YOUR_QUERY_TEXT>"
})).textEmbedding.segments[0]

await qdrantClient.query(COLLECTION_NAME, {
    query: segment.embeddingsFloat,
});
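
Because every modality lands in the same vector space, the same pattern extends to audio and image queries. A hedged Python sketch, assuming the Embed API accepts an audio_url parameter and exposes the result under audio_embedding (names may differ between SDK versions):

# Hypothetical audio query: embed a clip with the same engine,
# then search the stored video segments with the resulting vector.
audio_segment = twelvelabs_client.embed.create(
    engine_name="Marengo-retrieval-2.6",
    audio_url="<YOUR_AUDIO_URL>",
).audio_embedding.segments[0]

qdrant_client.query_points(
    collection_name=collection_name,
    query=audio_segment.embeddings_float,
)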
