How TwelveLabs' Semantic Search Makes Sports Footage Access Easy


Let's say you're a sports organization, a sports team, and you're trying to find the moment a touchdown happens.

Maybe there's a crowd of fans applauding.

Maybe there's a really cool logo in the background while something really exciting is happening during a game.

How do you actually describe and find that exact segment so that you can create a highlight from it or so that you can show it to fans?

And the same challenge exists for every organization that has video: whether you're going through petabytes of evidence to conduct an investigation and write reports, or repurposing content from older shows, it's the same problem of how you understand footage at scale.

With traditional tagging, you would not be able to do this.

You would not be able to describe something you're looking for and find it exactly when you need it.

There is no Ctrl+F for video.

And so the technology that TwelveLabs builds is multimodal video understanding.

We build foundation models that allow you to do semantic search and retrieval across vast amounts of multimodal data, in this case video.
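The core idea behind this kind of semantic search is to embed both the text query and each video segment into a shared vector space, then rank segments by similarity. The sketch below is purely illustrative and is not the TwelveLabs API: the tiny hand-made 3-d vectors stand in for embeddings that a real multimodal foundation model would produce, and the segment labels are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, segments):
    """Return the (label, score) of the segment most similar to the query."""
    scored = [(label, cosine_similarity(query_vec, vec)) for label, vec in segments]
    return max(scored, key=lambda pair: pair[1])

# Toy embeddings standing in for real model output (hypothetical labels).
segments = [
    ("crowd applauding",   [0.9, 0.1, 0.0]),
    ("touchdown scored",   [0.1, 0.9, 0.2]),
    ("halftime interview", [0.0, 0.2, 0.9]),
]

# Pretend this is the embedding of the text query "when does a touchdown happen?"
query = [0.2, 0.95, 0.1]

label, score = search(query, segments)
print(label)  # → touchdown scored
```

In a production system the vectors would be high-dimensional embeddings computed by the model over visual, audio, and text signals, and the linear scan would be replaced by an approximate nearest-neighbor index, but the retrieval principle is the same.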
