Meta said on Friday that it had built a new AI model called Movie Gen that can create realistic-looking video and audio clips in response to user prompts, claiming it can rival tools from leading media generation companies such as OpenAI and ElevenLabs.
Meta provided samples of Movie Gen's creations, including videos of animals swimming and surfing, as well as videos of people performing actions such as painting on a canvas.
Movie Gen can also generate background music and sound effects synchronized with the video content, Meta said in a blog post, and it can be used to edit existing videos.
Meta said Movie Gen's videos can be up to 16 seconds long, and its audio clips can run up to 45 seconds.
The company published data from blind tests indicating that the model compares favorably with offerings from companies such as Runway, OpenAI, ElevenLabs, and Kling.
Meta executives said the company is unlikely to release Movie Gen for free use by developers, as it has done with its Llama series of large language models, adding that it assesses the risks of each model individually.
They declined to comment specifically on Meta's assessment of Movie Gen. Instead, they said, Meta is working directly with the entertainment community and other content creators on uses of Movie Gen, which will be incorporated into Meta's own products sometime next year.