Meta has launched two new AI models, Emu Video and Emu Edit, for video generation and image editing. Emu Video uses a factorized approach to create videos from text and image inputs, while Emu Edit offers precise, instruction-based image manipulation. Though still in the research stage, the models show promise for creators, artists, and animators, opening up new possibilities in content creation.
Social media giant Meta has entered the generative AI race with two new models: Emu Video and Emu Edit. Emu Video uses a factorized approach to generate videos from text and image inputs, adapting to a range of creative needs. Emu Edit specializes in image manipulation, handling tasks such as background alteration, color transformation, and localized editing with precision. Meta's blog post highlights the efficiency of the "factorized" video generation process, which relies on only two diffusion models rather than a deep cascade.
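The factorized idea described above, first generate an image from the text, then generate video frames conditioned on both the text and that image, can be sketched in simplified Python. The function names, data shapes, and stand-in "diffusion" functions below are illustrative assumptions for exposition only; they are not Meta's actual models or API:

```python
# Illustrative sketch of a factorized text-to-video pipeline.
# Stage 1: text -> image. Stage 2: (text, image) -> video frames.
# The two "diffusion models" are placeholder functions, not real models.

def diffuse_image(prompt: str, size=(4, 4)):
    """Stand-in for the first diffusion model: text -> image."""
    # Returns a tiny placeholder 'image' grid tagged with the prompt.
    return [[f"{prompt}-px" for _ in range(size[1])] for _ in range(size[0])]

def diffuse_video(prompt: str, image, num_frames: int = 16):
    """Stand-in for the second diffusion model: (text, image) -> frames."""
    # Each frame is conditioned on the stage-1 image (placeholder logic).
    return [{"frame": t, "conditioned_on": image} for t in range(num_frames)]

def generate_video(prompt: str, num_frames: int = 16):
    """Factorized generation: two models instead of a deep model cascade."""
    image = diffuse_image(prompt)                       # stage 1
    frames = diffuse_video(prompt, image, num_frames)   # stage 2
    return frames

frames = generate_video("a robot painting a sunset", num_frames=8)
print(len(frames))  # 8 frames, each conditioned on the stage-1 image
```

The point of the factorization is that each stage solves a simpler problem: the first model only has to produce one good image, and the second only has to animate it, which is why the pipeline needs just two diffusion models.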
Despite being in the research phase, Meta sees potential applications for creators and artists. Emu Video’s ability to animate images based on text prompts and Emu Edit’s nuanced image alterations signify a leap forward in AI-assisted content creation.
Meta's commitment to responsible deployment aligns with its recent disclosures restricting the use of its generative AI tools in political campaigns on Facebook and Instagram, a cautious approach amid growing regulatory scrutiny.