What Is Runway ML?
Runway is a New York-based AI research company that builds creative tools for filmmakers and content creators. It has raised over $235 million in funding and is considered one of the most research-focused AI video labs in the world.
The company is probably best known as one of the three labs behind Stable Diffusion (alongside CompVis and Stability AI), but its main product is a web-based creative suite focused on video generation and editing. Professional filmmakers have used Runway for visual effects work on actual film productions.
Did you know? Runway co-developed the original Stable Diffusion model before shifting its focus to video AI. The company has collaborated with film studios to put its tools into actual movie productions, not just hobbyist content.
Source: Runway ML company history, 2024
The current flagship model is Gen-3 Alpha. It represents a significant jump in quality over Gen-2, with better motion coherence, more detailed textures, and stronger prompt following. It is not perfect - AI video still has quirks - but it is genuinely impressive for short clips.
Gen-3 Alpha Capabilities
Gen-3 Alpha is Runway's current video generation model. Here is what it can actually do.
| Feature | Gen-3 Alpha Spec |
|---|---|
| Max clip length | 10 seconds |
| Max resolution | 1080p (1920x1080) |
| Frame rate | 24fps |
| Input modes | Text, image, video |
| Generation time | 30-90 seconds per clip |
| Credit cost | 5 credits per 5 seconds |
The most important limitation is the 10-second clip length. Runway is not a tool for producing full scenes or long videos in a single pass. You build longer videos by generating multiple clips and stitching them together in a video editor. Think of each generation as one shot in a film, not an entire sequence.
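If your workflow is scripted, even the stitching step can be automated. Here is a minimal sketch that drives ffmpeg's concat demuxer from Python; it assumes ffmpeg is installed, and the file names are placeholders for clips that share the same codec, resolution, and frame rate (which Runway exports from one model and setting do).

```python
import subprocess
from pathlib import Path

# Placeholder file names: clips exported from Runway, in playback order.
clips = ["shot_01.mp4", "shot_02.mp4", "shot_03.mp4"]

# ffmpeg's concat demuxer reads a text file listing one input per line.
list_file = Path("clips.txt")
list_file.write_text("".join(f"file '{c}'\n" for c in clips))

# "-c copy" joins the clips without re-encoding, which only works when
# they share the same codec, resolution, and frame rate.
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", str(list_file), "-c", "copy", "sequence.mp4"],
    check=True,
)
```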
Quality is strongest for cinematic-style content: landscapes, slow camera movements, abstract visuals, and simple scenes with one or two subjects. It struggles with complex multi-person scenes, precise text rendering, and rapid action sequences.
Text-to-Video Prompting
Text-to-video in Runway works similarly to image generation prompting - the more specific you are, the better your results. Vague prompts produce generic results. Specific prompts produce what you actually envisioned.
Here is the difference between weak and strong prompts:
Weak: "A forest at night"
Strong: "Slow dolly shot through a misty old-growth forest at night, moonlight filtering through the canopy, ground fog at knee height, cinematic color grading, 4K"
The strong prompt specifies the camera movement (dolly shot), lighting conditions (moonlight, fog), mood, and technical quality. Each specific detail guides the model toward what you want.
- Describe the scene - What is in the frame? What is the setting? What is happening?
- Specify camera movement - Static shot, pan left, zoom in, dolly forward, handheld shake. Camera movement dramatically changes the feel of a clip.
- Set the lighting - Golden hour, overcast, neon lights, candlelight. Lighting is half the mood.
- Add a style reference - "Cinematic" and "4K" tend to improve output quality. "Film grain," "anamorphic lens," and "depth of field" add visual texture.
- Generate multiple versions - The same prompt will produce different results each time. Generate 3-4 versions and pick the best one.
Pro Tip
Add a negative prompt to exclude things you do not want. Common useful negatives: "blurry, low quality, watermark, text, cartoon, anime." This helps steer the model away from common failure modes.
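If you generate a lot of clips, it helps to keep these ingredients in a reusable template instead of retyping them. Here is a small, purely illustrative Python helper that assembles a prompt from the pieces covered above; the class and field names are my own invention, not anything Runway defines.

```python
from dataclasses import dataclass, field

@dataclass
class VideoPrompt:
    """Illustrative container for the prompt ingredients discussed above."""
    scene: str                  # what is in the frame and what is happening
    camera: str = ""            # e.g. "slow dolly shot"
    lighting: str = ""          # e.g. "moonlight, ground fog at knee height"
    style: list[str] = field(default_factory=lambda: ["cinematic", "4K"])
    negatives: list[str] = field(
        default_factory=lambda: ["blurry", "low quality", "watermark", "text"]
    )

    def positive(self) -> str:
        parts = [self.camera, self.scene, self.lighting, ", ".join(self.style)]
        return ", ".join(p for p in parts if p)

    def negative(self) -> str:
        return ", ".join(self.negatives)

p = VideoPrompt(
    scene="misty old-growth forest at night, moonlight filtering through the canopy",
    camera="slow dolly shot",
)
print(p.positive())  # paste into the prompt field
print(p.negative())  # paste into the negative prompt field, where offered
```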
Image-to-Video Animation
Image-to-video is where Runway really shines. You upload a static image - a photo, an AI-generated image, an illustration - and Runway animates it into a 5-10 second video clip. The AI infers how objects in the scene would naturally move and generates that motion.
This is the most reliable mode in Runway because you control the starting frame completely. You can use an AI image generator to create exactly the scene you want, then bring it to life with Runway.
A workflow many creators use: generate a high-quality image with Midjourney or Stable Diffusion, upload it to Runway, add a text prompt to describe the motion you want ("the leaves gently sway in a breeze, camera slowly zooms in"), and generate.
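If you would rather script this workflow, Runway also offers a developer API. The sketch below follows the official runwayml Python SDK as publicly documented at the time of writing; the model name, parameter names, and polling pattern are assumptions that may have changed, so check the current API reference before relying on them.

```python
import time
from runwayml import RunwayML  # pip install runwayml

# The SDK reads your API key from the RUNWAYML_API_SECRET environment variable.
client = RunwayML()

# Submit an image-to-video task: a source frame plus a motion description.
# The model name and parameters below follow Runway's public docs at the
# time of writing; verify them against the current API reference.
task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/forest_frame.png",  # placeholder URL
    prompt_text="the leaves gently sway in a breeze, camera slowly zooms in",
)

# Generation is asynchronous, so poll until the task finishes.
while True:
    result = client.tasks.retrieve(task.id)
    if result.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

print(result.status, getattr(result, "output", None))  # output holds video URL(s)
```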
Pro Tip
Use images with clear subjects and simple backgrounds for image-to-video. Complex scenes with many overlapping elements tend to produce motion artifacts and flickering. One strong subject with a clean background gives the AI room to generate believable motion.
Motion Brush Tool
Motion Brush is Runway's most distinctive feature and the one with no direct equivalent in other tools. It gives you control over which parts of an image move and in what direction.
Here is how it works: you upload an image, paint over the areas you want to animate using a brush tool, then set the motion direction and intensity for each painted area. You can have the sky move one direction, a person's hair move another, and keep the background perfectly still.
This level of control is a big deal. Standard image-to-video generates motion semi-randomly based on the scene. Motion Brush lets you be the director and specify exactly what moves.
- Upload your source image - Works best with clear, high-resolution images. AI-generated images often work better than photos because they have cleaner shapes.
- Paint the motion zones - Use the brush to paint over elements you want to move. Use different colors for different motion directions.
- Set motion parameters - Adjust the intensity and direction for each painted area. Start with low intensity to avoid unnatural-looking movement.
- Generate and review - Watch the result. If movement looks unnatural, reduce intensity or repaint the zones with more precision.
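Motion Brush itself is point-and-click, but it can help to see what the brush is conceptually recording: each painted zone is just a mask over the image plus a direction and an intensity. This toy numpy sketch is my own illustration of that idea, not Runway's internal format.

```python
import numpy as np

H, W = 576, 1024  # example working resolution for the source image

class MotionZone:
    """One painted zone: a pixel mask plus a scaled direction vector."""
    def __init__(self, mask: np.ndarray, direction: tuple[float, float], intensity: float):
        dx, dy = direction
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0
        self.mask = mask
        self.vector = (intensity * dx / norm, intensity * dy / norm)

# "Paint" the sky (top third of the frame) drifting right; leave the rest still.
sky = np.zeros((H, W), dtype=bool)
sky[: H // 3, :] = True
zones = [MotionZone(sky, direction=(1.0, 0.0), intensity=0.3)]

# Combine zones into a per-pixel motion field; unpainted pixels stay at zero.
flow = np.zeros((H, W, 2), dtype=np.float32)
for z in zones:
    flow[z.mask] = z.vector

print(flow[0, 0], flow[-1, 0])  # sky pixel moves, foreground pixel stays still
```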
Video-to-Video Transformation
Video-to-video mode takes existing footage and applies a new visual style to it while keeping the underlying motion. You might transform a phone video of a city street into a stylized painted animation, or apply a specific cinematic look to footage shot in flat lighting.
The results vary widely. When it works, it is spectacular. When it does not, you get flickering and inconsistency between frames. The key to good results is using footage with smooth, slow motion and high-quality source video.
Watch Out
Video-to-video uses a lot of credits because it processes every frame. A 10-second clip at 24fps is 240 frames. Make sure you test your style on a short 2-3 second clip before committing to processing longer footage.
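Cutting that short test segment is a one-liner if you have ffmpeg installed; a minimal sketch with placeholder file names:

```python
import subprocess

# Cut a 3-second test clip from the start of the footage without re-encoding,
# so you can trial a video-to-video style cheaply before spending credits
# on the full clip.
subprocess.run(
    ["ffmpeg", "-ss", "0", "-t", "3", "-i", "source_footage.mp4",
     "-c", "copy", "style_test.mp4"],
    check=True,
)
```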
Credits and Pricing
Runway uses a credit system. Different operations cost different amounts of credits, and credits do not roll over month to month on most plans.
| Plan | Monthly Credits | Price/mo | Best For |
|---|---|---|---|
| Free | 125 (one-time) | $0 | Testing the tool |
| Standard | 625 | $15 | Hobbyists, students |
| Pro | 2,250 | $35 | Regular creators |
| Unlimited | Unlimited (slower) | $95 | Heavy production use |
At 5 credits per 5 seconds of video, the Standard plan gives you about 125 five-second clips per month. That sounds like a lot, but credits disappear fast when you are iterating on prompts. Most serious users end up on Pro.
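The credit math is easy to sanity-check yourself. A quick sketch using the 5-credits-per-5-seconds figure and the plan allowances from the table above:

```python
CREDITS_PER_CLIP = 5  # one 5-second Gen-3 Alpha clip, per the spec table

plans = {"Free": 125, "Standard": 625, "Pro": 2250}

for name, credits in plans.items():
    clips = credits // CREDITS_PER_CLIP
    # Generating 4 takes per shot (as recommended above) shrinks the
    # number of usable shots fast.
    shots = clips // 4
    print(f"{name}: {clips} clips/month, ~{shots} shots at 4 takes each")
```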
Creative Applications
Where does Runway actually make sense to use? Here are the use cases where it delivers real value.
- Pre-visualization: Filmmakers use Runway to visualize scenes before shooting. Generate rough footage of the camera angles and lighting you plan to use, then share it with your crew as a visual brief.
- Social media content: Animated backgrounds, abstract loops, and cinematic b-roll clips for social posts and ads. Content that would cost hundreds to shoot can be generated in minutes.
- Music videos: Short clips stitched together make compelling music video content. Each section of a song gets a few generated clips that match the mood.
- Product visuals: Animate product photography to make it more eye-catching for ads. A static product shot becomes a short cinematic clip.
- Concept demonstrations: Show a client what a finished project might look like before any real production begins.