This library provides Griptape Nodes for interacting with the RunwayML video generation services. You can use these nodes to generate videos from images and text prompts.
IMPORTANT: To use these nodes, you will need an API key from RunwayML. Please visit the RunwayML website and their API documentation for more information on how to obtain your key.
To configure your key within the Griptape Nodes IDE:
- Open the Settings menu.
- Navigate to the API Keys & Secrets panel.
- Add a new secret configuration for the service named `RunwayML`.
- Enter your `RUNWAYML_API_SECRET` in the respective field.
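
The secret name used by the nodes is `RUNWAYML_API_SECRET`, as entered above. If you also keep the key in your shell environment, a quick optional sanity check might look like the sketch below (this environment-variable check is an assumption for convenience, not a requirement of the library; the IDE stores the value you enter in Settings):

```python
import os
import sys

# Optional sanity check: confirm RUNWAYML_API_SECRET is visible in the current
# shell environment. The Griptape Nodes IDE keeps its own copy of the secret,
# so this step is not required for the nodes to work.
secret = os.environ.get("RUNWAYML_API_SECRET")
if not secret:
    sys.exit("RUNWAYML_API_SECRET is not set in this shell.")
print(f"RUNWAYML_API_SECRET found ({len(secret)} characters).")
```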
Below is a description of the node and its parameters.
Generates a video from a reference image and a text prompt using the RunwayML API.
| Parameter | Type | Description | Default Value |
|---|---|---|---|
| `image` | `ImageArtifact` / `str` | Input image (required). Accepts `ImageArtifact`, `ImageUrlArtifact`, a public HTTPS URL string, or a base64 data URI string. Local HTTP URLs will be converted to data URIs. | |
| `prompt` | `str` / `TextArtifact` | Text prompt describing the desired video content. | `""` |
| `model` | `str` | RunwayML model to use for generation. | `gen4_turbo` |
| `ratio` | `str` | Aspect ratio for the output video. Must be one of the specific values supported by the RunwayML API (e.g., `"1280:720"`). | `1280:720` |
| `seed` | `int` | Seed for generation; `0` for random. (Note: may not be supported by all models or the current API version for this endpoint.) | `0` |
| `motion_score` | `int` | Controls the amount of motion. (Note: may not be supported by all models or the current API version for this endpoint.) | `10` |
| `upscale` | `bool` | Whether to upscale the generated video. (Note: may not be supported by all models or the current API version for this endpoint.) | `False` |
| `video_output` | `VideoUrlArtifact` | Output: URL of the generated video. | `None` |
| `task_id_output` | `str` | Output: the Task ID of the generation job from RunwayML. | `None` |
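
Because the `image` parameter accepts a base64 data URI string, a local file can be passed to the node without hosting it anywhere. A minimal sketch of producing such a string is below; the helper function is purely illustrative and not part of the library:

```python
import base64
import mimetypes

def image_file_to_data_uri(path: str) -> str:
    """Encode a local image file as a base64 data URI for the node's `image` input."""
    mime_type, _ = mimetypes.guess_type(path)
    mime_type = mime_type or "image/png"  # fall back if the extension is unknown
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"

# Example: paste the resulting string into the node's `image` parameter.
print(image_file_to_data_uri("reference_frame.png")[:80] + "...")
```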
Note: The `Inputs` and `Generation Settings` parameters are grouped and may be collapsed by default in the UI. The `seed`, `motion_score`, and `upscale` parameters are included for potential future API support or for use with other models, but they are not currently sent to the `image_to_video` endpoint based on observed API behavior.
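
For reference, the node's request corresponds conceptually to RunwayML's `image_to_video` task API: submit a task, then poll it until the video URL is available (surfaced by the node as `video_output`, with the task ID as `task_id_output`). The sketch below shows a direct call with `requests`; the base URL, version header, JSON field names, and status values are assumptions based on RunwayML's public API documentation and may differ from what the node sends internally, so verify them against the current API reference:

```python
import os
import time
import requests

API_BASE = "https://api.dev.runwayml.com/v1"  # assumed base URL; verify in RunwayML docs
HEADERS = {
    "Authorization": f"Bearer {os.environ['RUNWAYML_API_SECRET']}",
    "X-Runway-Version": "2024-11-06",         # assumed API version header
    "Content-Type": "application/json",
}

# Only the parameters the node currently forwards: model, image, prompt, and ratio.
payload = {
    "model": "gen4_turbo",
    "promptImage": "https://example.com/reference_frame.png",  # or a base64 data URI
    "promptText": "A slow cinematic pan across a foggy mountain lake at sunrise",
    "ratio": "1280:720",
}

task = requests.post(f"{API_BASE}/image_to_video", json=payload, headers=HEADERS).json()
task_id = task["id"]  # corresponds to the node's `task_id_output`

# Poll the task until it finishes; the completed task response contains the
# generated video URL that the node exposes as `video_output`.
while True:
    status = requests.get(f"{API_BASE}/tasks/{task_id}", headers=HEADERS).json()
    if status.get("status") in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

print(status)
```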
If you haven't already installed your Griptape Nodes engine, follow the installation steps HERE. After you've completed those and you have your engine up and running:
- Copy the path to your `griptape_nodes_library.json` file within this `runwayml` directory. Right click on the file, and `Copy Path` (not `Copy Relative Path`).
- Start up the engine!
- Navigate to settings.
- Open your settings and go to the App Events tab. Add an item in Libraries to Register.
- Paste your copied `griptape_nodes_library.json` path from earlier into the new item.
- Exit out of Settings. It will save automatically!
- Open up the Libraries dropdown on the left sidebar.
- Your newly registered library should appear! Drag and drop nodes to use them!