
Animate custom data using Python interface

I'm trying to generate an animation from custom data, i.e. a NumPy array with positions, using the Python interface. While I can render a single frame, I cannot figure out how and where to store the data so that Viewport.render_anim() can use it. The source code suggests that the frames are stored in Viewport.dataset.pipelines, but manually adding a pipeline for each frame did not work.

Below is an MWE of my code so far. It adds data for a single time step and uses render_image() to render a single frame. I hope someone can help me change it so that I can add data for all time steps and use render_anim() to generate a movie.

from ovito.data import *
from ovito.pipeline import *
from ovito.vis import Viewport
import numpy as np

# create a dictionary with positions for a series of time steps
steps = 10
positions = {step : 10*np.random.random((3,3)) for step in range(steps)}

# Create the data collection containing a Particles object:
particles = Particles()
data = DataCollection()
data.objects.append(particles)

# Create the particle position property for the first time step
pos_prop = particles.create_property('Position', data=positions[0])

# Create a pipeline, set source and insert it into the scene:
pipeline = Pipeline(source = StaticSource(data = data))
pipeline.add_to_scene()

# Render result
vp = Viewport(type = Viewport.Type.Ortho, camera_dir = (2, 1, -1))
vp.zoom_all()
vp.render_image(filename='simulation.png',size=(320, 240))

Hi Margriet,

The trick is to create an empty DataCollection and then add a Python modifier function to your pipeline that creates the particle positions for every frame:

from ovito.data import *
from ovito.pipeline import *
from ovito.vis import Viewport
import numpy as np

# create a dictionary with positions for a series of time steps
steps = 10
positions = {step : 10*np.random.random((3,3)) for step in range(steps)}

# Create a pipeline, set source and insert it into the scene:
pipeline = Pipeline(source = StaticSource(data = DataCollection()))

# Python modifier that creates the Position property for each frame
def create_particle_pos(frame, data):
    particles = Particles()
    data.objects.append(particles)
    data.particles_.create_property('Position', data=positions[frame])

pipeline.modifiers.append(create_particle_pos)
pipeline.add_to_scene()

# Render result
vp = Viewport(type = Viewport.Type.Ortho, camera_dir = (2, 1, -1))
# pipeline.compute() is only needed here for zoom_all() to work correctly
pipeline.compute()
vp.zoom_all()
vp.render_anim(filename='simulation.mp4',size=(320, 240), fps = 1, range=(0,9))

Note that calling pipeline.compute() is usually not necessary, but it is needed here for the zoom_all() function to work properly.
Let us know if you have questions.

-Constanze

Thanks for the quick reply! Works like a charm :D!

I'm looking to combine this with drawing (some) particle trajectories. However, simply adding:

# Insert the modifier into the pipeline for creating the trajectory lines.
modifier = GenerateTrajectoryLinesModifier(only_selected = False)
pipeline.modifiers.append(modifier)

# Now let the modifier generate the trajectory lines by sampling the 
# particle positions over the entire animation interval.
modifier.generate()

(as described in the docs) results in the following message:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-9-b4dff8df2d47> in <module>
     41 # Now let the modifier generate the trajectory lines by sampling the
     42 # particle positions over the entire animation interval.
---> 43 modifier.generate()
     44 
     45 # Configure trajectory line visualization:

RuntimeError: The current simulation sequence consists only of a single frame. Thus, no trajectory lines were created.

So it seems that I first need to add all the data, and only then can I generate the trajectories. This makes a lot of sense ;), but I don't see how to achieve this.
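In plain NumPy the reason becomes clear: a trajectory line per particle needs the positions from every frame stacked together, while the pipeline above only produces one frame at a time. A minimal sketch of that stacking, using the same per-frame dictionary as the scripts in this thread (illustrative names only, not OVITO API):

import numpy as np

# Per-frame positions as in the scripts above: 3 particles in 3D per time step.
steps = 10
rng = np.random.default_rng(0)
positions = {step: 10 * rng.random((3, 3)) for step in range(steps)}

# Generating trajectory lines needs all frames at once: stack the per-frame
# (N, 3) arrays into one (steps, N, 3) array, one polyline per particle.
trajectories = np.stack([positions[f] for f in range(steps)])
print(trajectories.shape)  # (10, 3, 3)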

Hi,

You're right: in your special case the animation frames don't exist yet, so the Generate Trajectory Lines modifier complains. It does work, however, if you explicitly define the animation frame interval over which the particle positions are to be sampled, i.e. if you pass the frame_interval parameter to the modifier constructor. If you like, try the following script and let me know if you have further questions.

from ovito.data import *
from ovito.pipeline import *
from ovito.vis import Viewport
from ovito.modifiers import *
from ovito.io import import_file, export_file
import numpy as np

# create a dictionary with positions for a series of time steps
steps = 10
positions = {step : 10*np.random.random((3,3)) for step in range(steps)}

# Create a pipeline, set source and insert it into the scene:
pipeline = Pipeline(source = StaticSource(data = DataCollection()))

# Python modifier functions that create the Position property and simulation cell for each frame
def create_particle_pos(frame, data):
    particles = Particles()
    data.objects.append(particles)
    data.particles_.create_property('Position', data=positions[frame])

def create_cell(frame, data):
    cell = SimulationCell()
    data.objects.append(cell)
    data.cell_[...] = [[10, 0, 0, 0], [0, 10, 0, 0], [0, 0, 10, 0]]
    data.cell.vis.line_width = 0.05

pipeline.modifiers.append(create_particle_pos)
pipeline.modifiers.append(create_cell)

# Generate trajectories
modifier = GenerateTrajectoryLinesModifier(only_selected = False, unwrap_trajectories = False, frame_interval=(0,9))
pipeline.modifiers.append(modifier)
modifier.generate()

# Render
pipeline.add_to_scene()
vp = Viewport(type = Viewport.Type.Ortho, camera_dir = (2, 1, -1))
vp.zoom_all()
pipeline.compute()
vp.render_anim(filename='simulation.mp4',size=(320, 240), fps = 1, range=(0,9))
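As a side note, the 3x4 matrix assigned to data.cell_ follows OVITO's simulation cell layout: the first three columns are the cell vectors and the last column is the cell origin. A plain-NumPy sketch of that layout (variable names here are illustrative):

import numpy as np

# A 10x10x10 box with its origin at (0, 0, 0), in OVITO's 3x4 cell layout:
# columns 0-2 hold the three cell edge vectors, column 3 holds the origin.
cell = np.array([[10, 0, 0, 0],
                 [0, 10, 0, 0],
                 [0, 0, 10, 0]], dtype=float)
vectors = cell[:, :3]  # the three cell edge vectors, one per column
origin = cell[:, 3]    # the cell origin
print(np.diag(vectors))  # [10. 10. 10.]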


Again, thanks for the clear and quick answer.