# ovito.io

This module primarily provides two high-level functions for reading and writing external data files:

ovito.io.import_file(location, **params)

Imports data from an external file.

This Python function corresponds to the Load File menu command in OVITO’s user interface. The format of the imported file is automatically detected (see list of supported formats). Depending on the file’s format, additional keyword parameters may be required to specify how the data should be interpreted. These keyword parameters are documented below.

Parameters

location – The file to import. This can be a local file path or a remote sftp:// or https:// URL.

Returns

The new Pipeline that has been created for the imported data.

The function creates and returns a new Pipeline object, which uses the contents of the external data file as input. The pipeline will be wired to a FileSource, which reads the input data from the external file and passes it on to the pipeline. You can access the data by calling the Pipeline.compute() method or, alternatively, FileSource.compute() on the data source. As long as the new Pipeline contains no modifiers yet, both methods will return the same data.
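As a quick illustration (the input filename below is a placeholder), both evaluation paths return the same data as long as the pipeline contains no modifiers:

```python
from ovito.io import import_file

# 'input.dump' stands in for your own data file.
pipeline = import_file("input.dump")

# Evaluate the (still empty) pipeline:
data1 = pipeline.compute()
# Ask the FileSource directly for the loaded data:
data2 = pipeline.source.compute()

# No modifiers have been added yet, so both collections hold the same particles:
print(data1.particles.count == data2.particles.count)
```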

Note that the Pipeline is not automatically inserted into the three-dimensional scene. That means the loaded data won’t appear in rendered images or the interactive viewports of OVITO by default. For that to happen, you need to explicitly insert the pipeline into the scene by calling its add_to_scene() method.

Furthermore, note that you can re-use the returned Pipeline if you want to load a different data file later on. Instead of calling import_file() again to load another file, you can use the pipeline.source.load(...) method to replace the input file of the already existing pipeline.
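For example (both filenames here are placeholders), switching the input of an existing pipeline might look like this:

```python
from ovito.io import import_file

pipeline = import_file("first_simulation.dump")
# ... set up modifiers, render images, etc. ...

# Reuse the same pipeline (and its modifiers) for another input file:
pipeline.source.load("second_simulation.dump")
```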

File columns

When importing XYZ files or binary LAMMPS dump files, the mapping of file columns to OVITO’s particle properties must be specified using the columns keyword parameter:

```python
pipeline = import_file("file.xyz", columns=
    ["Particle Identifier", "Particle Type", "Position.X", "Position.Y", "Position.Z"])
```


The length of the string list must match the number of data columns in the input file. See this table for standard particle property names. Alternatively, you can specify user-defined names for file columns that should be read as custom particle properties by OVITO. For vector properties, the component name must be appended to the property’s base name, as demonstrated for the Position property in the example above. To completely ignore a file column during import, specify None in the columns list.

For text-based LAMMPS dump files, OVITO automatically determines a reasonable column-to-property mapping, but you may override it using the columns keyword. This can make sense, for example, if the file columns containing the particle coordinates do not follow the standard naming scheme x, y, and z (e.g. when reading time-averaged atomic positions computed by LAMMPS).
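As a sketch, a dump file storing time-averaged coordinates in non-standard columns (the filename and column layout below are assumptions) could be imported with an explicit mapping like this:

```python
from ovito.io import import_file

# Assume the dump file's columns are: id type f_avg[1] f_avg[2] f_avg[3],
# where the last three hold time-averaged atomic coordinates.
pipeline = import_file("averaged.dump", columns=
    ["Particle Identifier", "Particle Type", "Position.X", "Position.Y", "Position.Z"])
```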

Frame sequences

OVITO automatically detects if the imported file contains multiple data frames (timesteps). Alternatively (and additionally), it is possible to load a sequence of files in the same directory by using the * wildcard character in the filename. Note that * may appear only once, only in the filename component of the path, and only in place of numeric digits. Furthermore, it is possible to pass an explicit list of file paths to the import_file() function, which will be loaded as an animatable sequence. All variants can be combined. For example, to load two file sets from different directories as one consecutive sequence:

```python
import_file('sim.xyz')     # Loads all frames contained in the given file
import_file('sim.*.xyz')   # Loads 'sim.0.xyz', 'sim.100.xyz', 'sim.200.xyz', etc.
import_file(['sim_a.xyz', 'sim_b.xyz'])   # Loads an explicit list of files
import_file(['dir_a/sim.*.xyz',
             'dir_b/sim.*.xyz'])          # Loads several file sequences from different directories
```


The number of frames found in the input file(s) is reported by the num_frames attribute of the pipeline’s FileSource. You can step through the frames with a for-loop as follows:

```python
from ovito.io import import_file

# Import a sequence of files.
pipeline = import_file('input/simulation.*.dump')

# Loop over all frames of the sequence.
for frame_index in range(pipeline.source.num_frames):

    # Calling FileSource.compute() loads the requested frame
    # from the sequence into memory and returns the data as a new
    # DataCollection:
    data = pipeline.source.compute(frame_index)

    # The source path and the index of the current frame
    # are attached as attributes to the data collection:
    print("Frame source:", data.attributes['SourceFile'])
    print("Frame index:", data.attributes['SourceFrame'])

    # Accessing the loaded frame data, e.g. the particle positions:
    print(data.particles.positions[...])
```


LAMMPS atom style

When loading a LAMMPS data file, the atom style may need to be specified using the atom_style keyword parameter so that OVITO can correctly map the variable set of file columns to particle properties. Exceptions are data files generated with the write_data command of LAMMPS that contain a hint indicating the atom style. In this case the atom_style function parameter is not required.
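For instance (with a hypothetical filename), loading a data file that uses the LAMMPS "bond" atom style:

```python
from ovito.io import import_file

# 'polymer.data' stands in for a LAMMPS data file written with atom style 'bond':
pipeline = import_file("polymer.data", atom_style="bond")
```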

Particle ordering

Particles are read and stored by OVITO in the same order as they are listed in the input file. Some file formats contain unique particle identifiers or tags which allow OVITO to track individual particles over time even if the storage order changes from frame to frame. OVITO will automatically make use of that information where appropriate without touching the original storage order. However, in some situations it may be desirable to explicitly have the particles sorted with respect to the IDs. You can request this reordering by passing the sort_particles=True option to import_file(). Note that this option is without effect if the input file contains no particle identifiers.
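A minimal sketch (the filename is a placeholder):

```python
from ovito.io import import_file

# Sort particles by their unique IDs during import.
# This has no effect if the file contains no particle identifiers.
pipeline = import_file("unsorted.dump", sort_particles=True)
```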

Topology and trajectory files

Some simulation codes write a topology file and separate trajectory file. The former contains only static information like the bonding between atoms, the atom types, etc., which do not change during a simulation run, while the latter stores the varying data (primarily the atomic trajectories). To load such a topology-trajectory pair of files, first read the topology file with the import_file() function, then insert a LoadTrajectoryModifier into the returned Pipeline to also load the trajectory data.
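The two-step setup can be sketched as follows (both filenames are placeholders):

```python
from ovito.io import import_file
from ovito.modifiers import LoadTrajectoryModifier

# Step 1: Load the static topology (atom types, bonds, etc.):
pipeline = import_file("topology.data")

# Step 2: Load the time-dependent atomic positions from the trajectory file
# and merge them into the pipeline:
traj_mod = LoadTrajectoryModifier()
traj_mod.source.load("trajectory.dump")
pipeline.modifiers.append(traj_mod)
```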

ovito.io.export_file(data, file, format, **params)

High-level function that exports data to a file. See the Data export section for an overview of this topic.

Parameters
• data – The object to be exported. See below for options.

• file (str) – The output file path.

• format (str) – The type of file to write. See below for options.

Data to export

Various kinds of objects are accepted by the function as data argument:

• Pipeline: Exports the dynamically generated output of a data pipeline. Since pipelines can be evaluated at different animation times, multi-frame sequences can be produced when passing a Pipeline object to the export_file() function.

• DataCollection: Exports the static data of a data collection. Data objects contained in the collection that are not compatible with the chosen output format are ignored.

• DataObject: Exports just the data object as if it were the only part of a DataCollection. The provided data object must be compatible with the selected output format. For example, when exporting to the "txt/table" format (see below), a DataTable object should be passed to the export_file() function.

• None: All pipelines that are part of the current scene (see ovito.Scene.pipelines) are exported. This option makes sense for scene description formats such as the POV-Ray format.
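As a sketch of the DataObject case, a single DataTable can be exported to the "txt/table" format like this (the input filename is a placeholder, and the table identifier 'coordination-rdf' is an assumption based on the output of the CoordinationAnalysisModifier):

```python
from ovito.io import import_file, export_file
from ovito.modifiers import CoordinationAnalysisModifier

pipeline = import_file("input.dump")  # placeholder input file
pipeline.modifiers.append(CoordinationAnalysisModifier(cutoff=5.0))

# Export just the RDF table from the pipeline's output data collection:
data = pipeline.compute()
export_file(data.tables['coordination-rdf'], "rdf.txt", "txt/table")
```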

Output format

The format parameter determines the type of file to write; the filename suffix is ignored. However, for filenames that end with .gz, automatic gzip compression is activated if the selected format is text-based. The following format strings are supported:

• "txt/attr" – Export global attributes to a text file (see below)

• "txt/table" – Export a DataTable to a text file

• "lammps/dump" – LAMMPS text-based dump format

• "lammps/data" – LAMMPS data format

• "imd" – IMD format

• "vasp" – POSCAR format

• "xyz" – XYZ format

• "fhi-aims" – FHI-aims format

• "gsd/hoomd" – GSD format used by the HOOMD simulation code

• "netcdf/amber" – Binary format for MD data following the AMBER format convention

• "vtk/trimesh" – ParaView VTK format for exporting SurfaceMesh objects

• "vtk/disloc" – ParaView VTK format for exporting DislocationNetwork objects

• "vtk/grid" – ParaView VTK format for exporting VoxelGrid objects

• "ca" – Text-based format for storing dislocation lines

• "povray" – POV-Ray scene format

Depending on the selected output format, additional keyword arguments must be passed to export_file(), which are documented below.

File columns

For the output formats lammps/dump, xyz, imd and netcdf/amber, you must specify the set of particle properties to export using the columns keyword parameter:

```python
export_file(pipeline, "output.xyz", "xyz", columns=
    ["Particle Identifier", "Particle Type", "Position.X", "Position.Y", "Position.Z"])
```


You can export the standard particle properties and any user-defined properties present in the pipeline’s output DataCollection. For vector properties, the component name must be appended to the base name as demonstrated above for the Position property.

Exporting several simulation frames

By default, only the current animation frame (frame 0 by default) is exported by the function. To export a different frame, pass the frame keyword parameter to the export_file() function. Alternatively, you can export all frames of an animation sequence at once by passing multiple_frames=True. Refined control of the exported frame sequence is available through the keyword arguments start_frame, end_frame, and every_nth_frame.
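For instance (pipeline and filenames hypothetical), exporting every 10th frame up to frame 100 into a single multi-frame XYZ file:

```python
from ovito.io import import_file, export_file

pipeline = import_file("input/simulation.*.dump")  # placeholder file sequence

# Export frames 0, 10, 20, ..., 100 into one XYZ file:
export_file(pipeline, "trajectory.xyz", "xyz",
            columns=["Particle Type", "Position.X", "Position.Y", "Position.Z"],
            multiple_frames=True,
            start_frame=0, end_frame=100, every_nth_frame=10)
```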

The lammps/dump and xyz file formats can store multiple frames in a single output file. For other formats, or if you intentionally want to generate one file per frame, you must pass a wildcard filename to export_file(). This filename must contain exactly one * character, which will be replaced with the animation frame number, as in the following example:

```python
export_file(pipeline, "output.*.dump", "lammps/dump", multiple_frames=True)
```


The above call is equivalent to the following for-loop:

```python
for i in range(pipeline.source.num_frames):
    export_file(pipeline, "output.%i.dump" % i, "lammps/dump", frame=i)
```


Floating-point number precision

For text-based file formats, you can set the desired formatting precision for floating-point values using the precision keyword parameter. The default output precision is 10 digits; the maximum is 17.
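For example (pipeline and filenames hypothetical), writing values at the maximum precision for a lossless round trip:

```python
from ovito.io import import_file, export_file

pipeline = import_file("input.dump")  # placeholder input file

# Write floating-point values with 17 significant digits:
export_file(pipeline, "output.xyz", "xyz",
            columns=["Particle Type", "Position.X", "Position.Y", "Position.Z"],
            precision=17)
```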

LAMMPS atom style

When writing files in the lammps/data format, the LAMMPS atom style “atomic” is used by default. If you want to create a data file that uses a different atom style, specify it with the atom_style keyword parameter:

```python
export_file(pipeline, "output.data", "lammps/data", atom_style="bond")
```


The following LAMMPS atom styles are currently supported by OVITO: angle, atomic, bond, charge, dipole, full, molecular, sphere.

If at least one ParticleType in the system has a non-zero mass set, OVITO will output a Masses section to the LAMMPS data file. You can suppress this behavior by passing omit_masses=True to the export function.

VASP (POSCAR) format

When exporting to the vasp file format, OVITO will output atomic positions and velocities in Cartesian coordinates by default. You can request output in reduced cell coordinates instead by specifying the reduced keyword parameter:

```python
export_file(pipeline, "structure.poscar", "vasp", reduced=True)
```


Global attributes

The txt/attr file format allows you to export global quantities computed by the data pipeline to a text file. For example, to write out the number of FCC atoms identified by a CommonNeighborAnalysisModifier as a function of simulation time, one would use the following:

```python
export_file(pipeline, "data.txt", "txt/attr",
            columns=["Timestep", "CommonNeighborAnalysis.counts.FCC"],
            multiple_frames=True)
```


See the documentation of the individual modifiers to find out which global quantities they generate. You can also determine at runtime which attributes are available in the output data collection of a Pipeline:

```python
print(pipeline.compute().attributes)
```