Contains a list of blend shape entries wherein each item maps a specific blend shape location to its associated coefficient.
Identifier for the specific facial feature.
Indicates the current position of the feature relative to its neutral configuration, ranging from 0.0 (neutral) to 1.0 (maximum movement).
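As a minimal sketch (assuming generated Python bindings and hypothetical field names entries, blend_shape_location, and blend_shape_coefficient), the list can be folded into a lookup table:

    def blend_shape_dict(blend_shapes):
        # Map each facial-feature identifier to its coefficient in [0.0, 1.0],
        # where 0.0 is the neutral position and 1.0 is maximum movement.
        return {
            entry.blend_shape_location: entry.blend_shape_coefficient
            for entry in blend_shapes.entries
        }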
Information about the camera position and imaging characteristics for a captured video frame.
4x4 row-major matrix expressing the position and orientation of the camera in world coordinate space.
The width and height, in pixels, of the captured camera image.
3x3 row-major matrix that converts between the 2D camera plane and 3D world coordinate space.
4x4 row-major transform matrix appropriate for rendering 3D content to match the image captured by the camera.
4x4 row-major transform matrix appropriate for converting from world space to camera space, adjusted for the captured_image orientation (i.e. UIInterfaceOrientationLandscapeRight).
The orientation of the camera, expressed as roll, pitch, and yaw values.
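Taken together, these matrices allow recorded 3D points to be mapped back into the captured image. The sketch below projects a world-space point to pixel coordinates; it assumes the matrices arrive as flat, row-major lists of 16 floats and uses hypothetical field names (view_matrix, projection_matrix, image_resolution):

    import numpy as np

    def project_point(camera, point_world):
        view = np.array(camera.view_matrix, dtype=np.float64).reshape(4, 4)
        proj = np.array(camera.projection_matrix, dtype=np.float64).reshape(4, 4)
        p = np.append(np.asarray(point_world, dtype=np.float64), 1.0)
        clip = proj @ (view @ p)   # world -> camera -> clip space
        ndc = clip[:3] / clip[3]   # perspective divide
        w, h = camera.image_resolution.width, camera.image_resolution.height
        # NDC x and y lie in [-1, 1]; convert to pixels, origin at top-left.
        u = (ndc[0] * 0.5 + 0.5) * w
        v = (1.0 - (ndc[1] * 0.5 + 0.5)) * h
        return u, v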
The general quality of position tracking available when the camera captured a frame.
Camera position tracking is not available.
Tracking is available, but the quality of results is questionable.
Camera position tracking is providing optimal results.
A possible diagnosis for limited position tracking quality as of when the frame was captured.
The current tracking state is not limited.
Not yet enough camera or motion data to provide tracking information.
The device is moving too fast for accurate image-based position tracking.
Not enough distinguishable features for image-based position tracking.
Tracking is limited due to a relocalization in progress.
Information about the pose, topology, and expression of a detected face.
A coarse triangle mesh representing the topology of the detected face.
A map of named coefficients representing the detected facial expression in terms of the movement of specific facial features.
4x4 row-major matrix encoding the position, orientation, and scale of the anchor relative to the world coordinate space.
Indicates whether the anchor's transform is valid. Frames that have a face anchor with this value set to NO should probably be ignored.
Container for a 3D mesh describing face topology.
The number of elements in the vertices list.
The number of elements in the texture_coordinates list.
Each integer value in this ordered list represents an index into the vertices and texture_coordinates lists. Each set of three indices forms a triangle, so the number of indices in the triangle_indices buffer is three times the triangle_count value.
The number of triangles described by the triangle_indices buffer.
Each texture coordinate represents UV texture coordinates for the vertex at the corresponding index in the vertices buffer.
Each vertex represents a 3D point in the face mesh, in the face coordinate space.
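A short sketch of unpacking the triangle list, assuming hypothetical field names (vertices, triangle_indices, triangle_count) on the generated Python bindings:

    import numpy as np

    def triangles(geometry):
        verts = np.array([[v.x, v.y, v.z] for v in geometry.vertices])
        idx = np.array(geometry.triangle_indices).reshape(-1, 3)  # 3 indices per triangle
        assert len(idx) == geometry.triangle_count
        return verts[idx]  # shape: (triangle_count, 3 vertices, 3 coordinates)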
Video image and face position tracking information.
The timestamp for the frame.
The depth data associated with the frame. Not all frames have depth data.
The depth data object timestamp associated with the frame. May differ from the frame timestamp value. Is only set when the frame has depth_data.
Camera information associated with the frame.
Light information associated with the frame.
Face anchor information associated with the frame. Not all frames have an active face anchor.
Plane anchors associated with the frame. Not all frames have a plane anchor. Plane anchors and face anchors are mutually exclusive.
The current intermediate results of the scene analysis used to perform world tracking.
Snapshot of the Core Motion CMMotionManager object containing the most recent motion data associated with the frame. Because motion data can be captured at a higher rate than AR frames, this object aggregates all motion events recorded since the previous ARFrame.
Estimated scene lighting information associated with a captured video frame.
The estimated intensity, in lumens, of ambient light throughout the scene.
The estimated color temperature, in degrees Kelvin, of ambient light throughout the scene.
Data describing the estimated lighting environment in all directions. Second-level spherical harmonics in separate red, green, and blue data planes. Thus, this buffer contains 3 sets of 9 coefficients, or a total of 27 values.
A vector indicating the orientation of the strongest directional light source, normalized in the world-coordinate space.
The estimated intensity, in lumens, of the strongest directional light source in the scene.
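A sketch of splitting the coefficient buffer into per-channel planes; it assumes the buffer packs three consecutive planes of nine float32 values (red, then green, then blue) under a hypothetical field name spherical_harmonics_coefficients:

    import numpy as np

    def sh_planes(light_estimate):
        coeffs = np.frombuffer(
            light_estimate.spherical_harmonics_coefficients, dtype=np.float32)
        assert coeffs.size == 27  # 3 color planes x 9 coefficients
        red, green, blue = coeffs.reshape(3, 9)
        return red, green, blue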
A subdivision of the reconstructed, real-world scene surrounding the user.
The ID of the mesh.
4x4 row-major matrix encoding the position, orientation, and scale of the anchor relative to the world coordinate space.
3D information about the mesh such as its shape and classifications.
Container object for mesh data of the real-world scene surrounding the user. Although each ARFrame may have a set of ARMeshAnchors associated with it, only a single frame's worth of mesh data is written separately at the end of each recording, to limit latency and memory bloat.
The timestamp for the data.
Set of mesh anchors containing the mesh data.
Mesh geometry data stored in an array-based format.
The vertices of the mesh.
The faces of the mesh.
Rays that define which direction is outside for each face. In practice, the normals count is always identical to the vertices count, which suggests these are per-vertex normals rather than per-face normals.
Classification for each face in the mesh.
Indices of vertices defining the face, referencing the corresponding vertices array of the parent message. A typical face is triangular.
The type of real-world object the face is classified as.
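A sketch of resolving each face to its vertex positions, assuming hypothetical field names (vertices, normals, faces, indices):

    import numpy as np

    def face_vertices(mesh_geometry):
        verts = np.array([[v.x, v.y, v.z] for v in mesh_geometry.vertices])
        norms = np.array([[n.x, n.y, n.z] for n in mesh_geometry.normals])
        assert len(norms) == len(verts)  # per-vertex normals, as noted above
        # Each face carries indices into the parent vertices array.
        return [verts[list(face.indices)] for face in mesh_geometry.faces]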
Information about the position and orientation of a real-world flat surface.
The ID of the plane.
4x4 row-major matrix encoding the position, orientation, and scale of the anchor relative to the world coordinate space.
The general orientation of the detected plane with respect to gravity.
A coarse triangle mesh representing the general shape of the detected plane.
The center point of the plane relative to its anchor position. Although the type of this property is a 3D vector, a plane anchor is always two-dimensional, and is always positioned in only the x and z directions relative to its transform position. (That is, the y-component of this vector is always zero.)
The estimated width and length of the detected plane.
A Boolean value that indicates whether plane classification is available on the current device. On devices without plane classification support, all plane anchors report a classification value of NONE and a classification_status value of UNAVAILABLE.
A general characterization of what kind of real-world surface the plane anchor represents.
The current state of the classification process for the plane anchor. When this property's value is KNOWN, the classification property represents the characterization of the real-world surface corresponding to the plane anchor.
The plane is perpendicular to gravity.
The plane is parallel to gravity.
The classification status for the plane.
The classification process for the plane anchor has completed, but the result is inconclusive.
No classification information can be provided (set on error or if the device does not support plane classification).
The classification process has not completed.
The classification process for the plane anchor has completed.
Wrapper for a 3D point / vector within the plane. See extent and center values for more information.
Container for a 3D mesh.
A buffer of vertex positions for each point in the plane mesh.
The number of elements in the vertices buffer.
A buffer of texture coordinate values for each point in the plane mesh.
The number of elements in the texture_coordinates buffer.
Each integer value in this ordered list represents an index into the vertices and texture_coordinates lists. Each set of three indices forms a triangle, so the number of indices in the triangle_indices buffer is three times the triangle_count value.
The number of triangles described by the triangle_indices buffer.
Each value in this buffer represents the position of a vertex along the boundary polygon of the estimated plane. The owning plane anchor's transform matrix defines the coordinate system for these points.
The number of elements in the boundary_vertices buffer.
Each texture coordinate represents UV texture coordinates for the vertex at the corresponding index in the vertices buffer.
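Since the anchor's transform defines the coordinate system for the boundary points, the boundary polygon can be lifted into world space. A sketch, assuming a flat row-major 4x4 transform and hypothetical field names (transform, boundary_vertices):

    import numpy as np

    def boundary_world(plane_anchor, geometry):
        T = np.array(plane_anchor.transform, dtype=np.float64).reshape(4, 4)
        pts = np.array([[p.x, p.y, p.z, 1.0] for p in geometry.boundary_vertices])
        # Row vectors are transformed by the transposed matrix.
        return (pts @ T.T)[:, :3]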
A collection of points in the world coordinate space.
The number of points in the cloud.
The list of detected points.
A list of unique identifiers corresponding to detected feature points. Each identifier in this list corresponds to the point at the same index in the points array.
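Because the two lists are parallel, feature points can be tracked across frames by identifier. A minimal sketch with hypothetical field names (points, identifiers):

    def point_map(point_cloud):
        # Identifier -> (x, y, z), pairing each point with its stable ID.
        return {
            ident: (p.x, p.y, p.z)
            for ident, p in zip(point_cloud.identifiers, point_cloud.points)
        }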
Information about the camera characteristics used to capture images and depth data.
3x3 row-major matrix relating a camera's internal properties to an ideal pinhole-camera model.
The image dimensions to which the intrinsic_matrix values are relative.
3x4 row-major matrix relating a camera's position and orientation to a world or scene coordinate system. Consists of a unitless 3x3 rotation matrix (R) on the left and a translation (t) 3x1 vector on the right. The translation vector's units are millimeters. For example:

              | r1,1  r2,1  r3,1 | t1 |
    [R | t] = | r1,2  r2,2  r3,2 | t2 |
              | r1,3  r2,3  r3,3 | t3 |

is stored as [r1,1, r2,1, r3,1, t1, r1,2, r2,2, r3,2, t2, ...].
The size, in millimeters, of one image pixel.
A list of floating-point values describing radial distortions imparted by the camera lens, for use in rectifying camera images.
A list of floating-point values describing radial distortions for use in reapplying camera geometry to a rectified image.
The offset of the distortion center of the camera lens from the top-left corner of the image.
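A sketch of unpacking the flat extrinsic matrix into its rotation and translation parts, assuming a hypothetical field name extrinsic_matrix holding the 12 row-major values described above:

    import numpy as np

    def extrinsics(calibration):
        m = np.array(calibration.extrinsic_matrix, dtype=np.float64).reshape(3, 4)
        rotation = m[:, :3]       # unitless 3x3 rotation matrix R
        translation_mm = m[:, 3]  # 3x1 translation vector t, in millimeters
        return rotation, translation_mm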
Container for depth data information.
PNG representation of the grayscale depth data map. See discussion about depth_data_map_original_minimum_value, below, for information about how to interpret the pixel values.
Pixel format type of the original captured depth data.
Indicates whether the depth_data_map contains temporally smoothed data.
Associated calibration data for the depth_data_map.
The original range of values expressed by the depth_data_map, before grayscale normalization. For example, if the minimum and maximum values indicate a range of [0.5, 2.2], and the depth_data_type value indicates it was a depth map, then white pixels (255, 255, 255) map to 0.5 and black pixels (0, 0, 0) map to 2.2, with the grayscale range linearly interpolated in between. Conversely, if the depth_data_type value indicates it was a disparity map, then white pixels map to 2.2 and black pixels map to 0.5.
The width of the depth buffer map.
The height of the depth buffer map.
The row-major flattened array of the depth buffer map pixels. This will be either a float32 or float16 byte array, depending on 'depth_data_type'.
Indicates the general accuracy of the depth_data_map.
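The two representations can be decoded as sketched below, assuming Pillow and numpy are available and using hypothetical field names (depth_data_map, depth_data_map_original_minimum_value/_maximum_value, and a raw buffer field). The grayscale branch assumes a depth map; for a disparity map, swap lo and hi:

    import io
    import numpy as np
    from PIL import Image

    def metric_depth_from_png(depth_data):
        gray = np.asarray(Image.open(io.BytesIO(depth_data.depth_data_map)))
        gray = gray.astype(np.float64) / 255.0  # white -> 1.0, black -> 0.0
        lo = depth_data.depth_data_map_original_minimum_value
        hi = depth_data.depth_data_map_original_maximum_value
        # White pixels map to the minimum value, black pixels to the maximum.
        return lo + (1.0 - gray) * (hi - lo)

    def raw_depth(depth_data, dtype=np.float16):
        # dtype must match depth_data_type (float16 or float32).
        buf = np.frombuffer(depth_data.depth_data_map_raw, dtype=dtype)
        return buf.reshape(depth_data.depth_data_map_height,
                           depth_data.depth_data_map_width)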
Values in the depth map are usable for foreground/background separation but are not absolutely accurate in the physical world.
Values in the depth map are absolutely accurate in the physical world.
Quality of the depth_data_map.
A sample of raw accelerometer data.
The accelerometer data object timestamp. May differ from the frame timestamp value since the data may be collected at a higher rate.
Raw acceleration measured by the accelerometer, which is effectively the sum of the gravity and user_acceleration values of the CMDeviceMotion object.
Represents calibrated magnetic field data and an estimate of its accuracy.
The estimated magnetic field vector.
Calibration accuracy of a magnetic field estimate.
Indicates the calibration accuracy of a magnetic field estimate.
A sample of device motion data. Encapsulates measurements of the attitude, rotation rate, magnetic field, and acceleration of the device. Core Motion applies different bias-reduction and stabilization algorithms to the rotation rate, magnetic field, and acceleration values. For raw values, check the corresponding fields of the CMMotionManagerSnapshot object.
The device motion data object timestamp. May differ from the frame timestamp value since the data may be collected at a higher rate.
The quaternion representing the device’s orientation relative to a known frame of reference at a point in time.
The gravity acceleration vector expressed in the device's reference frame.
The acceleration that the user is giving to the device.
The magnetic field vector filtered with respect to the device bias.
The rotation rate of the device, adjusted by the bias-removing Core Motion algorithms.
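As a cross-check against the raw accelerometer samples described above, the total device acceleration can be reconstructed from these processed fields. A sketch with hypothetical field names (gravity, user_acceleration):

    def total_acceleration(motion):
        # Raw accelerometer output is effectively gravity + user acceleration.
        g, u = motion.gravity, motion.user_acceleration
        return (g.x + u.x, g.y + u.y, g.z + u.z)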
A sample of raw gyroscope data.
The gyroscope data object timestamp. May differ from the frame timestamp value since the data may be collected at a higher rate.
Raw rotation rate as measured by the gyroscope.
A sample of raw magnetometer data.
The magnetometer data object timestamp. May differ from the frame timestamp value since the data may be collected at a higher rate.
Raw magnetic field measured by the magnetometer.
Contains the most recent snapshots of device motion data.
Most recent samples of device motion data.
Most recent samples of raw accelerometer data.
Most recent samples of raw gyroscope data.
Most recent samples of raw magnetometer data.
A 3D vector.