I am placing marker objects on a model using data taken from drone surveys. I have access to high accuracy GPS data and also omega/phi/kappa rotation data.
The goal is to move the viewer camera into position when I select a photo, so that we get a fairly good view of that part of the model from the photo.
So far, we are working with a single model and I want to verify that I'm using the transforms correctly so that this works with other models. Also, I need to match camera orientation using omega/phi/kappa, and I want to know if I also need to transform orientation data.
The model comes from Revit originally.
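For the orientation part, here is roughly how I'm planning to turn omega/phi/kappa into a viewer camera orientation. This is only a sketch based on the common photogrammetry convention (rotate about X, then Y, then Z); `photo.omega/phi/kappa` and `cameraPosition` are placeholders for my own data, so the axis order and signs may well be wrong. Whether the rotation hidden in refPointTransform / angleToTrueNorth also needs to be applied here is part of what I'm asking below.

```
// Assumed convention: R = Rx(omega) * Ry(phi) * Rz(kappa), angles in degrees.
const omega = THREE.Math.degToRad(photo.omega);
const phi = THREE.Math.degToRad(photo.phi);
const kappa = THREE.Math.degToRad(photo.kappa);

const rotation = new THREE.Matrix4().multiplyMatrices(
  new THREE.Matrix4().makeRotationX(omega),
  new THREE.Matrix4().multiplyMatrices(
    new THREE.Matrix4().makeRotationY(phi),
    new THREE.Matrix4().makeRotationZ(kappa),
  ),
);

// The photo camera looks down its local -Z axis; rotate that direction and
// aim the viewer camera along it from the computed photo position.
const viewDir = new THREE.Vector3(0, 0, -1).applyMatrix4(rotation);
viewer.navigation.setView(cameraPosition, cameraPosition.clone().add(viewDir));
```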
Here are the various transforms I have found so far using NOP_VIEWER.model.getData().
- globalOffset (Vector3)
- placementWithOffset (Matrix4) - seems to be just the inverse of globalOffset as a matrix? (quick sanity check sketched after this list)
- placementTransform (Matrix4) - generally undefined, I've seen some hints that this is a user defined matrix.
- refPointTransform (Matrix4)
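To sanity-check the guess about placementWithOffset, something like this can be dumped in the console (just a rough comparison; on some models placementTransform or refPointTransform may be folded in as well):

```
// Compare placementWithOffset against a pure translation by -globalOffset.
const data = NOP_VIEWER.model.getData();
const offsetAsMatrix = new THREE.Matrix4().makeTranslation(
  -data.globalOffset.x,
  -data.globalOffset.y,
  -data.globalOffset.z,
);
console.log('placementWithOffset:', data.placementWithOffset.elements);
console.log('-globalOffset as matrix:', offsetAsMatrix.elements);
```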
Also, there are some transforms in the NOP_VIEWER.model.getData().metadata:
- metadata.georeference.positionLL84 (Array[3]) - this is where the model's GPS coords are stored
- metadata.georeference.refPointLMV (Array[3]) - no idea what this is, and it has huge and seemingly random values on many models. For example, on my current model it is [-17746143.211481072, -6429345.318822183, 27.360225423452952]
- metadata.[custom values].angleToTrueNorth - I guess this is specifying whether the model is aligned to true or magnetic north?
- metadata.[custom values].refPointTransform (Array[12]) - data used to create the refPointTransform matrix above
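For what it's worth, this is how I'd expect that 12-element array to map onto a Matrix4; the row-major 3x4 layout (with an implicit 0 0 0 1 bottom row) is just my guess, and the real ordering may be different:

```
// Guess: 12 values = 3x4 transform (rotation + translation), row-major.
const raw = NOP_VIEWER.model.getData().metadata['custom values'].refPointTransform;
const refPoint = new THREE.Matrix4().set(
  raw[0], raw[1], raw[2],  raw[3],
  raw[4], raw[5], raw[6],  raw[7],
  raw[8], raw[9], raw[10], raw[11],
  0,      0,      0,       1,
);
```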
I have been able to get the position data into viewer space using these steps:
- Use the Autodesk.Geolocation extension's lonLatToLmv function to convert lon/lat/alt to viewer coords.
- Take the converted data and apply various transforms until it is correctly positioned in model space.
// GPS position from the drone survey (WGS84 lon/lat/alt).
const gpsPosition = new THREE.Vector3(
  longitude,
  latitude,
  altitude,
);

// Convert lon/lat/alt into LMV (viewer) coordinates.
const position = viewer
  .getExtension('Autodesk.Geolocation')
  .lonLatToLmv(gpsPosition);

const data = viewer.model.getData();
const globalOffset = data.globalOffset;
const refPointTransform = data.refPointTransform;
const angleToTrueNorth = THREE.Math.degToRad(
  data.metadata['custom values'].angleToTrueNorth
);

// applying the transforms
position.add(globalOffset);
position.applyMatrix4(refPointTransform);

// finally, rotate the position based on angle to true north.
const quaternion = new THREE.Quaternion().setFromEuler(
  new THREE.Euler(0, 0, -angleToTrueNorth),
);
position.applyQuaternion(quaternion);
Questions:
- do I need to apply some transforms to rotation data as well?
- Am I applying the transforms correctly?
EDIT: figured out that the data.refPointTransform matrix already encodes the angleToTrueNorth, so I'm clearly doing something wrong in applying that twice.
I don't currently have access to the drone photo data specifying whether the photos are aligned to true or magnetic north; I assume it's true north, though.
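If the EDIT above is right, the position chain presumably reduces to something like this (dropping the separate north rotation; I'm still not sure the add/applyMatrix4 order is correct, which is what the second question above is about):

```
// Same as before, but without the extra angleToTrueNorth rotation,
// assuming refPointTransform already encodes it.
const data = viewer.model.getData();
const position = viewer
  .getExtension('Autodesk.Geolocation')
  .lonLatToLmv(new THREE.Vector3(longitude, latitude, altitude));
position.add(data.globalOffset);
position.applyMatrix4(data.refPointTransform);
```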