
I am placing marker objects on a model using data taken from drone surveys. I have access to high-accuracy GPS data and also omega/phi/kappa rotation data.

The goal is to move the viewer camera into position when I select a photo, so that we get a fairly good view of that part of the model from the photo.

So far we are working with a single model, and I want to verify that I'm using the transforms correctly so that this works with other models too. I also need to match the camera orientation using omega/phi/kappa, and I want to know whether I need to transform the orientation data as well.

The model comes from Revit originally.

Here are the various transforms I have found so far using NOP_VIEWER.model.getData().

  1. globalOffset (Vector3)
  2. placementWithOffset (Matrix4) - seems to be just the inverse of globalOffset as a matrix?
  3. placementTransform (Matrix4) - generally undefined; I've seen some hints that this is a user-defined matrix.
  4. refPointTransform (Matrix4)

Also, there are some transforms in the NOP_VIEWER.model.getData().metadata:

  1. metadata.georeference.positionLL84 (Array[3]) - this is where the model's GPS coords are stored
  2. metadata.georeference.refPointLMV (Array[3]) - no idea what this is; it has huge and seemingly random values on many models. For example, on my current model it is [-17746143.211481072, -6429345.318822183, 27.360225423452952].
  3. metadata.[custom values].angleToTrueNorth - I guess this specifies whether the model is aligned to true or magnetic north?
  4. metadata.[custom values].refPointTransform - (Array[12]) - data used to create the refPointTransform matrix above
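
For reference, here's roughly how I'm inspecting all of the values above in the console (just reading the properties listed; as far as I can tell none of this is documented API, so exact property names may differ between models):

const data = NOP_VIEWER.model.getData();

// Transforms on the model data itself
console.log(data.globalOffset);          // Vector3
console.log(data.placementWithOffset);   // Matrix4
console.log(data.placementTransform);    // Matrix4, usually undefined
console.log(data.refPointTransform);     // Matrix4

// Geo-related metadata
console.log(data.metadata.georeference.positionLL84);            // [lon, lat, alt]
console.log(data.metadata.georeference.refPointLMV);             // [x, y, z]
console.log(data.metadata['custom values'].angleToTrueNorth);    // degrees
console.log(data.metadata['custom values'].refPointTransform);   // Array[12]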

I have been able to get the position data into viewer space using these steps:

  1. Use the Autodesk.Geolocation extension's lonLatToLmv function to convert lon/lat/alt to viewer coords.
  2. Take the converted position and apply various transforms until it is correctly positioned in model space, as in the code below.

const gpsPosition = new THREE.Vector3(
  longitude,
  latitude,
  altitude,
);

// Convert lon/lat/alt into LMV (viewer) coordinates.
const position = viewer
  .getExtension('Autodesk.Geolocation')
  .lonLatToLmv(gpsPosition);

const data = viewer.model.getData();

const globalOffset = data.globalOffset;
const refPointTransform = data.refPointTransform;
const angleToTrueNorth = THREE.Math.degToRad(
  data.metadata['custom values'].angleToTrueNorth
);

// Apply the offset and refpoint transforms.
position.add(globalOffset);
position.applyMatrix4(refPointTransform);

// Finally, rotate the position based on the angle to true north.
const quaternion = new THREE.Quaternion().setFromEuler(
  new THREE.Euler(0, 0, -angleToTrueNorth),
);

position.applyQuaternion(quaternion);

Questions:

  1. Do I need to apply some transforms to the rotation data as well?
  2. Am I applying the transforms correctly?

EDIT: I figured out that the data.refPointTransform matrix already encodes angleToTrueNorth, so I'm clearly doing something wrong by applying that rotation twice. A reduced version is sketched below.
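
This is what I think the reduced version looks like, assuming refPointTransform really does include the true-north rotation (I still don't know whether this combination of transforms is correct, hence the questions above):

const data = viewer.model.getData();

// Same conversion as before: lon/lat/alt -> LMV coordinates.
const position = viewer
  .getExtension('Autodesk.Geolocation')
  .lonLatToLmv(new THREE.Vector3(longitude, latitude, altitude));

// Offset and refpoint transform only; no separate angleToTrueNorth rotation,
// since that rotation appears to be baked into refPointTransform already.
position.add(data.globalOffset);
position.applyMatrix4(data.refPointTransform);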

I don't currently have access to the drone photo data specifying whether the photos are aligned to true or magnetic north, but I assume it's true north.

  • I added a suggestion on your other question. A colleague is looking into this question and should reply soon. Commented Jan 27, 2020 at 23:17

1 Answer

The geo-positioning params you've discovered are internal and shouldn't be used directly, especially since different input file formats (Revit, IFC, Navisworks, etc.) may output this information in different forms. Using the geolocation extension (as you do in your example code) and its method lonLatToLmv should give you the final lat/long/alt value mapped into the scene coordinate system. If it doesn't, please send us a sample file and a snippet of your code to forge (dot) help (at) autodesk (dot) com and we will investigate it on our end.
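
In other words, the intended flow is just the extension call, with no manual offset or refpoint math afterwards. A minimal sketch (assuming the model is geo-referenced and the extension loads successfully; longitude/latitude/altitude are placeholders):

viewer.loadExtension('Autodesk.Geolocation').then((geoExt) => {
  // Convert lon/lat/alt directly into the viewer's scene coordinate system.
  const scenePos = geoExt.lonLatToLmv(
    new THREE.Vector3(longitude, latitude, altitude)
  );

  // scenePos should already be usable as-is, e.g. for placing a marker
  // or positioning the camera near that point.
});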

As for the various xform properties you found:

  • globalOffset is sometimes defined on the model when the vertex data of the model is moved close to origin to avoid precision issues
  • placementWithOffset is internally computed by applying globalOffset to placementTransform
  • placementTransform is an optional parameter that can be passed in when loading a model; the transformation will be applied to all xforms of individual elements in the model (see the sketch after this list)
  • refPointTransform is another type of metadata that can sometimes be defined on models (typically converted from AEC designs) when globalOffset is not sufficient
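
For example, placementTransform and globalOffset can both be supplied as load options (a sketch only; the document URN and the matrix/offset values are placeholders):

Autodesk.Viewing.Document.load(documentUrn, (doc) => {
  const viewable = doc.getRoot().getDefaultGeometry();
  viewer.loadDocumentNode(doc, viewable, {
    // Optional matrix applied to all element xforms in the model.
    placementTransform: new THREE.Matrix4().makeRotationZ(Math.PI / 4),
    // Optional offset used to keep vertex data close to the origin.
    globalOffset: { x: 0, y: 0, z: 0 },
  });
});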

2 Comments

> The geo-positioning params you've discovered are internal... Yes, I had guessed that. However, in our case we will only be using Revit models, so we have some more leeway there.
I've had a support request related to this issue open for over two months now. Their latest response, from Dec 12: "I'm sorry to say that I haven't received this information from our engineering team yet. Currently, they are busying for other high prioritized tasks and bug fixings, and out of resources to do the further investigation as I know." After that, they recommended calculating the error offset and trying to fix it on our end. That's where I'm at now. I figured I was more likely to get a response here than by following up with support again.
