
I am trying to tune a system by using memory-optimized tables as a cache for frequently used data. The complication is that my design uses separate databases (on the same server), siloed by core functionality. For example, I would like a Curves database that is highly performant at managing curves, while the other databases use its data for valuations and so on. Ideally, the Curves database would hold memory-optimized tables that the other databases can quickly read, either directly or by calling a function that fetches the data or performs some in-memory manipulation. The problem is that I run into this limitation:

A user transaction that accesses memory optimized tables or natively compiled modules cannot access more than one user database or databases model and msdb, and it cannot write to master.
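For context, the kind of setup I have in mind in the Curves database looks roughly like this. All object and column names are simplified for illustration, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Simplified sketch of the Curves database objects (names are illustrative).
USE Curves;
GO

CREATE TABLE dbo.CurvePoints
(
    CurveId INT            NOT NULL,
    Tenor   DATE           NOT NULL,
    Rate    DECIMAL(18, 8) NOT NULL,
    CONSTRAINT pk_CurvePoints PRIMARY KEY NONCLUSTERED (CurveId, Tenor)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procedure that the other databases would ideally call to read a curve.
CREATE PROCEDURE dbo.GetCurve
    @CurveId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT CurveId, Tenor, Rate
    FROM dbo.CurvePoints
    WHERE CurveId = @CurveId;
END;
GO
```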

I have tried to follow the Microsoft example of creating a memory-optimized table type and using it to stage the data for use in a destination database, but that did not work; I got the same error (a sketch of what I tried is below). I have also seen examples of copying the data via tempdb. Both approaches require a copy of the data in the destination database, but all I really want to do is read the data and use it there. Has anyone been able to make this work successfully across databases?
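This is roughly the pattern I tried, based on the Microsoft cross-database example: create a memory-optimized table type in the calling database and stage the rows in a table variable of that type. Again, the database and column names here are illustrative:

```sql
-- In the destination (calling) database: a memory-optimized table type used for staging.
USE Valuations;
GO

CREATE TYPE dbo.CurvePointsType AS TABLE
(
    CurveId INT            NOT NULL,
    Tenor   DATE           NOT NULL,
    Rate    DECIMAL(18, 8) NOT NULL,
    INDEX ix_CurveId NONCLUSTERED (CurveId, Tenor)
) WITH (MEMORY_OPTIMIZED = ON);
GO

-- Stage the rows from the Curves database, then work with them locally.
DECLARE @curves dbo.CurvePointsType;

INSERT INTO @curves (CurveId, Tenor, Rate)
SELECT CurveId, Tenor, Rate
FROM Curves.dbo.CurvePoints
WHERE CurveId = 42;          -- this cross-database read is where I hit the error

SELECT *
FROM @curves;
```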

From some testing they are blisteringly fast, so I am very keen to get this working. There are a few generic examples online, but I can't seem to find any real-world examples of how to make them really useful.
