
Our current system is architected like this:

We have around 5 million records in a database table. Depending on the need, we fetch a result set of, say, 1 million records and keep them cached in the application's memory; when we are done, we discard them.

Now, instead of using the .NET application's memory: is it possible to use in-memory tables to hold those 1 million records while the disk-based table still keeps all 5 million records?

2 Answers


That's possible. Performance will still be lower than with in-process data: one of the most expensive parts of executing even a cheap SQL statement is the overhead of executing anything at all (network round trips, serialization, ...).

You will need to measure (or have a good understanding of) whether the reduced performance is still sufficient.

If the existing system works without problems, there is no need to change anything.


2 Comments

Are we supposed to handle that ourselves each time, with something like SELECT INTO...? Is there any built-in mechanism to automate this?
Nothing automated there. Also, you will find that Hekaton (memory-optimized tables) requires more work to implement than ordinary tables do. If you want to save work, this option is not that attractive. If you elaborate on your performance targets and the queries you run, I might be able to recommend something.
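To illustrate the manual workflow the comments describe, here is a minimal sketch assuming SQL Server 2014 or later with In-Memory OLTP enabled and a MEMORY_OPTIMIZED_DATA filegroup already added to the database. The table and column names (`CachedRecords`, `DiskBasedRecords`, `RecordId`, etc.) are hypothetical; note that SELECT INTO cannot create a memory-optimized table, so an explicit CREATE followed by INSERT ... SELECT is required.

```sql
-- Sketch only: a non-durable memory-optimized table used as a working set.
-- Assumes In-Memory OLTP is available; all object names are hypothetical.
CREATE TABLE dbo.CachedRecords
(
    RecordId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload  NVARCHAR(400) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);  -- rows are not persisted

-- Populate the in-memory table from the 5-million-row disk-based table.
-- This copy step is manual: there is no built-in sync mechanism.
INSERT INTO dbo.CachedRecords (RecordId, Payload)
SELECT RecordId, Payload
FROM dbo.DiskBasedRecords
WHERE SomeFilter = 1;   -- hypothetical predicate selecting ~1 million rows

-- When the application is done with the working set, discard it:
DELETE FROM dbo.CachedRecords;
```

With DURABILITY = SCHEMA_ONLY the table's contents are lost on restart, which matches the "throwaway working set" usage in the question; use SCHEMA_AND_DATA if the copy must survive a restart.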

SQL Server already has advanced caching built in. How big is your table? 5 million rows is not that big these days; the entire table may already be cached in memory, and you can simply use SELECT queries by primary key.
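One way to check this answer's claim is to ask the buffer pool directly. The sketch below, a simplified version of a common DMV query, counts how many 8 KB pages of a given table are currently cached; the object name `dbo.DiskBasedRecords` is hypothetical, and the join is simplified to in-row data.

```sql
-- Sketch: estimate how much of one table currently sits in SQL Server's
-- buffer pool. Requires VIEW SERVER STATE permission; names are hypothetical.
SELECT COUNT(*)            AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb     -- pages are 8 KB each
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
    ON au.container_id = p.hobt_id          -- in-row data only, for brevity
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID(N'dbo.DiskBasedRecords');
```

If `cached_pages` covers most of the table, point lookups by primary key are already being served from memory without any schema changes.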

Comments
