Azure SQL Managed Instance
Removing barriers to migrating databases to Azure with Striim’s Unlimited Database Migration program
Alok Pareek, co-founder and Executive Vice President of Product and Engineering at Striim
Shireesh Thota, Corporate Vice President of Databases at Microsoft

Every modernization strategy starts with data. It’s what enables advanced analytics and AI agents today, and prepares enterprises for what’s to come in the future. But before services like Microsoft Fabric, Azure AI Foundry, or Copilot can create that value, the underlying data needs to move into Microsoft’s cloud platforms. It’s within that first step, database migration, where the real complexity often lies.

To simplify the process, Microsoft has expanded its investment in the Striim partnership. Striim continuously replicates data from existing databases into Azure in real time, enabling online migrations with zero downtime. Through this partnership, we have collaborated to enable modernization and migration into Azure at no additional cost to our customers. We’ve designed this Unlimited Database Migration program to accelerate adoption by making migrations easier to start, easier to scale, and easier to complete, all without disrupting business operations.

Since launch, this joint program has already driven significant growth in customer adoption, indicating the demand for faster, more seamless modernization. And with Microsoft’s continued investment in this partnership, enterprises now have a proven, repeatable path to modernize their databases and prepare their data for the AI era. Watch or listen to our recent podcast episode (Apple Podcasts, Spotify, YouTube) to learn more.

Striim’s Unlimited Migration Program

Striim’s Unlimited Database Migration Program was designed to make modernization as straightforward as possible for Microsoft customers. Through this initiative, enterprises gain unlimited Striim licenses to migrate as many databases as they need at no additional cost. Highlights and benefits of the program include:

- Zero-downtime, zero-data-loss migrations. Supported sources include SQL Server, MongoDB, Oracle, MySQL, PostgreSQL, and Sybase. Supported targets include Azure Database for MySQL, Azure Database for PostgreSQL, Azure Cosmos DB, and Azure SQL.
- Mission-critical, heterogeneous workloads supported. Applies to SQL, Oracle, NoSQL, and OSS.
- Drives faster AI adoption. Once migrated, data is ready for analytics and AI.

Access is streamlined through Microsoft’s Cloud Factory Accelerator team, which manages program enrollment and coordinates the distribution of licenses. Once onboarded, customers receive installation walkthroughs, an enablement kit, and direct support from Striim architects. Cutover support, hands-on labs, and escalation paths are all built in to help migrations run smoothly from start to finish. Enterprises can start migrations quickly, scale across business units, and keep projects moving without slowing down for procurement hurdles. Now, migrations can begin when the business is ready, not when budgets or contracts catch up.

How Striim Powers Online Migrations

Within Striim’s database migrations, schema changes and metadata evolution are automatically detected and applied, preserving data accuracy and referential integrity. As the migration progresses, Striim automatically coordinates both the initial bulk load of historical data and the ongoing synchronization of live transactions. This ongoing synchronization keeps source and target systems in sync for as long as needed to actively test the target applications with real data before the cutover, thereby minimizing risk.
The foundation of Striim’s approach is log-based Change Data Capture (CDC), which streams database changes in real time from source to target with sub-second latency. Rather than moving a static snapshot of a database, migrations continuously replicate every update as it happens, so both environments remain aligned with minimal impact on operational systems throughout the process. While the snapshot (initial load) is being applied to the target system, Striim captures all the changes that occur. Once the initial load is complete, Striim applies those changes using CDC, and from that point on, the source and target systems are in sync. This eliminates the need to shut down the source system during the initial load and enables customers to complete their migrations without any downtime of the source database.

Striim is also designed to work across hybrid and multi-cloud architectures. It can seamlessly move workloads from on-premises databases, SaaS applications, or other clouds into Microsoft databases. By maintaining exactly-once delivery and ensuring downstream systems stay in sync, Striim reduces risk and accelerates the path to modernization.

Striim is available in the Azure Marketplace, giving customers a native, supported way to integrate it directly into their Azure environment. This means migrations can be deployed quickly, governed centrally, and scaled as business needs evolve, all while still aligning with Azure’s security and compliance standards.

From Migration to Value

With workloads fully landed in Azure, enterprises can immediately take advantage of the broader Microsoft data ecosystem. Fabric, Azure AI Foundry, and Copilot become available as extensions of the database foundation, allowing teams to analyze, visualize, and enrich data without delay. Enterprises can begin adopting Microsoft AI services with data that is current, trusted, and governed. Instead of treating migration as an isolated project, customers gain an integrated pathway to analytics and AI, creating value as soon as databases go live in Azure.

How Enterprises Are Using the Program Today

Across industries, we’re already seeing how this program changes the way enterprises approach modernization.

Financial Services
Moving from Oracle to Azure SQL, one global bank used Striim to keep systems in sync throughout the migration. With transactions flowing in real time, they stood up a modern fraud detection pipeline on Azure that identifies risks as they happen.

Logistics
For a logistics provider, shifting package-tracking data from MongoDB to Azure Cosmos DB meant customers could monitor shipments in real time. Striim’s continuous replication kept data consistent throughout the cutover, so the company didn’t have to trade accuracy for speed.

Healthcare
A provider modernizing electronic medical records from Sybase to Azure SQL relied on Striim to ensure clinicians never lost access. With data now in Azure, they can meet compliance requirements while building analytics that improve patient care.

Technology
InfoCert, a leading provider of digital trust services specializing in secure digital identity solutions, opted to migrate its critical Legalmail Enterprise application from Oracle to Azure Database for PostgreSQL. Using Striim and Microsoft, they successfully migrated 2 TB of data across 12 databases and completed the project within a six-month timeframe, lowering licensing costs, enhancing scalability, and improving security.
What unites these stories is a common thread: once data is in Azure, it becomes part of a foundation that’s ready for analytics and AI.

Accelerate Your Path to Azure

Now, instead of database migration being the bottleneck for modernization, it’s the starting point for what comes next. With the Unlimited Database Migration Program, Microsoft and Striim have created a path that removes friction and clears the way for innovation. Most customers can simply reach out to their Microsoft account team or seller to begin the process. Your Microsoft representative will validate that your migration scenario is supported by Striim, and Striim will allocate the licenses, provide installation guidance, and deliver ongoing support. If you’re unsure who your Microsoft contact is, you can connect directly with Striim, and we’ll coordinate with Microsoft on your behalf. There’s no lengthy procurement cycle or complex setup to navigate. With Microsoft and Striim jointly coordinating the program, enterprises can begin migrations as soon as they’re ready, with confidence that support is in place from start to finish.

Simplify your migration and move forward with confidence. Talk to your Microsoft representative or book a call with the Striim team today to take advantage of the Unlimited Database Migration Program and start realizing the value of Azure sooner. Or if you’re attending Microsoft Ignite, visit Striim at booth 6244 to learn more, ask questions, and see how Striim and Microsoft can help accelerate your modernization journey together.
Generally Available: Azure SQL Managed Instance Next-gen General Purpose
Overview

Next-gen General Purpose is the evolution of the General Purpose service tier that brings significantly improved performance and scalability to power up your existing Azure SQL Managed Instance fleet and helps you bring more mission-critical SQL workloads to Azure. We are happy to announce that Next-gen General Purpose is now Generally Available (GA), delivering even more scalability, flexibility, and value for organizations looking to modernize their data platform in a cost-effective way.

The new #SQLMINextGen General Purpose tier delivers a built-in performance upgrade available to all customers at no extra cost. If you are an existing SQL MI General Purpose user, you get faster I/O, higher database density, and expanded storage - automatically.

Summary Table: Key Improvements

Capability             | Current GP | Next-gen GP | Improvement
-----------------------|------------|-------------|------------
Average I/O Latency    | 5-10 ms    | 3-4 ms      | 2x lower
Max Data IOPS          | 30-50k     | 80k         | 60% better
Max Storage            | 16 TB      | 32 TB       | 2x better
Max Databases/Instance | 100        | 500         | 5x better
Max vCores             | 80         | 128         | 60% better

But that’s just the beginning. The new configuration sliders for additional IOPS and memory provide enhanced flexibility to tailor performance to your requirements. Whether you need more resources for your application or want to optimize resource utilization, you can adjust your instance settings to maximize efficiency and output. This release isn’t just about speed - it’s about giving you improved performance where it matters, and mechanisms to go further when you need them.

Customer story - A recent customer case highlights how Hexure reduced processing time by up to 97.2% using Azure SQL Managed Instance on Next-gen General Purpose.

What’s new in Next-gen General Purpose (Nov 2025)?

1. Improved baseline performance with the latest storage tech

The SQL Managed Instance General Purpose tier is designed with full separation of compute and storage layers. The Classic GP version uses premium page blobs for the storage layer, while the Next-generation GP tier has transitioned to Azure’s latest storage solution, Elastic SAN. Azure Elastic SAN is a cloud-native storage service that offers high performance and excellent scalability, making it a perfect fit for the storage layer of a data-intensive PaaS service like Azure SQL Managed Instance.

Simplified Performance Management

With ESAN as the storage layer, the performance quotas for the Next-gen General Purpose tier are no longer enforced for each database file. The entire performance quota for the instance is shared across all the database files, making performance management much easier (one fewer thing to worry about). This adjustment brings the General Purpose tier into alignment with the Business Critical service tier experience.

2. Resource flexibility and cost optimization

The GA of Next-gen General Purpose comes together with the GA of a transformative memory slider, enabling up to 49 memory configurations per instance. This lets you right-size workloads for both performance and cost. Memory is billed only for the additional amount beyond the default allocation. Users can independently configure vCores, memory, and IOPS for optimal efficiency. To learn more about the new option for configuring additional memory, check the article: Unlocking More Power with Flexible Memory in Azure SQL Managed Instance.
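For readers curious what such a configuration change looks like outside the portal sliders: the Flexible Memory article later on this page shows that additional memory is exposed through a memorySizeInGB property with API version 2024-08-01. Based on those details, a minimal ARM REST sketch (not official reference documentation; subscription, resource group, and instance names are placeholders) might look like this:

PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/managedInstances/{instanceName}?api-version=2024-08-01
Content-Type: application/json

{
  "properties": {
    "memorySizeInGB": 96
  }
}

The portal sliders ultimately perform the same kind of resource update, so either path lands on an equivalent configuration.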
3. Enhanced resource elasticity through decoupled compute and storage scaling operations

With Next-gen GP, both storage and IOPS can be resized independently of the compute infrastructure, and these changes now typically finish within five minutes - a process known as an in-place upgrade. There are three distinct types of storage upgrade experiences, depending on the kind of storage upgrade performed and whether failover occurs:

- In-place update: same storage (no data copy), same compute (no failover)
- Storage re-attach: same storage (no data copy), changed compute (with failover)
- Data copy: changed storage (data copy), changed compute (with failover)

The following matrix describes the user experience with management operations:

Operation                 | Data copying | Failover | Storage upgrade type
--------------------------|--------------|----------|---------------------
IOPS scaling              | No           | No       | In-place
Storage scaling*          | No*          | No       | In-place
vCores scaling            | No           | Yes**    | Re-attach
Memory scaling            | No           | Yes**    | Re-attach
Maintenance Window change | No           | Yes**    | Re-attach
Hardware change           | No           | Yes**    | Re-attach
Update policy change      | Yes          | Yes      | Data copy

* If scaling storage down by more than 5.5 TB, seeding (data copy) is required.
** For update operations that do not require seeding and are not completed in place (examples are scaling vCores, scaling memory, changing hardware or maintenance window), the failover duration of databases on the Next-gen General Purpose service tier scales with the number of databases, up to 10 minutes. While the instance becomes available after 2 minutes, some databases might be available after a delay. Failover duration is measured from the moment the first database goes offline until the moment the last database comes online.

Furthermore, resizing vCores and memory is now 50% faster following the introduction of the Faster scaling operations release. Whether you have end-of-month peak periods or usage that rises and falls between weekdays and the weekend, fast and reliable management operations let you run multiple configurations on your instance and respond to peak usage periods in a cost-effective way.

4. Reserved instance (RI) pricing

With Azure Reservations, you can commit to using Azure SQL resources for either one or three years, which lets you benefit from substantial discounts on compute costs. When purchasing a reservation, you'll need to choose the Azure region, deployment type, performance tier, and reservation term. Reservations are only available for products that have reached general availability (GA), and with this update, next-generation GP instances now qualify as well. What's even better is that classic and next-gen GP share the same SKU, just with different remote storage types. This means any reservations you've purchased automatically apply to Next-gen GP, whether you're upgrading an existing classic GP instance or creating a new one.

What’s Next?

The product group has received considerable positive feedback and welcomes continued input. The initial release will not include zonal redundancy; however, efforts are underway to address this limitation. Next-generation General Purpose (GP) represents the future of the service tier, and all existing classic GP instances will be upgraded accordingly. Once upgrade plans are finalized, we will provide timely communication regarding the announcement.

Conclusion

Now in GA, Next-gen General Purpose sets a new standard for cloud database performance and flexibility.
Whether you’re modernizing legacy applications, consolidating workloads, or building for the future, these enhancements put more power, scalability, and control in your hands - without breaking the bank. If you haven’t already, try out the Next-gen General Purpose capabilities for free with the Azure SQL Managed Instance free offer. If you run SQL Managed Instance on the General Purpose tier, consider upgrading your existing instances to take advantage of the next-gen upgrade – for free.

Welcome to #SQLMINextGen. Boosted by default. Tuned by you.

Learn more
- What is Azure SQL Managed Instance
- Try Azure SQL Managed Instance for free
- Next-gen General Purpose – official documentation
- Analyzing the Economic Benefits of Microsoft Azure SQL Managed Instance
- How 3 customers are driving change with migration to Azure SQL
- Accelerate SQL Server Migration to Azure with Azure Arc

Introducing Azure SQL Managed Instance Next-gen GP
The next generation of the General Purpose service tier for Azure SQL Managed Instance is a major upgrade that considerably improves the storage performance of your instances while keeping the same price as the current General Purpose tier. Key improvements in the next generation of General Purpose storage include support for 32 TB of storage, support for 500 databases, lower storage latency, improved storage performance, and the ability to configure the amount of IOPS (I/O operations per second).

Unlocking More Power with Flexible Memory in Azure SQL Managed Instance
Service update - Nov 18th 2025: Additional memory in the Next-gen General Purpose service tier is now generally available!

As data workloads grow in complexity and scale, so does the need for more adaptable and performant database infrastructure. That’s why we’re excited to introduce a new capability in Azure SQL Managed Instance: Flexible Memory, now generally available.

What Is Flexible Memory?

Flexible Memory allows you to customize the memory-to-vCore ratio in your SQL Managed Instance, giving you the ability to fine-tune performance and cost based on your workload needs. This feature is part of the next-generation General Purpose tier, and it introduces a memory slider that lets you scale memory independently within defined limits. The memory slider is enabled only for premium-series hardware.

Why It Matters

Traditionally, memory allocation in SQL Managed Instance was fixed per vCore. With Flexible Memory, you can now:

- Increase memory beyond the default allocation
- Optimize for memory-intensive workloads without overprovisioning compute
- Pay only for what you use — additional memory is billed per GB/hour

This flexibility is especially valuable for scenarios like analytics, caching, or workloads with large buffer pool requirements.

How It Works

Memory scales based on the number of vCores and the selected hardware tier:

Hardware Tier                     | Memory per vCore (GB)
----------------------------------|----------------------
Standard-series                   | 5.1
Premium series                    | 7-12
Premium series (memory-optimized) | Up to 13.6

You can select from predefined memory ratios (e.g., 7, 8, 10, 12 GB per vCore) depending on your configuration. For example, a 10 vCore instance can be configured with 70 GB to 120 GB of memory. One of the most powerful aspects of the Flexible Memory feature is the ability to select from a range of memory-to-vCore ratios. These “click stops” allow you to tailor memory allocation precisely to your workload’s needs — whether you’re optimizing for performance, cost, or both. The table below outlines the available configurations for Premium Series hardware, showing how memory scales across 16 vCore sizes:

vCores | Available Ratios | Total Memory Options (GB)
-------|------------------|--------------------------
4      | 7, 8, 10, 12     | 28, 32, 40, 48
6      | 7, 8, 10, 12     | 42, 48, 60, 72
8      | 7, 8, 10, 12     | 56, 64, 80, 96
10     | 7, 8, 10, 12     | 70, 80, 100, 120
12     | 7, 8, 10, 12     | 84, 96, 120, 144
16     | 7, 8, 10, 12     | 112, 128, 160, 192
20     | 7, 8, 10, 12     | 140, 160, 200, 240
24     | 7, 8, 10, 12     | 168, 192, 240, 288
32     | 7, 8, 10, 12     | 224, 256, 320, 384
40     | 7, 8, 10, 12     | 280, 320, 400, 480
48     | 7, 8, 10         | 336, 384, 480
56     | 7, 8              | 392, 448
64     | 7                 | 448
80     | 7                 | 560
96     | 5.83              | 560
128    | 4.38              | 560

Pricing model

Flexible Memory introduces a usage-based pricing model that ensures you only pay for the memory you actually consume beyond the default allocation. This model is designed to give you the flexibility to scale memory without overcommitting on compute resources - and without paying for unused capacity.

How it works:

- Default memory is calculated based on the minimum memory-to-vCore ratio.
- Billable memory is the difference between your configured memory and the default allocation.
- Billing is per GB/hour, so you’re charged only for the additional memory used over time.

Let’s take an example of a SQL Managed Instance running on premium-series hardware with 4 vCores and 40 GB of memory.
Configuration             | Value
--------------------------|----------------------------
vCores                    | 4
Configured Memory         | 40 GB
Default Memory (4 × 7 GB) | 28 GB
Billable Memory           | 12 GB
Billing Unit              | Per GB/hour
Charged For               | 12 GB of additional memory

Management Experience

Changing memory behaves just like changing vCores:

- Seamless updates via the Azure portal, PowerShell, SDK, or API
- Failover group guidance remains the same: upgrade the secondary first, and configurations between primary and secondary should match

Adjusting the memory is a fully online operation, with a short failover at the very end. The operation allocates new compute with the specified configuration, which takes approximately 60 minutes with the new faster management operations.

API Support

Flexible Memory is fully supported via the API (the minimal API version that can be used is 2024-08-01) and the Azure portal. Here’s a sample API snippet to configure memory:

{
  "properties": {
    "memorySizeInGB": 96
  }
}

Portal support

Summary

The new Flexible Memory capability in Azure SQL Managed Instance empowers you to scale memory independently of compute, offering greater control over performance and cost. With customizable memory-to-vCore ratios, a transparent pricing model, and seamless integration into existing management workflows, this feature is ideal for memory-intensive workloads and dynamic scaling scenarios. Whether you're optimizing for analytics, caching, or simply want more headroom without overprovisioning vCores, Flexible Memory gives you the tools to do it - efficiently and affordably.

Next Steps

- Review the documentation for detailed configuration options, supported tiers, and API usage: Additional memory, Management operations overview, Management operations duration
- Test your workloads: use the memory slider in the Azure portal, PowerShell, SDK, or API to experiment with different configurations.

Learn more
- What is Azure SQL Managed Instance
- Try Azure SQL Managed Instance for free
- Next-gen General Purpose – official documentation
- Analyzing the Economic Benefits of Microsoft Azure SQL Managed Instance
- How 3 customers are driving change with migration to Azure SQL
- Accelerate SQL Server Migration to Azure with Azure Arc

SQL Server migration in Azure Arc – Generally Available
We’re excited to announce the General Availability of SQL Server migration in Azure Arc. This experience is designed to simplify and accelerate the SQL Server migration journey to Azure SQL Managed Instance, offering a unified, end-to-end workflow directly within the Azure portal.

About the solution

SQL Server migration in Azure Arc integrates existing Azure Database Migration Service capabilities into Azure Arc, enabling the entire end-to-end migration journey with the following capabilities:

- Continuous database migration assessments with Azure SQL target recommendations and cost estimates.
- Seamless provisioning of Azure SQL Managed Instance as the destination target, including an option for free instance evaluation.
- Option to choose between two built-in migration methods: real-time database replication using Distributed Availability Groups (powered by the MI link feature), or log shipping via backup and restore (powered by the Log Replay Service feature).
- Unified interface that eliminates the need to use multiple tools or to jump between various places in the Azure portal.
- Microsoft Copilot is integrated to assist you at select points during the migration journey.

Learn more about SQL Server Migration in Azure Arc.

Benefits of the solution

Traditionally, migrating SQL Server workloads to Azure required juggling multiple tools, various places in the portal, and some manual steps. This new experience changes that by:

- Providing a single pane of glass in the Azure portal for the entire migration journey.
- Reducing migration timelines from months to days.
- Offering a choice of two migration methods – real-time replication or log shipping.
- Enabling validation of target environments using read-only replicas before cutover for real-time replication.
- Automatically capturing application client connection data to simplify mapping between applications and databases.
- Optional failback from Azure SQL Managed Instance, with a configured upgrade policy, for SQL Server 2022 and SQL Server 2025 with external tooling.
- Providing intelligent step-by-step guidance with Microsoft Copilot at select points of the migration journey, helping you make informed decisions.

Start Your Migration Journey Today

If your SQL Server is already Arc-enabled, you can proceed right away to the Azure portal. If you need to enable it, onboard your SQL Server to Azure Arc today. In the portal, navigate to the Arc-enabled SQL Server resource, and on the left-hand side select Migration, then Database Migration. This takes you to the new database migration experience.

From the main screen, you can navigate through each stage of the migration journey - starting with the database migration readiness assessment, followed by selecting or provisioning an Azure SQL Managed Instance as the target destination, choosing the appropriate migration method, monitoring progress, and performing the final cutover. By selecting the Azure SQL Benefits tab, you’ll gain insights into the advantages of Azure SQL - helping you make well-informed decisions about your migration. By selecting the Tutorials tab, you will access information to help you get started with the solution. We've also integrated Microsoft Copilot at select points in the journey for any guidance and support you might need to ensure confident and informed decision-making.

For next steps: Get started with SQL Server Migration in Azure Arc today.
Feedback

We love hearing from our customers! For any feedback or suggestions for the product group, use the following online form to let us know: Provide feedback to the product group.

We hope that you will enjoy our solution, and we look forward to your feedback as you embark on your migration journey to Azure.

General Availability Announcement: Regex Support in SQL Server 2025 & Azure SQL
We’re excited to announce the General Availability (GA) of native Regex support in SQL Server 2025 and Azure SQL — a long-awaited capability that brings powerful pattern matching directly into T-SQL. This release marks a significant milestone in modernizing string operations and enabling advanced text processing scenarios natively within the database engine.

What is Regex?

The other day, while building LEGO with my 3-year-old — an activity that’s equal parts joy and chaos — I spent minutes digging for one tiny piece and thought, “If only Regex worked on LEGO.” That moment of playful frustration turned into a perfect metaphor. Think of your LEGO box as a pile of data — a colorful jumble of tiny pieces. Now imagine trying to find every little brick from a specific LEGO set your kid mixed into the pile. That’s tricky — you’d have to sift through each piece one by one. But what if you had a smart filter that instantly found exactly those pieces? That’s what Regex (short for Regular Expressions) does for your data. It’s a powerful pattern-matching tool that helps you search, extract, and transform text with precision. With Regex now natively supported in SQL Server 2025 and Azure SQL, this capability is built directly into T-SQL — no external languages or workarounds required.

What can Regex help you do?

Regex can help you tackle a wide range of data challenges, including:

- Enhancing data quality and accuracy by validating and correcting formats like phone numbers, email addresses, zip codes, and more.
- Extracting valuable insights by identifying and grouping specific text patterns such as keywords, hashtags, or mentions.
- Transforming and standardizing data by replacing, splitting, or joining text patterns — useful for handling abbreviations, acronyms, or synonyms.
- Cleaning and optimizing data by removing unwanted patterns like extra whitespace, punctuation, or duplicates.

Meet the new Regex functions in T-SQL

SQL Server 2025 introduces seven new T-SQL Regex functions, grouped into two categories: scalar functions (return a value per row) and table-valued functions (TVFs) (return a set of rows). Here’s a quick overview:

Function              | Type   | Description
----------------------|--------|------------------------------------------------------------------------
REGEXP_LIKE           | Scalar | Returns TRUE if the input string matches the Regex pattern
REGEXP_COUNT          | Scalar | Counts the number of times a pattern occurs in a string
REGEXP_INSTR          | Scalar | Returns the position of a pattern match within a string
REGEXP_REPLACE        | Scalar | Replaces substrings that match a pattern with a replacement string
REGEXP_SUBSTR         | Scalar | Extracts a substring that matches a pattern
REGEXP_MATCHES        | TVF    | Returns a table of all matches including substrings and their positions
REGEXP_SPLIT_TO_TABLE | TVF    | Splits a string into rows using a Regex delimiter

These functions follow the POSIX standard and support most of the PCRE/PCRE2 flavor of regular expression syntax, making them compatible with most modern Regex engines and tools. They support common features like:

- Character classes (\d, \w, etc.)
- Quantifiers (+, *, {n})
- Alternation (|)
- Capture groups ((...))

You can also use Regex flags to modify behavior:

- 'i' – Case-insensitive matching
- 'm' – Multi-line mode (^ and $ match line boundaries)
- 's' – Dot matches newline
- 'c' – Case-sensitive matching (default)

Examples: Regex in Action

Let’s explore how these functions solve tricky real-world data tasks that were hard to do in earlier SQL versions.

REGEXP_LIKE: Data Validation — Keeping data in shape

Validating formats like email addresses or phone numbers used to require multiple functions or external tools.
With REGEXP_LIKE, it’s now a concise query. For example, you can check whether an email contains valid characters before and after the @, followed by a domain with at least two letters like .com, .org, or .co.in.

SELECT [Name], Email,
    CASE
        WHEN REGEXP_LIKE (Email, '^[A-Za-z0-9._+]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')
            THEN 'Valid Email'
        ELSE 'Invalid Email'
    END AS IsValidEmail
FROM (VALUES
    ('John Doe', 'john@contoso.com'),
    ('Alice Smith', 'alice@fabrikam.com'),
    ('Bob Johnson', 'bob@fabrikam.net'),
    ('Charlie Brown', 'charlie@contoso.co.in'),
    ('Eve Jones', 'eve@@contoso.com')) AS e(Name, Email);

We can further use REGEXP_LIKE in CHECK constraints to enforce these rules at the column level (so no invalid format ever gets into the table). For instance:

CREATE TABLE Employees (
    ...,
    Email VARCHAR (320) CHECK (REGEXP_LIKE (Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')),
    Phone VARCHAR (20) CHECK (REGEXP_LIKE (Phone, '^(\d{3})-(\d{3})-(\d{4})$'))
);

This level of enforcement significantly enhances data integrity by ensuring that only correctly formatted values are accepted into the database.

REGEXP_COUNT: Count JSON object keys

Count how many top-level keys exist in a JSON string — no JSON parser needed!

SELECT JsonData,
    REGEXP_COUNT(JsonData, '"[^"]+"\s*:', 1, 'i') AS NumKeys
FROM (VALUES
    ('{"name":"Abhiman","role":"PM","location":"Bengaluru"}'),
    ('{"skills":["SQL","T-SQL","Regex"],"level":"Advanced"}'),
    ('{"project":{"name":"Regex GA","status":"Live"},"team":["Tejas","UC"]}'),
    ('{"empty":{}}'),
    ('{}')) AS t(JsonData);

REGEXP_INSTR: Locate patterns in logs

Find the position of the first error code (ERR-XXXX) in log messages — even when the pattern appears multiple times or in varying locations.

SELECT LogMessage,
    REGEXP_INSTR(LogMessage, 'ERR-\d{4}', 1, 1, 0, 'i') AS ErrorCodePosition
FROM (VALUES
    ('System initialized. ERR-1001 occurred during startup.'),
    ('Warning: Disk space low. ERR-2048. Retry failed. ERR-2049.'),
    ('No errors found.'),
    ('ERR-0001: Critical failure. ERR-0002: Recovery started.'),
    ('Startup complete. Monitoring active.')) AS t(LogMessage);

REGEXP_REPLACE: Redact sensitive data

Mask SSNs and credit card numbers in logs or exports — all with a single, secure query.

SELECT sensitive_info,
    REGEXP_REPLACE(sensitive_info, '(\d{3}-\d{2}-\d{4}|\d{4}-\d{4}-\d{4}-\d{4})', '***-**-****') AS redacted_info
FROM (VALUES
    ('John Doe SSN: 123-45-6789'),
    ('Credit Card: 9876-5432-1098-7654'),
    ('SSN: 000-00-0000 and Card: 1111-2222-3333-4444'),
    ('No sensitive info here'),
    ('Multiple SSNs: 111-22-3333, 222-33-4444'),
    ('Card: 1234-5678-9012-3456, SSN: 999-88-7777')) AS t(sensitive_info);

REGEXP_SUBSTR: Extract and count email domains

Extract domains from email addresses and group users by domain.

SELECT
    REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1) AS Domain,
    COUNT(*) AS NumUsers
FROM (VALUES
    ('Alice', 'alice@contoso.com'),
    ('Bob', 'bob@fabrikam.co.in'),
    ('Charlie', 'charlie@example.com'),
    ('Diana', 'diana@college.edu'),
    ('Eve', 'eve@contoso.com'),
    ('Frank', 'frank@fabrikam.co.in'),
    ('Grace', 'grace@example.net')) AS e(Name, Email)
WHERE REGEXP_LIKE (Email, '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')
GROUP BY REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1);

REGEXP_MATCHES: Extract multiple emails from text

Extract all email addresses from free-form text like comments or logs — returning each match as a separate row for easy parsing or analysis.
SELECT *
FROM REGEXP_MATCHES ('Contact us at support@example.com or sales@example.com',
    '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}');

This query identifies and returns both email addresses found in the string — no need for loops, manual parsing, or external scripting.

REGEXP_SPLIT_TO_TABLE: Break down structured text

Split a string into rows using a Regex delimiter — ideal for parsing logs, config entries, or form data.

SELECT *
FROM REGEXP_SPLIT_TO_TABLE ('Name: John Doe; Email: john.doe@example.com; Phone: 123-456-7890', '; ');

This query breaks the input string into rows for each field, making it easier to parse and process the data — especially when dealing with inconsistent or custom delimiters.

To explore more examples, syntax options, and usage details, head over to the documentation: https://learn.microsoft.com/en-us/sql/t-sql/functions/regular-expressions-functions-transact-sql?view=sql-server-ver17

Conclusion

The addition of Regex functionality in SQL Server 2025 and Azure SQL is a major leap forward for developers and DBAs. It eliminates the need for external libraries, CLR integration, or complex workarounds for text processing. With Regex now built into T-SQL, you can:

- Validate and enforce data formats
- Sanitize and transform sensitive data
- Search logs for complex patterns
- Extract and split structured content

And this is just the beginning. Regex opens the door to a whole new level of data quality, text analytics, and developer productivity — all within the database engine. So go ahead and Regex away! Your feedback and partnership continue to drive innovation in Azure SQL and SQL Server — thank you for being part of it.

Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide
The Secure Future Initiative is Microsoft’s strategic framework for embedding security into every layer of the data platform — from infrastructure to identity. As part of this initiative, Microsoft Entra authentication for Azure SQL Database offers a modern, password-less approach to access control that aligns with Zero Trust principles. By leveraging Entra identities, customers benefit from stronger security postures through multifactor authentication, centralized identity governance, and seamless integration with managed identities and service principals. Onboarding Entra authentication enables organizations to reduce reliance on passwords, simplify access management, and improve auditability across hybrid and cloud environments. With broad support across tools and platforms, and growing customer adoption, Entra authentication is a forward-looking investment in secure, scalable data access.

Migration Steps Overview

Organizations utilizing SQL authentication can strengthen database security by migrating to Entra ID-based authentication. The following steps outline the process:

1. Identify your logins and users – review the existing SQL databases, along with all related users and logins, to assess what’s needed for migration.
2. Enable Entra auth on Azure SQL logical servers by assigning a Microsoft Entra admin.
3. Identify all permissions associated with the SQL logins and database users.
4. Recreate SQL logins and users with Microsoft Entra identities.
5. Upgrade application drivers and libraries to minimum versions and update application connections to SQL databases to use Entra-based managed identities.
6. Update deployments for SQL logical server resources to have Microsoft Entra-only authentication enabled.
7. For all existing Azure SQL Databases, flip to Entra-only after validation.
8. Enforce Entra-only for all Azure SQL Databases with Azure Policies (deny).

Step 1: Identify your logins and users - Use SQL Auditing

Consider using SQL Audit to monitor which identities are accessing your databases. Alternatively, you may use other methods, or skip this step if you already have full visibility of all your logins. Configure server-level SQL Auditing; for more information on turning on server-level auditing, see: Configure Auditing for Azure SQL Database series - part1 | Microsoft Community Hub. SQL Audit can be enabled on the logical server, which will enable auditing for all existing and new user databases. When you set up auditing, the audit log will be written to your storage account with the SQL Database audit log format. Use sys.fn_get_audit_file_v2 to query the audit logs in SQL. You can join the audit data with sys.server_principals and sys.database_principals to view users and logins connecting to your databases.
The following query is an example of how to do this:

SELECT (CASE WHEN database_principal_id > 0 THEN dp.type_desc ELSE NULL END) AS db_user_type
    , (CASE WHEN server_principal_id > 0 THEN sp.type_desc ELSE NULL END) AS srv_login_type
    , server_principal_name
    , server_principal_sid
    , server_principal_id
    , database_principal_name
    , database_principal_id
    , database_name
    , SUM(CASE WHEN succeeded = 1 THEN 1 ELSE 0 END) AS successful_logins
    , SUM(CASE WHEN succeeded = 0 THEN 1 ELSE 0 END) AS failed_logins
FROM sys.fn_get_audit_file_v2(
    '<Storage_endpoint>/<Container>/<ServerName>',
    DEFAULT, DEFAULT,
    '2023-11-17T08:40:40Z', '2023-11-17T09:10:40Z')
-- join on database principals (users) metadata
LEFT OUTER JOIN sys.database_principals dp
    ON database_principal_id = dp.principal_id
-- join on server principals (logins) metadata
LEFT OUTER JOIN sys.server_principals sp
    ON server_principal_id = sp.principal_id
-- filter to actions DBAF (Database Authentication Failed) and DBAS (Database Authentication Succeeded)
WHERE (action_id = 'DBAF' OR action_id = 'DBAS')
GROUP BY server_principal_name
    , server_principal_sid
    , server_principal_id
    , database_principal_name
    , database_principal_id
    , database_name
    , dp.type_desc
    , sp.type_desc

Step 2: Enable Microsoft Entra authentication (assign admin)

Follow this to enable Entra authentication and assign a Microsoft Entra admin at the server. This is mixed mode; existing SQL auth continues to work.

WARNING: Do NOT enable Entra-only (azureADOnlyAuthentications) yet. That comes in Step 7.

Entra admin recommendation: for production environments, it is advisable to use a PIM-enabled Entra group as the server administrator for enhanced access control.

Step 3: Identify & document existing permissions (SQL logins & users)

Retrieve a list of all your SQL auth logins. Make sure to run this on the master database:

SELECT * FROM sys.sql_logins

List all SQL auth users. Run the query below on every user database; it lists the users per database:

SELECT * FROM sys.database_principals WHERE TYPE = 'S'

Note: You may need only the column 'name' to identify the users.

List permissions per SQL auth user:

SELECT database_principals.name
    , database_principals.principal_id
    , database_principals.type_desc
    , database_permissions.permission_name
    , CASE
        WHEN class = 0 THEN 'DATABASE'
        WHEN class = 3 THEN 'SCHEMA: ' + SCHEMA_NAME(major_id)
        WHEN class = 4 THEN 'Database Principal: ' + USER_NAME(major_id)
        ELSE OBJECT_SCHEMA_NAME(database_permissions.major_id) + '.' + OBJECT_NAME(database_permissions.major_id)
      END AS object_name
    , columns.name AS column_name
    , database_permissions.state_desc AS permission_type
FROM sys.database_principals AS database_principals
INNER JOIN sys.database_permissions AS database_permissions
    ON database_principals.principal_id = database_permissions.grantee_principal_id
LEFT JOIN sys.columns AS columns
    ON database_permissions.major_id = columns.object_id
    AND database_permissions.minor_id = columns.column_id
WHERE type_desc = 'SQL_USER'
ORDER BY database_principals.name

Step 4: Create SQL users for your Microsoft Entra identities

You can create users (preferred) for all Entra identities. Learn more at Create user. The "FROM EXTERNAL PROVIDER" clause in T-SQL distinguishes Entra users from SQL authentication users. The most straightforward approach to adding Entra users is to use a managed identity for Azure SQL and grant the required three Graph API permissions. These permissions are necessary for Azure SQL to validate Entra users.
- User.Read.All: allows access to Microsoft Entra user information.
- GroupMember.Read.All: allows access to Microsoft Entra group information.
- Application.Read.All: allows access to Microsoft Entra service principal (application) information.

For creating Entra users with non-unique display names, use Object_Id in the CREATE USER T-SQL:

-- Retrieve the Object Id from the Entra blade in the Azure portal.
CREATE USER [myapp4466e] FROM EXTERNAL PROVIDER
    WITH OBJECT_ID = 'aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb'

For more information on finding the Entra Object ID: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Alternatively, if granting these API permissions to SQL is undesirable, you may add Entra users directly using the T-SQL commands provided below. In these scenarios, Azure SQL will bypass Entra user validation.

Create SQL user for managed identity or an application - This T-SQL code snippet establishes a SQL user for an application or managed identity. Please substitute the @MSIname and @clientId (note: use the client ID, not the object ID) variables with the display name and client ID of your managed identity or application.

-- Replace the two variables with the managed identity display name and client ID
declare @MSIname sysname = '<Managed Identity/App Display Name>'
declare @clientId uniqueidentifier = '<Managed Identity/App Client ID>';
-- convert the guid to the right type and create the SQL user
declare @castClientId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @clientId), 1);
-- Construct command: CREATE USER [@MSIname] WITH SID = @castClientId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @MSIname + '] WITH SID = ' + @castClientId + ', TYPE = E;'
EXEC (@cmd)

For more information on finding the Entra Client ID: Register a client application in Microsoft Entra ID for the Azure Health Data Services | Microsoft Learn

Create SQL user for Microsoft Entra user - Use this T-SQL to create a SQL user for a Microsoft Entra account. Enter your username and object ID:

-- Replace the two variables with the MS Entra user alias and object ID
declare @username sysname = '<MS Entra user alias>'; -- (e.g., username@contoso.com)
declare @objectId uniqueidentifier = '<User Object ID>';
-- convert the guid to the right type
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);
-- Construct command: CREATE USER [@username] WITH SID = @castObjectId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @username + '] WITH SID = ' + @castObjectId + ', TYPE = E;'
EXEC (@cmd)

Create SQL user for Microsoft Entra group - This T-SQL snippet creates a SQL user for a Microsoft Entra group. Set @groupName and @objectId to your values.
-- Replace the two variables with the MS Entra group display name and object ID
declare @groupName sysname = '<MS Entra group display name>'; -- (e.g., ContosoUsersGroup)
declare @objectId uniqueidentifier = '<Group Object ID>';
-- convert the guid to the right type and create the SQL user
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);
-- Construct command: CREATE USER [@groupName] WITH SID = @castObjectId, TYPE = X;
declare @cmd nvarchar(max) = N'CREATE USER [' + @groupName + '] WITH SID = ' + @castObjectId + ', TYPE = X;'
EXEC (@cmd)

For more information on finding the Entra Object ID: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Validate SQL user creation - When a user is created correctly, the EntraID column in this query shows the user's original MS Entra ID:

select CAST(sid as uniqueidentifier) AS EntraID, * from sys.database_principals

Assign permissions to Entra-based users – After creating Entra users, assign them SQL permissions to read or write by either using GRANT statements or adding them to roles like db_datareader. Refer to your documentation from Step 3, ensuring you include all necessary user permissions for new Entra SQL users and that security policies remain enforced.

Step 5: Update Programmatic Connections

Change your application connection strings from SQL authentication to managed identities and test each app for Microsoft Entra compatibility. Upgrade your drivers to these versions or newer:

- JDBC driver version 7.2.0 (Java)
- ODBC driver version 17.3 (C/C++, COBOL, Perl, PHP, Python)
- OLE DB driver version 18.3.0 (COM-based applications)
- Microsoft.Data.SqlClient 5.2.2+ (ADO.NET)
- Microsoft.EntityFramework.SqlServer 6.5.0 (Entity Framework)

System.Data.SqlClient (SDS) doesn't support managed identity; switch to Microsoft.Data.SqlClient (MDS). If you need to port your applications from SDS to MDS, the following cheat sheet will be helpful: https://github.com/dotnet/SqlClient/blob/main/porting-cheat-sheet.md. Microsoft.Data.SqlClient also takes a dependency on these packages, most notably MSAL for .NET (version 4.56.0+). Here is an example of an Azure web application connecting to Azure SQL using managed identity.
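As a minimal sketch of what such a connection string looks like in Microsoft.Data.SqlClient with a system-assigned managed identity (server and database names are placeholders):

Server=tcp:<your-server>.database.windows.net,1433;Initial Catalog=<your-database>;Authentication=Active Directory Managed Identity;Encrypt=True;TrustServerCertificate=False;

For a user-assigned managed identity, the identity's client ID is supplied as well (in MDS this is passed via the User Id keyword). Note that no password appears anywhere in the string, which is the point of the migration.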
Step 6: Validate No Local Auth Traffic

Be sure to switch all your connections to managed identity before you redeploy your Azure SQL logical servers with Microsoft Entra-only authentication turned on. Repeat the use of SQL Audit, just as you did in Step 1, but now to confirm that every connection has moved away from SQL authentication. Once your server is running with only Entra authentication, any connections still based on SQL authentication will not work, which could disrupt services. Test your systems thoroughly to verify that everything operates correctly.

Step 7: Enable Microsoft Entra-only & disable local auth

Once all your connections and applications are built to use managed identity, you can disable SQL authentication by turning on Entra-only authentication via the Azure portal or the APIs.

Step 8: Enforce at scale (Azure Policy)

Additionally, after successful migration and validation, it is recommended to deploy the built-in Azure Policy across your subscriptions to ensure that all SQL resources do not use local authentication. During resource creation, Azure SQL instances will be required to have Microsoft Entra-only authentication enabled. This requirement can be enforced through Azure policies.

Best Practices for Entra-Enabled Azure SQL Applications

- Use exponential backoff with decorrelated jitter for retrying transient SQL errors, and set a max retry cap to avoid resource drain.
- Separate retry logic for connection setup and query execution.
- Cache and proactively refresh Entra tokens before expiration.
- Use Microsoft.Data.SqlClient v3.0+ with Azure.Identity for secure token management.
- Enable connection pooling and use consistent connection strings.
- Set appropriate timeouts to prevent hanging operations.
- Handle token/auth failures with targeted remediation, not blanket retries.
- Apply least-privilege identity principles; avoid global/shared tokens.
- Monitor retry counts, failures, and token refreshes via telemetry.
- Maintain auditing for compliance and security.
- Enforce TLS 1.2+ (Encrypt=True, TrustServerCertificate=False).
- Prefer pooled over static connections.
- Log SQL exception codes for precise error handling.
- Keep libraries and drivers up to date for latest features and resilience.

References

- Use this resource to troubleshoot issues with Entra authentication (previously known as Azure AD authentication): Troubleshooting problems related to Azure AD authentication with Azure SQL DB and DW | Microsoft Community Hub
- To add Entra users from an external tenant, invite them as guest users to the Azure SQL Database's Entra administrator tenant. For more information on adding Entra guest users: Quickstart: Add a guest user and send an invitation - Microsoft Entra External ID | Microsoft Learn

Conclusion

Migrating to Microsoft Entra password-less authentication for Azure SQL Database is a strategic investment in security, compliance, and operational efficiency. By following this guide and adopting best practices, organizations can reduce risk, improve resilience, and future-proof their data platform in alignment with Microsoft’s Secure Future Initiative.

How to setup alerts for deadlocks using Log Analytics
Managed Instance diagnostic events do not support sending deadlock information to Log Analytics. However, through auditing, it's possible to query failed queries along with their reported error messages — though this does not include the deadlock XML. We will see how we can send this information to Log Analytics and set up an alert for when a deadlock occurs.

Step 1 - Deploy Log Analytics

Create a Log Analytics workspace if you don't currently have one: Create a Log Analytics workspace

Step 2 - Add diagnostic setting

In the Azure portal, open the Diagnostic settings of your Azure SQL Managed Instance and choose Add diagnostic setting. Select SQL Security Audit Event and choose your Log Analytics workspace as the destination.

Step 3 - Create a server audit on the Azure SQL Managed Instance

Run the query below on the Managed Instance. Rename the server audit and server audit specification to a name of your choice.

CREATE SERVER AUDIT [audittest]
TO EXTERNAL_MONITOR
GO

-- we are adding Login audit, but only BATCH_COMPLETED_GROUP is necessary for query execution
CREATE SERVER AUDIT SPECIFICATION audit_server
FOR SERVER AUDIT audittest
    ADD (SUCCESSFUL_LOGIN_GROUP),
    ADD (BATCH_COMPLETED_GROUP),
    ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON)
GO

ALTER SERVER AUDIT [audittest] WITH (STATE = ON)
GO

Step 4 - Check events on Log Analytics

It may take some time for records to begin appearing in Log Analytics. Open your Log Analytics workspace and choose Logs. To verify whether data is being ingested, run the following query in Log Analytics and wait until you start getting the first results. Make sure that you change servername to your Azure SQL Managed Instance name.

AzureDiagnostics
| where LogicalServerName_s == "servername"
| where Category == "SQLSecurityAuditEvents"
| take 10

Step 5 - (Optional) Create a deadlock event for testing

Create a deadlock scenario so you can see a record in Log Analytics. For example, open SSMS and a new query window under the context of a user database (you can create a test database just for this test). Create a table in the user database and insert 10 records:

create table tb1 (id int identity(1,1) primary key clustered, col1 varchar(30))
go
insert into tb1 values ('aaaaaaa')
go 10

You can close the query window or reuse it for the next step. Open a new query window (or reuse the first query window) and run (leave the query window open after executing):

begin transaction
update tb1 set col1 = 'bbbb' where id = 1

Open a second query window and run (leave the query window open after executing):

begin transaction
update tb1 set col1 = 'bbbb' where id = 2

Go back to the first query window and run (the query will be blocked - it will stay executing):

update tb1 set col1 = 'bbbb' where id = 2

Go back to the second query window and run (this transaction will be the victim of the deadlock):

update tb1 set col1 = 'bbbb' where id = 1

You can rollback and close all windows after the deadlock exception.
Step 6 - (Optional) Check the deadlock exception on Log Analytics

Note: the record can take some minutes to appear in Log Analytics. Use the query below to obtain the deadlock events for the last hour (we are looking for error 1205). Make sure that you change servername to your Azure SQL Managed Instance name.

AzureDiagnostics
| where TimeGenerated > ago(1h)
| where LogicalServerName_s == "servername"
| where Category == "SQLSecurityAuditEvents"
| where succeeded_s == "false"
| where additional_information_s contains "Err 1205, Level 13"

Step 7 - Use the query to create an alert

Use the query below to create an alert in Azure Log Analytics. Make sure that you change servername to your Azure SQL Managed Instance name. The query checks for deadlocks that occurred in the previous hour.

AzureDiagnostics
| where TimeGenerated > ago(1h)
| where LogicalServerName_s == "servername"
| where Category == "SQLSecurityAuditEvents"
| where succeeded_s == "false"
| where additional_information_s contains "Err 1205, Level 13"

Run the query and click on New alert rule. Create the alert with the desired settings.
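If you prefer the alert rule to evaluate a numeric threshold rather than raw rows, the same query can be extended with a count. This is a minimal sketch using only the columns already shown above plus standard KQL aggregation; the bucket size is an assumption to adjust to your needs:

AzureDiagnostics
| where TimeGenerated > ago(1h)
| where LogicalServerName_s == "servername"
| where Category == "SQLSecurityAuditEvents"
| where succeeded_s == "false"
| where additional_information_s contains "Err 1205, Level 13"
// count deadlock events in 5-minute buckets
| summarize DeadlockCount = count() by bin(TimeGenerated, 5m)

In the alert rule, you can then trigger whenever DeadlockCount is greater than 0 in any bucket.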