Azure SQL Database
Stream data in near real time from SQL to Azure Event Hubs - Public preview
If near-real-time integration is something you are looking to implement and you want a simpler way to get data out of SQL, keep reading. SQL is making integration easier, and Change Event Streaming is a feature continuing this trend. Modern applications and analytics platforms increasingly rely on event-driven architectures and real-time data pipelines. As businesses speed up, real-time decisioning is becoming especially important. Traditionally, capturing changes from a relational database requires complex ETL jobs, periodic polling, or third-party tools. These approaches often consume significant cycles on the data source, introduce operational overhead, and pose scalability challenges, especially if you need one data source to feed multiple destinations.

In this context, we are happy to release the Change Event Streaming ("CES") feature into Public Preview for Azure SQL Database. This feature enables you to stream row-level changes - inserts, updates, and deletes - from your database directly to Azure Event Hubs in near real time. Change Event Streaming addresses the above challenges by:

Reducing latency: Changes are streamed (pushed by SQL) as they happen. This is in contrast with traditional CDC (change data capture) or CT (change tracking) based approaches, where an external component needs to poll SQL at regular intervals. Traditional approaches allow you to increase polling frequency, but it gets difficult to find a sweet spot between minimal latency and minimal overhead from too-frequent polls.

Simplifying architecture: No need for Change Data Capture (CDC), Change Tracking, custom polling, or external connectors - SQL streams directly to the configured destination. This means a simpler security profile (fewer authentication points), fewer failure points, easier monitoring, and a lower skill bar to deploy and run the service. No need to worry about cleanup jobs, etc.
SQL keeps track of which changes have been successfully received by the destination, handles the retry logic, and releases the log truncation point. Finally, with CES you have fewer components to procure and get approved for production use.

Decoupling: The integration is done at the database level. This eliminates the problem of dual writes - changes are streamed at transaction boundaries, once your source of truth (the database) has saved them. You do not need to modify your app workloads to get the data streamed - you tap right into the data layer - which is useful if your apps are dated and do not possess real-time integration capabilities. With some third-party apps, you may not even have an option other than database-level integration, and CES makes it simpler. Also, the publishing database does not concern itself with the final destination of the data - stream the data once to the common message bus, and it can be consumed by multiple downstream systems, irrespective of their number or capacity - the number of consumers does not affect publishing load on the SQL side. Serving consumers is handled by the message bus, Azure Event Hubs, which is purpose-built for high-throughput data transfers.

(Diagram: conceptual data flow from SQL Server, with an arrow towards Azure Event Hubs, from where a number of arrows point to different final destinations.)

Key Scenarios for CES

Event-driven microservices: They need to exchange data, typically through a common message bus. With CES, you get automated data publishing from each of the microservices. This allows you to trigger business processes immediately when data changes.

Real-time analytics: Stream operational data into platforms like Fabric Real-Time Intelligence or Azure Stream Analytics for quick insights.
Breaking down the monoliths: Typical monolithic systems with complex schemas sitting on top of a single database can be broken down one piece at a time: create a new component (typically a microservice), set up streaming from the relevant tables on the monolith database, and tap into the stream from the new component. You can then test-run the component, validate the results against the original monolith, and cut over once you have built confidence that the new component is stable.

Cache and search index updates: Keep distributed caches and search indexes in sync without custom triggers.

Data lake ingestion: Capture changes continuously into storage for incremental processing.

Data availability: This is not a scenario per se, but the amount of data you can tap into for business process mining, or intelligence in general, goes up whenever you plug another database into the message bus. For example, you plug your eCommerce system into the message bus to integrate with shipping providers, and consequently the same data stream is immediately available for any other system to tap into.

How It Works

CES uses transaction log-based capture to stream changes with minimal impact on your workload. Events are published in a structured JSON format following the CloudEvents standard, including operation type, primary key, and before/after values. You can configure CES to target Azure Event Hubs via AMQP or Kafka protocols. For details on configuration, message format, and FAQs, see the official documentation: Feature Overview and CES: Frequently Asked Questions.

Get Started

Public preview: CES is available today in public preview for Azure SQL Database and as a preview feature in SQL Server 2025.
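To get a feel for what a consumer might do with a CES event, here is a hedged Python sketch that parses a CloudEvents-style JSON payload carrying an operation type and before/after values, as described under "How It Works". The field names inside `data` (`operation`, `keys`, `before`, `after`) are illustrative assumptions, not the documented CES schema - consult the message-format documentation for the authoritative shape.

```python
import json

# Hypothetical CES change event in CloudEvents-style JSON. The "data"
# field names (operation, keys, before, after) are assumptions made for
# illustration only.
raw_event = json.dumps({
    "specversion": "1.0",
    "type": "update",
    "source": "/sql/contososerver/WideWorldImporters/dbo.Orders",
    "data": {
        "operation": "update",
        "keys": {"OrderID": 42},
        "before": {"OrderID": 42, "Status": "Pending"},
        "after": {"OrderID": 42, "Status": "Shipped"},
    },
})

def summarize_change(event_json: str) -> str:
    """Return a one-line summary of a change event (assumed shape)."""
    event = json.loads(event_json)
    data = event["data"]
    # Diff before/after to find the columns the transaction changed.
    changed = {
        col: (data["before"].get(col), val)
        for col, val in data["after"].items()
        if data["before"].get(col) != val
    }
    return (f"{data['operation']} on {event['source']} "
            f"keys={data['keys']} changed={changed}")

print(summarize_change(raw_event))
```

A real consumer would receive such payloads from Azure Event Hubs (via AMQP or Kafka clients) instead of a local string; the parsing logic stays the same.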
Private preview: CES is also available as a private preview for Azure SQL Managed Instance and Fabric SQL database; you can request to join the private preview by signing up here: https://aka.ms/sql-ces-signup

We encourage you to try the feature out and start building real-time integrations on top of your existing data. We welcome your feedback - please share your experience through the Azure Feedback portal or support channels. The comments below on this blog post will also be monitored if you want to engage with us. Finally, the CES team can be reached via email: sqlcesfeedback [at] microsoft [dot] com.

Useful resources: Free Azure SQL Database. Free Azure SQL Managed Instance.

Azure SQL Database LTR Backup Immutability is now Generally Available
Azure SQL Database is a fully managed, always-up-to-date relational database service built for mission-critical apps. It delivers built-in high availability, automated backups, and elastic scale, with strong security and compliance capabilities. Today, I am very excited to announce the General Availability of immutability for Azure SQL DB LTR backups! Azure SQL Database now supports immutable long-term retention (LTR) backups, stored in write-once, read-many (WORM) state for a fixed (customer-configured) period. That means your LTR backups cannot be modified or deleted during the lock window - even by highly privileged identities - helping you preserve clean restore points after a cyberattack and strengthen your compliance posture.

Why this matters: ransomware targets backups

Modern ransomware playbooks don't stop at encrypting production data - they also attempt to alter or delete backups to block recovery. With backup immutability, Azure SQL Database LTR backups are written to immutable storage and locked for the duration you specify, providing a resilient, tamper-proof recovery layer so you can restore from a known-good copy when it matters most.

What we're announcing

General Availability of Backup Immutability for Long-Term Retention (LTR) backups in Azure SQL Database. This GA applies to Azure SQL Database LTR backups.

What immutability does (and doesn't) do

Prevents changes and deletion of LTR backup artifacts for a defined, locked period (WORM). This protection applies even to highly privileged identities, reducing the risk from compromised admin accounts or insider misuse.

Helps address regulatory WORM expectations, supporting customers who must retain non-erasable, non-rewritable records (for example, requirements under SEC Rule 17a-4(f), FINRA Rule 4511(c), and CFTC Rule 1.31(c)-(d)). Always consult your legal/compliance team for your specific obligations.
Complements a defense-in-depth strategy - it's not a replacement for identity hygiene, network controls, threat detection, and recovery drills. See Microsoft's broader ransomware guidance for Azure.

How it works (at a glance)

When you enable immutability on an LTR policy, Azure SQL Database stores those LTR backups on Azure immutable storage in a WORM state. During the lock window, the backup cannot be modified or deleted; after the lock expires, normal retention/deletion applies per your policy.

Key benefits

Ransomware-resilient recovery: Preserve clean restore points that attackers can't tamper with during the lock period.

Compliance-ready retention: Use WORM-style retention to help meet industry and regulatory expectations for non-erasable, non-rewritable storage.

Operational simplicity: Manage immutability alongside your existing Azure SQL Database long-term retention policies.

Get started

Choose databases that require immutable LTR backups. Enable immutability on the LTR backup policy and set the retention/lock period aligned to your regulatory and risk requirements. Validate recovery by restoring from an immutable LTR backup. Documentation: Learn more about backup immutability for LTR backups in Azure SQL Database on Microsoft Learn.

Tell us what you think

We'd love your feedback on scenarios, guidance, and tooling that would make immutable backups even easier to adopt. Share your experiences and suggestions in the Azure SQL community forums and let us know how immutability is helping your organization raise its cyber-resilience.

Convert geo-replicated databases to Hyperscale
Update: On 22 October 2025 we announced the General Availability of this improvement.

We're excited to introduce the next improvement in Hyperscale conversion: a new capability that allows customers to convert Azure SQL databases to Hyperscale while keeping active geo-replication or failover group configurations intact. This builds on our earlier improvements and directly addresses one of the most requested capabilities from customers. With this improvement, customers can now modernize their database architecture with Hyperscale while maintaining business continuity.

Overview

We have heard feedback from customers about possible improvements we could make to converting their databases to Hyperscale. Customers complained about the complex steps they needed to perform to convert a database to Hyperscale when the database is geo-replicated by active geo-replication or failover groups. Previously, converting to Hyperscale required tearing down geo-replication links and recreating them after the conversion. Now, that's no longer necessary. This improvement preserves your cross-region disaster recovery or read scale-out configuration during the conversion to Hyperscale, minimizing downtime and operational complexity. The feature is especially valuable for applications that rely on failover group endpoints for connectivity. Before this improvement, if the application needed to remain available during conversion, the connection string had to be modified as part of the conversion because the failover group and its endpoints had to be removed. With this new improvement, the conversion process is optimized for minimal disruption, with telemetry showing that the majority of cutover times are under one minute. Even with a geo-replication configuration in place, you can still choose between automatic and manual cutover modes, offering flexibility in scheduling the transition.
Progress tracking is now more granular, giving customers better visibility into each stage of the conversion, including the conversion of the geo-secondary to Hyperscale.

Customer feedback

Throughout the preview phase, we received overwhelmingly positive feedback from several customers about this improvement. Viktoriia Kuznetcova, Senior Automation Test Engineer at Nintex, says:

"We needed a low-downtime way to move our databases from the Premium tier to Azure SQL Database Hyperscale, and this new feature delivered perfectly; allowing us to complete the migration in our test environments safely and smoothly, even while the service remained under continuous load, without any issues and without needing to break the failover group. We're looking forward to the public release so we can use it in production, where Hyperscale's ability to scale storage both up and down will help us manage peak loads without overpaying for unused capacity."

Get started

The good news is that there are no changes to the conversion process. The workflow automatically detects that a geo-secondary is present and converts it to Hyperscale. There are no new parameters, and the method is the same as the existing conversion process for non-geo-replicated databases. All you need to make sure is that:

You have only one geo-secondary replica, because Hyperscale doesn't support more than one geo-secondary replica.

If a chained geo-replication configuration exists, it must be removed before starting the conversion to Hyperscale. Creating a geo-replica of a geo-replica (also known as "geo-replica chaining") isn't supported in Hyperscale.

Once the above requirements are satisfied, you can use any of the following methods to initiate the conversion process. Conversion to Hyperscale must be initiated from the primary geo-replica.
The following sample commands convert a database named WideWorldImporters on a logical server called contososerver to an 8-vcore Hyperscale database with the manual cutover option.

T-SQL:
ALTER DATABASE WideWorldImporters MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8') WITH MANUAL_CUTOVER;

PowerShell:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "contososerver" -DatabaseName "WideWorldImporters" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_8" -ManualCutover

Azure CLI:
az sql db update --resource-group ResourceGroup01 --server contososerver --name WideWorldImporters --edition Hyperscale --service-objective HS_Gen5_8 --manual-cutover

Here are some notable details of this improvement:

The geo-secondary database is automatically converted to Hyperscale with the same service level objective as the primary.

All database configurations such as maintenance window, zone-resiliency, backup redundancy, etc. remain the same as earlier (i.e., both geo-primary and geo-secondary inherit from their own earlier configuration).

A planned failover isn't possible while the conversion to Hyperscale is in progress. A forced failover is possible; however, depending on the state of the conversion when the forced failover occurs, the new geo-primary after failover might use either the Hyperscale service tier or its original service tier.

If the geo-secondary database is in an elastic pool before conversion, it is taken out of the pool and might need to be added back to a Hyperscale elastic pool separately after the conversion.

This feature has been fully deployed across all Azure regions. If you see the error "Update to service objective '<SLO name>' with source DB geo-replicated is not supported for entity '<Database Name>'" while converting a primary to Hyperscale, we would like to hear from you - send us an email at the address given in the next section.
If you don't want to use this capability, make sure to remove any geo-replication configuration before converting your databases to Hyperscale.

Conclusion

This update marks a significant step forward in the Hyperscale conversion process, offering simpler steps, less downtime, and a geo-secondary that remains available during the conversion. We encourage you to try this capability and provide your valuable feedback to help us refine this feature. You can contact us by commenting on this blog post and we'll be happy to get back to you. Alternatively, you can email us at sqlhsfeedback AT microsoft DOT com. We are eager to hear from you all!

Removing barriers to migrating databases to Azure with Striim's Unlimited Database Migration program
Alok Pareek, co-founder and Executive Vice President of Product and Engineering at Striim
Shireesh Thota, Corporate Vice President of Databases at Microsoft

Every modernization strategy starts with data. It's what enables advanced analytics and AI agents today, and prepares enterprises for what's to come in the future. But before services like Microsoft Fabric, Azure AI Foundry, or Copilot can create that value, the underlying data needs to move into Microsoft's cloud platforms. It's within that first step, database migration, where the real complexity often lies.

To simplify the process, Microsoft has expanded its investment in the Striim partnership. Striim continuously replicates data from existing databases into Azure in real time, enabling online migrations with zero downtime. Through this partnership, we have collaborated to enable modernization and migration into Azure at no additional cost to our customers. We've designed this Unlimited Database Migration program to accelerate adoption by making migrations easier to start, easier to scale, and easier to complete, all without disrupting business operations. Since launch, this joint program has already driven significant growth in customer adoption, indicating the demand for faster, more seamless modernization. And with Microsoft's continued investment in this partnership, enterprises now have a proven, repeatable path to modernize their databases and prepare their data for the AI era. Watch or listen to our recent podcast episode (Apple Podcasts, Spotify, YouTube) to learn more.

Striim's Unlimited Migration Program

Striim's Unlimited Database Migration Program was designed to make modernization as straightforward as possible for Microsoft customers. Through this initiative, enterprises gain unlimited Striim licenses to migrate as many databases as they need at no additional cost. Highlights and benefits of the program include:

Zero-downtime, zero-data-loss migrations.
Supported sources include SQL Server, MongoDB, Oracle, MySQL, PostgreSQL, and Sybase. Supported targets include Azure Database for MySQL, Azure Database for PostgreSQL, Azure Cosmos DB, and Azure SQL.

Mission-critical, heterogeneous workloads supported. Applies to SQL, Oracle, NoSQL, and OSS.

Drives faster AI adoption. Once migrated, data is ready for analytics and AI.

Access is streamlined through Microsoft's Cloud Factory Accelerator team, which manages program enrollment and coordinates the distribution of licenses. Once onboarded, customers receive installation walkthroughs, an enablement kit, and direct support from Striim architects. Cutover support, hands-on labs, and escalation paths are all built in to help migrations run smoothly from start to finish. Enterprises can start migrations quickly, scale across business units, and keep projects moving without slowing down for procurement hurdles. Now, migrations can begin when the business is ready, not when budgets or contracts catch up.

How Striim Powers Online Migrations

Within Striim's database migrations, schema changes and metadata evolution are automatically detected and applied, preserving data accuracy and referential integrity. As the migration progresses, Striim automatically coordinates both the initial bulk load of historical data and the ongoing synchronization of live transactions. This ongoing synchronization keeps source and target systems in sync for as long as needed, so teams can actively test the target applications with real data before the cutover, thereby minimizing risk. The foundation of Striim's approach is log-based Change Data Capture (CDC), which streams database changes in real time from source to target with sub-second latency. Rather than moving only a static snapshot of a database, migrations continuously replicate every update as it happens, so both environments remain aligned with minimal impact on operational systems throughout the process.
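The snapshot-plus-CDC flow described here can be modeled in a few lines of Python. This is a toy conceptual illustration, not Striim's implementation: copy the current state (initial load), keep capturing changes while the copy runs, then replay the captured change stream against the target so both sides converge.

```python
# Toy model of an online migration: initial snapshot plus replay of
# changes captured during the snapshot. Purely illustrative -- real CDC
# tools read the transaction log and guarantee ordering and delivery.
source = {1: "alice", 2: "bob"}

# 1. Take the initial snapshot (bulk load of historical data).
target = dict(source)

# 2. Changes that happen while the snapshot is being applied are
#    captured as an ordered stream of events.
captured_changes = [
    ("update", 2, "bobby"),
    ("insert", 3, "carol"),
    ("delete", 1, None),
]
for op, key, value in captured_changes:
    # The source keeps moving; in reality the application does this
    # concurrently with the bulk load.
    if op == "delete":
        source.pop(key, None)
    else:
        source[key] = value

# 3. Replay the captured stream against the target (the CDC phase).
for op, key, value in captured_changes:
    if op == "delete":
        target.pop(key, None)
    else:
        target[key] = value

# Source and target now match, so cutover can happen without downtime.
print(source == target)
```

The key property the sketch demonstrates is that the source never has to stop accepting writes: changes made during the bulk load are not lost, only deferred until the replay phase.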
While the snapshot (initial load) is being applied to the target system, Striim captures all the changes that occur. Once the initial load is complete, Striim applies those changes using CDC, and from that point on the source and target systems are in sync. This eliminates the need to shut down the source system during the initial load and enables customers to complete their migrations without any downtime on the source database.

Striim is also designed to work across hybrid and multi-cloud architectures. It can seamlessly move workloads from on-premises databases, SaaS applications, or other clouds into Microsoft databases. By maintaining exactly-once delivery and ensuring downstream systems stay in sync, Striim reduces risk and accelerates the path to modernization. Striim is available in the Azure Marketplace, giving customers a native, supported way to integrate it directly into their Azure environment. This means migrations can be deployed quickly, governed centrally, and scaled as business needs evolve, all while still aligning with Azure's security and compliance standards.

From Migration to Value

With workloads fully landed in Azure, enterprises can immediately take advantage of the broader Microsoft data ecosystem. Fabric, Azure AI Foundry, and Copilot become available as extensions of the database foundation, allowing teams to analyze, visualize, and enrich data without delay. Enterprises can begin adopting Microsoft AI services with data that is current, trusted, and governed. Instead of treating migration as an isolated project, customers gain an integrated pathway to analytics and AI, creating value as soon as databases go live in Azure.

How Enterprises Are Using the Program Today

Across industries, we're already seeing how this program changes the way enterprises approach modernization.

Financial Services: Moving from Oracle to Azure SQL, one global bank used Striim to keep systems in sync throughout the migration.
With transactions flowing in real time, they stood up a modern fraud detection pipeline on Azure that identifies risks as they happen.

Logistics: For a logistics provider, shifting package-tracking data from MongoDB to Azure Cosmos DB meant customers could monitor shipments in real time. Striim's continuous replication kept data consistent throughout the cutover, so the company didn't have to trade accuracy for speed.

Healthcare: A provider modernizing electronic medical records from Sybase to Azure SQL relied on Striim to ensure clinicians never lost access. With data now in Azure, they can meet compliance requirements while building analytics that improve patient care.

Technology: InfoCert, a leading provider of digital trust services specializing in secure digital identity solutions, opted to migrate its critical Legalmail Enterprise application from Oracle to Azure Database for PostgreSQL. Using Striim and Microsoft, they successfully migrated 2 TB of data across 12 databases and completed the project within a six-month timeframe, lowering licensing costs, enhancing scalability, and improving security.

What unites these stories is a common thread: once data is in Azure, it becomes part of a foundation that's ready for analytics and AI.

Accelerate Your Path to Azure

Now, instead of database migration being the bottleneck for modernization, it's the starting point for what comes next. With the Unlimited Database Migration Program, Microsoft and Striim have created a path that removes friction and clears the way for innovation. Most customers can simply reach out to their Microsoft account team or seller to begin the process. Your Microsoft representative will validate that your migration scenario is supported by Striim, and Striim will allocate the licenses, provide installation guidance, and deliver ongoing support. If you're unsure who your Microsoft contact is, you can connect directly with Striim, and we'll coordinate with Microsoft on your behalf.
There's no lengthy procurement cycle or complex setup to navigate. With Microsoft and Striim jointly coordinating the program, enterprises can begin migrations as soon as they're ready, with confidence that support is in place from start to finish. Simplify your migration and move forward with confidence. Talk to your Microsoft representative or book a call with the Striim team today to take advantage of the Unlimited Database Migration Program and start realizing the value of Azure sooner. Or, if you're attending Microsoft Ignite, visit Striim at booth 6244 to learn more, ask questions, and see how Striim and Microsoft can help accelerate your modernization journey together.
Announcing Public Preview: Auditing for Fabric SQL Database
We're excited to announce the public preview of Auditing for Fabric SQL Database - a powerful feature designed to help organizations strengthen security, ensure compliance, and gain deep operational insights into their data environments.

Why Auditing Matters

Auditing is a cornerstone of data governance. With Fabric SQL Database auditing, you can now easily track and log database activities - answering critical questions like who accessed what data, when, and how. This supports compliance requirements (such as HIPAA and SOX), enables robust threat detection, and provides a foundation for forensic investigations.

Key Highlights

Flexible Configuration: Choose from the default "audit everything" mode, preconfigured scenarios (like permission changes, login attempts, data reads/writes, schema changes), or define custom action groups and predicate filters for advanced needs.

Seamless Access: Audit logs are stored in OneLake, making them easily accessible via T-SQL or OneLake Explorer.

Role-Based Access Control: Configuration and log access are governed by both Fabric workspace roles and SQL-level permissions, ensuring only authorized users can view or manage audit data.

Retention Settings: Customize how long audit logs are retained to meet your organization's policy.

How It Works

Audit logs are written to a secure, read-only folder in OneLake and can be queried using the sys.fn_get_audit_file_v2 T-SQL function. Workspace and artifact IDs are used as identifiers, ensuring logs remain consistent even if databases move across logical servers. Access controls at both the workspace and SQL database level ensure only the right people can configure or view audit logs.

Example Use Cases

Compliance Monitoring: Validate a full audit trail for regulatory requirements.

Security Investigations: Track specific events like permission changes or failed login attempts.

Operational Insights: Focus on specific operations (e.g., DML only) or test retention policies.
Role-Based Access: Verify audit visibility across different user roles.

Getting Started

You can configure auditing directly from the Manage SQL Auditing blade in the Fabric Portal. Choose your preferred scenario, set retention, and (optionally) define custom filters - all through a simple, intuitive interface. Learn more about auditing for Fabric SQL database here. A Data Exposed session with a demo is available here.

General Availability Announcement: Regex Support in SQL Server 2025 & Azure SQL
We're excited to announce the General Availability (GA) of native Regex support in SQL Server 2025 and Azure SQL - a long-awaited capability that brings powerful pattern matching directly into T-SQL. This release marks a significant milestone in modernizing string operations and enabling advanced text processing scenarios natively within the database engine.

What is Regex?

The other day, while building LEGO with my 3-year-old - an activity that's equal parts joy and chaos - I spent minutes digging for one tiny piece and thought, "If only Regex worked on LEGO." That moment of playful frustration turned into a perfect metaphor. Think of your LEGO box as a pile of data - a colorful jumble of tiny pieces. Now imagine trying to find every little brick from a specific LEGO set your kid mixed into the pile. That's tricky - you'd have to sift through each piece one by one. But what if you had a smart filter that instantly found exactly those pieces? That's what Regex (short for Regular Expressions) does for your data. It's a powerful pattern-matching tool that helps you search, extract, and transform text with precision. With Regex now natively supported in SQL Server 2025 and Azure SQL, this capability is built directly into T-SQL - no external languages or workarounds required.

What can Regex help you do?

Regex can help you tackle a wide range of data challenges, including:

Enhancing data quality and accuracy by validating and correcting formats like phone numbers, email addresses, zip codes, and more.

Extracting valuable insights by identifying and grouping specific text patterns such as keywords, hashtags, or mentions.

Transforming and standardizing data by replacing, splitting, or joining text patterns - useful for handling abbreviations, acronyms, or synonyms.

Cleaning and optimizing data by removing unwanted patterns like extra whitespace, punctuation, or duplicates.
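Because these tasks rest on standard regular-expression syntax, you can prototype a pattern in any compatible engine before embedding it in a T-SQL query. A rough sketch in Python's `re` module (close to, but not identical to, the PCRE2 flavor these T-SQL functions support) covering the validation, extraction, and cleanup tasks listed above:

```python
import re

# Validate a format, analogous to REGEXP_LIKE: characters before and
# after the @, then a domain with at least two trailing letters.
email_pattern = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"
print(bool(re.match(email_pattern, "john@contoso.com")))  # a valid address
print(bool(re.match(email_pattern, "eve@@contoso.com")))  # double @ rejected

# Extract grouped patterns such as hashtags, analogous to REGEXP_MATCHES.
hashtags = re.findall(r"#\w+", "Loving #SQLServer2025 and #Regex support")

# Clean unwanted patterns: collapse runs of whitespace, analogous to
# REGEXP_REPLACE with a '\s+' pattern.
cleaned = re.sub(r"\s+", " ", "  too   many    spaces  ").strip()
print(hashtags, cleaned)
```

Once a pattern behaves as expected, the same expression can usually be dropped into the corresponding REGEXP_* function, keeping in mind minor dialect differences between engines.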
Meet the new Regex functions in T-SQL

SQL Server 2025 introduces seven new T-SQL Regex functions, grouped into two categories: scalar functions (return a value per row) and table-valued functions (TVFs, which return a set of rows). Here's a quick overview:

REGEXP_LIKE (scalar): Returns TRUE if the input string matches the Regex pattern.
REGEXP_COUNT (scalar): Counts the number of times a pattern occurs in a string.
REGEXP_INSTR (scalar): Returns the position of a pattern match within a string.
REGEXP_REPLACE (scalar): Replaces substrings that match a pattern with a replacement string.
REGEXP_SUBSTR (scalar): Extracts a substring that matches a pattern.
REGEXP_MATCHES (TVF): Returns a table of all matches, including substrings and their positions.
REGEXP_SPLIT_TO_TABLE (TVF): Splits a string into rows using a Regex delimiter.

These functions follow the POSIX standard and support most of the PCRE/PCRE2 flavor of regular expression syntax, making them compatible with most modern Regex engines and tools. They support common features like:

Character classes (\d, \w, etc.)
Quantifiers (+, *, {n})
Alternation (|)
Capture groups ((...))

You can also use Regex flags to modify behavior:

'i' - Case-insensitive matching
'm' - Multi-line mode (^ and $ match line boundaries)
's' - Dot matches newline
'c' - Case-sensitive matching (default)

Examples: Regex in Action

Let's explore how these functions solve tricky real-world data tasks that were hard to do in earlier SQL versions.

REGEXP_LIKE: Data Validation - Keeping data in shape

Validating formats like email addresses or phone numbers used to require multiple functions or external tools. With REGEXP_LIKE, it's now a concise query. For example, you can check whether an email contains valid characters before and after the @, followed by a domain with at least two letters like .com, .org, or .co.in.
SELECT [Name],
       Email,
       CASE WHEN REGEXP_LIKE(Email, '^[A-Za-z0-9._+]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')
            THEN 'Valid Email'
            ELSE 'Invalid Email'
       END AS IsValidEmail
FROM (VALUES ('John Doe', 'john@contoso.com'),
             ('Alice Smith', 'alice@fabrikam.com'),
             ('Bob Johnson', 'bob@fabrikam.net'),
             ('Charlie Brown', 'charlie@contoso.co.in'),
             ('Eve Jones', 'eve@@contoso.com')) AS e(Name, Email);

We can further use REGEXP_LIKE in CHECK constraints to enforce these rules at the column level (so no invalid format ever gets into the table). For instance:

CREATE TABLE Employees
(
    ...,
    Email VARCHAR (320) CHECK (REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')),
    Phone VARCHAR (20) CHECK (REGEXP_LIKE(Phone, '^(\d{3})-(\d{3})-(\d{4})$'))
);

This level of enforcement significantly enhances data integrity by ensuring that only correctly formatted values are accepted into the database.

REGEXP_COUNT: Count JSON object keys

Count how many top-level keys exist in a JSON string - no JSON parser needed!

SELECT JsonData,
       REGEXP_COUNT(JsonData, '"[^"]+"\s*:', 1, 'i') AS NumKeys
FROM (VALUES ('{"name":"Abhiman","role":"PM","location":"Bengaluru"}'),
             ('{"skills":["SQL","T-SQL","Regex"],"level":"Advanced"}'),
             ('{"project":{"name":"Regex GA","status":"Live"},"team":["Tejas","UC"]}'),
             ('{"empty":{}}'),
             ('{}')) AS t(JsonData);

REGEXP_INSTR: Locate patterns in logs

Find the position of the first error code (ERR-XXXX) in log messages - even when the pattern appears multiple times or in varying locations.

SELECT LogMessage,
       REGEXP_INSTR(LogMessage, 'ERR-\d{4}', 1, 1, 0, 'i') AS ErrorCodePosition
FROM (VALUES ('System initialized. ERR-1001 occurred during startup.'),
             ('Warning: Disk space low. ERR-2048. Retry failed. ERR-2049.'),
             ('No errors found.'),
             ('ERR-0001: Critical failure. ERR-0002: Recovery started.'),
             ('Startup complete. Monitoring active.')) AS t(LogMessage);

REGEXP_REPLACE: Redact sensitive data

Mask SSNs and credit card numbers in logs or exports - all with a single, secure query.

SELECT sensitive_info,
       REGEXP_REPLACE(sensitive_info, '(\d{3}-\d{2}-\d{4}|\d{4}-\d{4}-\d{4}-\d{4})', '***-**-****') AS redacted_info
FROM (VALUES ('John Doe SSN: 123-45-6789'),
             ('Credit Card: 9876-5432-1098-7654'),
             ('SSN: 000-00-0000 and Card: 1111-2222-3333-4444'),
             ('No sensitive info here'),
             ('Multiple SSNs: 111-22-3333, 222-33-4444'),
             ('Card: 1234-5678-9012-3456, SSN: 999-88-7777')) AS t(sensitive_info);

REGEXP_SUBSTR: Extract and count email domains

Extract domains from email addresses and group users by domain.

SELECT REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1) AS Domain,
       COUNT(*) AS NumUsers
FROM (VALUES ('Alice', 'alice@contoso.com'),
             ('Bob', 'bob@fabrikam.co.in'),
             ('Charlie', 'charlie@example.com'),
             ('Diana', 'diana@college.edu'),
             ('Eve', 'eve@contoso.com'),
             ('Frank', 'frank@fabrikam.co.in'),
             ('Grace', 'grace@example.net')) AS e(Name, Email)
WHERE REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')
GROUP BY REGEXP_SUBSTR(Email, '@(.+)$', 1, 1, 'i', 1);

REGEXP_MATCHES: Extract multiple emails from text

Extract all email addresses from free-form text like comments or logs - returning each match as a separate row for easy parsing or analysis.

SELECT *
FROM REGEXP_MATCHES('Contact us at support@example.com or sales@example.com',
                    '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}');

This query identifies and returns both email addresses found in the string - no need for loops, manual parsing, or external scripting.

REGEXP_SPLIT_TO_TABLE: Break down structured text

Split a string into rows using a Regex delimiter - ideal for parsing logs, config entries, or form data.
```sql
SELECT *
FROM REGEXP_SPLIT_TO_TABLE('Name: John Doe; Email: john.doe@example.com; Phone: 123-456-7890', '; ');
```

This query breaks the input string into rows for each field, making it easier to parse and process the data — especially when dealing with inconsistent or custom delimiters.

To explore more examples, syntax options, and usage details, head over to the regular expressions functions documentation: https://learn.microsoft.com/en-us/sql/t-sql/functions/regular-expressions-functions-transact-sql?view=sql-server-ver17

Conclusion

The addition of Regex functionality in SQL Server 2025 and Azure SQL is a major leap forward for developers and DBAs. It eliminates the need for external libraries, CLR integration, or complex workarounds for text processing. With Regex now built into T-SQL, you can:

- Validate and enforce data formats
- Sanitize and transform sensitive data
- Search logs for complex patterns
- Extract and split structured content

And this is just the beginning. Regex opens the door to a whole new level of data quality, text analytics, and developer productivity — all within the database engine. So go ahead and Regex away! Your feedback and partnership continue to drive innovation in Azure SQL and SQL Server — thank you for being part of it.

Lesson Learned #486: Snapshot Isolation Transaction Failed Due to a Concurrent DDL Statement
Today, we worked on a service request in which our customer faced the following error message:

Msg 3961, Level 16, State 1, Line 4
Snapshot isolation transaction failed in database 'DbName' because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation.

Securing Azure SQL Database with Microsoft Entra Password-less Authentication: Migration Guide
The Secure Future Initiative is Microsoft’s strategic framework for embedding security into every layer of the data platform, from infrastructure to identity. As part of this initiative, Microsoft Entra authentication for Azure SQL Database offers a modern, password-less approach to access control that aligns with Zero Trust principles. By leveraging Entra identities, customers benefit from stronger security postures through multifactor authentication, centralized identity governance, and seamless integration with managed identities and service principals. Onboarding Entra authentication enables organizations to reduce reliance on passwords, simplify access management, and improve auditability across hybrid and cloud environments. With broad support across tools and platforms, and growing customer adoption, Entra authentication is a forward-looking investment in secure, scalable data access.

Migration Steps Overview

Organizations utilizing SQL authentication can strengthen database security by migrating to Entra ID-based authentication. The following steps outline the process.

1. Identify your logins and users – review the existing SQL databases, along with all related users and logins, to assess what’s needed for migration.
2. Enable Entra auth on Azure SQL logical servers by assigning a Microsoft Entra admin.
3. Identify all permissions associated with the SQL logins and database users.
4. Recreate SQL logins and users with Microsoft Entra identities.
5. Upgrade application drivers and libraries to minimum versions, and update application connections to SQL databases to use Entra-based managed identities.
6. Update deployments for SQL logical server resources to have Microsoft Entra-only authentication enabled.
7. For all existing Azure SQL databases, flip to Entra-only after validation.
8. Enforce Entra-only for all Azure SQL databases with Azure Policy (deny).
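On the database side, the heart of this workflow (steps 1, 3, and 4 above) boils down to a handful of T-SQL statements, each covered in detail in the steps that follow. A minimal sketch — [AppTeamGroup] is a hypothetical Microsoft Entra group display name, and the granted permissions are illustrative:

```sql
-- Step 1/3 (sketch): list SQL-authenticated users in the current database
SELECT name, principal_id FROM sys.database_principals WHERE type = 'S';

-- Step 4 (sketch): create a database user for the equivalent Entra identity
CREATE USER [AppTeamGroup] FROM EXTERNAL PROVIDER;

-- Re-grant the permissions documented for the old SQL user,
-- e.g. via a fixed database role plus an explicit grant
ALTER ROLE db_datareader ADD MEMBER [AppTeamGroup];
GRANT EXECUTE ON SCHEMA::dbo TO [AppTeamGroup];
```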
Step 1: Identify your logins and users – use SQL Auditing

Consider using SQL Audit to monitor which identities are accessing your databases. Alternatively, you may use other methods, or skip this step if you already have full visibility of all your logins.

Configure server-level SQL Auditing. For more information on turning on server-level auditing: Configure Auditing for Azure SQL Database series - part1 | Microsoft Community Hub

SQL Audit can be enabled on the logical server, which will enable auditing for all existing and new user databases. When you set up auditing, the audit log will be written to your storage account with the SQL Database audit log format. Use sys.fn_get_audit_file_v2 to query the audit logs in SQL. You can join the audit data with sys.server_principals and sys.database_principals to view users and logins connecting to your databases. The following query is an example of how to do this:

```sql
SELECT (CASE WHEN database_principal_id > 0 THEN dp.type_desc ELSE NULL END) AS db_user_type,
       (CASE WHEN server_principal_id > 0 THEN sp.type_desc ELSE NULL END) AS srv_login_type,
       server_principal_name,
       server_principal_sid,
       server_principal_id,
       database_principal_name,
       database_principal_id,
       database_name,
       SUM(CASE WHEN succeeded = 1 THEN 1 ELSE 0 END) AS successful_logins,
       SUM(CASE WHEN succeeded = 0 THEN 1 ELSE 0 END) AS failed_logins
FROM sys.fn_get_audit_file_v2(
        '<Storage_endpoint>/<Container>/<ServerName>',
        DEFAULT, DEFAULT,
        '2023-11-17T08:40:40Z', '2023-11-17T09:10:40Z')
-- join on database principals (users) metadata
LEFT OUTER JOIN sys.database_principals dp
    ON database_principal_id = dp.principal_id
-- join on server principals (logins) metadata
LEFT OUTER JOIN sys.server_principals sp
    ON server_principal_id = sp.principal_id
-- filter to actions DBAF (Database Authentication Failed) and DBAS (Database Authentication Succeeded)
WHERE (action_id = 'DBAF' OR action_id = 'DBAS')
GROUP BY server_principal_name,
         server_principal_sid,
         server_principal_id,
         database_principal_name,
         database_principal_id,
         database_name,
         dp.type_desc,
         sp.type_desc;
```

Step 2: Enable Microsoft Entra authentication (assign admin)

Follow the documentation to enable Entra authentication and assign a Microsoft Entra admin at the server. This is mixed mode; existing SQL auth continues to work.

WARNING: Do NOT enable Entra-only (azureADOnlyAuthentications) yet. That comes in Step 7.

Entra admin recommendation: for production environments, it is advisable to use a PIM-enabled Entra group as the server administrator for enhanced access control.

Step 3: Identify and document existing permissions (SQL logins and users)

Retrieve a list of all your SQL auth logins. Make sure to run this on the master database:

```sql
SELECT * FROM sys.sql_logins;
```

List all SQL auth users by running the query below on every user database. This lists the users per database (you may need only the `name` column to identify the users):

```sql
SELECT * FROM sys.database_principals WHERE TYPE = 'S';
```

List permissions per SQL auth user:

```sql
SELECT database_principals.name,
       database_principals.principal_id,
       database_principals.type_desc,
       database_permissions.permission_name,
       CASE WHEN class = 0 THEN 'DATABASE'
            WHEN class = 3 THEN 'SCHEMA: ' + SCHEMA_NAME(major_id)
            WHEN class = 4 THEN 'Database Principal: ' + USER_NAME(major_id)
            ELSE OBJECT_SCHEMA_NAME(database_permissions.major_id) + '.'
                 + OBJECT_NAME(database_permissions.major_id)
       END AS object_name,
       columns.name AS column_name,
       database_permissions.state_desc AS permission_type
FROM sys.database_principals AS database_principals
INNER JOIN sys.database_permissions AS database_permissions
    ON database_principals.principal_id = database_permissions.grantee_principal_id
LEFT JOIN sys.columns AS columns
    ON database_permissions.major_id = columns.object_id
   AND database_permissions.minor_id = columns.column_id
WHERE type_desc = 'SQL_USER'
ORDER BY database_principals.name;
```

Step 4: Create SQL users for your Microsoft Entra identities

You can create users (preferred) for all Entra identities. Learn more in the Create user documentation.

The "FROM EXTERNAL PROVIDER" clause in T-SQL distinguishes Entra users from SQL authentication users. The most straightforward approach to adding Entra users is to use a managed identity for Azure SQL and grant the required three Graph API permissions. These permissions are necessary for Azure SQL to validate Entra users.

- User.Read.All: allows access to Microsoft Entra user information.
- GroupMember.Read.All: allows access to Microsoft Entra group information.
- Application.Read.All: allows access to Microsoft Entra service principal (application) information.

For creating Entra users with non-unique display names, use OBJECT_ID in the CREATE USER T-SQL:

```sql
-- Retrieve the Object ID from the Entra blade in the Azure portal
CREATE USER [myapp4466e] FROM EXTERNAL PROVIDER
    WITH OBJECT_ID = 'aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb';
```

For more information on finding the Entra Object ID: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Alternatively, if granting these API permissions to SQL is undesirable, you may add Entra users directly using the T-SQL commands provided below. In these scenarios, Azure SQL will bypass Entra user validation.
Create SQL user for managed identity or an application - This T-SQL code snippet establishes a SQL user for an application or managed identity. Substitute the @MSIname and @clientId (note: use the client ID, not the object ID) variables with the display name and client ID of your managed identity or application.

```sql
-- Replace the two variables with the managed identity display name and client ID
declare @MSIname sysname = '<Managed Identity/App Display Name>';
declare @clientId uniqueidentifier = '<Managed Identity/App Client ID>';
-- convert the guid to the right type and create the SQL user
declare @castClientId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @clientId), 1);
-- Construct command: CREATE USER [@MSIname] WITH SID = @castClientId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @MSIname + '] WITH SID = ' + @castClientId + ', TYPE = E;';
EXEC (@cmd);
```

For more information on finding the Entra Client ID: Register a client application in Microsoft Entra ID for the Azure Health Data Services | Microsoft Learn

Create SQL user for Microsoft Entra user - Use this T-SQL to create a SQL user for a Microsoft Entra account. Enter your username and object ID:

```sql
-- Replace the two variables with the MS Entra user alias and object ID
declare @username sysname = '<MS Entra user alias>'; -- (e.g., username@contoso.com)
declare @objectId uniqueidentifier = '<User Object ID>';
-- convert the guid to the right type
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);
-- Construct command: CREATE USER [@username] WITH SID = @castObjectId, TYPE = E;
declare @cmd nvarchar(max) = N'CREATE USER [' + @username + '] WITH SID = ' + @castObjectId + ', TYPE = E;';
EXEC (@cmd);
```

Create SQL user for Microsoft Entra group - This T-SQL snippet creates a SQL user for a Microsoft Entra group. Set @groupName and @objectId to your values.
```sql
-- Replace the two variables with the MS Entra group display name and object ID
declare @groupName sysname = '<MS Entra group display name>'; -- (e.g., ContosoUsersGroup)
declare @objectId uniqueidentifier = '<Group Object ID>';
-- convert the guid to the right type and create the SQL user
declare @castObjectId nvarchar(max) = CONVERT(varchar(max), convert(varbinary(16), @objectId), 1);
-- Construct command: CREATE USER [@groupName] WITH SID = @castObjectId, TYPE = X;
declare @cmd nvarchar(max) = N'CREATE USER [' + @groupName + '] WITH SID = ' + @castObjectId + ', TYPE = X;';
EXEC (@cmd);
```

For more information on finding the Entra Object ID: Find tenant ID, domain name, user object ID - Partner Center | Microsoft Learn

Validate SQL user creation - When a user is created correctly, the EntraID column in this query shows the user's original MS Entra ID:

```sql
SELECT CAST(sid AS uniqueidentifier) AS EntraID, * FROM sys.database_principals;
```

Assign permissions to Entra-based users – After creating Entra users, assign them SQL permissions to read or write, either by using GRANT statements or by adding them to roles such as db_datareader. Refer to your documentation from Step 3, ensuring you include all necessary user permissions for the new Entra SQL users and that security policies remain enforced.

Step 5: Update Programmatic Connections

Change your application connection strings from SQL authentication to managed identities, and test each app for Microsoft Entra compatibility. Upgrade your drivers to these versions or newer:

- JDBC driver version 7.2.0 (Java)
- ODBC driver version 17.3 (C/C++, COBOL, Perl, PHP, Python)
- OLE DB driver version 18.3.0 (COM-based applications)
- Microsoft.Data.SqlClient 5.2.2+ (ADO.NET)
- Microsoft.EntityFramework.SqlServer 6.5.0 (Entity Framework)

System.Data.SqlClient (SDS) doesn't support managed identity; switch to Microsoft.Data.SqlClient (MDS).
If you need to port your applications from SDS to MDS, the following cheat sheet will be helpful: https://github.com/dotnet/SqlClient/blob/main/porting-cheat-sheet.md. Microsoft.Data.SqlClient also takes dependencies on other packages, most notably MSAL for .NET (version 4.56.0+). Here is an example of an Azure web application connecting to Azure SQL using managed identity.

Step 6: Validate No Local Auth Traffic

Be sure to switch all your connections to managed identity before you redeploy your Azure SQL logical servers with Microsoft Entra-only authentication turned on. Repeat the use of SQL Audit, just as you did in Step 1, but now to confirm that every connection has moved away from SQL authentication. Once your server is up and running with only Entra authentication, any connections still based on SQL authentication will not work, which could disrupt services. Test your systems thoroughly to verify that everything operates correctly.

Step 7: Enable Microsoft Entra-only and disable local auth

Once all your connections and applications use managed identity, you can disable SQL authentication by turning on Entra-only authentication via the Azure portal, or by using the APIs.

Step 8: Enforce at scale (Azure Policy)

Additionally, after successful migration and validation, it is recommended to deploy the built-in Azure Policy across your subscriptions to ensure that all SQL resources do not use local authentication. With this policy in place, Azure SQL instances will be required to have Microsoft Entra-only authentication enabled at resource creation time.

Best Practices for Entra-Enabled Azure SQL Applications

- Use exponential backoff with decorrelated jitter for retrying transient SQL errors, and set a max retry cap to avoid resource drain.
- Separate retry logic for connection setup and query execution.
- Cache and proactively refresh Entra tokens before expiration.
- Use Microsoft.Data.SqlClient v3.0+ with Azure.Identity for secure token management.
- Enable connection pooling and use consistent connection strings.
- Set appropriate timeouts to prevent hanging operations.
- Handle token/auth failures with targeted remediation, not blanket retries.
- Apply least-privilege identity principles; avoid global/shared tokens.
- Monitor retry counts, failures, and token refreshes via telemetry.
- Maintain auditing for compliance and security.
- Enforce TLS 1.2+ (Encrypt=True, TrustServerCertificate=False).
- Prefer pooled over static connections.
- Log SQL exception codes for precise error handling.
- Keep libraries and drivers up to date for the latest features and resilience.

References

Use this resource to troubleshoot issues with Entra authentication (previously known as Azure AD authentication): Troubleshooting problems related to Azure AD authentication with Azure SQL DB and DW | Microsoft Community Hub

To add Entra users from an external tenant, invite them as guest users to the Azure SQL Database's Entra administrator tenant. For more information on adding Entra guest users: Quickstart: Add a guest user and send an invitation - Microsoft Entra External ID | Microsoft Learn

Conclusion

Migrating to Microsoft Entra password-less authentication for Azure SQL Database is a strategic investment in security, compliance, and operational efficiency. By following this guide and adopting best practices, organizations can reduce risk, improve resilience, and future-proof their data platform in alignment with Microsoft’s Secure Future Initiative.

Database compatibility level 170 in Azure SQL Database and SQL database in Microsoft Fabric
The alignment of SQL versions to default compatibility levels is as follows:

- 100: SQL Server 2008 and Azure SQL Database
- 110: SQL Server 2012 and Azure SQL Database
- 120: SQL Server 2014 and Azure SQL Database
- 130: SQL Server 2016 and Azure SQL Database
- 140: SQL Server 2017 and Azure SQL Database
- 150: SQL Server 2019 and Azure SQL Database
- 160: SQL Server 2022, Azure SQL Database and SQL database in Microsoft Fabric
- 170: SQL Server 2025 (Preview), Azure SQL Database and SQL database in Microsoft Fabric

For details about which features compatibility level 170 enables, please see what's new under database compatibility level 170. The Intelligent query processing (IQP) family of features also includes multiple features that improve the performance of existing workloads with minimal or no implementation effort.

Once this new database compatibility default goes into effect, if you still wish to use database compatibility level 160 (or lower), please follow the instructions detailed here: View or Change the Compatibility Level of a Database. For example, you may wish to ensure that new databases created on the same logical server use the same compatibility level as other Azure SQL Databases, to ensure consistent query optimization and execution behavior across development, QA, and production versions of your databases. With this example in mind, we recommend that any database configuration scripts in use explicitly designate the COMPATIBILITY_LEVEL rather than rely on the defaults, in order to ensure consistent application behavior.

For new databases supporting new applications, we recommend using the latest compatibility level, 170. For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article Change the Database Compatibility Mode and Use the Query Store.
Note that this article refers to database compatibility level 130 and SQL Server, but the same methodology applies to database compatibility level 170 for SQL Server and Azure SQL Database.

To determine the current database compatibility level, query the compatibility_level column of the sys.databases system catalog view:

```sql
SELECT [name], compatibility_level FROM sys.databases;
```

So, there may be a few questions that we have not directly answered with this announcement, such as:

Q: What do you mean by “database compatibility level 170 is now the default”?
A: If you create a new database and don’t explicitly designate the COMPATIBILITY_LEVEL, database compatibility level 170 will be used.

Q: Does Microsoft automatically update the database compatibility level for existing databases?
A: No. We do not update the database compatibility level for existing databases. This is up to you as the owner of your database to do at your own discretion. With that said, we highly recommend that you plan on moving to the latest database compatibility level in order to leverage the latest improvements it enables.

Q: I created a logical server before 170 was the default database compatibility level. What impact does this have?
A: The master database of your logical server will reflect the database compatibility level that was the default when the logical server was created. Even though the master database is at an older compatibility level, new databases created on this logical server will use database compatibility level 170 if a compatibility level is not explicitly specified. The master database compatibility level cannot be changed without recreating the logical server. Having the master database operate at an older database compatibility level will not impact user database behavior.

Q: Would the database compatibility level change to 170 if I restore a database from a point-in-time backup taken before the default changed?
A: No. We will preserve the compatibility level that was in effect when the backup was performed.
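To make the recommendation above concrete, checking and explicitly pinning a compatibility level are each a single statement. A minimal sketch, where [MyAppDb] is an illustrative database name:

```sql
-- Check the current compatibility level of every database on the logical server
SELECT [name], compatibility_level FROM sys.databases;

-- Pin a database to level 170 explicitly, so query processor behavior stays
-- consistent across dev, QA, and production copies regardless of server defaults
ALTER DATABASE [MyAppDb] SET COMPATIBILITY_LEVEL = 170;
```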