FabCon/SQLCon 2026: What You Need to Know
This year, Microsoft did something quietly significant by merging the SQLCon and FabCon conferences. If you're not deep in the data world, that might sound like a minor logistics decision, but for a long time the people who run databases and the people who build analytics systems have operated in separate ecosystems with different tools, different conferences, and different vendors. Bringing them into the same room under the same brand is Microsoft saying that separation is over: your operational databases and your analytics platform now live in one place.
There were a lot of announcements at FabCon/SQLCon this week. Here are my big takeaways.

The OneLake Foundation Is Getting More Useful, Fast
One of the persistent frustrations with data platforms is that they're only as good as the data you can get into them. Every source that requires a custom pipeline, a nightly export, or a third-party connector is a source that's always a little stale, a little unreliable, and maintained by someone who will eventually leave.
This week, Microsoft announced some meaningful progress on that problem.

Oracle databases and SAP Datasphere systems are often the operational backbone of mid-to-large enterprises. Getting that data into Fabric continuously, without custom engineering, is the kind of thing that would have taken months and a consulting engagement two years ago. It's now available as a database mirroring configuration in Fabric, giving you continuous, near-real-time data replication into OneLake.
SharePoint list mirroring and a new Excel shortcut capability are now in preview. Think about how much business data lives in SharePoint lists and Excel files: store inventories, project trackers, regional headcounts, budget drafts. Most of it has never been analyzed alongside anything else, because there was no practical way to connect it without someone building a pipeline. Now it flows into Fabric and stays up to date automatically.
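
To make that concrete, here's a minimal sketch of what mirroring buys you once the data lands. All the names below (the mirrored tables, the store_id join key, the columns) are hypothetical; the point is that mirrored sources arrive in OneLake as ordinary Delta tables you can join with anything else, no pipeline required.

```python
from pyspark.sql import SparkSession

# In a Fabric notebook a `spark` session already exists; getOrCreate()
# just makes this sketch self-contained elsewhere.
spark = SparkSession.builder.getOrCreate()

# Hypothetical mirrored sources: an Oracle orders table and a
# SharePoint inventory list, both landed in OneLake as Delta tables.
orders = spark.read.table("oracle_mirror.sales_orders")
inventory = spark.read.table("sharepoint_mirror.store_inventory")

# Join them like any two tables -- the mirroring keeps both fresh.
at_risk = (
    orders.join(inventory, "store_id")
          .where("units_on_hand < reorder_point")
          .select("store_id", "order_id", "units_on_hand")
)
at_risk.show()
```
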
Azure Databricks can now read directly from OneLake through its Unity Catalog, and Snowflake interoperability reached general availability with bidirectional Iceberg data access. For organizations running multiple platforms, which describes most of them, you no longer have to pick a winner or maintain a separate copy of data in each place.
And OneLake security is moving to general availability. Define access rules once and they apply across every Fabric workload: warehouses, reports, Excel files, AI agents. This has been the most-requested enterprise feature since Fabric launched, and it means governance teams no longer have to maintain separate permission structures for every tool in the stack.
What It Actually Means to Give AI a Real Understanding of Your Business
The phrase "AI-ready data" gets thrown around constantly, and it usually means your data is clean and accessible. That's necessary, but it's not sufficient. The more interesting question is whether your AI tools understand what your data actually means.
Microsoft is addressing this through Fabric IQ, and it added significant depth to it this week.

Graph in Fabric and Fabric Ontology are moving to general availability, both built on a graph database foundation. An ontology is a structured map of how your business actually works: the relationships between entities, the rules that govern them, the consequences of certain conditions.
The example that tends to land with business leaders is this: instead of an AI agent knowing that "inventory is at 12 units," it knows that 12 units means the spring campaign is at risk, replenishment lead time is 14 days, and here are the three suppliers ranked by reliability. That's a different kind of useful. Ontologies are also now accessible via a Model Context Protocol server, which means AI agents built on Microsoft Foundry, Copilot Studio, or third-party tools can query your business knowledge the same way they'd call any other API.
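
Because MCP is an open protocol, "accessible via a Model Context Protocol server" translates into very ordinary client code. Here's a minimal sketch using the official MCP Python SDK; the endpoint URL and the query_ontology tool name are my assumptions for illustration, since the real tool schema is whatever the Fabric server publishes.

```python
import asyncio

# pip install mcp  (the official Model Context Protocol Python SDK)
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Hypothetical endpoint -- the real URL comes from your Fabric tenant.
ONTOLOGY_MCP_URL = "https://example.fabric.microsoft.com/ontology/mcp"

async def main() -> None:
    async with streamablehttp_client(ONTOLOGY_MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the ontology server actually exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Tool name and arguments are illustrative, not a real schema.
            result = await session.call_tool(
                "query_ontology",
                {"question": "Which campaigns are at risk from low inventory?"},
            )
            print(result.content)

asyncio.run(main())
```

The same session code works whether the caller is a Foundry agent, Copilot Studio, or a third-party tool, which is the whole point of putting the ontology behind a standard protocol.
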
The other Fabric IQ announcement worth sitting with is Planning in Fabric IQ, co-engineered with Lumel. Enterprise planning has historically lived in specialized tools like Anaplan or Adaptive, disconnected from the data platform. You pull actuals out, load them into the planning tool, build your model, then ship the output back to wherever decisions actually get made. The seams between those steps are where accuracy goes to die.

Planning in Fabric IQ puts budgeting and forecasting directly on top of your governed semantic models without data duplication or export cycles. Business users can build and run planning models without IT involvement, with a stated deployment time of minutes instead of months. The plan and the actuals live in the same place, using the same security model. The stated ambition is to make Fabric the only semantic layer covering past, present, and future: what happened, what's happening, and what should happen. I'd put that in the "watch this space" category. The concept is right, and if the execution is there, it closes a gap that most planning tools never have.
Fabric Data Agents are now generally available. These are AI analysts grounded in your specific business data: they know your warehouse, your streaming data, your semantic models, and your mirrored systems. They integrate with Microsoft 365 Copilot, Copilot Studio, and Microsoft Foundry, and for the first time they support Git integration and deployment pipelines, which means they can be managed as code rather than as one-off configurations that live only in someone's workspace.

Your Entire Database Estate, Managed in One Place
The Database Hub in Fabric is a single management console for an organization's entire database estate: Azure SQL, Cosmos DB, PostgreSQL, MySQL, SQL Server running on-premises via Azure Arc, and Fabric's own native databases. It's not just a list of connections. It surfaces AI-powered monitoring and proactive alerts around blocking queries, memory pressure, and unusual access patterns. The agents can recommend actions and, with human approval, execute them.

For most organizations managing a large number of database instances, the current reality is reactive. Something breaks, someone gets paged, someone investigates. The Database Hub is designed to shift that posture so problems surface before they become incidents. For teams where database administration is stretched thin across a large estate, that's a meaningful operational change and not just a new dashboard.
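
For contrast, here's the reactive version most teams would recognize: hand-polling one server's DMVs for blocking chains. This is a sketch, not the Hub's mechanism; the connection details are placeholders and it assumes pyodbc plus the Microsoft ODBC driver. The Hub's pitch is that this kind of per-server spot check becomes continuous, estate-wide monitoring with recommended actions attached.

```python
import pyodbc

# The manual, reactive version of what the Database Hub automates:
# checking one server at a time for blocked sessions.
BLOCKING_QUERY = """
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
"""

# Placeholder connection details.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=yourdb;"
    "Authentication=ActiveDirectoryInteractive;"
)

for row in conn.execute(BLOCKING_QUERY):
    print(
        f"session {row.session_id} blocked by "
        f"{row.blocking_session_id} ({row.wait_type}, {row.wait_time} ms)"
    )
```
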
The Migration Window Is Open, and the Tools Are Ready
Many organizations have significant investments in Azure Data Factory pipelines and Azure Synapse Analytics workloads that predate Fabric. For the last two years, there's been a legitimate question of whether to migrate and how painful it would be.
Microsoft released automated migration assistants for both this week, in public preview. The ADF migration assistant handles pipeline conversion and linked service transformation, and leaves triggers disabled by default so your team can validate before anything goes live. The Synapse Spark migration assistant moves notebooks, pools, and job definitions, maps lake databases to OneLake catalog shortcuts, and keeps data in place during the process with no forced cutover. Existing systems run in parallel while you transition.
In short: this is now the time to move. The tooling exists, the platform is mature enough, and the capabilities you'd be moving toward (particularly on the AI and governance side) aren't available on the old stack. The longer you stay on ADF or Synapse, the further you are from the system where everything else is being built.

A Few Things Worth Noting for Developers and Architects
Two developer tooling announcements I'd flag for technical audiences:
- MCP servers for Fabric launched in two forms: a local open-source version that connects GitHub Copilot (and Claude) to Fabric, and a cloud-hosted remote version that lets agents perform real operations in your Fabric environment, authenticated through Entra ID. For teams building AI workflows, this means your development tools can interact with Fabric the way they interact with any other well-designed API, with proper authentication and a full audit trail.
- Runtime 2.0 brings Apache Spark 4.0, Python 3.12, and Delta Lake 4.0. The more interesting capability for most teams is Resource Profiles, which automatically recommends compute configurations based on workload characteristics. That removes a significant amount of Spark tuning that has historically required specialist knowledge and been done wrong more often than not. A quick post-upgrade sanity check is sketched below.
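
If you're planning the upgrade, a notebook check like this is cheap insurance before moving workloads onto the new runtime. I've kept it to version checks rather than guessing at the Resource Profiles API, which is too new to sketch confidently.

```python
import sys
from pyspark.sql import SparkSession

# Run in a notebook attached to a Runtime 2.0 environment.
spark = SparkSession.builder.getOrCreate()

print("Spark:", spark.version)            # expect 4.x on Runtime 2.0
print("Python:", sys.version.split()[0])  # expect 3.12.x
```
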

On the ecosystem front, Microsoft announced a partnership with NVIDIA that integrates Fabric's Real-Time Intelligence and Fabric IQ with NVIDIA Omniverse libraries for physical AI applications. Digital twins and predictive maintenance are the stated use cases, so if your organization operates in manufacturing, logistics, or infrastructure, that's worth a closer look.
ISVs are also building natively on Fabric at a meaningful pace now: industrial data from SCADA and historian systems, financial market data aggregating 200-plus providers, master data management, data quality testing. Fabric now powers over 23 billion monthly orchestration runs. The platform is building an ecosystem and not just a feature list.
The Takeaway
Last year's FabCon was about whether Fabric was real, while this year's is about what it can do at enterprise scale. The merger with SQLCon signals something that matters more than any individual feature announcement: the separation between where your data lives and where you make sense of it is now gone.
The organizations that will get the most out of AI over the next two years won't necessarily be the ones with the most data. They'll be the ones where that data is connected, consistently governed, and grounded in a shared understanding of what it means. That's what this week's announcements are building toward.
The question worth sitting with: how much of your organization's decision-making is still running on data that nobody has actually connected yet?
