XLTable Connects Excel Directly to Snowflake for Live OLAP Pivot Tables
XLTable connects Excel directly to Snowflake, exposing an OLAP model named myOLAPcube so users can build live PivotTables on a Snowflake trial account without exporting CSVs.
Why connecting Excel to Snowflake with XLTable matters
A compact walkthrough demonstrates how to connect Excel to Snowflake using XLTable so analysts can build live PivotTables against a Snowflake-hosted OLAP model instead of exporting CSVs or wiring in a separate BI tool. The guide centers on a sample Snowflake dataset and an OLAP cube named myOLAPcube that XLTable discovers automatically; the whole flow runs on a Snowflake trial account and an XLTable server, giving analysts an immediate path to ad-hoc analysis against cloud-native data.
What the sample deployment creates in Snowflake
The supplied SQL script builds a realistic, self-contained analytical schema in a database named olap. It creates eight tables in olap.public that model a small retail sales environment: a two-year calendar (Times, 731 rows), four Regions, five Managers (with a many-to-many link to regions), eight Stores assigned to regions, eight product Models, about 3,000 Sales transactions (store, model, date, quantity, amount), roughly 500 Stock snapshot rows, and a single olap_definition row that contains the OLAP cube definition. Facts (Sales, Stock) join to Stores, Models and Times; Stores belong to Regions, and Managers are associated with Regions in a many-to-many relationship.
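The exact DDL ships in the sample script; as an illustrative sketch of the fact-table shape (column names other than qty and "sum", which the cube definition references, are assumptions), the Sales table might look like:

```sql
-- Illustrative sketch only; the real DDL is in the sample script.
-- Column names other than qty and "sum" are assumptions for this example.
CREATE TABLE IF NOT EXISTS olap.public.sales (
    store_id  INTEGER,        -- joins to Stores (which belong to Regions)
    model_id  INTEGER,        -- joins to Models
    sale_date DATE,           -- joins to Times (the 731-row two-year calendar)
    qty       INTEGER,        -- feeds the Sales Quantity measure
    "sum"     NUMBER(10,2)    -- feeds the Sales Amount measure
);
```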
Prerequisites and environment checklist
To reproduce the example you need a Snowflake account (trial or paid), a user with privileges to create a database (SYSADMIN or CREATE DATABASE), and an active virtual warehouse such as COMPUTE_WH. You can apply the sample script either via SnowSQL or by pasting it into Snowflake Worksheets; the walkthrough assumes an XLTable server is installed and running. No custom data is required: the script generates the sample rows described above.
How to deploy the sample schema and confirm it exists
The guide provides two deployment paths: running the provided SQL file from the SnowSQL CLI or executing it in a Snowflake Worksheet. Either approach creates the olap database and the eight example tables; the article shows how to verify the build by querying Snowflake’s information schema for table names and row counts in the PUBLIC schema. If the database does not yet exist, the script can be preceded by a manual CREATE DATABASE IF NOT EXISTS olap and USE DATABASE olap sequence to avoid errors.
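The guide's verification query may differ in detail, but a standard way to confirm the eight tables exist and are populated is a query against Snowflake's information schema:

```sql
-- Run inside the olap database after the script completes.
SELECT table_name, row_count
FROM olap.information_schema.tables
WHERE table_schema = 'PUBLIC'
ORDER BY table_name;
```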
Configuring XLTable to talk to Snowflake
XLTable reads OLAP definitions from a table (olap_definition) inside the database, so cube configuration is stored alongside the data rather than in YAML or a separate GUI. To connect XLTable to Snowflake you edit the XLTable settings file — settings.json — and add a Snowflake connection block: credential fields for user, password, account locator, the warehouse name (for example COMPUTE_WH), and the schema name (olap.public). The sample settings also show a simple local user entry (analyst / password123) and a USER_GROUPS mapping that includes olap_users so the connecting Excel session will see the available cubes.
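A minimal sketch of such a block follows. Only the field list (user, password, account locator, warehouse, schema), the analyst credential, and the USER_GROUPS/olap_users mapping come from the guide; the surrounding JSON key names and nesting are invented for illustration and will differ in the real settings.json:

```json
{
  "connections": {
    "snowflake": {
      "user": "your_user",
      "password": "your_password",
      "account": "xy12345.eu-west-1",
      "warehouse": "COMPUTE_WH",
      "schema": "olap.public"
    }
  },
  "users": { "analyst": "password123" },
  "USER_GROUPS": { "analyst": ["olap_users"] }
}
```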
Restarting XLTable to apply configuration
After editing settings.json you restart the XLTable service so it will pick up the Snowflake connection and discover cubes from olap_definition; the guide shows a typical supervisorctl restart command to refresh the olap process. Once restarted, XLTable will read the olap_definition row and register an OLAP cube that Excel can consume.
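Assuming the XLTable process is supervised under the name olap (as in the guide's example), the restart looks like:

```shell
# Restart XLTable so settings.json and olap_definition are re-read.
sudo supervisorctl restart olap
```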
Connecting Excel and working with myOLAPcube
On the Excel side, the connection is made using Excel’s Data → Get Data → From Database → From Analysis Services path. The server URL is the XLTable server address (the guide uses an example of http://your_server_ip) and the example analyst credential (analyst / password123) is provided to log in. Once connected, myOLAPcube appears as a selectable source and you can drag measures and dimensions into a PivotTable to explore sales, inventory and time-based hierarchies without exporting data.
What myOLAPcube exposes to Excel
The cube that XLTable exposes in the example ships a concise set of measures and dimensions designed for retail analysis. Measures include Sales Quantity (sum of sales.qty), Sales Amount (sum of sales.sum), year‑over‑year versions of both metrics implemented by a date transformation, Average Stock Quantity (average of stock.qty per store and model), and a calculated Turnover metric (Sales Quantity divided by Average Stock Quantity). Dimensions include Store ID/Store, Region (North, South, East, West), Manager (many‑to‑many with Region), Model (Alpha through Theta), and a date hierarchy (Year / Quarter / Month / Day) that supports drill-down in PivotTables.
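To make the Turnover arithmetic concrete, here is a small Python sketch over invented sample rows. In the real deployment these measures are computed by Snowflake at query time, not client-side; the data values and helper function here are purely illustrative:

```python
from statistics import mean

# Invented rows standing in for olap.public.sales and olap.public.stock:
# (store, model, qty)
sales = [("S1", "Alpha", 10), ("S1", "Alpha", 6), ("S2", "Beta", 8)]
stock = [("S1", "Alpha", 4), ("S1", "Alpha", 4), ("S2", "Beta", 4)]

def turnover(store, model):
    """Sales Quantity divided by Average Stock Quantity, per the cube's Turnover definition."""
    sold = sum(q for s, m, q in sales if (s, m) == (store, model))
    avg_stock = mean(q for s, m, q in stock if (s, m) == (store, model))
    return sold / avg_stock

print(turnover("S1", "Alpha"))  # 16 units sold / 4.0 average stock = 4.0
print(turnover("S2", "Beta"))   # 8 units sold / 4.0 average stock = 2.0
```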
How the cube definition is stored and transformed at query time
Rather than a separate model file, the OLAP definition lives as a SQL script in the olap_definition table with XLTable-specific annotations that map SELECT expressions to measures and dimensions. XLTable parses those annotations at runtime and generates the XMLA/Analysis Services interface Excel consumes. Year‑over‑year comparisons are implemented using a Jinja transformation: XLTable rewrites the date expression at query time (using a DATEADD to shift dates by one year and formatting them), producing the "last year" measures without additional tables or materialized views.
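A hedged sketch of what the rewritten "last year" expression might look like after XLTable's Jinja transformation runs. The guide specifies only the DATEADD one-year shift and the reformatting; the query shape and the sale_date column name are assumptions:

```sql
-- Hypothetical output of the query-time rewrite for a "last year" measure:
-- last year's rows are shifted forward one year so they align with the
-- current year's date labels, then re-formatted as date strings.
SELECT
    TO_CHAR(DATEADD(year, 1, s.sale_date), 'YYYY-MM-DD') AS dt,
    SUM(s.qty) AS sales_qty_last_year
FROM olap.public.sales AS s
GROUP BY 1;
```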
Customizing the sample dataset
The script is intentionally easy to adapt. To extend the date range you can increase the Times table row count to include 2025 and update the cube’s filter to include ‘2025’ in the year set. To add stores or models, expand the VALUES lists used by Stores and Models and adjust the modulo calculations in the Sales and Stock inserts so the generated rows reference the new totals. If you prefer to place the sample in a different database or schema, replace occurrences of olap.public with your chosen database.schema and update the schema value in settings.json so XLTable points to the right location.
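The modulo pattern behind the generated rows can be sketched in Python. This is a hypothetical reconstruction of the idea, not the script's actual SQL: each synthetic row picks a store and model by taking the row index modulo the table sizes, which is why adding stores or models means updating those divisors:

```python
# Bump these after extending the VALUES lists for Stores and Models.
N_STORES, N_MODELS = 8, 8

def assign(row_index):
    """Hypothetical sketch: map a generated row to a store and model via modulo."""
    store_id = row_index % N_STORES + 1
    model_id = row_index % N_MODELS + 1
    return store_id, model_id

print(assign(0))  # (1, 1)
print(assign(9))  # (2, 2)
```

If N_STORES were raised to 10 without touching the inserts, rows would still only ever reference stores 1 through 8, which is why the guide says to adjust the modulo calculations alongside the VALUES lists.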
Troubleshooting the most common issues
The guide documents several practical troubleshooting steps. If you encounter a Database ‘OLAP’ does not exist error, run the CREATE DATABASE IF NOT EXISTS olap; and USE DATABASE olap; statements before the rest of the script. Insufficient privileges are resolved by switching to a role that can create objects, for example USE ROLE SYSADMIN;. If a virtual warehouse is suspended, resume it (for example ALTER WAREHOUSE COMPUTE_WH RESUME;). If Excel shows no cubes, verify that a row exists in olap.public.olap_definition and confirm the connecting user’s group membership in USER_GROUPS includes olap_users. Finally, the account entry in settings.json must use Snowflake’s account locator format (for example xy12345.eu-west-1), which you can find in the Snowflake Admin UI.
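Gathered together, the remediation statements from the guide look like this (run them before or alongside the sample script as needed):

```sql
USE ROLE SYSADMIN;                   -- fixes insufficient-privilege errors
CREATE DATABASE IF NOT EXISTS olap;  -- fixes "Database 'OLAP' does not exist"
USE DATABASE olap;
ALTER WAREHOUSE COMPUTE_WH RESUME;   -- resumes a suspended warehouse

-- If Excel shows no cubes, confirm a definition row actually exists:
SELECT COUNT(*) FROM olap.public.olap_definition;
```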
Who benefits from this approach and how teams might use it
This pattern is aimed at analytics teams and business users who rely on Excel PivotTables for exploration but want to work directly against cloud data in Snowflake. Data engineers can use the SQL script to provision a sandbox dataset for analysts, while BI teams can expose curated analytical models as OLAP cubes without adding a separate modeling layer. Developers and platform teams responsible for internal tooling will find the model-as-data approach—storing cube definitions in the database—simplifies deployment and versioning because the same database holds both the data and the metadata XLTable consumes.
How this fits into broader analytics workflows and ecosystems
XLTable’s example illustrates a lightweight route from cloud data warehousing to desktop analysis: Snowflake provides scalable storage and compute for the modeled facts and dimensions, while XLTable supplies an XMLA-compatible OLAP surface that Excel can query natively. This reduces friction compared with exporting flat files or building dedicated BI extracts and integrates with common ecosystems such as ETL pipelines that populate Snowflake, developer tools that manage SQL scripts, and automation platforms that orchestrate refresh cycles. For organizations already using CRM, marketing, or ERP systems that load into Snowflake, the same approach can expose curated OLAP cubes to business users without additional middleware.
Developer and operational considerations
Because cube definitions are SQL with embedded annotations and optional Jinja transformations, developers can version-control the SQL scripts and adapt them alongside the rest of the data platform codebase. Operationally, the example relies on a running XLTable server and an available Snowflake warehouse; monitoring and access control remain standard platform responsibilities. The example’s USER_GROUPS mapping demonstrates how access can be scoped so only authorized Excel users see specific cubes, enabling a simple permissioning pattern without external configuration tooling.
Practical reader questions addressed
What does XLTable do here? It reads an OLAP definition stored in the database and presents it to Excel as an Analysis Services‑compatible cube, exposing measures and dimensions without intermediate exports. How does it work? XLTable parses annotated SQL in olap_definition, rewrites queries where needed (for example for year‑over‑year metrics using Jinja), and serves XMLA endpoints Excel can query. Why does this matter? It enables live PivotTable analysis directly against Snowflake data, keeping the authoritative data in the warehouse and removing CSV-based workflows. Who can use it? Analysts comfortable with Excel PivotTables, data engineers provisioning Snowflake schemas, and platform teams that run XLTable servers. When is it available? The sample runs on a Snowflake trial and XLTable offers a trial option in the documentation; the SQL and configuration examples let teams deploy the sample immediately once they have a Snowflake account and an XLTable instance.
Common maintenance and extension paths
Teams can expand the sample into a production model by replacing generated sample data with production feeds, adding additional measures or dimensions, or exposing multiple cubes for different business domains. The olap_definition pattern supports calculated measures and SQL-driven hierarchies, so teams can iterate on business logic inside the database. Because the cube definitions are database-resident SQL, they lend themselves to CI/CD: test the SQL in a development Snowflake instance, promote the olap_definition row to staging, and then to production, while the XLTable service continues to read the active definition.
Where to find more resources and next steps
The sample script and extended documentation are provided alongside the XLTable project documentation. The guide recommends downloading the snowflake_sample.sql and following the step sequence—deploy the SQL, configure settings.json to point XLTable at Snowflake, restart the service, and connect Excel—to see a working myOLAPcube appear in Excel. The documentation also lists the sample’s customization points and a short set of troubleshooting queries you can run inside Snowflake to validate deployment.
The example shows a minimal but practical pattern for surfacing Snowflake data to the spreadsheet-centric workflows still common across enterprises. By storing cube metadata in the same database as the facts and using a lightweight service to translate that metadata into an XMLA interface, teams can reduce integration complexity and preserve a single source of truth in Snowflake while keeping analysts in a familiar Excel environment.
Looking ahead, as more organizations centralize data in cloud warehouses and seek low-friction access for business users, approaches that convert SQL-defined models into analysis-ready surfaces for existing tools will grow in relevance. The myOLAPcube example is one concrete expression of that trend: database-resident definitions, Jinja-driven query transformations, and a small translation layer deliver live, drillable PivotTables against cloud data without duplicating datasets or adding heavyweight modeling platforms.