Get Data in Real-Time Analytics: A New and Improved Experience
We’re thrilled to unveil a brand new Get Data experience that streamlines the way you bring data into your KQL database! Designed with simplicity and efficiency in mind, it takes the friction out of data ingestion. Whether you’re a seasoned data professional or just starting your data exploration journey, the new experience is crafted to empower you every step of the way.
Simplify your data flow
We understand the importance of efficiency in data exploration, and that’s why we reimagined the process to minimize the number of clicks required. You can bring in data from all your familiar sources, including local files, Azure storage, Amazon S3, Event Hubs, Eventstream, and OneLake.
Guided Wizard
The Get Data experience is powered by a new Guided Wizard. This step-by-step assistant takes you through the entire data ingestion process, ensuring that you’re never left in the dark. Whether you’re extracting data from various sources, transforming it to suit your needs, or loading it into your KQL database, the Guided Wizard is your knowledgeable co-pilot, offering insights and guidance at every turn.
Available data sources
Currently, Real-Time Analytics supports six different data sources from which you can ingest data into your KQL database.
Eventstream
Eventstream in Microsoft Fabric is a centralized place to capture, transform, and route real-time events to various destinations with a no-code experience. In the Get Data experience, you can either select an existing eventstream from a workspace or create a new one, then ingest its data into your KQL database with just a few clicks. Eventstream is the preferred option for streaming data into your KQL database.
OneLake
OneLake is a single, unified, logical data lake for your whole organization. It’s the OneDrive for data. OneLake comes automatically with every Microsoft Fabric tenant and is designed to be the single place for all your analytics data. You can seamlessly ingest data from OneLake into your KQL database with just a few clicks. All you need is your data file path.
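If you’re curious what that path looks like, here’s a minimal sketch of ingesting a OneLake file by its path. The workspace, lakehouse, table, and file names are placeholders, and the sketch assumes the ingest-from-storage command accepts a OneLake file path with impersonation; in practice, the Get Data experience builds this for you.

```kusto
// Placeholder OneLake path: workspace, lakehouse, and file names are illustrative.
// Assumes ingest-from-storage accepts OneLake paths with ';impersonate'.
.ingest into table MyLogs (
    'https://onelake.dfs.fabric.microsoft.com/MyWorkspace/MyLakehouse.Lakehouse/Files/logs.csv;impersonate'
) with (format='csv')
```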
Local File
This is the simplest way to ingest data into a KQL database. You can upload a file from your local machine directly in the Get Data experience. This is useful when you have a small amount of data that you want to analyze quickly.
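For very small samples, a rough command-level analog of this upload is inline ingestion. The table name, schema, and rows below are purely illustrative, and the wizard normally handles table creation and schema inference for you.

```kusto
// Run each control command separately.
// Hypothetical destination table; the schema is illustrative.
.create table MyLogs (Timestamp: datetime, Level: string, Message: string)

// Inline ingestion pushes a few literal CSV rows straight into the table,
// a rough analog of uploading a tiny local file through the wizard.
.ingest inline into table MyLogs <|
2024-05-01T08:00:00Z,Info,Service started
2024-05-01T08:00:05Z,Warning,Cache miss
```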
Azure storage
If your data is stored in Azure Blob Storage or Azure Data Lake Storage Gen2, you can use the built-in support for Azure storage to ingest your data.
You can get data by pasting either of the following two URI types (a command-level equivalent is sketched after the list):
- Blob container: Ingest up to 5,000 blobs from a single container. This is the preferred option when you’re ingesting a large amount of data or performing a historical backfill.
- Individual blob: Add up to 10 individual blobs, with a maximum of 1 GB of uncompressed data per blob.
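For reference, what the wizard configures is roughly equivalent to an ingest-from-storage command like the sketch below. The table name, storage account, container, file, and SAS token are all placeholders, and the right ingestion properties depend on your data.

```kusto
// Ingest a single blob by URI. The storage account, container, file name,
// and SAS token are placeholders; substitute your own values.
.ingest into table MyLogs (
    'https://mystorageaccount.blob.core.windows.net/mycontainer/logs.csv?<SAS-token>'
) with (format='csv', ignoreFirstRecord=true)
```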
Amazon S3
If your data is stored in Amazon S3, you can use the built-in S3 support to ingest it into your KQL database. This is useful when you have data stored in both Azure and Amazon Web Services, and you want to centralize your data in your KQL database for analysis. All you need is the URI of an S3 bucket or individual objects.
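As a sketch, the same ingest-from-storage command can point at an S3 object directly. The bucket, object, region, and credentials below are placeholders; a presigned S3 URL can be used instead of embedding credentials.

```kusto
// S3 object URI with inline credentials; the access key and secret are placeholders.
.ingest into table MyLogs (
    'https://mybucket.s3.us-east-1.amazonaws.com/logs.csv;AwsCredentials=<ACCESS_KEY_ID>,<SECRET_ACCESS_KEY>'
) with (format='csv')
```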
Event Hubs
If you’re using Azure Event Hubs to stream your data, you can seamlessly create a connection to your Azure resource from the wizard. This is useful when you want to analyze data in real time, such as streaming telemetry from IoT devices or log data from applications. By default, the system starts ingesting newly arriving data, but you can optionally backfill by specifying an Event retrieval start date.
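Once the connection is running, a quick way to confirm that events are flowing is to check recent ingestion times on the destination table. MyTelemetry below is a hypothetical table name, and the query assumes the table’s ingestion-time policy is enabled (the default).

```kusto
// Count events ingested in the last 15 minutes (MyTelemetry is hypothetical).
MyTelemetry
| where ingestion_time() > ago(15m)
| summarize EventsIngested = count(), LatestArrival = max(ingestion_time())
```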
We hope you find the new Get Data experience more efficient. We look forward to your feedback and suggestions. To get started, see Get data from Eventstreams.