Streaming Data into BigQuery with Pub/Sub and Dataflow

In today’s data-driven world, organizations often deal with large volumes of real-time data that need to be processed and analyzed swiftly. Google Cloud Platform (GCP) offers a powerful combination of tools to handle this task seamlessly.

In this post, we'll explore how to use Pub/Sub, Dataflow, and BigQuery to ingest and process streaming data for immediate insights and analysis.

Pub/Sub: The Entry Point

Google Cloud Pub/Sub is a fully-managed, real-time messaging service that allows you to send and receive messages between independent applications. It acts as the entry point for streaming data into your GCP ecosystem. To start the data pipeline, you would set up a Pub/Sub topic to which data producers send messages. These messages could be anything from user interactions on a website to sensor readings from IoT devices.

Suppose you’re operating a real-time analytics platform for a mobile app, and you want to capture user interactions as they occur. You create a Pub/Sub topic, e.g., user_interactions_topic, to which your app sends messages whenever a user performs an action, such as liking a post or making a purchase. These messages contain information about the event, such as user ID, timestamp, and the action taken.
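
In practice, your app backend would publish these events with the Pub/Sub client library. Below is a minimal sketch, assuming the project ID my-project and a comma-separated message format of user ID, timestamp, and action (the same format the Dataflow example later expects):

from datetime import datetime, timezone

from google.cloud import pubsub_v1

# Assumed project and topic names; replace them with your own
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('my-project', 'user_interactions_topic')

# Encode one user interaction as a comma-separated string: user_id,timestamp,action
message = f"user_123,{datetime.now(timezone.utc).isoformat()},like".encode('utf-8')

# Publish the message; result() blocks until Pub/Sub acknowledges it and returns the message ID
future = publisher.publish(topic_path, message)
print(f"Published message ID: {future.result()}")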

Dataflow: The ETL Engine

Once the data is ingested into Pub/Sub, it needs to be processed, transformed, and enriched before landing in BigQuery. Google Cloud Dataflow is the ideal tool for this purpose. It's a fully managed stream and batch data processing service that lets you run Extract, Transform, Load (ETL) jobs seamlessly.

You set up a Dataflow pipeline to process the messages from user_interactions_topic. Here's a simplified example:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions


class ProcessUserInteractions(beam.DoFn):
    def process(self, element):
        # Pub/Sub delivers raw bytes: decode and parse the incoming message
        user_id, timestamp, action = element.decode('utf-8').split(',')

        # Apply further transformations or calculations here.
        # For instance, you could compute the average time between actions for each user.

        # Emit a dict whose keys match the BigQuery table's columns
        yield {'user_id': user_id, 'timestamp': timestamp, 'action': action}


# Create a streaming Dataflow pipeline
options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as pipeline:
    processed_data = (
        pipeline
        | beam.io.ReadFromPubSub(topic='projects/my-project/topics/user_interactions_topic')
        | beam.ParDo(ProcessUserInteractions())
    )

    # Write the processed data to an existing BigQuery table
    processed_data | beam.io.WriteToBigQuery(
        'my-project:my_dataset.my_table',
        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
    )
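
By default, the pipeline above runs with the local DirectRunner. To execute it as a managed streaming job on the Dataflow service, construct the pipeline options with Dataflow-specific settings instead. A minimal sketch, assuming hypothetical project, region, and bucket names:

from apache_beam.options.pipeline_options import PipelineOptions

# Assumed project, region, and staging bucket; replace these with your own values
options = PipelineOptions(
    runner='DataflowRunner',              # submit the job to the Dataflow service
    project='my-project',
    region='us-central1',
    temp_location='gs://my-bucket/temp',  # GCS path Dataflow uses for temporary files
    streaming=True,                       # required for unbounded sources such as Pub/Sub
)

Passing these options to beam.Pipeline(options=options) submits the job to Dataflow, which provisions and autoscales workers for you.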

BigQuery: The Data Warehouse

Once the data is in BigQuery, you can run SQL queries to analyze it in real time.

-- Query to find the total number of likes by user in the last hour
SELECT
  user_id,
  COUNT(*) AS total_likes
FROM
  `my-project.my_dataset.my_table`
WHERE
  action = 'like'
  AND TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), timestamp, SECOND) <= 3600
GROUP BY
  user_id
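
You can also run the same query programmatically, for example from a dashboard or reporting service, using the BigQuery client library. A minimal sketch, assuming the same project and table names as above:

from google.cloud import bigquery

client = bigquery.Client(project='my-project')

# Same query as above: total likes per user over the last hour
query = """
    SELECT user_id, COUNT(*) AS total_likes
    FROM `my-project.my_dataset.my_table`
    WHERE action = 'like'
      AND TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), timestamp, SECOND) <= 3600
    GROUP BY user_id
"""

# client.query() starts the job; iterating over the result waits for completion
for row in client.query(query):
    print(f"{row.user_id}: {row.total_likes} likes")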

These examples show how a Pub/Sub, Dataflow, and BigQuery pipeline can capture, process, and analyze streaming data in a real-world scenario. The actual implementation will vary based on your specific use case and requirements.

