
Tutorial #2: Finish your first PxL Script

Overview

In Tutorial #1, we wrote a simple script that queries the conn_stats table of data provided by Pixie's platform:

# Import Pixie's module for querying data
import px

# Load the last 30 seconds of Pixie's `conn_stats` table into a DataFrame.
df = px.DataFrame(table='conn_stats', start_time='-30s')

# Display the DataFrame with table formatting
px.display(df)

This tutorial will modify that script to produce a table that summarizes the total amount of traffic coming in and out of each of the pods in your cluster.

Adding Context

The ctx function provides extra Kubernetes metadata context based on the existing information in your DataFrame.

Because the conn_stats table contains the upid (an opaque numeric ID that globally identifies a process running inside the cluster), PxL can infer the namespace, service, pod, container and cmd (command) that initiated the connection.
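
For example, the same ctx lookup can surface the other metadata fields listed above. A minimal sketch (the exact key names 'namespace' and 'container' are assumptions based on the metadata list above):

# Hypothetical sketch: pull additional Kubernetes metadata via ctx.
# Key names follow the metadata listed above and are assumptions.
df.namespace = df.ctx['namespace']
df.container = df.ctx['container']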

We'll add columns for pod and service to our script.

# Import Pixie's module for querying data
import px

# Load the last 30 seconds of Pixie's `conn_stats` table into a DataFrame.
df = px.DataFrame(table='conn_stats', start_time='-30s')

# Each record contains contextual information that can be accessed by reading ctx.
df.pod = df.ctx['pod']
df.service = df.ctx['service']

# Display the DataFrame with table formatting
px.display(df)
  1. Save and run your script using Pixie's Live CLI:
px live -f my_first_script.pxl

Use your arrow keys to scroll to the far right of the table and you should see new columns labeled pod and service, representing the Kubernetes entities that initiated the traced connections. Note that some of the connections in the table are missing context (a pod or service). This occasionally occurs due to a gap in metadata or a short-lived upid.

Script output in the Live CLI after adding pod and service metadata columns.

Grouping and Aggregating Data

Let's group the connection data by unique pairs of values in the pod and service columns, and compute aggregating expressions on each group of data.

Note that PxL does not currently support standalone groupings: you must always follow a groupby() call with a call to agg(). However, the agg() call can take zero arguments. A full list of the aggregating functions is available here.
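
For instance, a groupby() followed by a zero-argument agg() simply collapses the data to the distinct pod / service pairs, with no aggregate columns. A minimal sketch:

# Sketch: agg() with zero arguments keeps only the unique groupings.
distinct_pairs = df.groupby(['pod', 'service']).agg()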

# Import Pixie's module for querying data
import px

# Load the last 30 seconds of Pixie's `conn_stats` table into a DataFrame.
df = px.DataFrame(table='conn_stats', start_time='-30s')

# Each record contains contextual information that can be accessed by reading ctx.
df.pod = df.ctx['pod']
df.service = df.ctx['service']

# Group data by unique pairs of values in the 'pod' and 'service' columns and
# calculate the sum of 'bytes_sent' and 'bytes_recv' for each grouping.
df = df.groupby(['pod', 'service']).agg(
    bytes_sent=('bytes_sent', px.sum),
    bytes_recv=('bytes_recv', px.sum)
)

# Force ordering of the columns (do not include _clusterID_, which is a product of the CLI and not the PxL script)
df = df[['service', 'pod', 'bytes_sent', 'bytes_recv']]

# Display the DataFrame with table formatting
px.display(df)
  1. Save your script, exit the Live CLI using ctrl+c and re-run the script.
Script output in the Live CLI after grouping and aggregating the data.

Each row in the output represents a unique pod and service pair that had one or more connections traced in the last 30 seconds. All of the connections between these pod / service pairs have had their sent and received bytes summed for the 30-second time period.

Filtering

Let's filter out the rows in the DataFrame that do not have a service identified (an empty value for the service column).

# Import Pixie's module for querying data
import px

# Load the last 30 seconds of Pixie's `conn_stats` table into a DataFrame.
df = px.DataFrame(table='conn_stats', start_time='-30s')

# Each record contains contextual information that can be accessed by reading ctx.
df.pod = df.ctx['pod']
df.service = df.ctx['service']

# Group data by unique pairs of values in the 'pod' and 'service' columns and
# calculate the sum of 'bytes_sent' and 'bytes_recv' for each grouping.
df = df.groupby(['pod', 'service']).agg(
    bytes_sent=('bytes_sent', px.sum),
    bytes_recv=('bytes_recv', px.sum)
)

# Force ordering of the columns (do not include _clusterID_, which is a product of the CLI and not the PxL script)
df = df[['service', 'pod', 'bytes_sent', 'bytes_recv']]

# Filter out connections that don't have their service identified.
df = df[df.service != '']

# Display the DataFrame with table formatting
px.display(df)
  1. Save your script, exit the Live CLI using ctrl+c and re-run the script.
Script output in the Live CLI after filtering out rows without a service identified.

The script output shows that the rows that were missing a service value are no longer included in the table.
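
Filters of this kind can also be chained: each filter expression returns a new DataFrame, so you can narrow the data in successive steps. A minimal sketch (the bytes_sent threshold is an arbitrary illustration, not part of the tutorial's script):

# Sketch: chain filters to narrow the data in successive steps.
df = df[df.service != '']     # drop rows with no service identified
df = df[df.bytes_sent > 0]    # keep only groupings that sent traffic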

Conclusion

You now have a script that produces a table summarizing the total amount of traffic coming in and out of each of the pods in your cluster for the last 30s (start_time). This could be used to:

  • Examine the balance of a pod's incoming vs. outgoing traffic (see the sketch below).
  • Investigate if pods under the same service receive a similar amount of traffic or if there is an imbalance in traffic received.
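
As a starting point for the first idea, you could extend the script's final lines to compare outgoing and incoming traffic per pod. A hedged sketch (the sent_recv_ratio column name is an illustration, and it assumes bytes_recv is non-zero for the rows of interest):

# Sketch: add a column comparing outgoing to incoming traffic per pod.
# Assumes bytes_recv is non-zero for the rows of interest.
df.sent_recv_ratio = df.bytes_sent / df.bytes_recv
px.display(df)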