My Pure colleague and good friend Anthony Nocentino recently wrote a great blog post on how to get the Pure Storage FlashArray OpenMetrics Exporter up and running. I wanted to add some context around how I got it running in Docker Desktop on Mac and Windows, along with some issues that came up and were easily remedied. I also wanted to provide a link to the FlashBlade version of this great toolset for those that have them in their environments. Both work pretty much the same way for installation and configuration, the difference being the metrics that are exported.
The Setup
Not much to tell on the Docker Desktop install. It works without a hitch. It takes a while, and requires a reboot on the Windows side, but it runs smoothly after that. Pulling the images was seamless as well, obviously, since we're just, um, pulling the images. Nothing fancy. I pre-pull the images so I have them readily available. Your choice, of course. Here's what I pull:
docker pull quay.io/purestorage/pure-fa-om-exporter:latest
docker pull prom/prometheus:latest
docker pull grafana/grafana:latest
Next, we need to create the virtual volumes for Prometheus and Grafana. This step isn't necessary if you plan on editing files directly in the containers themselves. I created two folders: /prometheusdockervolume and /grafanadockervolume. The Prometheus folder contains the custom prometheus.yml config file, along with a pure.rules file. I used the default prometheus.yml file supplied in the /extra/prometheus folder (not the /config folder) and the rules files supplied in the repository, and adjusted them for my environment. The yml file in the /extra/prometheus folder contains the multi-target config, which is what I wanted; a sketch of what that looks like follows below. The Grafana virtual volume is for extra space, since I will be collecting a lot of metrics, and it also keeps a static database separate from the container for data persistence with any Grafana container (i.e., if I accidentally delete my container). You can read more about that here.
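For reference, here is a minimal sketch of what a scrape job for the exporter can look like. This is not the exact file from the repository, just an illustration of the pieces involved (the bearer token, the endpoint parameter, and the exporter target); the array address and token below are placeholders from my environment, so adjust them for yours.

scrape_configs:
  - job_name: 'purefa'
    metrics_path: /metrics/array        # the exporter's array metrics endpoint
    authorization:
      credentials: MyTokenHere          # FlashArray API token (placeholder)
    params:
      endpoint: ['10.225.112.90']       # the array the exporter scrapes (placeholder)
    static_configs:
      - targets:
          - 172.18.0.2:9490             # the exporter container's IP and port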
We then need to create a dedicated Docker network. You can bypass this, but it does make life a lot easier if you need to troubleshoot communication between containers. When you create a Docker network, it will use a non-overlapping subnet that does not interfere with any other Docker networks or your local machine's network. The default Prometheus config file supplied already has the targets: keys defined with a subnet of 10.0.2.0/24. You can create a network that uses that subnet, or allow Docker to create one for you. I chose to let Docker do the work. Open up a terminal and type:
docker network create prometheus-network --driver bridge
For a specific subnet:
docker network create prometheus-network --subnet 10.0.2.0/24 --driver bridge
Now that the network is out of the way, here are the command lines I use to spin up the containers:
docker run -d -p 9490:9490 --name pure-fa-om-exporter --network prometheus-network quay.io/purestorage/pure-fa-om-exporter:latest
docker run -d -p 9090:9090 --name=prometheus --network prometheus-network -v D:/exporter/prometheusdockervolume/prometheus.yml:/etc/prometheus/prometheus.yml -v D:/exporter/prometheusdockervolume/pure.rules:/etc/prometheus/rules.yml prom/prometheus:latest
docker run -d -p 3000:3000 --name=grafana --network prometheus-network -v D:/exporter/grafanadockervolume:/var/lib/grafana grafana/grafana:latest
Now, I check the Docker UI and see them all spun up and in a running state. You could do this in the Docker CLI as well, but I like the visual.
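If you prefer to check from the CLI, something like this works just as well (a simple status listing, nothing specific to this setup):

docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'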
You will now want to run docker network inspect prometheus-network from a terminal prompt to see the containers and which IP addresses they have been assigned (docker network ls only lists the networks themselves). You should be able to connect to all the instances via localhost:<port_number>. You must make sure that all of the targets: keys in the yml file have the correct IP address for the exporter. As an example, my Docker network looks like this:
"Containers": { "44afe44ffb0a6df5a9420d563b2428e55dd4fb68256856975305acef26e2130b": { "Name": "pure-fa-om-exporter", "EndpointID": "1b3c586e465e3e3fd43ca64a1fbaf9574e504874a53381ae79942f6253521982", "MacAddress": "02:42:ac:12:00:02", "IPv4Address": "172.18.0.2/16", "IPv6Address": "" }, "9e1c6815a40f436a057711ea8cf89e2fa58e50218eb97a3e892e6b7165cf54b1": { "Name": "prometheus", "EndpointID": "8539cde86bf8b2824b9e4a593f0eafe73c6ab7b0b3538124e7f821e8e5778ac3", "MacAddress": "02:42:ac:12:00:03", "IPv4Address": "172.18.0.3/16", "IPv6Address": "" }, "b54a78f8df33a7497540c4d58d73e955d65dd9d5181547a5134c9830f988d5de": { "Name": "grafana", "EndpointID": "5c95f6d732690588093313320a1ffab003e992f2bb114604b27d54c2dc6f535e", "MacAddress": "02:42:ac:12:00:04", "IPv4Address": "172.18.0.4/16", "IPv6Address": ""
My prometheus.yml file targets: keys would look like this:
targets:
  - 172.18.0.2:9490
To ensure that the exporter and Prometheus are talking to each other, connect to the Prometheus UI via localhost:9090, then click Status and then Targets. You should see something similar to this, with blue being good and red being bad. (My red is due to not having any Pods created on my arrays.)
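You can also check target health from a terminal using the Prometheus HTTP API; for example (assuming jq is installed, otherwise just read the raw JSON):

curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[] | {scrapeUrl, health, lastError}'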
The Trouble with Tribbles
The issues started when trying to get everything to talk to each other and wrangling a localhost connection. I had an issue where port 80 was already in use, so make sure that nothing else is using that port. It took me a while to find that bugger, since it was a nodejs process that I had set up a while ago that (thankfully) wasn't doing anything. I was then able to connect to the exporter web page at localhost:9490.
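As an aside, if you hit the same kind of conflict, a quick way to see what owns a port (the exact command depends on your OS):

# macOS
lsof -i :80
# Windows
netstat -ano | findstr :80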
Once I could see the exporter page, I went into Prometheus and found an interesting error – Target authorization token is missing
It turns out that the exporter was able to scrape the FlashArray, but Prometheus could not connect to the exporter to ingest the raw data. I reached out to one of the authors, who told me to try this curl command to see if Prometheus could, in fact, connect to the instance.
curl -H 'Authorization: Bearer MyTokenHere' -X GET 'http://127.0.0.1:9490/metrics/array?endpoint=10.225.112.90'
Tip: if you deploy it within K8s, you can also use the --debug parameter on the container.
It could connect, so I was still hunting down what the issue could be. I finally got to the point of recreating my prometheus.yml config file and, lo and behold, it worked! It seems I had a typo in the old file. Fat fingers get me every time.
Once that was running, it took a while for metrics to start flowing into both Prometheus and Grafana, so I took a little time to import the default dashboard provided in the repo under /extras, and to start creating my own dashboards. This can be a process depending on how you want them to look, what types of graphs you want, etc. I always start to get complicated and then revert to K.I.S.S.!
Well, that's it. All is up and running, we're collecting metrics, and building graphs and charts. Many thanks to the Pure folks who created this fine bit of code. It is incredibly helpful for metrics data and very easy to get up and running! Now to get it running in K8s…