DevRel metrics and why they matter

Sean Falconer
Dec 16, 2020




If you can’t measure it, you can’t improve it. — Peter Drucker

Peter Drucker’s often-cited line is one of the most famous quotes in business, and from my own experience as a startup founder and manager, I’ve found it to be very true. However, developer relations is notoriously difficult to measure, in part because it’s also notoriously difficult to define.

Developer relations means different things to different people. It encompasses many different tasks and responsibilities, and it can reside within different areas of an organization (typically engineering, product, or marketing).

But to find success within any business, you need to be able to justify your existence. The leadership of the organization needs to understand the 10,000-foot view of the value you bring.

Clearly defined metrics tied directly to the overall program goals are simply the best way to do that. They may not tell the full story of developer relations, but metrics need to be part of the culture for any developer relations program to succeed and be valued.

In this article, I discuss North Star metrics for developer relations programs, the activities we do as developer relations engineers, and how to measure those activities and tie them back to your North Star metric. Before we get there, let’s take a look at why measuring developer relations activities is particularly difficult.

Why is measuring the impact of developer relations hard?


At its core, developer relations is about relationship building; we are in the relationship business. We engage, acquire, and satisfy developers, and we build healthy communities, but measuring the health of a community is extremely challenging.

Furthermore, many activities that are important to building strong relationships, like connecting over coffee at an event, making yourself available to answer community questions, or introducing a roomful of students to your product, might not have a clearly defined or easily traceable quantitative measurement.

For example, you could run a hackathon with university students, and ideally you would be able to track conversions for your platform back to this event. But they’re university students; it could be years before they end up working for a company where they need and recommend your platform. That’s a pretty tough timeline to measure against, but it doesn’t mean there’s no value there.

Also, many of the things we can easily measure, like page views, video views, time on site, and attendance at an event, are nice to have, but may or may not correlate with increased ROI for whatever measurement is critical to your product or organization.

That being said, simply shrugging and saying we can’t measure what we do is not going to fly. Tracking metrics, correlating them with program success, and creating visibility for those metrics are key to justifying developer relations as a function and securing continued investment.

Establishing a North Star metric


A North Star metric is a single measurement that is predictive of your company’s or product’s long-term success. For developer relations, it’s important that your North Star metric maps to the overall goals of your company or product.

Below are a few common examples of North Star metrics for developer relations.

Time to Hello, World

The time to Hello, World (TTHW) measures the time it takes a developer to go from sign up to completing whatever your platform’s equivalent of Hello, World is.

With the best developer-focused companies in the world, developers can do this within 5 minutes. Most companies believe their TTHW is about 15 minutes, but in reality, when tested, 50% of developers never even make it that far.
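As a rough sketch, TTHW can be computed from two timestamps per developer: when they signed up and when they first completed your platform’s Hello, World. The example below uses hypothetical in-memory event data (a real pipeline would pull this from your analytics store), and it tracks the completion rate alongside the duration, since the developers who never get there won’t show up in an average.

```python
from datetime import datetime
from statistics import median

# Hypothetical event data: sign-up time and first "Hello, World" success time per developer.
signups = {"dev_1": datetime(2020, 12, 1, 9, 0), "dev_2": datetime(2020, 12, 1, 10, 0)}
first_success = {"dev_1": datetime(2020, 12, 1, 9, 4)}  # dev_2 never got there

def tthw_minutes(signups, first_success):
    """Minutes from sign up to first success, for each developer who made it."""
    return {
        dev: (first_success[dev] - signed_up).total_seconds() / 60
        for dev, signed_up in signups.items()
        if dev in first_success
    }

durations = tthw_minutes(signups, first_success)
completion_rate = len(durations) / len(signups)  # share of developers who ever reach Hello, World
print(f"Median TTHW: {median(durations.values()):.1f} min, completion rate: {completion_rate:.0%}")
```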

Simply measuring and trying to improve this metric is not enough though. You need to tie this back to the overall company or product goals.

How does improving this metric increase sign ups, revenue, engagement, conversion rate, and so on?

Those are the metrics that most executives are going to care about; they will likely only care about TTHW if you can tell the story of how improving this metric leads to an improvement in one or more of those company-specific metrics.

One thing to note about this metric is that, beyond a certain point, the effort required to optimize it further may not be worth the value you get from that optimization. For example, if you get TTHW down to 5 minutes, the effort to reduce it further to 4 minutes may not be worth it relative to the impact, and you may be better off establishing a new metric where the effort you put in results in more impact.

Active developers

Increased activity usually means increased engagement, and increased engagement often means more revenue. A big part of developer relations is making developers successful, and increasing active developers arguably correlates with developer success.

Making the growth of active developers your North Star does not mean you can’t also focus on reducing TTHW; in fact, reducing TTHW might move this number significantly. It just means that the primary metric you are tracking and trying to optimize is active developers.
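How you define “active” matters as much as the count itself. Below is a minimal sketch that counts developers with at least one API call in a trailing 28-day window; the usage log and the window size are assumptions to swap for whatever activity signal fits your product.

```python
from datetime import date, timedelta

# Hypothetical usage log: developer id -> dates on which they made at least one API call.
usage = {
    "dev_1": [date(2020, 12, 10), date(2020, 12, 14)],
    "dev_2": [date(2020, 11, 2)],
    "dev_3": [date(2020, 12, 15)],
}

def active_developers(usage, as_of, window_days=28):
    """Developers with at least one API call in the trailing window."""
    cutoff = as_of - timedelta(days=window_days)
    return {dev for dev, days in usage.items() if any(d > cutoff for d in days)}

print(len(active_developers(usage, as_of=date(2020, 12, 16))))  # -> 2
```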

Lead generation

While active developers is a post-registration measurement, lead generation is really the top of the funnel. Lead generation is often a metric owned by marketing, but developer relations can play a key role in driving leads. Many of our activities, from blogging to creating videos to presenting at a conference, can create leads for your product.

The danger with this North Star metric is that it’s easy to manipulate and doesn’t relate directly to the core principle of developer relations, which is to make developers successful. If you focus solely on growing sign ups, you may still have a leaky bucket where developers leave shortly after signing up because your developer experience is poor.

If this is your metric, you should also look at the value of those leads over their lifetime, with cohorts by source. It could be that you get a lot of leads from an activity like speaking at a conference, but all of those leads stop engaging after a short period of time. Conversely, you may get fewer leads from blogging, but those leads end up sticking around and valuing your product. You need to understand the lifetime value of a lead based on its originating source.
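To make the cohort idea concrete, here is a minimal sketch that groups leads by originating source and compares value per lead. The lead records and values are hypothetical, and “lifetime value” could just as easily be retention weeks or API call volume.

```python
from collections import defaultdict

# Hypothetical leads: acquisition source plus the value attributed to the lead so far.
leads = [
    {"source": "conference_talk", "lifetime_value": 0},
    {"source": "conference_talk", "lifetime_value": 40},
    {"source": "blog", "lifetime_value": 900},
    {"source": "blog", "lifetime_value": 350},
]

def value_per_lead_by_source(leads):
    """Average lifetime value of a lead, grouped by originating source."""
    totals, counts = defaultdict(float), defaultdict(int)
    for lead in leads:
        totals[lead["source"]] += lead["lifetime_value"]
        counts[lead["source"]] += 1
    return {source: totals[source] / counts[source] for source in totals}

print(value_per_lead_by_source(leads))
# {'conference_talk': 20.0, 'blog': 625.0}: fewer blog leads, but each one is worth far more
```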

Measuring DevRel activities

The developer relations cycle is to create and own the engineering resources necessary to make a developer successful with a product, educate and drive awareness within developer communities, inspire those communities to take action, and collect feedback to influence the product to improve the developer experience (see image below).

Image: the Developer Relations ongoing interface cycle, from The Core Competencies of Developer Relations

Each of these components of the developer relations lifecycle entails a variety of activities, and for each activity we should measure the result and see how it impacts our North Star metric.

In this section, I break down each part of the lifecycle, discuss the activities that are typically part of that component, and cover what we can and should measure.

With every activity, you need to have a plan for measuring quality and impact. Quality lets us know that what we created was valued by those who used it, and impact lets us know that what we created positively moved our North Star metric. Understanding how you will measure these things beforehand helps communicate the results later and helps you focus your future efforts on the most impactful work items.

Developer resources

Developer resources are the engineering artifacts developer relations engineers create to help developers succeed with a product. These are things like documentation, client libraries, code labs, video tutorials and sample code.

Quality measurements

To measure quality for code, you can look at things like stars on your GitHub repositories, and you can survey your community to collect CSAT and qualitative data related to your libraries and samples.

For documentation, you can look at things like page views or time on site, which at least indicates that people are using your carefully crafted documentation. On our team, we run bi-quarterly surveys to collect both quantitative (satisfaction, usability, and completeness) and qualitative feedback to understand where we can improve.

To go a step further, for documentation that attempts to fill a particular information gap or help developers self-serve to resolve an issue, you can look at bug generation before and after the article goes live to see if you reduced the number of issues being raised.

Similarly, for video, you can look at views, thumbs up/down, average watch time, and also look at the delta for bugs before and after if the video is targeted at helping developers perform a specific task.
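That before/after comparison is easy to automate once issues are tagged by topic. Here’s a minimal sketch, assuming a hypothetical list of issue dates for the topic the new content covers and a 30-day window on either side of the publish date.

```python
from datetime import date, timedelta

# Hypothetical support issues tagged with the topic the new doc (or video) covers.
issue_dates = [date(2020, 11, 20), date(2020, 11, 25), date(2020, 11, 28), date(2020, 12, 14)]
published = date(2020, 12, 1)

def issue_delta(issue_dates, published, window_days=30):
    """Issues raised in the window before vs. after the content went live."""
    window = timedelta(days=window_days)
    before = sum(1 for d in issue_dates if published - window <= d < published)
    after = sum(1 for d in issue_dates if published <= d < published + window)
    return before, after

before, after = issue_delta(issue_dates, published)
print(f"{before} issues in the 30 days before, {after} in the 30 days after")
```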

Impact measurements

The more challenging piece is understanding how the content has impacted your North Star metric. For each of the potential North Star metrics listed before, you can look at the delta between the metric’s value before and after the content lands.

It’s not necessarily causation, as there are likely many moving parts, but if you see a positive trend every time you land a new sample or video, then the effort was probably worth it.

For something like client libraries or samples, you can trace usage to developer engagement. With our client libraries, we measure the number of API calls the libraries are responsible for versus total API calls to understand the overall impact the libraries are having.
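One common way to get such a ratio (not necessarily how our team instruments it) is to have the client libraries identify themselves in a user-agent-style field and compare those calls against total traffic. A minimal sketch with hypothetical client names:

```python
# Hypothetical API request log entries with the client reported in a user-agent-style field.
requests = [
    {"user_agent": "acme-python-client/2.1"},
    {"user_agent": "acme-node-client/1.4"},
    {"user_agent": "curl/7.68.0"},
    {"user_agent": "acme-python-client/2.1"},
]

OFFICIAL_CLIENT_PREFIXES = ("acme-python-client", "acme-node-client")  # hypothetical library names

def library_call_share(requests):
    """Fraction of total API calls made through the official client libraries."""
    library_calls = sum(
        1 for r in requests if r["user_agent"].startswith(OFFICIAL_CLIENT_PREFIXES)
    )
    return library_calls / len(requests)

print(f"{library_call_share(requests):.0%} of API calls come through our client libraries")
```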

Education, awareness, and inspiring action


Tasks related to educating and driving awareness are all the community activities we do in developer relations: speaking at conferences, running workshops, hosting hackathons, blogging, demos, and answering questions in online communities.

It can be pretty difficult to directly tie these activities to a North Star metric of TTHW, but with some work, connecting these to platform engagement or lead generation is doable.

With any event, you can create a trip report that breaks down what you did, the goals of your involvement, observations/themes, notes about the attendees, highlights, and learnings & feedback. More importantly, if you know the attendee list, you can also track conversions from the event to activity on the platform. It’s not always easy or possible to do, but when it is possible, it is an extremely effective means of demonstrating the impact of the event or workshop.
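When you do have the attendee list, the conversion check is essentially a join between that list and your sign-up and activity records (ideally on hashed emails, for privacy). A minimal sketch with hypothetical data:

```python
# Hypothetical attendee list from an event and records from the platform.
attendees = {"ada@example.com", "grace@example.com", "linus@example.com"}
signups = {"grace@example.com", "barbara@example.com"}
newly_active = {"grace@example.com"}  # signed-up developers who then made an API call

converted = attendees & signups     # attendees who signed up after the event
activated = converted & newly_active  # and who went on to use the platform

print(f"{len(converted)}/{len(attendees)} attendees signed up, {len(activated)} became active")
```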

Also, if you organized the event, you can send a survey post-event to measure satisfaction and collect qualitative feedback.

With demos, you can look to see whether companies or people are creating experiences similar to what you showed in your demo. It’s a bit challenging to keep track of, but if you focus on the largest, most impactful businesses on your platform, you can significantly narrow the search space.

Lead generation is probably the easiest of the proposed North Stars to tie directly to these activities, since you can use a lot of standard marketing techniques like tracking the source of a conversion.

That works best for blogging and online communities; it’s a bit trickier for speaking engagements. You can look at the sign up delta before and after the event to see if there was a measurable increase due to the activity. Also, if you know the attendee list, you can try tracing conversions from that list to sign ups.

Feedback and influence


In my opinion, one of the most critical parts of our jobs in developer relations is sharing feedback and influencing the product to create a better developer experience.

There’s often a push and pull between product and developer relations, where the right decision for developers could be to make a product change, but for whatever reason it can’t be prioritized, and developer relations is left trying to manage the issue through documentation. But documentation can only take you so far; product complexity cannot be fixed by docs, and the right thing to do is address it in the product.

How you get product features prioritized likely deserves its own dedicated blog post, but once you are able to convince people to make a product change, from a metrics standpoint it’s important that you catalogue the feedback you collected and the proposal you made that led to the change. You also need to measure the impact of the change.

Are people using the feature? Does it reduce TTHW? Grow engagement? Create sign ups?

These are things you need to know to help people understand how critical your function is to the lifecycle and success of the product. I’ve found that once I’ve been able to influence product and the changes I suggested led to positive outcomes (and I clearly articulate this through metrics), it gets easier to make subsequent suggestions. People learn to trust that the suggestions you make are meaningful and will continue to lead to positive outcomes.

Final thoughts

Metrics are key to helping leaders within your company understand the important role that developer relations fills. You need to establish a North Star metric that is aligned with the goals of the overall organization or product. Every activity you participate in or prioritize should have a metrics story.

How are you measuring the quality of that thing and the impact?

You should be thinking this through and answering these questions prior to investing your time; otherwise, you won’t be able to evaluate whether the activity was worth it or justify continued investment and resources dedicated to developer relations.

Within any organization, there are those making decisions about product features, go-to-market strategy, key partnerships, and so on, and those who are responsible for executing the vision of the decision makers. Beyond metrics, developer relations needs to insert itself into the decision-making process of the product lifecycle.

A lot of what gets built should come directly from the feedback we collect and the suggestions we make for improving the developer experience. We can’t just be passengers on this ship, simply executing other people’s vision; we need to help drive and navigate what gets built.

If what is getting prioritized and built comes from decisions you made, and that leads to overall product success, it’s very easy to justify why you should continue to get paid.

