Blog

Data Analytics Lessons from the NBA's first data mining product, IBM's Advanced Scout

I attended the Boston Business Intelligence meetup at Microsoft last week. I was spurred to attend because Bedrock Data is launching a new product in January that closely aligns us with business intelligence teams, serving as the mechanism to supply unified customer data across applications for business analytics and dashboards.

It was a flashback moment for me, taking me back to the start of my career at IBM in the mid-1990s, where I worked with a small team led by Dr. Inderpal Bhandari – then a chief research scientist, today the Chief Data Officer of IBM.

Our team created and launched IBM’s Advanced Scout, the first data mining software used by NBA teams. Within two years it was used by 24 of the 28 NBA teams, powered a regular content series, “Beyond the Boxscore,” on NBA.com, and was covered in a host of publications including the Wall Street Journal, Sports Illustrated, the Washington Post, Computer World, CIO Magazine and Wired Magazine.

I’m proud of those achievements in launching the product, and there are takeaways from what we did that apply to business intelligence projects to this day, over two decades later.

Let me start with some background and then get to the lessons. Two things were happening in 1995: Inderpal had developed a new data mining algorithm at IBM Research to find interesting patterns in data, and IBM, as an NBA sponsor, was part of an effort to systematize, for the first time, the collection of NBA play-by-play data through courtside data collection.

Inderpal saw an opportunity, asking the question, “Where’s the opportunity to apply these new data mining algorithms to the NBA using this new play-by-play data resource?”

That’s where I came in.

As a high schooler in the New York suburbs at the peak of the rough-and-tough New York Knicks and Big East basketball era, I was deep into the game. I was the man for the job. I met Inderpal at an IBM student luncheon in the spring of 1995 – and we were off.

It was over 20 years ago but I still remember it like yesterday (although I’ve forgotten a lot of what’s happened in between). These are three lessons I took from what we achieved and how we achieved it, applicable to any data analytics project.

#1 – Focus on the Most Important Performance Metric

The slate I was given at the time was this: we had data on each play outcome. Not the detailed pass-level data you might see today, but the basics – who shot the ball, what happened (score or miss), whether there was an assist, who got the rebound – as well as data on substitutions.

The question was – what can we do that’s available in the data and compelling for coaches?

At the time, the most common basketball success metric was field goal percentage.  We went in a different direction.

We focused on points scored, looking at the lineup combinations that had the highest contribution to positive and negative point differentials. Our thinking was that there’s a ton happening on the basketball court, but if we start with the outcome as the point of analysis, we are going to unearth patterns that are interesting to the coaches. It is meaningful if we can tell the coach that a certain player is making the difference, or that a combination of players or matchups is making the difference. Take the noise of an entire NBA game or series of games, and net out for the coach what matters.
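To make this concrete, here is a minimal sketch, in Python, of how lineup plus-minus differentials could be computed from play-by-play data. The record fields are hypothetical illustrations, not the actual Advanced Scout implementation:

```python
from collections import defaultdict

def lineup_point_differentials(plays):
    """Accumulate plus-minus for each five-man lineup.

    `plays` is assumed to be a list of dicts such as:
      {"lineup": ("Jackson", "Wilkins", "Newman", "Oakley", "Ewing"),
       "points_for": 2, "points_against": 0}
    with one entry per play while that lineup was on the floor.
    """
    diffs = defaultdict(int)
    for play in plays:
        # Sort so the same five players always map to the same key
        key = tuple(sorted(play["lineup"]))
        diffs[key] += play["points_for"] - play["points_against"]
    # Largest positive differential first
    return sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)
```

Ranking lineups by net points, rather than by field goal percentage, surfaces exactly the kind of pattern a coach can act on.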

The personal legacy for me is applying that thinking to analytics across a variety of industries. The big breakthrough in marketing analytics in the past decade has been looking at which marketing activities are most connected to revenue as an outcome. The same method of thinking that led us to points as the key outcome for basketball leads to these types of closed loop marketing analytics.

The basketball legacy is that this type of plus-minus points analysis has become mainstream in basketball and can now be found in daily box scores on every sports website. At the time, it wasn’t a mainstream metric for basketball – in fact, it was most commonly found in hockey, where very few goals are scored. I thought: what if we applied it to basketball, where there are many, many points?

Here’s the ESPN box score from the recent Christmas Day game between the Warriors and Cavs. You can see +/- featured in the box score; in this particular game the Warriors performed best when Andre Iguodala, from their reserve squad, was on the floor. He was the unsung hero with 4-for-8 shooting, 6 rebounds and a blocked shot off the bench.

Every ESPN NBA box score now features the +/- metric, which back in 1995 was a hockey-only stat.

#2 – Get Creative to Solve Data Challenges

To make Advanced Scout compelling for coaches, we had to get creative. The data mining system worked off the concept of attributes, looking at which combinations of attributes were most interesting (statistically significant) for a specific numeric attribute (points).

We had information on substitutions, but it wasn’t immediately clear how we could use it for analysis. We couldn’t just throw in the list of players on the floor as unstructured data; it needed to be a structured format that would work as an input to the data mining engine.

Then it came to me.

We came up with the idea of a player order ranking, for which we prepared a default for the coaches based on player height. This ranking, plus the players on the floor, determined which positional slot each player filled as an attribute – using the order of point guard (PG), shooting guard (SG), small forward (SF), power forward (PF) and center (C).

I got into basketball around the 1989-90 Knicks, so let’s take that team as an example. Using these seven players, we would order them like this:

  1. Mark Jackson
  2. Rod Strickland
  3. Gerald Wilkins
  4. Johnny Newman
  5. Kenny Walker
  6. Charles Oakley
  7. Patrick Ewing

So with the starting lineup of Jackson, Wilkins, Newman, Oakley & Ewing, the players would slot into the positions like this, in order:

          PG = Jackson

          SG = Wilkins

          SF = Newman

          PF = Oakley

          C = Ewing

Swap out Jackson for Strickland, Strickland swaps into point guard. Swap out Oakley for Walker, Walker slots into power forward.

Let’s say there was a “small ball” lineup with Jackson, Strickland, Wilkins, Newman and Oakley (that would be pretty rare), then it would lay out like this:

          PG = Jackson

          SG = Strickland

          SF = Wilkins

          PF = Newman

          C = Oakley

Strickland’s not a shooting guard, you might say. Or Oakley’s not a center.

But in these lineups, they were playing those roles – and this allowed us to ensure the algorithms knew that was the case when those lineups were on the floor. 
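Here is a minimal sketch of that slotting rule in Python, using the Knicks example above. It is illustrative only, not the original implementation:

```python
POSITIONS = ["PG", "SG", "SF", "PF", "C"]

# The default ranking for the 1989-90 Knicks example, ordered by height
RANKING = ["Mark Jackson", "Rod Strickland", "Gerald Wilkins",
           "Johnny Newman", "Kenny Walker", "Charles Oakley",
           "Patrick Ewing"]

def slot_lineup(on_floor, ranking=RANKING):
    """Assign the five players on the floor to positional slots.

    Players are ordered by their place in the coach's ranking and
    then assigned PG, SG, SF, PF, C in that order.
    """
    ordered = sorted(on_floor, key=ranking.index)
    return dict(zip(POSITIONS, ordered))

# The "small ball" lineup from the example above:
print(slot_lineup(["Mark Jackson", "Rod Strickland", "Gerald Wilkins",
                   "Johnny Newman", "Charles Oakley"]))
# {'PG': 'Mark Jackson', 'SG': 'Rod Strickland', 'SF': 'Gerald Wilkins',
#  'PF': 'Johnny Newman', 'C': 'Charles Oakley'}
```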

This approach created the foundation for analysis that generated insights from every game – insights shared with coaches and also used as content for an NBA.com series and TV broadcasts.

There were many, many stories that resulted – see the end of the article for links to the ones I could find still published. A Darrell Armstrong story became the most famous.

In 1996-97, Armstrong was a backup guard for the Orlando Magic. He averaged a modest 6.1 points over 15 minutes per game off the bench in the regular season, shooting 38% from the field.

In the playoffs, the Magic took on the Heat and lost the first two games of the best-of-five series. Following game two, Advanced Scout flagged for the Magic coaches that the team was performing best with smaller lineups that had Armstrong on the court. Armstrong played a much more prominent role from there forward; the Magic won games three and four and gave the Heat a run in game five.

In the playoffs, Armstrong nearly doubled his regular season points and minutes while upping his shooting to 48% from the field. The breakthrough carried over: for each of the next four seasons his minutes per game increased, and he became a more and more prominent part of the Magic. In 1998-99, he won both the Most Improved Player and Sixth Man of the Year awards.

This story was covered in multiple outlets, as you can see in the links at the end of this article. Here’s how the great NBA reporter Jackie MacMullan told it in a 1998 Sports Illustrated article titled “Cyber Scouting”:

A dramatic example of the value of computer scouting came in the first round of the playoffs last season, when Orlando found itself down 2-0 to the Miami Heat, having lost those games by an average of 26 points. When the Magic got home after the second loss, Sterner spent three hours in his office plugging questions into the Advance Scout program.

Shortly after 3 a.m. he unearthed a nugget: With reserve point guard Darrell Armstrong on the floor, Orlando had outscored the Heat by 15 points during the two games. In addition, the Magic had shot 64% with Armstrong on the floor and 37% without him, while Miami had shot 57% while Armstrong was out of the game and 45% when he was harassing point guard Tim Hardaway and his Heat teammates. Sterner called up corresponding video footage, which showed how effectively Armstrong had pushed the ball up the floor in transition and created scoring opportunities, and how, on defense, he had forced Miami turnovers and caused the Heat to resort to tough shots.

Armstrong had played only 23 minutes in the two games. In Game 3 Orlando coach Richie Adubato played Armstrong 38 minutes. He had 21 points, eight assists and one turnover, and the Magic won 88-75. Rejuvenated Orlando also won Game 4, with Armstrong contributing 12 points, nine rebounds and one assist. Although Orlando dropped the deciding fifth game in Miami, the Magic had been transformed from a floundering club into a team infused with new life--not to mention nearly $3 million more from ticket sales, concessions and television revenues.

The Darrell Armstrong Advanced Scout story was featured in Sports Illustrated in 1998


#3 – First & Foremost, Deliver on Decision Support

A key premise of how we approached Advanced Scout came from Inderpal and our chief software architect, Rajiv Pratap: every time we talked to the coaches, we talked about being a tool to help them make better decisions.

To bring this to life, we linked the stats to video. Since every play was time stamped, we could take an insight like the Armstrong nugget and then feed the specific offensive and/or defensive plays from when that lineup was on the floor.
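As a minimal sketch, assuming each play record carries a tape offset in seconds and the lineup on the floor (field names are hypothetical), the stat-to-video link could look like this:

```python
def clips_for_insight(plays, lineup_filter, pad_seconds=5):
    """Return (start, end) tape offsets for plays matching an insight.

    `lineup_filter` is a predicate over the players on the floor,
    e.g. lambda lineup: "Darrell Armstrong" in lineup
    """
    clips = []
    for play in plays:
        if lineup_filter(play["lineup"]):
            start = max(0, play["timestamp"] - pad_seconds)
            clips.append((start, play["timestamp"] + pad_seconds))
    return clips
```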

In fact, the main allies of the tool became the video coordinators who could now link analytics to how they fed video to the rest of the coaching staff. These video coordinators became trusted advisors to head coaches. Two of the video coordinators we worked closely with in those years were Erik Spoelstra and Frank Vogel.

Spoelstra was then video coordinator for Pat Riley’s Miami Heat and, after moving up the ranks, became their head coach in 2008. Vogel was then video coordinator for Rick Pitino’s Boston Celtics, and became head coach of the Indiana Pacers in 2011 and the Orlando Magic in 2016.

The legacy of IBM’s Advanced Scout is significant: the plus-minus stat in basketball; the great stories IBM would leverage for years around generating insights from data; the development of the careers of a generation of analytical coaches; and today, IBM’s Watson technology is closely partnered with ESPN and can be found generating insights for fantasy leagues.

Here’s the press coverage I could locate – and there’d be a lot more if this hadn’t occurred during the early days of the Internet. I also wrote hundreds of stat insight articles for NBA.com under the Beyond the Boxscore banner, which are no longer archived on the site.

Check out the links above for more texture. Other highlights included being featured in an IBM ad campaign (with Inderpal’s photo in a full page print ad), assisting the 1996 Olympic teams in Atlanta, and appearances at numerous IBM and NBA events, including the Olympics and All-Star games.

Finding common ground on closed loop reporting via Boston Marketo User Group

Today’s Boston Marketo User Group (BMUG) featured three presentations and Q&A on closed loop reporting from Paul Green of Extreme Networks, Lauren Brubaker of NetProspex and yours truly, well chronicled on Twitter by many of the marketing automation-erati, including Jarin Chu, Jeff Coveney, Ed Masson and OpFocus.

The presentations and conversation spanned a wide range of topics and perspectives, but these were my top overarching takeaways:

#1 - EVERYONE is trying to improve their closed loop marketing effectiveness

Companies may be at different stages in their journey towards closed loop reporting, but every business spending money on sales & marketing is trying to better understand the payoff on their investments and how to leverage those investments to scale their business performance.  Many companies are in the early stages and trying to head in the right direction, while the ones who have been working on it for several years continue to strive for more.

#2 – The skills of marketing technologists are as in demand as ever

With a room full of some 70 marketing technologists, it was clear from the conversation that everyone is looking for more talent in this area. It’s a great time to be a revenue marketer specializing in technologies such as Marketo.

#3 - Closed loop revenue marketing takes partnership between marketing, sales and IT

This was one of Paul’s summary points, but it applied to all of us – all three of these departments need to contribute to the closed loop engine. IT teams can be a great ally for marketers looking to connect data and systems and to ensure they have the real time dashboards and reporting required to enable closed loop reporting. In Paul’s case, he also noted that his sales and marketing teams are now seen as a single organization reporting into a Chief Revenue Officer.

#4 - Tracking revenue stages is now broadly adopted

While everyone has different approaches to where data resides, what is tracked and how reporting is performed, one common denominator across the room was the underlying fundamental of using revenue stages to track lead progression through the buying process, e.g. MQI to MQL to SAL to SQL.
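As a minimal sketch of what such a stage model looks like in code – the stage names come from the discussion, everything else is illustrative:

```python
from enum import IntEnum

class RevenueStage(IntEnum):
    """Ordered funnel stages; values encode progression order."""
    MQI = 1  # Marketing Qualified Interaction
    MQL = 2  # Marketing Qualified Lead
    SAL = 3  # Sales Accepted Lead
    SQL = 4  # Sales Qualified Lead

def has_reached(lead_stage: RevenueStage, target: RevenueStage) -> bool:
    """True if a lead has progressed at least as far as `target`."""
    return lead_stage >= target
```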

For those who attended (or even if you didn’t), here are some related resources on the topic:

And as there were multiple requests for the slides, here is a download of my PPT deck on closed loop reporting:  Closed Loop Reporting Slides

 


Six critical attributes and three must-have metrics for closed loop measurement of marketing programs

Marketers for decades have been trying to answer the question, “How do you measure campaign effectiveness?”

We’ve come a long way since the days of the quote, "Half the money I spend on advertising is wasted; the trouble is I don't know which half"… which has been attributed to many, although I believe the true attribution goes back to the 19th century and John Wanamaker.

Today measurement is more attainable but we have to deal with issues like parent and child campaigns, UTM tracking, first touch and last touch attribution and campaign influence weighting.

After years of evolution, improvement and refinement, I’ve landed on this model for measuring campaign and program performance:

The six attributes I use to slice and dice campaign measurement:

#1 – Theme

Themes are the roll up of multiple programs and last for multiple quarters.

#2 – Program

A program is the intersection of content and a specific media outlet/vehicle over a specific time duration, and has a cost investment attached to it.

#3 – Medium

Medium identifies the marketing channel e.g. website, blog, social media, paid search, email, retargeting and syndication.

#4 – Media Outlet/Vehicle

This identifies the specific vehicle within a medium. I like to identify major sub-categories for analysis so for example Google splits out into Google Branded Search, Non-Branded Search, Retargeting and Display Network. LinkedIn splits out into Ads and Sponsored Posts. And this also includes specific publishers e.g. Madison Logic, Network World or IDG Connect.

#5 – Call to Action

This identifies the type of call to action used as the primary call to action in the program so common values include Free Trial, Free Tool, White Paper, eGuide, Webinar, On Demand Webinar, Analyst Report or Case Study.

#6 – Content Asset

This identifies the specific content asset by title. With the ever-growing importance of content marketing and the need to quantify content effectiveness, this has reached must-have reporting status and thus warrants its own field, so program effectiveness can be rolled up by content asset.
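Taken together, the six attributes amount to a record you can attach to every program. Here is an illustrative Python dataclass; the field names are my own shorthand, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class ProgramRecord:
    """One marketing program, tagged with the six reporting attributes."""
    theme: str           # multi-quarter roll-up of multiple programs
    program: str         # content x vehicle x time window, with a cost
    medium: str          # e.g. "Paid Search", "Email", "Blog"
    media_vehicle: str   # e.g. "Google Branded Search", "LinkedIn Ads"
    call_to_action: str  # e.g. "Free Trial", "Webinar", "eGuide"
    content_asset: str   # specific content asset title
    cost: float = 0.0    # investment attached to the program
```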

The three ways to measure campaign effectiveness:

I was once asked whether it’s better to measure first touch or last touch campaign attribution, and my answer is “both”. I use this model, which I call A-I-C: Acquisition, Influence, Conversion.

#1 – Acquisition Program

The acquisition program identifies the program used to acquire the MQI – the one that generated the initial interaction. And (like all of these) it carries through to all follow-on performance metrics: MQL, Opportunity, Pipeline $, Bookings, etc. Think of this as measuring “first touch”; a given lead can only have one acquisition program.

#2 – Influence Program

Influence programs measure all leads who engage with a program, and their follow-on performance. This is ideal for program comparison (plotting program performance and identifying top performers and low performers)… it should NOT be used for summation to measure total marketing impact, as there would be double counting across programs. A given lead can and should have multiple influence programs. Since it counts “all attribution”, its strongest use is identifying low performers (with clearly no ROI) and top performers that stand out relative to other programs. A future evolution here is influence weighting systems enabled by companies such as Full Circle.

#3 – Conversion Program

The conversion program identifies the “last touch” program prior to MQL conversion, and all follow-on metrics. A given lead can only have one conversion program.

These three sets of metrics provide a complete picture of campaign/program performance, and should be used in combination to provide perspective and measurement when analyzing the effectiveness of investments.
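To illustrate how the three measures relate, here is a hedged Python sketch deriving all three from a lead’s touch history. The data structures are hypothetical; real systems such as Marketo or Full Circle implement this with far more nuance:

```python
def aic_attribution(touches, mql_date=None):
    """Derive A-I-C attribution from a lead's touch history.

    `touches` is assumed to be a chronological list of
    (date, program_name) tuples.
    """
    if not touches:
        return {"acquisition": None, "influence": [], "conversion": None}
    acquisition = touches[0][1]                      # first touch: exactly one
    influence = [program for _, program in touches]  # all touches: one or more
    pre_mql = [p for d, p in touches if mql_date is None or d <= mql_date]
    conversion = pre_mql[-1] if pre_mql else None    # last touch before MQL
    return {"acquisition": acquisition,
            "influence": influence,
            "conversion": conversion}
```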

Five Powerful Metrics to support your Closed Loop Marketing in an Inbound Marketing and Lead Nurturing World

The first challenge of closed loop marketing is getting a system and process in place to enable your closed loop reporting. I’ve covered how to do that in multiple posts and most succinctly you can read about that in my CMO Essentials article “Six Essentials to Setting up a Closed Loop Marketing System.”

Once you’ve done that, you’ll encounter a new set of challenges – how do you navigate through a set of metrics and reports and use the ones that are most important? Reporting for the sake of reporting helps nobody – the key is identifying the right metrics that help you measure against your strategies and indicate whether you are headed in the right direction or have issues that need to be addressed.

In a world where inbound marketing and lead nurturing are critical to building a high volume and repeatable demand generation machine, these are five metrics I’ve found to be particularly useful:

#1 - Active Marketing Database

Your active marketing database represents your ‘cookied population’ of MQIs whom you are trying to advance to MQL through nurturing programs. A growing active marketing database is a signal that your top of funnel inbound programs are growing and your nurturing practices are not turning off your audience. The active marketing database grows each month by adding new MQIs; falling out each month are unsubscribes, bouncebacks who have not been matched to a new email address, and those who have not engaged with you via a web page visit in over 12 months.
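The monthly roll-forward described above reduces to simple arithmetic. Here is an illustrative sketch; the parameter names are mine:

```python
def active_database_rollforward(prior_active, new_mqis, unsubscribes,
                                unmatched_bounces, inactive_12mo):
    """Month-over-month active marketing database size.

    Adds new MQIs; removes unsubscribes, bouncebacks not matched to a
    new email address, and contacts with no web visit in 12+ months.
    """
    return (prior_active + new_mqis
            - unsubscribes - unmatched_bounces - inactive_12mo)

# Example: active_database_rollforward(40_000, 2_500, 300, 450, 1_200)
# -> 40_550
```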

#2 - MQIs by Medium

This measures how each of your mediums is contributing to MQI growth. One of Adam Barker’s best practices is that each team member owns a metric, and these are key metrics to assign ownership of to team members. The most scalable MQI mediums to grow are Inbound (Website, Blog, Social Media) and Digital (Paid Search, Retargeting, Email). As a side note, I am at the point where I don’t even want to count content syndication leads as MQIs, because of the massive quality difference between syndication MQIs and those from inbound and digital channels.

#3 - % of MQLs that “graduate” from MQIs

This metric gives you a single measure of the performance of your MQI-to-MQL nurturing programs… how much are they contributing to your MQL production? A higher number indicates you are driving performance out of your active database, whereas a lower number indicates prospects are identifying themselves to you for the first time when they visit your website for a later stage call to action such as a free trial or contact sales – which signifies a missed opportunity to have more influence as they move through their buying process, or to cast a wider net. The best in class number for this percentage among mature demand generation organizations is 50%.

#4 - Of MQLs advancing from MQIs, what were the MQI Lead Sources?

Building on the concept from #3, the next question becomes: which sources are yielding the MQIs that, after nurturing, graduate to MQLs? This should help identify which sources to spend more time driving volume from, to scale your MQL numbers.

#5 - MQL to Opp Conversion Rate by MQL Source & Medium

As you scale MQLs you also need to keep an eye on quality, and a key quality metric is the conversion rate from MQL to Opportunity. Monitor these rates to ensure you don’t see any red flags. The most common red flags to watch for are quality issues within paid search (particularly the Google content network), and – if you are using scoring programs or content triggers to pass leads to sales – making sure the sales team has everything it needs to convert those MQLs to Opportunities.
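To close, here is an illustrative Python sketch of two of the computations above: the MQI-to-MQL graduation rate (#3) and the MQL-to-Opportunity conversion rate by source and medium (#5). The record fields are hypothetical:

```python
from collections import defaultdict

def graduation_rate(mqls):
    """Share of MQLs that graduated from nurtured MQIs (metric #3)."""
    if not mqls:
        return 0.0
    return sum(m["was_mqi_first"] for m in mqls) / len(mqls)

def mql_to_opp_rate(mqls):
    """MQL-to-Opportunity conversion rate by (source, medium) (metric #5)."""
    totals = defaultdict(int)
    opps = defaultdict(int)
    for m in mqls:
        key = (m["source"], m["medium"])
        totals[key] += 1
        opps[key] += bool(m["became_opportunity"])
    return {key: opps[key] / totals[key] for key in totals}
```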