Quick post. Rob McIntosh made a great comment on my recent post List Metrics; how to measure quality in a list? Instead of replying to the comment, I thought I'd write a quick post on this, as it is something I have strong feelings about.

Rob asked, "Who owns list validation?" He goes on to point out that sometimes great lists never get acted upon. Sometimes a recruiter will point to a sourcer and ask whether a list has been validated, and to what level: cross-referenced, email validation, phone validation, etc.

Here is the real problem: the lists that get stored and archived are simply structured wrong. The fix is to add validation fields to each record. Then it does not matter who owns validation, as long as the list is coded correctly and the list recipient's expectations are accurate.
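To make that concrete, here is a minimal sketch in Python of a list record that carries its own validation fields. The field names are made up for illustration, not Broadlook's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ListRecord:
    """One contact record that carries its own validation status."""
    name: str
    email: Optional[str] = None
    phone: Optional[str] = None
    # Validation fields travel with the record, so whoever receives the list
    # can see exactly what has been checked, when, and against what.
    email_validated: bool = False
    phone_validated: bool = False
    cross_referenced_sources: List[str] = field(default_factory=list)
    last_validated: Optional[date] = None
    score: Optional[float] = None  # filled in by a later scoring pass
```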

I’ve had years of experience pioneering this research at Broadlook Technologies. We look at validation from a statistical perspective. For example, Profiler, our flagship recruiting product, SCORES all contact data.

[Screenshot: scored contact data in Profiler]

Scoring the data allows the person using it to decide where to put their effort.

What are the source dates of the web pages someone was pulled from? Was the person cross-referenced on multiple sites? If the source is a resume, what is its date? Does the date on one resume board match the date on another? Are you saving both dates? What type of page was the information taken from?
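None of this is Profiler's actual logic, but here is a rough sketch of how those questions could be folded into a single score. The weights and page types are ones I picked for the example:

```python
from datetime import date
from typing import List

# Illustrative weights only; real weights would be tuned to the use case.
PAGE_TYPE_WEIGHTS = {
    "company_bio": 1.0,
    "resume_board": 0.8,
    "press_release": 0.6,
    "forum_post": 0.3,
}

def score_record(source_dates: List[date], page_types: List[str], today: date) -> float:
    """Fold source freshness, cross-referencing, and page type into one 0-100 score."""
    if not source_dates:
        return 0.0
    # Freshness: the newest source date matters most; lose roughly 10 points per year of age.
    age_years = (today - max(source_dates)).days / 365.25
    freshness = max(0.0, 50.0 - 10.0 * age_years)
    # Cross-referencing: each additional independent source adds confidence, capped at 30 points.
    cross_ref = min(30.0, 10.0 * (len(source_dates) - 1))
    # Page type: the most reliable page found determines the remaining 20 points.
    page = 20.0 * max((PAGE_TYPE_WEIGHTS.get(p, 0.5) for p in page_types), default=0.5)
    return round(freshness + cross_ref + page, 1)

# Example: a record cross-referenced on two pages, one of them a resume board.
print(score_record([date(2007, 6, 1), date(2008, 1, 15)], ["resume_board", "company_bio"], date(2008, 3, 1)))
```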

This makes me think that maybe Broadlook should break out the logic inside Profiler, enhance it, and create a product that simply scores list data. Why? Not all data is equal. After scoring the data, a recruiter would have much more insight into where to put their effort first.

I would like to hear from people on this one. There are implications in the recruiting software business, but definitely wider appeal in general B2B sales.

Example: What is the likelihood of someone from New York, NY moving to Boston vs. Milwaukee? That affects the score. How well can the recruiter receiving the list build rapport with a technical candidate? That affects the score. What is the track record of the internal recruitment staff at actually recruiting these candidates vs. an outside agency? (It would be interesting to test this.) Agency recruiters tend to be better; that is why they make the big bucks. This may be the first step toward an ROI study showing that corporations should do the sourcing themselves and hand the short list to a few select recruiters to work their magic.
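As a sketch of how those need-specific factors might feed the score, here is one way to scale a record's base score. The relocation likelihoods, rapport, and close-rate numbers are all invented for illustration:

```python
# Hypothetical adjustment factors; every number here is made up for illustration.
RELOCATION_LIKELIHOOD = {
    ("New York, NY", "Boston, MA"): 0.6,     # short, common move
    ("New York, NY", "Milwaukee, WI"): 0.2,  # far less common move
}

def adjust_score(base_score: float, candidate_city: str, job_city: str,
                 recruiter_rapport: float, team_close_rate: float) -> float:
    """Scale a record's base score by need-specific factors: relocation likelihood,
    the recruiter's rapport with this kind of candidate, and the recruiting
    team's historical close rate."""
    relocation = RELOCATION_LIKELIHOOD.get((candidate_city, job_city), 0.3)
    return round(base_score * relocation * recruiter_rapport * team_close_rate, 1)

# The same record scores higher for a Boston search than for a Milwaukee one.
print(adjust_score(85.0, "New York, NY", "Boston, MA", recruiter_rapport=0.9, team_close_rate=0.7))
print(adjust_score(85.0, "New York, NY", "Milwaukee, WI", recruiter_rapport=0.9, team_close_rate=0.7))
```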

Axiom: Regardless of all other variables, all records in a list should have a score.

Axiom: Validation level within a list should persist and be updated throughout the life of the list.

Axiom: Lists should be scored differently based on the need.
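To make the second axiom concrete, here is a small sketch, building on the hypothetical ListRecord above, of a re-validation pass that updates the record and refreshes its score rather than letting it go stale:

```python
from datetime import date

def revalidate(record: "ListRecord", email_ok: bool, phone_ok: bool, today: date) -> None:
    """Update validation fields in place and refresh the score, so the score
    stays current for the whole life of the list rather than going stale."""
    record.email_validated = email_ok
    record.phone_validated = phone_ok
    record.last_validated = today
    # Crude refresh: a confirmed email and a confirmed phone each add a fixed bonus, capped at 100.
    bonus = (10.0 if email_ok else 0.0) + (10.0 if phone_ok else 0.0)
    record.score = min(100.0, (record.score or 0.0) + bonus)
```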

Fun stuff. Thanks for making me think, Rob. I enjoyed the rant!
