The Era of Artificial Common Sense Won’t Be Here Anytime Soon

Artificial Intelligence is sometimes better than humans. And sometimes worse. It’s usually faster and cheaper, and that alone ensures it a very important place in our world, but not to the exclusion of human intelligence. The fact remains that almost all machine learning requires a human for a sanity check, at the very least.

In fact, the need for humans to augment the machine learning algorithm becomes stronger as the dataset's size and richness decline. For this smaller, “Artisanal Data,” one small bit of noise can distort the whole model. This happens because, for all our talk of the Age of Artificial Intelligence, nobody is talking about an Age of Artificial Common Sense. Nobody seriously claims that machines are even close.

This false “Human vs. Machine” dichotomy was illustrated to me the other day when I was watching online videos with my car-obsessed elementary school kids. We first watched a video from Tesla, bragging that their cars are made by robots and showing off their high-tech factory. Then we watched one from Lamborghini, bragging that their cars are made by humans and showing off their low-tech factory. But as we watched, we realized that Tesla still has a few steps that are done by humans, and Lamborghini has plenty of car parts that are made by machines. The truth is, technology-augmented humans are the most productive of all.

It is this very important common sense component that makes it so crucial to have a human in the loop. The Lamborghini website asks for your date of birth when you sign up for their newsletter, in addition to your name and email address. I’d hazard a guess that some wise Lamborghini marketer realized that a fair number of newsletter sign-ups come from young visitors, like my sons, whose interest could politely be described as “aspirational,” and that age serves as a proxy, albeit an imperfect one, for filtering out these people. Note that a machine learning algorithm would never have discovered this segmentation if the question hadn’t already been asked; it took a human to add it. As a control, I looked at the Ford website: it asks nothing about age, because it’s not necessary to segment conversions on that field.

Humans also play a vital role in making sure the algorithm is fed representative data. There are several classic pitfalls that only a human can mitigate:

Using all available data, however poorly it represents the target problem

Often, it is tempting to just throw whatever data is available at the problem, without really thinking about whether it represents the population you are trying to predict. If the training data is skewed, the model will tend to do well in training and poorly in the real world.
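One lightweight guard, sketched below in Python with made-up age bands and counts, is to compare how a key segment is distributed in the training data versus in the population the model will actually score. A large gap is the human's cue to resample or rebalance before training.

```python
import pandas as pd

# Hypothetical age bands: the training data skews young, the real
# population the model will score does not.
training = pd.Series(["18-24"] * 700 + ["25-44"] * 200 + ["45+"] * 100, name="age_band")
population = pd.Series(["18-24"] * 300 + ["25-44"] * 400 + ["45+"] * 300, name="age_band")

# Side-by-side shares; a human reviews the gaps and decides whether to
# rebalance, resample, or collect more data.
comparison = pd.DataFrame({
    "training_share": training.value_counts(normalize=True),
    "population_share": population.value_counts(normalize=True),
})
print(comparison.round(2))
```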

Adding costs to the prediction errors

Without tuning, a model will weight a false positive the same as a false negative. In some cases, this might be the desired behavior. But in others, the weights are actually vital to getting the model to predict well. For example, consider the difference in cost between not sending direct mail to someone who would have converted and sending it to someone who will never convert. A human must supply these costs and tune the model to them.
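As a rough illustration, here is a minimal sketch using scikit-learn's class_weight parameter; the 5:1 cost ratio and the synthetic data are assumptions standing in for the marketer's real economics.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical, imbalanced direct-mail data: 1 = would convert, 0 = would not.
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)

# The human supplies the economics: here we assume that missing a converter
# (false negative) costs roughly five times as much as mailing someone who
# never converts (false positive). The ratio is illustrative only.
model = LogisticRegression(class_weight={0: 1.0, 1: 5.0}, max_iter=1000)
model.fit(X, y)

# Predicted conversion probabilities for the first few prospects.
print(model.predict_proba(X[:5]).round(3))
```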

Sanity checking the model predictions

It is vital that a human check the model predictions to ensure they make sense, and investigate those which feel incorrect. A model that contradicts the expert’s common sense is probably wrong. Skipping this step is what leads to anomalies like paperbacks priced in the millions on Amazon.
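One simple guardrail, sketched below with invented numbers, is to hold back any prediction that falls outside a plausible range defined by a domain expert, so a person reviews it before it goes live.

```python
# Hypothetical predicted paperback prices, one of them clearly absurd.
predicted_prices = [12.99, 18.50, 2_000_000.00, 9.99]

# The plausible range comes from a domain expert, not from the model.
PLAUSIBLE_MIN, PLAUSIBLE_MAX = 1.00, 500.00

for price in predicted_prices:
    if not PLAUSIBLE_MIN <= price <= PLAUSIBLE_MAX:
        print(f"Hold for human review: predicted price ${price:,.2f}")
```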

Feature creation relies on humans

Just as the Lamborghini marketer used human knowledge to add the date of birth field, marketers must look at the raw data and intuit other meaningful ways to slice-and-dice the features to get more insight.  Perhaps a zip code field should be generalized to add a state field to the model. Perhaps the zip code should be joined with census data to get demographic distributions. Perhaps “distance” to the nearest store or the nearest competitor should be added to the model.
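A rough sketch of that kind of slicing in pandas follows; the column names, zip prefixes, and census figures are all invented for illustration.

```python
import pandas as pd

# Hypothetical prospect table keyed by zip code.
prospects = pd.DataFrame({
    "zip_code": ["02139", "94105", "60614"],
    "converted": [1, 0, 1],
})

# Feature 1: generalize zip code to a state via a human-curated lookup
# (a real mapping would cover every prefix, not just these three).
zip_prefix_to_state = {"021": "MA", "941": "CA", "606": "IL"}
prospects["state"] = prospects["zip_code"].str[:3].map(zip_prefix_to_state)

# Feature 2: join zip-level census demographics, assuming such a table
# has been prepared separately.
census = pd.DataFrame({
    "zip_code": ["02139", "94105", "60614"],
    "median_income": [89000, 125000, 98000],
})
prospects = prospects.merge(census, on="zip_code", how="left")

print(prospects)
```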

Data cleansing

Removing noisy data from the dataset, e.g. records of people too young to hold a driver’s license, is the job of the human and the machine together. The machine can find outliers and noise using algorithmic methods, but it’s up to the human to sanity check the removals and ensure the data removed really is noise and not just unusual. It’s also up to the marketer to decide how issues like missing values should be treated, and to define what constitutes valid data.
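Here is a minimal sketch of that division of labor, using invented sign-up records: the machine flags statistical outliers for review, while the human encodes the domain rules, such as a minimum driving age, and decides that missing ages are excluded rather than imputed.

```python
import pandas as pd

# Hypothetical sign-up records; ages 8 and 300 are suspect for different reasons.
signups = pd.DataFrame({"age": [8.0, 17.0, 25.0, 42.0, 25.0, 300.0, None]})

# Machine side: flag statistical outliers (a simple 2-standard-deviation
# cutoff here) for a person to review rather than deleting them blindly.
mean, std = signups["age"].mean(), signups["age"].std()
signups["outlier_flag"] = (signups["age"] - mean).abs() > 2 * std

# Human side: domain rules say a driver is at least 16 and at most 100,
# and rows with missing age are excluded rather than imputed.
valid = signups[signups["age"].between(16, 100)]

print(signups)
print(valid)
```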

Of course, beauty comes when the human augments the machine and the machine augments the human. The human adds the date of birth field and the rules surrounding valid ages. This is a perfect use of the human’s common sense and expert knowledge. The machine then applies these rules. And then, given a clean dataset, the machine can uncover valuable insights about the relationship of age to conversion, which the human marketer then uses to target her message. Take the human or the machine out of this iterative process, and neither works as well.

At my company, WEVO, we understand this synergy. When decoding a page, we rely on machines to analyze it in precise ways and present it in a form suited to a human, and we use machine learning to help us generate insights about the page. But we also leverage humans to inject the common sense. And the result is much greater than the sum of the parts.
