From the 1990s to today: A brief overview of artificial intelligence in society
Artificial intelligence used to be the province of scientists and fantasists, but in recent years a revolution in software, hardware, and networks has made AI – and specifically machine learning – an industrial commodity and a consumer phenomenon.
Whereas the AI of the 1990s brought us search engines and ad networks, the learning machines of today bring us everything from self-driving cars to programs that detect skin cancer in photographs. Fortunes are being made. New dynasties of technology firms are being founded. Some important people (including our CEO, Marc Benioff) are calling it the “Fourth Industrial Revolution.” It’s an exciting time.
You may wonder: how can artificial intelligence and machine learning be used to further social good?
AI and the Internet: A Personal Journey
I’ve been working on the Internet for a long time, and I’d argue that artificial intelligence has been with us just as long. I worked on machine vision projects circa 1997, and on spam fighting with Bayesian statistics in 2004. In 2007, I went to graduate school and learned that these were only part of a much larger toolkit that includes neural nets, statistical methods, and one of my favorite techniques, genetic algorithms.
Here’s what computer vision was like in the 1990s:
This “flythrough” screen is from a predecessor project to the computer vision work that Phil Nadeau did with Bell Labs.
This was run on Sun Microsystems hardware, which was a big deal in 1994.
This is the Dialog Table. It was installed in the Walker Art Center in Minneapolis. It was the successor to Phil Nadeau’s team’s machine vision system at Bell Labs, and this version had a tiny bit of Phil’s code in it.
However, even with a Master of Science, AI was not a marketable skill. Real AI was something that PhDs were doing in laboratories; dot-coms were flocking to mobile and refining the monetization of social media. I looked for jobs focused on artificial intelligence or machine learning, but the closest I could come was search engineering – a kind of AI, but one we barely notice. We’ve forgotten how miraculous it is to just type a few words into a box and get the ten most relevant results in front of us in the blink of an eye.
Things changed in 2014, when Salesforce brought me on as one of the first two hires on the Security Analytics team. This new project uses statistics and machine learning to protect our users from fraud and abuse. Salesforce started this initiative because online attacks are changing all the time. Security professionals operate at human speeds, and in the time it takes to talk about and detect an attack, thousands of users may be affected. It takes automation – even rudimentary AI – to keep up with the detection workload.
Artificial Intelligence: A Brief History
Did you know that the circuitry that powers sophisticated 3D video games is now used for AI algorithms? Three-dimensional graphics are built on vast amounts of linear algebra – sequences of multiplications punctuated by periodic summing of the results. Early 3D gaming did all of these operations in software, and it consumed most of a PC’s processing power.
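The multiply-then-add pattern is easy to see in code. Here’s a minimal, illustrative Python sketch of transforming a 3D point – the kind of operation a GPU performs in hardware millions of times per frame (the function and variable names are my own, for illustration only):

```python
def transform(matrix, point):
    """Apply a 3x3 transform to a 3D point: for each row,
    multiply element-by-element, then add up the results."""
    return [sum(m * p for m, p in zip(row, point)) for row in matrix]

# A 90-degree rotation around the z-axis as a 3x3 matrix.
rotate_z_90 = [
    [0, -1, 0],
    [1,  0, 0],
    [0,  0, 1],
]

# A point on the x-axis rotates onto the y-axis.
print(transform(rotate_z_90, [1, 0, 0]))  # [0, 1, 0]
```

A software renderer runs loops like this on the CPU for every vertex in every frame; a GPU does the same arithmetic in parallel, in silicon.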
Expensive industrial-grade workstations, like the legendary Silicon Graphics machines, used hardware rendering engines (also called Graphics Processing Units, or GPUs) that gave them capabilities beyond those of the common PC. In the late 1990s, rendering engines moved out of the workstation and into the PC. You can easily see the difference by comparing the pure-software visuals of the game “Quake” to its sequel “Quake II,” a leader in the first wave of PC games that supported hardware acceleration.
Many of today’s breakthroughs in machine learning are based on neural nets. This is a kind of computing inspired by the highly connected electrochemical networks of brain tissue. Mathematically, each artificial neuron is a simple thing – a sequence of multiplications, followed by a summation. It turns out that networks of artificial neurons are very powerful. Just a few layers can approximate any other mathematical function; recurrent neural nets, where later layers feed back into earlier ones, can approximate the function of nearly any other computer program. Breakthroughs circa 2006 vastly improved the training of deep neural networks, but they were still very expensive to run, just like the software-based renderers of early 3D games.
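That multiply-then-sum structure makes a single artificial neuron surprisingly easy to write down. A minimal sketch in Python (the weights and bias here are arbitrary illustrative values, not trained ones):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: multiply each input by its weight,
    sum the products, add a bias, then squash the total into the
    range (0, 1) with a sigmoid activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs, arbitrary weights; the output is a value in (0, 1).
print(neuron([1.0, 0.0], [2.0, -1.0], 0.5))  # ~0.92
```

A network is just layers of such neurons, the outputs of one layer feeding the inputs of the next; training is the search for the weights and biases that make the whole network compute the function you want. Note that the core arithmetic is the same multiply-accumulate loop as in 3D rendering.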
It wasn’t long until someone realized that neural nets and 3D rendering share the same fundamental operation, and figured out how to run neural nets in hardware using GPUs – and the difference it has made in AI is just as profound as the difference between the software engine in Quake and the hardware-accelerated Quake II. Suddenly, machine learning was out of the lab and all over the news. A GPU cluster could chew through terabytes of data – even video and photo data – in days or hours. Cars were driving themselves and Facebook was finding your face in your family photos.
Increasing Investments in AI
As this article on AI notes, there has been a 7x increase in investment in AI since 2012.
Salesforce invested heavily in the new phenomenon. We expanded our analytics APIs to provide new statistical capabilities. We spun up a new engineering team to provide traditional AI – sometimes called “Good Old-Fashioned AI” – as part of the platform. We bought MetaMind to provide natural language and machine vision capabilities via deep learning. We gave it all a name and a face – Einstein, with its mascot, the Professor – and we debuted it to the public at Dreamforce in 2016. There were a lot of impressive demos, including lead scoring, to predict which leads were likely to convert in the near future; automatic categorization of messages to speed customer service requests; and even predicting when electrical system components were approaching failure based on sensor data.
I did not see much, at the time, about AI helping with philanthropy, or nonprofits, or activism, or even governance. And maybe that was to be expected. Salesforce is, after all, a for-profit corporation selling CRM tools to other for-profit corporations. We do, however, have stated values – among them Trust, Equality, and Giving Back.
That’s where Salesforce.org comes in. Everyone who wants to change the world should have the tools and technology to do so. Salesforce.org gets our technology into the hands of nonprofits and educational institutions so they can connect with others and do more good. As a social enterprise, the more missions our technology supports, the more we invest back into technology and communities. In addition to serving nonprofit, higher ed, and K-12 customers, Salesforce.org also provides grants, including a popular employee-driven matching program within Salesforce.
Last but not least, Salesforce.org has an annual Technology and Products Fellowship, through which the dot-com side of the company loans an experienced employee to work for the nonprofit side of things for a twelve-month period. A few months after Einstein’s thunderous launch at Dreamforce 2016, Salesforce.org began soliciting applications for the next Technology Fellow.
AI for Good: A Brief Overview
Here I saw a chance to ask whether AI could work in the public interest. I became a Technology Fellow, and in the last year, I’ve read dozens of articles about AI, machine learning, and data science, both in the abstract and as applied to for-profit, non-profit, and government projects. I’m starting to see more and more for-good examples like this algorithmic approach to helping refugees find jobs. I’ve spoken to dozens of people from Salesforce, Salesforce.org, our customers, and our partners. We even presented an AI for Good panel at Dreamforce 2017.
We’ve discovered so much. Multiple organizations, including Data Analysts for Social Good and the Allen Institute for Artificial Intelligence, are working on public-interest data science projects. Organizations like AI Now and AI4ALL are working to make sure both that AI is applied in a way that benefits the public, and that AI education is unbiased and accessible to everyone. Prominent corporations and researchers founded the Partnership on AI to establish best practices and responsible use policies for AI in the for-profit space. International agencies, like the International Telecommunication Union, are holding summits with senior government representatives to work out responses to AI at a grand scale. Finally, the 2017 Beneficial AI Conference developed the Asilomar AI Principles, which my colleague Cheryl Porro recently signed and discussed in her blog. This list of 23 principles guides the developers of AI, not the AI itself, so it’s nothing like Asimov’s Laws of Robotics. However, it’s at least symbolically important, and I’m sure it will increasingly be a practical matter as well.
Nearly all of this activity is just in the last two years, and it’s been an eventful time. There’s been plenty of discussion about spambots influencing public opinion in America and whether AI can solve the problem. Financial technology (or fintech) and legislative technology are now real phenomena, and are having an increasing (and increasingly inexplicable) effect on our lives. Several enormous data breaches at organizations that, arguably, should have known better, only increase the urgency of the situation.
We don’t just need AI for nonprofits as a science project. We need a concerted effort to make sure that AI in business, government, and philanthropy is used responsibly and with a healthy respect for all stakeholders. The biggest and broadest AI projects will, of necessity, have to take the interests of the entire world into account.
About the Author
Phil Nadeau is a lead member of the technical staff at Salesforce. In 2017, he was a Salesforce.org Technology Fellow in Artificial Intelligence. He started using Linux 25 years ago and has been working in software development for almost as long. Phil has written tens of thousands of lines of code with the LAMP stack (Linux, Apache, MySQL, Perl), Java, C, and a variety of other languages. In 2012 he graduated from Western Washington University with a Master of Science. He enjoys helping make sense of the Internet, using Java, Scala, Spark, and Python primarily for his work in search engineering. One of the highlights of his career was as a programmer on a machine vision experiment at Bell Laboratories for controlling video games using a motion capture system made from vintage Silicon Graphics workstations and old analog video cameras. For previous work, see Phil’s posts Why You Should Care about AI and Where Did AI Come From.