AI for Good: Principles I Believe In

By Salesforce.org January 16, 2018

Cheryl Porro, SVP of Technology and Products at Salesforce.org, shares her thoughts on how Salesforce for nonprofits aligns with a vision of AI for good.

In movies that feature artificial intelligence, more often than not, AI is the villain. It’s a common trope: an experiment meant to advance the human race soon gives rise to an evil AI menace bent on wiping us all out. But here in the real world, you might be relieved to know that scientists, researchers, business leaders, and nonprofits are working hard to develop ethical approaches to AI.

There is a lot of progress being made to define the key principles that will shape what we call “ethical AI.” For example, at the 2017 Asilomar Conference on Beneficial AI, dozens of thought leaders from fields like economics, law, and ethics came together to standardize principles that will help shape the development of AI for good.

If we take a step back and consider the challenges we face as a society today, AI holds great promise — but only if we build it and use it in a way that’s beneficial for all.

I believe there are five main principles that can help us achieve beneficial AI:

1. Being of benefit: AI technologies should benefit, empower, and create shared prosperity for as many people as possible. Investments in AI should be accompanied by funding for research to ensure AI’s beneficial use and to tackle thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s purpose and vocations?
  • How can we update our legal systems to be more fair and efficient in order to manage the risks associated with AI?
  • What set of values should AI be aligned with to maintain beneficial legal and ethical outcomes?

2. Human value alignment: AI systems should be designed so that their goals and behaviors align with human values. Specifically, they should be designed and operated to remain compatible with human ideals like dignity, rights, freedoms, and cultural diversity. The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends. Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all of humanity rather than one state or organization. Given AI systems’ ability to analyze and utilize our data, people should have the right to access, manage, and control the data they generate. The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

3. Open debate between science and policy: There should be constructive and healthy exchange between AI researchers and policymakers.

4. Cooperation, trust, and transparency in systems and among the AI community: Researchers and developers of AI should cooperate for the benefit of all. If an AI system causes harm, it should be possible to ascertain why. Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

5. Safety and responsibility: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible. Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with an implied responsibility and opportunity to shape those implications. Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures. Humans should choose how and whether to delegate decisions to AI systems, in order to accomplish human-chosen objectives.

You’re probably thinking: Wow, this is starting to sound really out there. How does this relate to my day-to-day work?

I believe that we each have a responsibility to improve the world around us. From the moment he founded Salesforce in 1999, Marc Benioff has championed the idea that a company can have a purpose beyond profit. Since then he has continued to reinforce the trailblazing belief that the business of business is to improve the state of the world.

That belief has become part of our company’s DNA, along with a variety of other values that guide us along our own journey into the realm of AI and other emerging technologies.

Our nine values are:
1. Trust
2. Customer Success
3. Growth
4. Innovation
5. Giving Back
6. Equality for All
7. Wellbeing
8. Transparency
9. Fun

When it comes to developing tech, especially AI, it’s imperative that we keep values like these in our hearts and minds.

I’ve added my name to the list of researchers and citizen changemakers who are committed to AI for Good. I hope you’ll join me in exploring how we can use technology for the benefit of all.

I’ll leave you with a quote from someone I greatly admire, Dr. Vivienne Ming, who works with AI for healthcare and social justice. You can watch her Dreamforce Dreamtalk, in which she discusses how AI and machine learning are being used for all kinds of good, from precision farming to treating Alzheimer’s, autism, and diabetes.

“Technology must always challenge us. When we turn it off, we should be better than when we turn it on.” – Dr. Vivienne Ming, Managing Partner, Socos