Why Working with AI in Higher Ed is Like Cooking

By Suzanne Yuen | June 3, 2019 | AI for Good, Education Empowered Podcast, Higher Education

While the algorithms behind AI can be complex, using AI effectively is a lot like cooking. You have to know what outcome you want (what kind of dish you want to make), you need some raw ingredients, some tools to put your dish together, and a way to present your results. But what does this mean for higher ed? As we have done previously in an Einstein Prediction Builder recipe book, we take the analogy of cooking and expand it to AI with the following steps:

  1. Understand Your Purpose (a.k.a. Your Use Case)
  2. Think of Data As Your Raw Ingredients
  3. Imagine Algorithms as Cooking Tools to Create Your Dish
  4. Utilize a UX Layer as Your Presentation (Setting the Table)

I was recently a guest on the 3rd episode of Education Empowered, the Salesforce.org podcast about education, where I spoke about “the ingredients of AI” and what it takes on both a technical and human level to create effective and ethical AI. You can listen to the full episode here, which includes a discussion with hosts Jason Belland and Haley Gould about how better student data can help solve challenges with student loan debt and degree continuity.

Understand Your Purpose (a.k.a. Your Use Case)

Before you start cooking, it’s important to understand the final purpose and product that you want to create. Are these special occasion cupcakes for your niece’s birthday? Is this Grandma’s “Famous” Lasagna that’s been passed down for generations, something the family loves and cherishes? Or is it a quick stir-fry on a weeknight?

In terms of AI for higher ed, when we start considering various use cases, it’s important to understand the variety of AI applications. Here are a few:

  • Are we trying to create personalized education journeys that are perfectly tailored to student learning styles, strengths, and interests?
  • Are we trying to automate legacy back-office processes so that our faculty and staff can be freed up to help more students at scale?
  • Are we implementing chatbots to actively engage with students so that they can find what they need quickly?

These are just a few examples of how AI can be used in higher ed. Like cooking different dishes, each of these use cases requires different data, algorithms, and UX components.

Suzanne Yuen, a data scientist, in the kitchen

Think of Data As Your Raw Ingredients

After you’ve selected your use case (your recipe), next you need to gather the right ingredients: your data. Just as with a recipe, even when you know which ingredients to use, there are still some other considerations. Three things to consider are your data sources, data quantities, and data freshness.

An image describing how to know if your data are biased.

Considerations to examine data bias. Image courtesy of Kathy Baxter in her article “Dirty data or biased data?”


Sourcing – how did/do we get our data?

The source of the data matters, especially in terms of privacy, security, ethical, or legal concerns. This is particularly pertinent for financial, insurance, health, education (FERPA!) and other regulated industries. It’s also important to be ethical in how you get your data. Just as you wouldn’t want to take lettuce from your neighbor’s garden without asking them first, don’t use data that was acquired deceptively. Make sure you’re following relevant regulations in the country you work in, and respect your constituents’ privacy. Depending on the use case, we may also evaluate and build data pipelines to continue getting data as our process evolves.

Quantities – how much data?

While a recipe may call for a cup or two of one ingredient and a dash of another, AI is a little bit more nuanced. The quality of the data matters a lot. The general rule of thumb is that the more examples the algorithm can learn from (10,000 to millions), the better. However, high-quality, clean, unbiased data that appropriately and statistically represents your population is best.
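As an illustrative sketch (not part of any actual product), a quick class-balance check can reveal whether a dataset underrepresents one outcome — here `labels` is a hypothetical list of student persistence outcomes:

```python
from collections import Counter

def class_balance(labels):
    """Return the share of each label in a dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical persistence labels: 1 = persisted, 0 = did not
labels = [1] * 900 + [0] * 100
print(class_balance(labels))  # {1: 0.9, 0: 0.1}
```

A heavy skew like this one (90/10) is a signal to re-sample, re-weight, or gather more data before trusting the model on the minority group.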

The purpose or use case matters a lot. If we expect that the AI may have errors, especially in the beginning, it’s important to consider “What are the risks of false negatives? Or false positives?”
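To make those questions concrete, here is a minimal sketch (with made-up predictions and outcomes) that tallies false positives and false negatives separately, so each kind of error can be weighed against its real-world cost:

```python
def confusion_counts(actual, predicted):
    """Tally true/false positives and negatives for binary labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn}

# Hypothetical: did a student persist (1) or not (0)?
actual    = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]
print(confusion_counts(actual, predicted))  # {'tp': 2, 'tn': 2, 'fp': 1, 'fn': 1}
```

In a persistence use case, a false negative may mean a student who needed outreach was missed, while a false positive may mean an unnecessary advising call — very different risks.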

Freshness

Over time, data and AI models may become stale, so building data pipelines, refreshing, and monitoring your models is important. Obviously, if your data is changing daily, you’ll want to refresh more often than data that only changes quarterly or annually. The great thing about being on a platform such as Salesforce is that a lot of the pipeline and refresh work is managed for you!
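A simple staleness check — purely illustrative, with a hypothetical `last_refresh` timestamp and thresholds — might look like:

```python
from datetime import datetime, timedelta

def needs_refresh(last_refresh, max_age_days):
    """Flag a dataset or model whose last refresh is older than the allowed age."""
    return datetime.now() - last_refresh > timedelta(days=max_age_days)

# Daily-changing data gets a tight window; quarterly data a looser one.
last_refresh = datetime.now() - timedelta(days=10)
print(needs_refresh(last_refresh, max_age_days=1))   # True: stale for daily data
print(needs_refresh(last_refresh, max_age_days=90))  # False: still fresh for quarterly data
```

In practice a monitoring job would run a check like this on a schedule and trigger a pipeline refresh or model retrain when it fires.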

Imagine Algorithms as Cooking Tools to Create Your Dish

After you’ve picked your use case and your data, selecting the appropriate algorithm is key. There is a plethora of algorithms and approaches. While some algorithms may help label your friends’ faces in social media photos, suggest the next thing to buy online, or recommend what movie to watch, there are also algorithms for predicting persistence, detecting cancer, and many more applications.

Just like the difference in complexity between a strawberry soufflé and a pancake, there are “more complex” algorithms and simpler ones. Generally speaking, the “simpler” algorithms tend to be more explainable (also called interpretable), while some of the complex ones, such as deep neural nets, behave like “black boxes.”
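As a sketch of what “explainable” means in practice (the feature names and weights below are hypothetical, not from any real model), a simple linear model lets you read off exactly how much each feature contributed to a score — a breakdown a deep neural net cannot offer:

```python
def explain_linear_score(features, weights):
    """Break a linear model's score into per-feature contributions."""
    contributions = {name: features[name] * weights[name] for name in weights}
    return contributions, sum(contributions.values())

# Hypothetical persistence features (already normalized) and learned weights
features = {"gpa": 0.8, "attendance": 0.9, "credits_attempted": 0.5}
weights  = {"gpa": 2.0, "attendance": 1.5, "credits_attempted": 1.0}
contribs, score = explain_linear_score(features, weights)
print(contribs)  # each feature's share of the score is visible
print(score)
```

An advisor can look at the contributions and see that, say, attendance drove the prediction — which is exactly the kind of transparency a black-box model gives up.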

Utilize a UX Layer as Your Presentation (Setting the Table)

Like frosting a cake, or plating broccoli stir fry, presentation matters. The user experience, or presentation of results, is how our human senses interact with AI. Best practices, risks, and potential for biases vary depending on the ultimate application.

For example, chatbots that are too intrusive and not very helpful run the risk of becoming obsolete.

If we present too much information, we risk overwhelming people and running into Automation Bias, which is the propensity for humans to favor suggestions from automated decision-making systems, even when those systems are incorrect.


Assistive AI can help guide decisions, but not take over.

The perfect balance of this is Assistive AI, which is where the AI guides you towards your desired goal. It can offer suggestions, but will not overrule you.

Google Maps is a great example of Assistive AI. When you want a route to a destination, Maps gives you several options. It can even give you more information if you wish: perhaps there is an accident on one route, a hefty toll on another, or a parade on Main Street. You need not know the specific algorithms behind the suggestions, but the details are accessible to you if you want them.

If you deviate from a route, Maps will reroute you towards the direction you want to go. It does not penalize you, yell at you, or drive the car for you. Its sole purpose is to help you. If we continue to go down the path of Assistive AIs, we don’t need to worry about machines taking over. We can keep AI going in a helpful direction by remembering it is just a tool like any other, and it is our responsibility to use it for good.

LISTEN TO THE PODCAST



Suzanne Yuen

About the Author

Suzanne Yuen is the Director of Data Science at Salesforce.org. She focuses on helping nonprofits and educational institutions level up with AI, machine learning, and other data science practices. She has been a data practitioner for nearly a decade at various companies including VISA, Walt Disney, Williams Sonoma, and a smaller startup. She founded WomenforData.org, which focuses on getting more women into the professional data space.