“I can help you with that request right away,” say the AI-powered assistants used by many tech-savvy businesspeople nowadays. This army of virtual assistants is seemingly perfect, missing all the quirks and flaws that make humans, well, human (read: the need to break for food or the bathroom, and emotions that can let a temper flare). They offer a pathway to efficiency and success.
But have you noticed that many of the most popular AI tools, from voice-powered assistants like Siri and Alexa to virtual secretaries, present themselves by default as female? With their high-pitched voices and feminine names, these tools are slightly subservient and certainly obedient. They are built to be bossed around.
Sure, Alexa can also be activated by saying “Amazon” or “Echo.” But the overall tendency to skew AI assistants female points to a larger problem for the technology at large. As AI rapidly progresses toward becoming more human-like (one list ranks AI assistants on their level of humanity and character flaws, noting some that “love jokes” or “will also quote Shakespeare”), are we building these tools to hold the same biases that humans sometimes hold?
Kriti Sharma, VP of bots and artificial intelligence at cloud accounting software company Sage, is concerned about this very subject. She built an autonomous chatbot that helps people in over 100 countries manage their expenses, finances and budgets. But Sharma knew that with the power of overseeing such a tool came a responsibility to its thousands of users.
“Whenever you’re using any kind of technology, it can be used for either good purposes or bad. AI is a very powerful technology, because it also has machine learning and self-learning. With that learning ability, it can very quickly evolve into something quite negative if it is not designed correctly,” says Sharma.
To guide her company’s development and use of AI, and to call on other leaders in the tech space to share this responsibility, Sharma created five “Core Principles” for developing AI.
The first core principle addresses diversity and bias, or more specifically, inclusion. In practice, that means not perpetuating gender stereotypes.
“We often see, in the world of AI, a lot of human-like stereotypes being reinforced. We see a lot of female assistants that are switching lights on and off, scheduling meetings, booking taxis: typical AI digital-assistant tasks. But then there are banker bots, or lawyer bots, built to be male. There is not a very diverse workforce, unfortunately, in the world of AI,” says Sharma.
“AI technologists want to create AI that mimics the world of humanity and follows those roles exactly. Unfortunately, there is also a lot of bias in the data itself, and that bias gets into the algorithms in the AI system. The machines are only as smart as the data fed into them. That data, in many cases, is quite biased.”
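To make that point concrete, here is a toy sketch, not Sage’s system, of how skewed training data turns directly into skewed model behavior. The data and the predictor are invented purely for illustration:

```python
# A deliberately biased toy "training set" of role/gender pairs,
# echoing the stereotypes Sharma describes (hypothetical data).
from collections import Counter, defaultdict

training_data = [
    ("assistant", "female"), ("assistant", "female"), ("assistant", "female"),
    ("banker", "male"), ("banker", "male"),
    ("lawyer", "male"), ("lawyer", "male"), ("lawyer", "female"),
]

# Count how often each gender appears with each role.
counts = defaultdict(Counter)
for role, gender in training_data:
    counts[role][gender] += 1

def predict(role: str) -> str:
    # The "model" is only as smart as its data: it returns whichever
    # gender dominated the examples it saw for this role.
    return counts[role].most_common(1)[0][0]

print(predict("assistant"))  # -> "female", because the data skewed that way
print(predict("banker"))     # -> "male"
```

The predictor never chose to be biased; it simply inherited the imbalance in its inputs, which is exactly how bias in data becomes bias in algorithms.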
This speaks to a larger conversation about the role of AI as it becomes more integrated into human life. The Turing test, often described as the ultimate test of computer intelligence, aims to judge whether a bot can pass as a human. The Asilomar AI Principles address some of the ethics and values that AI designers should consider, along with long-term cautionary guidelines.
In fact, Sharma says there is no need for AI to have a gender at all.
“Software does not need to have a gender. There is no need for bots to pretend to be human or human-like. They can have a character or personality — the Sage bot has British accounting humor,” she says.
“Using the Turing test to judge if AI is advanced enough, I think that is really obsolete. Why do you need to create AI that can pretend to be human and get away with it? We need AI that helps people. It doesn’t need to have a gender. Bots are just bots. They don’t need to pretend to be human.”
As chatbots and virtual tools grow smarter, there’s also a need for transparency. Humans need to know when they’re speaking to a bot, especially in sensitive industries like finance or healthcare. They need to know when it’s time to call in a human.
“The real state of AI is we are not at a point where AI can 100 percent serve humans without any intervention. There are a lot of scenarios where AI can do preliminary analysis and then pass on to a human, or it can troubleshoot some basic problems but then has to hand things over to someone else,” says Sharma.
“The best technique that I’ve realized is a bot should introduce itself as a bot very clearly when you first start chatting with it. It should say, ‘Hi, I’m a chatbot. I’m here to help you with your finances. These are the things I can do.’ Then if you need to connect with an advisor, it needs to very clearly say, ‘Now I’m inviting a human into our conversation.’”
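A minimal sketch of that disclosure-and-handoff pattern might look like the following. The greeting text, capability list, and escalation rule here are hypothetical, not Sage’s implementation:

```python
# Hypothetical sketch of the disclosure-and-handoff pattern Sharma describes.
CAPABILITIES = ["track expenses", "summarize spending", "answer basic finance questions"]

def greet() -> str:
    # Rule 1: the bot identifies itself as a bot before anything else,
    # and states up front what it can do.
    return ("Hi, I'm a chatbot. I'm here to help you with your finances. "
            "Here is what I can do: " + ", ".join(CAPABILITIES) + ".")

def needs_human(message: str) -> bool:
    # Placeholder escalation rule; a real system would use intent
    # classification or model-confidence thresholds instead.
    return any(word in message.lower() for word in ("advisor", "complex", "human"))

def respond(message: str) -> str:
    # Rule 2: announce explicitly when a person joins the conversation.
    if needs_human(message):
        return "Now I'm inviting a human into our conversation."
    return "Sure, I can help with that."

if __name__ == "__main__":
    print(greet())
    print(respond("I need an advisor for a complex tax question."))
```

The design choice is transparency at both boundaries: the user is told they are talking to a bot at the start, and told again the moment a human takes over.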
So what about the fear that AI will work us all out of a job? One study made a lot of noise when it claimed that half of millennials hold jobs that could be replaced by automation.
Sharma is less concerned by this fear. “We’re still 10 years away from artificial general intelligence that would be human-level intelligence,” she says. The jobs that AI is taking over, she adds, are repetitive, thoughtless tasks, ones that are “probably not a good fit for humans.”
And AI has the power to create jobs, particularly in emerging fields like human-computer interaction, AI design, and robotics.
“Overall, if designed correctly, AI gives us an opportunity to create a better world than we have for humans today. AI can learn exponentially faster and can help multiple people at the same time. So it is an opportunity to provide personalized services to people who didn’t have them before — for example, healthcare, education and what we are doing with financial advice to business owners around the world.”
The full set of Sage’s “Ethics of Code: Developing AI for Business with Five Core Principles” is as follows:
1. AI should reflect the diversity of the users it serves.
2. AI must be held to account — and so must users.
3. Reward AI for ‘showing its workings’.
4. AI should level the playing field.
5. AI will replace, but it must also create.