A quick lesson in AI ethics for leaders


We are all navigating a workplace rapidly evolving due to emerging technology, but deciphering AI ethics doesn’t have to be scary or overly technical. At its core, ethics is about people, and AI ethics is about how people use this new technology.

How do leaders build frameworks that reduce the chances of AI working against people instead of with them? We’ve got a quick lesson with practical recommendations for policies and implementation strategies.

How are people using AI in the workplace?

Developing good AI ethics for leaders has to start with an understanding of how AI is already being used in the workplace.

Most employees are already exposed to AI in the modern office, and a growing number seek it out to improve their daily work (for example, by using generative AI like ChatGPT). AI affects us in more passive ways, too. Have you ever clicked into suggested posts on social media? Applied for a home loan? AI algorithms quietly shape the options we choose from, and we don’t always realize how those AI-informed options affect us.

If you’ve applied to a job in the last five years, chances are artificial intelligence made a decision about you somewhere along the pipeline. Were you aware when it happened? The most common example is the Applicant Tracking System (ATS), which uses AI to streamline recruitment by scanning thousands of resumes and narrowing the field of candidates based on scoring metrics typically set by HR departments. This means a phrase missing from your resume or cover letter could land you in the rejection pile, while the right keywords could catapult you into an interview.
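
To make this concrete, here’s a deliberately simplified sketch of keyword-weighted scoring. It is not any real vendor’s algorithm (commercial ATS products are proprietary and more sophisticated), and the keywords, weights, and cutoff below are all hypothetical, chosen only to show how phrasing alone can decide an outcome:

    # Toy keyword-weighted resume scoring. All keywords, weights,
    # and the interview cutoff below are hypothetical.
    KEYWORD_WEIGHTS = {
        "project management": 3,
        "stakeholder": 2,
        "python": 2,
        "budget": 1,
    }
    INTERVIEW_CUTOFF = 5  # hypothetical threshold set by HR

    def score_resume(text: str) -> int:
        """Sum the weights of every keyword found in the resume text."""
        text = text.lower()
        return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

    resume_a = "Led project management and stakeholder reviews in Python."
    resume_b = "Led cross-team delivery and quarterly planning."

    for name, resume in [("A", resume_a), ("B", resume_b)]:
        score = score_resume(resume)
        verdict = "interview" if score >= INTERVIEW_CUTOFF else "reject"
        print(f"Candidate {name}: score {score} -> {verdict}")

Candidate B may have done identical work, but because their resume uses different vocabulary, the score never reaches the cutoff, and no human ever sees it.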

Maybe you’ve worked with ChatGPT yourself. Three in four workers (75%) now use generative AI in the workplace. According to Harvard Business Review, employees use generative AI for two primary reasons:

  • Content creation and editing
  • Troubleshooting and technical assistance

Doesn't sound so bad, right? But how far can we trust AI? Over half (52%) of Americans feel more concerned than excited about the use of AI in daily life. Building that trust is where AI ethics for leaders comes to the forefront.

AI ethics

Ethos is defined as the “guiding beliefs of a person, group, or institution.” That’s exactly what AI ethics for leaders aims to do: guide your organization in alignment with your business principles. For this reason, AI ethics has to be a collaborative, team process. Open discussion with all levels of talent should lead you to concrete policies.

Your AI ethics policies should shape two things. First, they should define how AI is used in your organization. Second, they should shape the feedback you give vendors about how you want the technology developed. Now let’s talk about the why.

For better or for worse

AI technology has the ability to empower people. It also has the potential to widen existing inequalities.

Whether it’s generative AI platforms or software-as-a-service (SaaS) programs like your ATS, you can’t assume software companies hold the same ethical considerations that your organization does when they train their AI, nor can we blame technology when our values don’t align. AI is developed by humans, and no technology is created in a vacuum.

Fairness and bias

As you work on AI ethics for leaders in 2024, first address bias in the technology.

How bias gets into AI. The dominant type of AI right now is machine learning (ML), where algorithms gain knowledge from massive amounts of existing data and then predict outcomes based on that knowledge. Basically, ML is dependent on the data fed to it. Using real-world data means that there is human bias embedded in the data sets. If, for example, you use AI to determine what neighborhoods get policed for potential crime, it will be driven by biases that already exist in society.
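
Here’s a minimal sketch of how that happens, with entirely fabricated numbers: if historical decisions in the training data were skewed against one group, a model that simply learns to reproduce those decisions carries the skew forward.

    from collections import defaultdict

    # Fabricated historical loan decisions: (group label, approved?).
    # 80% of group_a applicants were approved, but only 40% of group_b.
    history = (
        [("group_a", True)] * 80 + [("group_a", False)] * 20
        + [("group_b", True)] * 40 + [("group_b", False)] * 60
    )

    # "Training": learn each group's historical approval rate.
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in history:
        totals[group] += 1
        approvals[group] += approved  # True counts as 1

    def predict(group: str) -> bool:
        """Approve whenever the group's historical approval rate tops 50%."""
        return approvals[group] / totals[group] > 0.5

    print(predict("group_a"))  # True
    print(predict("group_b"))  # False

Nothing in this code is malicious. The disparity comes entirely from the data it learned from, which is exactly how real-world bias enters far larger ML systems.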

Real life example. On January 9, 2020, Detroit police arrested Robert Williams in his driveway after AI-powered facial recognition technology identified him as a match for security footage from a theft. Williams, a Black man, became the first person known to be wrongfully arrested based on facial recognition, making international news. AI technology led police to mistreat someone from a historically underrepresented community: the system wasn’t adequately trained to accurately distinguish an innocent Black man from the man under investigation. Programs like these can have intrinsic discriminatory biases that originate with either their developers or their development.

Transparency, robustness, and privacy

Transparency is key to reducing or eliminating the possibility of negative incidents like this in the workplace. Leadership is responsible for tough decisions about organization-wide AI use and regulatory policies.

Interrogate your processes:

  • Do you tell people, internally and externally, when AI is used?
  • Do you tell them how it’s being used?
  • Can they find out more information after being made aware?

Be careful not to put the burden on the candidate or employee. Take your website’s cookie policy, for example. Does anyone actually read the lengthy document before clicking OK? Dense legal documentation is not transparent when it comes to AI ethics for leaders.

Robustness. Can your AI technology be willfully manipulated? Can it be used to intentionally benefit one person or group of people over others? Should you limit the amount that technology replaces traditional human decision-making in the company?
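
To see why this matters, return to the hypothetical keyword scorer sketched in the ATS example above: once candidates discover which keywords drive the score, simple keyword stuffing (sometimes literally hidden in white text on a resume) games the system in favor of whoever knows the trick. The keywords and cutoff are again invented for illustration.

    # Reusing the hypothetical scorer from the earlier ATS sketch.
    KEYWORD_WEIGHTS = {"project management": 3, "stakeholder": 2,
                       "python": 2, "budget": 1}
    INTERVIEW_CUTOFF = 5

    def score_resume(text: str) -> int:
        text = text.lower()
        return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

    # Keyword stuffing: listing every keyword clears the cutoff,
    # regardless of actual experience.
    stuffed = "project management stakeholder python budget"
    print(score_resume(stuffed) >= INTERVIEW_CUTOFF)  # True -> "interview"

A robust system needs defenses against this kind of manipulation, and a human in the loop to catch what the algorithm can’t.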

Privacy. What happens to the information you feed ChatGPT? Or the information that candidates feed your ATS? Where is that information stored? Who has access? Publish a privacy disclosure on your candidate portal for public safety and reduced risk to the organization. Ensure that everyone who has access to that data internally has been trained on proper privacy practices.

AI education and action

Organization-wide value alignment is the goal when it comes to AI ethics for leaders. Let’s look at some examples of concrete steps you can take.

1. Bring in an AI ethics expert

Train your teams about AI by bringing in an expert. Your staff has to understand how AI and ML work; this is basic risk management. An educated employee can interrogate the system and help identify gaps and risks.

2. Start a dialogue

Encourage open debates, questions, and feedback about AI in your workplace. Establish a cross-functional working group that represents diverse people, ideas, teams, and levels of your organization, or tap an existing employee resource group (ERG). These conversations build the foundation for the AI ethics principles you establish amongst leadership.

3. Call in the sales team

If your company has purchased or will purchase software featuring AI, everyone needs to understand exactly how it works. Bring in the software vendor to explain the technology. Start by asking:

  • How can you ensure that this AI is fair to historically underrepresented groups?
  • Does the data represent all possible scenarios and users without bias? How?
  • Can you tell us what methods and data sets were used to train the model?
  • Who is on your AI development team? Is the team diverse or disproportionately white and male like the majority of the tech industry?
  • Can your AI be manipulated to favor one group or profile?

Software vendors must be able to answer these questions in a straightforward way. Give them time to reach out to the developers if necessary. You can also get ahead of the issue by seeking out AI-powered technologies developed with diversity and inclusion at the forefront, like PowerUp, a professional development platform founded on DEIB implementation.

Accountability starts with leaders like you. The AI software marketplace is already vast, and you can afford to use tech that you are confident complies with your organizational AI ethics policies and aligns with your company values.

Leadership is built on trust

As we’ve mentioned, AI ethics and policies shouldn’t be decided from the top down. They require the collaboration of the diverse people at your organization. AI ethics for leaders requires that you take the lead in understanding how AI is being used in your organization and share that information with all levels. As the decision-maker, it’s your job to hold AI software companies accountable to your values around inclusion. If AI is going to be trustworthy tech, you hold the power to push for those products to be developed ethically. Stand firm in your guiding principles. Then put the responsibility back on AI software vendors by asking the right questions.


Looking for more information about DEIB and AI working together? Check out this PowerUp course!