
Episode 9

AI’s Impact on Employment, Equal Opportunity and Upholding our Civil Rights

Keith Sonderling, Commissioner at the United States Equal Employment Opportunity Commission (EEOC), discusses how HR companies and employers are using AI, and the implications of this usage for employment law. He stresses the importance of governance and regulation to avoid discrimination, prevent HR bias at scale, and ultimately uphold our civil rights.

Transcript

[00:00:00] From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:29] You’re listening to a new episode of The Brave Technologist, and this one features Keith Sonderling, Commissioner of the United States Equal Employment Opportunity Commission. Keith was confirmed by the U.S. Senate with a bipartisan vote in 2020, and his term expires in July 2024. We covered a lot of ground with Keith today around AI, HR, and equal employment opportunity,

[00:00:51] including how HR companies are using AI, how employers are using AI, how AI can be beneficial for employers and [00:01:00] other software applications, and how AI can be detrimental, enabling things like discrimination at scale, plus other areas that his agency is working on making safer for everybody. It was a great discussion.

[00:01:12] We covered a lot of ground, and I hope it was insightful for everybody. And now for this week’s episode of The Brave Technologist. Hi Keith, welcome to the Brave Technologist podcast. How are you doing today? Good. Thanks for having me. Yeah, I really appreciate you coming on and giving our audience insight into kind of what you’re up to.

[00:01:32] Why don’t we start with that? Where are you currently spending your time in AI, and what are you up to in this space? Well, first of all, at the United States Equal Employment Opportunity Commission, for those of you who may not be aware of what we do, known as the EEOC, think of us as the regulator for human resources.

[00:01:49] So all workplace issues you have come to us. Big-ticket items like the Me Too movement, pay equity, diversity, equity, and inclusion, race discrimination, [00:02:00] national origin, religion, color, disability, age. That’s all our agency. Our agency was founded in the 1960s, out of Martin Luther King marching here in Washington, which led to the Civil Rights Act, which created the EEOC.

[00:02:13] So we are one of the premier civil rights agencies, I’d say in the world, protecting some of the most fundamental rights we have, which is to enter and thrive in the workforce and provide for your family without being discriminated against. So you may be wondering, with that day job, how I got into technology and why artificial intelligence and other kinds of machine learning within the workplace are important. Because, you know, when I got into this role, I really asked:

[00:02:37] What is the future of human resources? What is the future of the workforce? And it was a very easy answer, and that was technology. A lot of people didn’t really understand what that meant early on in this space. And you may be thinking, too, well, AI and machines in the workplace are really only going to impact certain industries like manufacturing, like fast food, like retail, [00:03:00] where you have actual robots that can

[00:03:02] replace humans. And we’ve all seen those photos of that dystopian workforce of the future where robots are just doing all the work. That really didn’t excite everyone, because, you know, that was only for certain industries. When I dove into it, I realized it’s much more than that: artificial intelligence, machine learning, natural language processing, facial recognition, all the different types of technology in the AI space were already being used in human resources and have been used in HR for years.

[00:03:32] And we’ll explain more what that means. But really, that has been my focus: as companies start thinking about how they’re going to use AI to make workforce decisions, to actually make decisions that human resources have been making, what are the guardrails around that, what are the laws around that, what are the regulations around that, and how are we going to ensure that these computers are making the same kind of lawful decisions that humans are? Super interesting. Quite a history you have with the agency, and kind of an interesting [00:04:00] approach I think you’re taking, looking at

[00:04:02] Thank you. At the AI impact on employment and equal opportunity, how are you kind of setting your priorities around that? There’s kind of a near-term roadmap, and then probably a longer horizon on bigger concerns that you’re driving to make sure get addressed, too. Can you break that down for our audience a little bit, what your priorities are and kind of what’s driving them?

[00:04:21] Yeah, well, it’s ensuring that with all uses of AI, both the employers who are using this technology and the employees who are being subject to this technology really know what their rights are. And their rights aren’t going to be much different than if a human was making the decision. And, you know, for us here at the EEOC, we are not able to regulate the technology.

[00:04:43] That is not our specialty. Our specialty is regulating employment decisions. What I’ve really been trying to do with this initiative and all the discussion around AI in the workplace is simplify it for people who don’t understand how the algorithms work, people who don’t understand how the data sets work.

[00:04:58] And it is really saying at [00:05:00] the end of the day, what are these tools going to be doing? And they’re going to be making a decision. If you look at AI broadly, not just in the employment context, at the end of the day, you’re using AI and machine learning to actually do something that humans were doing before.

[00:05:13] There’s going to be a result of that decision. And that’s what we regulate here at the EEOC: making sure that all employment decisions are fair, are not biased, and are not discriminatory. It’s the same if you look at AI in housing, where the concern is making sure that AI decisions in housing are not also biased or discriminatory, or in finance or credit.

[00:05:30] So we have to just really be conscious here of the limitations of our knowledge and stick with what we know best, and that is regulating employment decisions. So that’s also what I’ve been trying to do, in the sense of kind of demystifying what all this is. And at the end of the day, in the United States, only the employer can make an employment decision for their employees.

[00:05:49] And what do I mean by an employment decision? Hiring, firing, determining your wages, determining your promotion, determining your benefits, determining your training, all those things that [00:06:00] happen to you at work. There’s some sort of AI that is helping make that decision, or completely or partially making the decision for employers.

[00:06:07] And at the end of the day, that is what we are going to regulate, and that’s what we need to look at. Because these AI tools, and we’ll talk about ChatGPT in a moment, of course, aren’t really creating anything new. They’re just helping employers make decisions, or making the decisions themselves. It’s so important from my perspective to say: we can’t tell you whether or not to use these tools. There’s a lot of benefit, which we’ll talk about, and there’s potential harm, which we can also talk about. If you’re going to use them, determine what purpose you’re going to use them for, and then how you’re going to

[00:06:38] comply with longstanding civil rights laws, because at the end of the day, all those tools are doing is making a decision. Cool. Yeah. So it’s kind of like, the rules of the road are X, and you’ve got these different types of cars out here, and, like, make sure you’re following the rules, basically. I’m always curious about this on your team.

[00:06:54] Like, how up to speed are they with the technology and the AI space? Are you guys doing training [00:07:00] on this new field? Is there a lot of education that has to go on within your staff? All of the above. I think you really nailed all the major issues, and the fact is that, you know, these tools are very complex.

[00:07:13] And I’d now like to talk about the ecosystem related to this new world of using AI for all different corporate uses. Because before, you know, here at the EEOC, who were our stakeholders? And that’s a very Washington, DC word. The employers, unions, staffing agencies, employees, that was our world. So now you have this technology coming in that can essentially

[00:07:34] make these employment decisions, do the work for the companies themselves, completely eliminating some departments and some groups of workers. And there are just new stakeholders involved. So now we have, if you look all the way back to where it starts, the VCs and the investors who are looking to invest in these products to help the workforce, to eliminate bias in the workforce, to make the workforce more productive and efficient.

[00:07:57] And you know, they don’t want to invest in technology that’s [00:08:00] going to violate the laws, that’s going to discriminate. And then you have tech entrepreneurs who are not trained HR professionals, who are not trained HR, labor, and employment lawyers, but who understand how to create AI. They understand how to write the code.

[00:08:13] They’re the entrepreneurs actually developing this sophisticated technology that’s being used within this space. And you know, they need to understand the different languages, how we speak here at the EEOC and how HR speaks, because they don’t want to develop a tool that is going to discriminate. Because companies are not going to want to buy a tool that discriminates.

[00:08:33] So in that sense, to answer your question, the doors have been open here. We’ve been hearing from a lot of people we don’t normally hear from, like tech vendors, like AI ethicists, like people who understand the code, people who understand the algorithms, to help us make those decisions. But at the end of the day, if we try to regulate the technology itself, absent, you know, a government AI commission,

[00:08:56] which there’s talk of, and absent increasing federal [00:09:00] employee salaries to get PhDs in computer coding and software engineers within the government, which is very difficult because of the competitiveness of hiring against Silicon Valley and how desirable those skills are, you know, that’s going to be a really big distraction from what our current mission is, which is to ensure that there’s no discrimination and to promote equal employment opportunity now.

[00:09:20] And at the end of the day, because we investigate employment discrimination, we look to see if there’s bias in the result. Whether it’s intentional or unintentional, the liability is going to be the same. So our investigators know how to investigate employment practices, and that’s what we’re going to do. And right now, if we try to figure out what the code is, what the algorithm is,

[00:09:42] we’re just going to get lost in the numbers. But what is this AI tool actually doing? It’s translating, and this is where we’ve been getting help from the outside, translating data science, computer science, coding words into our world. So you look at the data set when it comes to HR. What is the data set for HR decisions?

[00:10:00] Well, it’s either your applicant pool or your current employees, whatever you’re using to make that decision on, to have the algorithm look at the patterns in there. So let’s say, you know, you’re using a data set of your current employees. Well, there are issues with that, because if the current workforce is made up of one sex, race, national origin, or religion, you know, the computer may unintentionally replicate the status quo instead of trying to find a more diverse workforce or looking for the actual knowledge and skills required.

[00:10:29] So, you know, in that sense, we know that as well, because if it’s made up of one race, national origin, or sex, that’s likely what you’re going to get, right? The other side of it, too, is, you know, the algorithm and what’s going into that. And it’s going to be difficult for us to actually understand that, but we know what the prompts and the inputs are.
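
To make that data-set point concrete, here is a minimal, hypothetical Python sketch of the kind of check an employer or vendor might run before training on past hires. Nothing here reflects an EEOC-prescribed method; the field names and numbers are made up for illustration.

```python
from collections import Counter

def representation_report(records, group_field="sex"):
    """Share of each group in a candidate data set.

    `records` is a list of dicts; `group_field` is a demographic attribute
    kept separately for auditing. Both names are hypothetical.
    """
    counts = Counter(r[group_field] for r in records if r.get(group_field))
    total = sum(counts.values())
    return {group: round(n / total, 2) for group, n in counts.items()}

# Made-up numbers: past hires skew heavily male even though applicants do not.
past_hires = [{"sex": "male"}] * 80 + [{"sex": "female"}] * 20
applicants = [{"sex": "male"}] * 55 + [{"sex": "female"}] * 45

print("past hires:", representation_report(past_hires))   # {'male': 0.8, 'female': 0.2}
print("applicants:", representation_report(applicants))   # {'male': 0.55, 'female': 0.45}
# A large gap between the two is a warning sign that a model trained on past
# hires may simply replicate the existing workforce instead of the skills needed.
```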

[00:10:45] So if you’re asking the AI to look for certain skills, and that doesn’t meet the requirements of the job, and instead of finding the right candidates you’re discriminating against candidates who don’t have that specific background, then that’s an input problem as well. So you can see a lot of these [00:11:00] issues within data science and computer engineering are going to be very similar in our space.

[00:11:04] It’s just translating: data set to applicant pool, algorithms and inputs to the skills that are required for the job. And that’s what we know. That’s what an HR professional knows. And that’s ultimately what these tools are going to be doing. Super interesting. I mean, because that’s, like, being from this entrepreneurial startup world, right?

[00:11:22] It’s always a balance of, okay, we have global technologies here with national or regional use cases, and where the rubber meets the road is how you can build something that is compliant, where most of the energy is going to be. There are challenges, right? Because there’s always pressure to just kind of get the new innovation

[00:11:39] out there as quickly as possible. But I think you framed it really well, kind of making sure that the parameters are there, and the right inputs, so that it’s not setting up for unintended consequences, like with a lot of these things, because it’s so new. And because you’re dealing with employment law, the unintended consequence here

[00:11:56] is discrimination, is violating, you know, [00:12:00] individuals’ civil rights. And there are so many good examples of how AI can actually help us promote equal employment opportunity and prevent discrimination from ever occurring. Because look, bias exists in the workplace. That’s why my agency exists. You know, we get over 70,000 cases every single year.

[00:12:14] We collect over $500 million every single year from employers for violating these laws. So, you know, there have already been issues with human decision-making in employment. And if you think about some of the most basic biases within employment, you know, there are studies about a name: a male and a female resume come in,

[00:12:32] both with equal qualifications, and the male is just more likely to get selected. A name has nothing to do with the candidate’s ability to perform the job. What it does is tell you things you’re not allowed to make an employment decision on, like their sex, potentially their national origin, potentially their religion.

[00:12:47] So what does it have to do with anything? If there are AI tools that can remove that and mask that, or mask associational characteristics like where you went to school or where you worked, which may indicate you’re part of a certain [00:13:00] protected class, like a national origin or a religion, all those biases that come into play will be eliminated. It will allow the machine to actually just look at the skills, parsing out all those other individual factors which are unlawful to make an employment decision on, and which a human can’t unsee.
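
As a rough illustration of the masking idea, here is a simplified Python sketch. The field names, patterns, and redaction approach are assumptions for illustration only; real resume-masking tools are considerably more sophisticated than a handful of regular expressions.

```python
import re

# Illustrative only: fields and phrases a masking step might strip before an
# automated screen. Real tools use far richer rule sets and models.
PROTECTED_FIELDS = {"name", "date_of_birth", "photo_url"}
ASSOCIATIONAL_PATTERNS = [
    r"women'?s (college|club|sports team)",
    r"\b(church|mosque|synagogue|temple)\b",
]

def mask_candidate(candidate):
    """Return a copy of the candidate record with protected fields dropped and
    associational phrases redacted from the free-text resume."""
    masked = {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}
    text = masked.get("resume_text", "")
    for pattern in ASSOCIATIONAL_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text, flags=re.IGNORECASE)
    masked["resume_text"] = text
    return masked

candidate = {
    "name": "Jane Doe",
    "date_of_birth": "1990-01-01",
    "resume_text": "Captain of the women's sports team; 5 years of Python.",
    "skills": ["python", "sql"],
}
print(mask_candidate(candidate))
# {'resume_text': 'Captain of the [REDACTED]; 5 years of Python.', 'skills': ['python', 'sql']}
```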

[00:13:16] And think about, too, you know, with interviews. Now most interviews are done by Zoom, of course, but think about walking into somebody’s office or going on a Zoom. What’s the first thing that happens when the Zoom camera comes on? The employer sees you, and they see a lot about you, again, things that you’re not allowed to make an employment decision on: your race, your national origin, your color, you know, potentially your religion, your sex, if you’re disabled, if you’re pregnant.

[00:13:43] Again, all factors that are not allowed to be used in an employment decision. However, that’s hard to unsee. And there’s a lot of bias sometimes. And if you see a disabled worker, you see a pregnant worker, you may say, well, how much is this person going to cost me? You know, she’s pregnant. She’s going to need [00:14:00] pregnancy leave.

[00:14:00] Or, he’s disabled, it’s going to cost me X amount of money to bring this disabled candidate on board because I have to do all this other stuff. Now, obviously those are highly illegal examples, but it’s something that AI may mitigate. If you look at a lot of companies doing initial screening interviews through an app, you know, they’re asking you questions about your ability to perform the job, using natural language processing to go through the way you respond to the question and see if that matches the words that the employer believes are important to answer the question.
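
For a feel of what “matching the words the employer believes are important” can mean at its very simplest, here is a toy Python sketch. Actual screening products use much richer natural language processing than keyword overlap, and the terms below are hypothetical.

```python
def keyword_match_score(response, target_terms):
    """Fraction of the employer's target terms that appear in the answer.
    A crude stand-in for the NLP matching described above."""
    words = {w.strip(".,;:!?").lower() for w in response.split()}
    targets = {t.lower() for t in target_terms}
    return len(words & targets) / len(targets) if targets else 0.0

# Hypothetical terms an employer might define for a support role.
target = {"customer", "escalation", "resolution"}
answer = "I handled customer escalation calls and documented each resolution."
print(keyword_match_score(answer, target))  # 1.0
```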

[00:14:29] Right. So that would eliminate bias at the earliest stage of the hiring process, because they don’t see all those things they normally would see, and it allows those candidates who normally would not be able to get past that point to actually show they have the skills to get the job later on. So there are a lot of really significant benefits, but the benefits of this all depend

[00:14:48] very simply on how you’re going to use these programs, how the programs are designed, and how they’re implemented. And that’s really what I’m trying to get across: all the uses have benefits, and all the uses have [00:15:00] potential harms. It’s just, what are you going to do to make sure that you’re using the right program for the right purposes, and that the inputs you’re doing, whether it’s the data set or what you’re looking for, are actually going to get you the candidate you need

[00:15:12] and not actually discriminate against others. And that’s the challenge. Yeah, no, it’s definitely a challenge. I mean, you see both sides of how this could be totally beneficial, too. Where are you currently seeing AI implemented in employment or the workforce from your point of view, like, at the agency?

[00:15:29] The entire employee life cycle. So there’s software out there that will create a job description for you, that uses natural language processing to look through hundreds of thousands of job descriptions. There have been a lot of historical issues with job descriptions; there are studies showing that females are less likely than males to apply to job descriptions that are worded more aggressively.

[00:15:50] There are other studies showing that the way words are used within a job description can keep some people in certain groups from applying. So there’s AI out there that goes through and [00:16:00] can de-bias your job description, that can write your job description using terms that are more likely to have more diverse people

[00:16:07] applying who normally wouldn’t because of the wording. There’s AI out there that will then target where to advertise those job descriptions to look for candidates, or AIs that will just find candidates for you based upon the job description you create. Then there’s AI that encourages candidates to apply, AI that does the entire scheduling of interviews, then AI that does the complete initial interview. At a lot of companies, your entire first round of interviews is through a chatbot

[00:16:36] or through an app, like I explained earlier. Then there’s AI that will look through all the resumes and see what skills these candidates have, and tell you which ones you should interview and which ones you should not interview. There’s AI that will say this person applied for this job in this part of the country, but based upon, you know, our machine learning, they would be the best candidate for a job they didn’t even apply for, in a completely different division, in a completely different [00:17:00] department.

[00:17:00] Then there’s AI that determines whether an offer should be made. AI that determines what the salary should be. There’s AI that determines where the employee should physically work and sit based upon how they may get along with their coworkers. Once you’re there, there’s AI that monitors everything you’re doing at work.

[00:17:15] So everything you’re doing in Zoom, Slack, Teams, Microsoft Office, it’s looking to see what your employee sentiment is, to see if you’re happy at work, to see if you’re going to be a flight risk if you’re miserable. There’s AI that prevents discrimination from occurring by preventing you from harassing coworkers using certain words within workplace

[00:17:32] Slack or Teams. And then there’s AI that does your performance review and your yearly ratings to see how good you are. There’s AI that gives you your schedule. There’s AI that tells you how many widgets you need to make that day. And there’s even AI that will tell you you’re fired. Think about every function that human resources does.

[00:17:49] Everything that you’ve been subject to as an employee at a company, the actual boss or the person making the decisions may be an algorithm. That’s wild. [00:18:00] I mean, and that’s the thing I think people don’t really realize. There was a lot of hype from last November onward, about a year ago, right?

[00:18:07] Like when ChatGPT came out and there was a lot of focus on this, but this has been out for a long time. How long have you guys been seeing AI in the workforce? I mean, everything you just listed there, it’s obviously taken time to get there, right? How long has your agency been looking into this and looking into AI in the workplace?

[00:18:27] Yes. Well, COVID obviously, you know, really accelerated it. With the increase of remote work and video interviews, a lot of additional vendors came online, and companies started to develop some of the software themselves or buy it en masse. But really, in the HR tech market, the last 10 years, like 2014, 2015, is when a lot of these programs started coming onto the market and starting to be developed.

[00:18:51] And right now it’s absolutely exploded. The amount of money companies are investing in these tools, the amount of companies that are on the market. Now, if you think about [00:19:00] it for some companies, large companies, they get millions and millions of resumes a year, and they interview hundreds of thousands of workers, and they don’t have 2 million employees to review a million resumes, right?

[00:19:11] So a lot of this is really geared towards larger employers, Fortune 500 employers, who need this kind of efficiency, who need to be able to make employment decisions on a large scale. So you’re really seeing it take off, completely, well before the discussions of ChatGPT, well before, you know, generative AI became all the rage within corporations.

[00:19:33] I’m talking about something completely different, and that’s not automating workers. This is just using AI and machine learning to make decisions in HR that typically and normally were made by humans alone. At least from my perspective in the tech and startup area, people look at regulatory agencies kind of as this monolithic thing, where it’s like, okay, something to get in the way. But these things are super important, right?

[00:19:55] Like, how we apply them is, like you’re saying, the bad outcome there is [00:20:00] discrimination, right? Nobody really wants to see that. And it’s the scale that it could reach with this. It’s discrimination at scale when you think about it, if a hiring manager has bias, right? So there’s a stat in talent acquisition that a human hiring manager looks at a resume, let’s say a PDF resume because they’re not on paper anymore, for five to seven seconds.

[00:20:21] That’s all you get from a human looking at a resume. If somebody has bias, and say they don’t want to hire women for the job, they have to look and spend that time saying, well, is this a female name? Did this person go to a women’s college? Or, you know, what can I associate with that to figure out what their sex may be?

[00:20:39] So that takes time, right? You know, each resume, five seconds, you can add up the time. If you think about AI, though, with a few clicks you can scale discrimination to the likes we’ve never seen before, because in 0.7 seconds, you know, you could run code or an algorithm to delete all female names or, you know, anything associated with being a female, like women’s colleges, like women’s sports teams, like women’s clubs.

[00:21:00] And in 0.7 seconds, you could discriminate against millions of women at a scale we’ve never seen before. So you can see, if unchecked, without that governance, without the proper uses of it, you can use these tools to inject bias far greater than any one individual could. It’s interesting. In which ways do you think that AI is kind of overestimated and underestimated by the general public or workers?

[00:21:27] Well, I think it’s overestimated in the sense that it can just do all of this by itself, right? It could just make a hiring decision by itself. And we forget that, industry-specific and location-specific, there are different hiring requirements. So there’s going to be a human at some point saying what the job requirements are, what the applicant pool is, and it’s not going out and necessarily creating that itself.

[00:21:51] Now, it could be told to look at the patterns within certain industries to find it. But at the end of the day, there is still some human oversight in [00:22:00] controlling this. And that’s why it’s so critical, from an employer standpoint, to make sure that those who have access to the system, those who are doing the inputs, are actually putting in skills and actually looking at a diverse slate of candidates at the front end to prevent discrimination from ever occurring.

[00:22:17] So I think it’s overestimated in that sense, the sense of, we’re going to just do this for you, there are going to be no human inputs in there. At the end of the day, it still needs to be told what the data set is. It still needs to be told, you know, what patterns to look for initially. Now, it may continue to learn and get smarter as well.

[00:22:35] But I think we shouldn’t lose sight of that as well. And especially when you’re dealing with areas of civil rights, you know, such as employment, housing, finance, credit, you know, a lot of the different protections we have here in this country, there’s just going to be so much more care required in having some kind of human in the loop, having some kind of human oversight, which you may not need for other kinds of SaaS software.

[00:23:00] You know, a lot of this AI technology being sold into other parts of the organization is literally being sold as: all you have to do is turn it on and let it go, and you’re going to save money. All you have to do is, you know, have it do your document reviews faster, have it do your delivery routes faster.

[00:23:15] And that’s fine, but you can’t have that when you’re using it within human resources, because again, you’re dealing with people’s livelihoods. And so I think, you know, there’s that extra precaution that needs to occur for these tools to be used properly, which is different than other kinds of AI uses within organizations. For sure, for sure.

[00:23:34] And there are so many different angles here, just thinking about that list of uses where it’s already applicable. But I mean, you’ve also got things like, I would imagine, medical benefits and employment, and data that’s going back and forth between those. I mean, I remember hearing about incentives for people to quit smoking, and your medical benefits are basically tied in with your employer, right?

[00:23:56] And so I can imagine there’s a whole set of concerns around [00:24:00] privacy for employees and health information and all sorts of things like that, too, that are probably top of mind as well. What do you think is the most impactful personal or professional example of using AI to improve your work at the agency?

[00:24:15] Well, as you’ve seen from the executive order that came out, there’s a lot to do to sell to the U.S. government. And we are still the U.S. government; generally, procurement is a completely different agency that deals with this. So it’s not as easy as the federal government, or each agency, saying we’re doing this or that.

[00:24:31] There’s a big process to sell into the federal government. But let me just tell you how I think it can actually help us with our law enforcement investigations. Because think about it: we show up, somebody says they were fired because of their national origin or their race, right? So, I believe my employer fired me because I’m, let’s just say, an African American female, right?

[00:24:52] So what do we have to do now to prove employment discrimination? We have to sit down, we have to take depositions, we have to ask for [00:25:00] documents, and very rarely does anyone say, yes, of course I fired her because she’s a Black female, right? It doesn’t work that way. So that’s what we’re dealing with now.

[00:25:08] When you talk about the black box of AI, we’ve been dealing with the black box of the human brain, you know, for as long as our agency has existed, and whether somebody made a lawful decision or an unlawful decision is not easy to prove. You very rarely have that smoking gun. So in a sense, I say, you know, an employer who’s using AI, you know, to make,

[00:25:26] let’s say, a termination decision based upon performance reviews, in a sense that’s very auditable. That’s very traceable, and you can see the exact inputs. And when you’re saying you’re doing a performance review using machine learning based on the industry, we have all this additional data, and we found that this is what the requirements should be:

[00:25:42] if somebody is at this level, having worked this amount of time, this is how many widgets they should be making per hour with this much efficiency, you know, whatever hypothetical industry standardization you want to use, for which we have these large data sets. Now you can show that we didn’t fire this person because she was a female.

[00:25:59] We didn’t fire this person [00:26:00] because she’s African American. They didn’t meet the performance reviews based upon the metrics we use, and here’s everything we used to make that employment decision. And the same with failure-to-hire cases, saying, well, here are the qualifications that we believed were the best for this job.

[00:26:16] And that’s all we asked the algorithm to do, and we ensured, by testing, by guardrails, that the AI couldn’t make a decision based upon, in this case, race or sex. So I think in a way you have a much more auditable trail than you’ve ever had before. And if you’re using it properly, if you’re keeping that data, in a sense you can really disprove discrimination

[00:26:38] easier than before, of course, if you’re doing it the right way. So I think that’s a way it can really benefit us from a law enforcement side. Yeah, no, that’s super interesting. And is that mainly where you guys come into play, when there are conflicts or issues that require investigation, or are you guys proactively working to inform employers around this topic at [00:27:00] all?

[00:27:00] Just kind of curious. That’s a great question. So we are a civil law enforcement agency at our core. Our mission is, you know, when employees are discriminated against, they come to our agency. You may not know that in the United States, you can’t sue your employer directly in court for discrimination without coming to our agency first, whether you work for a private company, state or local government, or the federal government.

[00:27:19] So we basically see every case of discrimination within our agency. You know, that being said, we do have an initiative that you can read more about at EEOC.gov/AI where, you know, exactly to the points you just made, we want to be proactive in this, and we want to ensure that developers, investors, employees, unions,

[00:27:41] everyone who is using and being subject to this technology, know what their rights are. So we put out some guidance on how the Americans with Disabilities Act interplays with AI tools, so, for instance, in AI interviews, to make sure that disabled workers have the same ability to use the programs

[00:27:59] with their [00:28:00] disability, that these programs don’t take medical information that a human would know not to take, and what kinds of accommodations employers would have to make for disabled workers using these tools. We also put out guidance related to how some of our longstanding laws from the 1960s apply to decisions where AI is being used, and how you would audit employment decisions relating to giving, let’s say, employment assessments to make employment

[00:28:25] decisions, to make sure that those tests are auditable, whether they’re done on Scantron or whether they’re done through algorithms. So we’ve done a lot. We’ve held a hearing. We’ve had listening sessions with all different diverse groups, and we’re going to continue to do that, including training our own internal investigators and staff on how this technology is quickly becoming mainstream in most HR departments.
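
For readers curious what an outcome audit can look like in practice, here is a small Python sketch applying the four-fifths rule of thumb from the Uniform Guidelines on Employee Selection Procedures to selection rates by group. The numbers are invented, and a flag is only a prompt for closer review, not a determination of discrimination.

```python
def four_fifths_check(outcomes):
    """Apply the four-fifths rule of thumb: a group whose selection rate falls
    below 80% of the highest group's rate warrants a closer look (it is not,
    by itself, a legal finding).

    `outcomes` maps a group label to (number selected, number of applicants).
    """
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {
        g: {"rate": round(r, 3), "ratio_to_top": round(r / top, 3), "flag": r / top < 0.8}
        for g, r in rates.items()
    }

# Made-up assessment results, whether scored on paper or by an algorithm.
results = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_check(results))
# group_b is selected at 60% of group_a's rate, so it gets flagged for review.
```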

[00:28:47] That’s fascinating. I mean, it’s such a complex set of issues. That’s what we’re seeing too, that there are a lot of human issues that apply with AI, and it’s just about how they’re being applied, and there’s a lot of connective [00:29:00] tissue between these two things, to where it’s like, okay, look, just like you said, right?

[00:29:03] We’ve been dealing with Scantrons and with, you know, human problems for a long time, right? How do we apply them to work in this framework? It’s super interesting. Is there anything here that you want the general public, or our audience, and there are a lot of entrepreneurs in our audience, to be aware of, or information that you could recommend they take a look at? Yeah, we need to hear from you.

[00:29:24] And the thing is, for those who are looking to develop this technology to make the workplace more fair, to make employment decisions more transparent: you know, you hear so much about employers wanting to take a skills-based approach to hiring, moving away from resumes and moving away from certain degree requirements.

[00:29:42] Well, you know, that’s going to take a lot of interesting and complex technology to help us get there across the board. So, you know, the more we can interact, and the more, you know, entrepreneurs can come meet with me, meet with our agency, to tell us what you think you can solve, the better we’ll be able to explain the long-[00:30:00]standing employment laws that will apply to whatever decision you’re going to be making through an algorithm. And look, there are different legal issues for using facial recognition than there are for having to take an assessment test using AI. I think it’s really important that we are giving out all the tools and information

[00:30:20] relating to each specific use. And then for the companies who are looking to buy these programs, who are going to subject their employees to these programs, it’s so critical to continue to work with us to see what those guardrails are going to be, to make sure that they’re asking the vendors the right questions,

[00:30:35] making sure that the vendors are going to work with them to test the tools before they ever make a decision on someone’s livelihood, and how the vendors are going to train the individual HR workers who are going to have access to these tools, to make sure that they’re using them the right way. And then for the employees as well, you know, to understand what their rights are and what their employers’ obligations are to them when these tools are being used.

[00:30:59] So it’s really, [00:31:00] you know, the entire ecosystem needs to work with us, ask us those questions, ask us to provide guidance on where they want to go with this technology. Because at the end of the day, you know, when these products are being developed, when these products are being sold, and when these products are being implemented, the government is not there overseeing it. And the more that we can answer those questions that come up outside of investigations, the more we can help these programs be invested in, be developed, and be used properly, where employees feel comfortable that they’re getting a fair shake and that they’re not going to be discriminated against.

[00:31:37] Keith, I won’t keep you any longer. We’ve gone through a bunch of things over half an hour. I think it’s been really, really helpful for our audience and really brings a lot of this down to earth, I think. And the info you provided for people to reach out is super helpful too, and we’ll make sure that gets included. Where can people find you?

[00:31:51] You can find me on LinkedIn. That’s probably the easiest place, LinkedIn and Twitter. Awesome. I really want to thank you for coming on The Brave Technologist today, and I’d love to [00:32:00] have you back in the future to check in on how it’s going. Thank you so much. Thanks. Thanks for listening to The Brave Technologist podcast.

[00:32:08] To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • How machine learning is being used to make HR decisions that were historically made by a human
  • Examples of how AI and HR tech have already impacted every step in the employee lifecycle
  • How the use of natural language processing can mitigate discrimination in the hiring process
  • The work the EEOC is doing to ensure HR tech is being developed in a way that helps all employment decisions remain fair and non-discriminatory

Guest List

The amazing cast and crew:

  • Keith Sonderling - Commissioner at the United States Equal Employment Opportunity Commission (EEOC)

    Keith E. Sonderling was confirmed by the U.S. Senate, with a bipartisan vote, to be a Commissioner on the U.S. Equal Employment Opportunity Commission (EEOC) in 2020. Until January of 2021, he served as the Commission’s Vice-Chair.

    Since joining the EEOC, one of Commissioner Sonderling’s highest priorities is ensuring that artificial intelligence and workplace technologies are designed and deployed in a way that’s consistent with long-standing civil rights laws. Commissioner Sonderling has published numerous articles on the benefits and potential harms of using artificial intelligence-based technology in the workplace, and speaks globally on these emerging issues.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.