AI, Algorithms, and Bias

News and events of the day
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

AI, Algorithms, and Bias

Post by carmenjonze »

Protecting Our Communities From Algorithmic Bias - Kapor Center

November 9, 2021
11:30 AM PST | 2:30 PM EST

Image
So what is algorithmic bias? How does algorithmic bias disproportionately and negatively impact marginalized groups? And how do we take action to protect the civil rights of our communities?

We are at a pivotal moment where the harms of algorithms must be addressed through multiple approaches -- ranging from federal regulation to grassroots community action -- to ensure that the development and deployment of artificial intelligence systems are done with an ethical and equitable lens.

Join us for this special session with a panel of leading voices on algorithmic bias to understand how to protect the civil rights of our communities and create a more equitable future!
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
Motor City
Posts: 1802
Joined: Thu Oct 28, 2021 5:46 pm

Re: AI, Algorithms, and Bias

Post by Motor City »

The results have been in for years now.

Good for overprotecting property at the expense of bankrupting and criminalizing masses of innocent people.

Criminalizing the unemployed
........"You're saying the agency used the computer to determine fraud," Sewell responds.

Yes, without any human oversight, a machine had determined Sue committed fraud. Sewell promptly dismisses the fraud claim, saying Sue was legally entitled to unemployment benefits.

Given those circumstances, some of Sewell's colleagues are baffled by what they've seen lately.

Since 2011, under Republican Gov. Rick Snyder, the state has spent tens of millions of dollars to slowly implement a computer software program that handles applications filed with the UIA. The effort to curb waste is consistent with a vision posed by Snyder of operating government with a business-minded attitude.

The program — called MiDAS — detects possible fraud by claimants.

The problem, says Blanchard, who represents several plaintiffs in a recently filed federal lawsuit that challenges the UIA's alleged "robo-adjudication" system, is that apparent lack of human oversight. MiDAS seeks out discrepancies in claimants' files, according to the lawsuit — and if it finds one, the individuals automatically receive a financial penalty. Then, they're flagged for fraud.

"The system has resulted in countless unemployment insurance claimants being accused of fraud even though they did nothing wrong," the suit says.

The net effect, Blanchard asserts, is that Michigan now has a system in place that criminalizes unemployment. It's a process that, contrary to its stated intention, is creating fraud, rather than eliminating it — a MiDAS touch, if you will, where the state gets the gold: The program has been a windfall for Michigan, collecting over $60 million in just four years.

The state has also gloated about the software's progress in detecting significant amounts of fraudulent claims, but what officials don't seem to grasp is the enormity of the situation, according to administrative judges and attorneys who are routinely involved with fraud cases.

Claimants can be issued a warrant for their arrest, Michigan can garnish their wages and federal and state income taxes, and some succumb to bankruptcy. The number of claimants who have faced those circumstances for being falsely accused of fraud is entirely unknown, but it's clearly an emerging contingent.

That's not to say legitimate claims aren't being brought. But administrative judges, UIA workers, and attorneys say bogus fraud charges are being levied by the state with greater frequency.

So instead of protecting some of the state's most vulnerable residents, they say, the UIA has ushered in a disaster. And those affected by the process, buried in debt, have been pushed to the brink — financially and emotionally. A couple have even attempted suicide in the wake of the "decisions" by MiDAS........
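
To make the "robo-adjudication" pattern concrete, here's a rough Python sketch of the difference between auto-penalizing on any data mismatch and routing the same mismatch to a human examiner. It's purely hypothetical -- none of the field names, thresholds, or penalty rates come from MiDAS itself.

Code: Select all

# Hypothetical sketch of "robo-adjudication": an automated rule flags any
# discrepancy between employer-reported and claimant-reported wages and
# immediately assesses a penalty, with no human review step.
# None of the field names or rates below come from MiDAS itself.

from dataclasses import dataclass

@dataclass
class Claim:
    claimant_id: str
    claimant_reported_wages: float
    employer_reported_wages: float
    weekly_benefit: float

def robo_adjudicate(claim: Claim) -> dict:
    """Treat any discrepancy, however small, as fraud and auto-penalize."""
    discrepancy = abs(claim.claimant_reported_wages - claim.employer_reported_wages)
    if discrepancy > 0:  # any mismatch at all is treated as intentional fraud
        return {
            "claimant_id": claim.claimant_id,
            "fraud": True,
            "restitution": claim.weekly_benefit,
            "penalty": claim.weekly_benefit * 4.0,  # hypothetical quadruple penalty
            "reviewed_by_human": False,
        }
    return {"claimant_id": claim.claimant_id, "fraud": False, "reviewed_by_human": False}

def adjudicate_with_oversight(claim: Claim, threshold: float = 500.0) -> dict:
    """Same data, but a discrepancy only opens a case for a human examiner."""
    discrepancy = abs(claim.claimant_reported_wages - claim.employer_reported_wages)
    if discrepancy > 0:
        return {
            "claimant_id": claim.claimant_id,
            "fraud": None,  # undetermined until a person looks at it
            "needs_human_review": True,
            "large_discrepancy": discrepancy >= threshold,
        }
    return {"claimant_id": claim.claimant_id, "fraud": False, "needs_human_review": False}

if __name__ == "__main__":
    # A $1 rounding difference between two forms becomes a fraud finding.
    claim = Claim("A123", claimant_reported_wages=401.00,
                  employer_reported_wages=400.00, weekly_benefit=362.00)
    print(robo_adjudicate(claim))
    print(adjudicate_with_oversight(claim))

The only point of the contrast is that the second version routes the same discrepancy to a person instead of straight to a penalty.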
Image
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Motor City wrote: Tue Nov 09, 2021 9:51 pm The results have been in for years now.

Good for overprotecting property at the expense of bankrupting and criminalizing masses of innocent people.

Criminalizing the unemployed
First thing the cons yell is fraud. :roll:
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
ProfX
Posts: 4087
Joined: Tue Nov 02, 2021 3:15 pm
Location: Earth

Re: AI, Algorithms, and Bias

Post by ProfX »

Thank you for reviving this topic, and making sure the problem is not limited in definition/description to facial recognition AI.

Yes, there are problems of algorithmic bias in lots of situations. And I'll repeat the reason I think it exists: since we are still not in an age of self-coding or self-constructing AI, AIs contain bias because they are often coded in ways that reflect the (often unconscious) biases of the programmers.

Like many sectors of society, computer programming is not exactly over-represented with women, racial and other minorities, and LGBT and other marginalized groups. The tech industry seems terribly full of strange right-wing ideologues, see: John McAfee. Or Peter Thiel.
"Don't believe every quote attributed to people on the Internet" -- Abraham Lincoln :D
Motor City
Posts: 1802
Joined: Thu Oct 28, 2021 5:46 pm

Re: AI, Algorithms, and Bias

Post by Motor City »

Prince Harry says he warned Twitter CEO of U.S. Capitol riot
.......“Jack and I were emailing each other prior to January 6 where I warned him that his platform was allowing a coup to be staged,” Harry said at the RE:WIRED tech forum. “That email was sent the day before and then it happened and I haven’t heard from him since.”.....
....Harry accused other social media sites like Facebook of misleading “billions of people” with misinformation about COVID-19 and climate change. He also targeted YouTube, saying many videos spreading COVID-19 misinformation were left up despite violating the site’s own policies.

“And worse, they came to the users via the recommendation tool within YouTube’s own algorithm versus anything that the user was actually searching for,” he said. “It shows really that it can be stopped but also they didn’t want to stop it because it affects their bottom line.”
Image
User avatar
sam lefthand
Posts: 678
Joined: Sun Oct 24, 2021 1:58 pm

Re: AI, Algorithms, and Bias

Post by sam lefthand »

ProfX wrote: Wed Nov 10, 2021 6:19 am Thank you for reviving this topic, and making sure the problem is not limited in definition/description to facial recognition AI.

Yes, there are problems of algorithmic bias in lots of situations. And I'll repeat the reason I think it exists: since we are still not in an age of self-coding or self-constructing AI, AIs contain bias because they are often coded in ways that reflect the (often unconscious) biases of the programmers.

Like many sectors of society, computer programming is not exactly over-represented with women, racial and other minorities, and LGBT and other marginalized groups. The tech industry seems terribly full of strange right-wing ideologues, see: John McAfee. Or Peter Thiel.
It sounds like you are seeing the reflection of a human being. And so far I have not seen any discussion in this thread going beyond bald statements that they're bad, whatever they are. And what's being described as bad about them again appears to be the reflections of human beings being human beings.

There's a wave of hollow undefined fear out there.

Meanwhile my daughter is trying to write an AI compiler program, again. That would be along the lines of being the beginning of that age of "self-coding" you were talking about above.

She's not a hardware person so she won't be building an AI.
User avatar
Libertas
Posts: 6468
Joined: Sun Oct 24, 2021 5:16 pm

Re: AI, Algorithms, and Bias

Post by Libertas »

carmenjonze wrote: Wed Nov 10, 2021 12:36 am First thing the cons yell is fraud. :roll:
I love the person who claims to be on the left but spends their entire time on a message board scouring every single word written by anyone not a con so they can pick it apart and pretend to be doing it for fairness reasons. :lol:
I sigh in your general direction.
User avatar
ProfX
Posts: 4087
Joined: Tue Nov 02, 2021 3:15 pm
Location: Earth

Re: AI, Algorithms, and Bias

Post by ProfX »

sam lefthand wrote: Thu Nov 11, 2021 4:02 pm It sounds like you are seeing the reflection of a human being. And so far I have not seen any discussion in this thread going beyond bald statements that they're bad, whatever they are. And what's being described as bad about them again appears to be the reflections of human beings being human beings.
That who or what is bad? The programs or the programmers? I don't call a lot of software bad, well, maybe deliberately malicious stuff like computer viruses.

The only thing I said is programmers can have implicit bias, that doesn't make them "bad," it does make them human, and in turn that can influence the nature of the code in their programs.
"Don't believe every quote attributed to people on the Internet" -- Abraham Lincoln :D
User avatar
sam lefthand
Posts: 678
Joined: Sun Oct 24, 2021 1:58 pm

Re: AI, Algorithms, and Bias

Post by sam lefthand »

ProfX wrote: Thu Nov 11, 2021 4:32 pm That who or what is bad? The programs or the programmers? I don't call a lot of software bad, well, maybe deliberately malicious stuff like computer viruses.

The only thing I said is programmers can have implicit bias, that doesn't make them "bad," it does make them human, and in turn that can influence the nature of the code in their programs.
As I said, it sounds like you are seeing the reflections-of-human-beings part. That's good.

:)

And about the "bad" stuff, I was referring to other posts in the thread before yours when I said:

"And so far I have not seen any discussion in this thread going beyond bald statements that they're bad, whatever they are."

That's referring to the parts I saw in the thread before I got to your part.
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

sam lefthand wrote: Thu Nov 11, 2021 4:02 pm There's a wave of hollow undefined fear out there.
Out where, exactly?
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

sam lefthand wrote: Thu Nov 11, 2021 4:58 pm That's referring to the parts I saw in the thread before I got to your part.
What other parts of the thread?
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Libertas wrote: Thu Nov 11, 2021 4:11 pm I love the person who claims to be on the left but spends their entire time on a message board scouring every single word written by anyone not a con so they can pick it apart and pretend to be doing it for fairness reasons. :lol:
Being contrary whenever a Black person brings up race is cute and funny and not alt-right at all!
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
Libertas
Posts: 6468
Joined: Sun Oct 24, 2021 5:16 pm

Re: AI, Algorithms, and Bias

Post by Libertas »

carmenjonze wrote: Thu Nov 11, 2021 5:16 pm Being contrary whenever a Black person brings up race is cute and funny and not alt-right at all!
We know how they acted the ONLY time I can think of when a Black person wasn't convicted of something...OJ

Regardless of what one thinks about guilt, the reaction by some white folk was priceless, wasn't it?
I sigh in your general direction.
Motor City
Posts: 1802
Joined: Thu Oct 28, 2021 5:46 pm

Re: AI, Algorithms, and Bias

Post by Motor City »

The $300m flip flop: how real-estate site Zillow’s side hustle went badly wrong
Enormous companies with deep pockets and mounds of data bidding against ordinary people in an already absurd housing market? It sounds like a nightmare for anyone who isn’t a tech investor. And indeed, news of what Zillow has been up to has caused a backlash on social media, largely fuelled by a viral TikTok by a Nevada real-estate agent called Sean Gotcher that claimed iBuyers manipulate the housing market.

Gotcher didn’t explicitly name Zillow but he heavily alluded to them and accused the company of using data harvested from people perusing their dream homes while they are bored on the website. Gotcher said this nameless company then buys a ton of properties in the neighbourhood people are searching for, and overpays for a couple of adjacent properties in order to artificially drive up prices. (Zillow and Redfin have denied doing this and real estate experts have noted they don’t have enough market share for this strategy to work.)

Zillow may not have been explicitly manipulating the market, but it was certainly trying to use technology to outsmart it. In the end, however, the market won. Zillow’s flipping flop should serve as a reassuring reminder that not everything can be automated. There are various reasons why Zillow got burned, including a labour shortage making it difficult to renovate homes. But the biggest issue is that its algorithm simply wasn’t up to snuff. It couldn’t deal with the complexities of pricing in a volatile market and resulted in Zillow overpaying for a lot of property.
regular folks competing with artificial intelligence and idle money is no market at all
While individual homebuyers may not have to compete against Zillow any longer, it’s unlikely that buying a house is going to get any cheaper or easier anytime soon. Those 7,000 houses Zillow is sitting on? Bloomberg reports that they will probably be offloaded to institutional investors like BlackRock rather than regular people. And while Zillow may be ending its iBuying business, the financialization of housing looks set to continue. Big money is gobbling up real estate and leaving many first-time buyers out in the cold.
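
Just to illustrate the failure mode, here's a toy simulation (not Zillow's actual model, and every number in it is made up): an automated buyer that bids its point estimate of a home's value tends to win exactly the homes it over-estimated, so in a volatile market the average deal is an overpay even if the estimator looks unbiased on paper.

Code: Select all

# Hypothetical toy simulation (not Zillow's model): an automated buyer bids its
# point estimate of value. When the estimate is noisy and the market drifts,
# the homes it actually wins skew toward the ones it over-estimated, so the
# average purchase is an overpay even though the estimator has zero mean error.

import random

random.seed(0)

def simulate(n_homes=10_000, estimate_noise=0.07, market_drift=-0.03):
    """estimate_noise: std dev of the model's relative pricing error.
    market_drift: change in true value between purchase and resale."""
    overpay_total = 0.0
    wins = 0
    for _ in range(n_homes):
        true_value = random.uniform(200_000, 800_000)
        estimate = true_value * (1 + random.gauss(0, estimate_noise))
        seller_reserve = true_value  # seller accepts anything at/above true value
        if estimate >= seller_reserve:  # the buyer only wins when it over-estimates
            wins += 1
            resale_value = true_value * (1 + market_drift)
            overpay_total += estimate - resale_value
    return wins, overpay_total / max(wins, 1)

wins, avg_loss = simulate()
print(f"homes won: {wins}, average loss per home: ${avg_loss:,.0f}")

The point is only the selection effect: the deals the model wins are disproportionately the ones it got wrong on the high side.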
Image
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Motor City wrote: Tue Nov 16, 2021 7:44 pm The $300m flip flop: how real-estate site Zillow’s side hustle went badly wrong



regular folks competing with artificial intelligence and idle money is no market at all
This is a really f'ed up story that I watched unfold on TikTok.

What's really interesting is, if you go to any of these flip parties in my city, people will tell you straight out what they're doing. Kick out the residents, bring in a crew to strip clean and so-called upgrade the Victorian, put in track lighting and an in-house LAN room and fiber, then mark up the price to 2.5 mil.

This happened literally, not exaggerating, right next door to me. They have a Black Lives Matter sign in their planter.

People put down the so-called coastal elites but really are utterly clueless about those of us who live on these coasts, who are not elites.
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
Motor City
Posts: 1802
Joined: Thu Oct 28, 2021 5:46 pm

Re: AI, Algorithms, and Bias

Post by Motor City »

https://youtu.be/du9StHQYGGs?t=171
You once spoke to me about intelligence, that it was a gift to be used for the good of mankind, these things have turned you into something you're not, don't listen to them
Image
User avatar
ProfX
Posts: 4087
Joined: Tue Nov 02, 2021 3:15 pm
Location: Earth

Re: AI, Algorithms, and Bias

Post by ProfX »

That's a great scene, it really is Peter using Otto's own words to remind him of what he once believed.

What makes me wonder about the new film coming in Dec is what happened to Otto? If he teleported right out of his universe before drowning, ... the arms were no longer controlling him, and while he did die, it was his choice to sacrifice himself to save Peter & MJ. So why is he back to being sore?

... I guess we're going to find out.

I appreciate you posting that clip in this thread, MC. For one thing, I kinda think one metaphor of that film, with Doc Ock's struggle with/over being controlled by his own robot arms, is whether we will control our AI, or be controlled by it.

Also, of course, I completely agree with the larger point Otto made, and Peter repeated, about the responsibilities of science, intelligence, and thus also AI design/engineering. If our mistaken dreams have gone awry ("the sun in the palm of my hand"), they need to be ... adjusted, or if necessary, undone. Especially if like the fusion reactor, they have gotten out of our control.
"Don't believe every quote attributed to people on the Internet" -- Abraham Lincoln :D
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

sam lefthand wrote: Thu Nov 11, 2021 4:02 pm Meanwhile my daughter is trying to write an AI compiler program, again. That would be along the lines of being the beginning of that age of "self-coding" you were talking about above.

She's not a hardware person so she won't be building an AI.
You have a transgender kid, so you claim, and yet you're in bed with the worst antitrans elements of this country. Weird.

What's it like being civil to the eliminationists intent on second-classing your child and our LGBTQ QTPOC community?
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
sam lefthand
Posts: 678
Joined: Sun Oct 24, 2021 1:58 pm

Re: AI, Algorithms, and Bias

Post by sam lefthand »

:ignore:
Motor City
Posts: 1802
Joined: Thu Oct 28, 2021 5:46 pm

Re: AI, Algorithms, and Bias

Post by Motor City »

ProfX wrote: Mon Nov 29, 2021 7:43 pm That's a great scene, it really is Peter using Otto's own words to remind him of what he once believed.

What makes me wonder about the new film coming in Dec is what happened to Otto? If he teleported right out of his universe before drowning, ... the arms were no longer controlling him, and while he did die, it was his choice to sacrifice himself to save Peter & MJ. So why is he back to being sore?

... I guess we're going to find out.

I appreciate you posting that clip in this thread, MC. For one thing, I kinda think one metaphor of that film, with Doc Ock's struggle with/over being controlled by his own robot arms, is whether we will control our AI, or be controlled by it.

Also, of course, I completely agree with the larger point Otto made, and Peter repeated, about the responsibilities of science, intelligence, and thus also AI design/engineering. If our mistaken dreams have gone awry ("the sun in the palm of my hand"), they need to be ... adjusted, or if necessary, undone. Especially if like the fusion reactor, they have gotten out of our control.
and how a bit of wisdom from his aunt helped this come about, how it ripened at the right moment.
Image
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

sam lefthand wrote: Mon Nov 29, 2021 7:55 pm :ignore:
A transgender AI researcher’s nightmare scenarios for facial recognition software - Venture Beat

This is still happening.
For International Women’s Day last month in Berlin, some ticket machines used automatic gender recognition, a form of facial recognition software, to give female riders discounts on their tickets.

As well-intentioned as that may seem, AI researcher Os Keyes is worried about how such systems will negatively impact the lives of transgender people or those who do not adhere to strict binary definitions of male or female.

A recipient of the Ada Lovelace Fellowship from Microsoft Research, Keyes served as an expert witness for facial recognition software regulation being considered by lawmakers in the state of Washington and was cited earlier this month by a group of more than two dozen AI researchers who say Amazon should stop selling its facial recognition software Rekognition to law enforcement agencies.

In the instance of, say, rent-stabilized apartment buildings in New York where facial recognition systems are being proposed for entry, poorly made systems could provide negative user experiences for transgender or gender-neutral people who may encounter trouble opening the door. But Keyes also fears such systems could lead to increased encounters with law enforcement that lead to trans people getting discriminated against, overly monitored, or killed.

Keyes is especially concerned because their analysis of historic facial recognition software research found that the industry has rarely considered transgender or gender-fluid people in their work. This has led them to believe facial recognition software is an inherently transphobic technology.

Although the National Institute of Standards and Technology’s (NIST) facial recognition software testing system has been called a gold standard, Keyes staunchly opposes the organization, which is part of the U.S. Department of Commerce.

They take issue with NIST’s mandate to establish federal AI standards detailed in Trump’s executive order, the American AI Initiative. NIST is the wrong organization to lead the creation of federal AI standards, they said, because it employs no ethicists, it has a poor history with matters of gender and race, and its Facial Recognition Vendor Test (FRVT) uses photos of exploited children and others who did not provide their consent.
More in link.
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

sam lefthand wrote: Mon Nov 29, 2021 7:55 pm :ignore:
Facial recognition AI can’t identify trans and non-binary people - QZ
Facial-recognition software from major tech companies is apparently ill-equipped to work on transgender and non-binary people, according to new research. A recent study by computer-science researchers at the University of Colorado Boulder found that major AI-based facial analysis tools—including Amazon’s Rekognition, IBM’s Watson, Microsoft’s Azure, and Clarifai—habitually misidentified non-cisgender people.

The researchers gathered 2,450 images of faces from Instagram, searching under the hashtags #woman, #man, #transwoman, #transman, #agender, #agenderqueer, and #nonbinary. They eliminated instances in which multiple individuals were in the photo, or where at least 75% of the person’s face wasn’t visible. The images were then divided by hashtag, amounting to 350 images in each group. Scientists then tested each group against the facial analysis tools of the four companies.

The systems were most accurate with cisgender men and women, who on average were accurately classified 98% of the time. Researchers found that trans men were wrongly categorized roughly 30% of the time. The tools fared far worse with non-binary or genderqueer people, inaccurately classifying them in all instances.

The rising use of facial recognition by law enforcement, immigration services, banks, and other institutions has provoked fears that such tools will be used to cause harm. There’s a growing body of evidence that the nascent technology struggles with both racial and gender bias. A January study from the MIT Media Lab found that Amazon’s Rekognition tool misidentified darker-skinned women as men one-third of the time. The software even mislabeled white women as men at higher rates than white men. While IBM and Microsoft’s programs were found to be more accurate than Amazon’s, researchers observed an overall trend of male subjects being labeled correctly more than female subjects, and of darker skin drawing higher error rates than lighter skin.
More in link.
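
For what it's worth, the per-group accuracy numbers in studies like this come down to a very simple computation. Here's a hypothetical sketch in Python (made-up labels, not the researchers' actual data or code):

Code: Select all

# Rough sketch of the per-group evaluation described above. All records here
# are fabricated; the real study used ~350 Instagram images per hashtag group
# and commercial APIs such as Rekognition, Watson, Azure, and Clarifai.

from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, true_label, predicted_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Tiny fabricated example just to show the shape of the computation.
sample = [
    ("#man",       "man",   "man"),
    ("#woman",     "woman", "woman"),
    ("#transman",  "man",   "woman"),   # the kind of misclassification the study measured
    ("#transman",  "man",   "man"),
    ("#nonbinary", None,    "woman"),   # a binary-only classifier can never be right here
]

for group, acc in per_group_accuracy(sample).items():
    print(f"{group}: {acc:.0%} correct")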
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

sam lefthand wrote: Mon Nov 29, 2021 7:55 pm :ignore:
I hope this didn't happen and will never happen to your kid that you claim is transitioning.

Transgender YouTubers had their videos grabbed to train facial recognition software - The Verge, 2017
In the race to train AI, researchers are taking data first and asking questions later

About five or six years ago, one of Karl Ricanek’s students showed him a video on YouTube. It was a time lapse of a person undergoing hormone replacement therapy, or HRT, in order to transition genders. “At the time, we were working on facial recognition,” Ricanek, a professor of computer science at the University of North Carolina at Wilmington, tells The Verge. He says he and his students were always trying to find ways to break the systems they worked on, and that this video seemed like a particularly tricky challenge. “We were like, ‘Wow there’s no way the current technology could recognize this person [after they transitioned].’”

To tackle the problem, Ricanek did what all good scientists do: he started collecting data. Like all AI systems, facial recognition software requires stacks of information to train on, and although there are a number of sizable and freely available face databases available (ranging in size from thousands to millions of images), there was nothing documenting faces before and after HRT. So, Ricanek turned to the internet — a decision that would later prove to be controversial.

On YouTube, he found a treasure trove. Individuals undergoing HRT often document their progress and post the results online, sometimes keeping regular diaries, and sometimes making time-lapse videos of the entire process. “I shared my videos because I wanted other trans people to see my transition,” says Danielle, who posted her transition video on YouTube years ago. “These types of transition montages were helpful to me, so I wanted to pay it forward,” she tells The Verge.

The videos also happen to be gold for AI researchers, as each contains dozens of varied, true-to-life photos. As Ricanek wrote on a webpage for the dataset he would compile from the videos: “[It] includes an average of 278 images per subject that are taken under real-world conditions, and hence, include variations in pose, illumination, expression, and occlusion.”
This should never happen to anybody.
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

sam lefthand wrote: Thu Nov 11, 2021 4:02 pm There's a wave of hollow undefined fear out there.
What wave of hollow undefined fear are you talking about?

Where is "out there"?
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
sam lefthand
Posts: 678
Joined: Sun Oct 24, 2021 1:58 pm

Re: AI, Algorithms, and Bias

Post by sam lefthand »

:ignore:

:|