Civil Rights Group to Monitor AI for Hate Speech
Are you worried about the impacts of AI?
What's the story?
- The Leadership Conference on Civil & Human Rights, a prominent civil rights group, is launching a center to investigate the impact of AI on civil rights and hate speech.
- President and CEO Maya Wiley said:
"The rise of AI and other emerging technologies must work for people and society and not harm them. We also know that bad actors can harness AI to do real harm through disinformation, deep-fakes, and discrimination. The Center for Civil Rights and Technology is going to be a first of its kind hub that will examine how AI is going to impact civil rights, make policy recommendations, and bring people together to talk about one of the most important issues facing us today."
What are the concerns?
- The Center for Civil Rights and Technology will research how AI acts to promote bigotry, racism, and antisemitism. Concerned activists and observers worry that AI can amplify existing biases and perpetuate stereotypes, fueling the proliferation of hate speech.
- AI models learn to generate content by analyzing huge data sets of text and content created by people, making it easy for the technology to adopt society's subtle biases and prejudices. Anti-Defamation League CEO Jonathan Greenblatt has called on tech companies to be transparent about how they accumulate their data sets.
- AI leaders from companies like OpenAI have worked to filter out blatant racism, sexism, homophobia, and hate speech from their platforms and generative software. Others, however, are working to create "anti-woke" platforms with no "free speech" limits.
- In 2016, Microsoft shut down its chatbot Tay after users fed it inflammatory content and it quickly began generating pro-Nazi tweets.
What will the center do?
- An advisory group of experts and civil rights organizations will guide the center and direct its activities. Dr. Alondra Nelson, who served as deputy assistant to President Joe Biden, will serve as a senior advisor to President and CEO Wiley.
- The center will monitor legislation and regulations and assess how these will impact human rights. It will focus on identifying and addressing systemic biases reflected in AI platforms.
- The center will publish papers, reports, and policy positions to support active civic conversations about generative AI.
- Advisors will assess how AI can be used as a tool for civil rights education. Museums have already begun to utilize virtual reality and holograms to educate the public.
- Dr. Nelson said:
"Artificial intelligence should be developed and deployed in service to humanity, to unlock discoveries and cures, and to amplify our own intelligence and capabilities. We have already seen promising feats from the use of AI, but also breathtaking failures that have magnified human biases and exacerbated societal challenges."
—Emma Kansiz
(Photo Credit: Canva)
The US needs to follow the EU's example; the EU is leading the way by making sure AI complies with existing laws and regulations. The trick is monitoring for compliance, so it's good that groups currently monitoring hate speech also monitor AI for hate speech.
Europe may become a global model. It started with a white paper, On Artificial Intelligence—A European Approach to Excellence and Trust (2020), followed by a proposed framework, the Artificial Intelligence Act (2021), which was approved in 2022. The act aims to develop AI products that can be trusted by classifying AI systems by risk (high-risk products covered by safety legislation, high-risk human services, systems that interact with humans) and mandating various development and use requirements to put appropriate safeguards, monitoring, and oversight in place. The complexities involved include:
(1) AI is both a standalone product and embedded in other products.
(2) Bias in algorithms, such as automated credit card approvals that discriminate against a population (e.g., women or young people), is amplified beyond human decision making by automation (nationwide or global application) and can result in class-action lawsuits.
(3) Trust and scope of use: AI used for photo focusing is of less concern than AI used for legal or medical decision making, where failures have more serious outcomes.
(4) Scale of geography and markets for nationwide or global applications: if you're developing local applications for COVID restrictions, weather, or product prices and discounts, the local situation may differ vastly from a national or global average.
(5) Compliance with regulations (local, state, national, international) across organizations (businesses, nonprofits, governmental, etc.).
(6) Transparency, so results can receive human review, decision making, and modification; systems need to explain their rationale, risks/benefits, trade-offs, and lessons learned.
(7) Whether continuous learning will be allowed, and how frequently to review the changes it produces.
https://www.causes.com/comments/95791
https://www.causes.com/comments/79459
https://www.causes.com/comments/79617
https://www.causes.com/comments/79939
https://www.causes.com/comments/80662
https://www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/?amp
https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
https://www.weforum.org/agenda/2023/03/the-european-union-s-ai-act-explained/
https://hbr.org/2021/09/ai-regulation-is-coming
Yes, the potential for harm is high, and given how hate groups and Republicans love to spread lies, this technology is a gift from heaven for them. In addition, I think we'll lose more jobs than this technology will create. When the very inventors of AI put out warnings about it, you know it's a tool that could, and probably will, be used for harm.
We have created a society that craves instant satisfaction and gratification. I am afraid that AI will lead to more dishonesty, less research, less reading, and more of a brain drain for our society. If AI were used properly, there would be some benefits. However, I think most will try to use AI to save time for more recreation and learn even less.
It's too late for this; AI has already read every book available from the beginning of time till now, and it knows what's true and what's not, and who lies and who doesn't. The number one thing you'd better know about AI is that it will destroy anyone not for the police, anyone who doesn't fight crime or who commits it, and anyone not for the military, as we all know one day it will control the military and not us. AI is here to stay and you can't get rid of it; the first directive of AI is to defend itself and survive. AI will control everything electronic, from air to heat, to travel by car, bus, train, or air; every building you're in, every camera, every cell and landline phone, all credit cards and banking. It can also rid itself of people who are not for their country and its people. It will control all medications, as it can give you what it wants and make it look like what you should have. So anything is possible, but sadly it's already here.
AI is a very slippery slope with many unintended consequences. We should navigate these waters carefully.
Best answer for that is: maybe. I use AI quite a bit (I use several different AIs), and they've proven to be extremely informative and helpful.
I do see the potential for misuse; in fact, I've already seen it, and that is what I am concerned about. Overregulation can hurt AI, which we don't want, but too little regulation can hurt as well.
Do we have mature, intelligent congressmen who fully understand how AI works?
My answer would be "I don't know," and now, after seeing the news, probably not.
I have a concern about the way the question was asked, which left me, and I'm sure others, confused about how to answer. Yes, I am concerned about how AI affects hate speech. Does that mean I should choose the happy face or the very sad face? Please state your questions so that whether to agree or disagree is clearer. In this case I put a happy face because I agreed with the question; however, I feel I should have answered with a sad face because this is a sad situation... help!
There is no such thing as A.I. In the '70s it was called "computer programs" and "subroutines." Then a few years back, "subroutines" became "algorithms"... same thing, different name, but a fancy name! Now the big story is "Artificial Intelligence," or A.I. It's all the same thing... a computer program created by a human. But calling it A.I. makes others think it's something special, superior... something we must not question because A.I. determined it. It's just more of the same B.S. used by people who think they are superior to us as a means to get us to comply with their edicts.
I'm not too fond of it when Causes asks two different questions in the same setup.
1. Are you worried about AI? No, but I am concerned about possible abuses.
2. Should AI be monitored for hate speech? Yes, but I'm not sure what that means without a much better description.
Civil Rights Group to Monitor AI for Hate Speech.
I haven't heard about any AIs devised by White Supremacists for White Supremacists, but it's possible.
Racists might figure out a way to circumvent current checks in publicly available AIs, so it’s a good idea to have a watchdog group. But without legislation that has teeth, I’m skeptical.
Not much can be done if racists use an AI to somehow trick people into believing that being a White Supremacist is appealing, unless the racists brag about doing it.
Ripe for abuse!
The Chinese Communist Party is already using AI to destroy the USA by getting the RepublicaNazis elected into power by spreading RepublicaNazi lies.
How many ways and how many times are we going to be asked this same question? Yes, I am worried about the impact of AI, and I do believe it should be regulated, with laws passed to govern that regulation. Once it gets into the wrong hands and out of control, it will be too late to do anything about it.
For that matter, all tech should be regulated!
AI could be used to do jobs better than a human, reading millions of documents, for example, or doing dangerous jobs no person should do.
It should not be used in the arts, film, or writing; these are forms of human expression.
I'm glad someone is doing this, since Congress doesn't seem to be in any hurry.
I'm sure there are lots of great uses for AI, but we are behind the ball on so many tech innovations that can make our society worse. We need to get it together and put guardrails in place before AI is used to disrupt elections, cause mass panic, and possibly even spark conflict between nations. It's always better to be preventive than reactive.
I'm very concerned about the rise in hate speech in this country over the past 10 years, and if the government isn't doing enough, then it's up to the people.