
Big Tech’s fail — unsafe online spaces for women


Soon after U.S. President Joe Biden stepped away from the 2024 U.S. presidential race and endorsed U.S. Vice-President Kamala Harris as the Democratic Party nominee, Ms. Harris received swift support from notable political figures, including former President Barack Obama. But her candidacy also sparked intense political debate, and her campaign was marred by Artificial Intelligence (AI)-generated deepfakes and disinformation.

Even before her candidacy was announced, Ms. Harris was the target of memes and videos that focused on her mannerisms and generally portrayed her in a bad light. These attacks escalated after the announcement, and they were personal, focusing on her birth, her character, and her integrity as an American. For instance, Elon Musk shared a manipulated video featuring her cloned voice, in which she could be heard saying that “President Biden is senile”; that she does not “know the first thing about running the country”; and that, as a woman and a person of colour, she is the “ultimate diversity hire”.

In addition to these digital assaults, Ms. Harris faced relentless trolling, particularly from right-wing figures. Former U.S. President Donald Trump often mocked her laugh and labelled her “crazy”. Media personalities Megyn Kelly and Ben Shapiro were explicit in their posts insinuating how Ms. Harris had risen to the top. Social media was flooded with derogatory jokes, sexualised images, and racist and sexist comments directed at her. A recent AI-generated video depicted Ms. Harris and Mr. Trump in a fabricated romantic relationship. Such videos not only violate privacy but also deeply undermine the dignity of women. Even when users know such content is fake, its wide circulation points to deep engagement with it.

No isolated case

Ms. Harris’s ordeal is not an isolated case. Women in power, or those aspiring to high office, face similar online harassment. When U.S. politician Nikki Haley, for example, was running in the Republican primaries, manipulated, sexually explicit images of her were circulated online. Italian Prime Minister Giorgia Meloni was targeted with an explicit deepfake video. In Bangladesh, deepfake images of women politicians Rumin Farhana and Nipun Roy appeared on social media just before the general election on January 7, 2024. Such content garnered millions of views.

This raises the question: how and why do social media platforms allow such content to be posted and shared? What do these platforms’ content moderators actually do?

Big Tech’s failure to curb the deluge of degrading content against women imposes a disproportionate burden on women, affecting their identity, dignity and mental well-being. The nature of the online abuse women face is also starkly different from the trolling or insults directed at men. While men may encounter misinformation and disinformation about their actions or duties, women face objectification, sexually explicit content and body shaming. Big Tech companies often dodge accountability by claiming that their platforms merely reflect their users and that content cannot be closely controlled, and they enjoy immunity from liability under ‘safe harbour’ protections.

More an illusion of empowering women

Though technology is often praised as a tool for women’s empowerment, AI and digital technologies appear anything but gender-neutral. Instead, they reflect societal biases and existing stereotypes. Rather than liberating women, AI can amplify entrenched biases and become a new tool for their abuse and harassment. With AI’s rapid evolution, women face increased risks of digital abuse, violence, and threats. These systems, shaped by datasets infused with societal prejudices and developed mostly by men, often lack the inclusivity needed to challenge discrimination effectively. According to data from Glass.ai, the representation of women among AI developers at Meta, Google, and OpenAI is also low.

Imagine the challenges faced by a serving woman Prime Minister such as Ms. Meloni; now consider the plight of ordinary women. Online harassment drives many women to stop using digital devices, or their families restrict their access to these devices, further hindering women’s careers and public lives. This is not the solution.

Creation and distribution platforms must take responsibility for failing to curb the spread of harmful content. It is surprising that, despite technological advancements, resources are not being invested in developing safety features or enhancing content moderation techniques. Labelling AI-generated content is not always effective; often, harmful content needs to be removed entirely. With sexually explicit content, for example, the damage comes from its sharing and viewing. Most troubling of all is that the owners of Big Tech companies themselves share misinformation and deepfake videos. While they are entitled to hold and profess a political ideology, they should also recognise the power they wield over millions of users who may not be able to tell fake from real.

Beyond clicks and likes

Big Tech should treat proper content moderation teams and safety researchers as a necessity, not a liability. The time taken to review reported pornographic content is often too long, compounding the harm and violating the platforms’ own policies. The burden should not fall on users to report and follow up on harmful content; platforms must share that responsibility. Apps that offer explicit services harmful to women should be critically reviewed and promptly removed from app stores.

Big Tech and policymakers need to resolve such incidents promptly. Women should also be encouraged to take proactive measures by reporting such incidents and pursuing the necessary action. Ms. Meloni, for her part, sought €100,000 in damages. Ms. Harris and her campaign team were able to turn the trolling attacks on their head and question the inherent misogyny of such online attacks. Could we consider heavy monetary fines, or restricting offending platforms for a set number of days and within specific geographical limits?

We need more women involved in developing technology and holding decision-making positions in tech companies. AI entrepreneur Mustafa Suleyman, in his book The Coming Wave, argues that moving from technical to non-technical measures is key.

To make online spaces safer for women, we need safety researchers and simulation exercises that test for gender biases, especially where AI is involved. Technical professionals can audit data for biases, since a model is only as good as its training data, while simulations can assess potential risks; a minimal sketch of such a data audit follows. This will help ensure AI that is fair, safe and ethical by design. Non-technical measures such as laws, policies and governance structures must support these efforts.
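As a purely illustrative aside, here is a minimal Python sketch of what “checking data for biases” can mean in practice. The dataset, column name and threshold are hypothetical assumptions for illustration, not drawn from any platform’s actual tooling:

```python
# Minimal, illustrative training-data audit (hypothetical data and
# threshold; real bias audits use far richer fairness metrics).
from collections import Counter

def representation_report(records, group_key="gender"):
    # Share of each demographic group in the dataset.
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(report, threshold=0.4):
    # Groups whose share falls below an assumed fairness threshold.
    return [g for g, share in report.items() if share < threshold]

# Hypothetical training records, for illustration only.
training_data = [
    {"gender": "female", "text": "..."},
    {"gender": "male", "text": "..."},
    {"gender": "male", "text": "..."},
    {"gender": "male", "text": "..."},
]

report = representation_report(training_data)
print(report)                         # {'female': 0.25, 'male': 0.75}
print(flag_underrepresented(report))  # ['female'] -> rebalance before training
```

A real audit would go further, for instance by measuring how a trained model’s error rates differ across groups, but even a simple representation check of this kind can catch skewed training data before a model is built.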

Ensuring that technology is free from gender bias should not be the job of feminists, social scientists, ethicists, or users alone. The responsibility must start with the tech companies that thrive on revenue from content generated through their user interfaces, developers, and algorithms. Governments and their regulatory bodies must set the guardrails to keep these digital spaces safe and fair for women.

Manish Tiwari is Director of the Institute for Governance, Policies and Politics, New Delhi, focusing on policy for societal good in emerging technology and health ecosystems


