Cointelegraph
Written by Ciaran Lyons, Staff Writer
Reviewed by Felix Ng, Staff Editor

ChatGPT shows geographic biases on environmental justice issues: Report

A recent study found that the artificial intelligence (AI) chatbot ChatGPT has limitations in providing location-specific information on environmental justice issues.


Virginia Tech, a university in the United States, has published a report outlining potential biases in the artificial intelligence (AI) tool ChatGPT, suggesting variations in its outputs on environmental justice issues across different counties.

In the report, researchers from Virginia Tech found that ChatGPT has limitations in delivering area-specific information on environmental justice issues.

However, the study identified a trend: the information was more readily available in larger, more densely populated states.

“In states with larger urban populations such as Delaware or California, fewer than 1 percent of the population lived in counties that cannot receive specific information.”

Meanwhile, regions with smaller populations lacked equivalent access.

“In rural states such as Idaho and New Hampshire, more than 90 percent of the population lived in counties that could not receive local-specific information,” the report stated.

It further cited Kim, a lecturer in Virginia Tech’s Department of Geography, who urged further research as such biases continue to be uncovered.

“While more study is needed, our findings reveal that geographic biases currently exist in the ChatGPT model,” Kim said.

The research paper also included a map illustrating the extent of the U.S. population without access to location-specific information on environmental justice issues.

A United States map showing areas where residents can view (blue) or cannot view (red) local-specific information on environmental justice issues. Source: Virginia Tech

Related: ChatGPT passes neurology exam for first time

The findings follow recent reports of scholars uncovering potential political biases exhibited by ChatGPT.

On Aug. 25, Cointelegraph reported that researchers from the United Kingdom and Brazil had published a study concluding that large language models (LLMs) like ChatGPT output text containing errors and biases that could mislead readers.

Magazine: Deepfake K-Pop porn, woke Grok, ‘OpenAI has a problem,’ Fetch.AI: AI Eye
