
New Report Warns of the Danger AI Poses to Children

Artificial intelligence is on the rise, and its profound effects on society may not all be good.

According to an investigation by Human Rights Watch (HRW), images of children are being used, without consent, to train AI models, exposing those children to significant privacy and safety risks.

Researcher Hye Jung Han found that photos posted to various online platforms, even under strict privacy settings, are being scraped from the internet without parental consent and folded into larger datasets used to train popular AI programs.

The group found that children whose images appeared in such datasets, including LAION-5B, were easily identifiable. Some of their names were even included in the accompanying caption or in the URL where the image was stored. From a single photo link, for example, Han traced "both children's full names and ages, and the name of the preschool they attend in Perth, in Western Australia."

[The] investigation, which examined less than 0.0001 percent of the 5.85 billion images in the LAION-5B dataset, identified 190 photos of children from all of Australia’s states and territories. This sample size suggests that the actual number of affected children could be significantly higher. The dataset includes images spanning the entirety of childhood, making it possible for AI image generators to create realistic deep fakes of real Australian children. Han found examples of images from “unlisted” YouTube videos, which should only be accessible to those with a direct link, included in the dataset. This raises questions about the effectiveness of current privacy measures and the responsibility of tech companies in protecting user data. Via Breitbart News. 

The investigation also found that photo links in the dataset came from personal blogs and from posts by schools and family photographers who had been hired to take pictures for personal use. HRW noted that some of the images in the LAION-5B dataset may have been uploaded a decade before the dataset was even created.

“Current AI tools create lifelike outputs in seconds, are often free, and are easy to use, risking the proliferation of nonconsensual deep fakes that could recirculate online forever and inflict lasting harm,” Human Rights Watch said. 

Stanford University released a report last year warning of the dangers of artificial intelligence and the risks it poses to children. 

AI is being used to “produce explicit adult content, including child sexual abuse material (CSAM) as well as to alter benign imagery of a clothed victim to produce nude or explicit content,” the report noted. 
