HARRIS COUNTY, Texas – For the first time in the Houston area, a federal grand jury has indicted a 34-year-old man who allegedly used artificial intelligence apps to create child pornography from photos of victims he knew.
Kane James Kellum is in federal custody and scheduled to be arraigned Friday morning.
“AI, like anything in any crime, they’re using this technology to prey on the vulnerable,” FBI Houston Supervisory Special Agent Torrence White told KPRC 2 News. “Like every emerging technology out there, we’re seeing actors take advantage of where there’s little knowledge about exactly how AI is working.”
Kellum was arrested just before Christmas, and last week the grand jury returned an indictment on four counts related to child pornography, also known as child sexual abuse material (CSAM).
Baytown police and federal investigators found multiple videos depicting a 3-year-old and a 15-year-old, both of whom Kellum knew, engaged in explicit acts, according to the criminal complaint.
In one video, investigators wrote in the complaint, the victim’s face was superimposed on an adult woman’s naked body; in another, the victim’s clothes were completely removed. In both videos, the victims were depicted dancing on a table.
Kellum allegedly admitted to using at least two apps to make videos of at least one of the victims, but said he may have made videos of the other victim “when he was drunk,” records show.
“Anyone can become a victim, anyone can be a subject. There’s no demographic,” White said.
While most U.S.-based social media companies flag suspected CSAM that crosses their servers and report it to law enforcement, the FBI said many newer AI platforms, often based overseas, may not be obligated to report such activity.
That makes the transmission or creation of AI-generated content more difficult for investigators to detect.
AI regulation
AI law expert Peter Salib, an assistant professor at the University of Houston Law Center, said companies are racing to build more powerful platforms, but they also have a responsibility to prevent this kind of abuse.
“What do you think it takes and how long does it take to stop this kind of content from being produced from these platforms?” KPRC 2’s Bryce Newberry asked.
“The real answer is no one knows,” Salib said.
Salib said legislative action, similar to California’s SB 53, is also needed to prevent risky outputs from AI.
“Every guardrail on every AI system ... can be jailbroken,” Salib said. “I have many concerns ... We’re right at the beginning of a big tidal wave.”
The FBI first warned in March 2024 that using generative AI to create CSAM is illegal, but the wave is forcing investigators to adapt quickly.
“We’re always learning,” White said. “We as investigators need to work diligently to learn a little more about these platforms and how these perpetrators are using it.”
Protecting your family
Law enforcement suggests locking down social media accounts where pictures of children may be shared.
“Families are sharing images with their kids, graduation, their first day of school, those things that are for good reasons. Perpetrators have the ability to also get those images ... and then they modify them to produce CSAM (child sexual abuse material),” White said.
Parents could also consider covering a child’s face with an emoji or something similar when posting online.
“AI image generators can take a photo of anybody and they can put that face on anybody doing anything,” Salib said.