Here's a better problem - generative AI detecting humans

George Lawton, August 24, 2023
Can generative AI detect humans using some combination of paralinguistic metadata and public key cryptography?


AI-generated content is starting to pollute the Internet, businesses, and schools at an unprecedented scale. In some cases, it may be easier to detect human text than to flag generative AI content. At the very least, the two approaches could complement one another.

The rapid growth in AI-generated content is driving discussion on how AI vendors can improve tools to detect AI-generated content. This is an important aspiration, but these kinds of approaches are already falling short for text. And not just for 'black hats' attempting to breach AI security or destabilize democracies. Lazy students, overstretched employees, unscrupulous product marketers, and data labeling sweatshops will easily breach most safeguards with light editing. A much better approach may be to detect humans using some combination of paralinguistic metadata and public key cryptography.

And tools are emerging that can help to establish a chain of provenance for this. As I have previously written about on diginomica, AI content detectors for video, audio, and pictures can draw on a long history of digital watermarking and intellectual property protection tools. However, automatically detecting AI-generated text is a much harder problem to solve. Digital watermarks are far more challenging to embed into plain text. Some interesting progress is being made in embedding statistical patterns, odd grammar usage, and even punctuation conventions in text. One example was Genius' effort to embed a strange pattern in its song lyrics to prove that Google had been directly copying its content. Genius, however, lost that case in court.

The problem with bot text

School systems globally are concerned that recent progress in Large Language Model (LLM) powered generative AI will turbocharge students' efforts to cheat. In the long run, success in this endeavor may yield a large crop of incompetent workers unable to effectively run businesses and governments or, well, teach. But this is not just an academic problem. Governments are starting to enact legislation regarding unscrupulous product and service review practices. The UK is currently working through a proposed Digital Markets, Competition and Consumers Bill that bans exchanging money or free goods for writing product reviews. It's only a matter of time before similar legislation is extended to more automated approaches, such as unscrupulous marketers spinning up a crowd of fake humans to extoll the wonders of their products or trash-talk competitive offerings.

And data labeling companies are starting to grapple with a dispersed network of humans paid to apply labels to content for training the next generation of AI. These labels are essential to ensure that future AI tools can get better at identifying objects in images, vet toxic content, or improve the performance of a new crop of enterprise AI apps. One concern is that overworked data labelers may turn to ChatGPT and other LLMs. While this may be great for productivity on some data labeling tasks, a downside is that training LLMs on AI-generated content could lead to AI model collapse, in which the new models fail to perform as well as their predecessors.

Capturing behavioral metadata

A few years ago, the banking industry struggled with rising fraud empowered by new online services. Meanwhile, a growing subscription economy struggled with password sharing, in which individuals shared their passwords for highly valued information services with friends and family. It turned out that a lot of information is embedded not just in the text of a password but in the metadata about how it is typed. Owing to differences in typing style, cadence, and rhythm, people tend to type out the same letters in wildly different ways. Various teams call this behavioral biometrics, keystroke dynamics, or paralinguistic metadata. Different flavors of these techniques extend the concept to mouse movements and voice input.
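To make the idea concrete, the raw material of keystroke dynamics boils down to two kinds of timing: how long each key is held down (dwell) and the gap between releasing one key and pressing the next (flight). The sketch below is a minimal illustration of computing such a profile, not any vendor's actual algorithm; the event format, timings, and feature names are my own assumptions.

```python
from statistics import mean

# Each event: (key, press_time_ms, release_time_ms), as a hypothetical
# word processor plugin might record them.
events = [
    ("h", 0,   95),
    ("e", 140, 230),
    ("l", 310, 395),
    ("l", 470, 560),
    ("o", 640, 740),
]

def typing_profile(events):
    """Summarize dwell (hold) and flight (gap) times -- the raw
    material of keystroke-dynamics biometrics."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2]  # next press minus current release
               for i in range(len(events) - 1)]
    return {"mean_dwell_ms": mean(dwells), "mean_flight_ms": mean(flights)}

print(typing_profile(events))  # → {'mean_dwell_ms': 92, 'mean_flight_ms': 70}
```

A real system would collect far richer features (per-digraph timings, variance, pressure where available) and compare them statistically against a known profile of the writer, but the shape of the data is this simple.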

In the academic arena, it would make sense to embed measures of these behavioral metrics into a new generation of word processors. It may not even be necessary to develop entirely new apps, either. They could simply be incorporated into an open-source library that existing word processors, web apps, and other tools could consume. 

It would also be important to establish a chain of provenance to reduce the risk of this metadata being tampered with after the fact. Established tools like PGP's approach to public key cryptography could cryptographically sign the combination of text and associated paralinguistic metadata. Alter one or the other, and the whole package would fail to certify that you wrote it.


Several obvious challenges would need to be addressed for this to work out in the long run. Privacy is a big one. It may be helpful for academic institutions to discern whether a paper was written by you, by your overly helpful parents, or cut and pasted from ChatGPT. However, they would have to balance this capability against students' privacy. In some digitally proctored testing scenarios, students may be willing to suffer the watchful eye of a service that fully locks down their plugins and ensures there is no one else, and no phone, in the room. That is not practical for the day-to-day hustle and bustle of life.

A related issue is that privacy advocates and browser makers are already rallying against browser fingerprinting techniques, which capture information about unique combinations of plugins, versions, locations, and other metadata to identify individuals, sometimes uniquely, in the absence of cookies. Human provenance tracking would need to carefully balance better human detection against these same privacy concerns.

It’s also important to note that while some human detection techniques may work on day one, creatively lazy students, curious hackers, and 'black hats' will quickly find workarounds for the first implementations. As with all things in security, a perfectly secure system is only perfect on the day it rolls out.

In the related realm of AI image detectors, the latest generation of tools looks for biometric artifacts of actual humans. For example, a few years ago, MIT developed a couple of interesting algorithms that could detect your heart rate by capturing subtle color changes and physical vibrations. I have played with a few versions of this over the years, and it sort of works if you keep very still. Vendors like Intel are starting to adopt similar principles to check whether the person in a video at least has a pulse, even if the measurement is not medically precise.

However, the challenge is that many fake people generators use generative adversarial networks (GANs), which pit a generator algorithm against a separate detector algorithm during training. Traditionally, these detectors flag things that look fake to humans, and pulse has not been an issue because most of us cannot perceive the subtle color changes with any accuracy. Once the new fake detection algorithms go into the wild, they will simply be added to the GANs' training loops to fool the fake detectors as well.

My take

Building human detectors will not be easy, for the reasons just observed. However, it may prove more tractable in the long run and would provide a complementary tool to AI fake detectors. Open-source approaches will make it easier for a larger community of security experts, privacy researchers, academic institutions, and enterprises to work through bugs using modular, interchangeable libraries. New updates will need to be developed every time creative students post their clever hacks, much as the Kia Boys popularized a glaring security hole in Kia cars on TikTok, sparking a crime wave.

Also, these tools don’t just need to be focused on catching people out for passing AI work off as human. A few years ago, I took a renewed interest in my typing inadequacies after breaking my 'P' key on two separate keyboards. A careful observation of my typing technique revealed a funny habit of reaching across the keyboard to hit it with my ring finger rather than the more appropriately positioned pinky. On top of that, I banged all the keys pretty hard, despite my fingers and hands protesting after a long day of this torture. A bit of time exploring a more relaxed and ergonomic form using the Keyboard Hero app helped mitigate some of my more egregious habits. Improved tools for analyzing the various aspects of keyboard form, including delays, sound levels, and flow, could go a long way toward a more relaxed and flowing work experience.

I was recently chatting with a friend who had observed two pavers listening to classical music. She said that when they worked, it was like watching a soft, smooth dance of humans and equipment as they coordinated their movements to the slow, rhythmic beat of the music. She planned to hire them the next time she did her driveway. Wouldn’t it be great if we could all work like that?  
