That's according to a new paper published by researchers from Stanford and Google DeepMind.
Participants were then asked to replicate their own answers a fortnight later.
Remarkably, the AI agents were able to replicate the human participants' answers with 85% accuracy.
And that could make deepfakes even more dangerous.
Double agent
The research was led by Joon Sung Park, a Stanford PhD student.
The idea behind creating these simulation agents is to give social science researchers more freedom when conducting studies.
They may also be able to run experiments which would be unethical to conduct with real human participants.
Whether study participants would be morally comfortable with this is an open question.
For many, this will set dystopian alarm bells ringing.
The idea of digital replicas opens up a realm of security, privacy and identity theft concerns.