Please do not give me shit for using Facebook. It’s how I keep in touch with relatives, most of whom live abroad, and my brother, who has ASD and prefers to communicate with me that way.
I would rather not use it, but I would prefer keeping in touch with my brother.
That said, I would not let AI keep in touch with my brother for me.
I’ve been dreaming of a social network where AI would intermediate all interactions, biasing the network towards positivity and away from hostility.
I don’t know if that is a realistic idea. I don’t know if AI at our current level could discern positivity from hostility well enough. There’s too much emotion in language; sorting that out properly would, I think, require a deep understanding of emotion itself.
It absolutely could.
With a reference frame constructed from over 500 adults, we tested a variety of mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding 89% of human participants with an EQ of 117.
We first find that LLM agents generally exhibit trust behaviors, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that LLM agents can have high behavioral alignment with humans regarding trust behaviors, particularly for GPT-4, indicating the feasibility to simulate human trust behaviors with LLM agents.
A lot of people here have no idea just how far the field actually has come from dicking around with the free ChatGPT and reading pop articles.