The Eliza Effect describes the human tendency to attribute human characteristics to non-human entities, thus anthropomorphising machines, computers, and things. Inspired by a 1960s chatbot and more relevant today than ever, this effect serves as a warning about the nature of human-AI relationships.

Let's start by level-setting: ChatGPT, Bard, Bing and the broader wave of Large Language Models (LLMs) are generally not considered to be 'thinking' or 'sentient'. Instead, LLMs draw on enormous data sets (literature, online content, social media etc.) and use a process known as deep learning, a combination of algorithms and statistical models, to generate text based on pattern recognition and context. You might consider them to be glorified autocomplete systems, but if you've been chatting with them, you'll know that this doesn't do them justice. Indeed, the web is filled with examples of people having fascinating conversations with LLMs as if they were friends, trusted advisors, and even virtual lovers.

The misalignment between a computer's external output and what we assume is happening inside it isn't new. In 1966, MIT's Joseph Weizenbaum created Eliza, a chatbot that mimicked a therapist's questions. Eliza was programmed to rephrase user statements as questions and to respond to a few trigger words with blocks of canned follow-ups: if you said 'my mother doesn't like me,' Eliza would ask 'why do you think your mother doesn't like you?', and the word 'mother' would prompt further questions like 'tell me about your family life' (a minimal sketch of this style of pattern matching appears at the end of this post). Eliza was a very basic program by today's standards, yet Weizenbaum was shocked by how readily users believed that the chatbot was listening to them and cared about them, and by how much they would reveal to it as a result (See Origins below for more).

LLMs are now developing at a rate reminiscent of Moore's Law and, on a connected but different path, the development of robots is also accelerating. The combination of LLMs and robotics is set to further challenge our perceptions of artificial intelligence, but beware the Uncanny Valley.

The Uncanny Valley is a phenomenon first identified by Masahiro Mori, a Japanese professor of robotics. It describes the discomfort, and even revulsion, humans may feel towards robots or non-human entities that are almost, but not quite, human-like. Concretely, while an obviously synthetic and shiny robot might be accepted by humans, as would a robot that looked and behaved exactly like a human, a robot that almost looked and behaved like a human would likely provoke negative reactions in us.

Ironically, until that completely human-like appearance can be achieved, we're likely to see more basic options that leverage the Eliza Effect to get us to engage with them as if they were human.
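To make Eliza's mechanism concrete, here is a minimal Python sketch of that style of pattern matching. It is only an illustration of the technique described above, not Weizenbaum's original implementation (which was written in MAD-SLIP); the patterns, reflections, and function names are invented for this example.

```python
import re

# Reflect first-person words back at the user ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

# Trigger rules: rephrase a statement as a question, or react to a keyword.
# Rules are checked in order; the first match wins.
RULES = [
    (re.compile(r"my (.+) doesn't like me", re.I),
     "Why do you think your {0} doesn't like you?"),
    (re.compile(r"i feel (.+)", re.I),
     "Why do you feel {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     "Tell me about your family life."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so an echoed fragment reads from the bot's viewpoint."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first rule-based response, or a generic fallback prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."

if __name__ == "__main__":
    print(respond("my mother doesn't like me"))  # Why do you think your mother doesn't like you?
    print(respond("I feel lonely"))              # Why do you feel lonely?
    print(respond("it's about my family"))       # Tell me about your family life.
```

Nothing in this sketch understands anything: the illusion of attention comes entirely from echoed pronouns and canned follow-ups, which is exactly what made users' emotional responses to Eliza so striking.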