Artificial intelligence will never become a conscious being because it lacks the intention that is intrinsic to human beings and other biological creatures, according to Sandeep Nailwal, co-founder of Polygon and the open-source AI company Sentient.
“I don’t see that AI will have any significant level of conscience,” Nailwal told Cointelegraph in an interview, adding that he does not believe the doomsday scenario of AI becoming self-aware and taking over humanity is possible.
The executive criticized the theory that consciousness emerges randomly from complex chemical interactions or processes, arguing that while such processes can create complex cells, they cannot create consciousness.
Instead, Nailwal is concerned that centralized institutions will misuse artificial intelligence for surveillance and to curtail individual freedoms, which is why he argues AI must be transparent and democratized. Nailwal said:
"That is my core idea for how I came up with the idea of Sentient, that eventually the global AI, which can actually create a borderless world, should be an AI that is controlled by every human being."
The executive added that these centralized threats are why every individual needs a personal AI that works on their behalf and is loyal to them, protecting them from other AIs deployed by powerful institutions.
Sentient’s open model approach to AI vs the opaque approach of centralized platforms. Source: Sentient Whitepaper
Related: OpenAI’s GPT-4.5 ‘won’t crush benchmarks’ but might be a better friend
Decentralized AI can help prevent a disaster before it happens
In October 2024, AI company Anthropic released a paper outlining scenarios where AI could sabotage humanity and possible solutions to the problem.
Ultimately, the paper concluded that AI is not an immediate threat to humanity but could become dangerous as models grow more advanced.
Different types of potential AI sabotage scenarios outlined in the Anthropic paper. Source: Anthropic
David Holtzman, a former military intelligence professional and chief strategy officer of the Naoris decentralized security protocol, told Cointelegraph that AI poses a massive risk to privacy in the near term.
Like Nailwal, Holtzman argued that centralized institutions, including the state, could wield AI for surveillance and that decentralization is a bulwark against AI threats.