Artificial intelligence is becoming a big part of our everyday lives, and it’s not just about robots. From virtual assistants like Apple’s Siri and Amazon Alexa, to robotic vacuums that clean your house without any help from you, to automated investment portfolio managers that rebalance holdings with little human intervention on Wall Street, AI has become an integral component of both our personal and professional lives. But when we think about the future of AI, it’s often science fiction that comes to mind: countless stories in which the robots someday become independent of humanity. Could that actually happen?

It’s a fascinating thought that some of the most advanced and intelligent machines on Earth might behave like toddlers. A recent Washington Post story profiles Google engineer Blake Lemoine, who says that LaMDA, the artificially intelligent chatbot generator he has had numerous deep conversations with, seems to show traits more commonly found in children. Has LaMDA achieved sentience?


The term “sentient” describes beings that can perceive or feel things. It is often used in contrast to “sapient,” which describes beings that can think and reason. While sentient beings may be aware of their experiences, they may not be able to think abstractly or understand complex concepts. Some philosophers believe, however, that a sentient AI could be minimally conscious: it would be aware of the experience it is having, hold positive or negative attitudes, and have desires. While this level of consciousness is far from sapience, it would still allow AI to interact with humans in a more meaningful way. As AI technology advances, sentient AI may become more common in the future.

When we feel that a machine language model is listening to us, that is an illusion. The “machine” has already made its decision based on the statistical calculations needed for this particular prompt, and it responds with words accordingly, much as humans do when they speak. Artificial intelligence tools such as GPT-3 or LaMDA can solve specific problems, like carrying on a seamless conversation between people, but Google’s DeepMind hopes to create something called “artificial general intelligence” (AGI). An AGI would be able to understand any task thrown at it, which could dramatically speed up problem-solving. Experts have said that a machine with an inner mind and the ability to feel or express emotions is beyond what is currently achievable; the technology doesn’t exist yet, but some researchers are working toward this goal nonetheless.
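To make the word-prediction idea concrete, here is a minimal sketch of the mechanism, assuming the Hugging Face transformers library and the small, openly available GPT-2 model as a stand-in (GPT-3 and LaMDA themselves are not publicly downloadable): the model simply continues a prompt by sampling likely next words, with no inner experience involved.

```python
# A minimal sketch of how a chatbot-style language model produces a reply:
# it does not "listen"; it predicts statistically likely next words.
# Assumes the Hugging Face `transformers` library and the small, public
# GPT-2 model as a stand-in for larger models like GPT-3 or LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Hello, how are you feeling today?"
inputs = tokenizer(prompt, return_tensors="pt")

# The model extends the prompt one token at a time, sampling each next
# token from the probability distribution it learned from text data.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The fluent-sounding reply is the product of this sampling loop, not of any awareness of the conversation.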

Google has denied that the company is working on any kind of sentient or general artificial intelligence, but some in academia worry about what might happen if the principles that have long guided AI research and applications become outdated.

When asked about Lemoine’s claims, a Google spokesperson said “the evidence does not support his claims.” Even so, some researchers see potential danger in the current trend of machine learning systems growing ever more complex over time while remaining poorly understood by humans.
