Andriy Link

Does AI think?

Technologies are constantly evolving and advancing, especially those driven by Artificial Intelligence.

The AI system Francesca Rossi, Head of Ethics at IBM Research, speaks about is ChatGPT: one of the large language models, or LLMs, that are redefining AI in the public consciousness. However, its rise also raises questions about ethics and about the opportunities that follow.


According to Forbes, 60% of businesses report increased efficiency, productivity, and better customer relationships thanks to predictive AI. Others remain skeptical: over 75% of consumers have trouble trusting AI, mainly because of misinformation and other significant issues related to the use of generative AI.

Yet, AI is among the Top Information Technology Trends 2023. Its market size is growing along with demand for the technology, and AI is expected to win even more enterprise trust, reaching a market value of $407 billion by 2027.


Sencury has a great article on ChatGPT’s business value. In this article, let’s talk about AI’s ability to think. What is an LLM? Can it perform reasoning? How does it act? What is its thinking process like? Can LLMs suffer from hallucinations and catastrophic forgetting?


What is a Large Language Model (LLM)?

Cathy Li, the World Economic Forum’s Head of AI, Data, and Metaverse, defines a large language model as an unusually capable computer program that can generate language similar to ours. An LLM is driven by deep learning, a subset under the AI umbrella. Any LLM is trained on text data from books, articles, and websites; ChatGPT, for example, was trained on 570 GB of varied data. This massive amount of data helps the model trace, understand, and learn the patterns and relationships between words and sentences.


During training, the LLM focuses on predicting the next word based on the text it has processed so far. So, when a user interacts with a large language model such as ChatGPT, it analyses the question or prompt and then tries to predict, word by word, an answer that meets the user’s expectations.
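To make the idea of next-word prediction concrete, here is a minimal Python sketch: a toy bigram model that only counts which word follows which in a tiny corpus. This is nothing like ChatGPT’s transformer architecture, and the corpus and output are purely illustrative, but the core loop is the same: look at the text so far and predict the most likely next word.

```python
from collections import defaultdict, Counter

# Toy "training corpus" standing in for the books, articles, and websites
# a real LLM would be trained on.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a bigram model -- a drastically
# simplified stand-in for an LLM's next-word prediction).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short continuation, one predicted word at a time.
word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # a continuation learned purely from word-pair statistics
```

A real LLM does the same kind of prediction over tokens, but with a neural network trained on billions of examples instead of simple word counts.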




Reasoning, Acting, and Thought Process in LLMs

Large language models (LLMs) have shown impressive reasoning capabilities, largely thanks to the Chain-of-Thought (CoT) technique. However, LLMs still struggle to generate action plans for carrying out tasks in a given environment, and they often fail at complex reasoning involving math, logic, and common sense.


Chain-of-Thought (CoT) prompting stands for breaking a problem down into intermediate reasoning steps. This allows LLMs to improve their performance on complex reasoning tasks, and CoT prompting becomes especially helpful as the model is scaled up.
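As a rough illustration, here is a sketch of what CoT prompting can look like in practice. The questions, numbers, and wording are illustrative assumptions, not a prescribed template; the point is that the prompt demonstrates intermediate steps instead of asking for the answer in one jump.

```python
# Direct prompt: the model is asked for the answer in one step.
direct_prompt = (
    "Q: A shop sells pens in packs of 12. I buy 4 packs and give away 7 pens. "
    "How many pens do I have left?\n"
    "A:"
)

# Chain-of-Thought prompt: a worked example shows the model how to spell out
# intermediate reasoning steps, and the new question is expected to be
# answered in the same step-by-step style.
cot_prompt = (
    "Q: A box holds 6 eggs. I buy 3 boxes and use 5 eggs. How many eggs are left?\n"
    "A: 3 boxes of 6 eggs is 3 * 6 = 18 eggs. Using 5 eggs leaves 18 - 5 = 13. "
    "The answer is 13.\n\n"
    "Q: A shop sells pens in packs of 12. I buy 4 packs and give away 7 pens. "
    "How many pens do I have left?\n"
    "A: Let's think step by step."
)
```

Either string can be sent to an LLM; the CoT version typically nudges the model to reason through the intermediate arithmetic before committing to a final answer.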


Take ChatGPT: this LLM also shows strong reasoning. But to make it handle complex tasks faster, your prompt has to be broken down into smaller parts.

Human Thinking vs. ChatGPT

Origin

  • Human thinking consists of reasoning, thinking, assessing, and cognition.

  • ChatGPT is an AI LLM trained on massive amounts of data. Essentially, it was created by humans.

Learning

  • Humans observe, experience, and learn new information to be able to use it afterwards.

  • ChatGPT uses statistical models and algorithms (regular training) to learn from huge amounts of text data and generate human-like responses.

Creativity

  • Humans are innovative thinkers and can generate new information, e.g., music, art, etc.

  • ChatGPT cannot be creative. It generates new information by processing the initial input and its existing text memory.

Decision-making

  • Humans make decisions based on different factors: logic, emotions, circumstances, and data.

  • ChatGPT relies on the data it possesses, so its decision-making is based on hard evidence. The choice will still be up to humans.

Social Skills

  • Humans distinguish sentiments.

  • AI is unable to recognize sentiments.

Hallucinations and Catastrophic Forgetting

When an AI LLM answers with information that does not directly correspond to the provided input, the phenomenon is called a hallucination. It is a tricky situation because the model starts giving out “false knowledge”. Why does an LLM hallucinate? Mainly due to a lack of context. This can be corrected by supplying the LLM with additional text, code, or other relevant data, a process called context injection. It involves embedding additional information into the prompt to give the LLM the knowledge it requires to respond appropriately.
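As a rough illustration of context injection, the Python sketch below pastes a few relevant facts into the prompt before the user’s question. The snippet texts, the helper function, and the prompt wording are all assumptions made up for this example.

```python
# A minimal sketch of context injection: the snippets, their wording, and the
# prompt template are illustrative assumptions, not a specific product's API.

# Facts the base model may not know (e.g., internal docs, recent data).
knowledge_snippets = [
    "The support portal moved to support.example.com in May.",  # hypothetical fact
    "The Pro plan includes 24/7 chat support.",                 # hypothetical fact
]

def build_prompt(question: str, snippets: list[str]) -> str:
    """Embed the relevant snippets into the prompt so the LLM answers
    from the supplied context instead of guessing (hallucinating)."""
    context = "\n".join(f"- {snippet}" for snippet in snippets)
    return (
        "Answer the question using only the context below. "
        "If the context is not enough, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt("Where do I open a support ticket?", knowledge_snippets)
print(prompt)  # this string would then be sent to the LLM of your choice
```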


Catastrophic forgetting, in its turn, is the tendency of a neural network to forget previously learned tasks as it learns new ones: the algorithm loses past knowledge and overwrites it with new data.
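Here is a deliberately tiny illustration of the effect, using a two-weight logistic “network” in NumPy rather than a real LLM. Task B is constructed to conflict with task A, so continued training on B alone overwrites the weights that solved A.

```python
# A toy sketch of catastrophic forgetting: train on task A, then on a
# conflicting task B, and watch accuracy on task A collapse.
import numpy as np

rng = np.random.default_rng(0)

def make_task(flip: bool, n: int = 200):
    """Task A labels points by the sign of feature 0; task B flips the rule."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w, X, y, lr=0.5, epochs=200):
    """Plain gradient descent on logistic loss, continuing from weights w."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

X_a, y_a = make_task(flip=False)  # task A
X_b, y_b = make_task(flip=True)   # task B (conflicts with A)

w = train(np.zeros(2), X_a, y_a)
print("accuracy on task A after learning A:", accuracy(w, X_a, y_a))  # close to 1.0

w = train(w, X_b, y_b)            # keep training on task B only
print("accuracy on task A after learning B:", accuracy(w, X_a, y_a))  # drops sharply
```

Techniques such as replaying old data or constraining weight updates are commonly used to limit this effect, but the toy example shows why naive sequential training is risky.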


“Behind the Scenes” of today’s LLM

Pascale Fung, Professor at the Hong Kong University of Science and Technology, notes that AI has gone through many changes recently. Although generative AI is relatively new to us, it builds heavily on deep learning and neural networks. Today’s generative AI models are more powerful because they rely on huge amounts of training data, and their enormous parameter size also shapes what LLMs can do.

These generative AI models are the foundation for conversational AI. However, ChatGPT itself doesn’t fall under the category of a conversational AI system; it is rather a foundational model, an LLM that can perform various tasks. The innovation here is the chat interface, which lets users interact with it directly. So it may seem as if ChatGPT can take part in conversations and lead dialogues, but it cannot. Instead, ChatGPT can be used to build conversational AI and other systems.
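As a rough sketch of that idea, the snippet below wraps a foundation model in a simple conversational loop by storing the dialogue history and resending it with each request. It assumes the OpenAI Python SDK’s chat-completions interface; the model name and system message are placeholders, not a recommendation.

```python
# A minimal sketch of building a conversational layer on top of a foundation
# model. Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the
# environment; model name and system message are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# The conversational behaviour lives here: we keep the dialogue history
# ourselves and resend it with every request, because the underlying LLM
# is stateless between calls.
history = [{"role": "system", "content": "You are a helpful support assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hi! What can you help me with?"))
print(chat("Can you summarize what I just asked?"))  # relies on the stored history
```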


Rossi points out that before ChatGPT, people used AI a lot without even realizing it; AI was hidden inside many of the applications we used online. Now that awareness is somewhat heightened. However, it is still unclear how to deal with LLM concerns and where ChatGPT’s limits lie.


Sencury on AI’s Capabilities

Sencury is one of the many companies that provide AI services to their customers. However, our experts have extensive experience and solid AI knowledge, which allows us to deliver quality AI-driven solutions to you faster.

With Sencury’s expertise, you can receive:

  • Natural Language Processing

  • Computer Vision

  • Neural Networks

  • Cognitive Computing

  • Deep Learning

  • ML Model Development

  • Data Engineering

  • Data Analysis

  • Predictive Analytics

  • Chatbot Development

  • Data Mining

  • Marketing Automation Solutions

Tell us about your business needs and let’s work together to implement AI into your workflows!

