

GPT-5 not in training yet – Scared of something?



Written by: Sanni Salokangas - from team Kaaos.

Essay type: Individual essay / 2 essay points.
The estimated reading time of this essay is 4 minutes.

See this post also at sannisalokangas.com

Can excitement temporarily blind rational vision? Large language models (LLMs) are advancing hyper-quickly, with new AI-dedicated startups, features and implementation methods coming out every single week. OpenAI CEO Sam Altman mentioned at a recent Economic Times conference that GPT-5, the next version of ChatGPT, is not in training yet and will not be “for some time”. Concerns about the fast development of LLMs are in the air. Is the artificially intelligent stone rolling too fast?
 
The latest advancements in generative AI have taken the world by storm. AI has been a massive topic of discussion not only in the tech industry; the words AI and GPT have also found their way into people’s everyday coffee-break conversations. Hesitation and ignorance among those not seeking out the facts create confusion around the topic. Fear is a natural human response to something that is hard to explain, and a good indicator that LLMs remain a topic people need more education on. However, OpenAI’s collective decision not to touch GPT-5’s training for some time suggests that the fear looming over AI’s fast advancement is not entirely unfounded.
 

AI, the integrated assistant

First off, there is a huge number of advantages to artificial intelligence integrated into the everyday lives of humans. The key word is ‘integrated’, referring to AI as an assistant that helps humans automate or outsource tasks that used to require actual manual work. A good example of AI used in practice is automating the setup of a meeting: AI is integrated with individuals’ virtual calendars, where schedules are already full. To find a meeting time for all the individuals, AI locates the time slots that are open for everyone and schedules the meeting (a rough sketch of this idea follows below). The same task done by hand takes time; AI does it in seconds. In addition, the ways companies use AI go beyond performing simple tasks, often involving full integrations and features that make their products better or even create new, innovative products. Google is an example of a company not only adopting AI into its existing products but also developing totally new ones: adding a ‘Help Me Write’ feature to Gmail that lets users draft and expand their emails with generative AI, and introducing a trendy chatbot, Bard, that uses conversational AI to hold discussions, fetch information and perform simple tasks for its users.
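As a rough sketch of the scheduling idea above: the snippet below is purely illustrative (the calendar format, the find_common_slot helper and the sample data are all invented here, not any real assistant’s code). A real assistant would read busy times from a calendar API, but the core logic is simply finding the first gap shared by everyone.

```python
from datetime import datetime, timedelta

def find_common_slot(calendars, duration, day_start, day_end):
    """Return the first (start, end) window of `duration` free in every calendar."""
    # Flatten all busy intervals and sweep through them in time order.
    busy = sorted(slot for cal in calendars for slot in cal)
    cursor = day_start
    for start, end in busy:
        if start - cursor >= duration:        # a gap before this busy block fits
            return cursor, cursor + duration
        cursor = max(cursor, end)             # skip past the busy block
    if day_end - cursor >= duration:          # room left at the end of the day
        return cursor, cursor + duration
    return None                               # no shared opening today

day = datetime(2023, 5, 4)
alice = [(day.replace(hour=9), day.replace(hour=11))]   # busy 9-11
bob = [(day.replace(hour=10), day.replace(hour=12))]    # busy 10-12
print(find_common_slot([alice, bob], timedelta(minutes=30),
                       day.replace(hour=9), day.replace(hour=17)))
# -> (12:00, 12:30), the first half-hour open for both
```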
 

Caution in speedy development

As said, alongside the excitement for advancing LLMs and all the possibilities they introduce, it is important to consider the downsides and possible side effects that the speedy development of artificial intelligence brings. One spokesperson for AI safety awareness is Elon Musk, co-founder of OpenAI and founder of X.AI, his fresh company dedicated to artificial intelligence. Musk is known for giving thought-provoking statements about the dangers of AI, calling it a ‘double-edged sword’. In an interview with Tucker Carlson, Musk said that AI has the potential for civilizational destruction: “AI is more dangerous than, say, mismanaged aircraft design or production maintenance”. Musk is a vocal advocate for caution in AI development, signing an open letter alongside over 1,100 other signatories calling for a pause on the training of systems more powerful than GPT-4. In addition, the name of Musk’s new AI company, X.AI, refers to explainable AI (XAI), whose mission is to build transparent systems that help humans understand the reasoning behind the decisions AI makes. Explainable AI is crucial for the safer development of artificial intelligence as, for example, black-box models are becoming more and more common.
 

Photographer: Samuel Corum/Bloomberg via Getty Images

What is a black box?

The term black box is used when an AI model generates a response or output without any clear explanation of how it reached its conclusion. Black boxes are deceiving: a model can give out a response it is highly confident in while offering no reasoning for it. This lack of transparency in AI decision-making erodes human trust, bringing additional issues to the table. Using generative AI to make decisions can be effective if the user considers the possibility of black boxes and, for example, biased or discriminatory outputs. Researchers and users have noticed major bias in generative and conversational AI models, such as preferring male applicants over female ones in a recruiting task assigned to ChatGPT. Depending on the data used to train an AI model, its outputs can be controversial and, due to black boxes, not always trustworthy.
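To illustrate why this is deceiving, here is a deliberately toy sketch (the model, its rule and the data are all invented for this example, not taken from any real system): the caller receives only a label and a confidence score, so a biased internal rule stays completely invisible from the outside.

```python
# Hypothetical black-box model: the caller sees a label and a confidence
# score, but nothing about the rule that produced them.
def black_box_predict(applicant):
    # Pretend these are opaque learned weights; this biased rule is
    # invented for illustration and hidden from the user.
    score = 0.9 if applicant["gender"] == "male" else 0.4
    return ("hire", score) if score > 0.5 else ("reject", score)

label, confidence = black_box_predict({"gender": "male", "experience_years": 2})
print(label, confidence)  # hire 0.9 -- a confident output with zero explanation
```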

 

Is the world lagging behind?

The reasoning behind Altman’s statement about not starting to train GPT-5 “for some time” might not be fear of even more advanced LLMs and what they will be able to do, but rather the pace at which the rest of the world is adjusting to the advancements. With an invention as grand as generative AI, the right adaptation and, for example, regulation lag far behind. In April 2023, Italy banned ChatGPT over privacy concerns, saying that OpenAI had no legal basis for collecting the personal data used to train the model. The ban was lifted soon after.
 
Is OpenAI giving the world a six-month recess to prepare for what is coming next?
Sanni S

See my trashy blog sannisalokangas.com
