Train AI models to sell as NFTs, LLMs are Large Lying Machines: AI Eye

AI Arena

AI Eye chatted with Framework Ventures’ Vance Spencer recently, and he raved about the possibilities of an upcoming game his fund has invested in called AI Arena, in which players train AI models to battle each other in an arena.

Framework Ventures was an early investor in Chainlink and Synthetix, and launched a similar NFL platform three years before NBA Top Shot, so when the fund gets excited about a project’s prospects, it’s worth looking into.

Also backed by Paradigm, AI Arena is like a cross between Super Smash Bros. and Axie Infinity. The AI models are tokenized as NFTs, meaning players can train them up and flip them for profit or rent them to noobs. While this is a gamified version of the concept, there are endless possibilities in crowdsourcing user-trained models for specific purposes and then selling them as tokens in a blockchain-based marketplace.

Screenshot from AI Arena

“Probably some of the most valuable assets on-chain will be tokenized AI models; that’s my theory at least,” Spencer predicts.

AI Arena chief operating officer Wei Xie explains that his cofounders, Brandon Da Silva and Dylan Pereira, had been toying with creating games for years, and when NFTs and then AI arrived, Da Silva had the brainwave of putting all three elements together.

“Part of the idea was, well, if we can tokenize an AI model, we can actually build a game around AI,” says Xie, who worked alongside Da Silva in TradFi. “The core loop of the game actually helps to reveal the process of AI research.”


There are three elements to training a model in AI Arena. The first is demonstrating what needs to be done — like a parent showing a kid how to kick a ball. The second element is calibrating and providing context for the model — telling it when to pass and when to shoot for goal. The final element is seeing how the AI plays and diagnosing where the model needs improvement.   

“So the overall game loop is like iterating, iterating through those three steps, and you’re kind of progressively refining your AI to become this more and more well balanced and well rounded fighter.”

The game uses a custom-built feed-forward neural network, and the AIs are constrained and lightweight, meaning the winner won’t just be whoever can throw the most computing resources at the model.
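The three-step loop described above — demonstrate, calibrate, diagnose, then iterate — is essentially imitation learning on a small policy network. Here's a minimal sketch of what that could look like; everything in it (network size, update rule, the toy "expert" demonstrations) is an illustrative assumption, not AI Arena's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_policy(n_in=4, n_hidden=8, n_out=3):
    # A deliberately lightweight feed-forward net: one small hidden layer.
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)),
            "W2": rng.normal(0, 0.5, (n_hidden, n_out))}

def forward(policy, state):
    h = np.tanh(state @ policy["W1"])
    return h @ policy["W2"], h

def train_step(policy, state, demo_action, lr=0.1):
    # Steps 1-2: nudge the policy toward the demonstrated action
    # (hand-rolled softmax cross-entropy gradient).
    logits, h = forward(policy, state)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad_logits = probs
    grad_logits[demo_action] -= 1.0
    grad_h = (grad_logits @ policy["W2"].T) * (1.0 - h ** 2)
    policy["W2"] -= lr * np.outer(h, grad_logits)
    policy["W1"] -= lr * np.outer(state, grad_h)

def diagnose(policy, states, demo_actions):
    # Step 3: measure where the policy still disagrees with the demos.
    preds = [int(np.argmax(forward(policy, s)[0])) for s in states]
    return float(np.mean([p == a for p, a in zip(preds, demo_actions)]))

# The game loop: demonstrate, calibrate, diagnose, then iterate.
states = rng.normal(size=(32, 4))
demos = (states[:, 0] > 0).astype(int)  # toy "expert" demonstrations
policy = init_policy()
for _ in range(50):
    for s, a in zip(states, demos):
        train_step(policy, s, a)
accuracy = diagnose(policy, states, demos)
```

With each pass through the loop, the policy's accuracy against the demonstrations climbs, which is the "progressively refining your AI" dynamic Xie describes.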

“We want to see ingenuity, creativity to be the discerning factor,” Xie says. 

Currently in closed beta testing, AI Arena is targeting the first quarter of next year for mainnet launch on Ethereum scaling solution Arbitrum. There are two versions of the game: One is a browser-based game that anyone can log into with a Google or Twitter account and start playing for fun, while the other is blockchain-based for competitive players, the “esports version of the game.”

Also read: Exclusive — 2 years after John McAfee’s death, widow Janice is broke and needs answers

This being crypto, there is a token of course, which will be distributed to players who compete in the launch tournament and later be used to pay entry fees for subsequent competitions. Xie envisages a big future for the tech, saying it can be used “in a first-person shooter game and a soccer game,” and expanded into a crowdsourced marketplace for AI models that are trained for specific business tasks.

“What somebody has to do is frame it into a problem and then we allow the best minds in the AI space to compete on it. It’s just a better model.”

Chatbots can’t be trusted

A new analysis from AI startup Vectara shows that the output from large language models like ChatGPT or Claude simply can’t be relied upon for accuracy.

Everyone knew that already, but until now there was no way to quantify the precise amount of bullshit each model is generating. It turns out that GPT-4 is the most accurate, inventing fake information around just 3% of the time. Meta’s Llama models make up nonsense 5% of the time, while Anthropic’s Claude 2 system produced bullshit 8% of the time.

Google’s PaLM hallucinated an astonishing 27% of its answers.

PaLM 2 is one of the components incorporated into Google’s Search Generative Experience, which highlights useful snippets of information in response to common search queries. It’s also unreliable.

For months now, if you ask Google for an African country beginning with the letter K, it shows the following snippet of totally wrong information: 

“While there are 54 recognized countries in Africa, none of them begin with the letter ‘K’. The closest is Kenya, which starts with a ‘K’ sound, but is actually spelled with a ‘K’ sound.”

It turns out Google’s AI got this from a ChatGPT answer, which in turn traces back to a Reddit post, which was just a gag set up for this response:

“Kenya suck on deez nuts lmaooo.”

Screenshot from r/teenagers subreddit (Spreekaway Twitter)

Google rolled out the experimental AI feature earlier this year, and recently users started reporting it was shrinking and even disappearing from many searches.

Google may have just been refining it though, as this week the feature rolled out to 120 new countries and four new languages, with the ability to ask follow-up questions right on the page. 

AI images in the Israel-Gaza war

While journalists have done their best to hype up the issue, AI-generated images haven’t played a huge role in the war, as the real footage of Hamas atrocities and dead kids in Gaza is affecting enough.

There are examples, though: 67,000 people saw an AI-generated image of a toddler staring at a missile attack with the caption “This is what children in Gaza wake up to.”  Another pic of three dust-covered but grimly determined kids in the rubble of Gaza holding a Palestinian flag was shared by Tunisian journalist Muhammad al-Hachimi al-Hamidi.

And for some reason, a clearly AI-generated pic of an “Israeli refugee camp” with an enormous Star of David on the side of each tent was shared multiple times on Arabic news outlets in Yemen and Dubai.

AI-generated pic picked up by news sites (Twitter)

An Aussie politics blog reported that Adobe is selling AI-generated images of the war through its stock image service, and an AI pic of a missile strike was run as if it were real by media outlets including Sky and the Daily Star.

But the real impact of AI-generated fakes is providing partisans with a convenient way to discredit real pics. There was a major controversy over a bunch of pics of Hamas’s leadership living it up in luxury, which users claimed were AI fakes. 

But the images date back to 2014 and were just poorly upscaled using AI. AI company Acrete also reports that social media accounts associated with Hamas have regularly claimed that genuine footage and pictures of atrocities are AI-generated to cast doubt on them.

In well-timed news, Google has just announced it’s rolling out tools that can help users spot fakes. Click the three dots on the top right of an image and select “About This Image” to see how old the image is and where it’s been used. An upcoming feature will add fields showing whether an image is AI-generated, with Google AI, Facebook, Microsoft, Nikon and Leica all adding symbols or watermarks to AI imagery.

OpenAI dev conference

OpenAI this week unveiled GPT-4 Turbo, which is much faster and can accept long text inputs, such as books of up to 300 pages. The model has been trained on data up to April this year and can generate captions or descriptions of visual input. For devs, the new model will cost one-third as much to access.

OpenAI is also releasing its version of the App Store, called the GPT Store. Anyone can now dream up a custom GPT, define the parameters and upload some bespoke information to GPT-4, which can then build it for you and pop it on the store, with revenue split between creators and OpenAI.

CEO Sam Altman demonstrated this onstage by whipping up a program called Startup Mentor that gives advice to budding entrepreneurs. Users soon followed, dreaming up everything from an AI that does the commentary for sporting events to a “roast my website” GPT. ChatGPT went down for 90 minutes this week, possibly as a result of too many users trying out the new features. 

Not everyone was impressed, however. Abacus.AI CEO Bindu Reddy said it was disappointing that GPT-5 had not been announced, suggesting that OpenAI tried to train a new model earlier this year but found it “didn’t run as efficiently and therefore had to scrap it.” There are rumors that OpenAI is training a new candidate for GPT-5 called Gobi, Reddy said, but she suspects it won’t be unveiled until next year.


X unveils Grok

Elon Musk brought freedom back to Twitter — mainly by freeing lots of people from spending any time there — and he’s on a mission to do the same with AI.

The beta version of Grok AI was thrown together in just two months, and while it’s not nearly as good as GPT-4, it is up to date due to being trained on tweets, which means it can tell you what Joe Rogan was wearing on his last podcast. That’s the sort of information GPT-4 simply won’t tell you.

There are fewer guardrails on the answers than ChatGPT, although if you ask it how to make cocaine it will snarkily tell you to “Obtain a chemistry degree and a DEA license.”

“The threshold for what it will tell you, if pushed, is what is available on the internet via reasonable browser search, which is a lot …” says Musk.

Within a few days, more than 400 cryptocurrencies linked to GROK had been launched. One amassed a $10 million market cap, and at least ten others rugpulled. 

All Killer No Filler AI News

— Samsung has introduced a new generative artificial intelligence model called Gauss that it suggests will be added to its phones and devices soon.

— YouTube has rolled out some new AI features to premium subscribers, including a chatbot that summarizes videos and answers questions about them, and another that categorizes the comments to help creators understand the feedback.

— Google DeepMind has released an AGI tier list that starts at the “No AI” level of Amazon’s Mechanical Turk and moves on to “Emerging AGI,” where ChatGPT, Bard and Llama 2 are listed. The other tiers are Competent, Expert, Virtuoso and Artificial Superintelligence, none of which have been achieved yet.

— Amazon is investing millions in a new GPT-4 rival called Olympus, which at 2 trillion parameters is twice the size. It has also been testing out Agility Robotics’ humanoid robot Digit at trade shows. This one fell over.

Pics of the week

An oldie but a goodie: Alvaro Cintas spent his weekend coming up with AI pun pictures under the heading “Wonders of the World, Misspelled by AI.”

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
