LLaMA drama and a case for an open-first approach


Large Language Models are having their heyday, much as text-to-image generators did a few months ago, thanks in no small part to OpenAI's ChatGPT. Quite a bit of dust has been, and still is being, kicked up over OpenAI's approach of offering 'closed', or 'limited and censored', access to its model, with 'safety' cited as the main reason for doing so.

Disaster struck when the all-important weights for Meta's new Large Language Model were leaked on 4chan, first noticed (or at least tweeted about) by Replit.com's CEO Amjad Masad. Developers picked up on the news and got to work, tinkering with all sorts of ideas. Was it really a disaster, though, and is this actually the way to go for new players looking to compete with their own AI models? We've seen the success Stable Diffusion had by going open source and letting the public show what it had to offer in the way of ideas and applications. The results have been astounding, to say the least!

Meta followed OpenAI's approach by requiring prospective 'research' users to request access to its new set of LLaMA models, although many people weren't, and still aren't, able to get through this door. As with OpenAI, Meta cites safety concerns, primarily the potential spread of false information through the use of these models. This is arguably a weak argument, as false information is already abundant across the internet.

Furthermore, verification on platforms is increasingly becoming the norm; take Twitter's (really, Elon Musk's) late-2022 push for checkmark subscriptions for all. Not to mention video content overtaking written content in popularity, and with it the probability of going viral. Let's just say the cat is out of the bag and well over the fence.

DeepTomCruise TikTok breakdown by Chris Ume on his YouTube channel, VFXChris Ume

Meta Platforms Inc said in a statement to Reuters that it will continue to release its artificial intelligence tools to approved researchers despite claims on online message boards that its latest large language model had leaked to unauthorized users.

For larger and wealthier companies, this 'soft landing' approach does make sense; I'm sure Mr. Zuckerberg and Mr. Pichai would like to avoid revisiting Congress anytime soon. There is an opportunity cost to this approach, though: never seeing what developer and tinkerer communities could improve, create, and enhance if they had enough information or access. Given how many open-source tools are used in everyday tasks by researchers and developers at companies like Meta and OpenAI, that cost could be an irony-dipped cherry on the cake. Smaller and upcoming competitors have caught on, though.

Stability AI and its approach to open-source models could well be the way to go. With numerous text-to-image generation models already out in the wild at the time, Stability AI seemed to arrive out of nowhere and blow the doors off what was once a very focused and niche community. A simple YouTube search for 'Stable Diffusion' turns up endless tutorial content alone. People are creating content, tutorials, and even Instagram accounts showcasing their creations and enhanced models built on the latest release of Stable Diffusion.

A user interface for the Stable Diffusion AI model

The power of open source: a user-friendly interface allowing anyone to leverage the power of AI models. Here, both Stable Diffusion and the 'Diffusers' library (huggingface.co) were used to power the interface. Created by qunash (GitHub)

OpenAI wasn't blind to the hype, it seems. From an initial plan to open a gatekept ChatGPT playground for 'research purposes' for a mere couple of weeks, they decided to keep the platform open and extend access to anyone who wanted a go, now offering a 'pro', or rather 'ChatGPT Plus', version for a monthly fee too. Could all the noise OpenAI was able to generate have cemented Microsoft's $10 billion move?

Opening tools like ChatGPT and LLaMA up to the public in a restricted fashion could well be an exercise in warming the audience up for what's to come without raising too many alarm bells. Whether it's restricted access or fully open-source models, what's clear is that open access accelerates progress both within the field and across unrelated areas. Could an open-first approach in AI trigger a 'network effect' on the progress of humanity like we have never seen before? Time will tell, and we suspect we don't have much longer to wait.

Sources:
  1. ChatGPT’s censor filters are absurd, viriatolusitanoluso - OpenAI community
  2. Large language model Meta AI - Meta.com
  3. Twitter may charge users $20 per month to be verified, Amanda Yeo - mashable.com
  4. Meta will keep releasing AI tools despite leak claims, Stephen Nellis - Reuters
  5. OpenAI offers ChatGPT Plus subscription - openai.com
  6. stable-diffusion-2-gui, by qunash - github.com

About Neutron Salad

Neutron Salad is a news, information, and discussion hub. We focus on how the advancement of technology and science is changing the world, for better or worse. Founded in 2022, we offer our audience everything from breaking news and technology reviews to long-form feature articles and discussions. Our content is designed to instigate deep thought and discussion on the future of humanity and how science and technology affect it.
