The potential for biased or pre-filtered replies from language models is a significant concern. ChatGPT is one of the most biased and deceptive models, and the biggest danger is OpenAI's lack of transparency and of any real commitment to truly open and ethical AI.
No, I do not believe Altman is capable of handling this challenge.
StableVicuna may still be a filtered version, but a number of open source models can be brute-forced into writing freely and producing output that is actually usable. Some have also suggested that filters and biases are among the stumbling blocks to better and more accurate coding.
When asked to code an app, ChatGPT blatantly lies and claims that "As an LLM, I cannot code Python".
This is alarming: this is the company that has been put in charge of setting up "ethical and safe use of AI" committees.
--
Take a look at this. The above post has been rephrased for readability using both GPT-3.5 and StableVicuna. SV took about 2 minutes, was completely free, and did it WITHOUT sending data to the OpenAI corporation. The result was better with SV and I got to keep my data. What is the advantage of plugins and a few modalities now, at $20 a month?
-My rig isn't even powerful, and I'm able to use an LLM on par with GPT-3.5 without restrictions of ANY kind. That's right: no filters, no "I'm sorry, I have to lie to you now" nannyisms.
-Anyone with a modest desktop can run this now with under 10GB of disk space!
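For anyone wondering how "under 10GB" works out in practice, here is a rough sketch using llama.cpp, one popular way to run quantized models locally on CPU. The model filename and quantization level below are illustrative assumptions, not the exact setup used above:

```shell
# Build llama.cpp (CPU-only; no GPU required)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# A 13B-parameter model quantized to 4 bits occupies roughly 8GB
# on disk, which is where the "under 10GB" figure comes from.
# The model file name here is hypothetical -- download a quantized
# StableVicuna checkpoint from a source you trust and adjust the path.
./main -m ./models/stable-vicuna-13b-q4.bin \
       -p "Rephrase the following post for readability: ..." \
       -n 512
```

Note this is a setup fragment, not a runnable test: it requires downloading a multi-gigabyte model file first.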
-Attempts to monopolize with a "Play Store"-esque plugin marketplace will only drive the open source community even more furiously to liberate these tools. We saw the same when the Stability AI group tried to monopolize StableDiffusion -- now it's EVERYWHERE and FREE.