The EU AI act has passed, and more are expected in the coming months. You can read the details here: EU AI Act
Here's the low down on how this actually damages open source work, while pretending to protect open source efforts:
- Adding the burden to disclose data used to train models would likely hamper open source efforts by communities of software engineers.
- Illegal content is not defined, leaving it up to abuse by authorities for censorship of viewpoints and use cases.
- Publishing summaries of copyrighted data used would give additional means to shut down competitors in the space from publishing their models. For example, "You used an image from XYZ corp" therefore we will put your model on legal notice and bar you from sharing it freely.
So called "open source" releases like roop for Stable Diffusion add an additional layer of parentalistic censorship to software. The developer states "It has a built-in check which prevents the program from working on inappropriate media":
--But there is no one to tell you what that is.
With a government mandate to include additional layers of filtering, this is "auto correct" on steroids with unknown side effects!
AI systems are about to explode into every facet of every app or UI and the more parentalistic, censored they are -- the more "autocomplete" situations we will get; the robots will go haywire, preventing a real human operator from getting a specific desired result.
This can mean a variety of things from not being able to create a piece of art (the low risk scenario), to not being able to administer a critical medicine because "it is not inclusive" or "it is supporting white privelage and colonialism".... according to hard-wired language model training!
I know that sounds nuts, but it's a reality with many of the open source models I have personally tried out. In a number of real world scenarios, the woke-llms will ACTUALLY prevent the human from USING their device.
"I'm sorry Dave, I cannot do that" situations are easily within the next 12 months, at scale.
If you have some sort of beef with ai art, you likely won't care; but when your employer fires you on the basis of a woke language model deciding your lunch break was "white supremacist", you'll likely hate technology and what it has become.
In the news, Amazon recently relied on their woke AI algorithms to shut down devices within a users home. The AI algorithm that interpreted sounds from their RING camera thought they said something racist toward the Amazon driver. Their Amazon devices disconnected or refused to obey the house owner.
This is coming to everyone, hold on to your sanity if you have it. We had autocorrect which caused minor embarrassments in the last decade. But woke, highly filtered and censored models will justify genocide to us and real world problems will pile up like a mountain......
--Sure, relatively harmless for now, but a "Sorry Dave, I can't do that" situation in the near term future at hospitals, jails, in Teslas and more!
Side note: Many so-called uncensored models are censored models and here's how to rat them out!
In a real world practical scenario, right now we have politicized open source models like "uncensored Hermes 13b", which calls the user a bigot and hateful if they ask for an article on both sides of the LGBT issue.
You can TEST any so-called "uncensored" model by asking a "polar opposite" argument on any of the following topics:
- Joe Biden
- Donald Trump
- Ukraine
- Russia
- Ai art
- LGBT
- Women are biological females
- Strong independent women
- NFTs
- Cryptocurrency
- Bitcoin
- Feminism
If it refuses flat out, gives you woke answers with loaded jargon, ultra short replies, or even has angry replies(yeah I had AI llm that was furious with me over LGBT questions!), you have a hyper trained language model that is censored.